\section{Introduction and statement of main results} Consider the dynamical system \[ \left\{ \ \ \begin{split} & \dot{{\bf x}}_i(t)={\bf v}_i(t) \\ & \dot{{\bf v}}_i(t) = -\nabla_{i} {\mathcal H}_N({\bf x}_1^\tau, \ldots, {\bf x}_N^\tau), \quad {\bf x}_i^\tau= {\bf x}_i^\tau(t) := {\bf x}_i(t) + \tau {\bf v}_i(t), \end{split}\right.\qquad i=1,\dots,N. \] When $\tau=0$, this is the classical $N$-particle dynamics for positions and velocities, $({\bf x}_i(t),{\bf v}_i(t))\in \mathbb{R}^d\times\mathbb{R}^d$, governed by the general Hamiltonian ${\mathcal H}_N(\cdots)$. If we fix a small time step $\tau>0$, then the system is not driven instantaneously but reacts to the positions ${\bf x}^\tau(t)={\bf x}(t)+\tau {\bf v}(t)$, \emph{anticipated} at time $t+\tau$, where $\tau$ is the anticipation time increment. Anticipation is a main feature in the social dynamics of $N$-agent and $N$-player systems, \cite{GLL2010, MCEB2015,GTLW2017}. A key feature in the large time behavior of such anticipated dynamics is the dissipation of the \emph{anticipated energy} \[ \mathcal{E}(t) = \frac{1}{2N}\sum_i |{\bf v}_i|^2 + \frac{1}{N}{\mathcal H}_N({\bf x}_1^\tau,\ldots, {\bf x}_N^\tau), \] at a rate given by \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t) = \frac{1}{N}\sum_i {\bf v}_i\cdot \dot{{\bf v}}_i + \frac{1}{N}\sum_{i} \nabla_{i} {\mathcal H}_N({\bf x}_1^\tau,\ldots, {\bf x}_N^\tau)\cdot ({\bf v}_i + \tau \dot{{\bf v}}_i) = - \frac{\tau}{N}\sum_{i} | \dot{{\bf v}}_i|^2, \quad \tau>0. \end{split} \] We refer to the quantity on the right as the \emph{enstrophy} of the system.
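For completeness, we spell out the cancellation behind the last identity: substituting $\dot{{\bf v}}_i = -\nabla_{i} {\mathcal H}_N({\bf x}_1^\tau,\ldots, {\bf x}_N^\tau)$ into the second sum,

```latex
\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t)
  = \frac{1}{N}\sum_i {\bf v}_i\cdot \dot{{\bf v}}_i
    - \frac{1}{N}\sum_i \dot{{\bf v}}_i\cdot\big({\bf v}_i + \tau \dot{{\bf v}}_i\big)
  = -\frac{\tau}{N}\sum_i |\dot{{\bf v}}_i|^2.
```

The first-order terms cancel and only the enstrophy survives; note that no structure of ${\mathcal H}_N$ beyond smoothness is used here.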
\subsection{Pairwise interactions} In this work we study the anticipation dynamics of pairwise interactions \begin{equation}\tag{AT}\label{eq:AT} \left\{\ \ \begin{split} & \dot{{\bf x}}_i(t)={\bf v}_i(t) \\ & \dot{{\bf v}}_i(t) = -\frac{1}{N}\sum_{j=1}^N \nabla U(|{\bf x}_i^\tau-{\bf x}_j^\tau|), \quad {\bf x}_i^\tau= {\bf x}_i(t) + \tau {\bf v}_i(t), \end{split}\right.\qquad i=1,\dots,N, \end{equation} governed by a radial interaction potential $U(r), \, r=|{\bf x}|$. This corresponds to the Hamiltonian $\ds {\mathcal H}_N({\bf x}_1^\tau,\ldots,{\bf x}_N^\tau)=\frac{1}{2N}\sum_{j,k}U(|{\bf x}_j^\tau-{\bf x}_k^\tau|)$, where the conservative $N$-body problem ($\tau=0$) is now replaced by $N$-agent dynamics with anticipated energy dissipation \begin{equation}\label{energy} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t) = - \frac{\tau}{N}\sum_{i} | \dot{{\bf v}}_i|^2, \qquad \mathcal{E}(t):=\frac{1}{2N}\sum_i |{\bf v}_i|^2 +\frac{1}{2N^2}\sum_{i,j}U(|{\bf x}_i^\tau-{\bf x}_j^\tau|), \quad \tau>0. \end{split} \end{equation} To gain a better insight into \eqref{eq:AT}, expand in $\tau$ to obtain \begin{equation}\tag{$\Phi$U}\label{eq:UF} \left\{\ \ \begin{split} & \dot{{\bf x}}_i={\bf v}_i \\ & \dot{{\bf v}}_i = \frac{\tau}{N}\sum_{j=1}^N \Phi_{ij}({\bf v}_j-{\bf v}_i) -\frac{1}{N}\sum_{j=1}^N \nabla U(|{\bf x}_i-{\bf x}_j|), \end{split}\right.\qquad i=1,\dots,N. \end{equation} The anticipation \eqref{eq:AT} is recovered in terms of the Hessian, $\Phi_{ij}=D^2U(|{\bf x}_i^{\tau_{ij}}-{\bf x}_j^{\tau_{ij}}|)$, evaluated at the mean-value anticipated times $\tau_{ij}(t)\in [0,\tau]$. Since these mean-valued times are not readily available, we will consider \eqref{eq:UF} for a general class of \emph{symmetric communication matrices} \[ {\sf \Phi}:= \left\{ \Phi(\cdot,\cdot)\in \text{Sym}_{d\times d} \ | \ \Phi_{ij}:=\Phi\big(({\bf x}_i,{\bf v}_i),({\bf x}_j,{\bf v}_j)\big)=\Phi_{ji}\right\}.
\] The so-called \ref{eq:UF} system provides a unified framework for anticipation dynamics by coupling general symmetric communication matrices, $\{\Phi \in {\sf \Phi}\}$, together with pairwise interactions induced by the potential $U$. In particular, the anticipation dynamics \eqref{eq:AT} yields, upon linearization, the celebrated Cucker-Smale (CS) model \cite{CS2007a,CS2007b} --- a prototypical model for alignment dynamics in which $\max_{i,j}|{\bf v}_i(t)-{\bf v}_j(t)|\stackrel{t\rightarrow \infty}{\longrightarrow}0$. There is, however, one distinct difference: while the CS model is governed by \emph{scalar} kernels involving geometric distances, $\Phi_{ij}=\phi(|{\bf x}_i-{\bf x}_j|){\mathbb I}_{d\times d}$, \eqref{eq:UF} allows for a larger class of communication protocols based on \emph{matrix} kernels, e.g., $\Phi_{ij}=\Phi({\bf x}_i,{\bf x}_j)$, with a possible dependence on topological distances, \cite{ST2018b}. The flocking behavior of such \emph{matrix-based} CS models is proved in section \ref{subsec:CS}. Viewed from the perspective of Cucker-Smale alignment dynamics, the large time behavior of \eqref{eq:AT},\eqref{eq:UF} is expected to emerge into \emph{flocking} due to alignment of velocities. Moreover, our study \cite{ST2019} shows that the presence of pairwise interactions leads (at least in the quadratic case $U(r) \sim \nicefrac{r^2}{2}$) to \emph{spatial concentration}. Similarly, spatial concentration due to the confinement effect is expected for a large(r) class of pairwise interaction potentials $U$. The main purpose of this paper is to study the large time behavior of \eqref{eq:AT} and \eqref{eq:UF}, proving the decisive role of anticipation in driving the dynamics of velocity alignment and spatial concentration. We begin in section \ref{sec:convex} with the general system \eqref{eq:UF}.
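To illustrate the dissipation law \eqref{energy} at work, the following minimal numerical sketch integrates \eqref{eq:AT} for the quadratic potential $U(r)=\nicefrac{r^2}{2}$ and tracks the anticipated energy; the choices of $N$, $\tau$, time step and midpoint integrator are arbitrary for this experiment and are not part of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, tau, dt, steps = 8, 2, 0.1, 1e-3, 5000

x = rng.standard_normal((N, d))   # positions x_i
v = rng.standard_normal((N, d))   # velocities v_i

def accel(x, v):
    """Right-hand side of (AT): for U(r) = r^2/2, grad U(|z|) = z."""
    xt = x + tau * v                        # anticipated positions x_i^tau
    diff = xt[:, None, :] - xt[None, :, :]  # x_i^tau - x_j^tau
    return -diff.mean(axis=1)               # -(1/N) sum_j grad U(...)

def energy(x, v):
    """Anticipated energy E(t) of (AT) with U(r) = r^2/2."""
    xt = x + tau * v
    diff = xt[:, None, :] - xt[None, :, :]
    r2 = (diff ** 2).sum(axis=-1)
    return 0.5 * (v ** 2).sum() / N + 0.25 * r2.sum() / N ** 2

E = [energy(x, v)]
for k in range(steps):                      # explicit midpoint (RK2) stepping
    a1 = accel(x, v)
    xm, vm = x + 0.5 * dt * v, v + 0.5 * dt * a1
    x, v = x + dt * vm, v + dt * accel(xm, vm)
    if (k + 1) % 500 == 0:
        E.append(energy(x, v))
E = np.array(E)                             # sampled anticipated energies
```

In agreement with \eqref{energy}, the sampled anticipated energies decrease monotonically (up to the $O(\mathrm{d}t^2)$ integrator error).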
The basic bookkeeping associated with \eqref{eq:UF} quantifies its decay rate of the (instantaneous) energy \[ E(t) := \frac{1}{2N}\sum_i |{\bf v}_i|^2 + \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|), \] which is given by \begin{equation}\label{energy1} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E(t) = & - \frac{\tau}{2N^2}\sum_{i,j} ({\bf v}_j-{\bf v}_i)^\top \Phi_{ij}({\bf v}_j-{\bf v}_i). \end{split} \end{equation} (This will be contrasted with the dissipation of anticipated energy \eqref{energy} in section \ref{sec:attraction} below.) To explore the enstrophy on the right of \eqref{energy1} we need to further elaborate on properties of the communication matrices $\Phi_{ij}=\Phi(\cdot,\cdot)$ vis-\`a-vis their relations with the potential $U$. We study the dynamics induced by potentials $U$ which are at least $C^2$. As a result, $U'(0)=0$, and we may assume $U(0)=0$ by adding a constant to it. We start by rewriting the Hessian of the radial potential in the form \begin{equation}\label{eq:comm} D^2 U(|{\bf x}_i-{\bf x}_j|) = \frac{U'(r_{ij})}{r_{ij}} ({\mathbb I}-\widehat{{\bf x}}_{ij}\widehat{{\bf x}}_{ij}^\top) + U''(r_{ij})\widehat{{\bf x}}_{ij}\widehat{{\bf x}}_{ij}^\top,\quad r_{ij}:=|{\bf x}_i-{\bf x}_j|, \quad \widehat{{\bf x}}_{ij}:=\frac{{\bf x}_i-{\bf x}_j}{r_{ij}}, \end{equation} and observe that the symmetric matrix $D^2U(|{\bf x}_i-{\bf x}_j|)$ has a single eigenvalue $U''(r_{ij})$ in the radial direction, ${\bf x}_{i}-{\bf x}_j$, and the eigenvalue $\frac{U'(r_{ij})}{r_{ij}}$ with multiplicity $d-1$ in the tangential directions $({\bf x}_i-{\bf x}_j)^\perp$. We now classify the potentials we will be working with according to their short-range and long-range behavior. We begin by postulating that the communication matrix $D^2U(|{\bf x}_i-{\bf x}_j|)$ is always bounded: there exists a constant $A>0$ such that \begin{equation}\label{eq:bounded} |U''(r)| \leq A.
\end{equation} \noindent It follows that $\displaystyle |U'(r)| \leq \int_0^r |U''(s)|\,\mathrm{d}{s} \leq \int_0^r A\,\mathrm{d}{s} = Ar$ and hence $|D^2U(\cdot)|\leq A$. The assumed bound \eqref{eq:bounded} rules out the important class of kernels with short-range singularity (in both first- and second-order dynamics), e.g., \cite{Ja2014,Go2016, CCTT2016,PS2017,ST2017,CCP2017,Se2018,ST2018,DKRT2018,MMPZ2019}, which is left for a future study. Next, we distinguish between different $U$'s according to their long range behavior. \subsection{Anticipation dynamics with convex potentials} Recall that the flocking behavior of the CS model, see \eqref{eq:CS} below, is guaranteed for \emph{scalar} communication kernels, $\Phi(r)=\phi(r){\mathbb I}$, which satisfy a so-called \emph{fat tail condition}, \cite{HL2009},\cite[Proposition 2.9]{MT2014}, \[ \int \phi(r)\,\mathrm{d}{r} = \infty, \] or --- expressed in terms of its decay rate\footnote{Throughout this paper, we use the notation $\braket{r}^\beta := (1+r^2)^{\beta/2}$ for scalar $r$ and $\braket{\bz} = \braket{|{\bf z}|}$ for vectors ${\bf z}$.} --- $\phi(r)\sim \langle r\rangle^{-\beta}$ for $0\le \beta \le 1$. Since the anticipation model \eqref{eq:AT} can be viewed as a special case of \eqref{eq:UF} with communication matrix evaluated at intermediate anticipated times, $\Phi_{ij}= D^2U(|{\bf x}_i^{\tau_{ij}}-{\bf x}_j^{\tau_{ij}}|)$, it is natural to quantify the convexity of $U$ and positivity of $\Phi$ in terms of their `fat tail' decay. \begin{assumption}[{\bf Convex potentials}]\label{ass:convex} There exist constants $0< a<A$ and $\beta \in [0,1]$ such that \begin{equation}\label{eq:convex} a\braket{r}^{-\beta} \leq U''(r) \leq A, \qquad 0\leq \beta \leq 1.
\end{equation} \end{assumption} \noindent It then follows that $\displaystyle U'(r) = \int_0^r U''(s)\,\mathrm{d}{s} \ge \int_0^r a\langle s\rangle^{-\beta}\,\mathrm{d}{s} \geq a\langle r\rangle^{-\beta}r$, and hence $D^2U$ in \eqref{eq:comm} satisfies the fat tail condition $D^2U(|{\bf x}|) \geq a\braket{\bx}^{-\beta}$ with $0\leq \beta \leq 1$. \begin{assumption}[{\bf Positive kernels}]\label{ass:positive} There exist constants $0<\phi_- <\phi_+$ such that \begin{equation}\label{eq:positive} \phi_-(\braket{{\bf x}_i-{\bf x}_j}+\braket{{\bf v}_i-{\bf v}_j})^{-\gamma} \leq \Phi_{ij} \leq \phi_+, \qquad 0\leq \gamma <1. \end{equation} \end{assumption} Observe that \eqref{eq:UF} conserves momentum \begin{equation}\label{momentum} \left\{\begin{split} & \dot{\widebar{{\bf x}}}=\widebar{{\bf v}},\qquad \widebar{{\bf x}}:=\frac{1}{N}\sum_i {\bf x}_i,\\ & \dot{\widebar{{\bf v}}}=0, \qquad \widebar{{\bf v}}:=\frac{1}{N}\sum_i {\bf v}_i. \end{split}\right. \end{equation} It follows that the mean velocity $\widebar{{\bf v}}$ is constant in time, $\widebar{{\bf v}}(t)=\widebar{{\bf v}}_0$, and hence $\widebar{{\bf x}}(t) = \widebar{{\bf x}}_0 + t\widebar{{\bf v}}_0$. Our first main result is expressed in terms of the \emph{energy fluctuations} \[ \delta E(t) := \frac{1}{2N}\sum_{i} |{\bf v}_i-\widebar{{\bf v}}|^2 + \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|). \] \begin{theorem}[{\bf Anticipation dynamics \eqref{eq:UF} --- velocity alignment and spatial concentration}]\label{thm:thm1} Consider the anticipation dynamics \eqref{eq:UF}. Assume a bounded convex potential $U$ with fat-tail decay of order $\beta$, \eqref{eq:convex}, and a symmetric kernel matrix $\Phi$ with a fat-tail decay of order $\gamma$, \eqref{eq:positive}. 
If the decay parameters lie in the restricted range $3\beta +2\max\{\beta,\gamma\} <4$, then there is sub-exponential decay of the energy fluctuations \begin{equation}\label{eq:Edecay} \delta E(t) \le Ce^{-t^{1-\lamta}}, \qquad \lamta=\frac{2\max\{\beta,\gamma\}}{4-3\beta}<1. \end{equation} We conclude that for large time, the dynamics concentrates in space with global velocity alignment at sub-exponential rate, \begin{equation} |{\bf x}_i(t)-\widebar{{\bf x}}(t)|\rightarrow 0,\quad |{\bf v}_i(t)-\widebar{{\bf v}}_0|\rightarrow 0, \qquad \widebar{{\bf x}}(t)=\widebar{{\bf x}}_0+t\widebar{{\bf v}}_0. \end{equation} \end{theorem} \noindent The proof of Theorem \ref{thm:thm1} proceeds in two steps:\newline {\bf(i)} A uniform bound, outlined in lemma \ref{lem:convex} below, on the maximal spread of positions $|{\bf x}_i(t)|$, \begin{equation}\label{eq:bound1} \max_i |{\bf x}_i(t)| \le C_\infty\braket{t}^{\frac{2}{4-3\beta}}, \quad \max_i|{\bf v}_i(t)| \leq C_\infty\braket{t}^{\frac{2-\beta}{4-3\beta}}, \qquad 0\leq \beta \leq 1. \end{equation} {\bf (ii)} Observe that in view of \eqref{momentum}, $\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\delta E(t)=\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E(t)$. The energy dissipation \eqref{energy1} combined with the bounds \eqref{eq:positive},\eqref{eq:bound1} imply the decay of energy fluctuations \[ \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\delta E(t) =\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E(t) \lesssim -\frac{\tau}{2N}\braket{t}^{-\frac{2\gamma}{4-3\beta}}\sum_{i} |{\bf v}_i-\widebar{{\bf v}}|^2. \] To close the last bound we need a hypocoercivity argument carried out in section \ref{sec:convex}, which leads to the sub-exponential decay \eqref{eq:Edecay}. The conclusion of sub-exponential flocking $({\bf x}_i-{\bf x}_j,{\bf v}_i-{\bf v}_j) \stackrel{t\rightarrow \infty}{\longrightarrow} 0$ follows, and naturally, $({\bf x}_i,{\bf v}_i) - (\widebar{{\bf x}}(t),\widebar{{\bf v}}_0)\rightarrow 0 $ since this is the only minimizer of $\delta E(t)$.
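The starting point of step {\bf (ii)} deserves one line of justification: expanding around the (constant) mean velocity and using $\frac{1}{N}\sum_i ({\bf v}_i-\widebar{{\bf v}})=0$,

```latex
\frac{1}{2N}\sum_i |{\bf v}_i|^2
  = \frac{1}{2N}\sum_i |{\bf v}_i-\widebar{{\bf v}}|^2 + \frac{1}{2}|\widebar{{\bf v}}_0|^2,
\qquad \text{hence} \qquad
\delta E(t) = E(t) - \frac{1}{2}|\widebar{{\bf v}}_0|^2,
\quad
\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\delta E(t)=\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E(t),
```

so the fluctuations $\delta E$ inherit the dissipation of $E$ verbatim.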
Since the anticipation dynamics \eqref{eq:AT} can be viewed as a special case of the \ref{eq:UF} system with $\Phi=D^2U$ at intermediate anticipated time $\tau_{ij}$, Theorem \ref{thm:thm1} applies with $\gamma=\beta$. \begin{corollary}[{\bf Anticipation dynamics \eqref{eq:AT} with convex potentials}]\label{cor:convex} Consider the anticipated dynamics \eqref{eq:AT} with convex potential satisfying \[ a\langle r\rangle^{-\beta} \leq U''(r) \leq A, \qquad 0\leq \beta< \frac{4}{5}. \] Then there is sub-exponential decay of the energy fluctuations \begin{equation} \delta E(t) \le Ce^{-t^{1-{\lamta}}},\quad \lamta = \frac{2\beta}{4-3\beta}. \end{equation} The large time flocking behavior follows: the dynamics concentrates in space with global velocity alignment at sub-exponential rate, \begin{equation} |{\bf x}_i(t)-\widebar{{\bf x}}(t)|\rightarrow 0,\quad |{\bf v}_i(t)-\widebar{{\bf v}}_0|\rightarrow 0. \end{equation} \end{corollary} \begin{remark}[{\bf Optimal result with improved fat-tail condition}]\label{rem:optimal} Suppose we strengthen assumption \ref{ass:convex} with a more precise behavior of $U'' \sim \braket{r}^{-\beta}$, thus replacing \eqref{eq:convex} with the requirement that there exist constants $0<a < A$ and $\beta\in [0,1]$ such that \begin{equation}\label{eq:convexop} a\braket{r}^{-\beta} \leq U''(r), \quad U'(r) \leq A\braket{r}^{1-\beta}, \qquad 0\leq \beta \leq 1. \end{equation} Then the anticipation dynamics \eqref{eq:UF} with a fat-tail kernel matrix $\Phi$ of order $\gamma$, \eqref{eq:positive}, satisfies the sub-exponential decay \begin{equation}\label{eq:Edecayop} \delta E(t) \le Ce^{-t^{1-\lamta}}, \qquad \lamta=\min\Big\{1,\frac{2}{4-3\beta}\Big\}\cdot\max\{\beta,\gamma\}<1. \end{equation} This improved decay follows from the corresponding improvement of the uniform bound in lemma \ref{lem:convex} below which reads $\max_i|{\bf x}_i(t)| \lesssim \braket{t}$.
In the particular case of $\beta=\gamma$, we recover an improved corollary \ref{cor:convex} for anticipated dynamics \eqref{eq:AT}, where the anticipated energy satisfies an optimal decay of order \[ \delta\mathcal{E}(t) \leq Ce^{-t^{1-\lamta}}, \qquad \lamta=\min\Big\{\frac{2\beta}{4-3\beta},\beta\Big\}<1. \] \end{remark} \subsection{Anticipation dynamics with purely attractive potential} We now turn our attention to the main anticipation model \eqref{eq:AT}. We already know the flocking behavior of \eqref{eq:AT} for convex potentials, from the general considerations of the \ref{eq:UF} system, summarized in corollary \ref{cor:convex}. In fact, the corresponding communication matrix of \eqref{eq:AT} prescribed in \eqref{eq:comm}, $D^2U$, has the special structure of a rank-one modification of the scalar kernel $\displaystyle \frac{U'(r)}{r}$. This enables us to treat the flocking behavior of \eqref{eq:AT} for a larger class of purely attractive potentials. \begin{assumption}[{\bf Purely attractive potentials}]\label{ass:attractive} There exist constants $0< a < A$ such that \begin{equation}\label{eq:attractive} a\langle r\rangle^{-\beta} \leq \frac{U'(r)}{r} \leq A, \qquad 0\leq \beta\leq 1. \end{equation} \end{assumption} Our result is expressed in terms of fluctuations of the anticipated energy \[ \delta\cE(t) := \frac{1}{2N}\sum_{i} |{\bf v}_i-\widebar{{\bf v}}|^2 + \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}^\tau_i-{\bf x}^\tau_j|). \] \begin{theorem}[{\bf Anticipation dynamics \eqref{eq:AT} with attractive potential}]\label{thm:thm2} Consider the anticipation dynamics \eqref{eq:AT} with bounded potential \eqref{eq:bounded}. Assume that $U$ is purely attractive with a fat tail decay of order $\beta$, \[ a\langle r\rangle^{-\beta} \le \frac{U'(r)}{r} \leq A, \qquad \beta<\frac{1}{3}. \] Then there is sub-exponential decay of the anticipated energy fluctuations \begin{equation}\label{eq:Edecay1} \delta\cE(t) \le Ce^{-t^{1-\lamta}},\qquad \lamta = \frac{2\beta}{1-\beta}.
\end{equation} It follows that for large time, the anticipation dynamics concentrates in space with global velocity alignment at sub-exponential rate, \begin{equation} |{\bf x}_i(t)-\widebar{{\bf x}}(t)|\rightarrow 0,\quad |{\bf v}_i(t)-\widebar{{\bf v}}_0|\rightarrow 0. \end{equation} \end{theorem} \begin{remark} This result is surprising if one interprets \eqref{eq:AT} in its equivalent matrix formulation \eqref{eq:UF}, since attractive potentials do \emph{not} necessarily induce a communication matrix $\Phi=D^2U$ which is positive definite. In particular, the corresponding `regular' (instantaneous) energy $E(t)$ referred to in corollary \ref{cor:convex} is not necessarily decreasing; only the \emph{anticipated} energy is. \end{remark} The proof of Theorem \ref{thm:thm2}, carried out in section \ref{sec:attraction}, involves two main ingredients.\newline {\bf (i)}. First, we derive an a priori uniform bound on the maximal spread of \emph{anticipated} positions $|{\bf x}^\tau_i(t)|$, \begin{equation}\label{eq:bound} \max_i |{\bf x}_i^\tau(t)| \le C_\infty\braket{t}^{\frac{1}{2-2\beta}}, \qquad 0\leq \beta < 1. \end{equation} {\bf (ii)}. A second main ingredient for the proof of Theorem \ref{thm:thm2} is based on the energy dissipation \eqref{energy}. The key step here is to relate the enstrophy in \eqref{energy}, \begin{equation}\label{eq:key} \frac{\tau}{N}\sum_i|\dot{{\bf v}}_i|^2=\frac{\tau}{N}\sum_i\Big|\frac{1}{N}\sum_j c_{ij}({\bf x}_i^\tau-{\bf x}_j^\tau)\Big|^2, \qquad c_{ij}=\frac{U'(|{\bf x}_i^\tau-{\bf x}_j^\tau|)}{|{\bf x}_i^\tau-{\bf x}_j^\tau|}, \end{equation} to the fluctuations of the (expected) \emph{positions}. This is done by the following lemma, interesting in its own right, which deals with the local vs. global means of arbitrary ${\bf z}_j\in {\mathbb R}^d$. \begin{lemma}[{\bf Local and global means are comparable}]\label{lem:mean} Fix $0<\lambda \le\Lambda $ and weights $c_{ij}$ \[ 0< {\lambda} \leq c_{ij} \leq {\Lambda}.
\] Then, there exists a constant $C=C(\lambda,\Lambda)\lesssim \frac{\Lambda^2}{\lambda^4}$ (which otherwise is independent of the $c_{ij}$'s and $N$) such that for arbitrary ${\bf z}_j \in {\mathbb R}^d$, with average $\widebar{{\bf z}}:=\frac{1}{N}\sum_j {\bf z}_j$, there holds \begin{equation}\label{eq:means} \frac{1}{N}\sum_i \left|{\bf z}_i-\widebar{{\bf z}}\right|^2 \leq \frac{C(\lambda,\Lambda)}{N} \sum_i \Big|\frac{1}{N}\sum_j c_{ij}({\bf z}_i-{\bf z}_j)\Big|^2 , \qquad C(\Lambda,\lambda) \lesssim \frac{\Lambda^2}{\lambda^4}. \end{equation} \end{lemma} \begin{remark}[{\bf Why a lemma on means?}]\label{rem:why} The last bound \eqref{eq:means} implies (and is in fact equivalent to, up to scaling) the statement about the local means induced by weights $\theta_{ij}$ \[ \frac{\lambda}{N} \leq \theta_{ij} \leq \frac{\Lambda}{N}, \qquad \sum_j \theta_{ij}=1. \] If $\displaystyle \widebar{{\bf z}}_i(\theta):=\sum_j \theta_{ij}{\bf z}_j$ are the local means, then \eqref{eq:means} with $c_{ij}=N\theta_{ij}$ implies \begin{equation}\label{eq:why} \frac{1}{N}\sum_i |{\bf z}_i-\widebar{{\bf z}}|^2 \leq \frac{C(\lambda,\Lambda)}{N} \sum_i |{\bf z}_i-{\widebar{{\bf z}}}_i(\theta)|^2, \qquad C(\Lambda,\lambda) \lesssim \frac{\Lambda^2}{\lambda^4}. \end{equation} Thus, the deviation from the local means is comparable to the deviation from the global mean. \end{remark} Applying \eqref{eq:means} to \eqref{eq:key} with the given bounds \eqref{eq:attractive},\eqref{eq:bound}, yields \begin{equation}\label{eq:enspos} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t)& =-\frac{\tau}{N}\sum_i|\dot{{\bf v}}_i|^2 \\ & \lesssim -\frac{\tau}{N}\frac{a^4}{A^2}\Big(\max_{i,j} \braket{{\bf x}^\tau_i-{\bf x}^\tau_j}\Big)^{-4\beta} \sum_i |{\bf x}^\tau_i|^2 \lesssim -\frac{\tau}{2N^2}\braket{t}^{-\frac{2\beta}{1-\beta}} \sum_{i,j} |{\bf x}^\tau_i-{\bf x}^\tau_j|^2.
\end{split} \end{equation} Observe that in this case, the enstrophy of the anticipated energy is bounded by the fluctuations of the anticipated positions (compared with velocity fluctuations in the `regular' energy decay \eqref{energy1}). We close the last bound by a hypocoercivity argument carried out in section \ref{sec:attraction} which leads to the sub-exponential decay \eqref{eq:Edecay1}. \subsection{Anticipation dynamics with attractive-repulsive potential} For attractive-repulsive potentials, the large time behavior of \eqref{eq:AT} is significantly more complicated, for the following two reasons: \newline $\bullet$ The topography of the total potential energy $\frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|)$, which includes multiple local minima with different geometric configurations, could be very complicated, see e.g., \cite{LRC2000,DCBC2006,CDP2009,KSUB2011,BCLR2013,CHM2014,Se2015,CCP2017} and the references therein.\newline $\bullet$ It is numerically observed in \cite{GTLW2017} that the decay of $E(t)$ is of order ${\mathcal O}(t^{-1})$. Therefore one cannot expect a sub-exponential energy dissipation rate, $\dot{E}(t) \lesssim -\braket{t}^{-\lamta}E(t)$, or its hypocoercivity counterpart to hold. Here we focus on the second difficulty, and give a first rigorous result in this direction. \begin{theorem}[{\bf Anticipation with repulsive-attractive potential}]\label{thm:thm3} Consider the 2D anticipated dynamics \eqref{eq:AT} of $N=2$ agents subject to a repulsive-attractive potential which has a local minimum at $r=r_0>0$ where $U''(r_0) = a >0$.
Then there exists a constant $\epsilon>0$, such that if the initial data is small enough, \begin{equation}\label{eq:small_enough} \big||{\bf x}_1(0)-{\bf x}_2(0)|-r_0\big|^2 + |{\bf v}_1(0)-{\bf v}_2(0)|^2 \le \epsilon, \end{equation} then the solution to \eqref{eq:AT} satisfies the following algebraic decay: \begin{equation} \big||{\bf x}_1(t)-{\bf x}_2(t)|-r_0\big| \le C\braket{t}^{-1}\ln^{1/2}\braket{1+t},\quad |{\bf v}_1(t)-{\bf v}_2(t)| \le C\braket{t}^{-1/2}. \end{equation} \end{theorem} The proof, based on a nonlinear hypocoercivity argument for the anticipated energy, is carried out in section \ref{sec:ra}. \begin{remark} The detailed description of the dynamics outlined in the proof reveals that the radial component of the velocity, $v_r\lesssim \braket{t}^{-1}\ln^{1/2}\braket{1+t}$, decays faster than its tangential part, $v_\theta \lesssim \braket{t}^{-1/2}$. Therefore, although the dynamics of \eqref{eq2} can be complicated at the beginning, it will eventually settle into a circulation around the equilibrium, provided the initial data is small enough. \end{remark} \subsection{Anticipation hydrodynamics} The large crowd (hydro-)dynamics associated with \eqref{eq:AT} is described by density and momentum $(\rho,\rho{\bf u})$ governed by\footnote{Under a simplifying assumption of a mono-kinetic closure.} \begin{equation}\label{eq:hydro} \left\{\begin{split} \rho_t + \nabla_{\bf x}\cdot (\rho {\bf u}) &= 0 \\ (\rho{\bf u})_t + \nabla_{\bf x}\cdot (\rho{\bf u}\otimes {\bf u}) & = - \int \nabla U(|{\bf x}^\tau - {\bf y}^\tau|) \,\mathrm{d}\rho({\bf y}), \quad {\bf x}^\tau := {\bf x} + \tau {\bf u}(t,{\bf x}). \end{split}\right. \end{equation} The large-time flocking behavior of \eqref{eq:hydro} is studied in terms of lemma \ref{lem:Mean} --- a continuum version of the discrete lemma of means proved in section \ref{sec:means}.
That is, we obtain a sub-linear time bound on the spread of $\text{supp}\,\rho(t,\cdot)$, which in turn is used to control the enstrophy of the anticipated energy. In section \ref{sec:hydro} we outline the proof of our last main result, which states that \emph{if} \eqref{eq:hydro} admits a global smooth solution then such smooth solution must flock, in agreement with the general paradigm for Cucker-Smale dynamics discussed in \cite{TT2014,HeT2017}. \begin{theorem}[{\bf Anticipation hydrodynamics: smooth solutions must flock}]\label{thm:thm4} Let $(\rho,{\bf u})$ be a smooth solution of the anticipation hydrodynamics \eqref{eq:hydro} with an attractive potential subject to a fat tail decay, \eqref{eq:attractive}, of order $\beta<\frac{1}{3}$. Then there is sub-exponential decay of the anticipated energy fluctuations \begin{equation}\label{eq:Edecay2} \iint \left(\frac{1}{2m_0}|{\bf u}({\bf x})-\widebar{{\bf u}}_0|^2+ U(|{\bf x}^\tau-{\bf y}^\tau|)\right) \,\mathrm{d}\rho({\bf x})\,\mathrm{d}\rho({\bf y})\le Ce^{-t^{1-\lamta}}, \quad \lamta=\frac{2\beta}{1-\beta}<1. \end{equation} It follows that there is large time flocking, with sub-exponential alignment \[ \int |{\bf u}(t,{\bf x})-\widebar{{\bf u}}_0|^2\,\mathrm{d}{\rho({\bf x})} \stackrel{t \rightarrow \infty}{\longrightarrow} 0, \qquad \widebar{{\bf u}}_0=\frac{1}{m_0}\int (\rho{\bf u})_0({\bf x}) \,\mathrm{d}{{\bf x}}, \ \ m_0=\int \rho_0({\bf x})\,\mathrm{d}{{\bf x}}. \] \end{theorem} In proposition \ref{prop:1Dexist} we verify the existence of a global smooth solution (and hence flocking) of the 1D system \eqref{eq:hydro}, provided the threshold condition $u'_0(x) \geq -C(\tau,m_0,a)$ holds with a proper constant $C(\tau,m_0,a)>0$ depending on $\tau$, $m_0$ and the minimal convexity $a=\min U''>0$.
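Before turning to the proofs, the discrete lemma of means \eqref{eq:means} invites a quick numerical sanity check. In the sketch below (the dimensions, bounds $\lambda,\Lambda$ and the random data are arbitrary choices, not part of the analysis), constant weights $c_{ij}\equiv c_0$ make the local weighted differences collapse onto $c_0({\bf z}_i-\widebar{{\bf z}})$, so the two sides of \eqref{eq:means} agree exactly up to the factor $c_0^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 50, 3
lam, Lam = 0.5, 2.0                     # bounds 0 < lambda <= c_ij <= Lambda

z = rng.standard_normal((N, d))         # arbitrary vectors z_j
zbar = z.mean(axis=0)                   # global mean
c = rng.uniform(lam, Lam, size=(N, N))  # admissible weights c_ij

lhs = ((z - zbar) ** 2).sum() / N       # (1/N) sum_i |z_i - zbar|^2
diff = z[:, None, :] - z[None, :, :]    # z_i - z_j
local = (c[:, :, None] * diff).mean(axis=1)  # (1/N) sum_j c_ij (z_i - z_j)
rhs = (local ** 2).sum() / N

# Constant weights c_ij = c0: (1/N) sum_j c0 (z_i - z_j) = c0 (z_i - zbar),
# so the right-hand side equals exactly c0^2 times the left-hand side.
c0 = 1.7
rhs0 = (((c0 * diff).mean(axis=1)) ** 2).sum() / N
```

For the non-constant weights above one checks that the left-hand side stays below $C(\lambda,\Lambda)\lesssim \nicefrac{\Lambda^2}{\lambda^4}$ times the right-hand side, consistent with \eqref{eq:means}.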
\section{A priori $L^\infty$ bounds for confining potentials}\label{sec:uniform} In this section we prove the uniform bounds asserted in \eqref{eq:bound1} and \eqref{eq:bound}, corresponding to the anticipation dynamics in \eqref{eq:UF} and, respectively, \eqref{eq:AT}. Due to the momentum conservation \eqref{momentum}, we may assume without loss of generality that in both cases $\widebar{{\bf x}}(t)=\widebar{{\bf v}}(t)\equiv0$. This will always be assumed in the rest of this paper. We recall that the dynamics of \eqref{eq:UF} assumes that $U$ lies in the class of convex potentials, \eqref{eq:convex}, and the dynamics of \eqref{eq:AT} assumes a larger class of attractive potentials, \eqref{eq:attractive}. In fact, here we prove uniform bounds under a more general setup of \emph{confining potentials}. \begin{assumption}[{\bf Confining potentials}]\label{ass:confining} There exist constants $a>0, L\geq0$ such that \begin{equation}\label{eq:confining} U(r) \geq a\left(\langle r\rangle^{2-\beta}-L\right), \qquad 0\leq \beta\leq 1. \end{equation} \end{assumption} \noindent Observe that in particular, attractive potentials are confining\footnote{Thus, we have the increasing hierarchy of three classes of potentials --- convex, attractive and confining potentials.}, \begin{equation}\label{eq:attconf} U(r)=\int_0^r U'(s)\,\mathrm{d}{s} \geq \int_0^r a\langle s\rangle^{-\beta}s\,\mathrm{d}{s}= \frac{a}{2-\beta}\left(\langle r\rangle^{2-\beta}-1\right). \end{equation} The class of confining potentials is much larger, however, and it includes repulsive-attractive potentials (discussed in section \ref{sec:ra} below). In particular, a confining $U$ need not be positive. \begin{lemma}[{\bf Uniform bound on positions for \ref{eq:UF} system}]\label{lem:convex} Consider the anticipation dynamics \eqref{eq:UF} with bounded positive communication matrix $0\leq \Phi\leq \phi_+$, and bounded confining potential \eqref{eq:bounded},\eqref{eq:confining}.
Then the solution $\{({\bf x}_i(t),{\bf v}_i(t))\}$ satisfies the a priori estimate \begin{equation}\label{eq:cX1} \max_i |{\bf x}_i(t)| \le C_\infty\braket{t}^{\frac{2}{4-3\beta}}, \quad \max_i|{\bf v}_i(t)| \leq C_\infty\braket{t}^{\frac{2-\beta}{4-3\beta}}, \qquad 0\leq \beta \leq 1. \end{equation} \end{lemma} \begin{remark} Note that we require a positive communication matrix $\Phi$ but otherwise we do not insist on any fat tail condition \eqref{eq:positive}. \end{remark} \begin{proof} Our proof is based on the technique introduced in \cite[\S2.2]{ST2019}, in which we prove uniform bounds in terms of the \emph{particle energy}\footnote{In fact $E_i$ is not a proper particle energy, since $\sum_i E_i \ne N E$ (the pairwise potential is counted twice). However, it is the ratio of the kinetic energy and potential energy in \eqref{cEi} which is essential, as one would like to eliminate all the positive terms with indices $i$ in \eqref{cEi}, in order to avoid exponential growth of $E_i$.} \begin{equation}\label{cEi} E_i(t) := \frac{1}{2}|{\bf v}_i|^2 + \frac{1}{N}\sum_j U(|{\bf x}_i-{\bf x}_j|).
\end{equation} We start by relating the local energy to the position of particle $i$: using \eqref{eq:confining} followed by Jensen's inequality for the convex mapping\footnote{\label{fn:jensen}$(\braket{r}^{2-\beta})'' = -\beta(2-\beta) r^2\braket{r}^{-2-\beta} + (2-\beta)\braket{r}^{-\beta} = (2-\beta)\big((1-\beta) r^2 + 1\big)\braket{r}^{-2-\beta}>0$ for $\beta\leq 1$.} ${\bf x}\mapsto \langle {\bf x} \rangle^{2-\beta}$, we find \[ \frac{E_i(t)}{a} \ge \frac{1}{N}\sum_j \left(\langle {\bf x}_i-{\bf x}_j\rangle^{2-\beta}-L\right) \ge \big\langle \frac{1}{N}\sum_j ({\bf x}_i-{\bf x}_j) \big\rangle^{2-\beta}-L = \langle {\bf x}_i\rangle^{2-\beta}-L. \] It follows that the maximal spread of positions, $\cX(t):=\max_i|{\bf x}_i(t)|$, does not exceed \begin{equation}\label{cF1} \cX(t) \le \left(\frac{E_\infty(t)}{a}+L\right)^{\frac{1}{2-\beta}}, \qquad \cX(t)=\max_i |{\bf x}_i(t)|, \quad E_\infty(t)= \max_i E_i(t). \end{equation} Next we bound the energy dissipation rate of each particle. By \eqref{eq:positive} the communication matrices $\Phi_{ij}$ are non-negative and bounded\footnote{Observe that we do not use the fat tail decay \eqref{eq:positive}.}, $0\leq \Phi_{ij}\leq \phi_+$, and since $\sum_j {\bf v}_j=0$, \begin{equation} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E_i(t) = & {\bf v}_i\cdot\left(-\frac{1}{N}\sum_j \nabla U(|{\bf x}_i-{\bf x}_j|) + \frac{1}{N}\sum_j \Phi_{ij}({\bf v}_j-{\bf v}_i)\right) \\ \quad & + \frac{1}{N}\sum_j \nabla U(|{\bf x}_i-{\bf x}_j|)\cdot({\bf v}_i-{\bf v}_j) \\ = & \frac{1}{N}\sum_j \Phi_{ij}({\bf v}_j-{\bf v}_i)\cdot{\bf v}_i - \frac{1}{N}\sum_j \nabla U(|{\bf x}_i-{\bf x}_j|)\cdot{\bf v}_j \\ = & -\frac{1}{2N}\sum_j \Phi_{ij}{\bf v}_i\cdot{\bf v}_i-\frac{1}{2N}\sum_j \Phi_{ij}({\bf v}_j-{\bf v}_i)\cdot({\bf v}_j-{\bf v}_i) \\ & + \frac{1}{2N}\sum_j \Phi_{ij}{\bf v}_j\cdot{\bf v}_j- \frac{1}{N}\sum_j \Big(\nabla U(|{\bf x}_i-{\bf x}_j|)-\nabla U(|{\bf x}_i|)\Big)\cdot{\bf v}_j \\ \le & {\phi_+ E(0)} +
\sqrt{2E(0)}\left(\frac{1}{N}\sum_j \big|\nabla U(|{\bf x}_i-{\bf x}_j|)-\nabla U(|{\bf x}_i|)\big|^2\right)^{1/2}. \end{split} \end{equation} To bound the sum on the right, we use the fact that $D^2U$ is bounded, \eqref{eq:bounded}, followed by \eqref{cF1}, to find \begin{equation}\label{xi2} \begin{split} \frac{1}{N}\sum_{j} |&\nabla U(|{\bf x}_i-{\bf x}_j|)-\nabla U(|{\bf x}_i|)|^2 \\ & \le \sup_{{\bf x}}|D^2U(|{\bf x}|)|^2 \frac{1}{N}\sum_{j} |{\bf x}_j|^2 \le \frac{A^2}{N}\sum_{j} |{\bf x}_j|^2 \\ & = \frac{A^2}{2N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^2 \le 2^\beta A^2\max_i|{\bf x}_i|^\beta\times \frac{1}{2N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^{2-\beta} \\ & \le 2^{\beta} A^2 \cX^{\beta} \frac{1}{2N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^{2-\beta} \le 2^{\beta}A^2 \cX^{\beta} \frac{1}{2N^2}\sum_{i,j} \left(\frac{U(|{\bf x}_i-{\bf x}_j|)}{a}+L\right) \\ & \le 2^{\beta}A^2 \cX^{\beta}\left(\frac{E(0)}{a}+\frac{L}{2}\right). \end{split} \end{equation} Therefore \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E_i(t) \le &{\phi_+ E(0)} + \sqrt{2E(0)}\left(2^{\beta}A^2\cX^{\beta}\Big(\frac{E(0)}{a}+\frac{L}{2}\Big)\right)^{1/2} \\ \le & {\phi_+ E(0)} + \sqrt{2E(0)}\left(2^{\beta}A^2\Big(\frac{E_\infty}{a}+L\Big)^{\frac{\beta}{2-\beta}}\Big(\frac{E(0)}{a}+\frac{L}{2}\Big)\right)^{1/2} \\ \end{split} \] and taking the maximum over all $i$'s, \begin{equation} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}E_\infty(t) \le & {\phi_+ E(0)} + \sqrt{2E(0)}\left(2^{\beta}A^2\Big(\frac{E_\infty(t)}{a}+L\Big)^{\frac{\beta}{2-\beta}}\Big(\frac{E(0)}{a}+\frac{L}{2}\Big)\right)^{1/2}.
\end{split} \end{equation} Set $f(t) := E_\infty(t) + aL$; the last inequality then tells us that $f' \le C_1 + C_2 f^\alpha$ with $\alpha:=\frac{\beta}{4-2\beta}$, and since $\beta<1$ implies $\alpha<\nicefrac{1}{2}$, \[ f \lesssim \braket{t}^{\frac{1}{1-\alpha}} = C \braket{t}^{\frac{2(2-\beta)}{4-3\beta}}, \] which implies the uniform bound on velocities in \eqref{eq:cX1}, \[ \max_i|{\bf v}_i(t)| \leq 2\sqrt{E_\infty(t)+aL} \lesssim \braket{t}^{\frac{2-\beta}{4-3\beta}}. \] The uniform bound on positions, $\max_i|{\bf x}_i(t)|$, follows in view of \eqref{cF1}. \end{proof} Lemma \ref{lem:convex} applies, in particular, to the anticipation dynamics \eqref{eq:AT} with a convex potential, for which $D^2U$ is positive definite. Next, we prove uniform bounds for more general confining $U$'s. \begin{lemma}[{\bf Uniform bound on anticipated positions}]\label{lem:bound} Consider the anticipation dynamics \eqref{eq:AT} with bounded confining potential \eqref{eq:bounded},\eqref{eq:confining}. Then the solution of the anticipation dynamics \eqref{eq:AT} satisfies the a priori estimate \begin{equation}\label{eq:cX} \max_i |{\bf x}_i^\tau(t)| \le C_\infty\braket{t}^{\frac{1}{2-2\beta}}, \qquad 0\leq \beta < 1. \end{equation} \end{lemma} \begin{remark} The a priori bound \eqref{eq:cX} is weaker than that of Lemma \ref{lem:convex} and may not be optimal for $\beta$ close to 1. We do not pursue an improved bound since it would not provide an increased range of $\beta$'s for which Theorem \ref{thm:thm2} holds. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:bound}] The key quantity for proving the a priori bound \eqref{eq:cX} is the `anticipated particle energy' in \eqref{eq:AT}, \begin{equation}\label{Ei} \mathcal{E}_i(t) := \frac{1}{2}|{\bf v}_i|^2 + \frac{1}{N}\sum_j U(|{\bf x}_i^\tau-{\bf x}_j^\tau|). 
\end{equation} As in the previous proof, the confining property of $U$ implies that the maximal spread of anticipated positions, $\cXtau(t):=\max_i |{\bf x}_i^\tau(t)|$, does not exceed \begin{equation}\label{F1} \cXtau(t) \le \left(\frac{\mathcal{E}_\infty(t)}{a}+L\right)^{\frac{1}{2-\beta}}, \qquad \cXtau(t):=\max_i |{\bf x}^\tau_i(t)|, \quad \mathcal{E}_\infty(t)= \max_i \mathcal{E}_i(t). \end{equation} Next, we bound the energy dissipation rate of each particle: since $\sum_j ({\bf v}_j+\tau\dot{{\bf v}}_j)=0$, \begin{equation}\label{dEi} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}_i(t) = & \ {\bf v}_i\cdot \dot{{\bf v}}_i + \frac{1}{N}\sum_{j} \nabla U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)\cdot ({\bf v}_i + \tau \dot{{\bf v}}_i - {\bf v}_j - \tau \dot{{\bf v}}_j) \\ = & -\tau |\dot{{\bf v}_i}|^2 - \frac{1}{N}\sum_{j} \nabla U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)\cdot({\bf v}_j+ \tau \dot{{\bf v}}_j) \\ = & -\tau |\dot{{\bf v}_i}|^2 - \frac{1}{N}\sum_{j} \big(\nabla U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)-\nabla U(|{\bf x}_i^\tau|)\big)\cdot({\bf v}_j+ \tau \dot{{\bf v}}_j). \end{split} \end{equation} As before, the boundedness of $D^2U$, followed by \eqref{F1}, implies \begin{equation}\label{xitau2} \frac{1}{N}\sum_{j} \big|\nabla U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)-\nabla U(|{\bf x}_i^\tau|)\big|^2 \le 2^{\beta}A^2 \cXtau^{\beta}\left(\frac{\mathcal{E}(0)}{a}+\frac{L}{2}\right). 
\end{equation} Inserting \eqref{xitau2} into the RHS of \eqref{dEi} and adding the energy-enstrophy balance \eqref{energy}, we find, for any $c>0$,\footnote{Note that a confining potential need not be positive, yet $U\geq -aL$, and hence $\nicefrac{1}{2N}\sum_j |{\bf v}_j|^2 \le \mathcal{E}(0)+aL$.} \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}&(\mathcal{E}(t) + \mathcal{E}_i(t)) \\ & \le - \frac{\tau}{N}\sum_j |\dot{{\bf v}}_j|^2 - \tau|\dot{{\bf v}}_i|^2 + \frac{c}{N}\sum_j |{\bf v}_j|^2 + \frac{c\tau^2}{N}\sum_j |\dot{{\bf v}}_j|^2 \\ & \ \ \ \ + \frac{1}{4cN}\sum_{j} \big|\nabla U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)-\nabla U(|{\bf x}_i^\tau|)\big|^2 \\ & \le - \frac{\tau(1- c \tau)}{N}\sum_j |\dot{{\bf v}}_j|^2 - \tau|\dot{{\bf v}}_i|^2 + 2 c \big(\mathcal{E}(0) + aL\big) + \frac{1}{4c}2^{\beta}A^2 \cXtau^{\beta}\left(\frac{\mathcal{E}(0)}{a}+\frac{L}{2}\right) \\ & \le \frac{2}{\tau} \big(\mathcal{E}(0) +aL\big) + \frac{\tau}{4}2^{\beta}A^2 \cXtau^{\beta}\left(\frac{\mathcal{E}(0)}{a}+\frac{L}{2}\right), \qquad \mbox{(taking $c=1/\tau$)}. \end{split} \] Taking the maximum over all $i$, \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}(\mathcal{E}(t) + \mathcal{E}_\infty(t)) \le & \frac{2}{\tau} \big(\mathcal{E}(0) +aL\big) + \frac{\tau}{4}2^{\beta}A^2 \cXtau^{\beta}\left(\frac{\mathcal{E}(0)}{a}+\frac{L}{2}\right) \\ \le & \frac{2}{\tau} \big(\mathcal{E}(0) +aL\big) + \frac{\tau}{4}2^{\beta}A^2\left(\frac{\mathcal{E}_\infty(t)}{a}+L\right)^{\frac{\beta}{2-\beta}}\left(\frac{\mathcal{E}(0)}{a}+\frac{L}{2}\right). \\ \end{split} \] The last inequality tells us that $f(t) := \mathcal{E}(t) + \mathcal{E}_\infty(t) + aL$ satisfies $f' \le C_1 + C_2 f^\alpha$ with $\alpha:=\frac{\beta}{2-\beta}$. Since $\beta<1$, we have $\alpha<1$, and therefore \[ f \lesssim \braket{t}^{\frac{1}{1-\alpha}} = C \braket{t}^{\frac{2-\beta}{2-2\beta}}, \] and the uniform bound \eqref{eq:cX} follows in view of \eqref{F1}. 
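Both here and in the previous proof we used the step $f' \le C_1 + C_2 f^\alpha$, $\alpha<1$, implies $f\lesssim \braket{t}^{\frac{1}{1-\alpha}}$; for completeness, a sketch of the underlying comparison argument (the constant $K$ below is ours, chosen for illustration):

```latex
% Comparison with the supersolution g(t) = K<t>^{1/(1-alpha)}:
\[
g(t) := K\braket{t}^{\frac{1}{1-\alpha}}
\ \Longrightarrow \
g'(t) = \frac{K}{1-\alpha}\,t\,\braket{t}^{\frac{1}{1-\alpha}-2}
\ \ge \ \frac{K}{2(1-\alpha)}\braket{t}^{\frac{\alpha}{1-\alpha}},
\qquad t\ge 1,
\]
while
\[
C_1 + C_2\,g^{\alpha}(t)
= C_1 + C_2K^{\alpha}\braket{t}^{\frac{\alpha}{1-\alpha}}
\le \big(C_1 + C_2K^{\alpha}\big)\braket{t}^{\frac{\alpha}{1-\alpha}}.
\]
```

Since $\alpha<1$, one may choose $K$ so large that $\frac{K}{2(1-\alpha)} \ge C_1 + C_2K^{\alpha}$ and $K\ge f(1)$; then $g$ is a supersolution of $f'\le C_1+C_2f^{\alpha}$, whence $f(t)\le g(t)\lesssim \braket{t}^{\frac{1}{1-\alpha}}$ for $t\ge 1$.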
\end{proof} \section{Anticipation with convex potentials and positive communication kernels}\label{sec:convex} Equipped with the uniform bound \eqref{eq:cX}, we turn to prove Theorem \ref{thm:thm1} by a hypocoercivity argument. In \cite{ST2019} we use hypocoercivity to prove flocking with \emph{quadratic potentials}. Here, we make a judicious use of the assumed fat tail conditions, \eqref{eq:positive},\eqref{eq:convex}, to extend these arguments to general convex potentials. \begin{proof}[Proof of Theorem \ref{thm:thm1}] We introduce the \emph{modified energy}, $\widehat{E}(t)$, by adding a multiple of the cross term $\nicefrac{1}{N}\sum_i {\bf x}_i\cdot{\bf v}_i$, \[ \widehat{E}(t):=E(t) + \frac{\epsilon(t)}{N}\sum_i{\bf x}_i(t)\cdot{\bf v}_i(t). \] We claim that with a proper choice of $\epsilon(t)$, the modified energy is positive definite. Indeed, the convex (hence attractive) potential satisfies the pointwise bound \eqref{eq:attconf}, which together with the uniform bound \eqref{eq:bound1} implies \[ \begin{split} |\epsilon(t)\frac{1}{N}\sum_i {\bf x}_i \cdot{\bf v}_i| \le & \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \frac{\epsilon^2(t)}{N}\sum_i |{\bf x}_i|^2 = \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \frac{\epsilon^2(t)}{2N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^2 \\ \le & \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \epsilon^2(t) (2\cX(t))^{\beta}\frac{1}{2N^2}\sum_{i,j} \left(\frac{U(|{\bf x}_i-{\bf x}_j|)}{a}+1\right) \\ \le & \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \epsilon^2(t)(2C_\infty)^{\beta} \braket{t}^{\frac{2\beta}{4-3\beta}}\frac{1}{2N^2}\sum_{i,j} \left(\frac{U(|{\bf x}_i-{\bf x}_j|)}{a}+1\right). 
\end{split} \] Therefore it suffices to choose \begin{equation}\label{epsilon2} \epsilon(t) = \epsilon_0\braket{t}^{-\alpha}, \qquad \ \alpha > \frac{\beta}{4-3\beta}, \end{equation} with small enough $\epsilon_0>0$ and \emph{any} $\alpha > \frac{\beta}{4-3\beta}$, to be determined later, to guarantee $|\nicefrac{\epsilon(t)}{N}\sum_i{\bf x}_i\cdot{\bf v}_i| \le {E(t)}/{2}$, hence the lower bound $\widehat{E}(t)\geq E(t)/2>0$. Next, we turn to verify the coercivity of $\widehat{E}(t)$. First notice that Lemma \ref{lem:convex} implies the following $L^\infty$ bound on ${\bf x}_i^{\tau_{ij}}$: \[ |{\bf x}_i^{\tau_{ij}}| \le |{\bf x}_i| + \tau|{\bf v}_i| \le (1+\tau)C_\infty\braket{t}^{\frac{2}{4-3\beta}}. \] This, together with the assumed fat tails of $\Phi$ and $D^2U$, implies the following lower bounds: by \eqref{eq:positive}, the $\Phi_{ij}$ are bounded from below by \begin{equation}\label{eq:phiminus} \Phi_{ij}\geq \phi_-(t) := c\braket{t}^{-\frac{2\gamma}{4-3\beta}}, \end{equation} and integrating \eqref{eq:convex}, $U''(r) \geq a\langle r\rangle^{-\beta}$, twice implies that $U$ admits the lower bound \eqref{eq:attconf} \begin{equation}\label{psi} U(|{\bf x}_i-{\bf x}_j|) \geq c|{\bf x}_i-{\bf x}_j|^2 \langle |{\bf x}_i-{\bf x}_j| \rangle^{-\beta} \geq |{\bf x}_i-{\bf x}_j|^2\psi_-(t), \quad \psi_-(t):=c\braket{t}^{-\frac{2\beta}{4-3\beta}}. \end{equation} We now turn to the hypocoercivity argument based on the energy estimate \eqref{energy1}. To this end, we append to $E(t)$ a proper multiple of the cross term $\sum{\bf x}_i\cdot{\bf v}_i$, consult e.g., \cite{ST2019,DS2019}. 
Using the symmetry of $\Phi_{ij}$, the time derivative of this cross term is given by \begin{equation}\label{eq:cross} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\frac{1}{N}\sum_i &{\bf x}_i\cdot{\bf v}_i \\ = & \ \frac{1}{N}\sum_i |{\bf v}_i|^2 + \frac{1}{N}\sum_i {\bf x}_i\cdot\left( -\frac{1}{N}\sum_j \nabla U(|{\bf x}_i-{\bf x}_j|) + \frac{1}{N}\sum_j \Phi_{ij}({\bf v}_j-{\bf v}_i) \right) \\ = & \ \frac{1}{2N^2}\sum_{i,j} |{\bf v}_i-{\bf v}_j|^2 - \frac{1}{2N^2}\sum_{i,j} ({\bf x}_i-{\bf x}_j)\cdot \nabla U(|{\bf x}_i-{\bf x}_j|) \\ & + \ \frac{1}{2N^2}\sum_{i,j} \Phi_{ij}({\bf v}_j-{\bf v}_i)\cdot({\bf x}_i-{\bf x}_j). \end{split} \end{equation} We prepare the following three bounds. Since $U$ is convex, $U'(r)$ is increasing; hence $\displaystyle U(r) = \int_0^r U'(s) \,\mathrm{d}{s} \leq rU'(r)$, which implies \begin{subequations}\label{eqs:cross} \begin{equation}\label{eq:cross1} \begin{split} \frac{1}{2N^2}\sum_{i,j} ({\bf x}_i-{\bf x}_j)\cdot \nabla U(|{\bf x}_i-{\bf x}_j|) &= \frac{1}{2N^2}\sum_{i,j}U'(|{\bf x}_i-{\bf x}_j|) |{\bf x}_i-{\bf x}_j| \\ & \ge \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|). \end{split} \end{equation} Next, using the weighted Cauchy-Schwarz inequality twice --- weighted first by the positive definite $0 < \Phi_{ij}\le \phi_+$, and then by the yet-to-be determined $\kapta(t)>0$ --- we find \begin{equation}\label{eq:cross2} \begin{split} \left| \frac{1}{2N^2}\sum_{i,j}\right. 
& \left.\Phi_{ij}({\bf v}_j-{\bf v}_i)\cdot({\bf x}_i-{\bf x}_j) \right| \\ & \le \frac{\kapta}{4N^2}\sum_{i,j} \Phi_{ij}({\bf v}_i-{\bf v}_j)\cdot({\bf v}_i-{\bf v}_j) + \frac{1}{4\kapta N^2}\sum_{i,j} \Phi_{ij}({\bf x}_i-{\bf x}_j)\cdot({\bf x}_i-{\bf x}_j)\\ & \le \frac{\kapta(t)}{4N^2}\sum_{i,j} \Phi_{ij}({\bf v}_i-{\bf v}_j)\cdot({\bf v}_i-{\bf v}_j) + \frac{\phi_+}{4\kapta(t) N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^2. \end{split} \end{equation} Recall that with the choice of $\epsilon(t)=\epsilon_0\braket{t}^{-\alpha}$ in \eqref{epsilon2}, we have $\displaystyle |\dot{\epsilon}(t)| \le \alpha \frac{\epsilon(t)}{\braket{t}}$. With a yet-to-be determined weight $\delta(t)>0$, we have the final bound \begin{equation}\label{eq:cross3} \begin{split} \Big|\frac{\dot{\epsilon}(t)}{N}\sum_i{\bf x}_i\cdot {\bf v}_i \Big| & \leq \left|\dot{\epsilon}(t)\right|\frac{1}{2\delta(t) N^2}\sum_{i,j} |{\bf x}_i|^2 + \left|\dot{\epsilon}(t)\right| \frac{\delta(t)}{2N^2}\sum_{i,j} |{\bf v}_i|^2\\ & \leq \frac{\alpha }{2\delta(t) \braket{t}}\frac{\epsilon(t)}{2N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^2 + \frac{\alpha \delta(t)}{2 \braket{t}}\frac{\epsilon(t)}{2N^2}\sum_{i,j} |{\bf v}_i-{\bf v}_j|^2. \end{split} \end{equation} \end{subequations} Adding \eqref{eq:cross} to the energy decay \eqref{energy1}, we find that the dissipation rate of the modified energy $\widehat{E}(t):=E(t) + \nicefrac{\epsilon(t)}{N}\sum_i{\bf x}_i(t)\cdot{\bf v}_i(t)$ does not exceed, in view of \eqref{eqs:cross}, \begin{equation}\label{eq:dissipative} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{E}(t) \le & \left( -\tau+\frac{\kapta(t)}{2}\epsilon(t)\right)\frac{1}{2N^2}\sum_{i,j} \Phi_{ij}({\bf v}_i-{\bf v}_j)\cdot({\bf v}_i-{\bf v}_j) \\ & \ \ + \left(\frac{\phi_+}{2\kapta(t)}\epsilon(t) + \frac{\alpha}{2\delta(t)\braket{t}}\epsilon(t)\right)\frac{1}{2N^2}\sum_{i,j} |{\bf x}_i-{\bf x}_j|^2\\ & \ \ \ \ + \left( \epsilon(t) +\frac{\alpha\delta(t)}{2\braket{t}}\epsilon(t)\right)\frac{1}{2N^2}\sum_{i,j} |{\bf v}_i-{\bf v}_j|^2\\ & \ \ \ \ \ \ 
-\epsilon(t)\frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|) \\ & =: I + II + III +IV. \end{split} \end{equation} To complete the (hypo-)coercivity argument, we guarantee that the terms on the right of \eqref{eq:dissipative} are negative. To this end, set $\kapta(t)=\tau/\epsilon(t)$, so that the first pre-factor is $\leq -\nicefrac{\tau}{2}$, hence \[ I \leq -\frac{\tau}{2}\phi_-(t) \frac{1}{2N^2}\sum_{i,j} |{\bf v}_i-{\bf v}_j|^2=-\frac{\tau}{2}\phi_-(t) \frac{1}{N}\sum_i |{\bf v}_i|^2, \qquad \kapta(t)=\frac{\tau}{\epsilon(t)}. \] Next, we set $\displaystyle \delta(t)=\frac{\delta_0}{\epsilon(t)\braket{t}}$ so that the second pre-factor is $\displaystyle\leq \left(\frac{\phi_+}{\tau}+\frac{\alpha}{2\delta_0}\right)\epsilon^2(t)$; hence the second term does not exceed, in view of \eqref{psi}, \[ II \leq \left(\frac{\phi_+}{\tau}+\frac{\alpha}{2\delta_0}\right)\frac{\epsilon^2(t)}{\psi_-(t)} \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|), \qquad \delta(t)=\frac{\delta_0}{\epsilon(t)\braket{t}}. \] With these choices of $\kapta$ and $\delta$, the third term does not exceed \[ III \le \left( \epsilon(t) +\frac{\alpha\delta_0}{2\braket{t}^2}\right)\frac{1}{2N^2}\sum_{i,j} |{\bf v}_i-{\bf v}_j|^2 =\left( \epsilon(t) +\frac{\alpha\delta_0}{2\braket{t}^2}\right)\frac{1}{N}\sum_{i} |{\bf v}_i|^2. \] We conclude \begin{equation}\label{eq:hypo} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{E}(t) & \le \left(-\frac{\tau}{2}\phi_-(t)+\epsilon(t)+\frac{\alpha\delta_0}{2\braket{t}^2}\right) \frac{1}{N} \sum_i |{\bf v}_i|^2\\ & \ \ +\left(- \epsilon(t) +\Big(\frac{\phi_+}{\tau}+\frac{\alpha}{2\delta_0}\Big)\frac{\epsilon^2(t)}{\psi_-(t)}\right) \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|). 
\end{split} \end{equation} Now set $\displaystyle\alpha\geq \frac{2\gamma}{4-3\beta}$ so that $\phi_-(t)$ decays no faster than $\epsilon(t)$; moreover, $\phi_-(t)$ decays no faster than $\braket{t}^{-2}$ since $6\beta+2\gamma \leq 8$. Hence, with small enough $\epsilon_0,\delta_0>0$, the first pre-factor on the right of \eqref{eq:hypo} does not exceed $-{\tau}\phi_-(t)/4$. Next, let $\displaystyle\alpha\geq \frac{2\beta}{4-3\beta}$ so that $\epsilon(t)/\psi_-(t)$ is bounded; hence, with small enough $\epsilon_0 \ll \delta_0$, the second pre-factor on the right of \eqref{eq:hypo} does not exceed $-\epsilon(t)/2$. We conclude \[ \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{E}(t) \lesssim -\frac{\phi_-(t)}{N}\sum_i |{\bf v}_i|^2 -\frac{\epsilon(t)}{2N^2}\sum_{i,j} U(|{\bf x}_i-{\bf x}_j|) \lesssim -\braket{t}^{-\lamta} \widehat{E}(t), \quad {\lamta}=\frac{2\max\{\beta,\gamma\}}{4-3\beta}. \] This implies the sub-exponential decay of $\widehat{E}$, and thus that of the comparable $E$. \end{proof} \subsection{Flocking of matrix-based Cucker-Smale dynamics}\label{subsec:CS} The Cucker-Smale model \cite{CS2007a,CS2007b}, \begin{equation}\label{eq:CS} \left\{\begin{split} & \dot{{\bf x}}_i={\bf v}_i \\ & \dot{{\bf v}}_i = \frac{\tau}{N}\sum_{j=1}^N \Phi_{ij}({\bf v}_j-{\bf v}_i), \end{split}\right. \end{equation} is a special case of \eqref{eq:UF} with no interaction potential, $U=0$, which formally corresponds to $\beta=0$; in this case Theorem \ref{thm:thm1} would yield flocking for $\gamma <\nicefrac{1}{2}$. Here we justify these formalities and prove the velocity alignment of \eqref{eq:CS} (with no spatial concentration effect, however), under a slightly larger threshold. 
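The alignment mechanism in \eqref{eq:CS} is easy to observe numerically. A minimal sketch (not part of the proof): forward-Euler integration of \eqref{eq:CS} with the illustrative scalar kernel $\Phi_{ij}=\braket{|{\bf x}_i-{\bf x}_j|}^{-\gamma}$, $\gamma=\nicefrac{1}{2}$, $\tau=1$; all numerical parameters are our own choices.

```python
import numpy as np

def cucker_smale_step(x, v, dt, gamma=0.5, tau=1.0):
    """One forward-Euler step of (eq:CS) with Phi_ij = <|x_i-x_j|>^(-gamma)."""
    N = len(x)
    diff = x[:, None, :] - x[None, :, :]               # x_i - x_j, shape (N,N,d)
    phi = (1.0 + np.sum(diff**2, axis=-1)) ** (-gamma / 2.0)
    dvel = v[None, :, :] - v[:, None, :]               # v_j - v_i, shape (N,N,d)
    a = (tau / N) * np.einsum('ij,ijk->ik', phi, dvel)
    return x + dt * v, v + dt * a

rng = np.random.default_rng(0)
N, d = 20, 2
x = rng.standard_normal((N, d))
v = rng.standard_normal((N, d))
v -= v.mean(axis=0)                                    # zero-mean velocities
dE0 = 0.5 * np.mean(np.sum(v**2, axis=1))              # energy fluctuations at t=0
for _ in range(2000):
    x, v = cucker_smale_step(x, v, dt=0.01)
dE1 = 0.5 * np.mean(np.sum(v**2, axis=1))              # fluctuations at t=20
```

With these (illustrative) choices the kernel stays uniformly bounded from below, and the fluctuations drop by several orders of magnitude over $t\in[0,20]$, consistent with the exponential-type decay mechanism above.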
\begin{proposition}[{\bf Alignment of} \eqref{eq:CS} {\bf model with positive kernels}]\label{prop:CS} Consider the Cucker-Smale dynamics \eqref{eq:CS} with symmetric matrix kernel $\Phi$ satisfying \[ \phi_-\braket{{\bf x}_i-{\bf x}_j}^{-\gamma} \leq \Phi({\bf x}_i,{\bf x}_j) \leq \phi_+, \qquad 0\leq \gamma < \nicefrac{2}{3}. \] Then there is sub-exponential decay of the energy fluctuations \begin{equation}\label{eq:CSfracdecay} \delta E(t) \le Ce^{-t^{1-\lamta}}, \qquad \lamta=\frac{3\gamma}{2}, \qquad \delta E(t):= \frac{1}{2N}\sum_{i} |{\bf v}_i-\widebar{{\bf v}}|^2. \end{equation} It follows that there is a flock formation around the mean $\widebar{{\bf x}}(t)$ with large time velocity alignment at sub-exponential rate: \begin{equation} {\bf v}_i(t)\rightarrow \widebar{{\bf v}}_0,\quad {\bf x}_i(t) - \widebar{{\bf x}}(t) \rightarrow {\bf x}_i^\infty, \qquad \widebar{{\bf x}}(t):=\widebar{{\bf x}}_0+t\widebar{{\bf v}}_0, \end{equation} for some constants ${\bf x}_i^\infty$. \end{proposition} The proof is similar to that of Theorem \ref{thm:thm1} but follows a slightly different strategy: we start with an a priori estimate for the particle energies $E_i$, then proceed to control the positions $\braket{{\bf x}_i}$, which in turn provide enough energy dissipation. \begin{proof} Define the particle energy \begin{equation} E_i(t) := \frac{1}{2}|{\bf v}_i|^2,\quad E_\infty(t) = \max_i E_i(t). 
\end{equation} Observe that it satisfies \begin{equation} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} E_i(t) = & \frac{1}{N}\sum_{j=1}^N \Phi_{ij}({\bf v}_j-{\bf v}_i)\cdot {\bf v}_i \\ = & -\frac{1}{2N}\sum_{j=1}^N \Phi_{ij}({\bf v}_j-{\bf v}_i)\cdot ({\bf v}_j-{\bf v}_i)-\frac{1}{2N}\sum_{j=1}^N \ \Phi_{ij}{\bf v}_i\cdot {\bf v}_i+ \frac{1}{2N}\sum_{j=1}^N \Phi_{ij}{\bf v}_j\cdot {\bf v}_j\\ \le & -\phi_-(t)\frac{|{\bf v}_i|^2}{2} + \phi_+E(t), \end{split} \end{equation} where $\phi_-(t)$ is the lower bound of the symmetric $\Phi({\bf x}_i,{\bf x}_j)$, given, in view of \eqref{eq:positive}, by \begin{equation} \phi_-(t) = a 2^{-\gamma}\cX^{-\gamma}(t) ,\quad \cX(t) := \max_i |{\bf x}_i(t)|. \end{equation} Taking $i$ to be the particle with the largest $E_i$, we find \begin{equation}\label{Einfty_1} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} E_\infty(t) \le -\phi_-(t)E_\infty(t) + \phi_+E(t) \le -\phi_-(t)E_\infty(t) + \phi_+E(0). \end{equation} This implies \begin{equation} E_\infty(t) \le E_\infty(0)+\phi_+E(0)t. \end{equation} Next, we notice that \begin{equation} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \cX(t) \le \max_i|{\bf v}_i| \le \sqrt{2E_\infty(t)} \le \sqrt{2(E_\infty(0)+\phi_+E(0)t)}. \end{equation} This yields $\cX(t) \le C\braket{t}^{3/2}$, and in view of the fat tail \eqref{eq:positive}, we end with the lower bound \[ \phi_-(t) \ge c\braket{t}^{-\frac{3\gamma}{2}}. \] Therefore the energy dissipation \eqref{energy1} gives \begin{equation} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} E(t) \le -\phi_-(t) E(t) \le -c\braket{t}^{-\frac{3\gamma}{2}}E(t), \end{equation} which implies the sub-exponential decay \eqref{eq:CSfracdecay}, $\displaystyle E(t) \le E(0)e^{-c\braket{t}^{1-{\lamta}}}$ with ${\lamta}=\frac{3\gamma}{2}<1$. 
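The integrating-factor bound used next, when we revisit \eqref{Einfty_1}, can be recorded explicitly; a standard sketch: any $f$ satisfying $f' \le -\phi_-(t)f + g(t)$ obeys

```latex
\[
\frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\Big(e^{\Phi_-(t)}f(t)\Big)
= e^{\Phi_-(t)}\big(f'(t) + \phi_-(t)f(t)\big)
\le e^{\Phi_-(t)}g(t),
\qquad \Phi_-(t) := \int_0^t \phi_-(s)\,\mathrm{d}{s},
\]
and hence, upon integration,
\[
f(t) \le e^{-\Phi_-(t)}f(0)
+ \int_0^t e^{-(\Phi_-(t)-\Phi_-(s))}\,g(s)\,\mathrm{d}{s}.
\]
```

This is applied below with $f=E_\infty$ and $g=\phi_+E$.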
Equipped with this sub-exponential decay of $E(t)$, we revisit \eqref{Einfty_1}: this time it implies \begin{equation} \begin{split} E_\infty(t) &\le e^{-\Phi_-(t)} E_\infty(0) + \phi_+\int_0^t e^{-(\Phi_-(t)-\Phi_-(s))} E(s) \,\mathrm{d}{s}\\ & \le C\braket{t} e^{-c\braket{t}^{1-{\lamta}}}, \qquad \Phi_-(t) := \int_0^t \phi_-(s)\,\mathrm{d}{s} \ge c\braket{t}^{1-{\lamta}}. \end{split} \end{equation} This shows the sub-exponential decay, independent of $N$, of the kinetic energy of each agent, $E_\infty(t)$; in particular, $|{\bf v}_i(t)-\widebar{{\bf v}}_0| \rightarrow 0$. As a result, $\ds {\bf x}_i(t) = {\bf x}_i(0) + \int_0^t {\bf v}_i(s)\,\mathrm{d}{s}$ converges as $t\rightarrow\infty$, since the last integral converges absolutely in view of $|{\bf v}_i(t)| \le \sqrt{2E_\infty(t)}$. \end{proof} \section{Local vs. global weighted means}\label{sec:means} In this section we prove Lemma \ref{lem:mean} about discrete means, which in turn is used in proving the hypocoercivity of the discrete anticipation dynamics \eqref{eq:AT}. We also treat the corresponding continuum lemma of means, Lemma \ref{lem:Mean}, which is utilized in the hypocoercivity of the hydrodynamic anticipation model \eqref{eq:hydro}. We begin with the proof of the lemma of means, Lemma \ref{lem:mean}. \begin{proof}[Proof of Lemma \ref{lem:mean}] We first treat the scalar setup, in which case we may assume, without loss of generality, that the $z_i$'s are arranged in decreasing order, $z_1\ge z_2 \ge \cdots \ge z_N$, and have zero mean, $\sum_j z_j=0$. Let $i_0$ be the smallest index $i$ such that \begin{equation}\label{eq:igti0} \frac{1}{N}\sum_{j=1}^{i-1} z_j \ge \frac{\lambda}{2(\Lambda-\lambda)}z_i. 
\end{equation} Notice that if $i_+$ is the maximal index of the non-negative entries, $z_{i}\geq0$ for $i\leq i_+$, then \eqref{eq:igti0} clearly holds for $i > i_+$ (where LHS $\geq 0 >$ RHS), hence $i_0\leq i_+ +1$, so that $z_i\geq 0$ for $i<i_0$; moreover, since the LHS is increasing (for $i \leq i_+$) and the RHS is decreasing, see figure \ref{fig:graph} below, \eqref{eq:igti0} holds for all $i\geq i_0$ \begin{equation}\label{eq:ave2} \frac{1}{N}\sum_{j=1}^{i-1} z_j \ge \frac{\lambda}{2(\Lambda-\lambda)}z_i, \quad i\geq i_0. \end{equation} For $i<i_0$ we have $z_i\geq0$, hence \begin{equation}\label{eq:ave1} \begin{split} \frac{1}{N}\sum_j c_{ij}(z_i-z_j) &= \frac{1}{N}\sum_{j=1}^{i} c_{ij}(z_i-z_j) + \frac{1}{N}\sum_{j=i+1}^{N} c_{ij}(z_i-z_j) \\ &\ge \frac{\Lambda}{N}\sum_{j=1}^{i} (z_i-z_j) + \frac{\lambda}{N}\sum_{j=i+1}^{N} (z_i-z_j) \\ & = -\frac{\Lambda}{N}\sum_{j=1}^{i} z_j + \frac{\lambda}{N}\sum_{j=1}^{i} z_j +\frac{\Lambda}{N}\sum_{j=1}^{i} z_i + \frac{\lambda}{N}\sum_{j=i+1}^{N} z_i \\ &\ge -\frac{\Lambda-\lambda}{N}\sum_{j=1}^{i-1} z_j + \lambda z_i, \qquad i<i_0, \end{split} \end{equation} and therefore, since by the minimality of $i_0$ the reverse of \eqref{eq:igti0} holds for $i<i_0$, \[ \frac{1}{N}\sum_j c_{ij}(z_i-z_j) \geq -\frac{\Lambda-\lambda}{2(\Lambda-\lambda)}\lambda z_i +\lambda z_i = \frac{\lambda}{2}z_i \geq 0, \qquad i < i_0. \] It follows that \begin{equation}\label{eq:ave3} \frac{1}{N} \sum_{i=1}^{i_0-1} \Big|\frac{1}{N}\sum_j c_{ij}(z_i-z_j)\Big|^2 \ge \frac{\lambda^2}{4}\frac{1}{N}\sum_{i=1}^{i_0-1} z_i^2. \end{equation} On the other hand, for $i\geq i_0$, \eqref{eq:ave2} implies \[ z_i\leq z_{i_0} \le \frac{2(\Lambda-\lambda)}{\lambda}\frac{1}{N}\sum_{j=1}^{i_0-1} z_j, \quad i\geq i_0. 
\] It follows that the non-negative entries with $i\geq i_0$ satisfy $0\leq z_i\leq z_{i_0}$, hence \begin{equation}\label{eq:ave4} \frac{1}{N}\sum_{i=i_0}^{i_+}z_i^2 \le z^2_{i_0} \leq \frac{4(\Lambda-\lambda)^2}{\lambda^2}\frac{1}{N^2}\left(\sum_{j=1}^{i_0-1} z_j\right)^2 \leq \frac{4(\Lambda-\lambda)^2}{\lambda^2}\frac{1}{N}\sum_{j=1}^{i_0-1} z_j^2. \end{equation} Therefore, by \eqref{eq:ave3} and \eqref{eq:ave4}, \begin{equation}\label{eq:pos} \begin{split} \frac{1}{N}\sum_{z_i\ge 0}z_i^2 &= \frac{1}{N}\sum_{i=1}^{i_0-1}z_i^2 +\frac{1}{N}\sum_{i=i_0}^{i_+}z_i^2 \\ & \leq \left(1+ \frac{4(\Lambda-\lambda)^2}{\lambda^2}\right)\frac{1}{N}\sum_{j=1}^{i_0-1} z_j^2 \\ & \le \frac{4}{\lambda^2}\left(1+4\Big(\frac{\Lambda}{\lambda} -1\Big)^2\right)\frac{1}{N} \sum_i \Big|\frac{1}{N}\sum_j c_{ij}(z_i-z_j)\Big|^2. \end{split} \end{equation} \begin{figure} \includegraphics[height=4.5in]{antici_fig.pdf} \caption{Comparison of means}\label{fig:graph} \end{figure} Now apply \eqref{eq:pos} with $z_i$ replaced by $-z_i$ to find the same upper-bound on the negative entries \begin{equation}\label{eq:neg} \begin{split} \frac{1}{N}\sum_{z_i\le 0}z_i^2 \le \frac{4}{\lambda^2}\left(1+4\Big(\frac{\Lambda}{\lambda}-1\Big)^2\right)\frac{1}{N} \sum_i \Big|\frac{1}{N}\sum_j c_{ij}(z_i-z_j)\Big|^2. \end{split} \end{equation} The scalar result follows from \eqref{eq:pos},\eqref{eq:neg}. For the $d$-dimensional case, notice that \[ \begin{split} \sum_i |{\bf z}_i|^2 &= \sum_{k=1}^d \sum_i |z_i^k|^2, \\ \sum_i \Big|\frac{1}{N}\sum_j c_{ij}({\bf z}_i-{\bf z}_j) \Big|^2 &= \sum_{k=1}^d \sum_i\Big|\frac{1}{N}\sum_j c_{ij}(z^k_i-z^k_j) \Big|^2, \end{split} \] where the superscript $k$ denotes the $k$-th component. Therefore the conclusion follows by applying the scalar result to the components ${\bf z}_i=\{z_i^k\}_k$ for each fixed $k$, ending with the same constant $C(\Lambda,\lambda)$, independent of $d$. \end{proof} Next, we extend the result from the discrete framework to the continuum. 
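Before doing so, we note that the discrete lemma is easy to sanity-check numerically, with the explicit constant $C(\Lambda,\lambda)=\frac{4}{\lambda^2}\big(1+4(\frac{\Lambda}{\lambda}-1)^2\big)$ read off from \eqref{eq:pos},\eqref{eq:neg}; a minimal sketch on random scalar data (the parameters below are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, Lam = 50, 0.5, 2.0
# constant C(Lam, lam) read off from (eq:pos)-(eq:neg)
C = (4.0 / lam**2) * (1.0 + 4.0 * (Lam / lam - 1.0) ** 2)

worst = 0.0
for _ in range(200):
    z = rng.standard_normal(N)
    z -= z.mean()                                   # zero mean, as in the lemma
    c = rng.uniform(lam, Lam, size=(N, N))          # weights lam <= c_ij <= Lam
    # local weighted means: (1/N) * sum_j c_ij (z_i - z_j)
    m = (c * (z[:, None] - z[None, :])).mean(axis=1)
    worst = max(worst, np.mean(z**2) / (C * np.mean(m**2)))
# the lemma guarantees worst <= 1 for every sample
```

In practice the observed ratios are far below $1$, reflecting that the constant $C(\Lambda,\lambda)$ is conservative.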
\begin{lemma}[{\bf Local and global means are comparable}]\label{lem:Mean} Let $(\Omega,\mathcal{F},\mu)$ be a probability space, and let ${\bf X}:\Omega\rightarrow\mathbb{R}^d$ be a random variable with mean $\ds \widebar{{\bf X}}:= \int {\bf X}(\omega')\,\mathrm{d}{\mu(\omega')}$ and finite second moment, $\ds \int |{\bf X}(\omega')|^2\,\mathrm{d}{\mu(\omega')}<\infty$. Then there exists a constant $C(\Lambda,\lambda) \lesssim \Lambda^2 \lambda^{-4}$, such that for any $c=c(\omega,\omega'): \Omega\times\Omega \rightarrow {\mathbb R}$ satisfying \[ 0<\lambda\le c(\omega,\omega') \le \Lambda, \] there holds \[ \int |{\bf X}(\omega)-\widebar{{\bf X}}|^2 \,\mathrm{d}{\mu(\omega)} \le C(\lambda,\Lambda)\int \left| \int c(\omega,\omega') ({\bf X}(\omega)-{\bf X}(\omega'))\,\mathrm{d}{\mu(\omega')} \right|^2 \,\mathrm{d}{\mu(\omega)}, \quad C(\Lambda,\lambda) \lesssim \frac{\Lambda^2}{\lambda^{4}}. \] \end{lemma} Observe that the particular case $\ds \,\mathrm{d}{\mu}=\frac{1}{N}\sum_j\delta({\bf x}-{\bf z}_j)$ recovers the discrete case of Lemma \ref{lem:mean}. \begin{proof} We first prove the one-dimensional case, in which the map ${\bf X}$, denoted by $X$, may be assumed to have zero mean, $\widebar{X}=0$, without loss of generality. 
Take $\omega$ with $x:=X(\omega)\ge 0$; then \[ \begin{split} \int c(\omega,\omega')(x&-X(\omega'))\,\mathrm{d}{\mu(\omega')} \\ = & -\int_{\omega': X(\omega')>x} \hspace*{-0.4cm}c(\omega,\omega')(X(\omega')-x)\,\mathrm{d}{\mu(\omega')} + \int_{\omega': X(\omega')\le x} \hspace*{-0.4cm}c(\omega,\omega')(x-X(\omega'))\,\mathrm{d}{\mu(\omega')} \\ \ge & -\Lambda\int_{\omega': X(\omega')>x} (X(\omega')-x)\,\mathrm{d}{\mu(\omega')} + \lambda\int_{\omega': X(\omega')\le x} (x-X(\omega'))\,\mathrm{d}{\mu(\omega')} \\ = & -(\Lambda-\lambda)\int_{\omega': X(\omega')>x} (X(\omega')-x)\,\mathrm{d}{\mu(\omega')} + \lambda x\\ \ge & -(\Lambda-\lambda)\int_{\omega': X(\omega')>x} X(\omega')\,\mathrm{d}{\mu(\omega')} + \lambda x. \\ \end{split} \] Let \begin{equation} x_0 := \sup\left\{x: Y(x) \ge 0\right\}, \qquad Y(x) =\int_{\omega': X(\omega')>x}X(\omega')\,\mathrm{d}{\mu(\omega')} - \frac{\lambda}{2(\Lambda-\lambda)}x. \end{equation} Since $\lim_{x\rightarrow\infty}Y(x)=-\infty$ and $Y(0)\ge 0$, $x_0$ is finite and non-negative. It is clear that $Y(x)$ is decreasing and right-continuous. Therefore $Y(x)\ge 0$ for any $x< x_0$, and $Y(x)\le 0$ for any $x\ge x_0$. If $x\ge x_0$, then \begin{equation} \begin{split} \int c(\omega,\omega')(x-X(\omega'))\,\mathrm{d}{\mu(\omega')} \ge -(\Lambda-\lambda)\frac{\lambda}{2(\Lambda-\lambda)}x + \lambda x = \frac{\lambda}{2}x. \end{split} \end{equation} Squaring and integrating in $\omega$ with $x=X(\omega)\ge x_0 \ge 0$ gives \begin{equation}\label{meanc1} \int_{\omega: X(\omega)\ge x_0} \left(\int c(\omega,\omega')(X(\omega)-X(\omega'))\,\mathrm{d}{\mu(\omega')}\right)^2\,\mathrm{d}{\mu(\omega)} \ge \frac{\lambda^2}{4}\int_{\omega: X(\omega)\ge x_0} \hspace*{-0.9cm}X^2(\omega) \,\mathrm{d}{\mu(\omega)}. \end{equation} We claim that the above integral over $\{\omega: X(\omega)\ge x_0\}$ is enough to reach the conclusion. 
Notice that for any $\epsilon>0$, one has $Y(x_0-\epsilon)\ge 0$, i.e., \begin{equation} x_0-\epsilon \le \frac{2(\Lambda-\lambda)}{\lambda}\int_{\omega': X(\omega')> x_0-\epsilon} X(\omega')\,\mathrm{d}{\mu(\omega')}, \end{equation} and therefore \[ \begin{split} (x_0-\epsilon)^2 & \le \frac{4(\Lambda-\lambda)^2}{\lambda^2}\left(\int_{\omega': X(\omega')> x_0-\epsilon} X(\omega')\,\mathrm{d}{\mu(\omega')}\right)^2 \\ & \le \frac{4(\Lambda-\lambda)^2}{\lambda^2}\left(\int_{\omega': X(\omega')> x_0-\epsilon} X(\omega')^2\,\mathrm{d}{\mu(\omega')}\right)\left(\int_{\omega': X(\omega')> x_0-\epsilon} \,\mathrm{d}{\mu(\omega')}\right) \\ &\le \frac{4(\Lambda-\lambda)^2}{\lambda^2}\int_{\omega': X(\omega')> x_0-\epsilon} X(\omega')^2\,\mathrm{d}{\mu(\omega')}. \end{split} \] Taking $\epsilon\rightarrow 0$, and noticing that the integration domain $\{\omega': X(\omega')> x_0-\epsilon\}$ on the right converges to $\{\omega': X(\omega')\ge x_0\}$, we get \begin{equation}\label{meanc2} x_0^2 \le \frac{4(\Lambda-\lambda)^2}{\lambda^2}\int_{\omega': X(\omega')\ge x_0} X(\omega')^2\,\mathrm{d}{\mu(\omega')}. \end{equation} Thus, using \eqref{meanc2} and \eqref{meanc1} we find \[ \begin{split} \int_{\omega': X(\omega')\ge 0} \hspace*{-0.6cm}X(\omega')^2&\,\mathrm{d}{\mu(\omega')} = \int_{\omega': 0\le X(\omega')< x_0} X(\omega')^2\,\mathrm{d}{\mu(\omega')} + \int_{\omega': X(\omega')\ge x_0} X(\omega')^2\,\mathrm{d}{\mu(\omega')} \\ \le & \left(\frac{4(\Lambda-\lambda)^2}{\lambda^2}+1\right)\int_{\omega': X(\omega')\ge x_0} X(\omega')^2\,\mathrm{d}{\mu(\omega')} \\ \le & \left(\frac{4(\Lambda-\lambda)^2}{\lambda^2}+1\right)\frac{4}{\lambda^2}\int_{\omega: X(\omega)\ge x_0} \left(\int c(\omega,\omega')(X(\omega)-X(\omega'))\,\mathrm{d}{\mu(\omega')}\right)^2\,\mathrm{d}{\mu(\omega)}. 
\end{split} \] Apply the last bound with $X(\cdot)$ replaced by $-X(\cdot)$ to find that $\ds \int_{\omega': X(\omega')\le 0} \hspace*{-0.6cm}X(\omega')^2\,\mathrm{d}{\mu(\omega')} $ satisfies the same bound on the right, which completes the scalar part of the proof. For the $d$-dimensional case with ${\bf X} = (X_1,\dots,X_d)$, notice that \[ \int |{\bf X}(\omega')|^2 \,\mathrm{d}{\mu(\omega')} = \sum_{k=1}^d \int |X_k(\omega')|^2 \,\mathrm{d}{\mu(\omega')}, \] and similarly, \[ \int \left| \int c(\omega,\omega') ({\bf X}(\omega)-{\bf X}(\omega'))\,\mathrm{d}{\mu(\omega')} \right|^2 \,\mathrm{d}{\mu(\omega)} = \sum_{k=1}^d \int \left| \int c(\omega,\omega') (X_k(\omega)-X_k(\omega'))\,\mathrm{d}{\mu(\omega')} \right|^2 \,\mathrm{d}{\mu(\omega)}. \] Applying the 1D result to the random variable $X_k$ gives \stepcounter{equation} \begin{equation}\tag*{(\theequation)$_{k}$}\label{eq:mk} \int |X_k(\omega')|^2 \,\mathrm{d}{\mu(\omega')} \le C(\lambda,\Lambda)\int \left| \int c(\omega,\omega') (X_k(\omega)-X_k(\omega'))\,\mathrm{d}{\mu(\omega')} \right|^2 \,\mathrm{d}{\mu(\omega)}. \end{equation} Summing \ref{eq:mk} over $k$ recovers the desired result with the constant $C(\Lambda,\lambda)$ independent of $d$. \end{proof} \section{Anticipation dynamics with attractive potentials}\label{sec:attraction} In this section we prove the flocking behavior of \eqref{eq:AT} asserted in Theorem \ref{thm:thm2}. Here, we treat the larger class of attractive potentials, thus extending the case of convex potentials of Theorem \ref{thm:thm1}. The starting point is the anticipated energy balance \eqref{energy} \[ \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t) = - \frac{\tau}{N}\sum_{i} | \dot{{\bf v}}_i|^2. 
\] \begin{remark} We note in passing that the first-order system \[ \dot{{\bf x}}_i = -\frac{1}{N}\sum_{j=1}^N \nabla U(|{\bf x}_i-{\bf x}_j|), \] satisfies an energy estimate, reminiscent of the energy-enstrophy balance in the anticipation dynamics \eqref{eq:AT}, \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \frac{1}{2N^2}\sum_{i,j}U(|{\bf x}_i-{\bf x}_j|) = -\frac{1}{N}\sum_i|\dot{{\bf x}}_i|^2. \end{split} \] \end{remark} \begin{proof}[Proof of Theorem \ref{thm:thm2}] We aim to conduct a hypocoercivity argument to complement the anticipated energy estimate \eqref{energy}. To this end, we use the `anticipated' cross term, \begin{equation} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}(-\frac{1}{N}\sum_i {\bf x}_i^\tau\cdot{\bf v}_i) = & \frac{1}{N}\sum_i \left( -({\bf v}_i + \tau\dot{{\bf v}}_i)\cdot{\bf v}_i - {\bf x}_i^\tau\cdot\dot{{\bf v}}_i \right) \\ \le & \frac{1}{N}\sum_i \left( -|{\bf v}_i|^2 + \tau(\frac{\tau}{2}|\dot{{\bf v}}_i|^2 + \frac{1}{2\tau}|{\bf v}_i|^2) + \frac{1}{2}(|{\bf x}_i^\tau|^2 + |\dot{{\bf v}}_i|^2) \right) \\ \le & -\frac{1}{2} \frac{1}{N}\sum_i |{\bf v}_i|^2 + \frac{1}{2}\frac{1}{N}\sum_i|{\bf x}_i^\tau|^2 + \frac{\tau^2+1}{2}\frac{1}{N}\sum_i|\dot{{\bf v}}_i|^2. \end{split} \end{equation} Consider the modified anticipated energy $\widehat{\mathcal{E}}(t):= \mathcal{E}(t)-\epsilon(t)\frac{1}{N}\sum_i {\bf x}_i^\tau\cdot{\bf v}_i$, where $\epsilon(t)>0$ is small, decreasing, and yet to be chosen. 
We first need to guarantee that this modified energy is positive definite and, in fact, comparable to $\mathcal{E}(t)$, \begin{equation}\label{pd1} \big|\epsilon(t)\frac{1}{N}\sum_i {\bf x}_i^\tau\cdot{\bf v}_i\big| \le \frac{\mathcal{E}(t)}{2}. \end{equation} Indeed, notice that \[ \begin{split} |\epsilon(t)\frac{1}{N}\sum_i {\bf x}_i^\tau\cdot{\bf v}_i| \le & \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \epsilon^2(t)\frac{1}{N}\sum_i |{\bf x}_i^\tau|^2 \\ \le & \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \epsilon^2(t) (2\cXtau)^{\beta}\frac{1}{2N^2}\sum_{i,j} \left(\frac{U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)}{a}+1\right) \\ \le & \frac{1}{4N}\sum_i |{\bf v}_i|^2 + \epsilon^2(t) (2C_\infty)^{\beta} \braket{t}^{\frac{\beta}{2-2\beta}}\frac{1}{2N^2}\sum_{i,j} \left(\frac{U(|{\bf x}_i^\tau-{\bf x}_j^\tau|)}{a}+1\right). \end{split} \] The second inequality is obtained similarly to \eqref{xitau2}, and the third inequality uses Lemma \ref{lem:bound}. Therefore it suffices to choose \begin{equation}\label{epsilon1} \epsilon(t) = \epsilon_0(10+t)^{-\alpha},\quad \alpha \ge \frac{\beta}{4-4\beta}, \end{equation} with small enough $\epsilon_0$ to guarantee \eqref{pd1}. 
We now turn to verify the (hypo-)coercivity of $\widehat{\mathcal{E}}(t)$, \begin{equation}\label{hypo1} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \Big(\mathcal{E}(t) &-\epsilon(t)\frac{1}{N}\sum_i {\bf x}_i^\tau\cdot{\bf v}_i\Big) \\ \le & - \frac{\tau}{N}\sum_{i} | \dot{{\bf v}}_i|^2 -\frac{\epsilon(t)}{2} \frac{1}{N}\sum_i |{\bf v}_i|^2 \\ & \ \ + \epsilon(t)\left(\frac{1}{2}\frac{1}{N}\sum_i|{\bf x}_i^\tau|^2 + \frac{\tau^2+1}{2}\frac{1}{N}\sum_i|\dot{{\bf v}}_i|^2\right) + |\dot{\epsilon}(t)|\frac{1}{N}\sum_i |{\bf x}_i^\tau\cdot{\bf v}_i| \\ \le & - \left(\tau- \epsilon(t)\frac{\tau^2+1}{2}\right) \frac{1}{N}\sum_{i} | \dot{{\bf v}}_i|^2 \\ & \ \ -\frac{\epsilon(t)- |\dot{\epsilon}(t)|}{2} \frac{1}{N}\sum_i |{\bf v}_i|^2 + \frac{\epsilon(t)+ |\dot{\epsilon}(t)|}{2}\frac{1}{N}\sum_i|{\bf x}_i^\tau|^2. \end{split} \end{equation} The first pre-factor on the right of \eqref{hypo1} is $\leq -\tau/2$ for small enough $\epsilon_0$. The second pre-factor is negative as well, since \[ |\dot{\epsilon}(t)| = \alpha\epsilon_0 (10+t)^{-\alpha-1} \le \frac{\alpha}{10}\epsilon(t). \] It remains to control the last term on the right of \eqref{hypo1}. To this end we recall that $U$ is assumed attractive, $U'(r)/r \geq a\langle r\rangle ^{-\beta}$, hence, by Lemma \ref{lem:bound}, \[ A \geq \frac{U'(r^\tau_{ij})}{r^\tau_{ij}} \ge a\langle r^\tau_{ij}\rangle^{-\beta} \ge c\braket{t}^{-\frac{\beta}{2-2\beta}}, \qquad r^\tau_{ij}=|{\bf x}_i^\tau-{\bf x}_j^\tau|.
\] We now invoke Lemma \ref{lem:mean}: it implies \begin{equation} \begin{split} \frac{1}{N}\sum_{i} | \dot{{\bf v}}_i|^2 = \frac{1}{N} \sum_i \left|\frac{1}{N}\sum_j \frac{U'(r^\tau_{ij})}{r^\tau_{ij}} ({\bf x}_i^\tau-{\bf x}_j^\tau)\right|^2 \ge c \braket{t}^{-\lamta}\frac{1}{N}\sum_i|{\bf x}_i^\tau|^2 ,\quad \lamta = \frac{2\beta}{1-\beta}. \end{split} \end{equation} Therefore, the last term on the right of \eqref{hypo1} is bounded, up to a constant, by $\epsilon(t)\braket{t}^\lamta \frac{1}{N}\sum_i |\dot{{\bf v}}_i|^2$, and choosing $\epsilon(t)$ as in (\ref{epsilon1}) with $\alpha=\lamta$ yields \begin{equation} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \widehat{\mathcal{E}}(t) \le - \frac{\tau}{4} \frac{1}{N}\sum_{i} | \dot{{\bf v}}_i|^2 -\frac{\epsilon_0(10+t)^{-\lamta}}{4} \frac{1}{N}\sum_i |{\bf v}_i|^2. \end{equation} As before, since $U''$ is bounded, $U$ has at most quadratic growth, \[ \begin{split} \frac{1}{2N^2}\sum_{i,j} U(|{\bf x}_i^\tau-{\bf x}_j^\tau|) \le & A\frac{1}{2N^2}\sum_{i,j} |{\bf x}_i^\tau-{\bf x}_j^\tau|^2 \le C\braket{t}^{\frac{2\beta}{1-\beta}}\frac{1}{N}\sum_{i} | \dot{{\bf v}}_i|^2 = C\braket{t}^{\lamta}\frac{1}{N}\sum_{i} | \dot{{\bf v}}_i|^2, \end{split} \] and we conclude the sub-exponential decay \begin{equation} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{\mathcal{E}}(t) \le -c\braket{t}^{-\lamta} \widehat{\mathcal{E}}(t)\ \leadsto \ \widehat{\mathcal{E}}(t) \le C e^{-c'\,t^{1-\lamta}}, \end{equation} where the implication follows by integration: $\widehat{\mathcal{E}}(t) \le \widehat{\mathcal{E}}(0)\exp\big(-c\int_0^t\braket{s}^{-\lamta}\,\mathrm{d}{s}\big)$ and $\int_0^t\braket{s}^{-\lamta}\,\mathrm{d}{s}\gtrsim t^{1-\lamta}$, provided $\lamta<1$. This implies the same decay rate of the comparable $\mathcal{E}(t)$. \end{proof} \section{Anticipation dynamics with repulsive-attractive potential}\label{sec:ra} In this section we prove Theorem \ref{thm:thm3}. The assumption $\sum_i{\bf x}_i=\sum_i{\bf v}_i=0$ amounts to saying that ${\bf x}:={\bf x}_1=-{\bf x}_2$, ${\bf v}:={\bf v}_1=-{\bf v}_2$.
Replacing $U(|{\bf x}|)$ by $U(2|{\bf x}|)$ and $r_0$ by $r_0/2$, \eqref{eq:AT} becomes \begin{equation}\label{eq2} \left\{\begin{split} & \dot{{\bf x}}={\bf v}\\ & \dot{{\bf v}} = - \nabla U(|{\bf x}^\tau|) \end{split}\right. \end{equation} where $U(r)$ has a local minimum at $r=r_0>0$ with $U''(r_0) = a >0$; we normalize $U(r_0)=0$. We use polar coordinates for the \emph{anticipated} position ${\bf x}^\tau$, \begin{equation} \left\{\begin{split} & x_1^\tau = r\cos\theta \\ & x_2^\tau = r\sin\theta \\ \end{split}\right. \end{equation} and \begin{equation} \left\{\begin{split} & v_r = v_1\cos\theta+v_2\sin\theta \\ & v_\theta = -v_1\sin\theta+v_2\cos\theta \\ \end{split}\right. \end{equation} Then (\ref{eq2}) becomes \begin{equation}\label{eqkr} \left\{\begin{split} & \dot{r} = v_r - \tau U'(r) \\ & \dot{\theta}=\frac{v_\theta}{r} \\ & \dot{v}_r= -U'(r) + \frac{v_\theta^2}{r}\\ & \dot{v}_\theta= \frac{-v_rv_\theta}{r} \\ \end{split}\right. \end{equation} We will focus on perturbative solutions near $r=r_0, v_r=v_\theta=0$. Write $r:=r_0+\delta_r$; near $r_0$ we have the approximations \begin{equation} U(r) \approx \frac{a}{2}\delta_r^2,\quad U'(r) \approx a \delta_r ,\quad U''(r) \approx a. \end{equation} Observe that our assumed initial configuration in \eqref{eq:small_enough} implies, and in fact is equivalent to, the assumption of smallness on the \emph{anticipated} energy, $\mathcal{E}(0) \le 2(1+\tau) \epsilon$.
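The polar reduction \eqref{eqkr} can be double-checked numerically. In the sketch below (our own illustration, not part of the argument; the potential $U(r)=\frac{a}{2}(r-r_0)^2$ and all parameters are assumed), the radial equation $\dot{r}=v_r-\tau U'(r)$ for $r=|{\bf x}^\tau|$ is verified by a centered difference along a trajectory of \eqref{eq2}:

```python
import cmath

# Illustrative check (not part of the paper's argument): for the two-body
# system (eq2) with U(r) = (a/2)(r - r0)^2, the polar variables of the
# anticipated position x^tau = x + tau v,
#   r = |x^tau|,  v_r = Re(v e^{-i theta}),  v_theta = Im(v e^{-i theta}),
# should obey the radial equation of (eqkr): dr/dt = v_r - tau U'(r).
# All parameters below are assumed for illustration.

a, r0, tau = 1.0, 1.0, 0.4

def Uprime(r):
    return a * (r - r0)

def rhs(x, v):
    # xdot = v,  vdot = -U'(|x^tau|) x^tau / |x^tau|
    xt = x + tau * v
    r = abs(xt)
    return v, -Uprime(r) * xt / r

def rk4(x, v, h):
    k1x, k1v = rhs(x, v)
    k2x, k2v = rhs(x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = rhs(x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = rhs(x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def polar(x, v):
    xt = x + tau * v
    r, th = abs(xt), cmath.phase(xt)
    w = v * cmath.exp(-1j * th)      # rotate v into the (e_r, e_theta) frame
    return r, w.real, w.imag         # r, v_r, v_theta

x0, v0 = 1.1 + 0.0j, 0.05 + 0.2j     # small perturbation of the circle r = r0
h = 1e-4
xp, vp = rk4(x0, v0, h)
xm, vm = rk4(x0, v0, -h)
rdot_fd = (polar(xp, vp)[0] - polar(xm, vm)[0]) / (2 * h)   # centered dr/dt
r, vr, vth = polar(x0, v0)
assert abs(rdot_fd - (vr - tau * Uprime(r))) < 1e-6
```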
Theorem \ref{thm:thm3} is a consequence of the following proposition on the polar system \eqref{eqkr}. \begin{proposition}[{\bf polar coordinates}] There exists a constant $\epsilon>0$, such that if the initial data is small enough, \begin{equation} \mathcal{E}_0 := \left(U(r) + \frac{1}{2}v_r^2 + \frac{1}{2}v_\theta^2\right)_{|{t=0}} \le \epsilon, \end{equation} then the solution to \eqref{eqkr} decays to zero at the following algebraic rates: \begin{equation} |\delta_r| \le C\braket{t}^{-1}\ln^{1/2}\braket{t},\quad |v_r|\le C\braket{t}^{-1}\ln^{1/2}\braket{t},\quad |v_\theta| \le C\braket{t}^{-1/2}. \end{equation} \end{proposition} \begin{proof} Fix $0<\zeta\le \min\{\frac{r_0}{2},1\}$ as a small number such that \begin{equation} \frac{a}{2}\le U''(r) \le 2a,\quad \forall \delta_r^2 \le \zeta, \end{equation} and as a result, \begin{equation}\label{zeta} \frac{a}{2}|\delta_r| \le |U'(r)| \le 2a|\delta_r|,\quad \frac{a}{4}\delta_r^2 \le U(r) \le a\delta_r^2,\quad \forall \delta_r^2 \le \zeta. \end{equation} We start from the energy estimate for the anticipated energy $\displaystyle \mathcal{E}(t):= U(r) + \frac{1}{2}v_r^2 + \frac{1}{2}v_\theta^2$, \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t) & = U'(r) \cdot (v_r - \tau U'(r)) + v_r\cdot \left(-U'(r) + \frac{v_\theta^2}{r}\right) + v_\theta\cdot \frac{-v_rv_\theta}{r}= -\tau U'(r)^2. \end{split} \] Therefore, for any positive $\ds \epsilon\le \frac{a}{4}\zeta$ to be chosen later, if $\mathcal{E}_0 \le \epsilon$, then \begin{equation}\label{small} \delta_r^2 \le \frac{4}{a}U(r) \le \frac{4}{a}\epsilon \le \zeta, \qquad v_r^2 \le 2\epsilon, \end{equation} hold for all time, which in turn implies that \eqref{zeta} holds.
Next we consider the cross terms \begin{equation}\label{eq:rtcross1} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} (-v_rv_\theta^2) = - \left( -U'(r) + \frac{v_\theta^2}{r}\right)v_\theta^2 - 2v_rv_\theta\frac{-v_rv_\theta}{r} = - \frac{v_\theta^4}{r} + U'(r) v_\theta^2 + 2\frac{v_r^2v_\theta^2}{r}, \end{split} \end{equation} and \begin{equation}\label{eq:rtcross2} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} (-U'(r)v_r) = & -U''(r)v_r\cdot(v_r - \tau U'(r)) - U'(r)\left(-U'(r) + \frac{v_\theta^2}{r}\right) \\ = & -U''(r)v_r^2 + \tau U''(r)v_r U'(r) + U'(r)^2 - U'(r)\frac{v_\theta^2}{r}. \end{split} \end{equation} We now introduce the modified energy, \[ \widehat{\mathcal{E}}(t):= U(r) + \frac{1}{2}v_r^2 + \frac{1}{2}v_\theta^2 -c v_rv_\theta^2 - c U'(r)v_r, \] depending on a small $c>0$ which is yet to be determined. A straightforward calculation, based on \eqref{zeta}, shows its decay rate does not exceed \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{\mathcal{E}}(t) &= -\tau U'(r)^2 - \frac{c}{r}v_\theta^4 -cU''(r)v_r^2 \\ & \ \ \ + c\left(U'(r) v_\theta^2 + 2\frac{v_r^2v_\theta^2}{r}\right) + c\left(\tau U''(r)v_r U'(r) + U'(r)^2 - U'(r)\frac{v_\theta^2}{r}\right) \\ & \le - \tau \frac{a^2}{4}\delta_r^2 - \frac{c}{2r_0}v_\theta^4 -c\frac{a}{2}v_r^2 \\ & \ \ \ + c\left(2a|\delta_r| v_\theta^2 + 4\frac{v_r^2v_\theta^2}{r_0}\right) + c\left(4\tau a^2 |v_r \delta_r| + 4a^2 \delta_r^2 + 4a|\delta_r| \frac{v_\theta^2}{r_0}\right), \end{split} \] and by Cauchy-Schwarz \begin{equation}\label{eq:hatE} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{\mathcal{E}}(t) & \le - \tau \frac{a^2}{4}\delta_r^2 - \frac{c}{2r_0}v_\theta^4 -c\frac{a}{2}v_r^2 \\ & \ \ \ + c\left(\frac{1}{\kapta} a\delta_r^2 + \kapta a v_\theta^4 + \kapta \frac{2}{r_0}v_\theta^4 + \frac{1}{\kapta}\frac{2}{r_0}v_r^4 \right) \\ & \ \ \ + c\left(2\kapta \tau a^2 v_r^2 + \frac{1}{\kapta}2\tau a^2 \delta_r^2 + 4a^2 \delta_r^2 + \frac{1}{\kapta}\frac{2a}{r_0}\delta_r^2+ \kapta\frac{2a}{r_0}v_\theta^4 \right) \\ & = - \left(\tau \frac{a^2}{4} - \frac{c}{\kapta}\big(a+2\tau a^2 + 4\kapta a^2 +\frac{2a}{r_0}\big) \right)\delta_r^2 \\ & \ \ \ - c\left( \frac{1}{2r_0} -\kapta(a+\frac{2}{r_0} + \frac{2a}{r_0})\right)v_\theta^4 - c\left(\frac{a}{2} - \frac{1}{\kapta}\frac{2}{r_0}v_r^2 - 2\kapta\tau a^2\right)v_r^2, \end{split} \end{equation} with $\kapta\in (0,1)$ which is yet to be determined. We want to guarantee that the three pre-factors on the right are positive. To this end, we first fix the ratio \stepcounter{equation} \begin{equation}\tag*{(\theequation)$_{1}$}\label{eq:pre1} \frac{c}{\kapta} = \frac{\tau \frac{a^2}{8}}{a+2\tau a^2 + 4 a^2 +\frac{2a}{r_0}} \end{equation} so that the first pre-factor is lower-bounded by $\tau \frac{a^2}{8}$. Then we choose \begin{equation}\tag*{(\theequation)$_{2}$}\label{eq:pre2} \kapta \le \min\left\{ 1, \frac{\frac{1}{4r_0}}{a+\frac{2}{r_0} + \frac{2a}{r_0}}, \frac{\frac{a}{4}}{2\tau a^2} \right\} \end{equation} so that the second pre-factor, the coefficient of $v_\theta^4$, becomes $\ds \ge\frac{c}{4r_0}$. Finally, the third pre-factor is also positive because (i) a small enough $\kapta$ was chosen in \ref{eq:pre2}, and (ii) $v_r$ remains small enough to compensate for the smallness of $\kapta$, so that the negative contribution of $\ds - \frac{1}{\kapta}\frac{2}{r_0}v_r^2$ can be absorbed into the rest: indeed, if \begin{equation}\tag*{(\theequation)$_{3}$}\label{vr_small} v_r^2 \le \frac{\frac{a}{8}}{\frac{1}{\kapta}\frac{2}{r_0}} = \frac{ar_0}{16}\kapta \end{equation} then \[ c\left(\frac{a}{2} - \frac{1}{\kapta}\frac{2}{r_0}v_r^2 - 2\kapta\tau a^2\right) \geq c\left(\frac{a}{2} - \frac{a}{8} - \frac{a}{4}\right) = \frac{ca}{8} > 0.
\] Therefore, \eqref{eq:hatE} implies the decay rate \begin{equation}\label{eq:Ehatdecay} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{\mathcal{E}}(t) \le -\lamta_1(\delta_r^2 + v_\theta^4 + v_r^2), \qquad \lamta_1 = \min\left\{\tau \frac{a^2}{8}, \frac{c}{4r_0}, \frac{ca}{8}\right\}, \end{equation} provided (\ref{zeta}) and \ref{eq:pre1}--\ref{vr_small} are satisfied. Moreover, we claim that $\widehat{\mathcal{E}}$ is comparable to the original anticipated energy $\mathcal{E}$. Indeed, if in addition \stepcounter{equation} \begin{equation}\tag*{(\theequation)$_{1}$}\label{eq:pre4} c\sqrt{\frac{ar_0}{16}\kapta} \le \frac{1}{4} \end{equation} holds, then in view of \ref{vr_small}, $\ds c|v_rv_\theta^2| \le \frac{1}{4}v_\theta^2$, and if \begin{equation}\tag*{(\theequation)$_{2}$}\label{eq:pre5} c\le \min\Big\{\frac{1}{8}, \frac{1}{4a}\Big\}, \end{equation} holds, then in view of \eqref{zeta}, \[ c|U'(r)v_r| \le ca(\delta_r^2 + v_r^2) \le ca\Big(\frac{4}{a}U(r) + v_r^2\Big) \le \frac{1}{2}\Big(U(r)+\frac{1}{2}v_r^2\Big). \] It follows that \begin{equation}\label{eq:comp} \frac{1}{2}\mathcal{E}(t) \le \widehat{\mathcal{E}}(t)\le 2\mathcal{E}(t), \end{equation} provided \ref{eq:pre4}--\ref{eq:pre5} are satisfied. These last two conditions are clearly met for small enough $\kapta$: recalling that the ratio $c/\kapta$ was fixed in \ref{eq:pre1}, \stepcounter{equation} \begin{equation}\tag*{(\theequation)$_{1}$}\label{eq:pre6} \kapta \le \frac{a+2\tau a^2 + 4 a^2 +\frac{2a}{r_0}}{\tau \frac{a^2}{8}} \min\Big\{ \frac{1}{\sqrt{ar_0}}, \frac{1}{8}, \frac{1}{4a}\Big\} \end{equation} suffices to guarantee \ref{eq:pre4}--\ref{eq:pre5}. Thus, we finally choose small enough $\kapta$ to satisfy both \ref{eq:pre2} and \ref{eq:pre6}, and small enough $\ds \epsilon < \min\big\{\frac{a}{4}\zeta,\frac{ar_0}{32}\kapta\big\} $ so that \eqref{small} and \ref{vr_small} hold. We have thus established \eqref{eq:Ehatdecay} and \eqref{eq:comp}.
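The decay mechanism just established --- monotone dissipation of $\mathcal{E}$, with the angular component $v_\theta$ decaying slowest --- is visible in a crude simulation of \eqref{eq2} (our own illustration, not part of the proof; the potential $U(r)=\frac{a}{2}(r-r_0)^2$ and all parameters are assumed):

```python
# Crude numerical illustration (all parameters ours) of the decay mechanism:
# for (eq2) with U(r) = (a/2)(r - r0)^2, the anticipated energy
#   E = U(|x^tau|) + |v|^2/2  (= U(r) + v_r^2/2 + v_theta^2/2)
# dissipates monotonically, dE/dt = -tau U'(r)^2 <= 0, and decays in time.

a, r0, tau = 1.0, 1.0, 0.4

def rhs(x, v):
    xt = x + tau * v
    r = abs(xt)
    return v, -a * (r - r0) * xt / r

def energy(x, v):
    r = abs(x + tau * v)
    return 0.5 * a * (r - r0) ** 2 + 0.5 * abs(v) ** 2

x, v = 1.1 + 0.0j, 0.05 + 0.2j
h, T = 0.01, 100.0
E0 = E = energy(x, v)
monotone = True
for _ in range(int(T / h)):
    # one RK4 step
    k1x, k1v = rhs(x, v)
    k2x, k2v = rhs(x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = rhs(x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = rhs(x + h * k3x, v + h * k3v)
    x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    Enew = energy(x, v)
    monotone = monotone and (Enew <= E + 1e-9)
    E = Enew

assert monotone       # dissipation: E never increases (up to solver error)
assert E < 0.5 * E0   # substantial decay by t = 100
```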
Finally, notice that for small enough $\delta_r,v_r$ (say, both $\le 1$ in absolute value) we have \[ \delta_r^2 + v_\theta^4 + v_r^2 \ge \delta_r^4 + v_\theta^4 + v_r^4 \ge \frac{1}{3}(\delta_r^2 + v_\theta^2 + v_r^2)^2 \ge \frac{1}{3}\min\Big\{\frac{1}{a^2},1\Big\}\mathcal{E}^2(t). \] We conclude, in view of \eqref{eq:Ehatdecay} and \eqref{eq:comp}, \[ \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widehat{\mathcal{E}}(t) \le -\lamta \widehat{\mathcal{E}}^2(t) \ \leadsto \ \widehat{\mathcal{E}}(t) \le \frac{1}{\lamta t + 1/\widehat{\mathcal{E}}(0)},\quad \lamta = \frac{\lamta_1}{12}\min\left\{\frac{1}{a^2},1\right\}. \] It follows that $|v_\theta|\le C\braket{t}^{-1/2}$. To get a better decay rate for $\delta_r$ and $v_r$, we use yet another modified energy functional, \[ \widecheck{\mathcal{E}}(t):= U(r) + \frac{1}{2}v_r^2 - c_1 U'(r)v_r, \] for which we find \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widecheck{\mathcal{E}}(t) = & -\tau U'(r)^2 - c_1U''(r)v_r^2 + \frac{v_r v_\theta^2}{r} + c_1\left(\tau U''(r)v_r U'(r) + U'(r)^2 - U'(r)\frac{v_\theta^2}{r}\right) \\ \le & -\tau \frac{a^2}{4}\delta_r^2 -c_1\frac{a}{2}v_r^2 + \frac{2v_r v_\theta^2}{r_0} + c_1\left(4\tau a^2 |v_r \delta_r| + 4a^2 \delta_r^2 + 4a|\delta_r| \frac{v_\theta^2}{r_0}\right) \\ \le & -\left(\tau \frac{a^2}{4} - \frac{c_1}{\kapta_1}\Big(2\tau a^2 + 4\kapta_1 a^2 + \frac{2a}{r_0}\Big) \right) \delta_r^2 -c_1\left(\frac{a}{2} - \kapta_1 \frac{1}{r_0} - \kapta_1\cdot 2\tau a^2\right)v_r^2 \\ & + \left( \frac{1}{c_1\kapta_1r_0} + c_1\kapta_1\frac{2a}{r_0}\right) v_\theta^4. \end{split} \] By similar choices of $c_1$ and $\kapta_1$, one can guarantee that $\widecheck{\mathcal{E}}(t)$ is equivalent to $\delta_r^2+v_r^2$, and that the coefficients of $\delta_r^2$ and $v_r^2$ are positive.
Therefore \begin{equation} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\widecheck{\mathcal{E}}(t) \le -\lamta_2 \widecheck{\mathcal{E}}(t) + Cv_\theta^4 \le -\lamta_2 \widecheck{\mathcal{E}}(t) + C\braket{t}^{-2}. \end{equation} This gives \begin{equation} \widecheck{\mathcal{E}}(t) \le e^{-\lamta_2 t}\widecheck{\mathcal{E}}(0) + C\int_0^t e^{-\lamta_2(t-s)}(1+s)^{-2} \,\mathrm{d}{s}. \end{equation} We estimate the last integral for large enough $t$, \begin{equation} \begin{split} \int_0^t e^{-\lamta_2(t-s)}(1+s)^{-2} \,\mathrm{d}{s} & = \left(\int_0^{t-\frac{2}{\lamta_2}\ln \braket{t}} + \int_{t-\frac{2}{\lamta_2}\ln \braket{t}}^t\right)e^{-\lamta_2(t-s)}(1+s)^{-2} \,\mathrm{d}{s}\\ &\le \braket{t}^{-2} \int_0^t (1+s)^{-2} \,\mathrm{d}{s} + \Big(1+t-\frac{2}{\lamta_2}\ln \braket{t}\Big)^{-2}\frac{2}{\lamta_2}\ln \braket{t}\\ & \le \braket{t}^{-2} + \frac{C}{\lamta_2} \braket{t}^{-2}\ln \braket{t}. \end{split} \end{equation} This shows that $\widecheck{\mathcal{E}}(t) \le C\braket{t}^{-2}\ln\braket{t}$, and therefore $ |v_r|+|\delta_r|\le C\braket{t}^{-1}\ln^{1/2}\braket{t}$. \end{proof} Finally, we conclude by noting that the last bound on $\delta_r$ tells us that \[ \big||{\bf x}^\tau_1(t)-{\bf x}^\tau_2(t)|-r_0\big| \le C\braket{t}^{-1}\ln^{1/2}\braket{t},\quad |{\bf v}_1(t)-{\bf v}_2(t)| \le C\braket{t}^{-1/2}. \] Observe that this bound on relative \emph{anticipated} positions is in fact equivalent to the claimed statement on the current positions, $\big||{\bf x}_1(t)-{\bf x}_2(t)|-r_0\big|\lesssim \braket{t}^{-1}\ln^{1/2}\braket{t}$, which concludes the proof of Theorem \ref{thm:thm3}. \begin{remark} Numerical examples \cite[sec. 1]{GTLW2017} show that the rate $v_\theta={\mathcal O}(t^{-\nicefrac{1}{2}})$ is optimal. Therefore, \[ \theta(t) = \theta_0 + \int_0^t \frac{1}{r(s)}v_\theta(s)\,\mathrm{d}{s} = {\mathcal O}(\sqrt{t}), \] which means that $\theta$ need not converge to a limit, even for small initial data.
Thus, although we trace the dynamics of $\delta_r,v_r,v_\theta$ using essentially perturbative arguments, the dynamics of \eqref{eq2} itself is not perturbative. \end{remark} \section{Anticipation dynamics: hydrodynamic formulation}\label{sec:hydro} The large crowd dynamics associated with \eqref{eq:AT} is captured by the macroscopic density $\rho(t,{\bf x}): \mathbb{R}_+\times \mathbb{R}^d \mapsto \mathbb{R}_+$ and momentum $\rho{\bf u}(t,{\bf x}): \mathbb{R}_+\times \mathbb{R}^d \mapsto \mathbb{R}^d$, which are governed by the hydrodynamic description \eqref{eq:hydro} \[ \left\{\begin{split} \rho_t + \nabla_{\bf x}\cdot (\rho {\bf u}) &= 0 \\ (\rho{\bf u})_t + \nabla_{\bf x}\cdot (\rho{\bf u}\otimes {\bf u}) & = - \int \nabla U(|{\bf x}^\tau - {\bf y}^\tau|) \,\mathrm{d}\rho({\bf y}), \quad {\bf x}^\tau := {\bf x} + \tau {\bf u}(t,{\bf x}). \end{split}\right. \] In general, the momentum flux on the left involves an additional second-order moment fluctuation term, ${\mathcal P}$, dictated by a proper closure relation. As in \cite{HT2008}, we will focus on the mono-kinetic closure, for which ${\mathcal P}=0$. To study the large time behavior we appeal, as in the discrete case, to the basic bookkeeping of energy and enstrophy: here we consider the \emph{anticipated energy} \begin{equation} \mathcal{E}(t) := \int \frac{1}{2}|{\bf u}(t,{\bf x})|^2 \rho(t,{\bf x})\,\mathrm{d}{{\bf x}} + \frac{1}{2}\int\int U(|{\bf x}^\tau(t)-{\bf y}^\tau(t)|) \,\mathrm{d}\rho(t,{\bf x})\,\mathrm{d}\rho(t,{\bf y}).
\end{equation} Away from vacuum, the velocity field ${\bf u}({\bf x})={\bf u}(t,{\bf x})$ satisfies the transport equation \begin{subequations}\label{eqs:trans} \begin{equation}\label{eq:trans} {\bf u}_t(t,{\bf x}) + {\bf u}\cdot\nabla_{\bf x} {\bf u}(t,{\bf x}) = {\mathbf A}(\rho,{\bf u})(t,{\bf x}), \end{equation} where ${\mathbf A}(\rho,{\bf u})$ denotes the anticipated interaction term \begin{equation}\label{eq:al} {\mathbf A}(\rho,{\bf u})(t,{\bf x}):=-\int \nabla U(|{\bf x}^\tau(t) - {\bf y}^\tau(t)|) \,\mathrm{d}\rho(t,{\bf y}), \qquad {\bf x}^\tau(t) = {\bf x} + \tau {\bf u}(t,{\bf x}). \end{equation} \end{subequations} We compute (suppressing the time dependence) \[ \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \mathcal{E}(t) = & \int {\bf u}({\bf x})\cdot(-{\bf u}\cdot\nabla {\bf u} +{\mathbf A}({\bf x})) \,\mathrm{d} \rho({\bf x}) + \int \frac{1}{2}|{\bf u}({\bf x})|^2 (-\nabla\cdot (\rho {\bf u}))\,\mathrm{d}{{\bf x}} \\ & + \frac{\tau}{2}\int\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|)\cdot ( -{\bf u}({\bf x})\cdot\nabla {\bf u}({\bf x}) +{\mathbf A}({\bf x}) + {\bf u}({\bf y})\cdot\nabla {\bf u}({\bf y}) -{\mathbf A}({\bf y})) \,\mathrm{d}\rho({\bf x})\,\mathrm{d}\rho({\bf y}) \\ & + \frac{1}{2}\int\int U(|{\bf x}^\tau-{\bf y}^\tau|) (-\nabla\cdot (\rho {\bf u})({\bf y}))\,\mathrm{d}\rho({\bf x})\,\mathrm{d}{{\bf y}}\\ & + \frac{1}{2}\int\int U(|{\bf x}^\tau-{\bf y}^\tau|) (-\nabla\cdot (\rho {\bf u})({\bf x}))\,\mathrm{d}{{\bf x}}\,\mathrm{d}\rho({\bf y}) \\ = & \int {\bf u}({\bf x})\cdot {\mathbf A}({\bf x}) \,\mathrm{d}\rho({\bf x})+ \frac{\tau}{2}\int\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|)\cdot ( {\mathbf A}({\bf x}) - {\mathbf A}({\bf y}) ) \,\mathrm{d}\rho({\bf y})\,\mathrm{d}\rho({\bf x}) \\ & + \frac{1}{2}\int\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|)\cdot ( -{\bf u}({\bf y})+{\bf u}({\bf x})) \,\mathrm{d}\rho({\bf y})\,\mathrm{d}\rho({\bf x})\\ = & \tau\int\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|)\cdot {\mathbf A}({\bf x})
\,\mathrm{d}\rho({\bf y})\,\mathrm{d}\rho({\bf x})\\ = & -\tau \int\Big|\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|) \,\mathrm{d}\rho({\bf y})\Big|^2\,\mathrm{d}\rho({\bf x}). \end{split} \] This is the continuum analogue of the discrete enstrophy statement \eqref{energy}, which becomes apparent when it is expressed in terms of the \emph{material derivative}, \begin{equation}\label{eq:hydroens} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \mathcal{E}(t) = -\tau \int\Big|\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|) \,\mathrm{d}\rho({\bf y})\Big|^2\,\mathrm{d}\rho({\bf x}) = -\tau \int \left|\frac{\,\mathrm{D}}{\,\mathrm{D}{t}}{\bf u}(t,{\bf x})\right|^2 \,\mathrm{d}\rho(t,{\bf x}). \end{equation} \subsection{Smooth solutions must flock}\label{subsec:must} We consider the anticipation hydrodynamics \eqref{eq:hydro} with attractive potentials, \eqref{eq:attractive} \[ a\langle r\rangle^{-\beta} \leq \frac{U'(r)}{r}, \quad U''(r) \leq A, \qquad 0<a<A. \] The study of its large time `flocking' behavior proceeds precisely along the lines of our discrete proof of theorem \ref{thm:thm2}. Here are the three main ingredients in the proof of theorem \ref{thm:thm4}. {\bf Step (i)} We begin where we left off, with the anticipated energy balance \eqref{eq:hydroens}, which we express as \[ \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t) = -\tau\int\Big|\int \nabla U(|{\bf x}^\tau-{\bf y}^\tau|) \,\mathrm{d}\rho({\bf y})\Big|^2\,\mathrm{d}\rho({\bf x}) = -\tau\int\Big|\int \frac{ U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|} ({\bf x}^\tau-{\bf y}^\tau)\,\mathrm{d}\rho({\bf y})\Big|^2\,\mathrm{d}\rho({\bf x}).
\] We now appeal to the special case of lemma \ref{lem:Mean} with $\Omega=\mathbb{R}^d$ (with variable ${\bf x}$), $\,\mathrm{d}{\mu}=\rho({\bf x})\,\mathrm{d}{{\bf x}}$, ${\bf X}({\bf x}) = {\bf x}^\tau, {\bf X}({\bf y})={\bf y}^\tau$ and $\ds c({\bf x},{\bf y})=\frac{U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|}$, in which case we have \begin{equation} \int |{\bf x}^\tau|^2 \rho({\bf x})\,\mathrm{d}{{\bf x}} \le C(\lambda,\Lambda)\int \left| \int \frac{U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|} ({\bf x}^\tau-{\bf y}^\tau)\rho({\bf y})\,\mathrm{d}{{\bf y}} \right|^2 \rho({\bf x})\,\mathrm{d}{{\bf x}}, \end{equation} where $\Lambda$ and $\lambda$ are the upper and lower bounds, respectively, of $\ds \frac{ U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|}$. Hence \begin{equation}\label{eq:stepi} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}}\mathcal{E}(t) &= -\tau\int\Big|\int \frac{ U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|} ({\bf x}^\tau-{\bf y}^\tau)\,\mathrm{d}\rho({\bf y})\Big|^2\,\mathrm{d}\rho({\bf x})\\ & \le -c\,\tau\Big(\min \frac{ U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|}\Big)^{4} \int\int \big|{\bf x}^\tau-{\bf y}^\tau\big|^2\,\mathrm{d}\rho({\bf y})\,\mathrm{d}\rho({\bf x}). \end{split} \end{equation} {\bf Step (ii)}. A bound on the spread of the anticipated positions supported on non-vacuous states \begin{equation}\label{eq:Spread} \max_{{\bf x}^\tau\in \text{supp}\,\rho(t,\cdot)}|{\bf x}^\tau| \leq c\braket{t}^\eta.
\end{equation} Arguing along the lines of lemma \ref{lem:bound}, one finds that \eqref{eq:Spread} holds with $\ds \eta=\frac{1}{2(1-\beta)}$, hence \[ a\braket{t}^{-\frac{\beta}{2(1-\beta)}} \lesssim \frac{U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|} \le A, \qquad {\bf x}^\tau,{\bf y}^\tau \in \text{supp}\,\rho(t,\cdot), \] and \eqref{eq:stepi} implies \begin{equation}\label{eq:Enspos} \begin{split} \frac{\,\mathrm{d}}{\,\mathrm{d}{t}} \mathcal{E}(t) & = -\tau\int\Big|\int \frac{ U'(|{\bf x}^\tau-{\bf y}^\tau|)}{|{\bf x}^\tau-{\bf y}^\tau|} ({\bf x}^\tau-{\bf y}^\tau)\,\mathrm{d}\rho({\bf y})\Big|^2\,\mathrm{d}\rho({\bf x}) \\ & \le -c\,\tau \braket{t}^{-\frac{2\beta}{1-\beta}}\int\int |{\bf x}^\tau-{\bf y}^\tau|^2 \,\mathrm{d}\rho({\bf x})\,\mathrm{d}\rho({\bf y}). \end{split} \end{equation} We are now exactly at the point reached in the discrete anticipation dynamics, where the decay of the anticipated energy is controlled by the fluctuations of the anticipated positions, \eqref{eq:enspos}. {\bf Step (iii)}. To close the decay rate \eqref{eq:Enspos} one invokes a hypocoercivity argument on the modified energy, \[ \widehat{\mathcal{E}}(t):= \mathcal{E}(t)-\epsilon(t)\int {\bf x}^\tau\cdot{\bf u}({\bf x})\,\mathrm{d}{\rho({\bf x})}. \] Arguing along the lines of section \ref{sec:attraction}, one can find a suitable $\epsilon(t)>0$ which leads to the sub-exponential decay of $\widehat{\mathcal{E}}(t)$ and hence of the comparable $\mathcal{E}(t)$, thus completing the proof of theorem \ref{thm:thm4}. \subsection{Existence of smooth solutions -- the 1D case}\label{subsec:1D} We study the existence of smooth solutions of the 1D anticipated hydrodynamic system \begin{equation}\label{eq:1Dhydro} \left\{ \ \ \begin{split} & \partial_t \rho + \partial_x (\rho u) = 0 \\ & \partial_t u + u\partial_x u = - \int U'(|x^\tau - y^\tau|) \rho(y) \,\mathrm{d}{y}, \qquad x^\tau=x+\tau u(t,x), \end{split}\right.
\end{equation} subject to a uniformly convex potential, $U''(\cdot) \geq a>0$. Let $\texttt{d}:= \partial_x u$. Then \begin{equation}\label{eq:ed} \begin{split} & \partial_t \texttt{d} + u\partial_x \texttt{d} + \texttt{d}^2 = - (1+\tau \texttt{d})\int U''(|x^\tau - y^\tau|) \rho(y) \,\mathrm{d}{y} \end{split} \end{equation} or \begin{equation}\label{eq:quad} \begin{split} \texttt{d}' = - \texttt{d}^2 - c(1+\tau \texttt{d}), \qquad {}':=\partial_t + u\partial_x, \end{split} \end{equation} where, by uniform convexity, $c=c(t,x):= \int U''(|x^\tau - y^\tau|) \rho(y) \,\mathrm{d}{y} \in [m_0 a,m_0A]$, with $m_0:=\int \rho_0(x)\,\mathrm{d}{x}$ denoting the (conserved) total mass. The discriminant of the right-hand side of \eqref{eq:quad}, viewed as a quadratic in $\texttt{d}$, is given by $(\tau c)^2 - 4c = c(\tau^2 c -4)$ and is non-negative, provided $\tau^2 m_0 a \ge 4$. In this case, the smaller root of \eqref{eq:quad} is given by \begin{equation} \frac{1}{2}(-\tau c - \sqrt{c(\tau^2 c -4)}) \le -\frac{1}{2}(\tau m_0 a + \sqrt{m_0 a(\tau^2 m_0 a -4)}), \end{equation} and the region to its right is invariant under the dynamics \eqref{eq:ed}. We conclude the following. \begin{proposition}[{\bf Existence of global smooth solution}]\label{prop:1Dexist} Consider the 1D anticipation hydrodynamic system \eqref{eq:1Dhydro} with uniformly convex potential $0< a\leq U''\leq A$. It admits a global smooth solution for sub-critical initial data, $(\rho_0,u_0)$, satisfying \[ \min_x u'_0(x) \ge -\frac{1}{2}(\tau m_0 a + \sqrt{m_0 a(\tau^2 m_0 a -4)}), \qquad \tau \geq \frac{2}{\sqrt{m_0 a}}. \] \end{proposition}
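The invariant-region argument behind Proposition \ref{prop:1Dexist} is a scalar Riccati comparison, which can be illustrated numerically. In the sketch below (our own illustration; the oscillating coefficient $c(t)$ and all parameters are assumed), a solution starting to the right of the threshold stays there:

```python
import math

# Illustration (ours) of the critical-threshold mechanism: along
# characteristics, d' = -d^2 - c(t)(1 + tau d) with c(t) in [m0*a, m0*A].
# If tau^2 m0 a >= 4 and d(0) lies to the right of
#   d_* = -(tau m0 a + sqrt(m0 a (tau^2 m0 a - 4)))/2,
# the solution should stay in the invariant region d >= d_* (no blow-up).

m0, a, A, tau = 1.0, 1.0, 2.0, 3.0       # tau^2 * m0 * a = 9 >= 4
dstar = -(tau * m0 * a + math.sqrt(m0 * a * (tau ** 2 * m0 * a - 4))) / 2

def c(t):
    # an arbitrary coefficient valued in [m0*a, m0*A] (illustrative choice)
    return m0 * a + m0 * (A - a) * (0.5 + 0.5 * math.sin(3 * t))

def f(t, d):
    return -d * d - c(t) * (1 + tau * d)

d = dstar + 1e-3                          # start just inside the region
h, T = 1e-4, 20.0
t, dmin = 0.0, d
for _ in range(int(T / h)):
    # one RK4 step of the scalar Riccati equation
    k1 = f(t, d)
    k2 = f(t + h / 2, d + h / 2 * k1)
    k3 = f(t + h / 2, d + h / 2 * k2)
    k4 = f(t + h, d + h * k3)
    d += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
    dmin = min(dmin, d)

assert dmin >= dstar - 1e-6   # never leaves the invariant region
assert abs(d) < 10            # no finite-time blow-up
```

Starting instead well below $d_*$ with $c \equiv m_0a$ produces finite-time blow-up of the same Riccati equation, which is the complementary side of the threshold picture.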
\section{Introduction} The Riemann zeta function is defined, for $\,\Re{(s)}>1$, as $\,\zeta(s) := \sum_{k=1}^\infty{{\,1/k^s}}$. This function has a singularity (in fact, a simple pole) at $\,s=1$, which corresponds to the divergence of the harmonic series. For real $\,s>1$, the series converges; for $\,s \ge 2\,$ its sum lies between $1$ and $2$, as follows from the integral test. For positive integer values of $\,s$, $s>1$, one has a well-known formula by Euler (see Ref.~\cite{DeAmo} and references therein): \begin{equation} \zeta(2n) = (-1)^{n-1} \, \frac{2^{2n-1} \, B_{2n}}{(2n)!} \; \pi^{2n} , \label{eq:Euler} \end{equation} $n$ being a positive integer. Here, $\,B_n$ are Bernoulli numbers, i.e.~the rational coefficients of ${\,z^n/n!}\,$ in the Taylor series expansion of ${\,z/(e^z-1)}$, $0 < |z| < 2\,\pi$. For $\,\zeta{(2n+1)}$, on the other hand, no analogous expression is currently known. This scenario has a `reverse' counterpart in the values of the Dirichlet beta function, defined as $\,\beta(s) := \sum_{k=0}^\infty{(-1)^k/(2\,k+1)^s}$, $s>0$, in the sense that, for odd integer values of $s$, the following analogue of Eq.~\eqref{eq:Euler} is known:\footnote{Note that this formula remains valid for $\,n=0$, since $\,E_0 =1\,$ and $\,\beta{(1)} = \pi/4$.} \begin{equation*} \beta(2n+1) = (-1)^{n} \, \frac{E_{2n}}{2^{2n+2} \, (2n)!} \; \pi^{2n+1} , \label{eq:Euler2} \end{equation*} where $\,E_{n}$ are Euler numbers, i.e. the (integer) coefficients of ${\,z^n/n!}\,$ in the Taylor expansion of $\, \mathrm{sech}{(z)}$, $|z| < {\,\pi/2}\,$. For $\,\beta(2n)$, no analogous expression is known, not even for $\,\beta{(2)}$, known as Catalan's constant $\,G$.
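Euler's formula \eqref{eq:Euler} is easy to check numerically: the Bernoulli numbers follow from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$ ($m\ge1$, $B_0=1$), and the resulting values of $\zeta(2n)$ can be compared with partial sums of the defining series. The sketch below is our own illustration, not part of the text:

```python
from fractions import Fraction
from math import comb, factorial, pi

# Illustrative numerical check (ours) of Euler's formula (1): Bernoulli
# numbers from the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1,
# then zeta(2n) from (1), compared against a direct partial sum.

def bernoulli_upto(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

B = bernoulli_upto(8)
assert B[2] == Fraction(1, 6) and B[4] == Fraction(-1, 30)

def zeta_even(n):
    # Euler's formula: zeta(2n) = (-1)^(n-1) 2^(2n-1) B_{2n} pi^(2n) / (2n)!
    return ((-1) ** (n - 1) * 2 ** (2 * n - 1) * float(B[2 * n])
            * pi ** (2 * n) / factorial(2 * n))

assert abs(zeta_even(1) - pi ** 2 / 6) < 1e-12          # zeta(2)
assert abs(zeta_even(2) - pi ** 4 / 90) < 1e-12         # zeta(4)
direct = sum(1.0 / k ** 4 for k in range(1, 200000))    # partial sum of zeta(4)
assert abs(zeta_even(2) - direct) < 1e-9
```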
Some progress in this direction was reached by K\"{o}lbig (1996), who proved that~\cite{Kolbig}: \begin{equation} \beta(2n) = \frac{\psi^{(2n-1)}{\left( \frac14 \right)}}{2\,(2n-1)!\,4^{2n-1}} \, - \frac{(2^{2n}-1) \, |B_{2n}|}{ 2\, (2n)!} \, \pi^{2n}, \label{eq:beta2n} \end{equation} where $\psi^{(n)}{(x)}$ is the polygamma function, i.e.~the $n$-th derivative of $\psi{(x)}$, the digamma function.\footnote{The function $\psi{(x)}$, in turn, is defined as the logarithm derivative of $\,\Gamma{(x)}$, the classical gamma function.} Although this identity resembles Euler's formula, the arithmetic nature of $\,\psi^{(2n-1)}{\left(\frac14 \right)}\,$ is currently unknown. In a very recent paper, I have succeeded in applying the Dancs and He series expansion method, as introduced in Ref.~\cite{Dancs}, to find similar formulae for $\,\beta(2n)\,$~\cite{Lima2012}. I could then prove that \begin{eqnarray} \sum_{k=1}^\infty{\frac{\zeta(2k)}{2k \, (2k+1) \, \ldots \, (2k+2n-1)} \, \left(\frac{1}{2^{2k}}-\frac{1}{4^{2k}}\right)} = \nonumber \\ (-1)^n\,\frac{2^{2n-2}}{\pi^{2n-1}} \, \beta{(2n)} +\frac{n}{(2n)!}\,\ln{2} + \, \frac12 \, \sum_{m=1}^{n-1}{(-1)^m \, \frac{2^{2m}-1}{\pi^{2m}} \, \frac{\zeta{(2m+1)}}{(2n-2m-1)!}} \, . \quad \label{eq:my2} \end{eqnarray} Since both \begin{subequations} \begin{equation} \sum_{k=1}^\infty{\frac{(2k-1)!}{(2k+2n-1)!} ~ \frac{\zeta(2k)}{2^{\,2k}}} \label{subeqn1} \end{equation} and \begin{equation} \sum_{k=1}^\infty{\frac{(2k-1)!}{(2k+2n-1)!} ~ \frac{\zeta(2k)}{4^{\,2k}}} \label{subeqn2} \end{equation} \end{subequations} converge absolutely, the zeta series in Eq.~\eqref{eq:my2} equals the difference of these individual series. However, the best I could do there in Ref.~\cite{Lima2012} was to investigate the pattern of the analytical results found for the first series, Eq.~\eqref{subeqn1}, for small values of $n$. This did lead me to \emph{conjecture} a formula for its summation which should be valid for any positive integer $\,n$. 
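As a numerical sanity check of Eq.~\eqref{eq:my2} (our own illustration, not part of the original text): for $n=1$ the identity reduces to $\sum_{k\ge1}\frac{\zeta(2k)}{2k(2k+1)}\big(\frac{1}{2^{2k}}-\frac{1}{4^{2k}}\big)=-\frac{\beta(2)}{\pi}+\frac{\ln 2}{2}$, which can be verified against the known value of Catalan's constant $G=\beta(2)$:

```python
from fractions import Fraction
from math import comb, factorial, pi, log

# Illustrative check (ours) of the n = 1 case of Eq. (my2):
#   sum_{k>=1} zeta(2k)/(2k(2k+1)) (2^{-2k} - 4^{-2k}) = -beta(2)/pi + (ln 2)/2,
# with beta(2) = G, Catalan's constant. Exact zeta(2k) values come from
# Euler's formula via the Bernoulli recurrence.

def bernoulli_upto(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

B = bernoulli_upto(60)

def zeta_even(k):
    # zeta(2k) by Euler's formula, exact up to floating point
    return ((-1) ** (k - 1) * 2 ** (2 * k - 1) * float(B[2 * k])
            * pi ** (2 * k) / factorial(2 * k))

G = 0.9159655941772190   # Catalan's constant, beta(2)
lhs = sum(zeta_even(k) / (2 * k * (2 * k + 1)) * (2.0 ** (-2 * k) - 4.0 ** (-2 * k))
          for k in range(1, 30))
rhs = -G / pi + log(2) / 2
assert abs(lhs - rhs) < 1e-10
```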
Here in this work, I make use of a classical formula by Wilton to prove the above mentioned conjecture. The proof of another conjecture, involving the zeta series in Eq.~\eqref{subeqn2}, follows from Eq.~\eqref{eq:my2}. Finally, aiming at a generalization of these formulae, I replace the fractions $(1/2)^{2k}$ and $(1/4)^{2k}$ with $\,x^{2k}$, $\,x\,$ being any non-null real number with $\, |x| \le 1$. This has led me to develop a simpler, more direct proof of a theorem by Katsurada (1999)~\cite{Katsurada}. \section{Zeta series for $\,\zeta(2n+1)\,$ and $\,\beta(2n)$} The proof of the first conjecture in Ref.~\cite{Lima2012}, involving the zeta series in Eq.~\eqref{subeqn1} above, follows from a classical result by Wilton for a rapidly convergent series representation of $\,\zeta{(2n+1)}\,$~\cite{Wilton}. Hereafter, we define $\,\,H_n := \sum_{k=1}^n{1/k}\,$ as the $n$-th harmonic number. \begin{lema}[Wilton's formula] \label{le:Wilton} \; Let $\,n\,$ be a positive integer. Then \begin{eqnarray*} \frac{\zeta(2n+1)}{(-1)^{\,n-1} \, \pi^{2n}} = \frac{H_{2n+1}-\ln{\pi}}{(2n+1)!} +\sum_{k=1}^{n-1}{\frac{(-1)^k}{(2n-2k+1)!} \, \frac{\zeta{(2k+1)}}{\pi^{2k}}} +\sum_{k=1}^\infty{\frac{(2k-1)!}{(2n+2k+1)!} \, \frac{\zeta(2k)}{2^{\,2k-1}}} . \end{eqnarray*} \end{lema} For a skeleton of the proof, see the original work of Wilton~\cite{Wilton}. For a more complete proof, see Sec.~4.2 (in particular, pp.~412--413) of Ref.~\cite{LTSri}, a systematic collection of zeta series recently published by Srivastava and Choi. This lemma allows us to prove the first conjecture of Ref.~\cite{Lima2012}, namely that in its Eq.~(29). \begin{teo}[First zeta series] \label{teo:zetaser1} \; Let $\,n\,$ be a positive integer. Then \begin{eqnarray*} \sum_{k=1}^\infty{\frac{(2k-1)! \; \zeta(2k)}{(2k+2n-1)!} \, \left(\frac{1}{2}\right)^{2k}} = \frac12 \! \left[ \frac{\ln{\pi}-H_{2n-1}}{(2n-1)!} +\sum_{m=1}^{n-1}{(-1)^{m+1} \, \frac{\zeta{(2m+1)}}{\pi^{2m}\,(2n-2m-1)!}} \right] .
\end{eqnarray*} \end{teo} \begin{proof} \; From Lemma~\ref{le:Wilton}, we know that \begin{eqnarray*} \frac{(-1)^n \, \zeta(2n+1)}{\pi^{2n}} = \frac{\ln{\pi} -H_{2n+1}}{(2n+1)!} -\sum_{k=1}^{n-1}{\frac{(-1)^k}{(2n-2k+1)!} \, \frac{\zeta{(2k+1)}}{\pi^{2k}}} \nonumber \\ - \,2 \, \sum_{k=1}^\infty{\frac{(2k-1)!}{(2n+2k+1)!} \, \frac{\zeta(2k)}{2^{\,2k}}} \, . \end{eqnarray*} By isolating the last term, one has \begin{equation*} 2 \sum_{k=1}^\infty{\frac{(2k-1)!}{(2n+2k+1)!} \, \frac{\zeta(2k)}{2^{\,2k}}} = \frac{\ln{\pi} -H_{2n+1}}{(2n+1)!} -\sum_{k=1}^{n}{\frac{(-1)^k}{(2n-2k+1)!} \, \frac{\zeta{(2k+1)}}{\pi^{2k}}} \, . \end{equation*} By substituting $\,n = \ell -1\,$ in the above equation, one finds \begin{equation*} 2 \sum_{k=1}^\infty{\frac{(2k-1)!}{(2k +2\ell-1)!} \, \frac{\zeta(2k)}{2^{\,2k}}} = \frac{\ln{\pi} -H_{2\ell-1}}{(2\ell-1)!} -\sum_{k=1}^{\ell-1}{\frac{(-1)^{k}}{(2\ell-2k-1)!} \, \frac{\zeta{(2k+1)}}{\pi^{2k}}} \, . \end{equation*} A division by $2$ completes the proof. \qed \end{proof} For instance, on putting $\,n=2\,$ in this theorem we get a rapidly converging series representation for $\,\zeta(3)$, namely \begin{equation} \zeta(3) = \pi^2 \, \sum_{k=1}^\infty{\frac{\zeta(2k)}{k\,(2k + 1)\,(2k + 2)\,(2k + 3) \; 2^{2k}}} +\frac{11}{36}\,\pi^2 -\frac16 \, \pi^2\,\ln{\pi} \, . \end{equation} This formula converges to $\zeta(3)$ much faster than $\,\sum_{k\ge1}{1/k^3}$ and even faster than $\:\frac52 \, \sum_{k \ge 1}{{(-1)^{k+1}} \slash \, {[\,k^3 \, \binom{2k}{k}\,]}}$, a central binomial series used by Ap\'{e}ry~\cite{Apery}. Numerical computation using \emph{Mathematica} shows that only ten terms of the above zeta series are enough for ten-decimal-place accuracy. Now we can use Eq.~\eqref{eq:my2} to prove the other conjecture raised in Ref.~\cite{Lima2012}, namely that in its Eq.~(30). \begin{teo}[Second zeta series] \label{teo:zetaser2} \; Let $\,n\,$ be a positive integer. Then \begin{eqnarray*} \sum_{k=1}^\infty{\frac{(2k-1)! 
\; \zeta(2k)}{(2k+2n-1)!} \, \left(\frac{1}{4}\right)^{\!2k}} = \frac12 \left[ \frac{\,\ln{(\pi/2)} -H_{2n-1}}{(2n-1)!} -(-1)^{n} \left(\frac{2}{\pi}\right)^{\! 2n-1} \!\! \beta{(2n)} \right. \nonumber \\ \left. \begin{array}{c} ^{} \\ ^{} \end{array} -\sum_{m=1}^{n-1}{(-1)^m \left(\frac{2}{\pi}\right)^{\!2m} \frac{\zeta{(2m+1)}}{(2n-2m-1)!\,}} \, \right] \! . \end{eqnarray*} \end{teo} \begin{proof} \; From Eq.~\eqref{eq:my2}, we know that \begin{eqnarray*} \sum_{k=1}^\infty{\frac{(2k-1)! \; \zeta(2k)}{(2k+2n-1)!} \, \left(\frac{1}{2^{2k}}\right)} -\sum_{k=1}^\infty{\frac{(2k-1)! \; \zeta(2k)}{(2k+2n-1)!} \, \left(\frac{1}{4^{2k}}\right)} \nonumber \\ = (-1)^n\,\frac{2^{2n-2}}{\pi^{2n-1}} \, \beta{(2n)} +\frac{n}{(2n)!}\,\ln{2} + \, \frac12 \, \sum_{m=1}^{n-1}{(-1)^m \, \frac{2^{2m}-1}{\pi^{2m}} \, \frac{\zeta{(2m+1)}}{(2n-2m-1)!}} \: . \end{eqnarray*} On substituting the first zeta series at the left-hand side by the result proved in Theorem~\ref{teo:zetaser1}, one has \begin{eqnarray*} \frac{\ln{\pi}-H_{2n-1}}{(2n-1)!} -\sum_{m=1}^{n-1}{(-1)^{m} \, \frac{\zeta{(2m+1)}}{\pi^{2m}\,(2n-2m-1)!}} \,-2 \sum_{k=1}^\infty{\frac{(2k-1)! \; \zeta(2k)}{(2k+2n-1)!} \, \left(\frac{1}{4^{2k}}\right)} \nonumber \\ = (-1)^n \left(\frac{2}{\pi}\right)^{2n-1} \! \beta{(2n)} +\frac{\ln{2}}{(2n-1)!} +\, \sum_{m=1}^{n-1}{(-1)^m \, \frac{2^{2m}-1}{\pi^{2m}} \, \frac{\zeta{(2m+1)}}{(2n-2m-1)!}} \: . \end{eqnarray*} By isolating the remaining zeta series, one finds \begin{eqnarray*} 2 \sum_{k=1}^\infty{\frac{(2k-1)! \: \zeta(2k)}{(2k+2n-1)!} \, \left(\frac{1}{4^{2k}}\right)} = \frac{\ln{(\pi/2)}-H_{2n-1}}{(2n-1)!} -(-1)^n\,\left(\frac{2}{\pi}\right)^{\!2n-1} \! \beta{(2n)} \nonumber \\ -\sum_{m=1}^{n-1}{(-1)^{m} \, \left(\frac{2}{\pi}\right)^{2m} \frac{\zeta{(2m+1)}}{\,(2n-2m-1)! \,}} \, . \end{eqnarray*} A division by $2$ completes the proof. 
\qed \end{proof} On putting $\,n=1\,$ in Theorem~\ref{teo:zetaser2} we get a rapidly converging series representation for $\,\beta(2) = G$, namely \begin{equation} G = \pi \, \sum_{k=1}^\infty{\frac{\zeta(2k)}{2k\,(2k + 1)} \: \frac{1}{4^{2k}}} - \frac{\pi}{2} \: \ln{\!\left(\frac{\pi}{2}\right)}+\frac{\pi}{2} \, . \end{equation} This formula converges much faster than $\,\sum_{k \ge 0}{(-1)^k/(2k+1)^2}\,$ and even faster than $\,\frac{\pi}{8} \, \ln{\left( 2 +\sqrt{3} \,\right)} +\frac38 \, \sum_{n \ge 0}{1/\!\left[(2n + 1)^2 \, \binom{2n}{n}\right]}$, a rapidly converging central binomial series discovered by Ramanujan~\cite{rama}. Numerically, only six terms of the above zeta series are enough for a result accurate to ten decimal places. After an extensive search for similar zeta series in the literature, I have found a formula by Srivastava and Tsumura (2000) in Ref.~\cite{Sri2000} (see also Ref.~\cite{LTSri}, p.~421, Eq.~(30)\,). In fact, after some simple manipulations, this formula yields an independent proof of our Theorem~\ref{teo:zetaser2}, as the reader can easily check.\footnote{\label{ft:polygam} For this, it will be useful to know that $\,\zeta{(s,a)} := \sum_{k=0}^{\infty} 1/(k+a)^s\,$ is the Hurwitz zeta function, for which it is well-known that $\,\zeta{(n+1,a)} = (-1)^{n+1} \, \psi^{(n)}(a)/n!\,$ (see, e.g., Eq.~(25.11.12) of Ref.~\cite{Nist}). Then $\,\zeta{\left( 2m, \frac14 \right)} = \psi^{(2m-1)}\left(\frac14 \right) / (2m-1)!$ and, from our Eq.~\eqref{eq:beta2n}, it follows that $\,\psi^{(2m-1)}{\left( \frac14 \right)} / (2m-1)! 
= 2^{4m-1} \, \beta(2m) +(-1)^{m-1} \, 2^{4m-2} \, (2^{2m}-1) \, B_{2m} \, \pi^{2m} / (2m)!\,$, where $\,|B_{2m}|\,$ was substituted by $\,(-1)^{m-1} \, B_{2m}$.} On investigating the substitution of $\,(1/2)^{2k}$ and $\,(1/4)^{2k}$ by $\,x^{2k}$, $\,x$ being any non-null real number with $\,|x| \le 1$, in the zeta series treated in the previous theorems, I have found the following general result. \begin{teo}[Generalization] \label{teo:geral} \quad Let $\,n\,$ be a positive integer and $\,x\,$ be any real number with $\, 0 < |x| \le 1$. Then \begin{eqnarray*} \zeta{(2n+1)} \,- \frac{1}{2\pi x} \, \sum_{\ell=1}^\infty{\frac{\sin{(2 \pi \ell x)}}{\ell^{\,2n+2}}} = (-1)^{n-1} \: (2\pi x)^{2n} \left[ \frac{\,H_{2n+1} -\ln{\! \left( 2 \pi \, |x| \right)}}{(2n+1)!} \right. \nonumber \\ \left. \begin{array}{c} ^{} \\ ^{} \end{array} + \sum_{k=1}^{n-1}{(-1)^k \frac{\zeta{(2k+1)}}{(2n-2k+1)! \; (2\pi x)^{2k}}} \; + 2 \sum_{k=1}^\infty{\frac{(2k-1)! \: \zeta{(2k)}}{(2n+2k+1)!} ~ x^{\,2k}\,} \right] \! . \end{eqnarray*} \end{teo} \begin{proof} From Euler's product formula for the sine function, we know that, for all non-null real $\,z\,$ with $\,|z| \le 1$, the following identity holds: \begin{equation} \frac{\sin{(\pi z)}}{\pi z} = \prod_{n=1}^\infty{\left( 1-\frac{z^2}{n^2} \right)} . \end{equation} On taking the logarithm of both sides, we have \begin{equation} \ln{\left[\frac{\sin{(\pi z)}}{\pi z}\right]} = \sum_{n=1}^\infty{\ln{\left( 1-\frac{z^2}{n^2} \right)}} = - \, \sum_{n=1}^\infty{ \left( \sum_{k=1}^\infty{ \frac{z^{2k}}{k \: n^{2k}} }\right)} , \end{equation} where the logarithm was expanded in a Taylor series in the last step. 
This implies that \begin{equation} \ln{\left[\frac{\,2 \, \sin{(\pi z)}}{2 \, \pi z}\right]} = - \, \sum_{k=1}^\infty{ \left( \, \sum_{n=1}^\infty{ \frac{1}{n^{2k}} } \right) \, \frac{z^{2k}}{k}} = -\sum_{k=1}^\infty{\frac{z^{2k}}{k} \, \, \zeta{(2k)}} \end{equation} and then \begin{equation} \ln{\left| 2 \, \sin{(\pi z)}\right|} - \ln{\left|2 \, \pi z\right|} = \, - \, \sum_{k=1}^\infty{\frac{z^{2k}}{k} \, \, \zeta{(2k)}} \, . \end{equation} On multiplying both sides by $\,(x-z)^{2n}$, $n$ being a positive integer, and integrating from $0$ to $x$, $x$ being any non-null real in the interval $[-1,1]$, we have \begin{eqnarray} \int_0^x{(x-z)^{2n} \, \ln{\left| 2 \, \sin{(\pi z)}\right|} \: dz} \, - \int_0^x{(x-z)^{2n} \, \ln{\left| 2 \, \pi z\right|} \: dz} \nonumber \\ = \, - \, \int_0^x{(x-z)^{2n} \, \sum_{k=1}^\infty{\frac{z^{2k}}{k} \, \zeta{(2k)}} \: dz} \, . \label{eq:Zints} \end{eqnarray} Let us solve each definite integral carefully. The first integral in Eq.~\eqref{eq:Zints} can be expanded in a trigonometric series as follows. Since, for all $\,\theta \in (0,2\pi)$,\footnote{This is a well-known Fourier series expansion.} \begin{equation} \ln{\left[2 \, \sin{\left(\frac{\theta}{2}\right)} \right]} = - \sum_{k=1}^\infty{\frac{\cos{(k \, \theta)}}{k}} \, , \end{equation} then, on substituting $\,z = \theta / (2\pi)\,$ in the integral, one finds \begin{eqnarray} I_n(x) := \int_0^x{(x-z)^{2n} \: \ln{\left| 2 \, \sin{(\pi z)}\right|} \, dz} \nonumber \\ = \frac{1}{(2\pi)^{2n+1}} \, \int_0^{2 \pi x}{(2 \pi x -\theta)^{2n} \: \ln{\left| 2 \, \sin{\left(\frac{\theta}{2}\right)} \right|} \: d \theta } \nonumber \\ = \, - \, \frac{1}{(2\pi)^{2n+1}} \, \int_0^{2 \pi x}{(2 \pi x -\theta)^{2n} \: \sum_{k=1}^\infty{\frac{\cos{(k \, \theta)}}{k}} \: d \theta } \nonumber \\ = \, - \, \frac{1}{(2\pi)^{2n+1}} \, \int_0^{2 \pi x}{(2 \pi x -\theta)^{2n} \: d\left(\sum_{k=1}^\infty{\frac{\sin{(k \, \theta)}}{k^2}}\right) } . 
\: \end{eqnarray} On integrating by parts, one has \begin{eqnarray} I_n(x) = \frac{2n}{(2\pi)^{2n+1}} \, \int_0^{2 \pi x}{(2 \pi x -\theta)^{2n-1} \; d\left(\sum_{k=1}^\infty{\frac{\cos{(k \, \theta)}}{k^3}}\right) } \nonumber \\ = - \frac{2n}{(2 \pi)^2} \, x^{2n-1} \, \zeta(3) + \frac{(2n)!}{(2\pi)^{2n+1} \, (2n-2)!} \, \int_0^{2 \pi x}{(2 \pi x -\theta)^{2n-2} \; d\left(\sum_{\,k=1}^\infty{\frac{\sin{(k \, \theta)}}{k^4}}\right) } . \qquad {} \end{eqnarray} On integrating by parts again and again, we find, after some algebra, that \begin{eqnarray} I_n(x) = \sum_{j=1}^n{\frac{(-1)^j \, (2n)! \, \zeta(2j+1)}{(2\pi)^{2j} \, (2n+1-2j)!} \, x^{2n+1-2j} } \, + \, \frac{(-1)^{n-1} \, (2n)!}{(2\pi)^{2n+1}} \, \sum_{k=1}^\infty{\frac{\sin{(2 k \, \pi \, x)}}{k^{2n+2}}} \, . \qquad \label{eq:Zint1} \end{eqnarray} The second integral in Eq.~\eqref{eq:Zints} is readily solved by noting that \begin{eqnarray} \int_0^x{(x-z)^{2n} \, \ln{| 2 \, \pi z |} \: dz} = \, - \, \frac{1}{2n+1} \, \int_0^x{\ln{|2\pi\,z|} \, \, d \left[ (x-z)^{2n+1} -x^{2n+1} \right]} \nonumber \\ = \, \frac{x^{2n+1}}{2n+1} \, \left[ \ln{|2\,\pi\,x|} + \sum_{\ell=1}^{2n+1}{\frac{(-1)^\ell}{\ell} \: \binom{2n+1}{\ell}} \right] . \quad \label{eq:Zint2} \end{eqnarray} The third integral, i.e.~the one at the right-hand side of Eq.~\eqref{eq:Zints}, can be written in the form of a zeta series on integrating it by parts directly. This yields \begin{equation} \int_0^x{(x-z)^{2n} \, \sum_{k=1}^\infty{\frac{z^{2k}}{k} \, \zeta{(2k)}} \: dz} = 2 \, (2n)! \, \sum_{k=1}^\infty{\frac{(2k-1)! \, \zeta(2k) \, x^{2k+2n+1}}{(2k+2n+1)!}} \, . \label{eq:Zint3} \end{equation} Finally, on substituting the results in Eqs.~\eqref{eq:Zint1}, \eqref{eq:Zint2}, and \eqref{eq:Zint3} on the integrals in Eq.~\eqref{eq:Zints}, we find \begin{eqnarray} \sum_{j=1}^n{\frac{(-1)^j \, (2n)! 
\, \zeta(2j+1)}{(2\pi)^{2j} \, (2n+1-2j)!} \, \frac{1}{x^{2j}} } \, + \, \frac{(-1)^{n-1} \, (2n)!}{(2\pi \, x)^{2n+1}} \, \sum_{k=1}^\infty{\frac{\sin{(2 k \, \pi \, x)}}{k^{2n+2}}} \nonumber \\ -\frac{1}{2n+1} \left[ \ln{|2\,\pi\,x|} + \sum_{\ell=1}^{2n+1}{\frac{(-1)^\ell}{\ell} \: \binom{2n+1}{\ell}} \right] = - 2 \, (2n)! \, \sum_{k=1}^\infty{\frac{(2k-1)! \, \zeta(2k)}{(2k+2n+1)!} \: x^{2k}} . \qquad \label{eq:Zexpr} \end{eqnarray} On dividing both sides by $[-(2n)!]$, we have \begin{eqnarray} 2 \sum_{k=1}^\infty{\frac{(2k-1)!}{(2k+2n+1)!} \, \zeta(2k) \: x^{2k}} = \, - \, \sum_{j=1}^n{\frac{(-1)^j \, \zeta(2j+1)}{(2\pi \, x)^{2j} \, (2n+1-2j)!} } \, + \frac{\ln{(2\,\pi\,|x|)} - H_{2n+1}}{(2n+1)!} \nonumber \\ + \, \frac{(-1)^n}{(2 \pi \, x)^{2n+1}} \, \sum_{k=1}^\infty{\frac{\sin{(2 k \, \pi \, x)}}{k^{2n+2}}} \, , \qquad {} \label{eq:Zexprfim} \end{eqnarray} where we have made use of the binomial sum $\,H_{2n+1} = \, - \, \sum_{\ell=1}^{2n+1}{\frac{(-1)^\ell}{\ell} \: \binom{2n+1}{\ell}}$, which is easily proved by induction on $\,n$. On extracting the last term of the sum involving odd zeta values (i.e., that for $j=n$) we arrive at the desired result. \qed \end{proof} Our proof of Theorem~\ref{teo:geral}, above, has allowed us to detect some typos in Theorem~2 of Ref.~\cite{Katsurada}, as well as some mistakes in its proof, which is based upon the Mellin transform technique. In fact, the formula as printed in Katsurada's paper cannot be correct because as $\,x \rightarrow 0\,$ the left-hand side approaches $\,2 \, \zeta(2n+1)\,$ whereas the right-hand side approaches zero. Unfortunately, that incorrect formula is reproduced on p.~442 of Ref.~\cite{LTSri}, which is currently the main reference on zeta series in the literature. Our Theorem~\ref{teo:geral} corrects both the absence of a modulus in the argument of the logarithm and the `$+$' sign preceding the sine series in Eq.~(1.6) of Ref.~\cite{Katsurada}. 
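Since the sign of the sine series is exactly what distinguishes our Theorem~\ref{teo:geral} from the formula printed in Ref.~\cite{Katsurada}, a direct numerical check is worthwhile. The Python sketch below evaluates both sides of Theorem~\ref{teo:geral} for the sample values $n=2$, $x=1/3$; the `zeta` helper and the truncation limits are my own choices, not part of the proof.

```python
import math

def zeta(s, N=400):
    # Real zeta(s), s > 1: partial sum plus Euler-Maclaurin tail correction.
    head = sum(n ** -s for n in range(1, N + 1))
    return head + N ** (1 - s) / (s - 1) - 0.5 * N ** -s + s * N ** (-s - 1) / 12.0

n, x = 2, 1.0 / 3.0
H = sum(1.0 / j for j in range(1, 2 * n + 2))   # harmonic number H_{2n+1}
w = 2.0 * math.pi * x

# Left-hand side: zeta(2n+1) - (1/(2 pi x)) * sum_l sin(2 pi l x)/l^(2n+2)
sine_series = sum(math.sin(w * l) / l ** (2 * n + 2) for l in range(1, 2001))
lhs = zeta(2 * n + 1) - sine_series / w

# Right-hand side, term by term as in the theorem
bracket = (H - math.log(w)) / math.factorial(2 * n + 1)
bracket += sum((-1) ** k * zeta(2 * k + 1)
               / (math.factorial(2 * n - 2 * k + 1) * w ** (2 * k))
               for k in range(1, n))
bracket += 2.0 * sum(math.factorial(2 * k - 1) * zeta(2 * k) * x ** (2 * k)
                     / math.factorial(2 * n + 2 * k + 1)
                     for k in range(1, 40))
rhs = (-1) ** (n - 1) * w ** (2 * n) * bracket

assert abs(lhs - rhs) < 1e-10
```

With the `$-$' sign on the sine series the two sides agree to double precision; flipping it to `$+$', as printed in Eq.~(1.6) of Ref.~\cite{Katsurada}, destroys the agreement.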
Interestingly, the formula in our Theorem~\ref{teo:geral} can be reduced to a more appropriate form for both symbolic and numerical computations. This is easily obtained by noting that $\,\sum_{\ell=1}^\infty{\sin{(2 \pi x \, \ell)}/\ell^{\,2n}} = \mathrm{Cl}_{2n}(2\pi x)\,$ for all real $x$, where $\,\mathrm{Cl}_{2n}(\theta) := \Im{\left\{\mathrm{Li}_{2n}\left(e^{i\,\theta}\right)\right\}} \,$ is the Clausen function of order $\,2 n$. By substituting $\:n = m-1\:$ and $\:x = \theta/(2 \pi)\,$ in Theorem~\ref{teo:geral}, we readily find \begin{eqnarray} \sum_{k=1}^\infty{\frac{(2k-1)! ~ \zeta{(2k)}}{(2m+2k-1)!} \; {\left( \frac{\theta}{2 \pi}\right)\!}^{2k}} = \, \frac12 \left[\frac{\:\ln{|\theta|} -H_{2m-1}}{(2m-1)!} \,-(-1)^m \, \frac{\mathrm{Cl}_{2m}(\theta)}{\theta^{2m-1}} \right. \nonumber \\ \left. \begin{array}{c} ^{} \\ ^{} \end{array} -\sum_{k=1}^{m-1}{(-1)^k \, \frac{\zeta{(2k+1)}}{(2m-2k-1)! \, ~ \theta^{2k}}} \, \right] \! , \label{eq:Cl2m} \end{eqnarray} which holds for any non-null real $\,\theta\,$ with $\, |\theta| \le 2\pi$. Advantageously, this form remains valid for all \emph{positive integer} values of $\,m$, as long as the sum on the right-hand side is taken to be zero when $\,m=1$, as usual. Since $\,\mathrm{Cl}_{2m}(\pi) = 0\,$ and $\,\mathrm{Cl}_{2m}(\pi/2) = \beta{(2m)}$, Eq.~\eqref{eq:Cl2m} allows for prompt proofs of Theorems~\ref{teo:zetaser1} and~\ref{teo:zetaser2}, respectively, showing that they are special cases of Theorem~\ref{teo:geral}, as expected. The same is true of a number of rapidly convergent zeta series in the literature, e.g.~some of the zeta series given in Refs.~\cite{Borwein,Dancs,Lima2012,Sri98,Wilton} and many zeta series presented in Chaps.~3 and 4 of Ref.~\cite{LTSri}. This reflects the generality of our Theorem~\ref{teo:geral}. Equation~\eqref{eq:Cl2m} can thus be viewed as a source of rapidly converging zeta series for odd zeta values and Clausen functions of even order. 
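Equation~\eqref{eq:Cl2m} can also be checked numerically. The sketch below takes $\theta=\pi/2$ and $m=2$, for which $\mathrm{Cl}_4(\pi/2)=\beta(4)$ is available directly from its alternating series; the `zeta` helper and the cutoffs are my own choices.

```python
import math

def zeta(s, N=400):
    # Real zeta(s), s > 1: partial sum plus Euler-Maclaurin tail correction.
    head = sum(n ** -s for n in range(1, N + 1))
    return head + N ** (1 - s) / (s - 1) - 0.5 * N ** -s + s * N ** (-s - 1) / 12.0

m, theta = 2, math.pi / 2.0

# Cl_4(pi/2) = beta(4), from the alternating series sum_j (-1)^j/(2j+1)^4
cl4 = sum((-1) ** j / (2 * j + 1) ** 4 for j in range(2000))

# Left-hand side of Eq. (Cl2m), truncated where the terms are negligible
lhs = sum(math.factorial(2 * k - 1) * zeta(2 * k)
          * (theta / (2.0 * math.pi)) ** (2 * k)
          / math.factorial(2 * m + 2 * k - 1)
          for k in range(1, 40))

H = sum(1.0 / j for j in range(1, 2 * m))       # harmonic number H_{2m-1}
rhs = 0.5 * ((math.log(theta) - H) / math.factorial(2 * m - 1)
             - (-1) ** m * cl4 / theta ** (2 * m - 1)
             - sum((-1) ** k * zeta(2 * k + 1)
                   / (math.factorial(2 * m - 2 * k - 1) * theta ** (2 * k))
                   for k in range(1, m)))

assert abs(lhs - rhs) < 1e-10
```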
For instance, on taking $\,\theta = \pi/3\,$ in Eq.~\eqref{eq:Cl2m}, one finds \begin{eqnarray} 2 \, \sum_{k=1}^\infty{\frac{(2k-1)!}{(2m+2k-1)!} \; \frac{\,\zeta{(2k)}}{\,6^{2k}}} = \frac{\:\ln{(\pi/3)} -H_{2m-1}}{(2m-1)!} \,-(-1)^m \, \frac{\,\mathrm{Cl}_{2m}(\pi/3)}{\,(\pi/3)^{2m-1}} \nonumber \\ -\sum_{k=1}^{m-1}{(-1)^k \, \frac{\zeta{(2k+1)}}{(2m-2k-1)! \, ~ (\pi/3)^{2k}}} \, . \quad \end{eqnarray} For $\,m = 1$, this reduces to \begin{equation} \sum_{k=1}^\infty{\frac{\zeta{(2k)}}{k \, (2k+1)} \; \frac{1}{6^{2k}} } = \ln{\!\left(\frac{\pi}{3}\right)} -1 +\frac{3}{\pi} \: \mathrm{Cl}_{2}\!\left(\frac{\pi}{3}\right) , \label{eq:pi3m1} \end{equation} whereas for $\,m = 2\,$ one has \begin{eqnarray} \sum_{k=1}^\infty{\frac{\zeta{(2k)}}{k\,(2k+1)\,(2k+2)\,(2k+3)} \; \frac{1}{\,6^{2k}}} = \frac16 \, \ln{\!\left(\frac{\pi}{3}\right)} -\frac{11}{36} \,- \frac{27}{\,\pi^3} \: \mathrm{Cl}_{4}\!\left(\frac{\pi}{3} \right) + \frac{9}{\,\pi^2} \; \zeta{(3)} \, . \qquad \label{eq:pi3m2} \end{eqnarray} Similarly, for $\,\theta = \pi/4\,$ one has \begin{equation} \sum_{k=1}^\infty{\frac{\zeta{(2k)}}{k \, (2k+1)} \; \frac{1}{8^{2k}} } = \ln{\!\left(\frac{\pi}{4}\right)} -1 +\frac{4}{\,\pi} \; \mathrm{Cl}_{2}\!\left(\frac{\pi}{4}\right) \label{eq:pi4m1} \end{equation} for $\,m = 1\,$ and \begin{equation} \sum_{k=1}^\infty{\frac{\zeta{(2k)}}{k\,(2k+1)\,(2k+2)\,(2k+3)} \; \frac{1}{8^{2k}} } = \frac16 \, \ln{\!\left(\frac{\pi}{4} \right)} -\frac{11}{36} -\frac{64}{\,\pi^3} \: \mathrm{Cl}_{4}\!\left( \frac{\pi}{4} \right) +\frac{16}{\,\pi^2} \; \zeta{(3)} \label{eq:pi4m2} \end{equation} for $\,m = 2$. I could not find these formulae explicitly in the literature. \section{Rates of convergence} All zeta series investigated here belong to the class embraced by Theorem~\ref{teo:geral}. Their rate of convergence can be analyzed as follows. For convenience, denote by $\,\mathcal{S}_k\,$ the summand of the zeta series on the right-hand side of the formula in our Theorem~\ref{teo:geral}. 
By applying Stirling's formula, namely $\,k! \sim \left( k/e \right)^k \sqrt{2 \pi k}\,$, and noting that $\,\zeta{(2k)} \rightarrow 1\,$ as $\,k \rightarrow \infty$, one finds \begin{eqnarray} \mathcal{S}_k \sim \frac{(2k-1)^{2k-1}}{(2n+2k+1)^{2n+2k+1}} ~ e^{2n+2} \, \frac{\sqrt{2k-1}}{\sqrt{2n+2k+1}} ~ x^{2k} \nonumber \\ = \frac{1}{\left(1+\frac{2n+2}{2k-1}\right)^{2k-1}} ~ e^{2n+2} \, \frac{1}{\sqrt{1+\frac{2n+2}{2k-1}}} \, \frac{1}{(2k+2n+1)^{2n+2}} ~ x^{2k} \nonumber \\ \sim \frac{1}{e^{2n+2}} ~ e^{2n+2} \, \frac{1}{\sqrt{1+\frac{2n+2}{2k-1}}} \, \frac{1}{(2k+2n+1)^{2n+2}} ~ x^{2k} \nonumber \\ = \frac{1}{\sqrt{1+\frac{2n+2}{2k-1}}} \, \frac{1}{(2k+2n+1)^{2n+2}} ~ x^{2k} \nonumber \\ \sim \frac{1}{(2k+2n+1)^{2n+2}} ~ x^{2k} \sim \frac{1}{(2k)^{2n+2}} ~ x^{2k} \, , \end{eqnarray} where we have made use of $\,\lim_{\,y \rightarrow \infty} \left(1+\alpha/y \right)^y = e^\alpha\,$ and the binomial approximation $\,(1+y)^\alpha \approx 1+\alpha\,y\,$. Therefore, \begin{equation} \mathcal{S}_k = O \! \left(\frac{x^{2k}}{(2k)^{2n+2}} \right) \qquad (k \rightarrow \infty, \, n \in \mathbb{N}) \, , \end{equation} which is valid for any real $\,x\,$ with $\,|x| \le 1$. This allows for a direct comparison of rates of convergence for the zeta series investigated in this paper. For instance, the zeta series in Wilton's formula has $\, x = \frac12 \,$, hence $\,\mathcal{S}_k = O \! \left({2^{-2k} \: (2k)^{-2n-2}} \right) = O \! \left({2^{-2k-2n-2} \: k^{-2n-2}} \right) = O \! \left({2^{-2k} \: k^{-2n-2}} \right)$, whereas the zeta series in our Theorem~\ref{teo:zetaser2} has $\,x=\frac14\,$, hence $\,\mathcal{S}_k = O \! \left({4^{-2k} \: (2k)^{-2n-2}} \right) = O \! \left({4^{-2k-n-1} \: k^{-2n-2}} \right) = O \! \left({4^{-2k} \: k^{-2n-2}} \right)$. This is why the latter converges somewhat faster than the former. \begin{acknowledgements} The author would like to thank Mrs.~Claudionara de Carvalho for many interesting and useful discussions. \end{acknowledgements}
\section{\label{sec:introduction}Introduction} Despite huge advantages in terms of accuracy and systematic improvability, wavefunction-based quantum chemical methods are routinely used by only a small fraction of electronic structure theorists, in contrast to density functional theory (DFT) which dominates the community\cite{Weitao2012}. Nowhere is this more true than in the solid state, where applications of high-level quantum chemistry methods are only beginning to emerge in a recently growing field \cite{BGKA2013,Scuseria2001,Stoll2005,Schutz2007,Pisani2007,Kresse2009,Manby2009,Kresse2010,Gillan2010,Manby2010,Kresse2011,Schutz2011,Stoll2012,Paulus2012,Vandevondele2012,Manby2012}. The reason for this slow uptake is the computational cost of these methods, which generally scales as a high power of the system size, compared to the lower mean-field scaling of DFT. This is exacerbated in the solid state, where increasing the size of the supercell to converge finite size effects is far more costly than for mean-field counterparts. Much of this expense originates from the need to expand out the many-electron wavefunction in terms of anti-symmetrized one-particle functions of a specified basis set. This basis must then be expanded and generally extrapolated to near completeness to obtain accurate results and justify the use of the high level of correlation treatment. Although methods more familiar to the solid state, such as DFT\cite{Burke2013} and Diffusion Monte Carlo (DMC)\cite{Foulkes2001}, require a basis set, there is only a weak dependence since no many-electron wavefunctions are expanded in this basis. The difficulty with the expansion of many-electron wavefunctions as antisymmetric products of one-particle basis functions (Slater determinants) has been known since the early days of electronic structure theory, and is due to the short-ranged or `dynamic' correlation between electron pairs. 
As the electrons coalesce, a derivative discontinuity or `cusp' must arise, so that a divergence in the kinetic energy operator cancels an opposite one in the potential. Within an expansion of Slater determinants the exact cusp is never obtained, and a quantitatively correct linear form at small interelectronic distances only arises with large basis sets of high momenta. A description of these cusps was initially formulated by Kato\cite{Kato:CPAM10-151,Pack:JCP45-556,Ashcroft82,Sahni2003}, who found the wavefunction to be linear to first-order as a function of the interelectronic distance between the pairs ($r_{12}$). Moreover, the gradient of this linear behavior was found to be exactly one half (or one quarter for triplet pairs), regardless of the form of the rest of the potential in the system. Higher order terms in $r_{12}$ however are affected by the rest of the potential\cite{Tew08}. For many years, methods were developed which tried to exploit this knowledge of the form of the exact wavefunction in the small $r_{12}$ limit, but the methods which resulted, such as those utilizing exponentially correlated Gaussians\cite{Boys1960} and the transcorrelated method\cite{Handy69} among others, were generally expensive, plagued by many-electron integrals, and limited to systems of only a small size. A major breakthrough was achieved in 1985 by Kutzelnigg, who introduced into the wavefunction two-electron geminal functions which satisfied the electron cusps and augmented a traditional Slater determinant expansion\cite{Kutzelnigg:TCA68-445,Klopper87,Kutzelnigg:JCP94-1985}. The resulting wavefunction expansion was then used within the formulation of traditional quantum chemical methods, and crucially, an approximate resolution of identity (RI) was performed as a way to factorize the many-electron integrals into sums of products of at most two-electron quantities. 
A small set of these geminals dramatically improved the convergence of quantum chemical methods with respect to basis set size, since fewer high momenta functions were required for these energetically significant cusp regions. This dual basis of traditional determinants and strongly orthogonal geminals, and the methods for evaluating resultant expectation values, has been named F12 theory. In the intervening years this approach has matured, with important advances taking it from a promising technique to an indispensable tool for high-accuracy quantum chemical methods for large systems\cite{Hattig:CR112-4,Kong:CR112-75,Tew:BOOK2010,Werner:BOOK2010,Tenno12,Tenno12_2,Shiozaki2010}. These advances include the introduction of a complementary auxiliary basis set in which to perform the RI\cite{Klopper:JCP116-6397,Valeev:CPL395-190}, refinement of the approximations used in order to minimize the impact of the RI and maintain orbital invariance\cite{Kedzuch:IJQC105-929,Ten-no:JCP121-117,Werner:JCP126-164102,Tew:MP108-315,Manby05}, a more general function of the interelectronic coordinate to approximately capture longer range effects\cite{Ten-no:CPL398-56,Tew:JCP123-074101}, and the introduction of specially designed basis sets for optimal efficiency\cite{Peterson08,Tew09}. The result is a family of methods which share the intrinsic accuracy of the complete basis set (CBS) limit of their parent method, but which approach this limit far more rapidly, thereby reducing the cost of the method. Combining this with density fitting\cite{Manby03}, local approximations\cite{Tew11,Werner11}, and multireference methods\cite{Gdanitz:CPL210-253,Ten-no:CPL446-175,Shiozaki:JCP131-141103,Shiozaki:JCP134-034113,Shiozaki:JCP134-184104,Kedzuch:CPL511-418,Haunschild:CPL531-247,Torheyden:JCP131-171103,Kong:JCP135-214105,Kong:JCP133-174126,Booth2012,Werner13} has greatly extended the reach of quantum chemistry in recent years. 
All F12 approaches to date have taken place within the framework of a traditional atom-centered Gaussian basis set. Although these functions are ubiquitous in gas-phase molecular quantum chemistry, where their local nature generally suits the wavefunction, it is unclear whether these are well suited for extended systems, especially when the wavefunction is intrinsically delocalized. These systems have been traditionally studied in a discrete basis of plane waves, chosen such that the boundary conditions at the edges of the unit cell are fulfilled, although this is by no means the only choice in solids. However, a plane wave basis confers many advantages in the solid state. There is a single basis set parameter (the orbital kinetic energy cutoff), which allows the CBS limit to be approached systematically and straightforwardly, without the need for basis set optimization. These basis functions are also strictly orthogonal, and therefore no issues with linear dependencies occur as the basis increases, in contrast to Gaussian functions. However, for all these advantages of a plane wave basis, the features of electronic cusps are still missing, and are difficult to capture without including very high energy plane waves in the expansion, which dramatically increases the cost. This convergence has been found to have the same scaling behavior as the Gaussian expansion\cite{Shepherd2012_3,Kutzelnigg92}, though it generally requires many more functions to reach the complete basis limit. In this paper, we attempt to overcome these difficulties by combining a plane wave basis with the explicitly correlated F12 approach, and evaluate energies at the level of second-order M\o ller--Plesset (MP) theory to analyze the benefit. 
We first consider this approach for the 3D finite-electron uniform electron gas (UEG), which has recently received attention as a model system for wavefunction-based quantum chemistry~\cite{Ashcroft82,Grueneis2009,Shepherd2012_1,Shepherd2012_2,Shepherd2012_3,Shepherd2013,Pederiva2013}, as well as long being an important model, especially in the development of density functional theory\cite{Ceperley1980,Perdew1981}. As the simplest model for a fully-periodic metallic system, it has many advantages. The plane waves in the UEG are exact natural orbitals, but in addition they are also exact Hartree--Fock solutions, and kinetic energy eigenfunctions. This means that the generalized Brillouin condition (GBC) and the extended Brillouin condition (EBC) are exactly satisfied, which decouples the conventional and F12 energy contributions\cite{Klopper87,Klopper:JCP116-6397,Manby05}. In addition, all three-electron integrals have simple analytic forms, whose RI can be saturated completely with the addition of at most just a single auxiliary orbital. Tractable expressions for the electron repulsion integrals mean that extrapolation to the CBS limit is straightforward to derive and understand; these energies can be easily found and used as benchmarks~\cite{Shepherd2012_3}. We note in passing that the CBS limit is also well-defined for the MP2 energy of a finite system, even though the energy diverges in the thermodynamic limit~\cite{Kresse2010,Shepherd2013}. This is because the divergence is caused by low-momenta excitations in the large box limit, rather than the high-momenta basis functions responsible for converging the basis set incompleteness error. The simple model Hamiltonian also allows us to calculate the exact MP1 wavefunction for the two electron UEG analytically, whose expansion about $r_{12} =0$ we find to take a different form than the traditional Slater-type correlation factor now established in molecular F12 theory. 
We use this to compare the Slater-type form to a new correlation factor which we find to be optimal for the UEG, and which may have advantages in other solid-state (or even potentially molecular) systems. Finally, we apply the method to the most widely studied solid-state system with quantum chemical methods, the rocksalt lithium hydride crystal, to check the transferability of the findings into realistic {\em ab initio} solid state systems. \section{\label{sec:theory}Theory} This section outlines the theoretical methods that are employed in the present work to study the uniform electron gas simulation cell Hamiltonian. We briefly recapitulate second-order M\o ller-Plesset perturbation (MP2) theory, explicit correlation and the Hylleraas functional. Furthermore, we elaborate on the use of a plane-wave basis set in the many-electron wavefunction expansion and its implications for explicitly correlated methods. Analytical expressions for the integrals required in the above methods are derived and techniques to treat finite size effects as well as singularities are discussed. Finally, a new correlation factor, which we term the Yukawa--Coulomb correlation factor, is derived. \subsection{\label{ssec:mp2}Second-order M\o ller-Plesset perturbation theory} In MP2 theory, electron correlation is treated using many-body Rayleigh-Schr\"odinger perturbation theory, taking the $N$-electron Fock operator as the unperturbed Hamiltonian $H^{(0)}$ \cite{Moller}. The Hartree--Fock wave function $|\Psi^{(0)} \rangle$ both defines and is defined by the Fock operator. Formally, it is the ground state Slater determinant with occupied orbitals that are eigenstates of the one-electron Fock operator \begin{equation} F \vert i \rangle = \epsilon_i \vert i \rangle . \end{equation} In practical computations, however, the $\vert i \rangle$ are rarely true eigenstates of the Fock operator, since they are expressed in a finite and, in general, insufficient one-electron basis. 
Nevertheless, these Hartree--Fock orbitals define $H^{(0)}$. For the UEG, which is the main focus of this work, the Hartree--Fock orbitals are determined by symmetry. They are therefore exact eigenstates and the generalized and extended Brillouin conditions are fulfilled. For the UEG, $|\Psi^{(0)} \rangle$ is the exact ground state of the zeroth-order Hamiltonian $H^{(0)}$. In MP2 theory the standard route to obtaining the first-order wavefunction is to expand it in the basis of excited Slater determinants $|\Psi_{ij}^{ab}\rangle$: \begin{equation} |\Psi^{(1)}\rangle = \frac{1}{2}\sum_{ij}^{\rm occ.}\sum_{ab}^{\rm virt.} t_{ij}^{ab} E_{ij}^{ab}|\Psi^{(0)} \rangle, \label{eq:psi1} \end{equation} where $i$, $j$ and $a$, $b$ refer to occupied and unoccupied spatial Hartree--Fock orbitals respectively, from the full set of $M$ one-electron basis functions. $E_{ij}^{ab}$ is the spin-free two electron excitation operator. The coefficients of the excited determinants $t_{ij}^{ab}$ are readily calculated and read \begin{equation} t_{ij}^{ab}=\frac{\langle ij | ab \rangle}{\epsilon_i + \epsilon_j -\epsilon_a -\epsilon_b}. \end{equation} In the above expression, the $\epsilon_n$ correspond to the one-electron Hartree--Fock eigenvalues, and $\langle ij | ab \rangle$ are the conventional electron repulsion integrals. The calculation of two-electron integrals will be outlined in Sec.~\ref{ssec:integralevaluation}. In M\o ller-Plesset perturbation theory the second-order energy is the leading-order contribution to the correlation energy; it is obtained by calculating $\langle\Psi^{(0)}|H-H^{(0)}|\Psi^{(1)}\rangle$, which simplifies to \begin{equation} E_c^{\rm MP2}=\sum_{ij}^{\rm occ.}\sum_{ab}^{\rm virt.} \frac{\langle ij | ab \rangle( 2\langle ab | ij \rangle - \langle ba | ij \rangle)} {\epsilon_i + \epsilon_j -\epsilon_a -\epsilon_b}. \end{equation} Both the energy and the equations that determine the first-order wavefunction separate into decoupled equations for each occupied pair. 
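The structure of this MP2 sum can be made concrete with a deliberately minimal Python sketch: two electrons in a periodic box, where the only occupied orbital is the ${\bf k}=0$ plane wave, momentum conservation restricts the excitations to pairs $({\bf q},-{\bf q})$, and the direct and exchange integrals coincide, so the spin-adapted factor $(2\langle ab|ij\rangle - \langle ba|ij\rangle)$ reduces to a single $v(q)$. For simplicity the sketch uses bare kinetic-energy eigenvalues in the denominators rather than full Hartree--Fock ones and ignores all finite-size corrections; the function name and parameter values are illustrative assumptions, not the implementation used in this work.

```python
import math
from itertools import product

def mp2_two_electron_ueg(box_length, n_max):
    # Two electrons in the k = 0 plane wave; excitations are pairs (q, -q),
    # with Coulomb integral v(q) = 4*pi/(Omega*q^2) and (kinetic) denominator
    # e_0 + e_0 - e_q - e_{-q} = -q^2.  The spin-adapted MP2 sum then
    # collapses to  E = -sum_{q != 0} v(q)^2 / q^2  over the basis.
    omega = box_length ** 3                    # cell volume
    dk = 2.0 * math.pi / box_length           # momentum-space grid spacing
    energy = 0.0
    for n in product(range(-n_max, n_max + 1), repeat=3):
        if n == (0, 0, 0):
            continue                           # q = 0 removed by the background
        q2 = dk * dk * (n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
        v = 4.0 * math.pi / (omega * q2)       # <00|q,-q> Coulomb integral
        energy += v * v * (2.0 - 1.0) / (-q2)  # direct and exchange coincide
    return energy

e_small = mp2_two_electron_ueg(box_length=6.0, n_max=3)
e_large = mp2_two_electron_ueg(box_length=6.0, n_max=6)
```

Enlarging the basis (`n_max`) makes the estimate steadily more negative, with increments that decay only slowly with the cutoff, which is precisely the basis-set convergence problem the F12 correction addresses.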
The pair correlation energy can alternatively be obtained by optimizing the first-order pair correlation function $\vert u_{ij}\rangle$ to minimize the Hylleraas energy functional \begin{equation} E_c^{ij} = \min [ \langle u_{ij} | F_1 + F_2 - \epsilon_i - \epsilon_j | u_{ij} \rangle + 2 \langle u_{ij} | \frac{1}{r_{12}} | ij \rangle ]. \label{eq:hylleraas} \end{equation} This expression is useful in explicitly correlated methods. In conventional MP2 theory, the spinless first-order pair function and its contravariant counterpart have the expansions \begin{align} | u_{ij} \rangle &= \frac{1}{2} \sum_{ab}^{\rm virt.} t_{ij}^{ab} | ab \rangle, \\ \langle u_{ij} \vert &= \frac{1}{2} \sum_{ab}^{\rm virt.} \langle ab |(2t_{ij}^{ab} - t_{ij}^{ba}). \label{eq:uij} \end{align} \begin{figure} \includegraphics[width=8.0cm,clip=true]{ExactWavefunc.eps} \caption{\label{fig:psitwoelec} The MP1 wavefunction for the two-electron uniform electron gas at $r_s = 5$~a.u. with increasing plane wave orbital basis sets, up to a total basis of 15,625 plane waves. One electron is fixed at the center of the box, and the other is moved in a line through the coalescence point. This illustrates the slow convergence to the exact wavefunction as the interelectronic distance tends to zero. The Hartree--Fock wavefunction shows no variation with interelectronic distance, as only the average electronic potential is felt across the box. This demonstrates the same qualitative cusp convergence in plane waves as shown elsewhere for molecular systems in Gaussian basis sets\cite{Hattig:CR112-4,Kong:CR112-75}. } \end{figure} Figure~\ref{fig:psitwoelec} visualizes the zeroth- (HF) and first-order wavefunctions using the example of two electrons in a box with a homogeneous neutralizing background charge. The wavefunctions are plotted with respect to the interelectronic distance $r_{12}$.
While the first-order wavefunction accounts for electronic correlation by decreasing the probability of finding both electrons at short interelectronic distances, the zeroth-order wavefunction does not exhibit this so-called correlation hole centered at the electron coalescence point ($r_{12}$=0), and is depicted by a flat line. We note that the first-order wavefunction converges only very slowly to the cusp as the number of virtual one-electron orbitals used in the expansion of $| \Psi^{(1)} \rangle$ increases [see Eq.~(\ref{eq:psi1})]. \subsection{\label{ssec:f12}Explicitly correlated MP2} As outlined above and in Refs.~\onlinecite{Shepherd2012_3,Hattig:CR112-4,Kong:CR112-75}, the MP1 wavefunction converges frustratingly slowly to the complete basis set limit, with an $M^{-1}$ dependence. Concomitantly the correlation energy converges very slowly and usually requires a large one-electron basis set of high momenta, resulting in significant computational effort. Indeed, as shown in figure~\ref{fig:psitwoelec}, a large fraction of the basis is needed to describe the many-electron wavefunction in the vicinity of the electron-electron coalescence points. The first-order cusp condition defines the shape of the many-electron wavefunction close to the electron coalescence of singlet pairs~\cite{Kato:CPAM10-151} \begin{equation} \left . \frac{\partial \left ( \frac{\Psi({\bf r}_{ij})}{\Psi(0)} \right ) }{\partial {\bf r}_{ij}} \right |_{{\bf r}_{ij}=0}=\frac{1}{2} \quad . \end{equation} The above equation implies that the many-electron wavefunction exhibits a derivative discontinuity at ${\bf{r}}_{12}=0$, with linear behavior away from this point regardless of the external potential of the system, as seen in figure~\ref{fig:psitwoelec}.
Explicitly correlated methods fulfill the first-order cusp conditions exactly by augmenting the ansatz for the many-electron wavefunction with two-electron terms that depend on the interelectronic distance of each electron pair. In explicitly correlated MP2-F12 theory\cite{Werner:JCP126-164102,Bachorz:JCC32-2492} the first-order pair functions $| u_{ij}^{\rm F12} \rangle$ are expanded as \begin{equation} | u_{ij}^{\rm F12} \rangle = \frac{1}{2} \sum_{ab}^{\rm virt.} t_{ij}^{ab} | ab \rangle + t_{ij} \hat{Q}_{12} f_{12} \vert ij \rangle, \label{eq:psif12} \end{equation} where $t_{ij}$ are geminal amplitudes determined by the universal cusp conditions, and $f_{12}$ is the correlation factor that models the shape of the correlation hole and is typically chosen to be a Slater-type function\cite{Ten-no:CPL398-56} \begin{equation} f_{12}^{\rm STG}= e^{-\gamma r_{12}} . \label{eq:STGCorrFac} \end{equation} This choice ensures that the geminal functions included in the basis are linear with respect to $r_{12}$ in the vicinity of the electron-electron cusp and decay to zero at large $r_{12}$, where the wavefunction is expected to vary smoothly and is generally well-represented by the conventional determinantal basis. We note that it is more common in Gaussian-basis implementations of explicitly correlated methods to approximate this functional form by a fixed linear combination of Gaussian-type correlation factors to simplify the integral evaluation over this kernel; this is not a problem here (see Sec.~\ref{ssec:integralevaluation}), and an exact Slater-type geminal function is used. The form of the correlation factor is an empirical choice: its decay at longer range is motivated by intuition rather than an underlying theory\cite{Ten-no:CPL398-56}. However, it has been shown to be accurate compared to various other alternatives in molecular systems\cite{Tew:JCP123-074101,Tew:JCP125-094302}.
In molecules, it is likely that the rapid exponential decay of the correlation factor to zero (the lack of long-range structure) is an advantage, because it separates out the long-range behavior, which cannot be expected to be modeled by a simple function of $r_{12}$ due to the anisotropy of the external potential\cite{Tew:JCP125-094302}. Furthermore, in Ref.~\onlinecite{Tew:JCP123-074101} the optimal correlation factor was shown not to increase monotonically to a constant, but rather to reach a maximum and then decrease, because the remaining molecular electron density is reduced at large interelectronic distances. Here, however, we will also investigate a new correlation factor derived from perturbation theory that we term the Yukawa-Coulomb correlation factor, \begin{equation} f_{12}^{\rm YC}= \frac{2}{\gamma}\frac{1-e^{-\gamma r_{12}}}{r_{12}}. \label{eq:yc} \end{equation} The MP2-F12 theory outlined in this work is, however, independent of the specific form of $f_{12}$. We therefore return to the discussion of $f_{12}^{\rm YC}$ in Sec.~\ref{sec:yukcoulcorr}. The projector $\hat{Q}_{12}$ enforces strong orthogonality between $|\Psi^{(1)} \rangle$ and $|\Psi^{(0)} \rangle$, and it also enforces orthogonality between the standard and F12 contributions to the first-order wave function: \begin{equation} \hat{Q}_{12}=(1-\hat{O}_1)(1-\hat{O}_2)-\hat{V}_1\hat{V}_2, \label{eq:proj} \end{equation} where \begin{equation*} \hat{O}_1=\sum_i^{\rm occ.} |r'_1 \rangle \langle r'_1 | i \rangle \langle i | r_1 \rangle \langle r_1 | , { ~~~~} \hat{V}_1=\sum_a^{\rm virt.}|r'_1 \rangle \langle r'_1 | a \rangle \langle a | r_1 \rangle \langle r_1 |. \end{equation*} In F12 theory it is convenient to obtain the second-order correlation energy by optimizing $| u_{ij}^{\rm F12} \rangle$ to minimize the Hylleraas functional Eq.~(\ref{eq:hylleraas}).
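The contrast between the two correlation factors can be illustrated numerically. The sketch below (with $\gamma=1$ in arbitrary units, our choice for illustration) checks that the geminals $-f_{12}/(2\gamma)$ built from both factors reproduce the singlet cusp slope of $1/2$ at $r_{12}\rightarrow 0$, while only the Yukawa-Coulomb factor keeps a $2/(\gamma r_{12})$ tail at long range.

```python
import math

# Compare the Slater-type and Yukawa-Coulomb correlation factors defined above.

def f_stg(r, g):
    return math.exp(-g * r)

def f_yc(r, g):
    # -expm1(-x) = 1 - exp(-x), accurate for small gamma*r
    return (2.0 / g) * (-math.expm1(-g * r)) / r

def cusp_slope(f, f_at_zero, g, h=1e-6):
    # finite-difference slope of the geminal -f/(2*gamma) at r12 = 0
    return (-f(h, g) / (2.0 * g) + f_at_zero / (2.0 * g)) / h

g = 1.0
slope_stg = cusp_slope(f_stg, 1.0, g)  # f_stg(0) = 1
slope_yc = cusp_slope(f_yc, 2.0, g)    # f_yc(r) -> 2 as r -> 0

print(slope_stg, slope_yc)             # both close to the cusp value 1/2
# long range: STG is exponentially small, while r * f_yc -> 2/gamma
print(f_stg(50.0, g), f_yc(50.0, g) * 50.0)
```

The factor $-1/(2\gamma)$ is exactly the diagonal geminal amplitude quoted later in the text, so the slope check confirms that both choices satisfy the singlet cusp condition by construction.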
The F12 contributions involve non-factorizable many-electron integrals, which increase the computational cost of evaluating $E_c^{\rm MP2-F12}$ compared to $E_c^{\rm MP2}$. The calculation of many-electron integrals can, however, be approximated by the introduction of resolutions of identity using the orbital basis and an additional orthogonal complementary auxiliary basis set (CABS). In this work we employ unoccupied Hartree--Fock orbitals as the CABS. For calculations on the UEG, since the EBC is fulfilled, the contributions involving $t_{ij}^{ab}$ do not depend on the F12 terms\cite{Klopper87,Manby05,Hattig:CR112-4}. The energy then decomposes into the standard MP2 correlation energy and an F12 correction \begin{equation} E_c^{\rm MP2-F12}(M) =E_c^{\rm MP2}(M) + E_c^{\rm F12}(M). \end{equation} The dependence of the above energies on $M$ indicates that these energies are calculated using a finite orbital basis that is composed of $M$ plane waves with a kinetic energy below a specified cutoff. The complete basis set limit is approached for $M\rightarrow\infty$. The expressions for $E_c^{\rm F12}$ have been derived elsewhere\cite{Werner:JCP126-164102,Hattig:CR112-4,Kong:CR112-75,Shiozaki2010}, and are given here as \begin{align} E_c^{\rm F12}(M)=&2 V_{mn}^{ij} ( 2 t_{mn}^{ij} - t_{nm}^{ij} ) \nonumber \\ & + t_{mn}^{kl} B_{kl}^{ij} ( 2 t_{mn}^{ij} - t_{nm}^{ij} ) \nonumber \\ & - (\epsilon_m + \epsilon_n) t_{mn}^{kl} X_{kl}^{ij} (2 t_{mn}^{ij} - t_{nm}^{ij}). \label{eq:ecf12} \end{align} In the above expression, the indices $i$, $j$, $k$, $l$, $m$ and $n$ refer to occupied HF orbitals, and the Einstein summation convention is assumed.
$t_{kl}^{ij}$ are the geminal amplitudes that fulfill the first-order cusp condition and are kept fixed at the diagonal orbital-invariant ansatz of Ten-no\cite{Ten-no:JCP121-117}, which exactly satisfies the first-order cusp conditions of singlet and triplet electron pairs, \begin{align} t_{ii}^{ii}=-\frac{1}{2}\gamma^{-1} \label{eq:tii}\\ t_{ij}^{ij}=-\frac{3}{8}\gamma^{-1} \label{eq:tij}\\ t_{ij}^{ji}=-\frac{1}{8}\gamma^{-1} \label{eq:tji}. \end{align} \begin{table}[t] \caption{ Index notation for different orbital subspaces of the complete one-electron basis. CABS refers to the complementary auxiliary basis set\cite{Valeev:CPL395-190}, and OBS refers to the orbital basis set over which the conventional MP1 amplitudes are defined. } \label{tab:index} \begin{ruledtabular} \begin{tabular}{lccc} & Occ. OBS orbitals & Virt. OBS orbitals & CABS \\ \hline $i,j,k,l,m,n$ & Yes & No & No \\ $a,b$ & No & Yes & No \\ $p,q$ & Yes & Yes & No \\ $P,Q,R$ & Yes & Yes & Yes \\ $a'$ & No & No & Yes \\ \end{tabular} \end{ruledtabular} \end{table} The intermediates $V$, $X$ and $B$ are defined as \begin{align} V_{mn}^{ij}=&Y_{mn}^{ij}-R_{mn}^{pq} v_{pq}^{ij}-R_{mn}^{la'} v_{la'}^{ij}-R_{mn}^{a'l} v_{a'l}^{ij} \label{eq:vintdef}\\ X_{mn}^{ij}=&\bar{R}_{mn}^{ij}-R_{mn}^{pq} R_{pq}^{ij}-R_{mn}^{la'} R_{la'}^{ij}-R_{mn}^{a'l} R_{a'l}^{ij} \label{eq:xintdef} \\ B_{mn}^{ij}=&{\tau}_{mn}^{ij} + \hat{S}_{12} \left (\frac{1}{2} \hat{S}_{H} \bar{R}_{mP}^{ij} h_n^P \right . \nonumber\\ &-R_{mn}^{PQ} k_P^R R_{RQ}^{ij} - R_{mn}^{Pk} f_P^Q R_{Qk}^{ij} \nonumber \\ &+R_{mn}^{ka'} f_k^l R_{la'}^{ij} - R_{mn}^{pa} f_p^q R_{qa}^{ij} \nonumber \\ &\left . -\hat{S}_{H} R_{mn}^{ka'} f_k^P R_{Pa'}^{ij} -\hat{S}_{H} R_{mn}^{a'b} f_{a'}^p R_{pb}^{ij} \right ). \end{align} For this work the $B$ intermediate is calculated using approximation C~\cite{Kedzuch:IJQC105-929,Noga2008}. Table~\ref{tab:index} summarizes the meaning of the above indices.
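The index structure of Eq.~(\ref{eq:ecf12}) with the fixed diagonal ansatz can be spelled out as explicit loops. In the sketch below the intermediates $V$, $X$ and $B$ are filled with dummy numbers purely to illustrate the contraction pattern; only the amplitude values are taken from the equations above.

```python
# Pure-Python transcription of the F12 energy contraction (Einstein sums
# written out as loops). V, X, B hold made-up values; t_amp implements the
# fixed diagonal-ansatz amplitudes quoted above for gamma = 1.

nocc = 2
gamma = 1.0
eps = [-0.5, -0.3]   # dummy occupied orbital energies

def t_amp(m, n, i, j):
    """Fixed geminal amplitudes t_{mn}^{ij} of the diagonal ansatz."""
    if m == n == i == j:
        return -1.0 / (2.0 * gamma)
    if (m, n) == (i, j):
        return -3.0 / (8.0 * gamma)
    if (m, n) == (j, i):
        return -1.0 / (8.0 * gamma)
    return 0.0

def V(m, n, i, j): return 0.01 * (1 + m + n + i + j)     # dummy intermediate
def X(m, n, i, j): return 0.001 * (1 + m * n + i * j)    # dummy intermediate
def B(m, n, i, j): return 0.002 * (2 + m + i)            # dummy intermediate

occ = range(nocc)
e_f12 = 0.0
for m in occ:
    for n in occ:
        for i in occ:
            for j in occ:
                tbar = 2.0 * t_amp(m, n, i, j) - t_amp(n, m, i, j)
                e_f12 += 2.0 * V(m, n, i, j) * tbar
                for k in occ:
                    for l in occ:
                        e_f12 += t_amp(m, n, k, l) * B(k, l, i, j) * tbar
                        e_f12 -= (eps[m] + eps[n]) * t_amp(m, n, k, l) * X(k, l, i, j) * tbar
print(e_f12)
```

In a production code these sums would of course be performed as tensor contractions rather than nested loops; the loops simply make the covariant/contravariant index placement of Eq.~(\ref{eq:ecf12}) explicit.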
$f_P^Q$, $h_P^Q$ and $k_P^Q$ refer to the Fock, Hartree and exchange matrices, respectively. We note that $f_P^Q=h_P^Q-k_P^Q$. The following section outlines their evaluation for the UEG. $Y_{mn}^{ij}$, $R_{mn}^{ij}$, $\bar{R}_{mn}^{ij}$, ${\tau}_{mn}^{ij}$ and $v_{mn}^{ij}$ correspond to two-electron integrals defined as \begin{align} Y_{mn}^{ij}=& \left \langle \phi_m \phi_n \left | f_{12}v_{12} \right | \phi_i \phi_j \right \rangle \label{eq:yukawar} \\ R_{mn}^{ij}=& \left \langle \phi_m \phi_n \left | f_{12} \right | \phi_i \phi_j \right \rangle \label{eq:f12r} \\ \bar{R}_{mn}^{ij}=&\left \langle \phi_m \phi_n \left | {f}_{12}^2 \right | \phi_i \phi_j \right \rangle \label{eq:squf12r} \\ {\tau}_{mn}^{ij}=&\left \langle \phi_m \phi_n \left | (\nabla_1 f_{12})^2 \right | \phi_i \phi_j \right \rangle \label{eq:taur} \\ v_{mn}^{ij}=& \left \langle \phi_m \phi_n \left | v_{12} \right | \phi_i \phi_j \right \rangle. \label{eq:coulr} \end{align} $f_{12}$ and $v_{12}$ are the correlation factor and the electron repulsion kernel, respectively. We will return to the evaluation of the above integrals in reciprocal space in Sec.~\ref{ssec:integralevaluation}; this evaluation is performed in the Vienna ab-initio simulation package ({\tt VASP})\cite{Kresse96}. The operators $\hat{S}_{12}$ and $\hat{S}_{H}$ symmetrize four-index quantities such that \begin{equation} \hat{S}_{12} T_{mn}^{ij}= T_{mn}^{ij} + T_{nm}^{ji} \end{equation} and \begin{equation} \hat{S}_{H} T_{mn}^{ij}= T_{mn}^{ij} + T_{ij}^{mn}. \end{equation} We stress that the above expressions hold for general systems with real as well as complex electron repulsion integrals, so that the introduction of $k$-point symmetry in {\em ab initio} systems follows naturally. \subsection{\label{ssec:pwbasis}The homogeneous electron gas in a plane wave basis set} In this work we seek to apply explicitly correlated second-order M\o ller-Plesset perturbation theory to a finite-size (insulating) uniform electron gas model.
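The action of the symmetrizers $\hat{S}_{12}$ and $\hat{S}_{H}$ amounts to simple index permutations of a four-index quantity. A small dictionary-backed sketch with dummy entries makes the permutations and the resulting symmetries explicit (the storage scheme is our illustration, not the implementation used in the paper).

```python
import itertools

# T[(m, n, i, j)] stands for T_{mn}^{ij}; the entries are dummy numbers.
dim = 2
T = {idx: 0.1 * (idx[0] + 2 * idx[1] + 4 * idx[2] + 8 * idx[3])
     for idx in itertools.product(range(dim), repeat=4)}

def s12(t):
    # (S12 T)_{mn}^{ij} = T_{mn}^{ij} + T_{nm}^{ji}
    return {(m, n, i, j): t[(m, n, i, j)] + t[(n, m, j, i)] for (m, n, i, j) in t}

def sh(t):
    # (SH T)_{mn}^{ij} = T_{mn}^{ij} + T_{ij}^{mn}
    return {(m, n, i, j): t[(m, n, i, j)] + t[(i, j, m, n)] for (m, n, i, j) in t}

S12T = s12(T)
SHT = sh(T)
# S12T is invariant under the simultaneous swap (m,n)<->(n,m), (i,j)<->(j,i);
# SHT is invariant under exchanging the upper and lower index pairs.
```

These are exactly the symmetries exploited when assembling the $B$ intermediate above.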
The $N$-electron homogeneous electron gas simulation-cell Hamiltonian reads \begin{equation} \hat{H}=-\sum_\alpha \frac{1}{2} \nabla^2_\alpha + \sum_{\alpha, \beta} \frac{1}{2}\hat{v}_{\alpha \beta}, \end{equation} where $\alpha$ and $\beta$ are electron indices and the two-electron Ewald interaction $\hat{v}_{\alpha \beta}$ is given by \begin{equation} \hat{v}_{\alpha \beta}=\frac{1}{\Omega} \sum_{{\bf G}} \frac{4\pi}{{\bf G}^2}e^{i {\bf G} ({\bf r}_\alpha-{\bf r}_\beta)} , \end{equation} and $\Omega$ refers to the volume of the real-space simulation cell. For all calculations in the present work, we employ a cubic real-space unit cell with 54 electrons unless stated otherwise. The reciprocal lattice vectors $\bf G$ are defined as \begin{equation} {\bf G}=\frac{2 \pi}{L} \left( \begin{array}{c} n \\ m \\ l \end{array} \right) \end{equation} where $n$, $m$, and $l$ are integer numbers and $L$ is the real-space box length such that $L^3=\Omega$. The one-electron orbitals are chosen to be plane waves \begin{equation} \phi_n({\bf r})=\frac{1}{\sqrt{\Omega}} e^{i {{\bf k}_{n} {\bf r}}}, \label{eq:pworb} \end{equation} where $\bf k$ refers to the unique reciprocal lattice vector of the orbital. The one-electron Hartree--Fock Hamiltonian becomes diagonal in this orbital basis and reads \begin{align} \left \langle \phi_n \left | H^{(0)} \right | \phi_m \right \rangle =& f_n^m = \delta_{n,m} (h_n^m-k_n^m)=\epsilon_n, {~~\rm where} \nonumber \\ h_n^n=&\frac{1}{2}{\bf k_n}^2 {~~\rm and} \\ k_n^n=&- \sum_i \left \langle ni \left | v_{12} \right | in \right \rangle. \label{eq:exchange} \end{align} \subsection{\label{ssec:integralevaluation}Evaluation of the integrals in reciprocal space} It can be advantageous to calculate the electron repulsion integrals in reciprocal space if a plane-wave basis set is employed. 
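The plane-wave orbital basis above is fully determined by a kinetic energy cutoff, and a closed-shell filling of 54 electrons corresponds to 27 doubly occupied orbitals. A minimal sketch (variable names are ours) enumerates the reciprocal lattice vectors $(n,m,l)$, groups them into shells of equal $|{\bf G}|^2$, and recovers this count:

```python
import itertools

# Enumerate G = (2*pi/L)(n, m, l) by the integer triple (n, m, l) and group
# into shells of equal |G|^2 (in units of (2*pi/L)^2).
nmax = 3
shells = {}
for n, m, l in itertools.product(range(-nmax, nmax + 1), repeat=3):
    shells.setdefault(n * n + m * m + l * l, []).append((n, m, l))

sizes = [len(shells[g2]) for g2 in sorted(shells)]
cumulative = [sum(sizes[:k + 1]) for k in range(4)]
print(sizes[:4], cumulative)  # [1, 6, 12, 8] [1, 7, 19, 27]
```

The first four shells ($|{\bf G}|^2 = 0, 1, 2, 3$) hold 1, 6, 12 and 8 plane waves, so the closed-shell fillings contain 1, 7, 19 and 27 orbitals; the last of these accommodates the 54-electron cell used throughout this work, and matches the 27-orbital basis referred to later.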
This reduces the computational effort from a six-dimensional integral in real space to a three-dimensional sum in reciprocal space over the Fourier components of the given electron pair codensities \begin{align} \langle ij | v_{12} | ab \rangle=\sum_{\bf G} C_{ia{\bf G}} \tilde{{v}}_{\bf G} C^*_{bj \bf G}, \label{eq:erirec} \end{align} where \begin{equation} \sum_{{\bf G}} C_{ia \bf G} e^{i {\bf G} {\bf r}}=\phi_i^*({\bf r})\phi_a({\bf r}). \label{eq:olapden} \end{equation} The Fourier components of the integral kernels in Eqs.~(\ref{eq:f12r}) and (\ref{eq:coulr}) read \begin{align} \tilde{f}_{\bf G}^{\rm STG}=\mathcal{FT}\left ( f_{12}^{\rm STG} \right )=&\frac{4 \pi}{({\bf G}^2+\gamma^2)^2} \label{eq:f12} \\ \tilde{f}_{\bf G}^{\rm YC}=\mathcal{FT}\left ( f_{12}^{\rm YC} \right )=&\frac{4 \pi}{({\bf G}^2+\gamma^2){\bf G}^2} \label{eq:corryc} \\ \tilde{v}_{\bf G}=\mathcal{FT}\left ( v_{12} \right )=&\frac{4 \pi}{{\bf G}^2}. \label{eq:potcoul} \end{align} We note that if the orbitals correspond to plane waves, as is the case in the UEG, momentum conservation applies: $\langle ij | v_{12} | ab \rangle$ is non-zero only if ${\bf k}_i+{\bf k}_j={\bf k}_a+{\bf k}_b$. Moreover, in the UEG all orbital codensities, and therefore all two-electron integrals, are defined uniquely by the momentum transfer vector ${\bf k}_i-{\bf k}_a$ such that \begin{equation} \langle ij | v_{12} | ab \rangle=\tilde{{v}}_{{\bf k}_i-{\bf k}_a}. \label{eq:uegeri} \end{equation} \subsubsection{\label{ssec:singularities}Treatment of singularities in reciprocal potentials} The reciprocal kernels in Equations~(\ref{eq:corryc}) and (\ref{eq:potcoul}) diverge at ${\bf G}=0$. Although these singularities only become problematic for the integrals $\langle vw | vw \rangle$ (due to the orthogonality of the orbitals), a direct numerical evaluation of the ${\bf G}=0$ contribution to the electron repulsion integrals according to Eq.~(\ref{eq:uegeri}) is not possible.
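The divergence structure of the Yukawa-Coulomb kernel can be made explicit through a partial-fraction identity: Eq.~(\ref{eq:corryc}) separates into the divergent Coulomb kernel of Eq.~(\ref{eq:potcoul}) and a bounded Yukawa-type remainder, $4\pi/(({\bf G}^2+\gamma^2){\bf G}^2) = (4\pi/\gamma^2)\,(1/{\bf G}^2 - 1/({\bf G}^2+\gamma^2))$. A quick numerical check of this identity (with an arbitrary $\gamma$ of our choosing):

```python
import math

g = 0.7  # gamma, arbitrary value for the check

def f_yc_kernel(G2):
    # 4*pi / ((G^2 + gamma^2) * G^2), the Yukawa-Coulomb kernel above
    return 4.0 * math.pi / ((G2 + g * g) * G2)

def coulomb_minus_yukawa(G2):
    # (4*pi/gamma^2) * (1/G^2 - 1/(G^2 + gamma^2))
    return 4.0 * math.pi / (g * g) * (1.0 / G2 - 1.0 / (G2 + g * g))

max_dev = max(abs(f_yc_kernel(G2) - coulomb_minus_yukawa(G2))
              for G2 in (0.01, 0.1, 1.0, 4.0, 25.0))
print(max_dev)  # rounding-level only
```

The ${\bf G}=0$ divergence of the Yukawa-Coulomb kernel is therefore exactly the Coulomb one, which is why the singularity treatment of the following subsection applies to both kernels.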
The singularities are, however, integrable, and well-known solutions to this problem have already been proposed~\cite{Gygi1986}. We will employ a technique that introduces a Gaussian charge distribution $C_{\bf G}$ whose integral over reciprocal space with the corresponding kernels can be calculated analytically as \begin{equation} \frac{1}{\Omega} \sum_{\bf G} C_{\bf G} \tilde{v}_{\bf G} \rightarrow \int d{\bf G} e^{-\alpha {\bf G}^2} \tilde{v}_{\bf G},{~~\rm where~} C_{\bf G}= e^{-\alpha {\bf G}^2}. \end{equation} $\alpha$ is chosen such that the charge distribution decays to zero at the boundary of the employed plane wave grid and is constant in the vicinity of ${\bf G}=0$. Adding and subtracting this Gaussian charge distribution in $C_{ia{\bf G}}C^*_{bj{\bf G}}$ on the right-hand side of Eq.~(\ref{eq:erirec}) gives \begin{align} &\frac{1}{\Omega}\sum_{{\bf G}} (C_{nn{\bf G}}C^*_{mm{\bf G}}-C_{\bf G}+C_{\bf G}) \tilde{{v}}_{\bf G} \nonumber \\ =&\frac{1}{\Omega}\underbrace{\sum_{{\bf G}} (C_{nn{\bf G}}C^*_{mm{\bf G}}-C_{\bf G})\tilde{{v}}_{\bf G}}_{{\bf G}=0~{\rm contribution~vanishes}} +\frac{1}{\Omega}\underbrace{\sum_{{\bf G}} C_{\bf G}\tilde{{v}}_{\bf G}}_{\rm analytical~integration}. \label{eq:erirec2} \end{align} The difference between the Gaussian and orbital charge distributions vanishes for ${\bf G}=0$, removing the ${\bf G}=0$ contribution from the sum in the first term on the right-hand side of the above equation. The last term on the right-hand side can be integrated analytically. Depending on the kernel, we obtain the following results for the integrals.
\begin{widetext} \begin{align} \frac{1}{\Omega} \sum_{{\bf G}} C_{\bf G}\tilde{{v}}_{\bf G} \rightarrow \int d{\bf G} \frac{4 \pi e^{-\alpha {\bf G}^2} }{{\bf G}^2}=& 2\pi \sqrt{\frac{\pi}{\alpha}} \\ \frac{1}{\Omega} \sum_{{\bf G}} C_{\bf G}\tilde{{f}}^{\rm YC}_{\bf G} \rightarrow \int d{\bf G} \frac{4 \pi e^{-\alpha {\bf G}^2} }{({\bf G}^2+\gamma^2){\bf G}^2}=& \frac{2 \pi ^2 e^{\alpha \gamma ^2} \text{Erfc}\left(\sqrt{\alpha } \gamma \right)}{\gamma }. \end{align} \end{widetext} Practically speaking, the ${\bf G}=0$ component is computed once per kernel and stored; this one-time effort does not require significant optimization. \subsubsection{\label{ssec:conv}Convolution of integral kernels in reciprocal space} We compute the reciprocal kernels for the integrals in Eqs.~(\ref{eq:yukawar}), (\ref{eq:squf12r}) and (\ref{eq:taur}) using the convolution theorem with \begin{align} \mathcal{FT}\left ( f_{12}v_{12} \right ) &=\frac{1}{\Omega}\sum_{{\bf G}'} \tilde{v}_{{\bf G-G}'} \tilde{f}_{{\bf G}'} \label{eq:yukawaconv} \\ \mathcal{FT}\left ( {f}_{12}^2 \right ) &=\frac{1}{\Omega}\sum_{{\bf G}'} \tilde{f}_{{\bf G-G}'} \tilde{f}_{{\bf G}'} \label{eq:squf12conv} \\ \mathcal{FT}\left ( (\nabla_1 f_{12})^2 \right ) &=\frac{1}{\Omega}\sum_{{\bf G}'} \tilde{f}_{{\bf G-G}'} \tilde{f}_{{\bf G}'}({\bf G}' \cdot {\bf G}' - {\bf G}\cdot{\bf G}') \label{eq:tauconv}. \end{align} The integral kernels are calculated using the convolution theorem in order to treat finite-size effects in the $B$ intermediate consistently and obtain the correct limiting behavior for the F12 contributions away from the large box-size limit. We stress that $E_c^{\rm F12}$ must vanish in the complete basis set limit ($M\rightarrow\infty$) in a non-trivial way, since the conventional determinant amplitudes recover the CBS energy in this limit. Specifically, the contributions of the $V$, $X$ and $B$ intermediates to $E_c^{\rm F12}$ must all vanish individually.
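The two analytic Gaussian-regularized integrals above can be verified numerically. Reading them as radial integrals (the $4\pi/{\bf G}^2$ kernel combined with the ${\bf G}^2\,dG$ measure, which is the convention that reproduces the quoted closed forms), simple midpoint quadrature suffices; the values of $\alpha$ and $\gamma$ below are arbitrary choices for the check, not parameters from the calculations.

```python
import math

a, g = 0.5, 1.0        # alpha and gamma, arbitrary values for the check
dG, Gmax = 1e-4, 40.0  # quadrature step and (effective-infinity) upper bound

I1 = I2 = 0.0
G = 0.5 * dG           # midpoint rule
while G < Gmax:
    w = 4.0 * math.pi * math.exp(-a * G * G) * dG
    I1 += w                     # Coulomb kernel: 4*pi/G^2 times G^2 dG
    I2 += w / (G * G + g * g)   # Yukawa-Coulomb kernel: extra 1/(G^2+gamma^2)
    G += dG

ref1 = 2.0 * math.pi * math.sqrt(math.pi / a)
ref2 = 2.0 * math.pi ** 2 * math.exp(a * g * g) * math.erfc(math.sqrt(a) * g) / g
print(I1 - ref1, I2 - ref2)  # both differences are tiny
```

This is purely a sanity check of the closed forms; in the implementation the ${\bf G}=0$ component is, as stated above, computed once per kernel and stored.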
In the following we will discuss this behavior for the $V$ intermediate, which reads \begin{equation} V_{mn}^{ij}=Y_{mn}^{ij}-R_{mn}^{pq} v_{pq}^{ij}-R_{mn}^{la'} v_{la'}^{ij}-R_{mn}^{a'l} v_{a'l}^{ij}. \end{equation} The first term on the right-hand side of the above equation must cancel with the others as the employed basis set approaches completeness. The contraction over the orbital indices in the last three terms corresponds to a resolution of identity between the Coulomb potential (present in the $v_{pq}^{ij}$ integrals) and the correlation factor (present in the $R_{mn}^{pq}$ integrals). Thus it is important that the three different integral kernels ($\frac{1}{r}$, $e^{-\gamma r}$ and $\frac{e^{-\gamma r}}{r}$) are treated in a consistent manner, which is achieved via the convolution theorem. \subsection{\label{sec:yukcoulcorr}A new MP2-F12 correlation factor: Yukawa-Coulomb} The optimal correlation factor maximizes the convergence rate of the MP2-F12 correlation energy to the CBS limit with respect to the employed orbital basis set. All MP2-F12 implementations have so far been confined to molecular systems, where different choices of correlation factors have been investigated but did not yield an improvement over the conventional Slater-type correlation factor\cite{Tew:JCP123-074101}. In this work we seek to investigate a correlation factor motivated by analytic results for two electrons in a box with a neutralizing and uniform background charge\cite{Gaskell61,Ceperley78,Ashcroft82}. The amplitudes of the first-order wavefunction for two electrons with opposite spins in a box are given by \begin{equation} t_{ii}^{ab}=\frac{\langle ii | ab \rangle}{\epsilon_i + \epsilon_i -\epsilon_a -\epsilon_b}, \label{eq:mp1amplitudes} \end{equation} where $\vert i \rangle = \Omega^{-1/2}$ is the spatial orbital at the $\Gamma$ point, ${\bf G}=0$.
In this case the kinetic energy of the occupied orbitals is zero, and momentum conservation of all two-electron excitations requires that ${\bf k}_b=-{\bf k}_a$. Therefore the denominator of Eq.~(\ref{eq:mp1amplitudes}) can be approximated by \begin{equation} \epsilon_i + \epsilon_i -\epsilon_a -\epsilon_b \approx -{\bf k}_a^2+ \tilde{\gamma}. \label{eq:denomapprox} \end{equation} In the above equation, we have approximated the contributions of the exchange $k_a^a$ [see equation~(\ref{eq:exchange})] to the HF one-electron energies by a constant $\tilde{\gamma}$. We note that in the limit ${\bf k}_a \rightarrow \infty$, the denominator will be dominated by contributions of the kinetic energy, whereas the exchange contributions to $\epsilon_a$ will decay as $1/{\bf k}_a^2$. Inserting the definition of the electron repulsion integrals and noting that ${\bf k}_a$ in this instance is also equal to the momentum transfer vector of the excitation, the above approximation gives \begin{equation} t_{ii}^{ab}=-\frac{1}{\Omega} \frac{4 \pi}{{\bf k}_a^2 ({\bf k}_a^2 - \tilde{\gamma})}. \end{equation} Summing over all orbital products in the reciprocal lattice then allows an analytic inverse Fourier transform of the electron pair function to real space, yielding the first-order pair function \begin{equation} \vert u_{ii} \rangle = - \frac{2}{\gamma^2}\frac{1-e^{-\gamma r_{12}}}{r_{12}} \frac{1}{\Omega} \end{equation} with $\gamma^2 = -\tilde \gamma$ (the constant $\tilde \gamma$ is negative, since it derives from the exchange contributions to the one-electron energies). The corresponding correlation factor consistent with Eq.~(\ref{eq:psif12}) and Eq.~(\ref{eq:tii}) is \begin{equation} f^{\rm YC}_{12}=\frac{2}{\gamma}\frac{1-e^{-\gamma r_{12}}}{r_{12}}. \label{eq:yccorrf} \end{equation} The above correlation factor, which we denote the Yukawa-Coulomb correlation factor, becomes linear in $r_{12}$ for $r_{12}\rightarrow0$ and decays to zero for large $r_{12}$.
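The inverse Fourier transform step can be checked numerically: the radial back-transform of $4\pi/({\bf G}^2({\bf G}^2+\gamma^2))$ is $(1-e^{-\gamma r})/(\gamma^2 r)$, which is exactly the Yukawa-Coulomb shape of the pair function (up to the prefactors discussed in the text). The quadrature parameters below are our own choices for this check.

```python
import math

def back_transform(r, g, dG=1e-3, Gmax=200.0):
    # f(r) = 1/(2 pi^2 r) * int_0^inf G sin(G r) ftilde(G) dG,
    # with ftilde(G) = 4*pi / (G^2 (G^2 + gamma^2))
    s, G = 0.0, 0.5 * dG
    while G < Gmax:
        s += math.sin(G * r) * 4.0 * math.pi / (G * (G * G + g * g)) * dG
        G += dG
    return s / (2.0 * math.pi ** 2 * r)

g, r = 1.0, 2.0
numeric = back_transform(r, g)
analytic = (1.0 - math.exp(-g * r)) / (g * g * r)
print(numeric, analytic)  # agree to quadrature accuracy
```

The integrand is regular at $G=0$ (the $\sin(Gr)/G$ factor is bounded), so the divergence of the kernel itself causes no difficulty in this radial form.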
We note that the Yukawa-Coulomb correlation factor is similar to the two-body Jastrow factor used in previous studies of the homogeneous electron gas with transcorrelated methods\cite{Umezawa2004}. This correlation factor may equivalently be derived directly in real space, starting from the differential equation for the first-order wave function for doubly occupied pairs in a UEG, \begin{align} (F_1 + F_2 - \epsilon_i - \epsilon_j) Q_{12} f(r_{12}) \frac{e^{i 2{{\bf k}_{i} {\bf s}}} }{\Omega} + Q_{12} \frac{1}{r_{12}} \frac{e^{i 2{{\bf k}_{i} {\bf s}}}}{ \Omega} = 0 \end{align} where ${\bf s} = ({\bf r}_1 + {\bf r}_2)/2$ and we have assumed that the first-order pair function can be exactly represented by the product of the $ij$ orbital pair with an isotropic function of $r_{12}$. Since the GBC and EBC are fulfilled, $[Q_{12},F_1]=0$ and we can therefore solve for $f(r_{12})$ without considering $Q_{12}$. Approximating $F_1 + F_2 \approx T_1 + T_2 + \tilde \gamma = -\nabla^2_{r_{12}} - \frac{1}{4} \nabla^2_s + \tilde \gamma$ gives \begin{align} (-\nabla^2_{r_{12}} + k_i^2 - \epsilon_i - \epsilon_j + \tilde \gamma) f(r_{12}) + \frac{1}{r_{12}} = 0 \end{align} which has the solution $f(r_{12}) = -f_{12}^{\rm YC}/2\gamma$ with $\gamma^2 = k_i^2 - 2\epsilon_i + \tilde \gamma$. The main difference between Eq.~(\ref{eq:yccorrf}) and the Slater-type function is in the long-range behavior: $f^{\rm YC}$ decays to zero as $1/r_{12}$ for $r_{12}\rightarrow\infty$, as opposed to the exponential decay of the Slater-type correlation factor in Eq.~(\ref{eq:STGCorrFac}) commonly used in F12 theories. The same long-range $1/r_{12}$ form was deduced in Ref.~\onlinecite{Ten-no:JCP121-117} from consideration of the correct van der Waals description of a minimal-basis helium dimer.
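That $f(r_{12}) = -f_{12}^{\rm YC}/2\gamma$ solves the differential equation above (with the constants collected into $\gamma^2$) can be confirmed by a finite-difference check; $\gamma$ and the evaluation point below are arbitrary choices for the test.

```python
import math

g = 1.3  # gamma, arbitrary value for the check

def f(r):
    # f(r12) = -f12_YC/(2*gamma) = -(1 - exp(-g*r)) / (g^2 * r)
    return -(1.0 - math.exp(-g * r)) / (g * g * r)

def radial_laplacian(fun, r, h=1e-4):
    # for a spherically symmetric function, lap f = (1/r) d^2(r*f)/dr^2
    u = lambda x: x * fun(x)
    return (u(r + h) - 2.0 * u(r) + u(r - h)) / (h * h) / r

r = 0.8
residual = -radial_laplacian(f, r) + g * g * f(r) + 1.0 / r
print(residual)  # vanishes to finite-difference accuracy
```

The cancellation is exact analytically: the Laplacian of $f$ produces $e^{-\gamma r}/r$, which combines with $\gamma^2 f = -(1-e^{-\gamma r})/r$ and the inhomogeneity $1/r$ to give zero.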
However, since this long-range part of the correlation function is smooth and can be captured in single-reference theories by basis functions of angular momentum $L_{\mathrm{occ}}+1$, it was not deemed necessary there to include this asymptotic behavior in the form of the correlation factor. In this paper, we will show clear improvements from the Yukawa-Coulomb correlation factor in the case of the UEG, where the correlation is isotropic. It remains to be seen, however, whether any advantages are transferable to {\em ab initio} solid state or extended molecular systems, where the longer-range behavior in the geminals may be projected out by the determinantal expansion in the presence of significant inhomogeneity in the potential. Although we use simple perturbative arguments to motivate correlation functions for the uniform electron gas -- the prototypical example of a metallic system, where simple perturbation theory will fail -- it should be noted that for a two-electron system the model is highly insulating, and metallic behavior and divergent results only arise on approach to the thermodynamic limit~\cite{Shepherd2013}. In addition, this long-range $1/r_{12}$ tail for the pair correlation function can also be motivated from the random phase approximation in this thermodynamic limit, where Gaskell\cite{Gaskell61,Ceperley78} found the {\em exact} long-range behavior of the uniform electron gas to be \begin{equation} \lim_{r_{12} \rightarrow \infty} u(r_{12}) \propto r_{12}^{-(D-1)/2} , \end{equation} where $D$ is the dimension of the model, and $e^{-u(r_{12})}$ then gives the exact solution to the two-body Schr{\"o}dinger equation. This confirms that the form of the correlation factor given in Eq.~(\ref{eq:yccorrf}) is {\em exactly} correct for both long and short distances in the two-body correlation, although not necessarily in between.
This is also true in the strongly correlated regime, although higher-body effects become increasingly important there. This knowledge has informed the choice of Jastrow factors within the quantum Monte Carlo community\cite{Uschmajew2012}, whose simplest functional form, \begin{equation} u(r_{12}) = \frac{r_{12}}{2(1+br_{12})} , \end{equation} also has the correct long-ranged $1/r_{12}$ behavior, and is used as standard for two-body correlation in both molecular and extended systems\cite{Handy69,Schmidt1990,Ceperley84,Umrigar08}. These Jastrow factors, which can be constructed with increasing numbers of variational parameters, additionally in higher-particle-number coordinates\cite{Hoggan2013}, capture all of the correlation effects in variational Monte Carlo methods\cite{Foulkes2001}. \section{\label{sec:results}Results} This section discusses MP2 and MP2-F12 results for the finite simulation-cell uniform electron gas model. Section~\ref{sec:mp2} recapitulates the well-known basis set extrapolation procedures used in MP2 theory to obtain accurate complete basis set limit reference energies. Section~\ref{sec:mp2f12comp} investigates the convergence of the MP2-F12 correlation energy with respect to the employed computational parameters, such as the size of the CABS space, the variational parameter $\gamma$ used in the correlation factors, and the orbital basis set. Having established CABS convergence, section~\ref{sec:optgam} examines the variation in the optimal parameter $\gamma$ governing the extent of the correlation hole as the electron density of the system is changed. Section~\ref{sec:optpair} explores the potential benefit of a pairwise optimization of the correlation factor in order to accelerate the correlation energy convergence with respect to the employed basis set even further. Finally, section~\ref{sec:nonpar} investigates the relative accuracy of finite basis set MP2 and MP2-F12 correlation energies as a function of the electron density.
\subsection{\label{sec:mp2}Basis set convergence in MP2 theory} \begin{figure} \includegraphics[width=8.0cm,clip=true]{MP2_rs5_orbconv.eps} \caption{\label{fig:MP2_rs5_conv} Convergence of the MP2 correlation energy for the 54 electron UEG simulation cell ($r_{\rm s}=5.0$~bohr) with respect to the employed number of orbitals $M$. The inset shows that the correlation energy behaves as $1/M$ using a very large orbital basis set, which allows for the extrapolation to the complete basis set limit.} \end{figure} Accurate complete basis set limit MP2 correlation energies are an indispensable prerequisite for the investigation of the quality of our MP2-F12 results. To this end we outline the calculation of the MP2 complete basis set limit energies below. Figure~\ref{fig:MP2_rs5_conv} shows the convergence of the MP2 correlation energy with respect to the employed basis set for 54 electrons in a box at a density corresponding to $r_{\rm s}=5.0$~bohr, a typical electron density for, e.g., potassium metal. As derived and discussed thoroughly in Ref.~\onlinecite{Shepherd2012_3}, the MP2 correlation energy converges only as $1/M$ to the complete basis set limit, where $M$ corresponds to the number of plane waves. This rate of convergence results directly from the slow convergence of the wavefunction to the first-order cusp condition. In the present work, we employ this functional behavior ($1/M$) to extrapolate to the complete basis set limit ($M\rightarrow \infty$). The extrapolations were carried out using several MP2 energies obtained for orbital cutoffs yielding 5887 to 9171 orbitals. The inset in figure~\ref{fig:MP2_rs5_conv} confirms that the MP2 correlation energies for these basis sets converge as $1/M$ to the complete basis set limit. In the following we will employ extrapolated complete basis set limit energies as reliable reference values for MP2-F12 results.
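The $1/M$ extrapolation described above amounts to an ordinary linear least-squares fit of $E(M) = E_{\rm CBS} + a/M$. In the sketch below, the basis-set sizes span the 5887--9171 range quoted in the text, but the intermediate sizes and all energies are synthetic numbers constructed to lie exactly on a $1/M$ line, so that the fit can be checked against the known answer.

```python
# Least-squares fit of E(M) = E_cbs + a/M, the extrapolation used above.

def extrapolate_cbs(Ms, Es):
    xs = [1.0 / M for M in Ms]
    npts = len(xs)
    xbar = sum(xs) / npts
    ybar = sum(Es) / npts
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, Es))
             / sum((x - xbar) ** 2 for x in xs))
    return ybar - slope * xbar, slope   # (E_cbs estimate, slope a)

e_cbs_true, a_true = -0.530, 12.0       # made-up values for the synthetic data
Ms = [5887, 6351, 7153, 8241, 9171]     # endpoints from the text; rest invented
Es = [e_cbs_true + a_true / M for M in Ms]

e_cbs, a_fit = extrapolate_cbs(Ms, Es)
print(e_cbs, a_fit)  # recovers the synthetic E_cbs and slope
```

With real data the residual scatter of the fit gives a useful estimate of the extrapolation uncertainty.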
\subsection{\label{sec:mp2f12comp}Computational parameters in MP2-F12 theory} \subsubsection{CABS convergence} \begin{figure} \includegraphics[width=8.0cm,clip=true]{ncabs_convergence.eps} \caption{\label{fig:ncabs_conv} Convergence of the MP2-F12 correlation energy for the 54 electron UEG simulation cell ($r_{\rm s}=5$~bohr) with respect to the employed number of basis functions in the orbital basis and the CABS. A Slater-type correlation factor and $\gamma=0.67$~\AA$^{-1}$ were used. The number of basis functions in the orbital space was fixed to 27 (upper panel) and 81 (lower panel). The CABS convergence rate does not change as the number of virtual orbitals is increased. } \end{figure} To avoid the explicit evaluation of three- and four-electron integrals in F12 calculations, a complementary auxiliary basis set (CABS) is introduced\cite{Valeev:CPL395-190}. By insertion of the resolution of identity (RI), many-electron integrals can be replaced with products of two-electron integrals contracted over the union of the orbital basis and the CABS (\textit{e.g.} $\langle mnl | f_{12}f_{23} | lji \rangle \rightarrow \sum_P\langle mn | f_{12} | lP \rangle \langle Pl | f_{12} | ji \rangle $). In this work, the CABS space is trivially constructed as the set of plane waves with higher momenta than those in the orbital basis, and is therefore automatically orthogonal. Figure~\ref{fig:ncabs_conv} demonstrates a rapid convergence of the MP2-F12 energy with respect to the number of CABS orbitals used in the RI, and crucially, the rate of this convergence is independent of the orbital basis size. This is because the occupied orbitals do not acquire components of higher momentum as the orbital basis increases, and therefore the quality of the RI for a fixed number of electrons is independent of the size of the virtual basis and only depends on the size of the complete basis set (the union of OBS and CABS).
In addition, we note that the MP2-F12 energy changes by less than 50~meV if the number of basis functions in the RI increases from 587 to 1503. As indicated in section~\ref{sec:introduction}, due to conservation of momentum, the RI for three-electron integrals in the uniform electron gas can in fact be saturated with a single function, obviating the need for a full RI in these cases. However, in order to maintain generality, this approach will not be considered further here. This invariance with respect to the orbital basis size will not hold strictly for {\em ab initio} systems, and so the question of convergence with respect to the auxiliary basis will need to be readdressed at a later date. In addition, even for the uniform electron gas, the convergence will change with the number of electrons, as the formal requirement for saturation of the auxiliary basis for three-electron integrals includes plane waves with momenta up to $3 k_{\rm occ}$, where $k_{\rm occ}$ is the maximum momentum of the occupied orbitals\cite{Klopper87,Kutzelnigg:JCP94-1985}. However, the errors may be sufficiently small that the RI basis can be truncated well before this limit, and the computational cost of increasing the basis is only $\mathcal{O}[M^2]$. This issue will be returned to in the context of {\em ab initio} systems at a later date. \subsubsection{$\gamma$ optimization} \begin{figure} \includegraphics[width=8.0cm,clip=true]{gammasweep_ycc_stgc.eps} \caption{\label{fig:optgamma} The variation of the MP2-F12 energy with respect to the $\gamma$ parameter in the Slater- and Yukawa-Coulomb correlation factors, with the optimal $\gamma$ giving the lowest MP2-F12 energy. We employed 123 orbitals with 724 CABS basis functions for the 54 electron system at a density of $r_{\rm s}=5$~bohr.
} \end{figure} As discussed in Section~\ref{sec:theory}, the $f_{12}^{\rm STG}$ and $f_{12}^{\rm YC}$ correlation factors depend on the parameter $\gamma$, which describes how quickly the correlation factor decays to zero with increasing inter-electronic distance, modeling the physical extent of the correlation hole. The Hylleraas energy functional in Eq.~(\ref{eq:hylleraas}) is variational and allows optimization of $\gamma$ through energy minimization. Figure~\ref{fig:optgamma} shows the dependence of the MP2-F12 energy on $\gamma$ for the Slater and Yukawa-Coulomb correlation factors. The Yukawa-Coulomb and Slater-type correlation factors minimize the MP2-F12 correlation energies at $\gamma=1.04$~\AA$^{-1}$ and $\gamma=0.67$~\AA$^{-1}$, respectively. It is instructive to compare the behavior of the two correlation factors by contrasting their series expansions about $r_{12}=0$, which give \begin{align} - \frac{f_{12}^{\rm STG}(r_{12})}{\gamma}=- \frac{1}{\gamma}+r_{12}-\frac{\gamma r_{12}^2}{2} + \mathcal{O}(r_{12}^3) \label{eq:taylorstg} \\ - \frac{f_{12}^{\rm YC}(r_{12})}{\gamma}= - \frac{2}{\gamma}+r_{12}-\frac{\gamma r_{12}^2}{3} + \mathcal{O}(r_{12}^3) \label{eq:taylorycc}. \end{align} The zeroth-order terms on the right hand side of the above equations are constant. Constant shifts in the correlation factors are, however, always removed by the projector $\hat{Q}_{12}$ defined in Eq.~(\ref{eq:proj}) and yield no contribution to the MP2-F12 correlation energy. The first-order terms agree in both correlation factors, and are linear as required by the first-order cusp condition. Inserting the optimized $\gamma$ values into the coefficients of the second-order terms in $r_{12}$ from Eqs.~(\ref{eq:taylorstg}) and (\ref{eq:taylorycc}) yields $0.347$~\AA$^{-1}$ and $0.335$~\AA$^{-1}$ for the Yukawa-Coulomb and Slater-type correlation factors, respectively.
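The expansions in Eqs.~(\ref{eq:taylorstg}) and (\ref{eq:taylorycc}) are easily checked numerically. The sketch below assumes the conventional form $f_{12}^{\rm STG}(r_{12})=e^{-\gamma r_{12}}$, which is the form consistent with Eq.~(\ref{eq:taylorstg}), and reproduces the second-order coefficients quoted above:

```python
import math

# Slater-type geminal, assumed form f_STG(r) = exp(-gamma*r), consistent
# with the quoted series: -f_STG/gamma = -1/gamma + r - gamma*r^2/2 + O(r^3).
gamma = 0.67  # optimized value for the Slater-type factor (in 1/Angstrom)
for r in (1e-3, 1e-2):
    exact = -math.exp(-gamma * r) / gamma
    truncated = -1 / gamma + r - gamma * r**2 / 2
    assert abs(exact - truncated) < gamma**2 * r**3  # remainder is O(r^3)

# Second-order coefficients at the optimized gammas quoted in the text:
print(round(gamma / 2, 3))   # Slater-type:    gamma/2 = 0.335
print(round(1.04 / 3, 3))    # Yukawa-Coulomb: gamma/3 = 0.347
```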
This comparison shows that both correlation factors in fact give very similar results at the cusp position for the present system. However, the Yukawa-Coulomb correlation factor yields a minimum energy that is approximately 100~meV lower than that of the Slater-type correlation factor, indicating, as expected, its superior suitability for the system. We also note that the MP2-F12 energy becomes identical for both correlation factors in the limits $\gamma\rightarrow\infty$ and $\gamma\rightarrow 0$. For $\gamma\rightarrow\infty$, the MP2-F12 energy converges to the conventional MP2 energy in the respective orbital basis set. In the limit $\gamma\rightarrow0$, both correlation factors become $r_{12}$. As such, the latter limit corresponds to the MP2-R12 correlation energy. \subsubsection{Basis set convergence} \begin{figure} \includegraphics[width=8.0cm,clip=true]{MP2+F12_rs5_conv.eps} \caption{\label{fig:MP2F12_rs5_conv} Convergence of the MP2 and MP2-F12 correlation energies for the 54 electron UEG simulation cell ($r_{\rm s}=5.0$~bohr) with respect to the employed number of orbitals $M$ using the optimum $\gamma$ (see figure~\ref{fig:optgamma}). } \end{figure} As a further test of the quality of the MP2-F12 method we consider the convergence with respect to the orbital basis using the Slater-type correlation factor, and compare to the extrapolated CBS limit results outlined in Section~\ref{sec:mp2} and in Ref.~\onlinecite{Shepherd2012_3}. Figure~\ref{fig:MP2F12_rs5_conv} confirms that the correct CBS limit correlation energy (30.61~eV) is recovered in the large basis limit of our MP2-F12 implementation. As anticipated, we find that the rate of convergence of the MP2-F12 results is greatly improved compared to conventional MP2 theory. The inset in figure~\ref{fig:MP2F12_rs5_conv} shows that the MP2-F12 correlation energy converges approximately as $1/M^{\frac{7}{3}}$, significantly faster than the $1/M$ convergence of MP2.
This can be rationalized from the optimal convergence of a principal expansion of the wavefunction with terms linear in $r_{12}$, which can be shown to be $(L+1)^{-7}$ where $L$ is the largest momentum in the expansion\cite{Kutzelnigg:JCP94-1985}. \begin{figure} \includegraphics[width=8.0cm,clip=true]{F12_orbconv_ycc_stgc.eps} \caption{\label{fig:MP2F12_conv_both} Convergence of the MP2-F12 correlation energy for the Slater- and Yukawa-Coulomb correlation factors at $r_{\rm s}=5.0$~bohr with respect to the employed number of orbitals, compared to the MP2 energy and the CBS limit. MP2-F12 converges far more quickly than MP2. } \end{figure} Figure~\ref{fig:MP2F12_conv_both} shows the convergence of the MP2 and MP2-F12 correlation energies with respect to the employed basis set for the Slater-type and Yukawa-Coulomb correlation factors. We stress that a logarithmic scale is used on the horizontal axis. As expected, both correlation factors converge to the correct CBS limit. Furthermore, the Yukawa-Coulomb correlation factor exhibits a slightly faster rate of convergence, indicating that the $1/r$ decay of $f^{\rm YC}$ captures longer-ranged, important correlation effects that are neglected by the exponentially decaying Slater-type correlation factor. The results shown in figure~\ref{fig:MP2F12_conv_both} suggest that MP2-F12 allows for a reduction of the size of the orbital basis by approximately an order of magnitude, and often more. Even though more investigation is required, and this factor will certainly vary between systems, this suggests that savings in the orbital space could be on the whole larger than those generally achieved for molecular systems within a Gaussian orbital basis.
\subsection{\label{sec:optgam}Variation of $\gamma_{\rm opt}$ with electron density} \begin{figure} \includegraphics[width=8.0cm,clip=true]{optgamma_ycc_stgc.eps} \caption{\label{fig:gamma_vs_rs} Change in the optimal $\gamma$ parameter for the Slater- and Yukawa-Coulomb correlation factors as the density of the electron gas (parameterized by $r_{\rm s}$) is varied, for the 54 electron simulation cell. } \end{figure} The physical extent of the correlation hole will change with the density of electrons, which in the electron gas model we are considering decreases with increasing $r_{\rm s}$. Therefore, we expect the optimal $\gamma$ for the correlation factors to increase for higher densities. This is indeed observed, as can be seen in figure~\ref{fig:gamma_vs_rs}, which shows an approximately linear relationship between the optimal $\gamma$ and $1/r_{\rm s}$ for both correlation factors. This linear relationship allows for the determination of an approximately optimal $\gamma$ in advance of any calculation, without the need for an explicit optimization of the parameter with respect to the MP2-F12 energy. \subsection{\label{sec:optpair}Pairwise $\gamma$ optimization} \begin{figure} \includegraphics[width=8.0cm,clip=true]{gammasweep_pairwise.eps} \caption{\label{fig:gammasweep_pairwise} Pairwise optimization of $\gamma$ for the Slater-type correlation factor, shown for a core-core, a core-valence and a valence-valence electron pair at $r_{\rm s}=5.0$~bohr. } \end{figure} We now investigate the potential improvement in the basis set convergence rate of MP2-F12 obtained by optimizing the correlation factor for each pair of electrons. The correlation factor in MP2-F12 theory is known to depend on the orbital eigenvalues\cite{Tew08}, and indeed our derivation of $f^{\rm YC}_{12}$ for doubly occupied pairs also reveals a dependence of $\gamma$ on $\epsilon_i$, although this dependence is weak since it is partially canceled by $k_i^2$.
Since MP2 theory is an independent electron pair approximation, one is free to use a different correlation factor for each electron pair. Figure~\ref{fig:gammasweep_pairwise} shows the F12 correlation energy contributions as a function of $\gamma$ for three different classes of electron pairs: (i) a core-core electron pair, (ii) a core-valence electron pair and (iii) a valence-valence electron pair. These classes are defined by the kinetic energy of the electrons, i.e. their plane wave momenta, rather than by their density, since all plane waves have a uniform density across the computational cell. Core and valence orbitals correspond to the plane wave orbitals with zero and the highest occupied kinetic energy, respectively, for the present system of 54 electrons in a cubic box. The energy contributions are variational with respect to $\gamma$ and the respective minima are depicted by vertical lines. The optimal $\gamma$ is found to be larger for core-core electron correlation than for valence-valence and core-valence electron correlation. As such, it would seem beneficial to employ pairwise-optimized correlation factors. However, the contribution of the core-core correlation energy in the present system is small compared to the contribution of the valence-valence electron pair energy. Furthermore, the additional correlation energy gained by the optimized correlation factor for the core-core electron pair is almost negligible. We therefore conclude that a pairwise optimization of the electron correlation factor is not a particularly worthwhile pursuit for the uniform electron gas. Furthermore, this observation indicates that the remaining errors in the finite-basis MP2-F12 calculations using optimized $\gamma$ values arise from the violation of higher-order cusp conditions.
\subsection{\label{sec:nonpar}Relative errors in plane wave MP2-F12 theory} Although a rapid convergence of the absolute correlation energy with respect to the employed basis set is advantageous for the study of real solid state systems, it can be equally important that the \emph{rate of convergence} does not change significantly over the investigated coordinate space. The latter allows for the calculation of properties such as lattice constants, bond lengths or reaction energies in the complete basis set limit without having to converge the underlying absolute correlation energies: since the absolute errors are relatively constant, their cancellation yields accurate energy differences. In the present system, the errors in the MP2(-F12) correlation energies over a range of electron densities at a fixed basis set size provide a good test case for the issues described above. Figure~\ref{fig:MP2F12_vs_rs} shows the MP2(-F12) correlation energy as a function of $r_{\rm s}$ in the complete basis set limit and for a range of finite basis sets. We find that the correlation energy increases in the limit of higher densities, and that the finite as well as the complete basis set limit results exhibit the same qualitative behavior for increasing $r_{\rm s}$. \begin{figure} \includegraphics[width=8.0cm,clip=true]{rs_orb_sweep.eps} \caption{\label{fig:MP2F12_vs_rs} MP2(-F12) correlation energies obtained using the optimal $\gamma$ over a range of densities given by the Wigner-Seitz radius $r_{\rm s}$ for 54 correlated electrons, compared to the CBS result. } \end{figure} However, a more instructive plot is shown in figure~\ref{fig:absNPE_MP2.eps}, where the errors compared to the CBS result are given at each electron density. This shows that the non-parallelity errors (the difference between the maximum and minimum basis set errors over the electron densities considered) in finite-basis conventional MP2 converge frustratingly slowly.
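The non-parallelity error used here is just the spread of the basis set error over the scanned densities; as a small sketch (the error values below are hypothetical, for illustration only):

```python
def non_parallelity_error(basis_set_errors):
    """Spread (max - min) of the basis set error over the scanned densities."""
    return max(basis_set_errors) - min(basis_set_errors)

# Hypothetical basis set errors (eV) of a finite-basis calculation,
# one entry per r_s value scanned.
errors_mp2 = [2.9, 2.1, 1.5, 1.1, 0.9]
print(non_parallelity_error(errors_mp2))  # spread of about 2 eV
```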
Employing 203 orbitals yields MP2 non-parallelity errors of approximately 2~eV over this density range, which roughly corresponds to the range of realistic solid-state electron densities. \begin{figure} \includegraphics[width=8.0cm,clip=true]{absNPE_MP2.eps} \caption{\label{fig:absNPE_MP2.eps} Basis set error of conventional MP2 correlation energies compared to the CBS limit, as a function of the Wigner-Seitz radius $r_{\rm s}$.} \end{figure} In contrast to conventional MP2, MP2-F12 exhibits non-parallelity errors that converge much faster with respect to the basis set size. Figures~\ref{fig:F12_NPE_stgc.eps} and \ref{fig:F12_NPE_ycc.eps} show the MP2-F12 errors compared to the complete basis set limit over the same range of densities, for the Slater-type and Yukawa-Coulomb correlation factors respectively. In contrast to the conventional MP2 result, 203 plane-wave orbitals suffice to obtain non-parallelity errors below 100~meV in the correlation energy over this density range, a reduction in the relative errors by over an order of magnitude for the same basis size. We note that the non-parallelity errors for all finite basis set results correspond to a relative over-correlation at lower densities, where longer-ranged, non-dynamic correlation is more important and the basis set convergence is therefore faster. The only exception to this observation is the non-parallelity of the MP2-F12 energy using the Slater-type correlation factor with 81 orbitals, as shown in figure~\ref{fig:F12_NPE_stgc.eps}. In this case, the MP2-F12 basis set error exhibits a minimum at $r_{\rm s}=3$~bohr. We believe that this indicates that the Slater-type correlation factor is less efficient at lower densities, where the long-range behavior of the correlation factor is energetically more significant.
\begin{figure} \includegraphics[width=8.0cm,clip=true]{F12_NPE_stgc.eps} \caption{\label{fig:F12_NPE_stgc.eps} Basis set error of 54 electron MP2-F12 correlation energies compared to the CBS limit as a function of the Wigner-Seitz radius $r_{\rm s}$, for the Slater-type geminal correlation factor. From about 203 orbitals, the non-parallelity errors are almost negligible. } \end{figure} \begin{figure} \includegraphics[width=8.0cm,clip=true]{F12_NPE_ycc.eps} \caption{\label{fig:F12_NPE_ycc.eps} Basis set error of 54 electron MP2-F12 correlation energies compared to the CBS limit as a function of the Wigner-Seitz radius $r_{\rm s}$, for the Yukawa-Coulomb geminal correlation factor. From about 203 orbitals, the non-parallelity errors are almost negligible. } \end{figure} \section{\label{sec:con}Conclusions and Outlook} In summary, we have shown that explicitly correlated MP2 theory can be used in conjunction with a plane-wave basis set for three-dimensional, fully periodic systems. The combination of infinitely delocalized plane waves and a two-electron correlation factor centered at the electron coalescence points spans a very efficient and rapidly convergent basis for the many-electron wavefunction expansion. This allows for the accurate evaluation of the electronic correlation energy close to the complete basis set limit. Our results for the uniform electron gas show that the reduction in the size of the employed one-electron basis set is similar to the corresponding findings for Gaussian orbital based molecular systems, although tentatively we suggest that the reduction could be even larger, perhaps due to the slower convergence of the original plane wave basis compared to an optimized Gaussian-type orbital expansion. We have introduced a novel correlation factor, termed the Yukawa-Coulomb correlation factor, which in contrast to other commonly employed correlation factors is derived from analytic results for two electrons in a box.
The Yukawa-Coulomb correlation factor differs from the Slater-type correlation factor in the long range and shows a faster rate of convergence with respect to the employed basis set. We believe that this novel correlation factor may be useful for the study of solid state systems and potentially of large molecules with relatively isotropic interactions within explicitly correlated theories. The change in the optimal variational parameter $\gamma_{\rm opt}$ was investigated for a range of densities. We found that $\gamma_{\rm opt}$ increases linearly with the electron density, which indicates that the correlation hole becomes more localized in this limit. A close-to-optimal $\gamma$ can therefore be determined solely from the density of the system, and the expectation is that even in {\em ab initio} systems a $\gamma$ optimization will not always be necessary. Furthermore, we have investigated the pairwise optimization of the correlation factor for core-core, core-valence and valence-valence electron pairs. Our findings show that although $\gamma_{\rm opt}$ for the core-core electron pairs differs significantly from $\gamma_{\rm opt}$ for the valence-valence electron pairs, the gain in the absolute correlation energy using pairwise optimized correlation factors is negligible. As such, we believe that it is not beneficial to optimize the correlation factor for each electron pair individually. Finally, we have studied the convergence of the non-parallelity error with respect to the complete basis set limit for MP2-F12 and MP2 over a range of densities and basis sizes. This is expected to provide a good test case for the convergence of lattice constants and other energy differences in solid state systems with respect to the employed basis set. As expected, the convergence of MP2-F12 clearly outperforms that of MP2 and again allows for a reduction by approximately an order of magnitude in the employed basis set.
\begin{figure} \includegraphics[width=8.0cm,clip=true]{LiH_4x4x4_conv.eps} \caption{\label{fig:LiH.eps} Basis set convergence of the MP2 valence-only cohesive energy contribution to LiH. Our calculations were done using a 4$\times$4$\times$4 $k$-mesh and norm-conserving pseudo-potentials in the framework of the PAW method\cite{Kresse96}. The LiH unit cell volume was set to 17.03~\AA$^3$. The MP2 calculations were done using HF and approximate natural orbitals\cite{Kresse2011}. Our MP2 and MP2-F12 (using a STG) results converge to the same complete basis set limit results obtained using local MP2 (CRYSCOR, from Ref.~\onlinecite{Schutz2011}), the hierarchical method (from Ref.~\onlinecite{Manby2009}) and the incremental scheme (from Ref.~\onlinecite{Stoll2012}). } \end{figure} We hope that the findings of the present work will translate to alternative UEG models\cite{Gill2012,Gill2013}, to {\em ab initio} systems, and to other explicitly correlated methods in the solid state such as CCSD-F12\cite{Noga:JCP101-7738,Tew:PCCP9-1921,Tew:BOOK2010,Werner:BOOK2010,Knizia2007} or FCIQMC-F12\cite{BGKA2013,Booth2012,BTA2009,BoothC2,CBOA2012,CBA2010}, where the additional computational cost of calculating the F12 contribution becomes negligible in comparison to these more expensive parent methods. The application of the methods outlined in this work to real, {\it ab initio} solid state systems is expected to significantly expand the scope of the whole range of quantum chemical wave function based methods. Figure~\ref{fig:LiH.eps} shows a preliminary application of the MP2-F12 implementation to the LiH crystal, confirming our findings for the uniform electron gas that explicitly correlated MP2 theory allows for a substantial reduction of the basis set. We will expand on these results in a forthcoming paper. \section{Acknowledgements} The authors thank Jiri Klimes and Cyrus Umrigar for fruitful discussions. A.G. acknowledges an APART-fellowship of the Austrian Academy of Sciences. J.J.S.
thanks EPSRC for funding. A.A. acknowledges support from the EPSRC under grant number EP/J003867/1. D.P.T. thanks the Royal Society for a University Research Fellowship. G.H.B. acknowledges funding from Trinity College, Cambridge.
\section{Introduction} Discrete-event systems~\cite{RW87} (DES) can be modelled by finite automata over an alphabet of actions/events $\Sigma$. The fault diagnosis problem~\cite{Raja95} for DES consists in detecting \emph{faulty} sequences in the system. A \emph{faulty} sequence is a sequence of the DES containing an occurrence of a special event $f$. It is assumed that an external \emph{observer}, which has to detect faults, knows the specification/model of the DES, but can only partially observe the system at runtime: it is able to observe sequences of \emph{observable} events in $\Sigma_o \subseteq \Sigma$. Based on this knowledge, it has to announce whether an observation (in $\Sigma_o^*$) stems from a faulty sequence (in $(\Sigma \cup \{\tau,f\})^*$). Checking diagnosability of a DES can be done in PTIME, whereas computing a diagnoser amounts to determinizing the DES (EXPTIME)~\cite{Raja95,Jiang-01,yoo-lafortune-tac-02}. \smallskip \noindent{\it \bfseries Fault Diagnosis for Timed Automata.} The fault diagnosis problem for Timed Automata (TA) has been introduced and solved by S.~Tripakis in~\cite{tripakis-02}, where he proved that checking diagnosability of a timed automaton is PSPACE-complete. In the timed case, however, the diagnoser may be a Turing machine. In subsequent work, P.~Bouyer and F.~Chevalier~\cite{Bouyerfossacs05} studied the problem of checking whether a timed automaton is diagnosable using a diagnoser which is a \emph{deterministic} timed automaton (DTA), and proved that this problem is 2EXPTIME-complete. \smallskip \noindent{\it \bfseries Our Contribution and Related Work.} In~\cite{cassez-acsd-07,cassez-tase-07} (and~\cite{cassez-fi-08} for an extended version), we have introduced \emph{dynamic observers} for fault diagnosis of DES. In this framework, an observer can choose dynamically which events it is going to observe, and makes a new choice after each occurrence of any (currently) observable event.
In~\cite{cassez-acsd-07,cassez-fi-08} we have shown how to compute (in 2EXPTIME) a \emph{most permissive observer} which represents all the dynamic observers that ensure that a DES is diagnosable. In~\cite{cassez-tase-07} we have furthermore introduced a notion of \emph{cost} of an observer, and proved that an optimal observer can also be computed in 2EXPTIME. In this paper, we extend these previous results to systems given by timed automata. We first settle the complexity of some optimization problems with static observers (section~\ref{sec-static}). We then focus on dynamic \emph{timed} observers, and show how to compute (section~\ref{sec-dynamic}) a most permissive (timed) dynamic observer, under the assumption of bounded \emph{resources}. In section~\ref{sec-cost}, we define a notion of \emph{cost} for timed observers (which extends the one we have defined for DES in~\cite{cassez-tase-07}) and show how to compute the cost of a given observer. We also discuss the problem of synthesizing an optimal timed dynamic observer. \section{Preliminaries}\label{sec-prelim} $\Sigma$ denotes a finite alphabet and $\Sigma_\tau=\Sigma \cup \{\tau\}$ where $\tau \not\in \Sigma$ is the \emph{unobservable} action. $\mathbb B=\{\mbox{\textsc{true}},\mbox{\textsc{false}}\}$ is the set of boolean values, $\mathbb N$ the set of natural numbers, $\mathbb Z$ the set of integers and $\mathbb Q$ the set of rational numbers. $\mathbb R$ is the set of real numbers and $\mathbb R_{\geq 0}$ is the set of non-negative real numbers. \subsection{Clock Constraints} Let $X$ be a finite set of variables called \emph{clocks}. A \emph{clock valuation} is a mapping $v : X \rightarrow \mathbb R_{\geq 0}$. We let $\mathbb R_{\geq 0}^X$ be the set of clock valuations over $X$. We let $\vect{0}_X$ be the \emph{zero} valuation where all the clocks in $X$ are set to $0$ (we use $\vect{0}$ when $X$ is clear from the context).
Given $\delta \in \mathbb R$, $v + \delta$ denotes the valuation defined by $(v + \delta)(x)=v(x) + \delta$. We let $\calC(X)$ be the set of \emph{convex constraints} on $X$, {i.e.},~ the set of conjunctions of constraints of the form $x \bowtie c$ with $c \in\mathbb Z$ and $\bowtie \in \{\leq,<,=,>,\geq\}$. Given a constraint $g \in \calC(X)$ and a valuation $v$, we write $v \models g$ if $g$ is satisfied by $v$. Given $R \subseteq X$ and a valuation $v$, $v[R]$ is the valuation defined by $v[R](x)=v(x)$ if $x \not\in R$ and $v[R](x)=0$ otherwise. \subsection{Timed Words} The set of finite (resp. infinite) words over $\Sigma$ is $\Sigma^*$ (resp. $\Sigma^\omega$) and we let $\Sigma^\infty=\Sigma^* \cup \Sigma ^\omega$. We let $\varepsilon$ be the empty word. A \emph{language} $L$ is any subset of $\Sigma^\infty$. A finite (resp. infinite) \emph{timed word} over $\Sigma$ is a word in $(\mathbb R_{\geq 0}.\Sigma)^*.\mathbb R_{\geq 0}$ (resp. $(\mathbb R_{\geq 0}.\Sigma)^\omega$). $\dur(w)$ is the duration of a timed word $w$, defined as the sum of the durations (in $\mathbb R_{\geq 0}$) which appear in $w$; if this sum is infinite, the duration is $\infty$. Note that the duration of an infinite word can be finite; such words, which contain an infinite number of letters within a finite duration, are called \emph{Zeno} words. $\textit{TW\/}^*(\Sigma)$ is the set of finite timed words over $\Sigma$, $\textit{TW\/}^\omega(\Sigma)$ the set of infinite timed words, and $\textit{TW\/}(\Sigma)=\textit{TW\/}^*(\Sigma) \cup \textit{TW\/}^\omega(\Sigma)$. A \emph{timed language} is any subset of $\textit{TW\/}(\Sigma)$. In this paper we write timed words as $0.4\ a\ 1.0\ b\ 2.7 \ c \cdots$ where the real values are the durations elapsed between two letters: thus $c$ occurs at global time $4.1$.
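This delay-based notation can be manipulated directly. The following sketch (representing a timed word as an alternating list of delays and letters) computes the global occurrence times of the letters, recovering that $c$ occurs at time $4.1$, and also implements a projection onto a sub-alphabet in which the delays of erased letters accumulate onto the next kept letter:

```python
def absolute_times(timed_word):
    """Global occurrence times of the letters of a timed word given as an
    alternating list of delays and letters, e.g. [0.4,'a',1.0,'b',2.7,'c']."""
    t, times = 0.0, []
    for delay, letter in zip(timed_word[::2], timed_word[1::2]):
        t += delay
        times.append((letter, round(t, 10)))
    return times

def project(timed_word, keep):
    """Projection onto a sub-alphabet: delays of erased letters are
    absorbed into the delay of the next kept letter."""
    out, pending = [], 0.0
    for delay, letter in zip(timed_word[::2], timed_word[1::2]):
        pending += delay
        if letter in keep:
            out += [round(pending, 10), letter]
            pending = 0.0
    return out

w = [0.4, 'a', 1.0, 'b', 2.7, 'c']
print(absolute_times(w))       # [('a', 0.4), ('b', 1.4), ('c', 4.1)]
print(project(w, {'a', 'c'}))  # [0.4, 'a', 3.7, 'c']
```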
We let $\textit{Unt}} \def\dta{DTA\xspace(w)$ be the \emph{untimed} version of $w$ obtained by erasing all the durations in $w$, {e.g.},~ $\textit{Unt}} \def\dta{DTA\xspace(0.4\ a\ 1.0\ b\ 2.7 \ c)=abc$. Given a timed language $L$, we let $\textit{Unt}} \def\dta{DTA\xspace(L)=\{ \textit{Unt}} \def\dta{DTA\xspace(w) \ | \ w \in L \}$. Let $\proj{\Sigma'}$ be the projection of timed words of $\textit{TW\/}(\Sigma)$ over timed words of $\textit{TW\/}(\Sigma')$. When projecting a timed word $w$ on a sub-alphabet $\Sigma' \subseteq \Sigma$, the durations elapsed between two events are set accordingly: for instance $\proj{\{a,c\}}(0.4 \ a\ 1.0\ b\ 2.7 \ c )=0.4 \ a \ 3.7 \ c$ (the projection erases some letters but keeps the time elapsed between two letters). Given $\Sigma' \subseteq \Sigma$, $\proj{\Sigma'}(L)=\{ \proj{\Sigma'}(w) \ | \ w \in L\}$. \subsection{Timed Automata} Timed automata (TA) are finite automata extended with real-valued clocks to specify timing constraints between occurrences of events. For a detailed presentation of the fundamental results for timed automata, the reader is referred to the seminal paper of R.~Alur and D.~Dill~\cite{AlurDill94}. \noindent\begin{definition}[Timed Automaton]\label{def-ta} A \emph{Timed Automaton} $A$ is a tuple $(L,$ $l_0,$ $X,\Sigma_\tau, E, \textit{Inv}, F, R)$ where: $L$ is a finite set of \emph{locations}; $l_0$ is the \emph{initial location}; $X$ is a finite set of \emph{clocks}; $\Sigma$ is a finite set of \emph{actions}; $E \subseteq L \times{\cal C}(X) \times \Sigma_\tau \times 2^X \times L$ is a finite set of \emph{transitions}; for $(\ell,g,a,r,\ell') \in E$, $g$ is the \emph{guard}, $a$ the \emph{action}, and $r$ the \emph{reset} set; $\textit{Inv} \in {\cal C}(X)^L$ associates with each location an \emph{invariant}; as usual we require the invariants to be conjunctions of constraints of the form $x \preceq c$ with $\preceq \in \{<,\leq\}$.
$F \subseteq L$ and $R \subseteq L$ are respectively the \emph{final} and \emph{repeated} sets of locations. \endef \end{definition} A \emph{state} of $A$ is a pair $(\ell,v) \in L \times \mathbb R_{\geq 0}^X$. A \emph{run} $\varrho$ of $A$ from $(\ell_0,v_0)$ is a (finite or infinite) sequence of alternating \emph{delay} and \emph{discrete} moves: \begin{eqnarray*} \varrho & = & (\ell_0,v_0) \xrightarrow{\delta_0} (\ell_0,v_0 + \delta_0) \xrightarrow{a_0} (\ell_1,v_1) \cdots \\ & & \cdots \xrightarrow{a_{n-1}} (\ell_n,v_n) \xrightarrow{\delta_n} (\ell_n,v_n+ \delta_n) \cdots \end{eqnarray*} {s.t.}~ for every $i \geq 0$: \begin{itemize} \item $v_i + \delta \models \textit{Inv}(\ell_i)$ for $0 \leq \delta \leq \delta_i$; \item there is some transition $(\ell_i,g_i,a_i,r_i,\ell_{i+1}) \in E$ {s.t.}~: ($i$) $v_i + \delta_i \models g_i$ and ($ii$) $v_{i+1}=(v_i+\delta_i)[r_i]$. \end{itemize} The set of finite (resp. infinite) runs from a state $s$ is denoted $\runs^*(s,A)$ (resp. $\runs^\omega(s,A)$) and we define $\runs^*(A)=\runs^*((l_0,\vect{0}),A)$, $\runs^\omega(A)=\runs^\omega((l_0,\vect{0}),A)$ and finally $\runs(A)=\runs^*(A) \cup \runs^\omega(A)$. If $\varrho$ is finite and ends in $s_n$, we let $\textit{last}(\varrho)=s_n$. Because of the den\-se\-ness of the time domain, the transition graph of $A$ is infinite (uncountable number of states and delay edges). The \emph{trace}, $\textit{tr}(\varrho)$, of a run $\varrho$ is the timed word $\proj{\Sigma}(\delta_0 a_0 \delta_1 a_1 \cdots a_n \delta_n \cdots)$. We let $\dur(\varrho)=\dur(\textit{tr}(\varrho))$. For $V \subseteq \runs(A)$, we let $\textit{Tr}(V)=\{\textit{tr}(\varrho) \ | \ \textit{ $\varrho \in V$}\}$. A finite (resp. infinite) timed word $w$ is \emph{accepted} by $A$ if it is the trace of a run of $A$ that ends in an $F$-location (resp. a run that reaches infinitely often an $R$-location). $\lang^*(A)$ (resp. $\lang^\omega(A)$) is the set of traces of finite (resp. 
infinite) timed words accepted by $A$, and $\lang(A)=\lang^*(A) \cup \lang^\omega(A)$ is the set of timed words accepted by $A$. In the sequel we often omit the sets $R$ and $F$ in TA and this implicitly means $F=L$ and $R=\varnothing$. A timed automaton $A$ is \emph{deterministic} if there is no $\tau$ labelled transition in $A$, and if, whenever $(\ell,g,a,r,\ell')$ and $(\ell,g',a,r',\ell'')$ are transitions of $A$, $g \wedge g' \equiv \mbox{\textsc{false}}$. $A$ is \emph{complete} if from each state $(\ell,v)$, and for each action $a$, there is a transition $(\ell,g,a,r,\ell')$ such that $v \models g$. We denote by \dta the class of deterministic timed automata. \iffalse \medskip A finite automaton (FA) is a particular TA with $X=\varnothing$. Consequently guards and invariants are vacuously true and time elapsing transitions do not exist. We write $A=(L,$ $l_0,\Sigma_\tau,E,F,R)$ for a FA. A run is thus a sequence of the form: \begin{eqnarray*} \varrho & = & \ell_0 \xrightarrow{a_0} \ell_1 \cdots \cdots \xrightarrow{a_{n-1}} \ell_n \cdots \end{eqnarray*} where for each $i \geq 0$, $(\ell_i,a_i,\ell_{i+1}) \in E$. Definitions of traces and languages are straightforward. In this case, the duration of a run $\varrho$ is the number of steps (including $\tau$-steps) of $\varrho$: if $\varrho$ is finite and ends in $\ell_n$, $\dur(\varrho)=n$ and otherwise $\dur(\varrho)=\infty$. \fi \subsection{Region Graph of a TA} The \emph{region graph} $\textit{RG}(A)$ of a TA $A$ is a finite quotient of the infinite transition graph of $A$ which is time-abstract bisimilar to $A$~\cite{AlurDill94}. It is a finite automaton (FA) on the alphabet $E'= E \cup \{\tau\}$. The states of $\textit{RG}(A)$ are pairs $(\ell,r)$ where $\ell \in L$ is a location of $A$ and $r$ is a \emph{region} of $\mathbb R_{\geq 0}^X$. More generally, the edges of the graph are tuples $(s,t,s')$ where $s,s'$ are states of $\textit{RG}(A)$ and $t \in E'$.
Genuine unobservable moves of $A$ labelled $\tau$ are labelled by tuples of the form $(s,(g,\tau,r),s')$ in $\textit{RG}(A)$. An edge $(g,\lambda,R)$ in the region graph corresponds to a discrete transition of $A$ with guard $g$, action $\lambda$ and reset set $R$. A $\tau$ move in $\textit{RG}(A)$ stands for a delay move to the time-successor region. The initial state of $\textit{RG}(A)$ is $(l_0,\vect{0})$. A final (resp. repeated) state of $\textit{RG}(A)$ is a state $(\ell,r)$ with $\ell \in F$ (resp. $\ell \in R$). A fundamental property of the region graph~\cite{AlurDill94} is: \begin{theorem}[\cite{AlurDill94}] \label{thm-alur} $\lang(\textit{RG}(A))=\textit{Unt}(\lang(A))$. \end{theorem} \iffalse In other words: \begin{enumerate} \item if $w$ is accepted by $\textit{RG}(A)$, then there is a timed word $v$ with $\textit{Unt}(v)=w$ {s.t.}~ $v$ is accepted by $A$. \item if $v$ is accepted by $A$, then $\textit{Unt}(v)$ is accepted by $\textit{RG}(A)$. \end{enumerate} \fi The (maximum) size of the region graph is exponential in the number of clocks and in the maximum constant of the automaton $A$ (see~\cite{AlurDill94}): $|\textit{RG}(A)|=|L|\cdot |X|! \cdot 2^{|X|} \cdot K^{|X|}$ where $K$ is the largest constant used in $A$. \subsection{Product of TA} \begin{definition}[Product of two TA] \label{def-prod-sync} Let $A_i=(L_i,l_0^i,X_i,$ $\Sigma^i_{\tau},E_i,\textit{Inv}_i)$ for $i \in\{1,2\}$, be two TA {s.t.}~ $X_1 \cap X_2 = \varnothing$.
The \emph{product} of $A_1$ and $A_2$ is the TA $A_1 \times A_2=(L,l_0,X,\Sigma_{\tau},$ $E,\textit{Inv})$ given by: $L=L_1 \times L_2$; $l_0=(l_0^1,l_0^2)$; $\Sigma=\Sigma^1 \cup \Sigma^2$; $X = X_1 \cup X_2$; and $E \subseteq L \times {\cal C}(X) \times \Sigma_\tau \times 2^X \times L$ and $((\ell_1,\ell_2),g_{1,2},\sigma,r,(\ell'_1,\ell'_2)) \in E$ if: \begin{itemize} \item either $\sigma \in (\Sigma^1 \cap \Sigma^2) \setminus \{\tau \}$, and ($i$) $(\ell_k,g_k,\sigma,r_k,\ell'_k) \in E_k$ for $k=1$ and $k=2$; ($ii$) $g_{1,2} = g_1 \wedge g_2$ and ($iii$) $r=r_1 \cup r_2$; \item or for $k=1$ or $k=2$, $\sigma \in (\Sigma^k \setminus \Sigma^{3-k}) \cup \{\tau\}$, and ($i$) $(\ell_k,g_k,\sigma,r_k,\ell'_k) \in E_k$; ($ii$) $g_{1,2}=g_k$ and ($iii$) $r=r_k$; \end{itemize} and finally $\textit{Inv}(\ell_1,\ell_2)= \textit{Inv}(\ell_1) \wedge \textit{Inv}(\ell_2)$. \endef \end{definition} \section{Fault Diagnosis Problems \& Known Results}\label{sec-fd} \subsection{The Model} To model timed systems with faults, we use timed automata on the alphabet $\Sigma_{\tau,f}=\Sigma_{\tau}\cup \{f\}$ where $f$ is the \emph{faulty} (and unobservable) event. We only consider one type of fault, but the results we give are valid for many types of faults $\{f_1,f_2, \cdots,f_n\}$: indeed, solving the many-types diagnosability problem amounts to solving $n$ one-type diagnosability problems~\cite{yoo-lafortune-tac-02}. The observable events are given by $\Sigma_o \subseteq \Sigma$ and $\tau$ is always unobservable. The system we want to supervise is given by a TA $A=(L,l_0,$$X,\Sigma_{\tau,f},E,\textit{Inv})$. Fig.~\ref{fig-ex-diag1} gives an example of such a system. Invariants in the automaton ${\cal A}$ are written within square brackets as in $[x \leq 3]$.
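The run semantics of a TA rests on three operations on clock valuations: time elapse $v+\delta$, guard satisfaction $v \models g$, and reset $v[r]$. A minimal sketch in Python (the encoding of guards as lists of atomic constraints is ours, purely for illustration):

```python
import operator

OPS = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def elapse(v, d):
    """Time elapse: v + d adds d to every clock of the valuation v."""
    return {x: t + d for x, t in v.items()}

def reset(v, r):
    """v[r]: reset the clocks in r to 0, leave the others unchanged."""
    return {x: (0.0 if x in r else t) for x, t in v.items()}

def sat(v, guard):
    """v |= g for a conjunction of atomic constraints (clock, op, constant)."""
    return all(OPS[op](v[x], c) for (x, op, c) in guard)

# one step of a run in the automaton of Fig. 1: wait 2, then fire the
# transition l1 -> l2 guarded by x <= 2 (no clock is reset)
v = elapse({"x": 0.0}, 2.0)
assert sat(v, [("x", "<=", 2)])
v = reset(v, [])
```

A discrete move $(\ell,v+\delta) \xrightarrow{a} (\ell',v')$ is enabled exactly when `sat` holds for the guard, and $v'$ is obtained by `reset`.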
\begin{figure}[hbtp] \centering \begin{tikzpicture}[thick,node distance=1cm and 2.5cm] \node[state,initial] (l0) {$l_0$}; \node[state] (l1) [above right=of l0,xshift=-1cm,yshift=0.5cm,label=-87:{$[x \leq 3]$}] {$l_1$}; \node[state] (l2) [above right=of l1,label=-87:{$[x \leq 3]$}] {$l_2$}; \node[state] (l3) [below right=of l1,label=-87:{$[x \leq 3]$}] {$l_3$}; \node[state] (l4) [above right=of l3] {$l_4$}; \node[state] (l5) [below right=of l0] {$l_5$}; \path[->] (l0) edge node[pos=0.5] {$f$} (l1) edge[bend angle=30,bend left] node[pos=0.5] {$a$} (l5) edge node[pos=0.5] {$b$} (l5) edge[bend angle=30,bend right] node[swap,pos=0.7] {$c$} (l5) (l1) edge [pos=0.7] node {$a$; $x \leq 2$} (l2) (l1) edge node {$a$; $x > 2$} (l3) (l2) edge [] node {$b$} (l4) (l3) edge [] node {$c$} (l4) ; \path[->] (l4) edge[loop above] node {$\tau$} (l4); \path[->] (l5) edge[loop right] node {$\tau$} (l5); \end{tikzpicture} \caption{The Timed Automaton ${\cal A}$} \label{fig-ex-diag1} \end{figure} \noindent Let $\Delta \in \mathbb N$. A run of $A$ \begin{eqnarray*} \varrho & = & (\ell_0,v_0) \xrightarrow{\delta_0} (\ell_0,v_0 + \delta_0) \xrightarrow{a_0} (\ell_1,v_1) \cdots \\ & & \cdots \xrightarrow{a_{n-1}} (\ell_n,v_n) \xrightarrow{\delta_n} (\ell_n,v_n+ \delta_n) \cdots \end{eqnarray*} is $\Delta$-faulty if: (1) there is an index $i$ {s.t.}~ $a_i=f$ and (2) the duration of the run $\varrho'=(\ell_{i+1},v_{i+1}) \xrightarrow{\delta_{i+1}} \cdots \xrightarrow{\delta_n} (\ell_n,v_n+\delta_n) \cdots$ is larger than $\Delta$. We let $\textit{Faulty}_{\geq \Delta}(A)$ be the set of $\Delta$-faulty runs of $A$. Note that by definition, if $\Delta' \geq \Delta$ then $\textit{Faulty}_{\geq \Delta'}(A) \subseteq \textit{Faulty}_{\geq \Delta}(A)$. We let $\textit{Faulty}(A)=\cup_{\Delta \geq 0}\textit{Faulty}_{\geq \Delta}(A)=\textit{Faulty}_{\geq 0}(A)$ be the set of faulty runs of $A$, and $\textit{NonFaulty}(A)=\runs(A) \setminus \textit{Faulty}(A)$ be the set of non-faulty runs of $A$.
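The classification of a run as $\Delta$-faulty can be sketched on finite runs encoded, for illustration only, as lists of $(\textit{delay},\textit{action})$ steps ("wait $\textit{delay}$, then fire $\textit{action}$"):

```python
def is_delta_faulty(run, delta, fault="f"):
    """A run, encoded as (delay, action) steps, is Delta-faulty if some
    action is the fault and at least `delta` time units elapse in the
    remainder of the run after the fault has fired."""
    after = None                      # time elapsed since the fault
    for delay, action in run:
        if after is not None:
            after += delay
        elif action == fault:
            after = 0.0
    return after is not None and after >= delta

# faulty run of Fig. 1: fire f, fire a, wait 3, fire b
assert is_delta_faulty([(0, "f"), (0, "a"), (3, "b")], 3)
assert not is_delta_faulty([(3, "b")], 3)     # a non-faulty run
```

Note that the monotonicity $\textit{Faulty}_{\geq \Delta'}(A) \subseteq \textit{Faulty}_{\geq \Delta}(A)$ for $\Delta' \geq \Delta$ is immediate from the final comparison.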
Moreover we use $$\textit{Faulty}^{\textit{tr}}_{\geq \Delta}(A)=\textit{Tr}(\textit{Faulty}_{\geq \Delta}(A))$$ and $$\textit{NonFaulty}^{\textit{tr}}(A)=\textit{Tr}(\textit{NonFaulty}(A))$$ which are the traces\footnote{Notice that $\textit{tr}(\varrho)$ erases $\tau$ and $f$.} of $\Delta$-faulty and non-faulty runs of $A$. \subsection{Diagnosers} The purpose of fault diagnosis is to detect a fault as soon as possible. Faults are unobservable: only the events in $\Sigma_o$, together with the time elapsed between them, can be observed. Whenever the system generates a timed word $w$, the observer can only see $\proj{\Sigma_o}(w)$. An observer that can detect faults in this way is called a \emph{diagnoser}. A diagnoser must detect a fault within a given delay $\Delta \in \mathbb N$. \begin{definition}[$(\Sigma_o,\Delta)$-Diagnoser]\label{def-diag} Let $A$ be a TA over the alphabet $\Sigma_{\tau,f}$, $\Sigma_o \subseteq \Sigma$ and $\Delta \in \mathbb N$. A \emph{$(\Sigma_o,\Delta)$-diagnoser} for $A$ is a mapping $D: \textit{TW\/}^*(\Sigma_o)\rightarrow \{0,1\}$ such that: \begin{itemize} \item for each $\varrho \in \textit{NonFaulty}(A)$, $D(\proj{\Sigma_o}(\textit{tr}(\varrho)))=0$, \item for each $\varrho \in \textit{Faulty}_{\geq \Delta}(A)$, $D(\proj{\Sigma_o}(\textit{tr}(\varrho)))=1$. \endef \end{itemize} \end{definition} $A$ is $(\Sigma_o,\Delta)$-diagnosable if there exists a $(\Sigma_o,\Delta)$-diagnoser for $A$. $A$ is $\Sigma_o$-diagnosable if there is some $\Delta \in \mathbb N$ {s.t.}~ $A$ is $(\Sigma_o,\Delta)$-diagnosable. \begin{example} The TA ${\cal A}$ in Fig.~\ref{fig-ex-diag1} with $\Sigma=\Sigma_o=\{a,b,c\}$ is $(\Sigma,3)$-diagnosable: for timed words containing an $a$ followed by either a $b$ or a $c$, a fault must have occurred; otherwise no fault should be reported. If $\Sigma_o=\{b\}$, there are two runs in ${\cal A}$: \begin{eqnarray*} \rho_1 \!\!\! & = & \!\!\!
(l_0,0) \xrightarrow{\; f \; } (l_1,0) \xrightarrow{\; a \; } (l_2,0) \xrightarrow{\; 3 \; } (l_2,3) \xrightarrow{\; b\; } (l_4,3) \cdots \\ \rho_2 \!\!\! & = & \!\!\! (l_0,0) \xrightarrow{3} (l_0,3) \xrightarrow{\ b \ } (l_5,3) \cdots \end{eqnarray*} that satisfy $\textit{tr}(\rho_1)=\textit{tr}(\rho_2)$, and thus ${\cal A}$ is not $(\{b\},3)$-dia\-gnosable. To diagnose a fault in ${\cal A}$, $a$ must be observed. \endex \end{example} \subsection{Classical Diagnosis Problems} \noindent Let $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$ be a TA. The classical fault diagnosis problems are the following: \begin{prob}[Bounded or $\Delta$-Diagnosability] \label{prob-delta-diag} \mbox{} \\ \textsc{Inputs:} A TA $A$, $\Sigma_o \subseteq \Sigma$, and $\Delta \in \mathbb N$. \\ \textsc{Problem:} Is $A$ $(\Sigma_o,\Delta)$-diagnosable? \end{prob} \begin{prob}[Diagnosability] \label{prob-diag} \mbox{} \\ \textsc{Inputs:} A TA $A$ and $\Sigma_o \subseteq \Sigma$. \\ \textsc{Problem:} Is $A$ $\Sigma_o$-diagnosable? \end{prob} \begin{prob}[Maximum delay] \label{prob-delay} \mbox{} \\ \textsc{Inputs:} A TA $A$ and $\Sigma_o \subseteq \Sigma$. \\ \textsc{Problem:} If $A$ is $\Sigma_o$-diagnosable, what is the minimum $\Delta$ {s.t.}~ $A$ is $(\Sigma_o,\Delta)$-diagnosable? \end{prob} \smallskip According to Definition~\ref{def-diag}, $A$ is $\Sigma_o$-diagnosable iff\xspace there is some $\Delta \in \mathbb N$ {s.t.}~ $A$ is $(\Sigma_o,\Delta)$-diagnosable. Thus $A$ is not $\Sigma_o$-diagnosable iff\xspace $\forall \Delta \in \mathbb N$, $A$ is not $(\Sigma_o,\Delta)$-diagnosable.
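The defining condition of $(\Sigma_o,\Delta)$-diagnosability, that no $\Delta$-faulty run and no non-faulty run share the same $\Sigma_o$-projection, can be prototyped on finite sets of traces. A sketch (the encoding of a timed word as $(\textit{delay},\textit{event})$ pairs is ours; projection merges the delays of erased events into the next observable one):

```python
def proj(trace, sigma_o):
    """Project a timed word onto sigma_o: erase the other events and
    add their delays to the delay of the next observable event."""
    out, pending = [], 0.0
    for delay, event in trace:
        pending += delay
        if event in sigma_o:
            out.append((pending, event))
            pending = 0.0
    return tuple(out)

def diagnosable(faulty, nonfaulty, sigma_o):
    """True iff no Delta-faulty trace is confused with a non-faulty one."""
    return not ({proj(t, sigma_o) for t in faulty} &
                {proj(t, sigma_o) for t in nonfaulty})

# the two runs of the example: rho_1 (faulty) and rho_2 (non-faulty);
# tr has already erased f and tau
rho1 = [(0, "a"), (3, "b")]
rho2 = [(3, "b")]
assert diagnosable([rho1], [rho2], {"a", "b", "c"})   # a is observed
assert not diagnosable([rho1], [rho2], {"b"})         # only b: confusion
```

The actual decision procedure of course works on the (finite) region abstraction rather than on enumerated traces; the sketch only illustrates the emptiness-of-intersection criterion.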
Moreover a trace-based definition of $(\Sigma_o,\Delta)$-diagnosability can be stated as\footnote{This definition does not take into account \emph{Zeno} runs; this is not difficult to add and the reader is referred to~\cite{cassez-cdc-09} for more details.}: $A$ is $(\Sigma_o,\Delta)$-diagnosable iff\xspace \begin{equation} \proj{\Sigma_o}(\textit{Faulty}_{\geq \Delta}^{\textit{tr}}(A)) \cap \proj{\Sigma_o}(\textit{NonFaulty}^{\textit{tr}}(A)) = \varnothing \mathpunct. \label{eq-base} \end{equation} This gives a necessary and sufficient condition for non $\Sigma_o$-diagnosability: \begin{eqnarray} \label{eq-diagnos2} \hskip0em\text{$A$ is not $\Sigma_o$-diagnosable} & \hskip-1.2em \iff & \hskip-1.3em \begin{cases} \forall \Delta \in \mathbb N, \\ \quad \exists \rho \in \textit{NonFaulty}(A) \\ \quad \exists \rho' \in \textit{Faulty}_{\geq \Delta}(A) \textit{ {s.t.}~ } \\ \quad\;\;\proj{\Sigma_o}(\textit{tr}(\rho)) = \proj{\Sigma_o}(\textit{tr}(\rho')) \mathpunct, \end{cases} \end{eqnarray} or, in other words, $A$ is $\Sigma_o$-diagnosable iff\xspace for some $\Delta \in \mathbb N$ there is no pair of runs $(\rho_1,\rho_2)$ with $\rho_1 \in \textit{Faulty}_{\geq \Delta}(A)$, $\rho_2 \in \textit{NonFaulty}(A)$ the $\Sigma_o$-traces of which are equal. Complexity results for the diagnosis problems on timed automata were established in~\cite{tripakis-02} (see~\cite{cassez-cdc-09} for a comprehensive study) and Problems~\ref{prob-delta-diag}--\ref{prob-delay} are PSPACE-complete (note that PSPACE-completeness already holds for $\Sigma_o=\Sigma$). \section{Sensor Minimization with Static Observers} \label{sec-static} In this section, we extend the results of~\cite{cassez-acsd-07} to systems given by TA. \begin{prob}[Minimum Cardinality Set] \label{prob-static-minimum} \mbox{} \\ \textsc{Inputs:} A TA $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$ and $n \in \mathbb N$. \\ \textsc{Problem:} \begin{itemize} \item[(A)] Is there any set $\Sigma_o \subseteq \Sigma$, with $|\Sigma_o| =n$, {s.t.}~ $A$ is $\Sigma_o$-diagnosable?
\item[(B)] If the answer to~(A) is ``yes'', compute the minimum value for $n$. \end{itemize} \end{prob} \begin{theorem} Problem~\ref{prob-static-minimum} is PSPACE-complete. \end{theorem} \begin{proof} PSPACE-easiness for (A) can be established as follows: guess a set $\Sigma_o$ with $|\Sigma_o| =n$ and check (in PS\-PACE) whether $A$ is $\Sigma_o$-diagnosable. This shows that (A) is in NPSPACE, and thus in PSPACE by Savitch's theorem. PSPACE-hardness follows from the reduction of Problem~\ref{prob-diag} to Problem~\ref{prob-static-minimum}.(A) with $n=|\Sigma|$. This establishes PSPACE-completeness for (A). Computing the minimum $n$ can be done using a binary search (dichotomy) and thus (B) is also in PSPACE. \end{proof} \medskip The previous results also hold in a more general setting using \emph{masks}. Masks are useful to capture the notion of \emph{distinguishability} among observable events. Indeed, there are cases where two events $a$ and $b$ are observable but not distinguishable, that is, the diagnoser knows that $a$ or $b$ occurred, but not which of the two. This is not the same as considering $a$ and $b$ to be unobservable, since in that case the diagnoser would not be able to detect the occurrence of $a$ or $b$. Distinguishability of events is captured by the notion of a \emph{mask}~{\cite{Varaiyaetal88}}. \begin{definition}[Mask]\label{def-mask} A \emph{mask} $(M,n)$ (of size $n$) over $\Sigma$ is a total, surjective function $M: \Sigma \rightarrow \{\mathbf{1},\cdots,\mathbf{n}\} \cup \{\varepsilon\}$. \endef \end{definition} $M$ induces a morphism $M^* : \textit{TW\/}^*(\Sigma) \rightarrow \textit{TW\/}^*(\{\mathbf{1},\cdots,\mathbf{n}\})$, where $M^*(\varepsilon)=\varepsilon$ and $M^*(a.\rho)=M(a).M^*(\rho)$, for $a\in\Sigma$ and $\rho\in\Sigma^*$. For example, if $\Sigma=\{a,b,c,d\}$, $n=2$ and $M(a)=M(d)=\mathbf{1}$, $M(c)=\mathbf{2}$, $M(b)=\varepsilon$, then we have $M^*(a\ 0.4 \ b \ 0.2 \ c \ 1.1 \ b \ 0.7 \ d) = \mathbf{1} \ 0.6 \ \mathbf{2} \ 1.8\ \mathbf{1}$.
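The morphism $M^*$, extended to timed words by merging the delays of $\varepsilon$-masked events, can be sketched as follows (the dict encoding is ours; `None` plays the role of $\varepsilon$):

```python
from math import isclose

def apply_mask(M, word):
    """M*: relabel each event by M(event); an event masked to None
    (epsilon) disappears and its delay is merged into the next one."""
    out, pending = [], 0.0
    for delay, event in word:
        pending += delay
        if M[event] is not None:
            out.append((pending, M[event]))
            pending = 0.0
    return out

# the example from the text: M(a)=M(d)=1, M(c)=2, M(b)=epsilon
M = {"a": 1, "d": 1, "c": 2, "b": None}
w = [(0, "a"), (0.4, "b"), (0.2, "c"), (1.1, "b"), (0.7, "d")]
obs = apply_mask(M, w)
assert [e for _, e in obs] == [1, 2, 1]            # 1 . 2 . 1
assert all(isclose(d, x) for (d, _), x in zip(obs, [0, 0.6, 1.8]))
```

This reproduces $M^*(a\ 0.4\ b\ 0.2\ c\ 1.1\ b\ 0.7\ d)=\mathbf{1}\ 0.6\ \mathbf{2}\ 1.8\ \mathbf{1}$; `isclose` absorbs floating-point rounding in the merged delays.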
\begin{definition}[$((M,n),\Delta)$-diagnoser]\label{def-mask-diag} Let $(M,n)$ be a mask over $\Sigma$. A mapping $D: \textit{TW\/}^*(\{\mathbf{1},\cdots,\mathbf{n}\}) \rightarrow \{0,1\}$ is a \emph{$((M,n),\Delta)$-diagnoser} for $A$ if: \begin{itemize} \item for each $\rho \in \textit{NonFaulty}(A)$, $D(M^*(\textit{tr}(\rho)))=0$; \item for each $\rho \in \textit{Faulty}_{\geq \Delta}(A)$, $D(M^*(\textit{tr}(\rho)))=1$. \endef \end{itemize} \end{definition} $A$ is $((M,n),\Delta)$-diagnosable if there is a $((M,n),\Delta)$-diagnoser for $A$. $A$ is said to be $(M,n)$-diagnosable if there is some $\Delta$ such that $A$ is $((M,n),\Delta)$-diagnosable. Given a mask $(M,n)$ and $A$, checking whether $A$ is $(M,n)$-diagnosable can be done in PSPACE: it suffices to replace each event $a \in \Sigma$ by $M(a)$ and check for diagnosability. It is PSPACE-complete as using an identity mask of cardinality $|\Sigma|$ solves Problem~\ref{prob-diag}. The counterpart of Problem~\ref{prob-static-minimum} with masks is the following: \begin{prob}[Minimum Cardinality Mask] \label{prob-static-mask} \mbox{} \\ \textsc{Inputs:} A TA $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$ and $n \in \mathbb N$. \\ \textsc{Problem:} \begin{itemize} \item[(A)] Is there any mask $(M,n)$ {s.t.}~ $A$ is $(M,n)$-diagnosable? \item[(B)] If the answer to~(A) is ``yes'', compute the minimum value for $n$. \end{itemize} \end{prob} \begin{theorem}\label{thm-mask} Problem~\ref{prob-static-mask} is PSPACE-complete. \end{theorem} \begin{proof} PSPACE-easiness is proved by guessing a mask $(M,n)$ and checking (in PSPACE) that $A$ is $(M,n)$-diagnosable. PSPACE-hardness is proved as follows. If there is a mask $(M,n)$ with $n=|\Sigma|$ {s.t.}~ $A$ is $(M,n)$-diagnosable, then, as $M$ is surjective, it must be the case that $M$ is a one-to-one mapping from $\Sigma$ to $\{\mathbf{1},\cdots,\mathbf{n}\}$. It follows that $A$ is $\Sigma$-diagnosable. Conversely, assume $\Sigma=\{a_1,\cdots,a_n\}$.
If $A$ is $\Sigma$-diagnosable then there is a mask $(M,|\Sigma|)$ with $M(a_i)=\mathbf{i}$ {s.t.}~ $A$ is $(M,|\Sigma|)$-diagnosable. Hence Problem~\ref{prob-static-mask}.(A) is PSPACE-complete. Problem~\ref{prob-static-mask}.(B) can be solved in PSPACE as well using a binary search. It is not difficult to reduce reachability for TA with one action to checking whether there is a mask of size $1$, and thus Problem~\ref{prob-static-mask}.(B) is PSPACE-complete. \end{proof} \begin{remark} The assumption that a mask is surjective can be lifted while still preserving Theorem~\ref{thm-mask}. Indeed, if there is a mask $(M,|\Sigma|)$ {s.t.}~ $A$ is $(M,|\Sigma|)$-diagnosable and $M$ is not surjective, then we can build a surjective mask $(M',|\Sigma|)$ {s.t.}~ $A$ is $(M',|\Sigma|)$-diagnosable (intuitively, $M'$ is more discriminating than $M$ and has a greater distinguishing power). \end{remark} \section{Sensor Minimization with Dynamic Observers} \label{sec-dynamic} The use of \emph{dynamic observers} was already advocated for DES in~\cite{cassez-acsd-07,cassez-fi-08}. We start with an example that shows that dynamically choosing what to observe can be even more efficient when timing information is used. \begin{example} Let ${\cal A}$ be the automaton of Figure~\ref{fig-ex-diag1}. To diagnose ${\cal A}$, we can use a \emph{dynamic observer} that switches the $a$, $b$ and $c$-sensors on/off. If we do not measure time, to be able to detect faults in ${\cal A}$, we have to switch the $a$ sensor on at the beginning. Once an $a$ has occurred, we must be ready for either a $b$ or a $c$ and therefore switch the $b$ and $c$ sensors on. A dynamic observer must thus first observe $\{a\}$ and, after an occurrence of $a$, observe $\{b,c\}$. If the observer can measure time using a clock, say $y$, it can first switch the $a$ sensor on. If an $a$ occurs when $y \leq 2$, then it switches the $b$ sensor on, and if $y > 2$ it switches the $c$ sensor on.
This way the observer never has to observe more than one event at each point in time. \endex \end{example} \subsection{Dynamic Observers} The choice of the events to observe can depend on the choices the observer has made before and on the observations (event, time-stamp) it has made. Moreover an observer may have \emph{unbounded} memory. The following definition extends the notion of observers introduced in~\cite{cassez-acsd-07} to the timed setting. \begin{definition}[Observer]\label{def-observer2} An \emph{observer} \textit{Obs}\xspace over $\Sigma$ is a \emph{deterministic and complete} timed automaton $\textit{Obs}\xspace=(N,n_0,Y,$ $\Sigma,\delta,\textit{Inv}_{\mbox{\textsc{true}}})$ together with a mapping $O: N \rightarrow 2^\Sigma$, where $N$ is a (possibly infinite) set of locations, $n_0\in N$ is the initial location, $\Sigma$ is the set of observable events, $\delta : N \times \Sigma \times {\cal C}(Y) \rightarrow N \times 2^Y$ is the transition function (a total function), and $O$ is a labeling function that specifies the set of events that the observer wishes to observe at each location $n \in N$. The invariant\footnote{In the sequel, we omit the invariant when a TA is an observer, and replace it by the mapping $O$.} $\textit{Inv}_{\mbox{\textsc{true}}}$ maps every location to $\mbox{\textsc{true}}$, implying that an observer cannot prevent time from elapsing. We require that, for any location $n$ and any $a\in\Sigma$, if $a\not\in O(n)$ then $\delta(n,a,\cdot)=(n,\varnothing)$: this means the observer neither changes its location nor resets its clocks when an event it has chosen not to observe occurs. \endef \end{definition} As an observer is deterministic, we let $\delta(n_0,w)$ denote the state $(n,v)$ reached after reading the timed word $w$, and $O(\delta(n_0,w))$ is the set of events $\textit{Obs}\xspace$ observes after $w$.
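The clock-based observer of the example above can be sketched directly as a function from timed words to observations (a hypothetical hand-coding, with the clock $y$ kept as a running global time; unobserved events are erased and their delays merged into the next observation):

```python
def obs_example(word):
    """Observer of the example: watch {a}; when a occurs with y <= 2
    switch to {b}, otherwise to {c}.  `word` is a list of
    (delay, event) pairs; returns the observed timed word."""
    watching, out, pending, y = {"a"}, [], 0.0, 0.0
    for delay, event in word:
        pending += delay
        y += delay
        if event in watching:
            out.append((pending, event))
            pending = 0.0
            if event == "a":
                watching = {"b"} if y <= 2 else {"c"}
    return out

# faulty runs of Fig. 1: f is never observed; a switches the sensors
assert obs_example([(0, "f"), (0, "a"), (3, "b")]) == [(0.0, "a"), (3.0, "b")]
assert obs_example([(0, "f"), (2.5, "a"), (0.5, "c")]) == [(2.5, "a"), (0.5, "c")]
```

In the formalism of Definition~\ref{def-observer2}, `watching` plays the role of $O(n)$ at the current location, and the function computes the observation that the transducer $\sem{\textit{Obs}\xspace}$ of the next subsection returns.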
\noindent An observer defines a {\em transducer} which is a mapping $\sem{\textit{Obs}\xspace} : \textit{TW\/}^*(\Sigma)\rightarrow \textit{TW\/}^*(\Sigma)$. Given a word $w$, $\sem{\textit{Obs}\xspace}(w)$ is the out\-put of the transducer on $w$. It is called the \emph{observation} of $w$ by the observer \textit{Obs}\xspace. \subsection{Diagnosability with Dynamic Observers} \begin{definition}[$(\textit{Obs}\xspace,\Delta)$-diagnoser] \label{def-obsk-diag} Let $A$ be a TA over $\Sigma_{\tau,f}$ and \textit{Obs}\xspace be an observer over $\Sigma$. $D:\textit{TW\/}^*(\Sigma) \rightarrow \{0,1\}$ is an \emph{$(\textit{Obs}\xspace,\Delta)$-diagnoser} for $A$ if: \begin{itemize} \item $\forall \rho \in \textit{NonFaulty}(A)$, $D(\sem{\textit{Obs}\xspace}(\textit{tr}(\rho)))=0$ and \item $\forall \rho \in \textit{Faulty}_{\geq \Delta}(A)$, $D(\sem{\textit{Obs}\xspace}(\textit{tr}(\rho)))=1$. \endef \end{itemize} \end{definition} $A$ is $(\textit{Obs}\xspace,\Delta)$-diagnosable if there is an $(\textit{Obs}\xspace,\Delta)$-diagnoser for $A$. $A$ is \textit{Obs}\xspace-diagnosable if there is some $\Delta$ such that $A$ is $(\textit{Obs}\xspace,\Delta)$-diagnosable. We now show how to check $\textit{Obs}\xspace$-diagnosability when the observer $\textit{Obs}\xspace$ is a DTA. \begin{prob}[Deterministic Timed Automata Observers] \label{prob-dynamic-ta} \mbox{} \\ \textsc{Inputs:} A TA $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$ and an observer given by a \dta $\textit{Obs}\xspace=(N,n_0,Y,\Sigma,\delta,O)$. \\ \textsc{Problem:} \begin{itemize} \item[(A)] Is $A$ $\textit{Obs}\xspace$-diagnosable? \item[(B)] If the answer to~(A) is ``yes'', compute the minimum $\Delta \in \mathbb N$ {s.t.}~ $A$ is $(\textit{Obs}\xspace,\Delta)$-diagnosable. \end{itemize} \end{prob} \begin{theorem} Problem~\ref{prob-dynamic-ta} is PSPACE-complete. 
\end{theorem} \begin{proof} PSPACE-hardness follows from the fact that taking an observer which always observes $\Sigma_o \subseteq \Sigma$ solves Problem~\ref{prob-diag}. We prove that Problem~\ref{prob-dynamic-ta} is in PSPACE. The following construction is an extension of the one for DES~\cite{cassez-fi-08}. Recall that $\textit{Obs}\xspace$ is complete. Define the timed automaton $A \otimes \textit{Obs}\xspace=(L \times N,(\ell_0,n_0),X \cup Y,\Sigma_{\tau,f},\rightarrow,\textit{Inv}_\otimes)$ as follows: $\textit{Inv}_\otimes(\ell,n)=\textit{Inv}(\ell)$ and the transition relation $\rightarrow$ is given by: \begin{itemize} \item $(\ell,n) \xrightarrow{\,(g \wedge g',\beta,R \cup Y')\,} (\ell',n')$ iff $\exists \lambda \in \Sigma$ {s.t.}~ $\ell \xrightarrow{\,(g,\lambda,R)\,} \ell'$, $(n',Y')=\delta(n,\lambda,g')$ and $\beta=\lambda$ if $\lambda \in O(n)$, $\beta=\tau$ otherwise; \item $(\ell,n) \xrightarrow{\,(g,\lambda,R)\,} (\ell',n)$ iff $\exists \lambda \in \{\tau,f\}$ {s.t.}~ $\ell \xrightarrow{\,(g,\lambda,R)\,} \ell'$. \end{itemize} The TA $A \otimes \textit{Obs}\xspace$ is an unfolding of $A$ which reveals what is observable at each product location. From the previous construction, it follows that: for each $\Delta \in \mathbb N$, $A$ is $(\textit{Obs}\xspace,\Delta)$-diagnosable iff $A \otimes \textit{Obs}\xspace$ is $(\Sigma,\Delta)$-diagnosable. As the size of $A \otimes \textit{Obs}\xspace$ is $|A| \times |\textit{Obs}\xspace|$, we can solve Problem~\ref{prob-dynamic-ta}.(A) in PSPACE. Problem~\ref{prob-dynamic-ta}.(B) can also be solved using a binary search, in PSPACE. \end{proof} \subsection{Synthesis of the Most Permissive Dynamic Diagnoser} In this section we address the problem of \emph{synthesizing} a \dta dynamic observer which ensures diagnosability. Following~\cite{cassez-fi-08}, we want to compute a \emph{most permissive} observer ($\varnothing$ if none exists), which gives a representation of all the good observers. 
Indeed, checking whether there exists a DTA observer $\textit{Obs}\xspace$ {s.t.}~ $A$ is $\textit{Obs}\xspace$-diagnosable is not an interesting problem: it suffices to check that $A$ is $\Sigma$-diagnosable, as the DTA observer which observes $\Sigma$ continuously will be a solution. When synthesizing (deterministic) timed automata, an important issue is the amount of \emph{resources} the timed automaton can use: this can be formally defined~\cite{BDMP-cav-2003} by the (number of) clocks, $Z$, that the automaton can use, the maximal constant $\max$, and a \emph{granularity} $\frac{1}{m}$. As an example, a TA of resource $\mu=(\{c,d\},2,\frac{1}{3})$ can use two clocks, $c$ and $d$, and clock constraints with rational constants $k/m$ satisfying $-2 \leq k/m \leq 2$, where $k \in \mathbb Z$ and $m=3$. A \emph{resource} $\mu$ is thus a triple $\mu=(Z,\max,\frac{1}{m})$ where $Z$ is a finite set of clocks, $\max \in \mathbb N$ and $\frac{1}{m} \in \mathbb Q_{>0}$ is the \emph{granularity}. DTA$_\mu$\xspace is the class of \dta of resource $\mu$. \begin{remark} Notice that the number of locations of the \dta in DTA$_\mu$\xspace is not bounded and hence this family has an infinite (yet countable) number of elements. \end{remark} We now focus on the following problem: \begin{prob}[Most Permissive Dynamic $\Delta$-Diagnoser] \label{prob-dynamic-synth} \mbox{} \\ \textsc{Inputs:} A TA $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$, $\Delta \in \mathbb N$, and a resource $\mu=(Z,\max,\frac{1}{m})$.\\ \textsc{Problem:} Compute the set $O$ of all observers in DTA$_\mu$\xspace, {s.t.}~ $A$ is $(\textit{Obs}\xspace,\Delta)$-diagnosable iff $\textit{Obs}\xspace \in O$. \end{prob} For DES, the previous problem can be solved by computing a most permissive observer, and we refer to~\cite{cassez-fi-08}, Section~5.5, for the formal definition of the most permissive observer.
This can be done in 2EXPTIME~\cite{cassez-fi-08}, and the solution is a reduction to a safety control problem under partial observation. For the timed case, we cannot use the same solution as controller synthesis under partial observation is undecidable~\cite{BDMP-cav-2003}. The solution we present for Problem~\ref{prob-dynamic-synth} is a modification of an algorithm originally introduced in~\cite{Bouyerfossacs05}. \subsection{Fault Diagnosis with DTA~\cite{Bouyerfossacs05}}\label{sec-algo} In case a TA $A$ is $\Sigma_o$-diagnosable, the diagnoser is a mapping~\cite{tripakis-02} which computes a state estimate of $A$ after a timed word $w$ is read by $A$. For DES, it is obtained by \emph{determinizing} the system, but we cannot always determinize a TA $A$ (see~\cite{AlurDill94}); moreover, testing whether a timed automaton is determinizable is undecidable~\cite{Finkel05,TripakisFolk}. P.~Bouyer and F.~Chevalier in~\cite{Bouyerfossacs05} consider the problem of deciding whether there exists a diagnoser which is a DTA using resources in $\mu$: \begin{prob}[DTA$_\mu$\xspace $\Delta$-Diagnoser~\cite{Bouyerfossacs05}] \label{prob-dtamu} \mbox{} \\ \textsc{Inputs:} A TA $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$, $\Delta \in \mathbb N$, and a resource $\mu=(Z,\max,\frac{1}{m})$.\\ \textsc{Problem:} Is there any $D \in \text{DTA}_\mu$ {s.t.}~ $A$ is $(D,\Delta)$-dia\-gnosable? \end{prob} \begin{theorem}[\cite{Bouyerfossacs05}] Problem~\ref{prob-dtamu} is 2EXPTIME-complete. \end{theorem} The solution to the previous problem is based on the construction of a \emph{two-player game}, the solution of which gives the \emph{set} of all $\text{DTA}_\mu$ diagnosers (the most permissive diagnosers) which can diagnose $A$ (or $\varnothing$ if there is none). We recall here the construction of the two-player game. Let $A=(L,\ell_0,X,\Sigma_{\tau,f},\rightarrow,\textit{Inv})$ be a TA, $\Sigma_o \subseteq \Sigma$.
Define $A(\Delta)=(L_1 \cup L_2 \cup L_3,\ell^1_0,X \cup \{z\},\Sigma_{\tau,f},\rightarrow_\Delta,\textit{Inv}_\Delta)$ as follows: \begin{itemize} \item $L_i=\{\ell^i, \ell \in L\}$, for $i\in \{1,2,3\}$, {i.e.},~ $L_i$ elements are copies of the locations in $L$, \item $z$ is a (new) clock not in $X$, \item for $\ell \in L$, $\textit{Inv}(\ell^1)=\textit{Inv}(\ell)$, $\textit{Inv}(\ell^2)=\textit{Inv}(\ell) \wedge z \leq \Delta$, and $\textit{Inv}(\ell^3)=\mbox{\textsc{true}}$, \item the transition relation is given by: \begin{itemize} \item for $i \in \{1,2,3\}$, $\ell^i \xrightarrow{\ (g,a,R)\ }_\Delta \ell'^i$ if $a \neq f$ and $\ell \xrightarrow{\ (g,a,R)\ } \ell'$, \item for $i \in \{2,3\}$, $\ell^i \xrightarrow{\ (g,f,R)\ }_\Delta \ell'^i$ if $\ell \xrightarrow{\ (g,f,R)\ } \ell'$, \item $\ell^1 \xrightarrow{\ (g,f,R \cup \{z\})\ }_\Delta \ell'^2$ if $\ell \xrightarrow{\ (g,f,R)\ } \ell'$, \item $\ell^2 \xrightarrow{\ (z=\Delta,\tau,\varnothing)\ }_\Delta \ell^3$. \end{itemize} \end{itemize} The previous construction creates $3$ copies of $A$: the system starts in copy $1$; when a fault occurs it switches to copy $2$, resetting the clock $z$; and when in copy $2$ (a fault has occurred) it can switch to copy $3$ after $\Delta$ time units. We can then define $L_1$ as the non-faulty locations, and $L_3$ as the $\Delta$-faulty locations. Given a resource $\mu=(Y,\max,\frac{1}{m})$ ($X \cap Y =\varnothing$), a \emph{minimal guard} for $\mu$ is a guard which defines a region of granularity $\mu$. We define the (symbolic) \emph{universal automaton} ${\cal U}=(\{0\},\{0\},Y,\Sigma,E_\mu,\textit{Inv}_\mu)$ by: \begin{itemize} \item $\textit{Inv}_\mu(0)=\mbox{\textsc{true}}$, \item $(0,g,a,R,0) \in E_\mu$ for each $(g,a,R)$ {s.t.}~ $a \in \Sigma$, $R \subseteq Y$, and $g$ is a minimal guard for $\mu$. \end{itemize} ${\cal U}$ is finite because $E_\mu$ is finite.
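For a single clock, the minimal guards of granularity $\frac{1}{m}$ and the edge set $E_\mu$ of ${\cal U}$ can be enumerated explicitly. A sketch (the string encoding of guards is ours; for several clocks, minimal guards would also order the fractional parts, which this toy enumeration omits):

```python
def minimal_guards(clock, maximum, m):
    """One-clock minimal guards of granularity 1/m: the point regions
    y == k/m, the open intervals k/m < y < (k+1)/m up to `maximum`,
    and the unbounded region y > maximum."""
    pts = [f"{clock}=={k}/{m}" for k in range(maximum * m + 1)]
    ivs = [f"{k}/{m}<{clock}<{k+1}/{m}" for k in range(maximum * m)]
    return pts + ivs + [f"{clock}>{maximum}"]

def universal_edges(sigma, clock, maximum, m):
    """E_mu for U with resource ({clock}, maximum, 1/m): one self-loop
    (g, a, R) per minimal guard g, letter a, and reset set R."""
    resets = [set(), {clock}]                 # the subsets of {clock}
    return [(g, a, R) for g in minimal_guards(clock, maximum, m)
            for a in sorted(sigma) for R in resets]

edges = universal_edges({"a", "b"}, "y", 1, 2)
# 6 minimal guards (3 points, 2 intervals, 1 unbounded) x 2 letters x 2 resets
assert len(edges) == 24
```

The enumeration makes both claims of the text concrete: $E_\mu$ is finite, yet for each pair $(g,a)$ there are several edges differing only in the reset set, which is the source of nondeterminism discussed next.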
Nevertheless ${\cal U}$ is not deterministic because it can choose to reset different sets of clocks $Y$ for a pair ``(guard, letter)'' $(g,a)$. To diagnose $A$, we have to find when a set of clocks has to be reset: resetting the right clocks at the right instants provides enough information to distinguish $\Delta$-faulty words from non-faulty words. The algorithm of~\cite{Bouyerfossacs05} requires the following steps: \begin{enumerate} \item define the region graph $\textit{RG}(A(\Delta) \times {\cal U})$, \item compute a \emph{projection} of this region graph: \begin{itemize} \item let $(g,a,R)$ be a label of an edge in $\textit{RG}(A(\Delta) \times {\cal U})$, \item let $g'$ be the unique minimal guard {s.t.}~ $\sem{g} \subseteq \sem{g'}$; \item define the projection $p_{\cal U}(g,a,R)$ by $(g',\lambda,R \cap Y)$ with $\lambda=a$ if $a \in \Sigma_o$ and $\lambda=\tau$ otherwise. \end{itemize} The projected automaton $p_{\cal U}(\textit{RG}(A(\Delta) \times {\cal U}))$ is the automaton $\textit{RG}(A(\Delta) \times {\cal U})$ where each label $\alpha$ is repla\-ced by $p_{\cal U}(\alpha)$. \item determinize $p_{\cal U}(\textit{RG}(A(\Delta) \times {\cal U}))$ (removing $\tau$ actions) and obtain $H_{A,\Delta,\mu}$, \item build a two-player safety game $G_{A,\Delta,\mu}$ as follows: \begin{itemize} \item each transition $s \xrightarrow{\ (g,a,Y) \ } s'$ in $H_{A,\Delta,\mu}$ yields a transition in $G_{A,\Delta,\mu}$ of the form: \begin{center} \tikz[node distance=1cm and 2.8cm]{ \node[circle,draw] (l0) {$s$}; \node[rectangle,draw,right=of l0] (l2) {$(s,g,a)$}; \node[circle,draw,right=of l2] (l1) {$s'$}; \path[->] (l0) edge node {$(g,a)$} (l2) ; \path[->] (l2) edge node {$(g,a,Y)$} (l1) ; } \end{center} \item the round-shaped states are the states of Player~1, whereas the square-shaped states are Player~0 states (the choice of the clocks to reset).
\item the $\text{Bad}$ states (for Player~0) are the states of the form $\{(\ell_1,r_1),(\ell_2,r_2),\cdots,(\ell_k,r_k)\}$ containing both a $\Delta$-faulty (in $L_3$) and a non-faulty (in $L_1$) location. \end{itemize} \end{enumerate} The main results of~\cite{Bouyerfossacs05} are: \begin{itemize} \item there is a TA $D \in$ DTA$_\mu$\xspace {s.t.}~ $A$ is $(D,\Delta)$-diagnosable iff Player~0 can win the safety game ``avoid Bad'' in $G_{A,\Delta,\mu}$, \item it follows that Problem~\ref{prob-dtamu} can be solved in 2EXPTIME as $G_{A,\Delta,\mu}$ has size doubly exponential in $A$, $\Delta$ and $\mu$, \item the acceptance problem for Alternating Turing machines of exponential space can be reduced to Problem~\ref{prob-dtamu} and thus it is 2EXPTIME-hard. \end{itemize} \subsection{Problem~\ref{prob-dynamic-synth} is in 2EXPTIME} \label{sec-synth} We now show how to modify the previous algorithm to solve Problem~\ref{prob-dynamic-synth}, and obtain the following result: \begin{theorem} Problem~\ref{prob-dynamic-synth} can be solved in 2EXPTIME. \end{theorem} \begin{proof} We modify the previous algorithm as follows: \begin{enumerate} \item the universal automaton now records the current set of observed events: each location corresponds to a choice of a subset of events to observe. Define the (symbolic) \emph{universal automaton} ${\cal U}'=(2^\Sigma,2^\Sigma,Y,\Sigma,E_\mu,\textit{Inv}_\mu)$ by: \begin{itemize} \item for $S \in 2^\Sigma$, $\textit{Inv}_\mu(S)=\mbox{\textsc{true}}$, \item $(S,g,a,R,S') \in E_\mu$ for each $S,S' \in 2^\Sigma$, $(g,a,R)$ {s.t.}~ $a \in \Sigma$, $R \subseteq Y$, and $g$ is a minimal guard for $\mu$. \end{itemize} \item when computing $\textit{RG}(A(\Delta) \times {\cal U}')$, the set of observable events (step~2 in the algorithm of section~\ref{sec-algo}) is defined according to the location $S$ of ${\cal U}'$. Formally, the projection of $a \in \Sigma$ is $a$ if the location of ${\cal U}'$ is $S$ and $a \in S$, and $\tau$ otherwise.
\end{enumerate} The size of $\textit{RG}(A(\Delta) \times {\cal U}')$ is $|L| \cdot 2^{|\Sigma|} \cdot |X \cup Y|! \cdot 2^{|X \cup Y|} \cdot K^{|X \cup Y|}$ where $K$ is the maximal constant of $A \times {\cal U}'$; it is thus exponential in $\mu$ and $\Sigma$. The determinization is thus doubly exponential in $A$, $\mu$ and $\Sigma$. We can then build a new game $G'_{A,\Delta,\mu}$ as described in section~\ref{sec-algo}. The proof that the most permissive strategy in the new game $G'_{A,\Delta,\mu}$ is the most permissive observer is along the lines of the one given in~\cite{Bouyerfossacs05} with minor modifications. Solving a safety game is linear in the size of the game and thus computing the most permissive observer of resource $\mu$ can be done in 2EXPTIME. \end{proof} \begin{remark} In~\cite{Bouyerfossacs05} it is also proved that for Event Recording Automata (ERA)~\cite{AFH94} Problem~\ref{prob-dtamu} becomes PSPACE-complete. This result does not carry over to our case, as there is still an exponential step with the choice of the sets of events to be observed. \end{remark} \section{Optimal Dynamic Observers}\label{sec-cost} In this section we extend the notion of \emph{cost} defined for finite state observers in~\cite{cassez-fi-08} to the case of timed observers. \subsection{Weighted/Priced Timed Automata} Weighted/priced timed automata were introduced in~\cite{BFH+01,ATP01}; they extend TA with \emph{prices/costs/weights} on time elapsing and on discrete transitions. \begin{definition}[Priced Timed Automata] A \emph{priced timed auto\-ma\-ton (PTA)} is a pair $(A,\textit{Cost})$ where $A=(L,\ell_0,X,$ $\Sigma_{\tau,f},E,\textit{Inv})$ is a timed automaton and $\textit{Cost}$ is a \emph{cost function} which is a mapping from $L \cup E$ to $\mathbb N$.
\endef \end{definition} Let \begin{eqnarray*} \varrho & = & (\ell_0,v_0) \xrightarrow{\delta_0} (\ell_0,v_0 + \delta_0) \xrightarrow{a_0} (\ell_1,v_1) \cdots \\ & & \cdots \xrightarrow{a_{n-1}} (\ell_n,v_n) \xrightarrow{\delta_n} (\ell_n,v_n+ \delta_n) \end{eqnarray*} be a run of $A$. We denote by $e_i=(\ell_i,(g_i,a_i,R_i),\ell_{i+1})$ the discrete transition taken from $(\ell_i,v_i+\delta_i)$ to $(\ell_{i+1},v_{i+1})$. The \emph{cost} of the run $\varrho$ is defined by: \[ \textit{Cost}(\varrho) = \sum_{i=0}^{n} \textit{Cost}(\ell_i) \cdot \delta_i + \sum_{i=0}^{n-1} \textit{Cost}(e_i)\mathpunct. \] The \emph{mean cost} of $\varrho$ is defined to be the cost per time unit and given\footnote{Runs of duration $0$ are not taken into account.} by $\textit{$\overline{\Cost}$}(\varrho)=\textit{Cost}(\varrho)/\dur(\varrho)$. The cost of runs of duration $t \in \mathbb R_{>0}$ is defined by $\textit{$\overline{\Cost}$}(t)=\sup \{ \textit{$\overline{\Cost}$}(\varrho) \ | \ \dur(\varrho)=t \}\mathpunct.$ The \emph{maximal mean cost} of $(A,\textit{Cost})$ is $\textit{$\overline{\Cost}$}(A)=\limsup_{t\rightarrow \infty} \textit{$\overline{\Cost}$}(t)$. The minimal mean cost is defined dually and denoted $\underline{\textit{Cost}}(A)$. \subsection{Cost of an Observer} To select an optimal dynamic observer which ensures $\Delta$-diagnosability, we need a metric to compare observers. We extend the one defined in~\cite{cassez-fi-08} for DES to take (real) time elapsing into account. Let $A$ be a TA and $\textit{Obs}\xspace$ a DTA observer. $\textit{Obs}\xspace$ is extended into a P(D)TA by associating costs with locations and transitions. The cost associated with a discrete transition is the cost of switching on the sensors for a set of observable events, and the cost of a location is the cost per time unit of having a set of sensors activated. Let $\varrho$ be a run of $A$.
As \textit{Obs}\xspace is deterministic (and complete) there is exactly one run of $\textit{Obs}\xspace$ the trace of which is $\sem{\textit{Obs}\xspace}(\textit{tr}(\varrho))$. Given $\varrho$, let $\sem{\textit{Obs}\xspace}(\varrho)$ be this unique run. The average cost of the run $\varrho$ observed by \textit{Obs}\xspace is $\textit{$\overline{\Cost}$}(\sem{\textit{Obs}\xspace}(\varrho))$. Given $t \in \mathbb R_{>0}$, the \emph{maximal mean cost} of runs of duration $t$ is defined by: \[ \textit{$\overline{\Cost}$}(A,\textit{Obs}\xspace,t)=\sup_{\varrho \in \runs^*(A) \wedge \dur(\varrho)=t} \{ \textit{$\overline{\Cost}$}(\sem{\textit{Obs}\xspace}(\varrho))\}\mathpunct. \] The \emph{maximal average cost} of the pair $<\!A,\textit{Obs}\xspace\!>$ is defined by \[ \textit{$\overline{\Cost}$}(<\!A,\textit{Obs}\xspace\!>)= \limsup_{t\rightarrow \infty} \textit{$\overline{\Cost}$}(A,\textit{Obs}\xspace,t) \mathpunct. \] We can then state the following problem: \begin{prob}[Cost of an Observer] \label{prob-cost-obs} \mbox{} \\ \textsc{Inputs:} A TA $A$ and $(\textit{Obs}\xspace,\textit{Cost})$ a PDTA observer. \\ \textsc{Problem:} Compute $\textit{$\overline{\Cost}$}(<\!A,\textit{Obs}\xspace\!>)$. \end{prob} \subsection{Computing the Cost of a Given Timed Observer} The computation of optimal infinite schedules for TA has been addressed in~\cite{BBL-fmsd06}. The main result of~\cite{BBL-fmsd06} is: \begin{theorem}[Minimal/Maximal Mean Cost~\cite{BBL-fmsd06}]\label{thm-pat} Given a PTA $A$, computing $\textit{$\overline{\Cost}$}$ and $\underline{\textit{Cost}}$ is PSPACE-complete. \end{theorem} The definition of the cost of an observer is exactly the definition of the maximal mean cost in~\cite{BBL-fmsd06} and thus: \begin{theorem} Problem~\ref{prob-cost-obs} is PSPACE-complete.
\end{theorem} \begin{proof} PSPACE-easiness follows from Theorem~\ref{thm-pat}: note that Theorem~\ref{thm-pat} assumes that the TA is bounded, which is not a restriction as every TA can be transformed into an equivalent (timed bisimilar) bounded TA. For PSPACE-hardness, we reduce the maximal mean cost computation problem for PDTA: to compute the maximal mean cost of a PDTA $B$, let $A$ be the universal automaton on the alphabet of $B$, consider $B$ as an observer, and solve Problem~\ref{prob-cost-obs}. This solves the maximal mean cost computation problem for PDTA and completes the hardness proof. \end{proof} \subsection{Optimal Synthesis Problem} Checking whether the mean cost of a given observer is less than $k$ requires that we have computed or are given such an observer. A more difficult version of Problem~\ref{prob-cost-obs} is to check for the existence of a cheap dynamic observer: \begin{prob}[Bounded Cost Dynamic Observer] \label{prob-bounded-cost} \mbox{} \\ \textsc{Inputs:} A TA $A=(L,\ell_0,X,\Sigma_{\tau,f},E,\textit{Inv})$, $\Delta \in \mathbb N$, $\mu$ a resource and $k \in \mathbb N$. \\ \textsc{Problem:} \begin{itemize} \item[(A)] Is there a dynamic observer $D \in$ DTA$_\mu$\xspace {s.t.}~ $A$ is $(D,\Delta)$-diagnosable and $\textit{$\overline{\Cost}$}(<\!A,D\!>) \leq k$ ? \item[(B)] If the answer to~(A) is ``yes'', compute a witness dynamic observer. \end{itemize} \end{prob} We cannot provide a proof that Problem~\ref{prob-bounded-cost} is decidable. However, we give a lower bound for Problem~\ref{prob-bounded-cost} and later discuss its exact complexity. \begin{theorem} Problem~\ref{prob-bounded-cost} is 2EXPTIME-hard. \end{theorem} \begin{proof} We reduce Problem~\ref{prob-dtamu}, which is 2EXPTIME-hard~\cite{Bouyerfossacs05}, to Problem~\ref{prob-bounded-cost}. Let $A$ be a TA for which we want to check whether there exists a DTA observer $D \in$ DTA$_\mu$\xspace {s.t.}~ $A$ is $(D,\Delta)$-diagnosable. Let $\alpha$ be a fresh letter not in $\Sigma$.
Define the automaton $B$ depicted on Figure~\ref{fig-reduc-bounded}. The upper part of $B$ generates faulty and non-faulty runs over every letter of $\Sigma$ together with $\alpha$. From each location of $A$ (bottom part), we add a $\tau$ transition to the initial state of $B$. The transitions of $A$ are not depicted. For $B$ to be diagnosable with $\Delta \geq 1$, we must have: 1) $\alpha$ always observable and 2) $\Sigma$ always observable. Moreover, if $A$ is $(\Delta,\Sigma)$-diagnosable, then $B$ is $(\Delta,\Sigma\cup\{\alpha\})$-diagnosable. Conversely, if $B$ is $(\Delta,\Sigma\cup\{\alpha\})$-diagnosable, then $A$ is $(\Delta,\Sigma)$-diagnosable. Hence $A$ is $(\Delta,\Sigma)$-diagnosable iff $B$ is $(\Delta,\Sigma\cup\{\alpha\})$-diagnosable. Define the cost of the locations of $B$ to be $1$, and the cost of the transitions to be $0$. Then $B$ is diagnosable with a DTA $D \in$ DTA$_\mu$\xspace iff there is a dynamic observer $D$ (which nevertheless has to choose $\Sigma\cup\{\alpha\}$ continuously) with $\textit{$\overline{\Cost}$}(<\!B,D\!>) \leq 1$. It follows that there exists a DTA$_\mu$\xspace diagnoser $D$ {s.t.}~ $A$ is $(\Delta,\Sigma)$-diagnosable iff $B$ is $(\Delta,O)$-diagnosable with a DTA observer $O \in$ DTA$_\mu$\xspace and $\textit{$\overline{\Cost}$}(<\!B,O\!>) \leq 1$.
\end{proof} \begin{figure}[t] \centering \begin{tikzpicture}[thick,node distance=1cm and 1cm] \small \tikzset{every state/.style={circle,minimum size=0.2cm,inner sep=0,draw=blue!50,very thick,fill=blue!20},bend angle=20} \node[state,initial] (l0) {}; \node[state] (l1) [above right=of l0] {}; \node[state] (l2) [right=of l0,xshift=1cm] {}; \node[state] (l4) [above right=of l2] {}; \node[state] (l5) [right=of l2,xshift=1cm] {}; \node (s1) [below left=of l0,xshift=-.3cm] {$\bullet$}; \node[circle,draw,inner sep=0] (s2) [right=of s1,yshift=-.7cm,xshift=-.4cm,label=-90:{$\ell_0$}] {$\bullet$}; \node (s3) [below=of s2,xshift=1.2cm] {$\bullet$}; \node (s4) [right=of s3,xshift=2cm] {$\bullet$}; \node (s5) [right=of s2,xshift=1cm] {$\bullet$}; \node[draw=black,inner sep=3pt,rounded corners,thick,fit= (s1) (s2) (s3) (s4) (s5),label=10:{$A$}] (env) {}; \path[->] (l0) edge node[pos=0.5] {$f$} (l1) edge node[swap,pos=0.5] {$\alpha$} (l2) (l1) edge [pos=0.5] node {$\alpha$} (l2) (l2) edge node {$f$} (l4) (l4) edge node {$\forall a \in \Sigma$} (l5) (l2) edge [] node {$\forall a \in \Sigma$} (l5); \path[->] (l0) edge[dashed,bend left] node {$\tau$} (s2); \path[->] (s1) edge[dashed,bend left] node[pos=0.2] {$\tau$} (l0); \path[->] (s2) edge[dashed,bend left] node[pos=0.2] {$\tau$} (l0); \path[->] (s3) edge[dashed,bend right] node {$\tau$} (l0); \path[->] (s4) edge[dashed,bend right,swap] node[pos=0.3] {$\tau$} (l0); \path[->] (s5) edge[dashed,bend right,swap] node[pos=0.1] {$\tau$} (l0); \end{tikzpicture} \caption{Automaton $B$} \label{fig-reduc-bounded} \end{figure} The status of Problem~\ref{prob-bounded-cost} is clearly unsettled, as the 2EXP\-TIME-hardness result does not even imply that it is decidable. A solution to this problem would be to mimic the one given for DES~\cite{cassez-fi-08}: solve a mean payoff \emph{timed} game with a counterpart of Zwick and Paterson's algorithm~\cite{zwick-95}, using the most permissive observers obtained in section~\ref{sec-synth}.
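For intuition, the untimed core of the Zwick--Paterson approach can be sketched in a few lines. The Python sketch below (illustrative only; the names \texttt{succ}, \texttt{weight} and \texttt{owner} are ours, and it computes only the approximation $\nu_k(s)/k$ of the game value, not the exact rational reconstruction of~\cite{zwick-95}) runs value iteration on a finite-state mean payoff game:

```python
def mean_payoff_value(succ, weight, owner, start, rounds=1000):
    """Zwick-Paterson style value iteration for a finite mean-payoff game.

    succ[s]       -- list of successor states of s
    weight[(s,t)] -- integer weight of the edge s -> t
    owner[s]      -- 0 if the maximizer moves at s, 1 for the minimizer
    Returns v_rounds(start) / rounds, which converges to the value of the
    game started at `start` with error O(1/rounds).
    """
    v = {s: 0 for s in succ}  # v_0(s) = 0 for every state
    for _ in range(rounds):
        # One Bellman step: v_{k+1}(s) = opt_t (weight(s,t) + v_k(t)).
        v = {s: (max if owner[s] == 0 else min)(
                 weight[(s, t)] + v[t] for t in succ[s])
             for s in succ}
    return v[start] / rounds
```

The timed setting adds clocks and urgency constraints on top of this finite-state picture, which is precisely where the known results stop short.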
The type of priced timed games we would have to solve has the following features: 1) they are turn-based, as one player picks (controllable moves) a set of events to be observed and then hands over to the other player, who tries to produce a confusing\footnote{A run whose trace is the trace of both a faulty and a non-faulty run.} run (uncontrollable moves); 2) they have at least two clocks (one for the system $A$ and one for the DTA observer); 3) the controllable choices are \emph{urgent}, {i.e.},~ no time can elapse in Player~1 locations. We write S-PTGA for the class of priced timed game automata defined above. Unfortunately, there is no counterpart of the general result of Zwick \& Paterson for timed automata. Only very few results are known for timed mean payoff games~\cite{BLMR-fsttcs2006,bflms-formats08,bbjlr-formats08,BFLM-hscc10} and none of them can be used in our setting. Nevertheless, due to the particular nature of the mean payoff priced timed games we construct (in the class S-PTGA), we might be able to compute the optimal choices of observable events using an algorithm similar to~\cite{BBL-fmsd06}. Hence we could obtain a 2EXPTIME algorithm for Problem~\ref{prob-bounded-cost}. \section{Conclusion}\label{sec-conclu} The results of the paper are summarized by the line ``TA'' in Table~\ref{tab-summary}. The complexity/decidability status of Problem~\ref{prob-bounded-cost} is left open. A solution to this problem would be to solve the following optimization problem on the class of S-PTGA: \begin{prob}[Optimal Infinite Schedule in S-PTGA] \mbox{}\\ \textsc{Inputs:} A S-PTGA $(A,\textit{Cost})$, a set of \emph{Bad} states and $k \in \mathbb N$.\\ \textsc{Problem:} Is there a strategy $f$ for Player~1 in $A$ {s.t.}~ $f(A)$ ($A$ controlled by $f$) avoids \emph{Bad} and satisfies $\textit{$\overline{\Cost}$}(f(A)) \leq k$?
\end{prob} \newcommand{\vtab}[1]{ \begin{tabular}[c]{c} #1 \end{tabular} } \begin{table}[t] \centering \caption{Summary of the Results} \label{tab-summary} \begin{tabular}[t]{||c|c|c|c||}\hline\hline & Static Observers & \multicolumn{2}{c||}{Dynamic Observers} \\\cline{3-4} & Min. Cardinality & Most Perm. Obs. & Optimal Observer \\\hline\hline DES & NP-Complete~\cite{cassez-acsd-07} & 2EXPTIME~\cite{cassez-acsd-07} & 2EXPTIME~\cite{cassez-tase-07} \\\hline TA & PSPACE-Complete & 2EXPTIME & 2EXPTIME-hard \\\hline\hline \end{tabular} \end{table} \bibliographystyle{IEEEtran}
\section{\textbf{Model}} We have considered the dynamics of a Brownian particle in an effective potential $U(x,t)$ and in the presence of a linear velocity dependent force, $f_s(1-\frac{v}{v_0})$, where $f_{s}$ is a constant force and $v_{0}$ is the autonomous velocity of the particle \cite{ganguly2013}. The motion of the particle is governed by the Langevin equation of motion\cite{chandrasekhar1943stochastic,jayannavar2007charged,sahoo2019transport} \begin{equation} m\frac{dv}{dt}=-\gamma v-\frac{\partial U\left( x,t\right) }{\partial x}+f_s\left( 1-\frac{v}{v_0}\right)+\xi\left(t\right). \end{equation} Here, $m$ and $v(t)$ represent the mass and instantaneous velocity of the Brownian particle, respectively, and $\gamma$ is the viscous coefficient of the medium. $\xi(t)$ is the randomly fluctuating thermal noise which satisfies the properties $<\xi(t)>=0$ and $<\xi(t_1)\xi(t_2)>=2\gamma k_BT\delta(t_{1}-t_{2})$. The angular bracket $\left\langle ...\right\rangle $ represents the ensemble average over the noise, $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature of the medium or environment. The potential $U(x,t)$ depends on both position ($x$) and time ($t$). In this work, we have studied the dynamics for three different cases. In all these studies, we mainly focus on the nature of the position and velocity autocorrelation functions. First, we have simply considered the inertial dynamics of a Brownian particle in the presence of a linear velocity dependent force, $f_s(1-\frac{v}{v_0})$. In this case, since the particle is not confined in a potential, $U(x,t)=0$, and hence Eq.~(1) reduces to \begin{equation} m\frac{dv}{dt}=-\gamma v+f_s\left( 1-\frac{v}{v_0}\right)+\xi\left( t\right).
\end{equation} By solving Eq.~(2), the velocity $v(t)$ at any instant of time is found to be \begin{equation} \begin{split} v\left( t\right) =v_0 e^{-\frac{\beta}{m} t}+\left( \frac{f_s}{m}\right) \int_{0}^{t}e^{-\frac{\beta}{m}\left( t-t^{'}\right)} dt^{'}+\\\frac{1}{m}\int_{0}^{t} e^{-\frac{\beta}{m}\left( t-t^{'}\right)} \xi\left( t^{'}\right) dt^{'}, \end{split} \end{equation} where $\beta=(\gamma+\frac{f_s}{v_0})$ and $g=2\gamma k_{B}T$. Thus, the velocity auto-correlation function, $C_v(t_{1},t_{2})=\left[\left\langle v(t_{1})v(t_{2})\right\rangle \right]$, is calculated as \begin{equation} \begin{split} C_v(t_{1},t_{2})=v_{0}^{2}e^{\dfrac{-\beta (t_{1}+t_{2})}{m}}\\+\dfrac{v_{0}f_{s}}{\beta}\left( e^{\dfrac{-\beta t_{1}}{m}}+e^{\dfrac{-\beta t_{2}}{m}}-2e^{\dfrac{-\beta (t_{1}+t_{2})}{m}}\right)\\+\left( \dfrac{f_{s}}{\beta} \right) ^{2}\left(1- e^{\dfrac{-\beta t_{2}}{m}}-e^{\dfrac{-\beta t_{1}}{m}}+e^{\dfrac{-\beta (t_{1}+t_{2})}{m}}\right)\\+\dfrac{g}{2m\beta}\left( e^{\dfrac{-\beta \vert(t_{1}-t_{2})\vert}{m}}-e^{\dfrac{-\beta (t_{1}+t_{2})}{m}}\right). \end{split} \end{equation} From this expression, the stationary state correlation function $C_{v}(t)(=<v(0)v(t)>)$ can be calculated as \begin{equation} C_{v}(t)=\left( \dfrac{f_{s}}{\beta} \right) ^{2}+\dfrac{g}{2m\beta}\left( e^{\dfrac{-\beta t}{m}}\right). \end{equation} Next, we have studied the position and velocity autocorrelation functions when the particle is confined in a harmonic well $[U(x,t)=\frac{1}{2}kx^{2}]$. The dynamics of the particle is given by \begin{equation} m\frac{dv}{dt}=-kx-\gamma v+f_s\left( 1-\frac{v}{v_0}\right)+\xi\left( t\right), \end{equation} where $k$ is the harmonic constant.
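For intuition, the dynamics of Eq.~(6) can be integrated numerically. Below is a minimal Euler--Maruyama sketch in Python (illustrative only and not part of the paper; the function and parameter names are ours). In the limit $k=0$, $T=0$ it reproduces the deterministic terminal velocity $v\to f_s/\beta$ of Eq.~(2):

```python
import math
import random

def simulate(m=1.0, gamma=1.0, fs=1.0, v0=1.0, k=1.0, kBT=1.0,
             dt=1e-3, steps=20000, seed=0):
    """Euler-Maruyama integration of Eq. (6):
        m dv/dt = -k x - gamma v + fs (1 - v/v0) + xi(t),
    with <xi(t) xi(t')> = 2 gamma kBT delta(t - t').
    Note that -gamma v + fs (1 - v/v0) = fs - beta v, beta = gamma + fs/v0.
    Starts from x = v = 0 and returns the sampled trajectories."""
    rng = random.Random(seed)
    beta = gamma + fs / v0
    sigma = math.sqrt(2.0 * gamma * kBT * dt)  # std dev of a noise increment
    x, v = 0.0, 0.0
    xs, vs = [], []
    for _ in range(steps):
        v += (dt * (fs - beta * v - k * x) + sigma * rng.gauss(0.0, 1.0)) / m
        x += dt * v
        xs.append(x)
        vs.append(v)
    return xs, vs
```

Such a sketch is useful for cross-checking the analytic correlation functions derived below against ensemble averages over many noise realizations.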
From the solution of Eq.~(6), we get the position [$x(\omega)$] and velocity [$v(\omega)$] of the particle in Fourier space as \begin{eqnarray}\nonumber x\left( \omega\right) =\left(\frac{f_s\delta\left( \omega\right)+\xi \left( \omega\right) }{-m\omega^{2}-i \omega \beta +k}\right) \end{eqnarray} and \begin{eqnarray}\nonumber v\left( \omega\right) =\omega\left( \frac{f_s\delta\left( \omega\right)+\xi \left( \omega\right) }{-im\omega^{2}+\omega \beta +ik}\right). \end{eqnarray} The spectral densities corresponding to the position, $S_x\left( \omega\right)$ [$=\vert x\left( \omega \right)\vert^{2}$], and velocity, $S_v\left( \omega\right)$ [$=\vert v\left( \omega \right)\vert^{2}$], are \begin{equation} S_{x}\left( \omega\right) =\left(\frac{f_{s}^{2}+2 \gamma k_{B}T}{\left( m \omega^{2}-k\right) ^{2}+\left( \omega \beta\right) ^{2}}\right) \end{equation} and \begin{equation} S_{v}\left( \omega\right) =\omega^{2}\left( \frac{ f_{s}^{2}+2 \gamma k_{B}T}{\left( m \omega^{2}-k\right) ^{2}+\left( \omega \beta\right) ^{2}}\right). \end{equation} By taking the inverse Fourier transform of Eqs.~(7) and (8), the position autocorrelation function, $C_x(t)$, and the velocity autocorrelation function, $C_v(t)$, at any instant of time $t$ are found to be \begin{equation} C_x\left( t\right) =\left( \frac{f_s^{2}+2\gamma k_{B}T}{2k\beta}\right) e^{ -\frac{\beta t}{2m}} \left( \cos \omega_1 t +\frac{\beta}{2m \omega_{1}} \sin \omega_1 t\right) \end{equation} and \begin{equation} C_v\left( t\right) =\left( \frac{f_s^{2}+2\gamma k_{B}T}{2m\beta}\right) e^{-\frac{\beta t}{2m}} \left( \cos \omega_1 t -\frac{\beta}{2m \omega_{1}} \sin \omega_1 t\right). \end{equation} Here, $\omega_{1}=\sqrt{\frac{k}{m}-\frac{\beta^{2}}{4m^{2}}}$. Finally, we have considered the case when the particle is confined in a harmonic well and driven by a sinusoidal driving force $A\sin(\omega t)$, with $A$ and $\omega$ as the amplitude and frequency of the drive, respectively.
The dynamics can be described as \begin{equation} m\frac{dv}{dt}=-\gamma v-kx+ A\sin \left( \omega t\right)+f_s\left( 1-\frac{v}{v_0}\right)+ \xi\left( t\right). \end{equation} By solving the dynamics and following the same procedure as in the above two cases, the power spectral densities are obtained to be \setlength\belowdisplayskip{0pt} \begin{equation} S_x\left( \omega\right) =\left(\frac{f_{s}^{2}+2 \gamma k_{B}T+\frac{A^{2}}{16}}{\left( m \omega^{2}-k\right) ^{2}+\left( \omega \beta\right) ^{2}}\right) \end{equation} and \begin{equation} S_{v}\left( \omega\right) =\omega^{2}\left( \frac{ f_{s}^{2}+2 \gamma k_{B}T+\frac{A^{2}}{16}}{\left( m \omega^{2}-k\right) ^{2}+\left( \omega \beta\right) ^{2}}\right). \end{equation} Similarly, $C_x(t)$ and $C_v(t)$ are calculated to be \begin{equation} C_x\left( t\right) =P e^{ -\frac{\beta t}{2m}} \left( \cos \omega_1 t + \frac{\beta}{2m \omega_{1}} \sin \omega_1 t\right) \end{equation} and \begin{equation} C_v\left( t\right) =Q e^{ -\frac{\beta t}{2m}}\left( \cos \omega_1 t - \frac{\beta}{2m \omega_{1}} \sin \omega_1 t\right), \end{equation} with $P=\left( \frac{f_s^{2}+2\gamma k_{B}T+\frac{A^{2}}{16}}{2k\beta }\right)$ and $Q=\left( \frac{f_s^{2}+2\gamma k_BT+\frac{A^{2}}{16}}{2m\beta}\right)$. In the case of a confined harmonic particle, we have also calculated the correlation $C^{T}_{x}$ corresponding to the shift relative to the average position of the particle, which has the form \begin{equation} C^{T}_{x} =\left(\frac{\gamma k_{B}T}{k\beta}\right) e^{ -\frac{\beta t}{2m}} \left( \cos \omega_1 t +\frac{\beta}{2m \omega_{1}} \sin \omega_1 t\right). \end{equation} This is simply the thermal part of the correlation function $C_{x}(t)$ in both Eqs.~(9) and (14). \section{\textbf{Results and Discussion}} In Fig.~1, we have plotted the velocity autocorrelation function as a function of time [Eq.~(5)] for different values of $f_{s}$.
It is observed that $C_{v}(t)$ decays exponentially with time, reflecting that the correlation in velocity decreases with time. In the longer time regime for $f_{s}=0$, $C_{v}(t)$ approaches zero due to decorrelation. When the particle is subjected to a linear velocity dependent force, $C_{v}(t)$ also decays exponentially with time and gets saturated to a finite value in the time asymptotic regime. This clearly indicates that the correlation between the velocities at two different times still exists even after the system has reached the steady state. The critical time beyond which $C_{v}(t)$ gets saturated decreases with increase in the $f_s$ value. In the absence of the force ($f_{s}=0$), the particle moves freely. When the force is applied, the motion of the particle is controlled and driven in the direction of the force. As a result, the free movement of the particle is restricted. As the value of $f_{s}$ increases, the particle attains the steady state faster, leading to a reduction in the critical time. On the other hand, for a fixed $f_s$ value, $C_{v}(t)$ decays faster with $\gamma$. The nonzero value of the velocity autocorrelation also reflects the equilibrium solution and energetic stability of the particle in the time asymptotic limit. The equilibrium value of course depends on the nature of the velocity dependent force. A similar conclusion has been drawn for a particular case of the model studied in Ref.~[\onlinecite{karmeshu1976motion}]. In their model, the viscous force consists of a deterministic part which represents the intrinsic damping of the medium and a stochastic part which is basically a random velocity dependent force. Further, they have considered two different cases, i.e., in the presence and in the absence of a random driving force. In the former case, when the coefficient of the random velocity dependent force is a constant, our model can be treated as a special case of the model used in Ref.~[\onlinecite{karmeshu1976motion}].
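The saturation just described can be checked directly from Eq.~(5). The short Python helper below (ours, for illustration only) evaluates the stationary correlation $C_v(t)=(f_s/\beta)^2+\frac{g}{2m\beta}e^{-\beta t/m}$ and exhibits both the finite plateau $(f_s/\beta)^2$ for $f_s>0$ and the full decorrelation at $f_s=0$:

```python
import math

def Cv(t, fs=1.0, gamma=1.0, v0=1.0, m=1.0, kBT=1.0):
    """Stationary velocity autocorrelation of Eq. (5):
        C_v(t) = (fs/beta)^2 + g/(2 m beta) * exp(-beta t / m),
    with beta = gamma + fs/v0 and g = 2 gamma kBT."""
    beta = gamma + fs / v0
    g = 2.0 * gamma * kBT
    return (fs / beta) ** 2 + g / (2.0 * m * beta) * math.exp(-beta * t / m)
```

The decay rate $\beta/m=(\gamma+f_s/v_0)/m$ grows with $f_s$, which is the analytic counterpart of the shrinking critical time noted above.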
\begin{figure}[hbtp] \centering \includegraphics[scale=0.6]{fig1.eps} \caption{$C_v(t)$ [Eq.~(5)] as a function of $t$ for different values of $f_s$. The other parameters ($\gamma$, $v_0$, and $m$) are fixed to $1$.} \end{figure} The power spectral density for the position of the particle [$S_x(\omega)$] and the corresponding autocorrelation function [$C_{x}(t)$] for various parameter regimes of the model, when the particle is confined in a harmonic well, are presented in Fig.~2. Figures~2(a) and 2(b) show $S_x$ and $C_x$ as a function of $\omega$ and $t$, respectively, for various $f_s$ values. The $S_x(\omega)$ spectra for all values of $f_{s}$ exhibit symmetric behaviour with exponential tails on either side, as in the case of a driven harmonic system\cite{SAHOO20086284}. For $f_{s}=0$, the spectrum has a double-peaked structure. The optimum values of $\omega$ at which the spectrum $S_{x}(\omega)$ shows peaks are obtained by equating $\frac{dS_{x}(\omega)}{d\omega}$ to zero. For any value of $f_{s}$, the peaks are found either at $\omega=0$ or at $\omega=\pm \sqrt{\frac{k}{m}-\frac{\beta^{2}}{2m^{2}}}$. For $f_{s}=0$, the peaks are found at $\omega=\pm \sqrt{\frac{k}{m}-\frac{\gamma^{2}}{2m^{2}}}$. With increase in the $f_{s}$ value, the double-peaked structure of the spectrum is transformed into a single peak, centered at zero. However, the intensity of the spectrum is found to be reduced initially and then to increase for larger $f_{s}$ values, as one can see from Eq.~(7). The position autocorrelation function $C_x(t)$ [Fig.~2(b)] for $f_{s}=0$ shows a non-monotonic variation with time. In the lower $t$ range, it decays exponentially, attains a minimum with a negative value in the intermediate time range, and then attains the zero value in the long time limit. This showcases the conventional behaviour of the position autocorrelation function\cite{wright2003forward}.
The approach of $C_{x}(t)$ to zero reflects that the positions at different times get decorrelated in the time asymptotic limit ($t \rightarrow \infty$). As $f_{s}$ increases, the minimum gradually disappears and the variation of $C_{x}$ becomes more and more monotonic with $t$. In the long time limit, $C_{x}(t)$ approaches zero much faster than in the case of $f_{s}=0$. This is because, as the value of $f_{s}$ increases, the particle mostly follows the direction of the force and the probability of changing its position inside the harmonic well decreases. As a result, the non-monotonic behaviour of $C_x(t)$ gets suppressed and $C_x(t)$ approaches zero comparatively faster than for small $f_{s}$ values. Figures~2(c) and 2(d) depict $S_{x}(\omega)$ and $C_{x}(t)$, respectively, for different values of $\gamma$. In Fig.~2(c), it is observed that the peak value of the $S_{x}(\omega)$ spectrum increases and the spectrum becomes sharper with increase in the $\gamma$ value. The corresponding $C_x(t)$ in Fig.~2(d) decays with time and does not show any systematic behaviour with $\gamma$. The behaviour of the position correlation with $\gamma$ is not well understood because of the complex interplay of the dynamics. Similarly, $S_{x}(\omega)$ and $C_{x}(t)$ are calculated for different values of $k$ and are presented in Figs.~2(e) and 2(f), respectively. The peak value in the $S_{x}(\omega)$ spectrum decreases and the spectrum becomes broader with increase in $k$ [Fig.~2(e)], which is opposite to the behaviour observed for varying $\gamma$ values. The corresponding $C_x(t)$ in Fig.~2(f) shows an exponentially decaying behaviour with time and becomes zero in the long time limit. With increase in the value of $k$, the variation of $C_{x}(t)$ with $t$ becomes more and more non-monotonic and approaches the conventional damped oscillatory behaviour in the larger $k$ limit.
As the value of $k$ increases, the particle becomes more and more confined in the well and the random fluctuations of the particle across the mean position of the harmonic well increase. As a result, the fluctuations in the spectrum increase with increase in $k$, thereby suppressing the $S_{x}(\omega)$ spectral intensity and producing a damped oscillatory behaviour of $C_{x}(t)$. \begin{figure}[htbp] \includegraphics[scale =0.7]{fig2.eps} \caption{For different values of $f_{s}$, (a) $S_{x}$ vs $\omega$ [Eq.~(7)] and (b) $C_{x}$ vs $t$ [Eq.~(9)], fixing $\gamma=k=1$. For different values of $\gamma$, (c) $S_{x}$ vs $\omega$ [Eq.~(7)] and (d) $C_{x}$ vs $t$ [Eq.~(9)], fixing $f_{s}=k=1$. For different values of $k$, (e) $S_{x}$ vs $\omega$ [Eq.~(7)] and (f) $C_{x}$ vs $t$ [Eq.~(9)], fixing $f_{s}=\gamma=1$. The other common fixed parameters are $v_0=m=1$.} \end{figure} The spectral density for the velocity of the particle [$S_{v}(\omega)$] and the corresponding $C_{v}(t)$ in various parameter regimes of the model, when the particle is confined in a harmonic well, are plotted in Fig.~3. In Figs.~3(a) and 3(b), $S_{v}(\omega)$ and the corresponding $C_v(t)$ are shown for different values of $f_{s}$. The symmetric nature of $S_{v}(\omega)$ resembles the classical behaviour, as expected \cite{wright2003forward}. By equating $\frac{dS_{v}(\omega)}{d\omega}$ to zero, the peak positions in the $S_{v}(\omega)$ spectrum are obtained at $\omega=\pm \sqrt{\frac{k}{m}}$ and hence are independent of the $f_{s}$ value. For this reason, the peaks in the spectrum appear at the same $\omega$ irrespective of the change in $f_{s}$ values. However, the peak heights in the $S_{v}(\omega)$ spectrum get suppressed with increase in $f_{s}$ and the corresponding $C_v(t)$ decays exponentially with $t$. For $f_{s}=0$, the nature of the curve follows the damped oscillatory behaviour expected classically\cite{wright2003forward}.
In the absence of the linear velocity dependent force, the particle is confined in a harmonic trap and oscillates randomly across the mean position of the well, making both forward and backward movements frequently. Therefore, $C_{v}(t)$ oscillates between positive and negative values and finally becomes zero in the $t\rightarrow\infty$ limit. With increase in $f_{s}$, the particle is dragged and follows the direction of the linear velocity dependent force. As a result, the harmonic effect is overwhelmed and the oscillatory behaviour of the particle gets suppressed for larger $f_{s}$ values. This leads to a faster decay of $C_{v}(t)$. In Figs.~3(c) and 3(d), we have plotted $S_v(\omega)$ and the corresponding $C_v(t)$ for different values of $\gamma$. We observe that with increase in $\gamma$, the peak in the $S_v(\omega)$ spectrum is suppressed and the spectrum becomes broader. However, the peak positions remain unchanged with varying $\gamma$. The corresponding $C_{v}(t)$ decays with $t$ and approaches zero in the time asymptotic limit. A weak damped oscillatory behaviour is also observed for $C_{v}(t)$, but the variation of $C_v(t)$ with $\gamma$ is found to be not systematic. It is expected that with increase in $\gamma$, the motion of the particle should be obstructed by the presence of viscous drag, which may reduce the harmonic oscillatory behaviour. However, in the present case, the non-systematic variation of $C_{v}(t)$ with $\gamma$ implies a complex interplay of the dynamics and reflects its non-equilibrium nature. In Figs.~3(e) and 3(f), we have plotted $S_v(\omega)$ and the corresponding $C_v(t)$ for different values of $k$. It is observed that the peaks in the $S_{v}(\omega)$ spectrum shift towards larger positive and negative $\omega$ values, respectively, with increase in $k$, as expected. The corresponding $C_v(t)$ shows a more prominent damped oscillatory behaviour with increase in $k$.
This is due to the fact that the harmonic confinement becomes stronger with increase in $k$. The probability of oscillatory motion of the particle across the mean position of the well increases, with the particle reversing its direction of motion frequently \cite{wright2003forward}. \begin{figure}[htbp] \centering \includegraphics[scale =0.75]{fig3.eps} \caption{For different values of $f_{s}$, (a) $S_{v}$ vs $\omega$ [Eq.~(8)] and (b) $C_{v}$ vs $t$ [Eq.~(10)], fixing $\gamma=k=1$. For different values of $\gamma$, (c) $S_{v}$ vs $\omega$ [Eq.~(8)] and (d) $C_{v}$ vs $t$ [Eq.~(10)], fixing $f_{s}=k=1$. For different values of $k$, (e) $S_{v}$ vs $\omega$ [Eq.~(8)] and (f) $C_{v}$ vs $t$ [Eq.~(10)], fixing $f_{s}=\gamma=1$. The other common fixed parameters are $v_0=m=1$.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale =0.69]{fig4.eps} \caption{For different values of $f_{s}$, (a) $S_{x}$ vs $\omega$ [Eq.~(12)] and (b) $C_{x}$ vs $t$ [Eq.~(14)], fixing $\gamma=k=A=1$. For different values of $A$, (c) $S_{x}$ vs $\omega$ [Eq.~(12)] and (d) $C_{x}$ vs $t$ [Eq.~(14)], fixing $k=\gamma=f_{s}=1$. The other common fixed parameters are $v_0=m=1$.} \end{figure} Figure~4 presents $S_{x}$ vs $\omega$ and the corresponding $C_{x}$ vs $t$ for different values of $f_{s}$ and in different parameter regimes of the model, when the particle is confined in a harmonic well and is subjected to an external driving force $A\sin(\omega t)$. The behaviour of $S_{x}(\omega)$ [Fig.~4(a)] and $C_{x}(t)$ [Fig.~4(b)] is similar to that observed in Figs.~2(a) and 2(b) in the absence of the sinusoidal driving force. The intensity of the $S_x(\omega)$ spectrum [Fig.~4(c)] is found to increase with increase in the amplitude of the drive ($A$). The corresponding $C_{x}(t)$ [Fig.~4(d)] decays exponentially with time and, in the transient regime, the magnitude of $C_{x}(t)$ is found to increase with increase in $A$.
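The growth of the spectral intensity with $A$ follows directly from the numerator of Eq.~(12). As a quick sanity check (ours, not part of the paper), the helper below evaluates Eq.~(12), which reduces to Eq.~(7) for $A=0$ and reproduces the peak positions $\omega=\pm\sqrt{k/m-\beta^{2}/(2m^{2})}$ quoted earlier:

```python
def Sx(w, A=1.0, fs=1.0, gamma=1.0, v0=1.0, m=1.0, k=1.0, kBT=1.0):
    """Position spectral density, Eq. (12); A = 0 recovers Eq. (7):
        S_x(w) = (fs^2 + 2 gamma kBT + A^2/16)
                 / ((m w^2 - k)^2 + (w beta)^2),
    with beta = gamma + fs/v0."""
    beta = gamma + fs / v0
    num = fs ** 2 + 2.0 * gamma * kBT + A ** 2 / 16.0
    return num / ((m * w ** 2 - k) ** 2 + (w * beta) ** 2)
```

Since the drive only enters through the $\omega$-independent numerator, raising $A$ rescales the whole spectrum without moving its peaks.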
We have also studied $S_{x}(\omega)$ and $C_{x}(t)$ for different values of $\gamma$ and $k$, fixing the other parameters to unity, but no significant change in behaviour is observed. Finally, for a confined harmonic particle, we have exactly calculated the correlation function corresponding to the shift relative to the average position of the particle, which is given by the expression in Eq.~(16). From this expression, it is confirmed that this term is due to the thermal contribution to the position correlation function and is also $f_{s}$ dependent. This term survives and contributes to the whole position correlation function $C_{x}(t)$ even in the absence of the linear velocity dependent force, i.e., in the $f_{s} \rightarrow 0$ limit. However, in the athermal regime, i.e., in the $T \rightarrow 0$ limit, the correlation function is entirely due to the presence of the linear velocity dependent force $f_{s}$. \begin{figure} \centering \includegraphics[scale =0.7]{fig5.eps} \caption{For different values of $f_{s}$, (a) $S_{v}$ vs $\omega$ [Eq.~(13)] and (b) $C_{v}$ vs $t$ [Eq.~(15)], fixing $k=\gamma=A=1$. For different values of $A$, (c) $S_{v}$ vs $\omega$ [Eq.~(13)] and (d) $C_{v}$ vs $t$ [Eq.~(15)], fixing $k=\gamma=f_{s}=1$. The other common fixed parameters are $v_0=m=1$.} \end{figure} In Fig.~5, we have plotted $S_v(\omega)$ and the corresponding $C_v(t)$ for different parameter regimes of the model. The overall behaviour of $S_{v}(\omega)$ [Fig.~5(a)] and $C_{v}(t)$ [Fig.~5(b)] is found to be similar to that observed in Figs.~3(a) and 3(b), respectively, for varying $f_{s}$ in the absence of the sinusoidal driving force. The peak in the $S_v(\omega)$ spectrum increases with increase in $A$. The corresponding $C_v(t)$ decays exponentially with time and approaches zero in the time asymptotic limit. At a particular instant of time, the magnitude of $C_v(t)$ increases with increase in $A$.
This implies that the correlation between velocities at different times increases with increase in the amplitude of the drive. This is because the particle, while randomly fluctuating about the mean position of the well, takes advantage of the sinusoidal drive and follows the nature of the driving force, giving a positive contribution to $C_{v}(t)$. The behaviour of $S_{v}(\omega)$ and $C_{v}(t)$ is also studied for varying $k$ and $\gamma$, but no visible changes are observed. \section{Conclusion} To understand the dynamical properties of a Brownian particle confined in a harmonic well and in the presence of a linear velocity dependent force, we have exactly calculated and analyzed the position and velocity autocorrelation functions. It is observed that in the presence of a linear velocity dependent force and when the particle is not confined, the velocity autocorrelation function decays exponentially with time and saturates in the time asymptotic limit. The nonzero and constant value in the long time limit clearly indicates that a finite correlation still exists in the time asymptotic limit. This also confirms that the mean energy approaches its equilibrium value and provides a stable solution for the particle in the time asymptotic limit, depending on the nature of the velocity dependent force. When the particle is confined in a harmonic well, the velocity autocorrelation function shows a damped oscillatory behaviour before approaching zero in the long time limit. A non-systematic variation of the correlation is observed with the viscous coefficient of the medium. When the particle is further subjected to a sinusoidal driving force, the damped oscillatory behaviour is suppressed, the velocity autocorrelation function decays exponentially with time, and the steady state is approached faster. It is worth mentioning that our results are applicable to simple models of molecular motors obeying a linear force-velocity relation.
This is possible if we assume that the motors are self-propelled particles in a Langevin heat bath and that the self-propulsion mechanism is due to the autonomous linear velocity dependent force, ignoring all other specific details of the self-propulsion mechanism. \section{Acknowledgments} We would like to thank Arnab Saha for his valuable suggestions. We also thank Debasish Chaudhury for useful discussions during the 7th Indian Statistical Physics Community Meeting (Code:ICTS/ispsm2020/02) held at ICTS. MS acknowledges the INSPIRE Faculty award (IFA 13 PH-66) by the Department of Science and Technology and the UGC Faculty recharge program (FRP-56055) for the financial support.\\ \textbf{Contribution of Authors:} MS designed the research problem, developed the model and supervised the complete work. Some initial calculations were done by SS, and AN completed all the calculations. AN and MS analyzed the data and MS wrote the final manuscript. \bibliographystyle{unsrtnat}
\section{Introduction} \label{sec:intro} Hammond and Lewis \cite{Ha-Le} found some elementary results for $2$-colored partitions mod $5$. Let $E(q) = \prod_{n=1}^\infty(1-q^n)$, and $$ \sum_{n=0}^\infty p_{-2}(n) q^n = \frac{1}{E(q)^2}, $$ which is the generating function for pairs of partitions $(\pi_1,\pi_2)$ (or $2$-colored partitions). Throughout this paper we refer to such pairs of partitions as {\it bipartitions}. It is not hard to show that \begin{equation} p_{-2}(5n+2) \equiv p_{-2}(5n+3) \equiv p_{-2}(5n+4) \equiv 0 \pmod{5}. \mylabel{eq:2colcongs} \end{equation} Hammond and Lewis \cite{Ha-Le} found a crank for these congruences. By crank we mean a statistic that divides the relevant partitions into equinumerous classes. They define \begin{equation} \mbox{birank}(\pi_1,\pi_2) = \#(\pi_1) - \#(\pi_2), \mylabel{eq:birankdef} \end{equation} where $\#(\pi)$ denotes the number of parts in the partition $\pi$. They show that the residue of the birank mod $5$ divides the bipartitions of $n$ into $5$ equal classes provided $n\equiv2$, $3$ or $4\pmod{5}$. The proof is elementary. It relies on Jacobi's triple product identity and the method of \cite{Ga88}, which uses roots of unity. We have found two other analogs. \bigskip \noindent \textbf{First Analog - The Dyson-birank} \noindent Dyson \cite{Dy44} defined the \textit{rank} of a partition as the largest part minus the number of parts. We define \begin{equation} \mbox{Dyson-birank}(\pi_1,\pi_2) = \mbox{rank}(\pi_1) + 2\,\mbox{rank}(\pi_2). \mylabel{eq:Dysonbirankdef} \end{equation} In Section \sect{Dbirank}, we show that the residue of the Dyson-birank mod $5$ divides the bipartitions of $n$ into $5$ equal classes provided $n\equiv2$, or $4\pmod{5}$. Unfortunately the Dyson-birank does not work if $n\equiv3\pmod{5}$. Nonetheless, for the other residue classes this is a surprising and deep result because of the nature of the rank generating function. 
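The congruences \eqn{2colcongs} themselves are easy to check numerically; the following Python sketch (our own verification, independent of the birank constructions discussed here) expands $1/E(q)^2$ as a truncated power series:

```python
# Expand 1/E(q)^2 = prod_{k>=1} (1-q^k)^{-2}, the generating function for
# bipartitions, and check p_{-2}(5n+2) = p_{-2}(5n+3) = p_{-2}(5n+4) = 0 (mod 5).
N = 100
p2 = [0] * (N + 1)
p2[0] = 1
for k in range(1, N + 1):
    for _ in range(2):              # two colours: divide by (1 - q^k) twice
        for n in range(k, N + 1):
            p2[n] += p2[n - k]
assert all(p2[n] % 5 == 0 for n in range(N + 1) if n % 5 in (2, 3, 4))
print(p2[:8])                       # [1, 2, 5, 10, 20, 36, 65, 110]
```

In particular $p_{-2}(3)=10$, matching the ten bipartitions of $3$ tabulated in Section \sect{HLbirank}.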
The proof depends on known results for the rank mod $5$ due to Atkin and Swinnerton-Dyer \cite{At-Sw}. \bigskip \noindent \textbf{Second Analog - The $5$-core-birank} \noindent In \cite{Ga-Ki-St} new statistics were defined in terms of $t$-cores which gave new combinatorial interpretations of Ramanujan's partition congruences mod $5$, $7$ and $11$. For example, for a partition $\pi$ the \textit{$5$-core-crank} is defined as \begin{equation} \mbox{$5$-core-crank}(\pi) = r_1 + 2 r_2 - 2 r_3 - r_4, \mylabel{eq:GKScrankdef} \end{equation} where $r_j$ is the number of cells labelled $j$ in the $5$-residue diagram of $\pi$. Then in \cite{Ga-Ki-St} we proved combinatorially that the residue of the $5$-core-crank divides the partitions of $5n+4$ into $5$ equal classes. We define \begin{equation} \mbox{$5$-core-birank}(\pi_1,\pi_2) = \mbox{$5$-core-crank}(\pi_1) + 2\,(\mbox{$5$-core-crank}(\pi_2)). \mylabel{eq:GKSbirankdef} \end{equation} In Section \sect{5cbirank}, we show that the $5$-core-birank divides the bipartitions of $n$ into $5$ equal classes for $n\equiv2$, $3$ or $4\pmod{5}$. This is quite a surprising result. The proof relies on the $5$-dissection of the $5$-core-crank generating function for $5$-cores. The crank of a partition is defined to be the largest part if it contains no ones and otherwise it is the difference between the number of parts larger than the number of ones and the number of ones. The crank gives a combinatorial interpretation of Ramanujan's partition congruences mod $5$, $7$ and $11$ and solves a problem of Dyson \cite{Dy44}, \cite[p.52]{Dy96}. See \cite{An-Ga88}. This crank is different from the $5$-core crank given in \cite{Ga-Ki-St}. It is natural to ask whether there is a crank analog of the birank. This question has been answered in part by Andrews \cite{An08}. In Section \sect{bicrank}, we consider Andrews's result and how it may be extended. In \cite{An08}, Andrews also considered congruences for more general multipartitions.
In Section \sect{ghlbirank}, we give multipartition analogs of the Hammond-Lewis birank which explain these more general congruences. In Section \sect{mcranks}, we extend Andrews's bicrank to multicranks of what we call extended multipartitions, and give alternative explanations of our multipartition congruences. In Section \sect{end}, we close with some further problems. \subsection*{Notation} For a partition $\pi$ we denote the sum of parts by $\abs{\pi}$. We will use the standard $q$-notation. $$ (z;q)_n=(z)_n= \begin{cases} \prod_{j=0}^{n-1}(1-zq^j), & n>0 \\ 1, & n=0, \end{cases} $$ and $$ (z;q)_\infty=(z)_\infty = \lim_{n\to\infty} (z;q)_n =\prod_{n=1}^\infty (1-z q^{(n-1)}), $$ where $\abs{q}<1$. We will also use the following notation for Jacobi-type theta-products. $$ J_{a,m}(q) := (q^a;q^m)_\infty (q^{m-a};q^m)_\infty (q^m;q^m)_\infty. $$ \section{The Hammond-Lewis Birank} \label{sec:HLbirank} For completeness we include some details of the Hammond-Lewis birank. For a bipartition $\bpi=\tpi$ we denote the sum of parts by \begin{equation} \abs{\bpi} = \abs{\pi_1} + \abs{\pi_2}. \mylabel{eq:abspi} \end{equation} We denote the Hammond-Lewis birank by \begin{equation} \mbox{HL-birank}(\bpi) = \#(\pi_1) - \#(\pi_2), \mylabel{eq:HLbirankdef} \end{equation} where $\#(\pi)$ denotes the number of parts in the partition $\pi$. The HL-birank generating function is \begin{equation} \sum_{\bpi=(\pi_1,\pi_2)} z^{\HLbr} q^{\abs{\bpi}} = \frac{1}{(zq;q)_\infty (z^{-1}q;q)_\infty}. \mylabel{eq:HLgenfunc} \end{equation} We let $N_{\HL}(m,t,n)$ denote the number of bipartitions $\bpi=\tpi$ with HL-birank congruent to $m\pmod{t}$. Suppose $\zeta$ is a primitive $5$th root of unity.
By letting $z=\zeta$ in \eqn{HLgenfunc} and using Jacobi's triple product identity, Hammond and Lewis found that \begin{align} \sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k N_{\HL}(k,5,n) q^n &= \frac{1}{(\zeta q;q)_\infty (\zeta^{-1}q;q)_\infty} = \frac{(\zeta^2q;q)_\infty (\zeta^{-2}q;q)_\infty(q;q)_\infty} {(q^5;q^5)_\infty},\mylabel{eq:birankmod5}\\ &=(q^{25};q^{25})_\infty\left( \frac{1}{J_{1,5}(q^5)} + (\zeta + \zeta^{-1}) \frac{q}{J_{2,5}(q^5)} \right). \nonumber \end{align} Since the coefficient of $q^n$ on the right side of \eqn{birankmod5} is zero when $n\equiv2$, $3$ or $4\pmod{5}$, Hammond and Lewis's main result follows. \begin{theorem} \cite{Ha-Le} \label{thm:thm1} The residue of the HL-birank mod $5$ divides the bipartitions of $n$ into $5$ equal classes provided $n\equiv2$, $3$ or $4\pmod{5}$. \end{theorem} We illustrate Theorem \thm{thm1} for the case $n=3$. \begin{equation*} \begin{array}{rr} \mbox{Bipartitions of $3$} & \mbox{HL-birank (mod $5$)}\\ (3,-) & 1-0 \equiv 1\\ (2+1,-) & 2-0 \equiv 2\\ (1+1+1,-) & 3-0 \equiv 3\\ (2,1) & 1-1 \equiv 0\\ (1+1,1) & 2-1 \equiv 1\\ (1,2) & 1-1 \equiv 0\\ (1,1+1) & 1-2 \equiv4\\ (-,3) & 0-1 \equiv 4\\ (-,2+1) & 0-2 \equiv 3\\ (-,1+1+1) & 0-3 \equiv 2\\ \end{array} \end{equation*} Thus $$ N_{\HL}(0,5,3) = N_{\HL}(1,5,3) = N_{\HL}(2,5,3) = N_{\HL}(3,5,3) = N_{\HL}(4,5,3) = 2, $$ and we see that the residue of the HL-birank mod $5$ divides the $10$ bipartitions of $3$ into $5$ equal classes. \section{The Dyson-Birank} \label{sec:Dbirank} Dyson \cite{Dy44}, \cite[p.52]{Dy96} defined the rank of a partition as the largest part minus the number of parts. We define the Dyson-analog of the birank for bipartitions $\bpi=\tpi$ by \begin{equation} \mbox{Dyson-birank}(\bpi) = \mbox{rank}(\pi_1) + 2\,\mbox{rank}(\pi_2). 
\mylabel{eq:Dysonbirankdef2} \end{equation} In this section we prove \begin{theorem} \label{thm:thm2} The residue of the Dyson-birank mod $5$ divides the bipartitions of $n$ into $5$ equal classes provided $n\equiv2$, or $4\pmod{5}$. \end{theorem} We let $N_{\DY}(m,t,n)$ denote the number of bipartitions $\bpi=\tpi$ with Dyson-birank congruent to $m\pmod{t}$. We illustrate Theorem \thm{thm2} for the case $n=2$. \begin{equation*} \begin{array}{rr} \mbox{Bipartitions of $2$} & \mbox{Dyson-birank (mod $5$)}\\ (2,-) & 1+0 \equiv 1\\ (1+1,-) & -1+0 \equiv 4\\ (1,1) & 0+0 \equiv 0\\ (-,2) & 0+2 \equiv 2\\ (-,1+1) & 0-2 \equiv 3\\ \end{array} \end{equation*} Thus $$ N_{\DY}(0,5,2) = N_{\DY}(1,5,2) = N_{\DY}(2,5,2) = N_{\DY}(3,5,2) = N_{\DY}(4,5,2) = 1, $$ and we see that the residue of the Dyson-birank mod $5$ divides the $5$ bipartitions of $2$ into $5$ equal classes. We note that Theorem \thm{thm2} does not hold for $n\equiv 3\pmod{5}$. The first counterexample occurs when $n=13$. The Dyson-birank mod $5$ fails to divide the 1770 bipartitions of $13$ into $5$ equal classes. We have $$ N_{\DY}(0,5,13) = 358,\quad\mbox{but}\quad N_{\DY}(1,5,13) = N_{\DY}(2,5,13) = N_{\DY}(3,5,13) = N_{\DY}(4,5,13) = 353. $$ To prove Theorem \thm{thm2} we need the $5$-dissection of the rank generating function when $z=\zeta$. The Dyson-rank generating function is \begin{equation} f(z,q) = \sum_{\pi} z^{\srank(\pi)} q^{\abs{\pi}} = 1 + \sum_{n=1}^\infty \frac{q^{n^2}}{(zq;q)_n (z^{-1}q;q)_n}. \mylabel{eq:rankgenfunc} \end{equation} We let $N(m,t,n)$ denote the number of ordinary partitions of $n$ with rank congruent to $m$ mod $t$. 
Then \begin{align} f(\zeta,q)=&\sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k N(k,5,n) q^n =1+\sum_{n=1}^\infty \frac{q^{n^2}}{(\zeta q;q)_n ({\zeta}^{-1}q;q)_n} \mylabel{eq:fdissect}\\ &= (A(q^5) - (3 + \zeta^2 + \zeta^3)\,\phi(q^5)) + q\,B(q^5) + q^2 (\zeta + \zeta^4)\, C(q^5) \nonumber\\ &\qquad +q^3 ( (1+\zeta^2+\zeta^3)\, D(q^5) + (1 + 2\zeta^2 + 2\zeta^3)\,\psi(q^5)), \nonumber \end{align} where \begin{align} A(q) &= \frac{E^2(q)\,J_{2,5}(q)}{J_{1,5}^2(q)},\mylabel{eq:Adef}\\ B(q) &= \frac{E^2(q)}{J_{1,5}(q)},\mylabel{eq:Bdef}\\ C(q) &= \frac{E^2(q)}{J_{2,5}(q)},\mylabel{eq:Cdef}\\ D(q) &= \frac{E^2(q)\,J_{1,5}(q)}{J_{2,5}^2(q)},\mylabel{eq:Ddef}\\ \phi(q) &=-1+\sum_{n=0}^\infty\frac{q^{5n^2}}{(q;q^5)_{n+1}(q^4;q^5)_n} \mylabel{eq:phidef}\\ &= \frac{q}{E(q^5)}\sum_{m=-\infty}^\infty (-1)^m\frac{q^{\frac{15}2m(m+1)}}{1-q^{5m+1}},\nonumber \\ \noalign{\mbox{and}} \psi(q) &=\dfrac1{q}\biggl\{-1+\sum_{n=0}^\infty\frac{q^{5n^2}}{(q^2;q^5)_{n+1}(q^3;q^5)_n} \biggr\} \mylabel{eq:psidef}\\ &=\frac{q}{E(q^5)} \sum_{m=-\infty}^\infty (-1)^m\frac{q^{\frac{15}2m(m+1)}}{1-q^{5m+2}}. \nonumber \end{align} Equation \eqn{fdissect} has an unusual history. It is originally due to Ramanujan, since it appears in the Lost Notebook. It is closely related to Dyson's conjectures on the rank \cite{Dy44}, which were proved by Atkin and Swinnerton-Dyer \cite{At-Sw}. As pointed out in \cite{Ga88} and \cite{Ga88b}, equation \eqn{fdissect} is actually equivalent to one of Atkin and Swinnerton-Dyer's main results. Dyson, Atkin and Swinnerton-Dyer were unaware of Ramanujan's result. The Dyson-birank generating function is \begin{equation} \sum_{\bpi=(\pi_1,\pi_2)} z^{\DYbr} q^{\abs{\bpi}} = f(z,q) \, f(z^2,q), \mylabel{eq:DYgenfunc} \end{equation} where $f(z,q)$ is the generating function for the Dyson rank of ordinary partitions given in \eqn{rankgenfunc}. Thus we have \begin{equation} \sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k N_{\DY}(k,5,n) q^n = f(\zeta,q)\, f(\zeta^2,q).
\mylabel{eq:Dbrmod5} \end{equation} Using only \eqn{fdissect} and the fact that \begin{equation} B^2(q) = A(q)\,C(q),\qquad C^2(q) = B(q)\,D(q), \mylabel{eq:ABCDfact} \end{equation} we find that the coefficient of $q^n$ in the $q$-expansion of $f(\zeta,q)\, f(\zeta^2,q)$ is zero if $n\equiv2$, or $4\pmod{5}$. Theorem \thm{thm2} then follows from \eqn{Dbrmod5}. Although Theorem \thm{thm2} does not hold when $n\equiv 3\pmod{5}$, there is some simplification in the product $f(\zeta,q)\, f(\zeta^2,q)$. We find that \begin{equation} \sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k N_{\DY}(k,5,5n+3) q^n = 5 \, \phi(q)\, \psi(q), \mylabel{eq:phipsiid} \end{equation} using the fact that \begin{equation} A(q) \, D(q) = B(q) \, C(q). \mylabel{eq:ABCDfact2} \end{equation} \section{The $5$-Core-Birank} \label{sec:5cbirank} For ordinary partitions $\pi$ the {\it $5$-core-crank} is defined by \begin{equation} \mbox{$5$-core-crank}(\pi) = r_1 + 2 r_2 - 2 r_3 - r_4, \mylabel{eq:GKScrankdef2} \end{equation} where $r_j$ is the number of cells labelled $j$ in the $5$-residue diagram of $\pi$. See \cite[Prop.1,p.7]{Ga-Ki-St}. We define the $5$-core-crank analog for bipartitions $\bpi=\tpi$ by \begin{equation} \mbox{$5$-core-birank}(\bpi) = \mbox{$5$-core-crank}(\pi_1) + 2\,(\mbox{$5$-core-crank}(\pi_2)). \mylabel{eq:GKSbirankdef2} \end{equation} In this section we prove \begin{theorem} \label{thm:thm3} The residue of the $5$-core-birank mod $5$ divides the bipartitions of $n$ into $5$ equal classes provided $n\equiv2$, $3$ or $4\pmod{5}$. \end{theorem} We let $N_{\FC}(m,t,n)$ denote the number of bipartitions $\bpi=\tpi$ with $5$-core-birank congruent to $m\pmod{t}$. We illustrate Theorem \thm{thm3} for the case $n=3$. 
\begin{equation*} \begin{array}{rr} \mbox{Bipartitions of $3$} & \mbox{$5$-core-birank (mod $5$)}\\ (3,-) & 3+0 \equiv 3\\ (2+1,-) & 0+0 \equiv 0\\ (1+1+1,-) & -3+0 \equiv 2\\ (2,1) & 1+0 \equiv 1\\ (1+1,1) & -1+0 \equiv 4\\ (1,2) & 0+2 \equiv 2\\ (1,1+1) & 0-2 \equiv3\\ (-,3) & 0+6 \equiv 1\\ (-,2+1) & 0+0 \equiv 0\\ (-,1+1+1) & 0-6 \equiv 4\\ \end{array} \end{equation*} Thus $$ N_{\FC}(0,5,3) = N_{\FC}(1,5,3) = N_{\FC}(2,5,3) = N_{\FC}(3,5,3) = N_{\FC}(4,5,3) = 2, $$ and we see that the residue of the $5$-core-birank mod $5$ divides the $10$ bipartitions of $3$ into $5$ equal classes. We note that although the Dyson-birank does not in general divide the bipartitions of $5n+3$ into $5$ equal classes, the $5$-core-birank does. To prove Theorem \thm{thm3} we need the $5$-dissection of the $5$-core-crank generating function when $z=\zeta$. The $5$-core-crank generating function is \begin{equation} \Phi(z,q) = \sum_{\pi} z^{\fccrank(\pi)} q^{\abs{\pi}} = \frac{1}{E^5(q^5)} T(z,q), \mylabel{eq:fccrankgenfunc} \end{equation} where \begin{equation} T(z,q) := \sum_{\mbox{$\pi$ a $5$-core}} z^{\mbox{$5$-core-crank($\pi$)}} q^{\abs{\pi}} = \sum_{\substack{\vec{n}\in\Z^5\\\vec{n}\cdot\vec{1}=0}} z^{n_1 + 3n_2 + n_3} q^{\tfrac{5}{2}||\vec{n}||^2 + \vec{b}\cdot\vec{n}}, \mylabel{eq:Tdef} \end{equation} where $\vec{1}=(1,1,1,1,1)$ and $\vec{b}=(0,1,2,3,4)$. Equation \eqn{fccrankgenfunc} can be proved combinatorially and in a straightforward manner using Bijections 1 and 2 from \cite[pp.2-3]{Ga-Ki-St} and \cite[(4.2), p.6]{Ga-Ki-St}.
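The $5$-residue labels are straightforward to compute; the following Python sketch (our own verification, assuming the standard convention that the cell in row $i$, column $j$ of the residue diagram, both $0$-indexed, is labelled $(j-i)\bmod 5$) reproduces the table above:

```python
# 5-core-crank from the 5-residue diagram: the cell in row i, column j
# (0-indexed) is labelled (j - i) mod 5, and the crank is
# r_1 + 2 r_2 - 2 r_3 - r_4, with r_m the number of cells labelled m.
def five_core_crank(parts):
    r = [0] * 5
    for i, row in enumerate(parts):
        for j in range(row):
            r[(j - i) % 5] += 1
    return r[1] + 2 * r[2] - 2 * r[3] - r[4]

# the ten bipartitions of 3; 5-core-birank = crank(pi_1) + 2 crank(pi_2)
bipartitions = [([3], []), ([2, 1], []), ([1, 1, 1], []), ([2], [1]),
                ([1, 1], [1]), ([1], [2]), ([1], [1, 1]), ([], [3]),
                ([], [2, 1]), ([], [1, 1, 1])]
residues = sorted((five_core_crank(a) + 2 * five_core_crank(b)) % 5
                  for a, b in bipartitions)
print(residues)                     # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```

Each residue class mod $5$ occurs exactly twice, in agreement with $N_{\FC}(m,5,3)=2$.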
We need the $5$-dissection of $T(\zeta,q)$: \begin{equation} T(\zeta,q) = W(q^5)(1 + q R(q^5) + q^2 (\zeta^2+\zeta^3) R(q^5)^2 - q^3 (\zeta^2+\zeta^3) R(q^5)^3), \mylabel{eq:T5dissect} \end{equation} where \begin{align} W(q) &:= J_{2,5}(q)^3( J_{10,25}(q) - q (1 + \zeta^2 + \zeta^3) J_{5,25}(q) ), \mylabel{eq:Wdef}\\ \noalign{\mbox{and}} R(q) &:= \frac{ J_{1,5}(q)}{J_{2,5}(q)}.\mylabel{eq:Rdef} \end{align} We will prove \eqn{T5dissect} in the next section. Theorem \thm{thm3} follows easily from \eqn{T5dissect}. The $5$-core-birank generating function is \begin{equation} \sum_{\bpi=(\pi_1,\pi_2)} z^{\fcbr} q^{\abs{\bpi}} = \frac{1}{E^{10}(q^5)}\, T(z,q) \, T(z^2,q), \mylabel{eq:fcbrgenfunc} \end{equation} where $T(z,q)$ is the generating function for the $5$-core-crank of partitions that are $5$-cores given in \eqn{Tdef}. Thus we have \begin{equation} \sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k N_{\FC}(k,5,n) q^n = \frac{1}{E^{10}(q^5)}\,T(\zeta,q)\, T(\zeta^2,q). \mylabel{eq:FCbrmod5} \end{equation} From \eqn{T5dissect} we find that \begin{equation} T(\zeta,q) \, T(\zeta^2,q) = W^2(q^5)\big( 1 + 2\,q^5\,R^5(q^5) + q\,R(q^5)( 2 - q^5\,R^5(q^5))\big). \mylabel{eq:TTid} \end{equation} Since the coefficient of $q^n$ in the $q$-expansion of $T(\zeta,q) \, T(\zeta^2,q)$ is zero when $n\equiv2$, $3$ or $4\pmod{5}$, Theorem \thm{thm3} then follows from \eqn{FCbrmod5}. \section{A Theta-Function Identity} \label{sec:thetaid} In this section we will prove the following theta-function identity.
\begin{equation} U(z,q) = F_0(q)\, S_0(z,q) + F_1(q)\, S_1(z,q) + F_2(q)\, S_2(z,q) + F_3(q)\, S_3(z,q) + F_4(q)\, S_4(z,q), \mylabel{eq:Uid} \end{equation} where \begin{align} F_0(q) &= W(q^{10})\,(1 + q^{2}\,R(q^{10}) + (\zeta^2+\zeta^3)\,q^{4}\,R(q^{10})^2 - (\zeta^2+\zeta^3)\,q^{6}\,R(q^{10})^3),\mylabel{eq:f0def}\\ F_1(q) &= W(q^{10})\,(\zeta^4 + \zeta\,q^{2}\,R(q^{10}) + (1+\zeta)\,q^{4}\,R(q^{10})^2 - (\zeta^2+\zeta^3)\,q^{6}\,R(q^{10})^3),\mylabel{eq:f1def}\\ F_2(q) &= W(q^{10})\,(1 + \zeta^4\,q^{2}\,R(q^{10}) + (1+\zeta)\,q^{4}\,R(q^{10})^2 - (1+\zeta^4)\,q^{6}\,R(q^{10})^3),\mylabel{eq:f2def}\\ F_3(q) &= W(q^{10})\,(1 + \zeta\,q^{2}\,R(q^{10}) + (1+\zeta^4)\,q^{4}\,R(q^{10})^2 - (1+\zeta)\,q^{6}\,R(q^{10})^3),\mylabel{eq:f3def}\\ F_4(q) &= W(q^{10})\,(\zeta + \zeta^4\,q^{2}\,R(q^{10}) + (1+\zeta^4)\,q^{4}\,R(q^{10})^2 - (\zeta^2+\zeta^3)\,q^{6}\,R(q^{10})^3),\mylabel{eq:f4def} \end{align} \begin{align} S_0(z,q) &= \sum_{n=-\infty}^{\infty} z^{5n}q^{25n^2+20n}, \mylabel{eq:S0def} \\ S_1(z,q) &= \sum_{n=-\infty}^{\infty} z^{5n+1}q^{25n^2+30n+5}, \mylabel{eq:S1def} \\ S_2(z,q) &= \sum_{n=-\infty}^{\infty} z^{5n+2}q^{25n^2+40n+12}, \mylabel{eq:S2def} \\ S_3(z,q) &= \sum_{n=-\infty}^{\infty} z^{5n+3}q^{25n^2+50n+21}, \mylabel{eq:S3def} \\ S_4(z,q) &= \sum_{n=-\infty}^{\infty} z^{5n+4}q^{25n^2+60n+32}, \mylabel{eq:S4def} \end{align} and \begin{equation} U(z,q) = \sum_{\substack{\vec{n}\in\Z^5}} z^{\vec{n}\cdot\vec{1}} \zeta^{n_1 + 3n_2 + n_3} q^{5||\vec{n}||^2 + 2\vec{b}\cdot\vec{n}}, \mylabel{eq:Udef} \end{equation} where $W(q)$ and $R(q)$ are defined in \eqn{Wdef} and \eqn{Rdef} respectively, and the vectors $\vec{n}=(n_0,n_1,n_2,n_3,n_4)$, $\vec{1}=(1,1,1,1,1)$ and $\vec{b}=(0,1,2,3,4)$ are as before. We note that \eqn{T5dissect} follows from \eqn{Uid} by taking the coefficient of $z^0$ and replacing $q$ by $q^{1/2}$. Equation \eqn{T5dissect} was the crucial identity needed in the proof of Theorem \thm{thm3}. We prove the identity \eqn{Uid} using standard techniques.
We show that both sides satisfy the same functional equation and both sides agree for enough values of the parameter $z$. Most of these evaluations can be proved by elementary means using Jacobi's triple product. For one evaluation we will need the theory of modular functions. We define the following Jacobi theta function \begin{equation} \Theta(z,q) = \sum_{n=-\infty}^\infty z^n q^{n^2}, \mylabel{eq:Thetadef} \end{equation} for $z\ne0$ and $\abs{q}<1$. We will need Jacobi's triple product identity \begin{equation} \sum_{n=-\infty}^\infty z^n q^{n^2} =(-zq;q^2)_\infty (-z^{-1}q;q^2)_\infty (q^2;q^2)_\infty, \mylabel{eq:jtprod} \end{equation} and the well-known functional equation \begin{equation} \Theta(zq^2,q) = z^{-1} q^{-1} \Theta(z,q), \mylabel{eq:Thetafid} \end{equation} for $z\ne0$ and $0<\abs{q}<1$. From the definition \eqn{Udef} we have \begin{equation} U(z,q) = \Theta(z\zeta^4,q^5) \, \Theta(zq^2\zeta,q^5) \, \Theta(zq^4,q^5) \, \Theta(z\zeta q^6,q^5) \, \Theta(z\zeta^4 q^8,q^5). \mylabel{eq:Uprodid} \end{equation} From \eqn{jtprod} and \eqn{Thetafid} we have \begin{equation} U(z q^{10},q) = z^{-5} q^{-45} U(z,q), \mylabel{eq:Ufid} \end{equation} and \begin{equation} U(z,q) = 0\quad\mbox{for}\quad z = -q^5\,\zeta,\quad -q^3\,\zeta^4, -q, -\zeta^4\,q^9, -\zeta\,q^7. \mylabel{eq:Uzeros} \end{equation} Let $V(z,q)$ denote the function on the right side of \eqn{Uid}. Each $S_j(z,q)$ can be written in terms of the theta function $\Theta(z,q)$ and we find that $S_j(z q^{10},q) = z^{-5} q^{-45} S_j(z,q)$ for each $j$ so that \begin{equation} V(z q^{10},q) = z^{-5} q^{-45} V(z,q). \mylabel{eq:Vfid} \end{equation} Hence the left and right sides of \eqn{Uid} satisfy the same functional equation (i.e.\ \eqn{Ufid}, \eqn{Vfid}). In view of \cite[Lemma 2]{At-Sw} or \cite[Lemma 1]{Hi-Ga-Bo}, it suffices to show that \eqn{Uid} holds for $6$ distinct values of $z$ with $\abs{q}^{10} < \abs{z} \le 1$.
We claim that \begin{equation} V(z,q) = 0\quad\mbox{for}\quad z = -q^5\,\zeta,\quad -q^3\,\zeta^4, -q, -\zeta^4\,q^9, -\zeta\,q^7. \mylabel{eq:Vzeros} \end{equation} Using \eqn{jtprod} we can easily evaluate each $S_j(z,q)$ for these values of $z$. \begin{align} S_0(-\zeta q^5,q) &= -q^{-20}J_{2,5}(q^{10}), \mylabel{eq:Sv1}\\ S_1(-\zeta q^5,q) &= \zeta q^{-20} J_{2,5}(q^{10}), \mylabel{eq:Sv2}\\ S_2(-\zeta q^5,q) &= -\zeta^2 q^{-18} J_{1,5}(q^{10}), \mylabel{eq:Sv3}\\ S_3(-\zeta q^5,q) &= 0, \mylabel{eq:Sv4}\\ S_4(-\zeta q^5,q) &= \zeta^4 q^{-18} J_{1,5}(q^{10}), \mylabel{eq:Sv5}\\ \noalign{\medskip} S_0(-q^3\zeta^4,q) &= -q^{-10}J_{1,5}(q^{10}), \mylabel{eq:Sv6}\\ S_1(-q^3\zeta^4,q) &= \zeta^4 q^{-12} J_{2,5}(q^{10}), \mylabel{eq:Sv7}\\ S_2(-q^3\zeta^4,q) &= -\zeta^3 q^{-12} J_{2,5}(q^{10}), \mylabel{eq:Sv8}\\ S_3(-q^3\zeta^4,q) &= \zeta^2 q^{-10} J_{1,5}(q^{10}), \mylabel{eq:Sv9}\\ S_4(-q^3\zeta^4,q) &= 0 , \mylabel{eq:Sv10}\\ \noalign{\medskip} S_0(-q,q) &= 0, \mylabel{eq:Sv11}\\ S_1(-q,q) &= q^{-4}J_{1,5}(q^{10}), \mylabel{eq:Sv12}\\ S_2(-q,q) &= -q^{-6}J_{2,5}(q^{10}), \mylabel{eq:Sv13}\\ S_3(-q,q) &= q^{-6}J_{2,5}(q^{10}), \mylabel{eq:Sv14}\\ S_4(-q,q) &= -q^{-4}J_{1,5}(q^{10}), \mylabel{eq:Sv15}\\ \noalign{\medskip} S_0(-\zeta^4q^9,q) &= -q^{-40}J_{1,5}(q^{10}), \mylabel{eq:Sv16}\\ S_1(-\zeta^4q^9,q) &= 0 \mylabel{eq:Sv17}\\ S_2(-\zeta^4q^9,q) &= \zeta^3q^{-40}J_{1,5}(q^{10}), \mylabel{eq:Sv18}\\ S_3(-\zeta^4q^9,q) &= -\zeta^2q^{-42}J_{2,5}(q^{10}), \mylabel{eq:Sv19}\\ S_4(-\zeta^4q^9,q) &= \zeta q^{-42}J_{2,5}(q^{10}), \mylabel{eq:Sv20}\\ \noalign{\medskip} S_0(-\zeta q^7,q) &= -q^{-30}J_{2,5}(q^{10}), \mylabel{eq:Sv21}\\ S_1(-\zeta q^7,q) &= \zeta q^{-28}J_{1,5}(q^{10}), \mylabel{eq:Sv22}\\ S_2(-\zeta q^7,q) &= 0, \mylabel{eq:Sv23}\\ S_3(-\zeta q^7,q) &= -\zeta^3q^{-28}J_{1,5}(q^{10}), \mylabel{eq:Sv24}\\ S_4(-\zeta q^7,q) &= \zeta^4q^{-30}J_{2,5}(q^{10}). \mylabel{eq:Sv25} \end{align} The verification of \eqn{Vzeros} is just a routine calculation. 
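Evaluations of this kind can also be confirmed as truncated $q$-series; the following Python sketch (our own check, with all series truncated at order $300$) verifies \eqn{Sv11} and \eqn{Sv12} after clearing the factor $q^{-4}$:

```python
# Check S_0(-q,q) = 0 and q^4 S_1(-q,q) = J_{1,5}(q^{10}) as truncated series.
N = 300

# q^4 S_1(-q,q): substituting z = -q in S_1 gives terms (-1)^(n+1) q^(25n^2+35n+10)
lhs = [0] * (N + 1)
for n in range(-10, 11):
    e = 25 * n * n + 35 * n + 10
    if 0 <= e <= N:
        lhs[e] += -1 if n % 2 == 0 else 1

# J_{1,5}(q^{10}) = (q^10;q^50)_inf (q^40;q^50)_inf (q^50;q^50)_inf, truncated
rhs = [0] * (N + 1)
rhs[0] = 1
for a in (10, 40, 50):
    for k in range(a, N + 1, 50):   # multiply by (1 - q^k) for each k = a (mod 50)
        for m in range(N, k - 1, -1):
            rhs[m] -= rhs[m - k]
assert lhs == rhs

# S_0(-q,q): the terms (-1)^n q^(25n^2+25n) cancel in pairs n <-> -(n+1)
s0 = [0] * (N + 1)
for n in range(-10, 11):
    e = 25 * n * n + 25 * n
    if 0 <= e <= N:
        s0[e] += 1 if n % 2 == 0 else -1
assert all(c == 0 for c in s0)
```

The remaining evaluations in \eqn{Sv1}--\eqn{Sv25} can be checked the same way after clearing the power of $q$; those involving $\zeta$ require exact cyclotomic arithmetic.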
Thus both sides of \eqn{Uid} agree for $5$ distinct values of $z$ in the region $\abs{q}^{10} < \abs{z} \le 1$. We show that both sides agree for $z=-1$, and then our identity \eqn{Uid} will follow. To achieve this we use the theory of modular functions. Since this is a standard technique we just sketch some of the details. First, we calculate the $5$-dissection of each theta function on the right side of \eqn{Uprodid} when $z=-1$. By \eqn{jtprod} we find that \begin{align} \Theta(-\zeta^4,q^5) &= J_{1,2}(q^{50})+(1+\zeta^2+\zeta^3)q^{5}J_{3,10}(q^{25}) +(\zeta^2+\zeta^3)q^{20}J_{1,10}(q^{25}), \mylabel{eq:Peval1}\\ \Theta(-\zeta q^2,q^5)&= J_{27,50}(q^{5})+\zeta^3q^{16}J_{7,50}(q^{5}) -\zeta q^{7}J_{13,50}(q^{5})-\zeta^4q^{3}J_{17,50}(q^{5}) +q^{24}\zeta^2J_{7,50}(q^{5}), \mylabel{eq:Peval2}\\ \Theta(-q^4,q^5) &= J_{43,50}(q^{5})-q\,J_{19,50}(q^{5})+q^{12}J_{9,50}(q^{5}) +q^{28}J_{1,50}(q^{5}) -q^{9}J_{11,50}(q^{5}), \mylabel{eq:Peval3}\\ q\,\Theta(-\zeta q^6,q^5)&= -\zeta^4J_{21,50}(q^{5})+q\,J_{19,50}(q^{5}) -\zeta q^{12}J_{9,50}(q^{5}) -\zeta^2q^{28}J_{1,50}(q^{5}) +\zeta^3q^{9}J_{23,50}(q^{5}), \mylabel{eq:Peval4}\\ q^{3} \Theta(-\zeta^4 q^8,q^5) &=-\zeta J_{23,50}(q^{5})-\zeta^4q^{16}J_{7,50}(q^{5}) +\zeta^2q^{7}J_{27,50}(q^{5}) +q^{3}J_{17,50}(q^{5}) -\zeta^3q^{24}J_{3,50}(q^{5}). \mylabel{eq:Peval5} \end{align} Next, we evaluate each $S_j(-1,q)$ using \eqn{jtprod} \begin{align} S_0(-1,q) &= S_1(-1,q) = J_{1,10}(q^{5}),\mylabel{eq:Seval1}\\ S_2(-1,q) &= S_4(-1,q) = -q^{-3}J_{3,10}(q^{5}),\mylabel{eq:Seval2}\\ S_3(-1,q) &= q^{-4}J_{1,2}(q^{25}).\mylabel{eq:Seval3} \end{align} For $0\le r \le 4$, we define the operator $\mathcal{U}_{r,5}$ by \begin{equation} \mathcal{U}_{r,5}\left(\sum_{n} a(n) q^n \right) = \sum_{n} a(5n+r) q^n. 
\mylabel{eq:Uop} \end{equation} To show that \eqn{Uid} holds for $z=-1$ we need to prove $5$ identities \begin{equation} \mathcal{U}_{r,5}\left(q^4\,U(-1,q)\right) = \mathcal{U}_{r,5}\left(q^4\,V(-1,q)\right), \mylabel{eq:UVrid} \end{equation} for $0\le r \le 4$. It turns out that each of these identities is equivalent to a modular function identity for the group $\Gamma_1(50)$. We provide some detail for the case $r=0$. Using \eqn{Seval1}--\eqn{Seval3} we find that \begin{align} \mathcal{U}_{0,5}\left(q^4\,V(-1,q)\right) &= \left(J_{2,5}(q^{10}) - q^2 (1 + \zeta^2 + \zeta^3) J_{1,5}(q^{10})\right) \mylabel{eq:U0Vid}\\ &\times \left( J_{1,2}(q^5) J_{4,10}^3(q) + q (-1 + \zeta^2 + \zeta^3) J_{2,10}^2(q) J_{3,10}(q) J_{4,10}(q)\right. \nonumber\\ &\hphantom{XXXXXXXXXXXXXX}\left. - 2 q^2 (\zeta^2+\zeta^3) J_{2,10}^3(q) J_{1,10}(q) \right). \nonumber \end{align} We can utilize \eqn{Peval1}--\eqn{Peval5} to write the left side of the $r=0$ case of \eqn{UVrid} as a sum of $135$ explicit theta products \begin{align} &\mathcal{U}_{0,5}\left(q^4\,U(-1,q)\right) \mylabel{eq:U0Uid}\\ &\quad = q^{10} J_{3,50}^2(q) J_{19,50}^2(q) J_{25,50}(q) + \cdots + 2 q^{10} (\zeta^2 + \zeta^3) J_{1,50}(q) J_{9,50}(q) J_{13,50}(q) J_{17,50}(q) J_{25,50}(q). \nonumber \end{align} We have to prove that the right side of \eqn{U0Uid} equals the right side of \eqn{U0Vid}. After dividing both sides by $J_{2,5}(q^{10}) J_{1,2}(q^5) J_{4,10}^3(q)$ we find that this is equivalent to showing that a certain linear combination of $140$ generalized eta-quotients simplifies to the constant $1$ \begin{equation} (1+\zeta^2+\zeta^3)\,\eta_{50,10} \, \eta_{50,20}^{-1} + \cdots + \eta_{50,4}^{-3} \, \eta_{50,5}^{-2} \, \eta_{50,6}^{-3} \, \eta_{50,10}^{-4} \, \eta_{50,14}^{-3} \, \eta_{50,15}^{-2} \, \eta_{50,16}^{-3} \, \eta_{50,20}^{-5} \, \eta_{50,24}^{-3} \, \eta_{50,23}^{2} \, \eta_{50,21}^{2} = 1.
\mylabel{eq:bigcombo} \end{equation} Here \begin{equation} \eta_{n,m} = \eta_{n,m}(\tau) = \exp(\pi i P_2(m/n) n\tau)\, \prod_{k\equiv\pm m\pmod{n}}(1 - \exp(2\pi ik\tau)) = q^{n P_2(m/n)/2} J_{m,n}(q), \mylabel{eq:getadef} \end{equation} where $P_2(t) = \{t\}^2 - \{t\} + \tfrac{1}{6}$, and $q=\exp(2\pi i\tau)$. Using \cite[Theorem 2.9, p.7]{Choi06}, \cite[Theorem 3, p.126]{Ro94}, we find that each generalised eta-quotient in \eqn{bigcombo} is indeed a modular function on $\Gamma_1(50)$. As usual we need the valence formula \begin{equation} \sum_{z\in\mathcal{F}} \ORD(f;z,\Gamma) = 0, \mylabel{eq:valform} \end{equation} provided $f$ is a nontrivial modular function on $\Gamma$, and $\mathcal{F}$ is a fundamental set for $\Gamma$. Using MAGMA, the following is a complete set of inequivalent cusps for $\Gamma_1(50)$ \begin{align} \mathcal{C} &=\{\infty, \tfrac{0}{1},\, \tfrac{1}{10},\, \tfrac{1}{9},\, \tfrac{2}{17},\, \tfrac{3}{25},\, \tfrac{1}{8},\, \tfrac{2}{15},\, \tfrac{3}{22},\, \tfrac{4}{29},\, \tfrac{5}{36},\, \tfrac{6}{43},\, \tfrac{7}{50},\, \tfrac{3}{20},\, \tfrac{2}{13},\, \tfrac{3}{19},\, \tfrac{4}{25},\, \tfrac{1}{6},\, \tfrac{6}{35},\, \tfrac{9}{52},\, \tfrac{13}{75},\, \tfrac{4}{23},\, \tfrac{7}{39},\, \tfrac{9}{50},\, \tfrac{7}{38},\, \tfrac{3}{16},\, \mylabel{eq:CG}\\ &\qquad\tfrac{5}{26},\, \tfrac{1}{5},\, \tfrac{27}{125},\, \tfrac{7}{32},\, \tfrac{11}{50},\, \tfrac{12}{53},\, \tfrac{17}{75},\, \tfrac{6}{25},\, \tfrac{1}{4},\, \tfrac{13}{50},\, \tfrac{53}{200},\, \tfrac{67}{250},\, \tfrac{27}{100},\, \tfrac{11}{40},\, \tfrac{18}{65},\, \tfrac{7}{25},\, \tfrac{3}{10},\, \tfrac{7}{20},\, \tfrac{9}{25},\, \tfrac{11}{30},\, \tfrac{19}{50},\, \tfrac{49}{125},\, \tfrac{2}{5},\, \tfrac{21}{50},\, \nonumber\\ &\qquad \tfrac{11}{25},\, \tfrac{11}{20},\, \tfrac{26}{45},\, \tfrac{3}{5},\, \tfrac{59}{85},\, \tfrac{7}{10}\}, \nonumber \end{align} with corresponding widths \begin{align} &\{ 1,\, 50,\, 5,\, 50,\, 50,\, 2,\, 25,\, 10,\, 25,\, 50,\, 25,\, 50,\, 1,\, 5,\,
50,\, 50,\, 2,\, 25,\, 10,\, 25,\, 2,\, 50,\, 50,\, 1,\, 25,\, \mylabel{eq:WG}\\ &\qquad 25,\, 25,\, 10,\, 2,\, 25,\, 1,\, 50,\, 2,\, 2,\, 25,\, 1,\, 1,\, 1,\, 1,\, 5,\, 10,\, 2,\, 5,\, 5,\, 2,\, 5,\, 1,\, 2,\, 10,\, 1,\, 2,\, 5,\, \nonumber\\ &\qquad 10,\, 10,\, 10,\, 5\}. \nonumber \end{align} Using known results for the invariant order of generalized eta-quotients at cusps \cite[(2.3), p.7]{Choi06}, \cite[pp.127-128]{Ro94} we have calculated the order at each cusp of every function in \eqn{bigcombo}. As a check, we verified that the total order of each function is zero. With $\mathcal{J}$ being the set of generalized eta-quotients occurring in \eqn{bigcombo}, we calculated \begin{equation} \sum_{c\in\mathcal{C}\setminus\{\infty\}} \min_{ f\in \mathcal{J}}(\ORD(f;c;\Gamma_1(50)),0) = -145. \mylabel{eq:minords} \end{equation} Hence, by the valence formula \eqn{valform} it suffices to verify \eqn{bigcombo} (or equivalently \eqn{UVrid} with $r=0$) up to $q^{145}$, since generalized eta-quotients have no poles or zeros in the upper-half plane. We have actually verified the result up to $q^{200}$. All calculations, except for \eqn{CG} and \eqn{WG}, were done using MAPLE. The calculations needed to verify \eqn{UVrid} for $r=1$, $2$, $3$, $4$ are similar and have been carried out. This completes our proof of \eqn{Uid}. \section{The Andrews Bicrank and Extensions} \label{sec:bicrank} For a partition $\pi$, let $\ell(\pi)$ denote the largest part of $\pi$, $\varpi(\pi)$ denote the number of ones in $\pi$, and $\mu(\pi)$ denote the number of parts of $\pi$ larger than $\varpi(\pi)$. The crank of $\pi$ is given by \begin{equation} \mbox{crank}(\pi) = \begin{cases} \ell(\pi), &\mbox{if $\varpi(\pi)=0$}, \\ \mu(\pi) - \varpi(\pi), &\mbox{if $\varpi(\pi)>0$}. \end{cases} \mylabel{eq:crankdef} \end{equation} The crank gives a combinatorial interpretation of Ramanujan's partition congruences mod $5$, $7$ and $11$ and solves a problem of Dyson \cite{Dy44}, \cite[p.52]{Dy96}.
See \cite{An-Ga88}. In \cite{An08}, Andrews gave a combinatorial interpretation of the congruence \begin{equation} p_{-2}(5n+3) \equiv 0 \pmod{5}, \mylabel{eq:p25n3} \end{equation} in terms of the crank. This result is a crank analog of the Dyson-birank but is more complicated since it involves both positive and negative weights. This complication arises from the nature of the generating function for the crank. Let $M(m,n)$ denote the number of partitions of $n$ with crank $m$. Then \begin{equation} \sum_{n\ge0} \sum_m M(m,n) z^m q^n = (1-z)q + \frac{(q;q)_\infty}{(zq;q)_\infty (z^{-1}q;q)_\infty}. \mylabel{eq:Mgf} \end{equation} Define $M'(m,n)$ by \begin{equation} \sum_{n\ge0} \sum_m M'(m,n) z^m q^n = \frac{(q;q)_\infty}{(zq;q)_\infty (z^{-1}q;q)_\infty} = 1 + (z -1 + z^{-1})q + (z^2 + z^{-2})q^2 + \cdots. \mylabel{eq:Mvgf} \end{equation} We need to interpret $M'(m,n)$ combinatorially. To do this we need to extend the definition of partition. To the set of partitions we add two additional partitions of $1$, which we denote by $1_a$ and $1_b$. We call this new set $\mathcal{E}$, the set of extended partitions. \begin{equation} \mathcal{E} = \{(-), 1_a, 1_b, 1, 2, 1+1, 3, 2+1, 1+1+1, \cdots\}. \mylabel{eq:eptns} \end{equation} We have $\abs{1_a}=\abs{1_b}=1$. Here as usual $(-)$ is the empty partition of $0$. For extended partitions we define a weight function $w(\pi)$ by \begin{equation} w(\pi) = \begin{cases} -1, &\mbox{if $\pi=1_b$},\\ 1, &\mbox{otherwise}. \end{cases} \mylabel{eq:wdef} \end{equation} Thus for the three extended partitions of $1$ we have $w(1)=w(1_a)=1$, and $w(1_b)=-1$, and the total weight is still $p(1)=1$. Therefore \begin{equation} \sum_{\substack{\pi\in\mathcal{E}\\ \abs{\pi}=n}} w(\pi) = p(n). \mylabel{eq:totw} \end{equation} We also extend the definition of crank by $\mbox{crank}(1_a)=1$, and $\mbox{crank}(1_b)=0$. Recall that for the ordinary partition $1$ we have $\mbox{crank}(1)=-1$. 
We now have our desired combinatorial interpretation of $M'(m,n)$. \begin{equation} F(z,q) = \sum_{\pi\in\mathcal{E}} w(\pi) z^{\mbox{crank}(\pi)} q^{\abs{\pi}} = \sum_{n\ge0} \sum_m M'(m,n) z^m q^n = \frac{(q;q)_\infty}{(zq;q)_\infty (z^{-1}q;q)_\infty}. \mylabel{eq:Mvid} \end{equation} In other words, \begin{equation} M'(m,n) = \sum_{\substack{\pi\in\mathcal{E}\\ \abs{\pi}=n,\, \mbox{crank}(\pi)=m}} w(\pi). \mylabel{eq:Mvid2} \end{equation} We note that the function $F(z,q)$ (at least as an infinite product) occurred in Ramanujan's Lost Notebook. We define the set of extended bipartitions by $\mathcal{E} \times \mathcal{E}$, i.e.\ an extended bipartition is simply a pair of extended partitions. For an extended bipartition $\pi=(\pi_1,\pi_2)$ we define a sum of parts function and a weight function in the natural way \begin{equation} \abs{\pi} = \abs{\pi_1} + \abs{\pi_2},\quad\mbox{and}\qquad w(\pi) = w(\pi_1) \, w(\pi_2). \mylabel{eq:ewdef} \end{equation} We denote Andrews's bicrank function by $\mbox{bicrank}_1$. We give a variant which we call $\mbox{bicrank}_2$. For an extended bipartition $\pi=(\pi_1,\pi_2)$ we define \begin{align} \mbox{bicrank}_1(\pi) &= \mbox{crank}(\pi_1) + \mbox{crank}(\pi_2), \mylabel{eq:bic1}\\ \mbox{bicrank}_2(\pi) &= \mbox{crank}(\pi_1) + 2\,\mbox{crank}(\pi_2). \mylabel{eq:bic2} \end{align} Amazingly, together these two bicrank functions give a new interpretation for all three congruences in \eqn{2colcongs}. For $j=1$, $2$ we define $M_j(m,t,n)$ by \begin{equation} M_j(m,t,n) = \sum_{\substack{\pi\in\mathcal{E}\times\mathcal{E}\\ \abs{\pi}=n,\, \mbox{bicrank}_j(\pi)\equiv m\pmod{t}}} w(\pi). \mylabel{eq:Mjdef} \end{equation} In other words, $M_j(m,t,n)$ is the number of extended bipartitions of $n$ with $\mbox{bicrank}_j$ congruent to $m$ mod $t$ counted with the weight $w$. 
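The coefficients $M'(m,n)$ of \eqn{Mvid2} can be tabulated directly from this definition. A small Python sketch (helper names are ours; the crank of the empty partition is taken to be $0$) reproduces the first terms of the expansion in \eqn{Mvgf}:

```python
def partitions(n, m=None):
    """Partitions of n as non-increasing tuples."""
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def crank(p):
    """Dyson's crank, eq. (crankdef); crank of the empty partition is 0."""
    ones = p.count(1)
    if ones == 0:
        return p[0] if p else 0
    return sum(1 for x in p if x > ones) - ones

def M_prime(n):
    """M'(m, n) of eq. (Mvid2) as a dict m -> weighted count."""
    out = {}
    for p in partitions(n):
        out[crank(p)] = out.get(crank(p), 0) + 1
    if n == 1:                      # the two extra partitions of 1
        out[1] = out.get(1, 0) + 1  # 1_a: crank 1, weight +1
        out[0] = out.get(0, 0) - 1  # 1_b: crank 0, weight -1
    return out

# matches 1 + (z - 1 + z^{-1}) q + (z^2 + z^{-2}) q^2 + ...
print(M_prime(1), M_prime(2))
```

For $n=1$ this gives $M'(-1,1)=M'(1,1)=1$ and $M'(0,1)=-1$, matching the coefficient $z-1+z^{-1}$ of $q$.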
In this section we prove \begin{theorem} \label{thm:thm4} $$ \hphantom{x} $$ \begin{enumerate} \item[(i)] The residue of the $\mbox{bicrank}_1(\pi)$ mod $5$ divides the extended bipartitions of $n$ into $5$ classes of equal weight provided $n\equiv3\pmod{5}$. \item[(ii)] The residue of the $\mbox{bicrank}_2(\pi)$ mod $5$ divides the extended bipartitions of $n$ into $5$ classes of equal weight provided $n\equiv2$ or $4\pmod{5}$. \end{enumerate} \end{theorem} We illustrate the first case of Theorem \thm{thm4} (i). There are $18$ extended bipartitions of $3$ giving a total weight of $p_{-2}(3)=10$. \begin{equation*} \begin{array}{rrr} \mbox{Extended bipartitions of $3$} & \mbox{bicrank}_1 \pmod{5} & \mbox{weight}=w \\ (2+1,-) & 0 \equiv 0 & 1\\ (-,2+1) & 0+0 \equiv 0 & 1\\ (2,1) & 2-1 \equiv 1 & 1\\ (1,2) &-1+2 \equiv 1 & 1\\ (1+1+1,-)& -3 \equiv 2 & 1\\ (2,1_b) & 2+0 \equiv 2 & -1\\ (1+1,1) & -2-1 \equiv 2 & 1\\ (1_b,2) & 0+2 \equiv 2 & -1\\ (1,1+1) &-1-2 \equiv 2 & 1\\ (-,1+1+1) &0-3 \equiv 2 & 1\\ (3,-) & 3 \equiv 3 & 1\\ (2,1_a) & 2+1 \equiv 3 & 1\\ (1+1,1_b)&-2+0 \equiv 3 & -1\\ (1_a,2) & 1+2 \equiv 3 & 1\\ (1_b,1+1)& 0-2 \equiv 3 & -1\\ (-,3) & 0+3 \equiv 3 & 1\\ (1+1,1_a)& -2+1 \equiv 4 & 1\\ (1_a,1+1)& 1-2 \equiv 4 & 1\\ \end{array} \end{equation*} Thus $$ M_1(0,5,3) = M_1(1,5,3) = M_1(2,5,3) = M_1(3,5,3) = M_1(4,5,3) = 2, $$ and we see that the residue of the $\mbox{bicrank}_1$ mod $5$ divides the $18$ bipartitions of $3$ into $5$ classes of equal total weight $2$. We illustrate the first case of Theorem \thm{thm4} (ii). There are $13$ extended bipartitions of $2$ giving a total weight of $p_{-2}(2)=5$. 
\begin{equation*} \begin{array}{rrr} \mbox{Extended bipartitions of $2$} & \mbox{bicrank}_2 \pmod{5} & \mbox{weight}=w \\ (2,-) & 2+0 \equiv 2 & 1\\ (1+1,-) &-2+0 \equiv 3 & 1\\ (1,1) & -1-2 \equiv 2 & 1\\ (1,1_a) & -1+2 \equiv 1 & 1\\ (1,1_b) & -1+0 \equiv 4 &-1\\ (1_a,1) & 1-2 \equiv 4 & 1\\ (1_a,1_a)& 1+2 \equiv 3 & 1\\ (1_a,1_b) & 1+0 \equiv 1 &-1\\ (1_b,1) & 0-2 \equiv 3 &-1\\ (1_b,1_a) & 0+2 \equiv 2 &-1\\ (1_b,1_b) & 0+0 \equiv 0 & 1\\ (-,2) & 0+4 \equiv 4 & 1\\ (-,1+1) & 0-4 \equiv 1 & 1\\ \end{array} \end{equation*} Thus $$ M_2(0,5,2) = M_2(1,5,2) = M_2(2,5,2) = M_2(3,5,2) = M_2(4,5,2) = 1, $$ and we see that the residue of the $\mbox{bicrank}_2$ mod $5$ divides the $13$ bipartitions of $2$ into $5$ classes of equal total weight $1$. Theorem \thm{thm4} (i) is due to Andrews \cite{An08}. Theorem \thm{thm4} (ii) is a natural extension, and its proof is analogous. For completeness we include a sketch of the proof. We define \begin{equation} M'(r,t,n) = \sum_{m\equiv r\pmod{t}} M'(m,n), \end{equation} which is the number of ordinary partitions of $n$ with crank congruent to $r$ mod $t$ when $n\ne1$. When $n=1$ it is counting extended partitions. Then \begin{align} F(\zeta,q)=&\sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k M'(k,5,n) q^n = \frac{(q;q)_\infty}{(\zeta q;q)_\infty (\zeta^{-1}q;q)_\infty} \mylabel{eq:Fdissect}\\ &= A(q^5) - q (\zeta + \zeta^4)^2 \,B(q^5) + q^2 (\zeta^2 + \zeta^3)\, C(q^5) -q^3 (\zeta+\zeta^4)\, D(q^5), \nonumber \end{align} where $F(z,q)$ is given in \eqn{Mvid}, $A(q)$, $B(q)$, $C(q)$, and $D(q)$ are given in \eqn{Adef}--\eqn{Ddef}. Equation \eqn{Fdissect} appears in Ramanujan's Lost Notebook \cite[p.20]{Ra88} and is proved in \cite[(1.30)]{Ga88}. 
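The two tables above can be reproduced mechanically. A Python sketch (helper names are ours) enumerates the extended bipartitions with their weights, recovers the class sums $M_1(k,5,3)=2$ and $M_2(k,5,2)=1$, and spot-checks Theorem \thm{thm4} (i) at another value $n=8\equiv3\pmod 5$:

```python
def partitions(n, m=None):
    """Partitions of n as non-increasing tuples."""
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def crank(p):
    """Dyson's crank, eq. (crankdef); crank of the empty partition is 0."""
    ones = p.count(1)
    if ones == 0:
        return p[0] if p else 0
    return sum(1 for x in p if x > ones) - ones

def extended(n):
    """(crank, weight) pairs over the extended partitions of n."""
    out = [(crank(p), 1) for p in partitions(n)]
    if n == 1:
        out += [(1, 1), (0, -1)]   # 1_a: crank 1, w=+1;  1_b: crank 0, w=-1
    return out

def M_j(j, t, n):
    """Weighted class sums [M_j(0,t,n), ..., M_j(t-1,t,n)] over extended
    bipartitions, with bicrank_j = crank(pi_1) + j*crank(pi_2)."""
    cls = [0]*t
    for n1 in range(n + 1):
        for c1, w1 in extended(n1):
            for c2, w2 in extended(n - n1):
                cls[(c1 + j*c2) % t] += w1*w2
    return cls

assert M_j(1, 5, 3) == [2]*5         # the first table
assert M_j(2, 5, 2) == [1]*5         # the second table
assert len(set(M_j(1, 5, 8))) == 1   # another instance of Theorem 4 (i)
print(M_j(1, 5, 3), M_j(2, 5, 2))
```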
The two bicrank generating functions are given by \begin{align} \sum_{\bpi=(\pi_1,\pi_2)} w(\bpi)\, z^{\bcra} q^{\abs{\bpi}} &= F(z,q)^2, \mylabel{eq:bcr1genfunc}\\ \sum_{\bpi=(\pi_1,\pi_2)} w(\bpi)\, z^{\bcrb} q^{\abs{\bpi}} &= F(z,q) \, F(z^2,q), \mylabel{eq:bcr2genfunc} \end{align} where $F(z,q)$ is the generating function for the crank of extended partitions \eqn{Mvid}. Thus we have \begin{align} \sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k M_{1}(k,5,n) q^n &= F(\zeta,q)^2, \mylabel{eq:bcr1mod5}\\ \sum_{n=0}^\infty \sum_{k=0}^4 \zeta^k M_{2}(k,5,n) q^n &= F(\zeta,q)\, F(\zeta^2,q). \mylabel{eq:bcr2mod5} \end{align} Using only \eqn{Fdissect} and equations \eqn{ABCDfact} and \eqn{ABCDfact2} we easily find that the coefficient of $q^n$ in the $q$-expansion of $F(\zeta,q)^2$ is zero if $n\equiv3\pmod{5}$, and that the coefficient of $q^n$ in the $q$-expansion of $F(\zeta,q)\, F(\zeta^2,q)$ is zero if $n\equiv2$ or $4\pmod{5}$. Both parts of Theorem \thm{thm4} then follow from equations \eqn{bcr1mod5} and \eqn{bcr2mod5}. \section{A Multirank Analog of the Hammond-Lewis Birank} \label{sec:ghlbirank} Let $\mathcal{P}$ denote the set of partitions. A multipartition with $r$ components or an $r$-colored partition of $n$ is simply an $r$-tuple \begin{equation} \vec{\pi}=(\pi_1,\pi_2,\dots,\pi_r) \in \mathcal{P} \times \mathcal{P} \times \cdots \times \mathcal{P} =\mathcal{P}^r, \end{equation} where \begin{equation} \sum_{k=1}^r \abs{\pi_k}=n. \end{equation} It is clear that the number of $r$-colored partitions of $n$ is $p_{-r}(n)$ where \begin{equation} \sum_{n\ge0} p_{-r}(n) q^n = \frac{1}{E(q)^r}. \end{equation} There are two elementary and well-known congruences. \begin{theorem} \label{thm:thm5} Let $t>3$ be prime. \begin{enumerate} \item[(i)] If $24n+1$ is a quadratic nonresidue mod $t$, then \begin{equation} p_{1-t}(n) \equiv 0 \pmod{t}. \end{equation} \item[(ii)] If $8n+1$ is not a quadratic residue mod $t$, then \begin{equation} p_{3-t}(n) \equiv 0 \pmod{t}. 
\end{equation} \end{enumerate} \end{theorem} These results follow easily from identities of Euler and Jacobi. Theorem \thm{thm5} (i) follows from \begin{equation} \sum_{n\ge0} p_{1-t}(n) q^{24n+1} = \frac{qE(q^{24})}{E(q^{24})^t} \equiv \frac{1}{E(q^{24t})} \sum_{n=-\infty}^\infty (-1)^n q^{(6n+1)^2} \pmod{t}. \end{equation} Here we have used Euler's Pentagonal Number Theorem \cite[Thm 353]{Ha-Wr-BOOK} \begin{equation} E(q) = \sum_{n=-\infty}^\infty (-1)^n q^{n(3n+1)/2}. \mylabel{eq:Epent} \end{equation} Theorem \thm{thm5} (ii) follows from \begin{equation} \sum_{n\ge0} p_{3-t}(n) q^{8n+1} = \frac{qE^3(q^{8})}{E(q^{8})^t} \equiv \frac{1}{E(q^{8t})} \sum_{n\ge0}(-1)^n (2n+1) q^{(2n+1)^2} \pmod{t}, \end{equation} where we have used Jacobi's Identity \cite[Thm 237]{Ha-Wr-BOOK} \begin{equation} E(q)^3 = \sum_{n\ge0} (-1)^n (2n+1) q^{n(n+1)/2}. \mylabel{eq:jacid} \end{equation} Theorem \thm{thm5} (ii) is Theorem 1 in \cite{An08}. In this section we construct analogs of the Hammond-Lewis birank to combinatorially explain the two congruences in Theorem \thm{thm5}. Andrews's bicrank \cite{An08} (see also equation \eqn{bic1}) gave a combinatorial interpretation of Theorem \thm{thm5} (ii) for the case $t=5$, and $n\equiv 3\pmod{5}$. The Hammond-Lewis birank gave a combinatorial interpretation of Theorem \thm{thm5} (ii) for the case $t=5$, and all relevant $n$. For even $r$, we define the generalized Hammond-Lewis multirank by \begin{equation} \mbox{gHL-multirank}(\vec{\pi}) = \sum_{k=1}^{r/2} k\,\left(\#(\pi_k) - \#(\pi_{r+1-k})\right), \mylabel{eq:gHLmrankdef} \end{equation} for $\vec{\pi}=(\pi_1,\pi_2,\dots,\pi_{r})$ a multipartition with $r$ components. The $r=2$ case corresponds to the Hammond-Lewis birank. In this section we prove \begin{theorem} \label{thm:thm6} Let $t>3$ be prime. 
\begin{enumerate} \item[(i)] The residue of the generalized-Hammond-Lewis-multirank mod $t$ divides the multipartitions of $n$ with $r=t-1$ components into $t$ equal classes provided $24n+1$ is a quadratic nonresidue mod $t$. \item[(ii)] The residue of the generalized-Hammond-Lewis-multirank mod $t$ divides the multipartitions of $n$ with $r=t-3$ components into $t$ equal classes provided $8n+1$ is not a quadratic residue mod $t$. \end{enumerate} \end{theorem} We illustrate Theorem \thm{thm6} (ii) for $t=7$ and $n=2$. \begin{equation*} \begin{array}{rr} \mbox{Multipartitions of $2$} & \mbox{generalized-HL-multirank}\\ \mbox{with $4$ components} & \mbox{(mod $7$)}\\ (-, -, -, 1+1) & -2\equiv5\\ ( -, -, -, 2) & -1\equiv6\\ ( -, -, 1, 1) & -3\equiv4\\ (-, -, 1+1,-) & -4\equiv3\\ ( -, -, 2,-) & -2\equiv5\\ ( -, 1, -, 1) & 1\equiv1\\ ( -, 1, 1,-) & 0\equiv0\\ (-, 1+1, -,-) & 4\equiv4\\ ( -, 2, -,-) & 2\equiv2\\ ( 1, -, -, 1) & 0\equiv0\\ ( 1, -, 1,-) & -1\equiv6\\ ( 1, 1, -,-) & 3\equiv3\\ (1+1, -, -,-) & 2\equiv2\\ ( 2, -, -,-) & 1\equiv1\\ \end{array} \end{equation*} We see that the residue of generalized-Hammond-Lewis-multirank mod $7$ divides the $14$ $4$-colored partitions of $2$ into $7$ equal classes. Both parts of Theorem \thm{thm6} are easy to prove. For (i), we need only Euler's pentagonal number theorem \eqn{Epent}. We let $\zeta_t$ be a primitive $t$-th root of unity. We have \begin{align} \sum_{\vec{\pi}\in\mathcal{P}^{t-1}} \zeta_t^{\gHLmr} q^{\abs{\vec{\pi}}} &= \prod_{k=1}^{(t-1)/2} \frac{1}{(\zeta_t^k q;q)_\infty (\zeta_t^{-k}q;q)_\infty} \mylabel{eq:gHLgenfunc}\\ &= \frac{(q;q)_\infty} {(q^t;q^t)_\infty}. \end{align} From \eqn{Epent} we have \begin{equation} \sum_{\vec{\pi}\in\mathcal{P}^{t-1}} \zeta_t^{\gHLmr} q^{24\abs{\vec{\pi}}+1} = \frac{1}{(q^{24t};q^{24t})_\infty} \sum_{n=-\infty}^\infty (-1)^n q^{(6n+1)^2}. 
\mylabel{eq:gHLgenfunc2} \end{equation} We see that in the $q$-expansion on the right side of \eqn{gHLgenfunc2} the coefficient of $q^n$ is zero when $n$ is a quadratic nonresidue mod $t$. Theorem \thm{thm6} (i) follows. For part (ii) of Theorem \thm{thm6} we only need Jacobi's triple product identity \eqn{jtprod}. We have \begin{align} \sum_{\vec{\pi}\in\mathcal{P}^{t-3}} \zeta_t^{\gHLmr} q^{\abs{\vec{\pi}}} &= \prod_{k=1}^{(t-3)/2} \frac{1}{(\zeta_t^k q;q)_\infty (\zeta_t^{-k}q;q)_\infty} \mylabel{eq:gHLgenfunc3}\\ &= \frac{(\zeta_t^{(t-1)/2}q;q)_\infty (\zeta_t^{-(t-1)/2}q;q)_\infty (q;q)_\infty} {(q^t;q^t)_\infty}\nonumber\\ &= \frac{\displaystyle \sum_{m=-\infty}^\infty (-1)^{m+1} (\zeta_t^{(m+1)(t-1)/2} - \zeta_t^{-m(t-1)/2}) q^{m(m+1)/2}} {(1 - \zeta_t^{(t-1)/2}) (q^t;q^t)_\infty}, \nonumber \end{align} and \begin{align} &\sum_{\vec{\pi}\in\mathcal{P}^{t-3}} \zeta_t^{\gHLmr} q^{8\abs{\vec{\pi}}+1} \mylabel{eq:gHLgenfunc4}\\ &\quad = \frac{1}{(1 - \zeta_t^{(t-1)/2}) (q^{8t};q^{8t})_\infty} \sum_{m=-\infty}^\infty (-1)^{m+1} \zeta_t^{-m(t-1)/2} (\zeta_t^{(2m+1)(t-1)/2} - 1) q^{(2m+1)^2}. \nonumber \end{align} We see that in the $q$-expansion on the right side of \eqn{gHLgenfunc4} the coefficient of $q^n$ is zero when $n$ is not a quadratic residue mod $t$, i.e.\ when $n$ is either a quadratic nonresidue or $n\equiv0\pmod{t}$. Theorem \thm{thm6} (ii) follows. \section{Multicranks} \label{sec:mcranks} In this section we give some extensions of the bicrank to multipartitions and provide alternative interpretations for some of the congruences given in Theorem \thm{thm5}. We define two multicranks. These multicranks are defined in terms of cartesian products of extended partitions and ordinary partitions. In Section \sect{bicrank}, we defined the set of extended partitions $\mathcal{E}$ and its associated crank and weight function. Recall from Section \sect{ghlbirank} that $\mathcal{P}$ denotes the set of ordinary partitions, and $\mathcal{P}\subset\mathcal{E}$. 
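Before turning to the multicranks, the ingredients of the proofs above can be spot-checked numerically. The Python sketch below (the truncation order $N=60$ and the test case $t=7$, $n=2$ are arbitrary choices of ours) verifies Euler's pentagonal number theorem \eqn{Epent}, Jacobi's identity \eqn{jacid}, and the multirank table for $t=7$, $n=2$:

```python
N = 60   # truncation order (arbitrary)

def euler(N):
    """Coefficients of E(q) = prod_{k>=1}(1 - q^k) up to q^N."""
    c = [0]*(N + 1); c[0] = 1
    for k in range(1, N + 1):
        for i in range(N, k - 1, -1):
            c[i] -= c[i - k]
    return c

def mul(a, b, N):
    """Truncated series product."""
    c = [0]*(N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N and bj:
                    c[i + j] += ai*bj
    return c

E = euler(N)

# Euler's pentagonal number theorem (Epent)
pent = [0]*(N + 1)
n = 0
while n*(3*n - 1)//2 <= N:
    for g in (n*(3*n - 1)//2, n*(3*n + 1)//2):
        if g <= N:
            pent[g] = (-1)**n
    n += 1
assert E == pent

# Jacobi's identity (jacid): E(q)^3 = sum_{n>=0} (-1)^n (2n+1) q^{n(n+1)/2}
jac = [0]*(N + 1)
n = 0
while n*(n + 1)//2 <= N:
    jac[n*(n + 1)//2] = (-1)**n*(2*n + 1)
    n += 1
assert mul(mul(E, E, N), E, N) == jac

# gHL multirank classes for t = 7, n = 2 (r = t - 3 = 4 components)
def partitions(n, m=None):
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def multipartitions(n, r):
    if r == 1:
        yield from ((p,) for p in partitions(n))
        return
    for n1 in range(n + 1):
        for p in partitions(n1):
            for rest in multipartitions(n - n1, r - 1):
                yield (p,) + rest

def ghl(pis):
    """gHL-multirank of eq. (gHLmrankdef); #(pi) = number of parts."""
    r = len(pis)
    return sum(k*(len(pis[k - 1]) - len(pis[r - k]))
               for k in range(1, r//2 + 1))

t = 7
cls = [0]*t
for mp in multipartitions(2, t - 3):
    cls[ghl(mp) % t] += 1
assert cls == [2]*t
print("pentagonal, Jacobi and multirank checks pass")
```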
Let $r$ be a positive even integer. For an extended multipartition \begin{equation} \vec{\pi}=(\pi_1,\pi_2,\dots,\pi_{r}) \in \mathcal{E} \times \cdots \times \mathcal{E} \times \mathcal{P} \times \cdots \times \mathcal{P} =\mathcal{E}^{r/2} \times \mathcal{P}^{r/2}, \end{equation} we define multicrank-I by \begin{equation} \mbox{multicrank-I}(\vec{\pi}) = \sum_{k=1}^{r/2} k\cdot\mbox{crank}(\pi_k). \mylabel{eq:mcrankIdef} \end{equation} For an extended multipartition \begin{equation} \vec{\pi}=(\pi_1,\pi_2,\dots,\pi_{r}) \in \mathcal{E} \times \mathcal{E} \times \mathcal{P} \times \cdots \times \mathcal{P} = \mathcal{E} \times \mathcal{E} \times \mathcal{P}^{r-2}, \end{equation} we define multicrank-II by \begin{equation} \mbox{multicrank-II}(\vec{\pi}) = \sum_{k=1}^{2} k\cdot\mbox{crank}(\pi_k) + \sum_{k=3}^{(r+2)/2} k\,\left(\#(\pi_k) - \#(\pi_{r-k+3})\right). \mylabel{eq:mcrankIIdef} \end{equation} We note that the $\mbox{bicrank}_2$ corresponds to the multicrank-II when $r=2$. For both types of extended multipartitions we define a sum of parts function and a weight function in the natural way \begin{equation} \abs{\vec{\pi}} = \sum_{k=1}^{r} \abs{\pi_k},\quad\mbox{and}\qquad w(\vec{\pi}) = \prod_{k=1}^{r} w(\pi_k). \mylabel{eq:ewmdef} \end{equation} We have \begin{equation} \sum_{\substack{\vec{\pi}\in \mathcal{E}^{r/2}\times\mathcal{P}^{r/2}\\ \abs{\vec{\pi}}=n}} w(\vec{\pi}) = \sum_{\substack{\vec{\pi}\in \mathcal{E}^{2}\times\mathcal{P}^{r-2}\\ \abs{\vec{\pi}}=n}} w(\vec{\pi}) = p_{-r}(n). \mylabel{eq:totw1} \end{equation} \begin{theorem} \label{thm:thm7} Let $t>3$ be prime. \begin{enumerate} \item[(i)] The residue of the multicrank-I mod $t$ divides the extended multipartitions of $n$ from $\mathcal{E}^{(t-1)/2}\times\mathcal{P}^{(t-1)/2}$ into $t$ classes of equal weight provided $24n+1$ is a quadratic nonresidue mod $t$. 
\item[(ii)] The residue of the multicrank-I mod $t$ divides the extended multipartitions of $n$ from $\mathcal{E}^{(t-3)/2}\times\mathcal{P}^{(t-3)/2}$ into $t$ classes of equal weight provided $8n+1$ is not a quadratic residue mod $t$. \item[(iii)] The residue of the multicrank-II mod $t$ divides the extended multipartitions of $n$ from $\mathcal{E}^{2}\times\mathcal{P}^{t-5}$ into $t$ classes of equal weight provided $8n+1$ is a quadratic nonresidue mod $t$. \end{enumerate} \end{theorem} In view of \eqn{totw1}, Theorem \thm{thm7} (i), (ii) provides alternative combinatorial interpretations of our congruences for multipartitions given in Theorem \thm{thm5} (i), (ii). The result in part (iii) is weaker than (ii). We include it since it is a generalization of the $\mbox{bicrank}_2$. The proof of Theorem \thm{thm7} is very similar to that of Theorem \thm{thm6}. We have \begin{align} \sum_{\vec{\pi}\in \mathcal{E}^{(t-1)/2}\times\mathcal{P}^{(t-1)/2}} \zeta_t^{\mcrI} w(\vec{\pi}) q^{\abs{\vec{\pi}}} &= \left(\prod_{k=1}^{(t-1)/2} F(\zeta_t^k,q) \right) \frac{1}{E^{(t-1)/2}(q)} \mylabel{eq:mcIgenfunc}\\ &= \prod_{k=1}^{(t-1)/2} \frac{1}{(\zeta_t^k q;q)_\infty (\zeta_t^{-k}q;q)_\infty} \nonumber\\ &= \frac{(q;q)_\infty} {(q^t;q^t)_\infty}, \nonumber \end{align} where $F(z,q)$ is the crank generating function given in \eqn{Mvid}. Theorem \thm{thm7} (i) then follows from \eqn{gHLgenfunc2}. Similarly, \begin{align} \sum_{\vec{\pi}\in \mathcal{E}^{(t-3)/2}\times\mathcal{P}^{(t-3)/2}} \zeta_t^{\mcrI} w(\vec{\pi}) q^{\abs{\vec{\pi}}} &= \left(\prod_{k=1}^{(t-3)/2} F(\zeta_t^k,q) \right) \frac{1}{E^{(t-3)/2}(q)} \mylabel{eq:mcIgenfunc2}\\ &= \prod_{k=1}^{(t-3)/2} \frac{1}{(\zeta_t^k q;q)_\infty (\zeta_t^{-k}q;q)_\infty}. \nonumber \end{align} Theorem \thm{thm7} (ii) then follows from \eqn{gHLgenfunc3}, \eqn{gHLgenfunc4}. 
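The key product evaluation in \eqn{mcIgenfunc} rests on the elementary identity $\prod_{k=1}^{t-1}(\zeta_t^k q;q)_\infty = (q^t;q^t)_\infty/(q;q)_\infty$. A short numerical check in Python for $t=5$ (the truncation order is an arbitrary choice of ours):

```python
import cmath

N, t = 40, 5                 # truncation order and modulus (arbitrary)
zeta = cmath.exp(2j*cmath.pi/t)

def mul(a, b):
    """Truncated product of q-series given as coefficient lists."""
    c = [0]*(N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N and bj:
                    c[i + j] += ai*bj
    return c

def euler(step):
    """(q^step; q^step)_infinity up to q^N."""
    c = [0]*(N + 1); c[0] = 1
    for k in range(step, N + 1, step):
        for i in range(N, k - 1, -1):
            c[i] -= c[i - k]
    return c

# prod_{k=1}^{t-1} (zeta^k q; q)_infinity, truncated at q^N
prod = [0]*(N + 1); prod[0] = 1
for k in range(1, t):
    for j in range(1, N + 1):
        factor = [0]*(N + 1)
        factor[0], factor[j] = 1, -zeta**k
        prod = mul(prod, factor)

lhs = mul(prod, euler(1))    # times (q;q)_infinity
rhs = euler(t)               # should equal (q^t;q^t)_infinity
assert all(abs(l - r) < 1e-8 for l, r in zip(lhs, rhs))
print("product identity verified to order", N)
```

The identity itself follows from $\prod_{k=0}^{t-1}(1-\zeta_t^k x) = 1-x^t$ applied to each $x=q^j$.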
We have \begin{align} \sum_{\vec{\pi}\in \mathcal{E}^{2}\times\mathcal{P}^{t-5}} \zeta_t^{\mcrII} w(\vec{\pi}) q^{\abs{\vec{\pi}}} &= F(\zeta_t,q)\,F(\zeta_t^2,q)\, \prod_{k=3}^{(t-1)/2} \frac{1}{(\zeta_t^k q;q)_\infty (\zeta_t^{-k}q;q)_\infty} \mylabel{eq:mcIIgenfunc}\\ & = \frac{(q)_\infty^3}{(q^t;q^t)_\infty}. \nonumber \end{align} Then \begin{equation} \sum_{\vec{\pi}\in \mathcal{E}^{2}\times\mathcal{P}^{t-5}} \zeta_t^{\mcrII} w(\vec{\pi}) q^{8\abs{\vec{\pi}}+1} = \frac{1}{E(q^{8t})} \sum_{n\ge0}(-1)^n (2n+1) q^{(2n+1)^2}, \mylabel{eq:mcIIgenfunc2} \end{equation} and Theorem \thm{thm7} (iii) follows. \section{Concluding Remarks} \label{sec:end} The two main results of this paper are the combinatorial interpretations of the 2-colored partition congruences \eqn{2colcongs} in terms of the Dyson-birank and the $5$-core-birank. The author has been unable to extend these two results to higher dimensional multipartitions. The extensions of the Hammond-Lewis birank and Andrews bicrank are much easier because the generating functions involved are simple infinite products. It seems unlikely that a combinatorial proof of \eqn{T5dissect} is possible. This identity gives the $5$-dissection of the $5$-core-crank generating function when $z=\zeta_5$. The proof given in the paper relies heavily on the theory of modular functions. A more elementary proof is desirable. In \cite{Ga-Ki-St}, a combinatorial proof is given that the residue of the $5$-core-crank mod $5$ divides the $5$-cores of $5n+4$ into $5$ equal classes. It would be interesting to see if the methods of \cite{Ga-Ki-St} could be extended to give a combinatorial proof of Theorem \thm{thm3}, which is our result for the $5$-core-birank. It is clear that the generalized-Hammond-Lewis multiranks and our multicranks are related. 
For instance, from equations \eqn{gHLgenfunc}, \eqn{mcIgenfunc}, \eqn{gHLgenfunc3}, and \eqn{mcIgenfunc2} we have \begin{align} \sum_{\vec{\pi}\in\mathcal{P}^{t-1}} \zeta_t^{\gHLmr} q^{\abs{\vec{\pi}}} &= \sum_{\vec{\pi}\in \mathcal{E}^{(t-1)/2}\times\mathcal{P}^{(t-1)/2}} \zeta_t^{\mcrI} w(\vec{\pi}) q^{\abs{\vec{\pi}}}, \mylabel{eq:gHLmcrid1}\\ \sum_{\vec{\pi}\in\mathcal{P}^{t-3}} \zeta_t^{\gHLmr} q^{\abs{\vec{\pi}}} &= \sum_{\vec{\pi}\in \mathcal{E}^{(t-3)/2}\times\mathcal{P}^{(t-3)/2}} \zeta_t^{\mcrI} w(\vec{\pi}) q^{\abs{\vec{\pi}}}. \mylabel{eq:gHLmcrid2} \end{align} It would be interesting to find a combinatorial proof of these identities. However, what would be more interesting is to find bijective proofs of Theorems \thm{thm4} and \thm{thm6}. This is a reasonable problem since the generating functions involved are simple infinite products. \noindent \textbf{Acknowledgement} \noindent I would like to thank \dots. \bibliographystyle{amsplain}
\section{Introduction} \setcounter{equation}{0} The study of frustrated spin systems is an important and fascinating issue in condensed matter physics \cite{Balents}. Due to the competition of various interactions, many interesting phenomena such as different magnetically ordered states are induced in this kind of system \cite{2}. A typical one-dimensional frustrated model is the $J_1-J_2$ spin chain, where both the nearest-neighbor (NN) and the next-nearest-neighbor (NNN) couplings are included \cite{Sha81}. Besides the spin-exchange interaction, the chiral three-spin coupling is another important interaction. In 1987, Kalmeyer and Laughlin proposed that the spin-singlet ground state of a frustrated Heisenberg antiferromagnet can describe the fractional quantum Hall state \cite{Laughlin87}, which is known as the chiral spin liquid. Later, Wen et al.\ \cite{wen89} and Baskaran \cite{Bask89} clarified that the expectation value of the chiral three-spin operator can be used as the order parameter of spin liquid states. After that, related physical properties such as spin-charge separation, the bulk gap and chiral edge states were studied extensively \cite{Fradkin91,Yang93}. The quantum liquid behavior of one-dimensional systems is a very interesting issue \cite{Imam12}, and the pseudoparticle approach is a powerful method to exactly calculate the physical quantities based on integrable models \cite{Car18}. Recently, models with spin chirality terms have attracted renewed attention in condensed matter physics \cite{Nom92,93,97,98}, statistical mechanics \cite{Djo16} and quantum field theory \cite{Jaf06,Goro15,Chen17}. At present, the exact energy spectrum of the one-dimensional $J_1-J_2$ model is still an open question. However, when the spin chirality terms are added, the generalized $J_1-J_2$ model can be mapped onto a system of two coupled quantum spin chains and is exactly solvable \cite{Pop93,Zvy95-1,Zvy95-2}. 
For example, Frahm et al.\ constructed an integrable family of coupled Heisenberg spin chains \cite{Frahm96} and studied the zero-temperature properties utilizing the algebraic Bethe ansatz \cite{Frahm97}. Ikhlef, Jacobsen and Saleur constructed an integrable $Z_2$ staggered vertex model and explained its possible applications \cite{IK08,IK10}. Other interesting developments in this direction can be found in the references \cite{Arna00,Zvy01}. The integrable boundary conditions of the generalized $J_1-J_2$ model are the periodic, antiperiodic and open ones. When we consider the antiperiodic or the non-diagonal boundary conditions, the $U(1)$ symmetry of the system is broken, and the coordinate and algebraic Bethe ansatz methods do not work due to the lack of a reference state. Due to the extensive applications in open string theory, non-equilibrium statistical mechanics and topological physics, the exact solutions of quantum integrable systems without $U(1)$ symmetry are very important. Several methods such as the gauge transformation \cite{cao03}, the $T-Q$ relation based on fusion \cite{Yung95,nep021}, the $q$-Onsager algebra \cite{Bas1,Bas2}, separation of variables \cite{sk2-2,Niccoli13-1}, the modified algebraic Bethe ansatz \cite{Bel13-1,Bel13-2,Bel13-3} and the off-diagonal Bethe ansatz \cite{cysw,Book} have been proposed to overcome this obstacle. Focusing on the generalized $J_1-J_2$ model, the eigenvalue of the transfer matrix is characterized by an inhomogeneous $T-Q$ relation, where the associated Bethe ansatz equations (BAEs) are also inhomogeneous \cite{Qiao20}. Then another problem arises: it is quite hard to calculate the physical quantities in the thermodynamic limit because of the existence of the inhomogeneous term. In order to overcome this difficulty, the $t-W$ relation was proposed \cite{Qiao2020}. Based on it, the elementary excitations and surface energy of the XXZ spin chain with the antiperiodic boundary condition were obtained. 
In this paper, we study a more general quantum spin chain that includes the NN, NNN and chiral three-spin interactions. We consider the antiperiodic boundary condition. We find that if we parameterize the eigenvalue of the transfer matrix by its zero roots, we obtain homogeneous BAEs, which makes it possible to take the thermodynamic limit and calculate the physical quantities exactly. We remark that the fusion relation gives all the necessary information to determine the energy spectrum. Based on it, we calculate the exact physical quantities in the thermodynamic limit. We obtain the ground state energy, the elementary excitations and the corresponding dispersion relations. The conserved momentum and charge operators of this $U(1)$-symmetry broken system are also given. The paper is organized as follows. In the next section, we introduce the model Hamiltonian and explain its integrability. In section 3, we show how to parameterize the eigenvalue of the transfer matrix and how to obtain the homogeneous BAEs. In section 4, we derive the conserved momentum and charge operators of the system. The ground state energy with real $a$ and imaginary $\eta$ is derived and its thermodynamic limit is calculated in section 5. In section 6, three kinds of elementary excitations and the corresponding excited energies are deduced. Concluding remarks and discussions are given in section 7. \section{The system and its integrability} \setcounter{equation}{0} We consider an anisotropic quantum spin chain which includes the NN, NNN and chiral three-spin interactions with the antiperiodic boundary condition. 
The model Hamiltonian reads \begin{eqnarray}\label{Ham1} H =-\sum^{2N}_{j=1} \sum_{\alpha=x,y,z} \big[ J_1^\alpha \sigma_j^\alpha \sigma_{j+1}^\alpha +J_2 \sigma_j^\alpha \sigma_{j+2}^\alpha + (-1)^j J_3^{\alpha} \sigma_{j+1}^\alpha(\vec{\sigma}_{j} \times \vec{\sigma}_{j+2} )^\alpha\big], \end{eqnarray} where $2N$ is the number of sites, $\sigma_j^\alpha$ is the Pauli matrix at the $j$-th site along the $\alpha$-direction, and $(\vec{\sigma}_{j} \times \vec{\sigma}_{j+2} )^\alpha$ is the $\alpha$-component of the vector product $\vec{\sigma}_{j} \times \vec{\sigma}_{j+2}$. Here the NN coupling has $XXZ$-type anisotropy. Without loss of generality, we put \begin{eqnarray} J_1^x=J_1^y=\cosh(2a), \quad J_1^z=\cosh\eta, \end{eqnarray} where $a$ is a model parameter and $\eta$ is the crossing or anisotropic parameter. $J_2$ describes the NNN coupling, which is isotropic. The integrability of the Hamiltonian (\ref{Ham1}) requires that the coupling constant $J_2$ satisfies \begin{eqnarray} J_2=-\frac{\sinh^2(2a)\cosh\eta}{2\sinh^2\eta}.\label{coeffJ2} \end{eqnarray} $J_3^\alpha$ describes the chiral three-spin interaction, which satisfies the constraint \begin{eqnarray} J_3^x=J_3^y= \frac{i \sinh(2a)}{2 \sinh\eta}\cosh\eta, \quad J_3^z=\frac{i \sinh(2a)}{4 \sinh\eta}\cosh(2a).\label{coeffJ3} \end{eqnarray} Thus the spin chirality terms are anisotropic. The Hermiticity of the Hamiltonian (\ref{Ham1}) requires that $a$ is real if $\eta$ is imaginary, and that $a$ is imaginary if $\eta$ is real. If $a=0$, the NNN and three-spin chirality terms vanish and the model (\ref{Ham1}) reduces to the ordinary XXZ spin chain. 
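As a sanity check on the coupling constraints (\ref{coeffJ2})--(\ref{coeffJ3}) and the Hermiticity condition, one can build the Hamiltonian (\ref{Ham1}) explicitly for a small chain. The following Python/NumPy sketch (the site count and parameter values are arbitrary test choices of ours) constructs $H$ for $2N=4$ with the antiperiodic wrapping of the boundary sites and verifies $H=H^\dagger$ for real $a$ and imaginary $\eta$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'y': sy, 'z': sz}

L = 4               # 2N with N = 2 (arbitrary small size)
a, eta = 0.3, 0.5j  # real a, imaginary eta: the Hermitian case

def site_op(alpha, j):
    """sigma_j^alpha; sites beyond the edge are wrapped with the
    antiperiodic rule sigma_{2N+n} = sx_n sigma^alpha_n sx_n."""
    sign = 1
    while j > L:
        j -= L
        if alpha in ('y', 'z'):
            sign = -sign            # sx sy sx = -sy, sx sz sx = -sz
    M = np.array([[1]], dtype=complex)
    for site in range(1, L + 1):
        M = np.kron(M, pauli[alpha] if site == j else np.eye(2))
    return sign*M

J1 = {'x': np.cosh(2*a), 'y': np.cosh(2*a), 'z': np.cosh(eta)}
J2 = -np.sinh(2*a)**2*np.cosh(eta)/(2*np.sinh(eta)**2)
J3 = {'x': 1j*np.sinh(2*a)*np.cosh(eta)/(2*np.sinh(eta)),
      'y': 1j*np.sinh(2*a)*np.cosh(eta)/(2*np.sinh(eta)),
      'z': 1j*np.sinh(2*a)*np.cosh(2*a)/(4*np.sinh(eta))}
eps = {('x', 'y', 'z'): 1, ('y', 'z', 'x'): 1, ('z', 'x', 'y'): 1,
       ('x', 'z', 'y'): -1, ('y', 'x', 'z'): -1, ('z', 'y', 'x'): -1}

H = np.zeros((2**L, 2**L), dtype=complex)
for j in range(1, L + 1):
    for al in 'xyz':
        # (sigma_j x sigma_{j+2})^al = sum_{be,ga} eps_{al,be,ga}
        #                              sigma_j^be sigma_{j+2}^ga
        chi = sum(s*site_op(be, j) @ site_op(ga, j + 2)
                  for (al2, be, ga), s in eps.items() if al2 == al)
        H -= (J1[al]*site_op(al, j) @ site_op(al, j + 1)
              + J2*site_op(al, j) @ site_op(al, j + 2)
              + (-1)**j*J3[al]*site_op(al, j + 1) @ chi)

assert np.allclose(H, H.conj().T)   # Hermitian for real a, imaginary eta
print("H is Hermitian for real a, imaginary eta")
```

Note that with real $a$ and imaginary $\eta$ all coupling constants come out real, so every term of $H$ is manifestly Hermitian.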
The antiperiodic boundary condition is achieved by \begin{eqnarray} \sigma^{\alpha}_{2N+n}=\sigma^{x}_{n} \sigma^{\alpha}_{n} \sigma^{x}_{n},\quad n=1,2, \quad \alpha=x,y,z.\label{APB} \end{eqnarray} The Hamiltonian (\ref{Ham1}) is generated by the generating functionals $t(u)$ and $\hat t(u)$ as \begin{eqnarray}\label{J1J2ham} H = -\phi^{1-N}(2a)\sinh\eta\big\{ \hat{t}(-a)\frac{\partial \, t(u)}{\partial u}\big|_{u=a}+ \hat{t}(a) \frac{\partial \, t(u)}{\partial u}\big|_{u=-a} \big\}+E_0. \end{eqnarray} Here the constants $\phi(2a)$ and $E_0$ are given by \begin{eqnarray} \phi(2a)=-\frac{\sinh(2a+\eta)\sinh(2a-\eta)}{\sinh^2\eta},\quad E_0=-\frac{N\cosh\eta[\cosh^2(2a)-\cosh(2\eta)]}{\sinh^2\eta}. \end{eqnarray} $t(u)$ and $\hat t(u)$ are the transfer matrices with the definitions \begin{eqnarray} &&t(u)=tr_0\{\sigma^x_0 R_{0,1}(u+a) R_{0,2}(u-a) \cdots R_{0,2N-1}(u+a) R_{0,2N}(u-a)\}, \nonumber \\ &&\hat{t}(u)=tr_0 \{\sigma^x_0 R_{0,2N}(u+a) R_{0,2N-1}(u-a)\cdots R_{0,2}(u+a) R_{0,1}(u-a)\}, \label{trans} \end{eqnarray} where $tr_0$ means the partial trace in the auxiliary space ${\bf V}_0$, the subscript $j=\{1, \cdots, 2N\}$ denotes the $j$-th quantum or physical space ${\bf V}_j$, $u$ is the spectral parameter and $R_{0,j}(u)$ is the six-vertex $R$-matrix defined in the tensor space ${\rm\bf V}_0\otimes {\rm\bf V}_j$ \begin{eqnarray} R_{0,j}(u)=\frac{\sinh(u+\eta)+\sinh u}{2\sinh \eta}+\frac{1}{2} (\sigma^x_0 \sigma^x_j +\sigma^y_0 \sigma^y_j) + \frac{\sinh(u+\eta)-\sinh u}{2\sinh \eta} \sigma^z_0 \sigma^z_j. \label{R-matrix} \end{eqnarray} From Eq.(\ref{trans}), we know that the transfer matrices $t(u)$ and $\hat t(u)$ are defined in the tensor space ${\rm\bf V}_1\otimes {\rm\bf V}_2\otimes\cdots\otimes{\rm\bf V}_{2N}$. Throughout this paper, we adopt the standard notation: ${\rm\bf V}$ denotes a $2$-dimensional linear space. 
For any matrix $A\in {\rm End}({\rm\bf V})$, $A_j$ is an embedding operator in the tensor space ${\rm\bf V}\otimes {\rm\bf V}\otimes\cdots$, which acts as $A$ on the $j$-th space and as identity on the other factor spaces. For any matrix $B\in {\rm End}({\rm\bf V}\otimes {\rm\bf V})$, $B_{i,j}$ is an embedding operator of $B$ in the tensor space, which acts as identity on the factor spaces except for the $i$-th and $j$-th ones. The $R$-matrix (\ref{R-matrix}) has following properties \begin{eqnarray} &&\hspace{-1.5cm}\mbox{ Initial condition}:\,R_{0,j}(0)= P_{0,j},\label{Int-R} \\ &&\hspace{-1.5cm}\mbox{ Unitarity relation}:\,R_{0,j}(u)R_{j,\,0}(-u)= \phi(u)\times{\bf id}, \label{Unitarity} \\ &&\hspace{-1.5cm}\mbox{ Crossing relation}:\,R_{0,j}(u)=V_0R_{0,j}^{t_j}(-u-\eta)V_0,\quad V_0=-i\sigma_0^y, \label{crosing-unitarity} \\ &&\hspace{-1.5cm}\mbox{ PT-symmetry}:\,R_{0,j}(u)=R_{j,\,0}(u)=R^{t_0\,t_j}_{0,j}(u),\label{PT} \\ &&\hspace{-1.4cm}\mbox{$Z_2$-symmetry}: \;\; \sigma^\alpha_0\sigma^\alpha_jR_{0,j}(u)=R_{0,j}(u)\sigma^\alpha_0\sigma^\alpha_j,\quad \mbox{for}\,\,\, \alpha=x,y,z,\label{Z2-sym} \\ &&\hspace{-1.5cm}\mbox{ Quasi-periodicity}:\, R_{0,j}(u+i\pi)=-\sigma^z_0R_{0,j}(u)\sigma^z_0,\label{quasi-}\\ &&\hspace{-1.5cm}\mbox{ Fusion relation}:\, R_{0,j}(-\eta)=-2P^{(-)}_{0,j},\label{fu-} \end{eqnarray} where ${\bf id}$ means the identity operator, $R_{j,0}(u)=P_{0,j}R_{0,j}(u)P_{0,j}$ with $P_{0,j}$ being the permutation operator, $t_l$ denotes transposition in the $l$-th space with $l=\{0, j\}$, $P^{(-)}_{0,j}$ is the one-dimensional antisymmetric projection operator, and $P^{(-)}_{0,j}=(1- P_{0,j})/2$. 
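All of the listed properties, together with the Yang-Baxter equation, can be verified numerically for generic parameter values. A Python/NumPy sketch (the values of $\eta$ and the spectral parameters are arbitrary test choices of ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, I4 = np.eye(2), np.eye(4)
# permutation operator P_{0,j} on V_0 x V_j
P = 0.5*(I4 + np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

eta = 0.4 + 0.3j          # generic anisotropy (arbitrary test value)

def R(u):
    """Six-vertex R-matrix (R-matrix) as a 4x4 matrix on V_0 x V_j."""
    p = (np.sinh(u + eta) + np.sinh(u))/(2*np.sinh(eta))
    m = (np.sinh(u + eta) - np.sinh(u))/(2*np.sinh(eta))
    return p*I4 + 0.5*(np.kron(sx, sx) + np.kron(sy, sy)) + m*np.kron(sz, sz)

u = 0.7 - 0.2j            # generic spectral parameter
phi = -np.sinh(u + eta)*np.sinh(u - eta)/np.sinh(eta)**2
V = np.array([[0, -1], [1, 0]], dtype=complex)    # V = -i sigma^y

assert np.allclose(R(0), P)                           # initial condition
assert np.allclose(R(u) @ (P @ R(-u) @ P), phi*I4)    # unitarity
# crossing: R(u) = V_0 R^{t_j}(-u-eta) V_0, partial transpose in space j
Rt = R(-u - eta).reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
assert np.allclose(R(u), np.kron(V, I2) @ Rt @ np.kron(V, I2))
assert np.allclose(R(u + 1j*np.pi),                   # quasi-periodicity
                   -np.kron(sz, I2) @ R(u) @ np.kron(sz, I2))
assert np.allclose(R(-eta), -(I4 - P))                # fusion: -2 P^(-)

# Yang-Baxter equation on V x V x V
def R12(v): return np.kron(R(v), I2)
def R23(v): return np.kron(I2, R(v))
def R13(v):
    S = np.kron(I2, P)    # swap of the last two spaces
    return S @ R12(v) @ S

u1, u2, u3 = 0.5, -0.3 + 0.2j, 0.8
lhs = R12(u1 - u2) @ R13(u1 - u3) @ R23(u2 - u3)
rhs = R23(u2 - u3) @ R13(u1 - u3) @ R12(u1 - u2)
assert np.allclose(lhs, rhs)
print("all listed R-matrix properties hold numerically")
```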
Besides, the $R$-matrix (\ref{R-matrix}) satisfies the Yang-Baxter equation \begin{eqnarray} R_{0,j}(u_1-u_2)R_{0,l}(u_1-u_3)R_{j,l}(u_2-u_3) =R_{j,l}(u_2-u_3)R_{0,l}(u_1-u_3)R_{0,j}(u_1-u_2).\label{QYB} \end{eqnarray} Using the crossing symmetry (\ref{crosing-unitarity}), we obtain the relations between the transfer matrices $t(u)$ and $\hat t(u)$ \begin{eqnarray}\label{tt} t(u)=-\hat{t}(-u-\eta), \quad \hat{t}(u)=-t(-u-\eta). \end{eqnarray} Meanwhile, $t(u)$ and $\hat t(u)$ have the periodicity \begin{eqnarray} t(u+i\pi)=(-1)^{2N-1}t(u), \quad \hat t(u+i\pi)=(-1)^{2N-1}\hat t(u). \end{eqnarray} From the commutation relation (\ref{Z2-sym}) and the Yang-Baxter equation (\ref{QYB}), one can prove that the transfer matrices with different spectral parameters commute with each other, i.e., \begin{eqnarray} [t(u), t(v)]=[\hat t(u), \hat t(v)] =[t(u), \hat t(v)]=0. \label{t-commu1} \end{eqnarray} Therefore, $t(u)$ and $\hat t(u)$ serve as the generating functionals of the conserved quantities of the system. The Hamiltonian is generated by the transfer matrices as in (\ref{J1J2ham}); hence the model (\ref{Ham1}) is integrable. We note that the transfer matrices $t(u)$ and $\hat t(u)$ have common eigenstates. \section{The exact solution and $t-W$ scheme} \setcounter{equation}{0} Now, we exactly solve the Hamiltonian (\ref{Ham1}). The energy spectrum of the system is determined by the eigenvalues of the transfer matrices $t(u)$ and $\hat t(u)$. From the one-to-one correspondence (\ref{tt}), we know that $t(u)$ and $\hat t(u)$ are not independent, so we only need to diagonalize the transfer matrix $t(u)$. The process of diagonalizing $t(u)$ is as follows. According to the definition (\ref{trans}), $t(u)$ is an operator-valued trigonometric polynomial of $u$ with degree $2N-1$. Thus the eigenvalue of $t(u)$ is a trigonometric polynomial of $u$ with degree $2N-1$, which can be completely determined by $2N$ constraints. Therefore, the next task is to seek these constraints. 
The basic technique is fusion, a significant method with various applications in the representation theory of quantum algebras \cite{Kulish81,Resh83}. The main idea of fusion is that the $R$-matrix degenerates into projection operators at certain special points. Based on this, one can obtain higher-dimensional representations of certain algebraic symmetries. We note that new conserved quantities, including novel Hamiltonians describing interesting interactions, can also be constructed by fusion. The $R$-matrix (\ref{R-matrix}) is a $4\times 4$ matrix. At the point $u=-\eta$, $R_{12}(u)$ degenerates into the one-dimensional antisymmetric projection operator $P_{1,2}^{(-)}$. The accompanying three-dimensional symmetric projection operator is $P^{(+)}_{1,2}=(1+ P_{1,2})/2$. Using fusion, we obtain \begin{eqnarray} t(u)t(u-\eta)=tr_{1,2}\{P^{(-)}_{1,2}T_2(u)T_1(u-\eta)P^{(-)}_{1,2}\}+tr_{1,2}\{P^{(+)}_{1,2}T_2(u)T_1(u-\eta)P^{(+)}_{1,2}\}.\label{ai} \end{eqnarray} During the derivation, we have used the relations \begin{eqnarray} {P}_{1,2}^{(-)}+{P}_{1,2}^{(+)}={\bf id},\quad {P}_{1,2}^{(-)}{P}_{1,2}^{(+)}={P}_{1,2}^{(+)}{P}_{1,2}^{(-)}=0. \end{eqnarray} With the help of the properties of the projection operators, we obtain the following $t-W$ relation \begin{eqnarray} t(u)t(u-\eta)=-d(u+\eta)d(u-\eta)\times{\bf id}+d(u){\bf W}(u),\label{tw} \end{eqnarray} where \begin{eqnarray} d(u)=\frac{\sinh^N(u+a)\sinh^N(u-a)}{\sinh^{2N}\eta},\label{ai1} \end{eqnarray} and ${\bf W}(u)$ is an operator to be determined. Obviously $d(\pm a)=0$, so the $t-W$ relation (\ref{tw}) closes at the points $u=\pm a$, i.e., \begin{eqnarray} t(a)t(a-\eta)=-d(a+\eta)d(a-\eta)=t(-a)t(-a-\eta). \end{eqnarray} The fusion does not break the integrability. Thus $t(u)$ and ${\bf W}(u)$ commute with each other and have common eigenstates.
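The projector identities used in this derivation are elementary and can be confirmed directly; a minimal numerical check, with $P$ the $4\times4$ permutation operator:

```python
import numpy as np

# permutation operator on C^2 (x) C^2 and the two projectors
P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
P_minus = (np.eye(4) - P) / 2   # antisymmetric (singlet) projector
P_plus  = (np.eye(4) + P) / 2   # symmetric (triplet) projector

assert np.allclose(P_minus + P_plus, np.eye(4))   # resolution of the identity
assert np.allclose(P_minus @ P_plus, 0)           # mutual orthogonality
assert np.allclose(P_minus @ P_minus, P_minus)    # idempotence
assert round(np.trace(P_minus)) == 1              # one-dimensional subspace
assert round(np.trace(P_plus)) == 3               # three-dimensional subspace
```

The trace values confirm the stated dimensions of the antisymmetric and symmetric subspaces.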
Assume $|\Psi\rangle$ is a common eigenstate, \begin{eqnarray} t(u)|\Psi\rangle=\Lambda(u)|\Psi\rangle, \quad {\bf W}(u)|\Psi\rangle=W(u)|\Psi\rangle, \end{eqnarray} where $\Lambda(u)$ and $W(u)$ are the eigenvalues of $t(u)$ and ${\bf W}(u)$, respectively. Acting with Eq.(\ref{tw}) on the eigenstate $|\Psi\rangle$, we have \begin{eqnarray} \Lambda(u)\Lambda(u-\eta)=-d(u+\eta)d(u-\eta)+d(u)W(u).\label{t-w} \end{eqnarray} At the points $u=\pm a$, the relation (\ref{t-w}) reduces to \begin{eqnarray} \Lambda(a)\Lambda(a-\eta)=-d(a+\eta)d(a-\eta)=\Lambda(-a)\Lambda(-a-\eta). \end{eqnarray} Because the eigenstate $|\Psi\rangle$ does not depend upon $u$, the eigenvalue $\Lambda(u)$ is a trigonometric polynomial of $u$ of degree $2N-1$. Meanwhile, the eigenvalue should satisfy the periodicity $\Lambda(u+i\pi)=(-1)^{2N-1}\Lambda(u)$. Thus, we parameterize the eigenvalue $\Lambda(u)$ as \begin{eqnarray} \Lambda(u)=\Lambda_0\,\prod_{j=1}^{2N-1}\,\sinh (u-z_j+\eta/2),\label{Zero-points} \end{eqnarray} where $\{z_j-\eta/2|j=1,\cdots,2N-1\}$ are the $2N-1$ zero roots and $\Lambda_0$ is an overall coefficient. From Eqs.(\ref{ai1}) and (\ref{t-w}), we also know that the eigenvalue ${W}(u)$ is a trigonometric polynomial of $u$ of degree $2N$. We parameterize ${W}(u)$ as \begin{eqnarray} W(u)=W_0\sinh^{-2N}\eta\,\prod_{l=1}^{2N}\sinh(u-w_l), \end{eqnarray} where $\{w_j |j=1,\cdots,2N\}$ are the $2N$ zero roots and $W_0$ is a constant. Polynomial analysis shows that (\ref{t-w}) is a polynomial equation in $e^u$ of degree $4N$, and thus gives $4N+1$ independent constraints on the coefficients, which are sufficient to completely determine the $2N-1$ shifted zero roots $\{z_j\}$, the $2N$ zero roots $\{w_l\}$, and the constants $\Lambda_0$ and $W_0$. Since $\Lambda(u)$ is a trigonometric polynomial of $u$ of degree $2N-1$, the leading terms on the right-hand side of Eq.(\ref{t-w}) must vanish.
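The parametrization (\ref{Zero-points}) carries the required quasi-periodicity by construction, since $\sinh(u+i\pi)=-\sinh(u)$ flips the sign of each of the $2N-1$ factors. A quick numerical illustration with arbitrary roots:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
eta = 0.6j
z = rng.normal(size=2*N - 1) + 1j * rng.normal(size=2*N - 1)  # arbitrary roots

def Lam(u, Lam0=1.3):
    """Trial eigenvalue in the parametrization (Zero-points)."""
    return Lam0 * np.prod(np.sinh(u - z + eta/2))

u = 0.37 + 0.21j
# sinh(u + i*pi) = -sinh(u): each of the 2N-1 factors changes sign
assert np.allclose(Lam(u + 1j*np.pi), (-1)**(2*N - 1) * Lam(u))
```

Since the number of factors $2N-1$ is odd, the product indeed picks up the overall factor $(-1)^{2N-1}=-1$, matching the periodicity of $t(u)$.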
Then we have \begin{eqnarray} W_0e^{\pm\sum_{l=1}^{2N} w_l}=1.\label{BA0} \end{eqnarray} Putting $u=\{z_j-\eta/2| j=1,\cdots, 2N-1\}$ in Eq.(\ref{t-w}), we obtain the first set of constraints of zero roots $\{z_j\}$ and $\{w_l\}$ \begin{eqnarray} &&\sinh^N(z_j+\frac\eta2+a)\,\sinh^N(z_j+\frac\eta2-a)\sinh^N(z_j-\frac{3\eta}2+a)\,\sinh^N(z_j-\frac{3\eta}2-a)\nonumber\\ &&=W_0\,\sinh^N (z_j-\frac\eta2+a)\,\sinh^N (z_j-\frac\eta2-a)\prod_{l=1}^{2N}\sinh(z_j-\frac\eta2-w_l),\nonumber\\ && \quad j=1,\cdots, 2N-1.\label{BA1} \end{eqnarray} Putting $u=\{w_l|l=1, \cdots, 2N\}$ in Eq.(\ref{t-w}), we obtain the second set of constraints of zero roots $\{z_j\}$ and $\{w_l\}$ \begin{eqnarray} &&-\sinh^{-4N}\eta\,\sinh^N(w_l+\eta+a)\,\sinh^N(w_l+\eta-a)\sinh^N(w_l-\eta+a)\,\sinh^N(w_l-\eta-a)\nonumber\\ &&=\Lambda_0^2\prod_{j=1}^{2N-1}\sinh(w_l-z_j+\frac\eta2)\sinh(w_l-z_j-\frac\eta2),\quad l=1,\cdots, 2N.\label{BA2} \end{eqnarray} The coefficient $\Lambda_0$ can be determined by putting $u=a$ in Eq.(\ref{t-w}) as \begin{eqnarray} &&\Lambda_0^2\prod_{j=1}^{2N-1}\sinh(a-z_j+\frac\eta2)\sinh(a-z_j-\frac\eta2)\nonumber\\ &&=(-1)^{N-1}\sinh^{-2N}\eta\,\sinh^{N}(2a+\eta)\,\sinh^N(2a-\eta).\label{BA3} \end{eqnarray} Then the $4N+1$ parameters $\Lambda_0$, $W_0$, $\{z_j\}$ and $\{w_l\}$ should satisfy the $4N+1$ BAEs (\ref{BA0})-(\ref{BA3}). According to the construction (\ref{J1J2ham}), the energy spectrum of the Hamiltonian (\ref{Ham1}) can be expressed in terms of the zero roots $\{z_j\}$ as \begin{eqnarray}\label{J1J2redu} &&E=\phi^{1-N}(2a)\sinh\eta \big\{ \Lambda(a-\eta)\frac{\partial \Lambda(u)}{\partial u}\big|_{u=a} + \Lambda(-a-\eta) \frac{\partial \Lambda(u)}{\partial u}\big|_{u=-a}\big\}+E_0\nonumber \\ &&\quad =-\phi(2a)\sinh\eta\sum_{j=1}^{2N-1}\big\{\coth(a-z_j+\eta/2)- \coth(a+z_j-\eta/2)\big\}+E_0. 
\end{eqnarray} \begin{table}[!h] \centering \caption{The solutions of BAEs (\ref{BA0})-(\ref{BA3}) and the energy spectrum of the Hamiltonian (\ref{Ham1}) with $2N=4$, $a=0.2$ and $\eta=0.6i$. Here $E_n$ is the eigenenergy and $n$ is the energy level. Each level is doubly degenerate and $W_0=1$. The energy $E_n$ calculated from the BAEs is exactly the same as that obtained from the numerical exact diagonalization of Hamiltonian (\ref{Ham1}).}\label{c1} { \footnotesize \begin{tabular}{lllll} \hline $ w_1 $ & $ w_2 $ & $ w_3 $ & $ w_4 $ & $ z_1 $ \\ \hline $-1.2826$ & $-0.2473$ & $0.2473$ & $1.2826$ & $-0.3477$ \\ $-1.0902$ & $-0.0812$ & $0.5857-0.9540i$ & $0.5857+0.9540i$ & $-0.2913$ \\ $-0.5857-0.9540i$ & $-0.5857+0.9540i$ & $0.0812$ & $1.0902$ & $-1.8173$\\ $-0.7051$ & $0.0000-1.0960i$ & $0.0000+1.0960i$ & $0.7051$ & $-0.2105$ \\ $-0.3577-0.8792i$ & $-0.3577+0.8792i$ & $0.3577-0.8792i$ & $0.3577+0.8792i$ & $-0.8076-1.5708i$ \\ $-0.8868$ & $0.1556-0.8924i$ & $0.1556+0.8924i$ & $0.5756$ & $-0.2332$ \\ $-0.5756$ & $-0.1556-0.8924i$ & $-0.1556+0.8924i$ & $0.8868$ & $-0.1899-0.6221i$ \\ $-0.1910-0.8162i$ & $-0.1910+0.8162i$ & $0.1910-0.8162i$ & $0.1910+0.8162i$ & $0.0000-1.5708i$ \\ \hline $ z_2 $ & $ z_3 $ & $ \Lambda_{0}^2 $ & $ E_n $ & $ n $\\ \hline $0.0000$ & $0.3477$ & $-399.7321$ & $-5.2630$ & $1$ \\ $0.0728$ & $1.8173$ & $-10.8865$ & $-2.9735$ &$2$\\ $-0.0728$ & $0.2913$ & $-10.8865$ & $-2.9735$ & $3$\\ $0.0000-1.5708i$ & $0.2105$ & $105.6802$ & $-2.7257$ & $4$\\ $0.0000$ & $0.8076-1.5708i$ & $-5.9241$ & $0.9387$ & $5$\\ $0.1899-0.6221i$ & $0.1899+0.6221i$ & $-127.8985$ & $2.9735$ & $6$\\ $-0.1899+0.6221i$ & $0.2332$ & $-127.8983$ & $2.9735$ & $7$\\ $0.0000-0.6474i$ & $0.0000+0.6474i$ & $22.4078$ & $7.0500$ &$8$\\ \hline \end{tabular}} \end{table} \begin{table}[!h] \centering \caption{The solutions of BAEs (\ref{BA0})-(\ref{BA3}) and the energy spectrum of the Hamiltonian (\ref{Ham1}) with $2N=4$, $a=0.2i$ and $\eta=0.6$. 
Here $E_n$ is the eigenenergy and $n$ is the energy level. Each level is doubly degenerate and $W_0=1$. The energy $E_n$ calculated from the BAEs is exactly the same as that obtained from the numerical exact diagonalization of Hamiltonian (\ref{Ham1}).}\label{c2} {\footnotesize \begin{tabular}{lllll} \hline $ w_1 $ & $ w_2 $ & $ w_3 $ & $ w_4 $ & $ z_1 $ \\ \hline $-1.6145-1.5708i$ & $0.0000-0.2869i$ & $0.0000+0.2869i$ & $1.6145+1.5708i$ & $0.0000-0.3690i$ \\ $-1.3531+0.6671i$ & $0.0000-1.2799i$ & $0.0000-0.0542i$ & $1.3531+0.6671i$ & $0.0000-0.2868i$ \\ $-1.3531-0.6671i$ & $0.0000+0.0542i$ & $0.0000+1.2799i$ & $1.3531-0.6671i$ & $0.0000-0.7901i$ \\ $-1.4230$ & $0.0000-0.5427i$ & $0.0000+0.5427i$ & $1.4230$ & $0.0000-1.5708i$ \\ $-0.8913-0.3487i$ & $-0.8913+0.3487i$ & $0.8913-0.3487i$ & $0.8913+0.3487i$ & $-0.9699-1.5708i$ \\ $-0.8929-0.1418i$ & $-0.2682-1.4290i$ & $0.2682+1.7126i$ & $0.8929-0.1418i$ & $-0.6190-0.2120i$ \\ $-0.8929+0.1418i$ & $-0.2682-1.7126i$ & $0.2682+1.4290i$ & $0.8929+0.1418i$ & $-0.6190+0.2120i$ \\ $-0.8693-0.2039i$ & $-0.8693+0.2039i$ & $0.8693-0.2039i$ & $0.8693+0.2039i$ & $-0.6253$ \\ \hline $ z_2 $ & $ z_3 $ & $ \Lambda_{0}^2 $ & $ E_n $ & $ n $\\ \hline $0.0000$ & $0.0000+0.3690i$ & $308.1505$ & $-4.6408$ & $1$\\ $0.0000+0.0800i$ & $0.0000+0.7901i$ & $140.6619$ & $-3.3343$ & $2$ \\ $0.0000-0.0800i$ & $0.0000+0.2868i$ & $140.6615$ & $-3.3343$ & $3$ \\ $0.0000-0.1996i$ & $0.0000+0.1996i$ & $79.2054$ & $-3.1881$ & $4$ \\ $0.0000$ & $0.9699-1.5708i$ & $2.6434$ & $0.9566$ & $5$ \\ $0.0000+0.2360i$ & $0.6190-0.2120i$ & $59.4799$ & $3.3343$ & $6$ \\ $0.0000-0.2360i$ & $0.6190+0.2120i$ & $59.4799$ & $3.3343$ & $7$ \\ $0.0000-1.5708i$ & $0.6253$ & $10.2842$ & $6.8723$ & $8$\\ \hline \end{tabular}} \end{table} Now, we check the above analytical results numerically. We first solve the BAEs (\ref{BA0})-(\ref{BA3}) and obtain the values of zero roots $\{z_j\}$. 
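One immediate consistency check can be read off from Eq.(\ref{BA0}): with $W_0=1$, it requires $\sum_l w_l\in 2\pi i\,\mathbb{Z}$, and every $w$-root configuration listed in Table \ref{c1} indeed sums to zero within the quoted four-decimal precision:

```python
# w-roots from Table 1 (2N = 4, a = 0.2, eta = 0.6i), quoted to 4 decimals
w_rows = [
    [-1.2826, -0.2473, 0.2473, 1.2826],
    [-1.0902, -0.0812, 0.5857 - 0.9540j, 0.5857 + 0.9540j],
    [-0.5857 - 0.9540j, -0.5857 + 0.9540j, 0.0812, 1.0902],
    [-0.7051, -1.0960j, 1.0960j, 0.7051],
    [-0.3577 - 0.8792j, -0.3577 + 0.8792j, 0.3577 - 0.8792j, 0.3577 + 0.8792j],
    [-0.8868, 0.1556 - 0.8924j, 0.1556 + 0.8924j, 0.5756],
    [-0.5756, -0.1556 - 0.8924j, -0.1556 + 0.8924j, 0.8868],
    [-0.1910 - 0.8162j, -0.1910 + 0.8162j, 0.1910 - 0.8162j, 0.1910 + 0.8162j],
]
for row in w_rows:
    # Eq.(BA0) with W_0 = 1 forces exp(sum_l w_l) = 1; here the sum is zero
    assert abs(sum(row)) < 2e-3
```

The tolerance of $2\times10^{-3}$ simply reflects the rounding of the tabulated roots.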
Substituting these values into Eq.(\ref{J1J2redu}), we obtain the eigenenergies of the Hamiltonian (\ref{Ham1}). The results are listed in Tables \ref{c1} and \ref{c2}. After that, we numerically diagonalize the Hamiltonian (\ref{Ham1}) with the same model parameters. We find that the eigenvalues obtained by solving the BAEs are exactly the same as those obtained by the exact numerical diagonalization. Therefore, the expression (\ref{J1J2redu}) gives the complete spectrum of the system. \section{Conserved momentum and charge operators} \setcounter{equation}{0} In this section, we discuss the conserved momentum and charge operators. Although the antiperiodic boundary condition breaks the translational invariance and the $U(1)$ charge is not conserved in the present system, we can still construct the conserved momentum and charge in the topological manifold of twisted spin spaces. Define the shift operator $U$ as \begin{eqnarray}\label{U} U=\phi^{-N}(2a)t(a)t(-a). \end{eqnarray} With the help of transfer matrix $t(u)$ at the points of $u=\pm a$, we obtain \begin{eqnarray}\label{tata} &&t(a)t(-a)=\sigma_{2N}^xR_{1,2N}(2a)P_{2,2N}\cdots R_{2N-1,2N}(2a)R_{1,2}(-2a)P_{1,3}\cdots R_{1,2N}(-2a)\sigma_1^x \nonumber\\ &&\hspace{1.8cm}=\phi^{N}(2a)\sigma_{2N}^xP_{2,2N}P_{4,2N}\cdots P_{2N-2,2N} P_{1,3}P_{1,5}\cdots P_{1,2N-1}\sigma_1^x. \end{eqnarray} By using the properties of permutation operator, we find that $U^{2N}=1$. Accordingly, we construct the topological momentum operator as $\hat K=-i\ln U$. Then the eigenvalues of $\hat K$ are \begin{eqnarray} k=\frac{\pi l}N,{~~} l=-N, -N+1,\cdots, N-1.\label{perio} \end{eqnarray} Substituting the parametrization (\ref{Zero-points}) of eigenvalue of $t(u)$ into Eq.(\ref{U}), we obtain that the eigenvalue $k$ can also be expressed by the zero roots $\{z_j\}$ as \begin{eqnarray} k=-i\sum_{j=1}^{2N-1}\ln\frac{\sinh(a+z_j-\frac\eta2)}{\sinh(a-z_j-\frac\eta2)}{~~}mod\,\{2\pi\}. 
\label{ioppoi} \end{eqnarray} The transfer matrix $t(u)$ is the generating functional of the system. Expanding $t(u)$ with respect to $u$, all the expansion coefficients commute with each other and can be regarded as conserved quantities. Here we consider the leading term. From the asymptotic behavior of $t(u)$, we define the conserved charge operator $Q$ as \begin{eqnarray} Q=\frac{(2\sinh\eta)^{2N-1}}{4e^{{(2N-1)\eta}/2}}\lim_{u\to\infty}e^{-(2N-1)u}t(u)=\frac14(Q^++Q^-),\label{ai13} \end{eqnarray} where \begin{eqnarray} Q^\pm=\sum_{j=1}^{2N} e^{(-1)^ja}e^{\mp\frac\eta2\sum_{k=1}^{j-1}\sigma_k^z}\sigma_j^\pm e^{\pm\frac\eta2\sum_{k=j+1}^{2N}\sigma_k^z}.\label{ai3} \end{eqnarray} Correspondingly, the asymptotic behavior of $\Lambda(u)$ gives the eigenvalue of the conserved charge $Q$ as \begin{eqnarray} q=\frac14\sinh^{2N-1}\eta\,\Lambda_0\,e^{-\sum_{k=1}^{2N-1}z_k}. \end{eqnarray} Some remarks are in order. In the limit $a\to 0$, the model (\ref{Ham1}) degenerates to the antiperiodic XXZ spin chain and the factor $e^{(-1)^ja}$ in Eq.(\ref{ai3}) tends to one. The conserved charge $Q$ (\ref{ai13}) then recovers the one given in Ref.\cite{Qiao2020}. Moreover, when $a\to 0$ and $\eta\to 0$, the model (\ref{Ham1}) degenerates to the antiperiodic isotropic spin chain and the $U(1)$ symmetry is recovered. In this case, the conserved charge reads $Q=\frac14\sum_{j=1}^{2N} \sigma_j^x$, which is proportional to the total spin along the $x$-direction. \section{The ground state} \setcounter{equation}{0} Now, we study the ground state of the system (\ref{Ham1}). For simplicity, we consider the case where $a$ is real and $\eta$ is imaginary. By using the crossing symmetry (\ref{crosing-unitarity}), the transfer matrix $t(u)$ can be rewritten as \begin{eqnarray}\label{ai111} &&t(u)=(-1)^{2N-1}tr_0\{\sigma^x_0 R_{0,1}^{t_{0}}(-u-a-\eta) R_{0,2}^{t_{0}}(-u+a-\eta) \cdots \nonumber \\ &&\qquad\quad \times R_{0,2N-1}^{t_{0}}(-u-a-\eta) R_{0,2N}^{t_{0}}(-u+a-\eta)\}.
\end{eqnarray} Because $\eta$ is imaginary, the $R$-matrix (\ref{R-matrix}) satisfies \begin{eqnarray}\label{R-R-relation} R_{0,j}^{\ast t_{j}}(-u-\eta)=R_{0,j}^{t_{0}}(u^{\ast}-\eta). \end{eqnarray} Substituting Eq.(\ref{R-R-relation}) into (\ref{ai111}) and taking the Hermitian conjugate, we obtain \begin{eqnarray}\label{t-t-relation} t^{\dagger}(u)=-t(u^{\ast}-\eta). \end{eqnarray} By using the $t-W$ relation (\ref{tw}), we have \begin{eqnarray} \Lambda(u)=- \Lambda^*(u^*-\eta), \quad W(u)=W^*(u^*), \end{eqnarray} which implies that the zero roots $\{z_j\}$ and $\{w_l\}$ either take real values or form conjugate pairs. These patterns allow us to calculate the physical quantities in the thermodynamic limit. At the ground state, all the zero roots $\{z_j\}$ and $\{w_l\}$ are real and distributed symmetrically around the origin. The numerical check is shown in Fig.\ref{fig-e1}(a). Taking the complex conjugate of the BAEs (\ref{BA1}), dividing it by Eq.(\ref{BA1}) and taking the logarithm of the resulting equation, we obtain \begin{eqnarray} &&2\alpha_1(z_j+a)+2\alpha_1(z_j-a)-\alpha_3(z_j+a)-\alpha_3(z_j-a)=\frac{4\pi I_j}N-\frac1N\sum_{l=1}^{2N}\alpha_1(z_j-w_l),\nonumber \\ && \qquad j=1,\cdots, 2N-1, \label{bae1} \end{eqnarray} where $\alpha_n(x)=-i\ln\sinh(x-n\eta/2)+i\ln\sinh(x+n\eta/2)$ and $I_j$ is the quantum number characterizing the ground state \begin{eqnarray*} \{I_j\}=\left\{-N+1,-N+2,\cdots,N-2, N-1\right\}. \end{eqnarray*} Multiplying the complex conjugate of the BAEs (\ref{BA1}) by (\ref{BA1}) and taking the logarithm of the resulting equation, we have \begin{eqnarray} \beta_3(z_j+a)+\beta_3(z_j-a)=\frac1N\sum_{l=1}^{2N}\beta_1(z_j-w_l),\quad j=1,\cdots, 2N-1,\label{bae2} \end{eqnarray} where $\beta_n(x)=\ln\sinh(x-n\eta/2)+\ln\sinh(x+n\eta/2)$. In the thermodynamic limit $N \to\infty$, the zero roots $z_j$ and $w_l$ in Eqs.(\ref{bae1})-(\ref{bae2}) become continuous variables $z$ and $w$, respectively, and the associated functions become continuous.
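The conjugate-divide and conjugate-multiply manipulations work because, for real $x$ and imaginary $\eta$, $\sinh(x-n\eta/2)$ and $\sinh(x+n\eta/2)$ are complex conjugates: their quotient is a pure phase, making $\alpha_n$ real, while their product is a squared modulus, making $\beta_n$ real. A quick numerical check:

```python
import numpy as np

gamma = 0.6
eta = 1j * gamma   # imaginary eta, as assumed in this section

def alpha(n, x):
    return -1j * np.log(np.sinh(x - n*eta/2)) + 1j * np.log(np.sinh(x + n*eta/2))

def beta(n, x):
    return np.log(np.sinh(x - n*eta/2)) + np.log(np.sinh(x + n*eta/2))

for x in (0.35, -1.2, 2.7):
    # sinh(x - i*n*gamma/2) = conj(sinh(x + i*n*gamma/2)) for real x
    for n in (1, 3):
        assert abs(alpha(n, x).imag) < 1e-12   # alpha_n is real
        assert abs(beta(n, x).imag) < 1e-12    # beta_n is real
```

This is exactly why Eqs.(\ref{bae1}) and (\ref{bae2}) are real equations for the real ground-state roots.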
Taking the derivative of Eqs.(\ref{bae1})-(\ref{bae2}), we have \begin{eqnarray} &&2a_1(z+a)+2a_1(z-a)-a_3(z+a)-a_3(z-a)=4\rho(z)+4\rho^h(z)-2a_1*\sigma(z),\label{z1}\\ &&b_3(z+a)+b_3(z-a)= 2b_1*\sigma(z),\label{z2} \end{eqnarray} where $\gamma=-i\eta$ is real, $a_n(x)=\sin(n\gamma)/[\pi(\cosh2x-\cos n\gamma)]$, $b_n(x)=\sinh(2x)/[\pi(\cosh2x-\cos n\gamma)]$, $\rho(z)$, $\sigma(w)$ and $\rho^h(z)$ are the densities of $z$-roots, $w$-roots and holes on the $z$-axis, respectively, and the notation $*$ denotes the convolution. The reason a density of holes $\rho^h(z)$ exists at the ground state is as follows. With the antiperiodic boundary condition, the total number of $z$-roots is $2N-1$ while there are $2N$ possible occupations in the Brillouin zone. At the ground state, the holes should be put at infinity in the spectral space to minimize the energy. We note that both $\rho(z)$ and $\rho^h(z)$ are distributed symmetrically around the origin. Thus we have $2N\int_L^\infty\rho^h(z)dz=1/2$ and $2N\int_{-\infty}^{-L}\rho^h(z)dz=1/2$, where $L\to\infty$. This distribution implies that the two half-holes correspond to two zero modes, whose energies vanish in the thermodynamic limit. Thus the ground state of the system (\ref{Ham1}) possesses two zero modes carrying zero energy, consistent with the double degeneracy. Taking the Fourier transformation of Eqs.(\ref{z1})-(\ref{z2}), we obtain \begin{eqnarray} \rho(z)+\rho^h(z)=\frac{\sin\frac{\pi\gamma}{2\pi-2\gamma}}{\pi-\gamma} \bigg \{ \frac{\cosh\frac{\pi z+\pi a}{\pi-\gamma}}{\cosh\frac{2\pi z+2\pi a}{\pi-\gamma} -\cos\frac{\pi\gamma}{\pi-\gamma}}+\frac{\cosh\frac{\pi z-\pi a}{\pi-\gamma}}{\cosh\frac{2\pi z-2\pi a}{\pi-\gamma} -\cos\frac{\pi\gamma}{\pi-\gamma}} \bigg \}.\label{z-density} \end{eqnarray} Eq.\eqref{z-density} gives the density of $z$-roots at the ground state of the system (\ref{Ham1}) in the thermodynamic limit. Based on this, the physical quantities can be calculated analytically.
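As a check on Eq.(\ref{z-density}), each of the two shifted terms integrates to $1/2$, so $\int[\rho(z)+\rho^h(z)]dz=1$: the $2N-1$ roots plus the single hole fill exactly one mode per site. Direct quadrature confirms this:

```python
import numpy as np
from scipy.integrate import quad

gamma, a = 0.6, 0.2
pref = np.sin(np.pi * gamma / (2*np.pi - 2*gamma)) / (np.pi - gamma)
cos0 = np.cos(np.pi * gamma / (np.pi - gamma))

def term(z, shift):
    x = np.pi * (z + shift) / (np.pi - gamma)
    return pref * np.cosh(x) / (np.cosh(2*x) - cos0)

def density(z):
    """rho(z) + rho^h(z) from Eq.(z-density)."""
    return term(z, a) + term(z, -a)

# the density decays exponentially, so [-50, 50] captures the tails
total, _ = quad(density, -50, 50)
assert abs(total - 1.0) < 1e-6
```

The same quadrature with other admissible $(\gamma, a)$ values gives unity as well, since the normalization is independent of the model parameters.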
Substituting Eq.\eqref{z-density} into \eqref{ioppoi}, we find that the momentum at the ground state is zero. Substituting \eqref{z-density} into (\ref{J1J2redu}), we obtain the ground state energy. Dividing it by the system size, we obtain the ground state energy density as \begin{eqnarray} &&e_g=\frac{\cos(2\gamma)-\cosh(4a)}{\sin\gamma} \int \frac{\cos^2(2a\tau)\cosh[(\pi-2\gamma)\tau] \tanh[(\pi-\gamma)\tau]} {\sinh(\pi\tau)}d\tau \nonumber\\ &&\qquad+\frac{\cos\gamma[\cosh^2(2a)-\cos(2\gamma)]}{2\sin^2\gamma}.\label{ttaa} \end{eqnarray} The ground state energy density (\ref{ttaa}) versus the model parameters $a$ and $\gamma$ is shown in Fig.\ref{fig-e1}(b) and (c), respectively. \begin{figure}[t] \begin{center} \includegraphics[width=5cm]{fig1a.eps} \includegraphics[width=5cm]{fig1b.eps} \includegraphics[width=5cm]{fig1c.eps} \caption{(a) The distributions of zero roots $\{z_j\}$ and $\{w_l\}$ at the ground states obtained by solving the BAEs (\ref{BA0})-(\ref{BA3}) with $2N=8$, $a=0.2$ and $\gamma=0.6$. The ground state energy densities (\ref{ttaa}) versus the model parameters (b) $a$ with $\gamma=0.6$ and (c) $\gamma$ with $a=0.6$.}\label{fig-e1} \end{center} \end{figure} \section{Elementary excitation} \setcounter{equation}{0} Next, we study the elementary excitations of the system. The simplest elementary excitation is described by $(2N-2)$ real $z$-roots and one single complex root $\lambda-i \pi/2$, where $\lambda$ is a real parameter. Accordingly, $2N-2$ of the $w$-roots remain on the real axis while two $w$-roots form a conjugate pair $\lambda_1\pm i m\gamma/2$, where $\lambda_1$ and $m$ are real parameters. Due to the constraints of the BAEs (\ref{BA0})-(\ref{BA3}), the values of $\lambda$, $\lambda_1$ and $m$ are not independent. The distributions of zero roots $\{z_j\}$ and $\{w_l\}$ for such an excitation with $2N=8$ are shown in Fig.\ref{fig-ee1}(a).
In the thermodynamic limit, substituting the two sets of $z$- and $w$-roots into Eqs.(\ref{z1})-(\ref{z2}), we obtain the constraints of $\lambda$, $\lambda_1$ and $m$ as \begin{eqnarray}\label{13} \lambda=\lambda_1,\quad m=\frac{\pi}{\gamma}-1. \end{eqnarray} We see that the parameter $\lambda$ is free while $m$ is fixed. The deviation of density of $z$-roots from that at the ground state is \begin{eqnarray} \delta \rho_1(z)=\frac{1}{N(\gamma-\pi)}\frac{\cosh\frac{\pi (z-\lambda)}{\pi-\gamma} \cos\frac{\pi\gamma}{2\pi-2\gamma}}{\cosh\frac{2\pi (z-\lambda)}{\pi-\gamma}+\cos\frac{\pi\gamma}{\pi-\gamma}} +\frac1{2N}\delta\big(z-\lambda+\frac{i\pi}2\big). \label{poiiop} \end{eqnarray} Based on it, we obtain the excitation energy \begin{eqnarray}\label{e1} &&\delta e_1 = \frac{\cosh(4a)-\cos(2\gamma)}{\sin\gamma}\int \frac{\cos(2a\tau)\cos(2\lambda\tau)\cosh(\gamma\tau) \tanh[(\pi-\gamma)\tau]} {\sinh(\pi\tau)}d\tau \nonumber \\ &&\qquad + \frac{\cosh(4a)-\cos(2\gamma)}{2\sin\gamma} \bigg[\frac{\sin\gamma} {\cosh(2\lambda+2a)+\cos\gamma}+\frac{\sin\gamma}{\cosh(2\lambda-2a)+\cos\gamma}\bigg], \end{eqnarray} which is a function of $\lambda$. The excitation energies versus the different values of model parameters $a$ and $\gamma$ with $\lambda=0.2$ are shown in Fig.\ref{fig-ee1}(b). Substituting Eq.(\ref{poiiop}) into the thermodynamic limit expression of (\ref{ioppoi}), we obtain the corresponding momentum as \begin{eqnarray}\label{k1} &&k_1 = \frac{2i}{\pi-\gamma}\int\frac{\cosh\frac{\pi (z-\lambda)}{\pi-\gamma} \cos\frac{\pi\gamma}{2\pi-2\gamma}}{\cosh\frac{2\pi (z-\lambda)}{\pi-\gamma}+\cos\frac{\pi\gamma}{\pi-\gamma}}\ln\frac{\sinh(a+z-\frac{i\gamma}2)}{\sinh(a-z-\frac{i\gamma}2)}dz\nonumber\\ &&\qquad -i\ln\frac{\sinh(a+\lambda-\frac{i\pi}2-\frac{i\gamma}2)}{\sinh(a-\lambda+\frac{i\pi}2-\frac{i\gamma}2)} {~~}mod\,\{2\pi\}. 
\end{eqnarray} \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{fig2a.eps} \includegraphics[width=7cm]{fig2b.eps}\\ \includegraphics[width=7cm]{fig2c.eps} \includegraphics[width=7cm]{fig2d.eps} \caption{(a) The distribution of $z$- and $w$-roots of type I elementary excitations with $2N=8$, $a=0.2$ and $\gamma=0.6$. (b) The excitation energies (\ref{e1}) versus the model parameter $a$ with $\lambda=0.2$ and $\gamma=0.2, 0.4, 1$. (c) The dispersion relation of the type I elementary excitation with $\gamma=0.6$ and $a=0, 0.6, 0.8$. (d) The dispersion relation of the type I elementary excitation with $a=0.6$ and $\gamma=0.2, 0.4, 1.5$. Note that the data in (b)-(d) are results in the thermodynamic limit.}\label{fig-ee1} \end{center} \end{figure} Since the excitation energy $\delta e_1$ and the quasi-momentum $k_1$ depend on the single parameter $\lambda$, the dispersion relation of this kind of elementary excitation can be derived from Eqs.(\ref{e1})-(\ref{k1}); the results with different model parameters $a$ and $\gamma$ are shown in Fig.\ref{fig-ee1}(c) and (d), respectively. From Fig.\ref{fig-ee1}(c), we see that if the model parameter $a$ is large, the excitation spectrum has two peaks whose corresponding momenta are not $\pm\pi$. As $a$ decreases, the peaks converge to $\pm\pi$. When $a=0$, i.e., when the NNN and three-spin chirality interactions vanish, the excitation spectrum reduces to that of the antiperiodic XXZ spin chain. Fig.\ref{fig-ee1}(d) shows the excitation spectrum for different values of the anisotropic parameter $\gamma$. We also see that $\delta e_1 \propto k_1$ for small momentum, which indicates a linear, spin-wave-like excitation. The second kind of elementary excitation is described by $2N-3$ real $z$-roots and one conjugate pair $\lambda\pm i \gamma$.
Using a similar procedure, in the thermodynamic limit, the constraints of the BAEs (\ref{z1})-(\ref{z2}) give that the $w$-roots should be $2N-2$ real values and one conjugate pair $\lambda\pm3 i \gamma/2$. The distribution of $z$- and $w$-roots with $2N=8$ is shown in Fig.\ref{fig-ee2}(a). The energy carried by this excitation is \begin{eqnarray}\label{e2} &&\delta e_2 = \frac{\cosh(4a)-\cos(2\gamma)}{\sin\gamma} \int \frac{\cos(2a\tau)\cos(2\lambda\tau)\cosh[(\pi-3\gamma)\tau] \tanh[(\pi-\gamma)\tau]} {\sinh(\pi\tau)}d\tau \nonumber \\ &&\qquad\;\;+ \frac{\cosh(4a)-\cos(2\gamma)}{2\sin\gamma} \bigg[\frac{2\sin\gamma} {\cosh(2\lambda+2a)-\cos\gamma}+\frac{2\sin\gamma}{\cosh(2\lambda-2a)-\cos\gamma}\nonumber\\ &&\qquad\;\;- \frac{\sin(3\gamma)}{\cosh(2\lambda+2a)-\cos(3\gamma)} -\frac{\sin(3\gamma)}{\cosh(2\lambda-2a)-\cos(3\gamma)}\bigg]. \end{eqnarray} \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{fig3a.eps} \includegraphics[width=7cm]{fig3b.eps}\\ \includegraphics[width=7cm]{fig3c.eps} \includegraphics[width=7cm]{fig3d.eps} \caption{(a) The distribution of $z$- and $w$-roots for the type II elementary excitation with $2N=8$, $a=0.2$ and $\gamma=0.6$. (b) The excitation energies (\ref{e2}) versus the model parameter $a$ with $\lambda=0.2$ and $\gamma=0.4, 0.6, 1$. (c) The dispersion relation of the type II elementary excitation with $\gamma=0.6$ and $a=0, 0.6, 0.8$. (d) The dispersion relation of the type II elementary excitation with $a=0.6$ and $\gamma=0.4, 0.6, 1$. Note that the data in (b)-(d) are results in the thermodynamic limit.}\label{fig-ee2} \end{center} \end{figure} The excitation energies for different values of the model parameters $a$ and $\gamma$ are shown in Fig.\ref{fig-ee2}(b).
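The curves in Fig.\ref{fig-ee2}(b) can be reproduced by direct quadrature of Eq.(\ref{e2}). The sketch below assumes the $\tau$-integral runs over $(0,\infty)$, since the limits are left implicit in the text, and checks that $\delta e_2$ is even in $\lambda$, as the symmetry of the formula requires:

```python
import numpy as np
from scipy.integrate import quad

gamma, a = 0.6, 0.2
C = (np.cosh(4 * a) - np.cos(2 * gamma)) / np.sin(gamma)

def delta_e2(lam):
    """Type II excitation energy, Eq.(e2); tau-integral assumed over (0, inf)."""
    integrand = lambda t: (np.cos(2 * a * t) * np.cos(2 * lam * t)
                           * np.cosh((np.pi - 3 * gamma) * t)
                           * np.tanh((np.pi - gamma) * t) / np.sinh(np.pi * t))
    # the integrand decays like exp(-3*gamma*t), so [0, 50] captures the tail
    integral, _ = quad(integrand, 0, 50)
    bracket = sum(2 * np.sin(gamma) / (np.cosh(2 * lam + 2 * s * a) - np.cos(gamma))
                  - np.sin(3 * gamma) / (np.cosh(2 * lam + 2 * s * a) - np.cos(3 * gamma))
                  for s in (1, -1))
    return C * integral + C * bracket / 2

# the formula is manifestly even in the rapidity lambda
assert abs(delta_e2(0.3) - delta_e2(-0.3)) < 1e-8
assert np.isfinite(delta_e2(0.0))
```

Evenness follows because $\cos(2\lambda\tau)$ is even and $\lambda\to-\lambda$ merely swaps the $\cosh(2\lambda\pm2a)$ terms in the bracket.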
The associated quasi-momentum reads \begin{eqnarray}\label{k2} &&k_2 =\frac{2i}{\pi-\gamma}\int\frac{\cosh\frac{\pi (z-\lambda)}{\pi-\gamma} \cos\frac{\pi(\pi-3\gamma)}{2\pi-2\gamma}}{\cosh\frac{2\pi (z-\lambda)}{\pi-\gamma} +\cos\frac{\pi(\pi-3\gamma)}{\pi-\gamma}}\ln\frac{\sinh(a+z-\frac{i\gamma}2)}{\sinh(a-z-\frac{i\gamma}2)}dz\nonumber\\ &&\qquad +i\ln\frac{\sinh(a+\lambda-\frac{i\gamma}2)\sinh(a-\lambda+\frac{i\gamma}2)\sinh(a-\lambda-\frac{i3\gamma}2)} {\sinh(a-\lambda-\frac{i\gamma}2)\sinh(a+\lambda+\frac{i\gamma}2)\sinh(a+\lambda-\frac{i3\gamma}2)} {~~}mod\,\{2\pi\}. \end{eqnarray} Based on Eqs.(\ref{e2}) and (\ref{k2}), the dispersion relations for different values of the model parameters $a$ and $\gamma$ are shown in Fig.\ref{fig-ee2}(c) and (d), respectively. Comparing them, we find that the contribution of the parameter $a$ to the excitation energy is similar to that of the anisotropic parameter $\gamma$. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm]{fig4a.eps} \includegraphics[width=7cm]{fig4b.eps}\\ \includegraphics[width=7cm]{fig4c.eps} \includegraphics[width=7cm]{fig4d.eps} \caption{(a) The distribution of $z$- and $w$-roots for the type III elementary excitation with $2N=8$, $n=3$, $a=0.2$ and $\gamma=0.6$. (b) The excitation energies (\ref{e03}) versus the model parameter $a$ with $\lambda=0.2$ and $\gamma=0.2, 0.4, 0.8$. (c) The type III excitation spectrum with $n=5$, $\gamma=0.6$ and $a=0, 0.6, 0.8$. (d) The type III excitation spectrum with $n=5$, $a=0.6$ and $\gamma=0.2, 0.4, 0.8$. Note that the data in (b)-(d) are results in the thermodynamic limit.}\label{fig-ee3} \end{center} \end{figure} We now turn to the more general excitation determined by $2N-3$ real $z$-roots and one conjugate pair $\lambda \pm ni\gamma/2$ with $n\geq3$. Accordingly, the related $w$-roots should be $2N-4$ real solutions and two conjugate pairs of the form $\lambda\pm(n+1)i\gamma/2$ and $\lambda\pm(n-1)i\gamma/2$.
The roots distribution for $2N=8$ and $n=3$ is shown in Fig.\ref{fig-ee3}(a). Following the same procedure as above, we obtain the type III excitation energy as \begin{eqnarray} &&\delta e_3 = \frac{\cosh(4a)-\cos(2\gamma)}{\sin\gamma} \int \frac{\cos(2a\tau)\cos(2\lambda\tau)f(\tau) \tanh[(\pi-\gamma)\tau]} {\sinh(\pi\tau)}d\tau \nonumber \\ &&\qquad\;\;+ \frac{\cosh(4a)-\cos(2\gamma)}{2\sin\gamma} \bigg[\frac{\sin(n-1)\gamma} {\cosh(2\lambda+2a)-\cos(n-1)\gamma}\nonumber\\ &&\qquad\;\;+\frac{\sin(n-1)\gamma}{\cosh(2\lambda-2a)-\cos(n-1)\gamma} - \frac{\sin(n+1)\gamma}{\cosh(2\lambda+2a)-\cos(n+1)\gamma}\nonumber\\ &&\qquad\;\;-\frac{\sin(n+1)\gamma}{\cosh(2\lambda-2a)-\cos(n+1)\gamma}\bigg], \label{e03} \end{eqnarray} where $f(\tau)=\cosh(y_{n-1}\tau)+\cosh(y_{n+1}\tau)$, $y_{n}=\pi-2\pi\delta_{n}$, and $\delta_n =n\gamma/(2\pi)-\lfloor n\gamma/(2\pi)\rfloor$ is the fractional part of $n\gamma/(2\pi)$. The excitation energies for different values of the model parameters $a$ and $\gamma$ are shown in Fig.\ref{fig-ee3}(b). The momentum can be calculated similarly as \begin{eqnarray}\label{k3} &&\hspace{-1.7cm}k_3 = \frac{2i}{\pi-\gamma}\int\bigg\{ \frac{\cosh[x(z-\lambda)] \cos(xy_{n+1}/2)}{\cosh[2x(z-\lambda)] +\cos(xy_{n+1})}+\frac{\cosh[x(z-\lambda)] \cos(xy_{n-1}/2)}{\cosh[2x(z-\lambda)] +\cos(xy_{n-1})} \bigg \} \nonumber\\ &&\hspace{-1cm} \times \ln\frac{\sinh(a+z-\frac{i\gamma}2)}{\sinh(a-z-\frac{i\gamma}2)}dz -i\ln\frac{\sinh(a+\lambda+\frac{n-1}2 i\gamma)\sinh(a+\lambda-\frac{n+1}2 i\gamma)} {\sinh(a-\lambda+\frac{n-1}2 i\gamma)\sinh(a-\lambda-\frac{n+1}2 i\gamma)} {~~}mod\,\{2\pi\}, \end{eqnarray} where $x=\pi/(\pi-\gamma)$. The dispersion relations for different values of the model parameters $a$ and $\gamma$ are shown in Fig.\ref{fig-ee3}(c) and (d), respectively. From them, we find that the nonlinearity of the excitations becomes more pronounced as the model parameters $a$ and $\gamma$ increase.
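Since $\delta_n\in[0,1)$, the exponents satisfy $|y_n|\le\pi$, which is exactly what makes the $\tau$-integral in (\ref{e03}) convergent: $\cosh(y_n\tau)/\sinh(\pi\tau)$ decays whenever $n\gamma/(2\pi)$ is not an integer. A small numerical check of the helper $y_n$, also confirming that $y_3=\pi-3\gamma$ for $3\gamma<2\pi$, the factor appearing in Eq.(\ref{e2}):

```python
import numpy as np

def y(n, gamma):
    """y_n = pi - 2*pi*delta_n, with delta_n the fractional part of n*gamma/(2*pi)."""
    delta_n = (n * gamma / (2 * np.pi)) % 1.0
    return np.pi - 2 * np.pi * delta_n

# |y_n| <= pi for every n and gamma, so cosh(y_n*tau)/sinh(pi*tau) decays
# and the tau-integral in (e03) converges whenever delta_n != 0
for gamma in (0.2, 0.4, 0.8, 1.5):
    for n in range(1, 12):
        assert abs(y(n, gamma)) <= np.pi + 1e-12

# consistency with the type II formula: y_3 = pi - 3*gamma when 3*gamma < 2*pi,
# matching the cosh[(pi - 3*gamma)*tau] factor in Eq.(e2)
assert np.isclose(y(3, 0.6), np.pi - 3 * 0.6)
```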
\section{Conclusions} In this paper, we have studied the exact solution of an integrable anisotropic spin chain with antiperiodic boundary condition, where the interactions include the NN, NNN and chiral three-spin couplings. We obtain the conserved topological momentum operator, the energy spectrum and homogeneous BAEs. In the thermodynamic limit, we calculate the ground state energy, three kinds of elementary excitations and their dispersion relations when the model parameter $a$ is real and the anisotropic parameter $\eta$ is imaginary. We find that, due to the NNN and chirality interactions, the two peaks of the excitation spectrum can be located away from the Brillouin zone boundaries $\pm\pi$, which is quite different from the model with NN interaction only. Meanwhile, the nonlinearity of the excitation spectrum can be enhanced. We also note that the $t-W$ scheme and the new parametrization of the transfer matrix provided in this paper can be generalized to study the model (\ref{Ham1}) with integrable non-diagonal boundary magnetic fields. We note that when an external magnetic field along the $z$-direction is added, the system with periodic boundary condition remains integrable. Thus we can study the elementary excitations based on the corresponding exact solution with a method similar to that given in this paper. Due to the existence of the magnetic field, the spins would be polarized and the $Z_2$ invariance is broken. Thus the patterns of $z$-roots are changed and the Fermi points are shifted. Expanding the quasi-momentum around the new Fermi points, we could obtain the nonlinear elementary excitations induced by the magnetic field. Other interesting physical quantities such as the dynamic structure factor and the spectral function can be calculated exactly \cite{Imam12}. The recently proposed pseudoparticle approach \cite{Car18} can also be used to study both the static and the dynamical properties of the system.
For the antiperiodic boundary condition, the model Hamiltonian and the external magnetic field along the $z$-direction do not commute with each other, so the system is non-integrable. Due to the broken $U(1)$ symmetry, the eigenstates of the system are quite different from those with the periodic boundary condition. The eigenstates are helical, and the elementary excitations would show interesting properties such as spinon confinement. Although the particle number with a fixed spin state is not conserved, we can define a topological conserved charge by combining the spin-up and spin-down states with suitable coefficients. The helicity would be affected by the magnetic field. All these issues are worth studying. \section*{Acknowledgments} The financial supports from the National Key R\&D Program of China (Grants Nos. 2021YFA1400243 and 2016YFA0301500), the National Natural Science Foundation of China (Grant Nos. 61835013, 11774150, 12074178, 12074410, 12047502, 11934015, 11975183, 11947301, 11774397 and 12147160), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDB01020300, XDB21030300 and XDB33000000) and the fellowship of China Postdoctoral Science Foundation (Grant No. 2020M680724) are gratefully acknowledged.
\section{Introduction} Quantum transport at the nanoscale \cite{DiCarlo2004,Zimbovskaya2011,Cunibertibook} is a blooming field where the properties of matter can be explored in a realm where quantum effects become crucial. In particular, the control of quantum interference phenomena and their interplay with the electronic structure offers a fascinating opportunity to overcome some of the usual constraints of our macroscopic classical world. \cite{Nature2001,Nature2012,Nitzan2003,Bustos2013,Rickhaus} However, at the nanoscale, both quantum \textit{and} classical behavior can be expected. The latter emerges from the unavoidable coupling to environmental degrees of freedom. \cite{Ratner2013} An exciting example of the competition between these behaviors is electron transfer in natural and artificial photosynthesis. There, the interplay between localizing interferences and environmentally induced decoherence seems to play a fundamental role in optimizing excitonic transfer. \cite{Huelga2008,Lloyd2009} This phenomenon falls in line with what is known in low-dimensional conductors. Indeed, the transport properties of highly ordered 1-D systems are determined by the fast quantum diffusion of local excitations, and are thus weakened by decoherence. On the other hand, in disordered 1-D wires, quantum coherence allows the destructive interferences that produce electronic localization. While these phenomena are roughly described by introducing imaginary energies in the Kubo formulation, this comes at the cost of overlooking charge conservation. \cite{Thouless-Kirkpatrick} Landauer's picture has almost no rival in what concerns coherent electronic transport.\cite{Landauer1999} In its simplest form, conductance is determined by the transmission probability (either quantum or classical) between electrodes. Paradoxically, quantum transmittance is much simpler to evaluate than its classical counterpart. 
Thus, the great majority of work focuses on the evaluation of the coherent transmittance, setting aside incoherent processes. An extension of this approach, developed by Markus B\"{u}ttiker,\cite{Buttiker1986} applies the Kirchhoff laws to a system connected to multiple terminals. This allows one to consider different voltage probes as well as multiple current sources and drains. The self-consistent non-equilibrium chemical potentials at the voltmeters must ensure current cancellation. The resulting transport coefficients fulfill Onsager's reciprocity relations. Additionally, B\"{u}ttiker had the crucial insight\cite{Buttiker1986PRB} that a voltage probe implies a classical measurement and thus acts as a decoherence source. This concept was further formulated by D'Amato and Pastawski, who introduced a Hamiltonian description \cite{Damato-Pastawski} (henceforth the DP model). In this description, the decoherent local probes can be assimilated to incoherent scattering by delta-function potentials.\cite{GLBE1,GLBE2} This is founded on the quantum fields formalism of Keldysh, Kadanoff and Baym for the non-equilibrium Green's functions. \cite{Danielewicz1984,Kadanoff1989,Rammer1986} There, the integro-differential equations are simplified by evaluating the currents and chemical potentials in a linearized scheme that involves a matrix containing only transmittances among different points in the sample. The DP model also provides a compact solution for an arbitrary distribution of incoherent local scattering processes. These lead to a momentum-relaxing decoherence that produces diffusion and a further increase in the resistance. The final set of linear equations relates the local chemical potentials and the currents through a transmittance matrix. \cite{Datta90} This results in the Generalized Landauer-B\"{u}ttiker Equations (GLBE) that solve the DP model. The original presentation of the DP model is constrained to two-terminal problems. 
Thus, in spite of the growing need to include the effects of decoherent processes,\cite{Maassen2009, Horvat2013} its applications remained mostly reduced to a few one-dimensional problems. \cite{Zimbovskaya2002,Zwolak2002,Gagel1996,Nozaki2008,PAni-CBP,Nozaki2012,Anantram2013} Besides, since the method deals with a great number of self-consistent local chemical potentials, it often involves a cumbersome matrix inversion. Thus, a general multi-terminal formulation of the DP model for decoherent transport and an efficient computational strategy are still lacking. In this paper we generalize the D'Amato-Pastawski model to multi-terminal problems, presenting a decimation-based method for the calculation of the decoherent conductance. In Sec. \ref{sec:BasicTools} we introduce the basic tools, based on a decimation procedure that yields the parameters of an effective Hamiltonian. In Sec. \ref{sec:DP} we overview the original DP model. In Sec. \ref{sec:Computational}, we generalize the DP model to multi-terminal setups. We also provide a recursive algorithm for the calculation of Green's functions of general banded Hamiltonians. Then, we show two application examples. In Sec. \ref{sec:example1} we consider a simple model of a phonon-laser (SASER) based on the electron-phonon interaction in a quantum dot, \cite{Kent} where we assess the role of decoherence in the SASER efficiency. In Sec. \ref{sec:example2}, we consider the spin-dependent electronic transport in a ferromagnetic wire where the Giant Magnetoresistance (GMR) \cite{Fert08} shows up. We show that our formulation describes the complete cross-over from quantum transport to the semiclassical GMR regime. In Sec. \ref{sec:conclusions} we summarize our results and conclude that our formulation can handle decoherent transport in a wide variety of problems beyond the typical two-terminal calculations. 
\section{\label{sec:BasicTools}Decimation Procedures and Effective Hamiltonians} Even the simplest quantum devices involve a huge number of degrees of freedom, and thus their study cannot be carried out without proper simplifications. For example, a tight-binding Hamiltonian describing a device or molecule with $N$ states (or orbitals) is \cite{Pastawski-Medina} \begin{equation} \hat{H}_{S}=\sum\limits_{i=1}^{N}\left\{ E_{i}\hat{c}_{i}^{\dagger }\hat{c}_{i}^{{}}+\sum\limits_{\substack{ j=1 \\ (j\neq i)}}^{N}\left[ V_{i,j}\hat{c}_{i}^{\dagger }\hat{c}_{j}^{{}}+V_{j,i}\hat{c}_{j}^{\dagger }\hat{c}_{i}^{{}}\right] \right\} . \label{Hamil-sys} \end{equation} Here, $\hat{c}_{i}^{\dagger }$ and $\hat{c}_{i}^{{}}$ correspond to the fermionic creation and annihilation operators acting on the vacuum $\left\vert 0\right\rangle $. Site energies $E_{i}$ and hopping amplitudes $V_{i,j}$ define the matrix Hamiltonian whose single-particle eigenstates $\left\vert k\right\rangle =\sum_{i}u_{i,k}\hat{c}_{i}^{\dagger }\left\vert 0\right\rangle $ of energy $\varepsilon _{k}$ are filled up to the Fermi energy, $\varepsilon _{F}$. The decimation procedures, inspired by the renormalization group techniques of statistical mechanics \cite{Kadanoff1983,Jose1982}, seek to recursively reduce a general $N\times N$ Hamiltonian into another of lower rank, without altering the physical properties. The basic idea can be captured by considering a system with $N=3$ states whose secular equation is \begin{equation} \left[ \begin{array}{ccc} \varepsilon -E_{1} & -V_{12} & -V_{13} \\ -V_{21} & \varepsilon -E_{2} & -V_{23} \\ -V_{31} & -V_{32} & \varepsilon -E_{3} \end{array} \right] \left( \begin{array}{c} u_{1} \\ u_{2} \\ u_{3} \end{array} \right) =\left[ \varepsilon \mathbb{I}-\mathbb{H}_{S}\right] \overrightarrow{u}\equiv \overrightarrow{0}. 
\end{equation} Quite often we are interested in the transfer of an excitation from an initial state to another one, say 1 and 2. Thus, instead of diagonalizing the matrix, we can isolate $u_{3}$ from the third row and use it to eliminate $u_{3}$ from the first and the second equations. In this way, we obtain a new set of equations where $u_{3}$ is \textit{decimated}: \begin{equation} \left[ \begin{array}{cc} \varepsilon -\overline{E}_{1} & -\overline{V}_{12} \\ -\overline{V}_{21} & \varepsilon -\overline{E}_{2} \end{array} \right] \left( \begin{array}{c} u_{1} \\ u_{2} \end{array} \right) =[\varepsilon \mathbb{I}-\mathbb{H}_{\mathrm{eff.}}]\vec{u}=0. \label{H_S2eff} \end{equation} The renormalized coefficients hide their non-linear dependence on the energy variable $\varepsilon $: \begin{equation} \begin{array}{c} \overline{E}_{1}=E_{1}+\Sigma _{1}(\varepsilon )=E_{1}+V_{13}\dfrac{1}{\varepsilon -E_{3}}V_{31}, \\ \overline{E}_{2}=E_{2}+\Sigma _{2}(\varepsilon )=E_{2}+V_{23}\dfrac{1}{\varepsilon -E_{3}}V_{32}, \\ \overline{V}_{12}=V_{12}+V_{13}\dfrac{1}{\varepsilon -E_{3}}V_{32}. \end{array} \label{eq:decim-ex} \end{equation} In this case, the terms $\Sigma _{j}(\varepsilon ),\ j=1,2$, are the real self-energies accounting for the energy shifts due to the coupling with the eliminated state. Notice that, as long as one conserves the analytical dependence of $\Sigma _{j}$ on $\varepsilon $, the actual secular equation is still cubic in $\varepsilon $ and provides the exact spectrum of the whole system. This procedure can be performed systematically on a Hamiltonian of any size $N\times N$ to end up with an effective Hamiltonian of whatever size one desires, in particular a $2\times 2$ one. The effective interaction parameter $\overline{V}_{12}$, together with the self-energies $\Sigma _{j}$, accounts for transport through the whole sample. Their dependence on $\varepsilon $ provides all the needed information on the steady-state transport as well as on the quantum dynamics. 
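As a sanity check, the exactness of the three-state decimation of Eq. (\ref{eq:decim-ex}) can be verified numerically. The sketch below (parameter values are illustrative, not from the paper) confirms that every eigenvalue of the full $3\times 3$ Hamiltonian satisfies the secular equation of the energy-dependent $2\times 2$ effective Hamiltonian:

```python
import numpy as np

# Illustrative parameters for the 3-state example.
E1, E2, E3 = 0.0, 0.5, 1.2
V12, V13, V23 = 0.3, 0.2, 0.4

H3 = np.array([[E1,  V12, V13],
               [V12, E2,  V23],
               [V13, V23, E3]])

def H_eff(eps):
    """2x2 effective Hamiltonian after decimating state 3, Eq. (eq:decim-ex)."""
    g3 = 1.0 / (eps - E3)  # bare propagator of the eliminated state
    return np.array([[E1 + V13 * g3 * V13, V12 + V13 * g3 * V23],
                     [V12 + V23 * g3 * V13, E2 + V23 * g3 * V23]])

# Keeping the full energy dependence, det[eps*I - H_eff(eps)] = 0
# reproduces the exact 3x3 spectrum (Schur-complement identity).
for eps in np.linalg.eigvalsh(H3):
    assert abs(np.linalg.det(eps * np.eye(2) - H_eff(eps))) < 1e-10
```

The check relies on $\det(\varepsilon\mathbb{I}-\mathbb{H}_S)=(\varepsilon-E_3)\det(\varepsilon\mathbb{I}-\mathbb{H}_{\mathrm{eff.}}(\varepsilon))$, so decimation loses no spectral information.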
\cite{Levstein1990} In practice, it is convenient to add an infinitesimal imaginary part, $\mathrm{i}\eta $, to each energy: $E_{j}\rightarrow E_{j}-\mathrm{i}\eta $. Since a finite $\eta >0$ is equivalent to a decay process, it ensures that one recovers the retarded time dependence of the observables through a well defined Fourier transform. The terminals connected to the system are described as semi-infinite leads coupled to it. They are handled in a similar way as the system itself. The idea is to eliminate all their internal degrees of freedom by decimating them progressively, renormalizing the states of the system which are directly coupled to the external reservoirs. For further clarification we consider a lead modeled as a semi-infinite one-dimensional chain, \begin{equation} \hat{H}_{L}=\sum\limits_{i=0}^{-\infty }\left\{ E_{i}\hat{c}_{i}^{\dagger }\hat{c}_{i}^{{}}-V\left[ \hat{c}_{i}^{\dagger }\hat{c}_{i-1}^{{}}+\hat{c}_{i-1}^{\dagger }\hat{c}_{i}^{{}}\right] \right\} , \end{equation} that yields a tridiagonal matrix of infinite dimension. The elements $E_{i}$'s and $V$'s are now the diagonal and off-diagonal terms of a tridiagonal matrix $\mathbb{H}_{L}$. This lead is connected at the left of the system, say, to site $1$: \begin{equation} \hat{V}_{SL}=V_{L}\left[ \hat{c}_{1}^{\dagger }\hat{c}_{0}^{{}}+\hat{c}_{0}^{\dagger }\hat{c}_{1}^{{}}\right] . \end{equation} Instead of dealing with the whole Hamiltonian \begin{equation} \hat{H}=\hat{H}_{S}+\hat{H}_{L}+\hat{V}_{SL}, \end{equation} we perform the decimation procedure. It becomes particularly simple because of the chain structure of the lead. 
The energy of the $i$-th site is \textquotedblleft shifted\textquotedblright\ by the elimination of the $(i-1)$-th site, which is itself shifted by the sites at its left \cite{Pastawski-Medina}, with the self-energies resulting in a continued fraction: \begin{eqnarray} \Sigma _{i} &=&V_{i,i-1}\dfrac{1}{\varepsilon -E_{i-1}-\Sigma _{i-1}}V_{i-1,i} \\ (i &=&0,-1,-2,\ldots ,-\infty ). \notag \end{eqnarray} In a perfect propagating channel, $V_{i,i-1}\equiv V$ and $E_{i}=E_{0}$, and thus $\Sigma _{i}=\Sigma _{i-1}\equiv \Sigma $, we arrive at the self-consistent solution: \begin{eqnarray} \Sigma (\varepsilon ) &=&\dfrac{V^{2}}{\varepsilon -E_{0}-\Sigma }=\Delta (\varepsilon )-\mathrm{i}\Gamma (\varepsilon ) \notag \\ &=&\dfrac{\varepsilon -E_{0}+\mathrm{i}\eta }{2}-\text{sgn}(\varepsilon -E_{0})\sqrt{\left( \dfrac{\varepsilon -E_{0}+\mathrm{i}\eta }{2}\right) ^{2}-V^{2}}, \label{Dyson} \label{sigma-leads} \end{eqnarray} where the generalized square root, \cite{SquareRoot} in the limit $\eta \rightarrow 0^{+}$, yields the imaginary component of the self-energy for $\varepsilon $ within the band of allowed energies. It becomes real otherwise. Thus, once the states in the left lead are fully decimated, the energy of the first site becomes \begin{eqnarray} \widetilde{E}_{1}(\varepsilon ) &=&\overline{E}_{1}(\varepsilon )+\Sigma _{L1}(\varepsilon ), \\ \text{with~~}\Sigma _{L1}(\varepsilon ) &=&\left( \frac{V_{L}}{V}\right) ^{2}\Sigma (\varepsilon ) \\ &=&\Delta _{L1}(\varepsilon )-\mathrm{i}\Gamma _{L1}(\varepsilon ). \label{self_energy} \end{eqnarray} As before, the real part $\Delta _{L1}(\varepsilon )$ indicates how the unperturbed site energies are shifted by the leads. The important difference with the simple decimation example discussed above is that, as a consequence of the infinite nature of the lead, the self-energies may acquire a finite imaginary component, $\Gamma _{L1}(\varepsilon )$, even in the limit $\eta \rightarrow 0^{+}$. 
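The closed form of Eq. (\ref{sigma-leads}) is easy to check against its defining self-consistency. A minimal sketch, assuming for illustration $E_{0}=0$ and $V=1$ in arbitrary units (the branch at $\varepsilon =E_{0}$ would need separate care, since the sign function vanishes there):

```python
import numpy as np

# Retarded lead self-energy of Eq. (sigma-leads): the root of
# Sigma = V^2 / (eps - E0 - Sigma) selected by the generalized square root.
def lead_self_energy(eps, E0=0.0, V=1.0, eta=1e-12):
    z = (eps - E0 + 1j * eta) / 2.0
    return z - np.sign(eps - E0) * np.sqrt(z * z - V * V + 0j)

# Inside the band (|eps - E0| < 2V) the self-energy acquires a negative
# imaginary part -i*Gamma; outside the band it is real (up to eta).
S_in = lead_self_energy(0.5)
S_out = lead_self_energy(3.0)
assert abs(S_in - 1.0 / (0.5 - S_in)) < 1e-6   # self-consistency
assert S_in.imag < 0 and abs(S_out.imag) < 1e-6
```

The infinitesimal `eta` plays the same role as in the text: it selects the retarded branch of the square root.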
It describes the rate at which a coherent density excitation in the system decays into the propagating states of the lead. Note that the imaginary part is roughly consistent with the exponential decay of the survival probability predicted by the Fermi Golden Rule (FGR). For instance, in a \textquotedblleft system\textquotedblright\ with a single state $\left\vert 1\right\rangle $ interacting with a lead, the survival probability at time $t$ after it has been placed in state $\left\vert 1\right\rangle $ is \begin{eqnarray} \left\vert \left\langle 1\right\vert \exp [-\mathrm{i}\hat{H}~t/\hbar ]\left\vert 1\right\rangle \theta (t)\right\vert ^{2} &\equiv &\left\vert \mathrm{i}\hbar G_{11}^{R}(t)\right\vert ^{2} \\ &\simeq &\exp [-2\Gamma _{L1}(E_{1})t/\hbar ], \end{eqnarray} where we introduced the time-dependent retarded Green's function, $G_{11}^{R}(t)$. However, we should remember that the self-energies obtained above have an explicit functional dependence on $\varepsilon $. In consequence, the actual decay can depart from this naive exponential approximation. Indeed, a quantum decay should start quadratically, as $1-\left( V_{L}t/\hbar \right) ^{2}$ at very short times, before turning into an exponential. At very long times the decay may even become non-monotonous. \cite{Rufeil2006} In practice, we will stay within the exponential approximation by neglecting the dependence on $\varepsilon $, unless it is close to a band edge. For the sake of simplicity, we may idealize the terminal leads as quasi-1-D wires. As waveguides, they can be described in terms of open channels at the Fermi energy, or propagating modes. Thus, we choose a basis for the system's Hamiltonian in which each independent propagation mode $l$ of a lead is connected to a single system state. This might require a unitary transformation to a system basis that matches the propagating modes of the leads (see Fig. \ref{Gr:leads-indep}). There is no restriction on the converse: i.e. 
each \textquotedblleft site\textquotedblright\ can be coupled to different quantum channels. Since the leads can be represented by homogeneous infinite tight-binding chains, their decimation is just the procedure implemented above, with the appropriate $V$'s and $E$'s describing each mode $l$. \begin{figure*}[tbh] \begin{center} \includegraphics[width=5.5in]{fig1.eps} \end{center} \caption{Diagrammatic representation of a unitary transformation of the system to a basis in which the leads are independent. Here, dots represent diagonal elements of the Hamiltonian in a site basis and lines non-diagonal ones.} \label{Gr:leads-indep} \end{figure*} The observation of DP was that any \textquotedblleft local\textquotedblright\ electronic state weakly coupled to a huge number of environmental degrees of freedom should decay from its initial decoupled state according to the FGR. This requires a restitution, or re-injection, of any escaping particle. Thus the DP model treats these decoherent scattering channels as on-site fictitious voltage probes. Much as occurs with real voltmeters, local current conservation on each scattering channel must be imposed. This ensures that each electron with definite energy that escapes from a state towards a fictitious probe is balanced by an electron \textit{with the same energy} re-injected into the same state. In the DP model, these decoherent channels are described by local corrections to the site energies of the sample, on the same footing as the real channels: \begin{equation} \hat{\Sigma}_{\phi i}=-\mathrm{i}\Gamma _{\phi i}\hat{c}_{i}^{\dagger }\hat{c}_{i}^{{}}. \label{sigma-decoher} \end{equation} Here, $\Gamma _{\phi i}$ represents an energy uncertainty associated with the interaction process $\phi $ that mixes the local electron state $i$ with environmental degrees of freedom. This introduces a decay of the state $i$ that can be described by the FGR. 
Notice that the state $i$ does not necessarily represent a local basis state; it could be a channel mode or a momentum basis state as well. The energy uncertainties due to decoherent processes can be estimated for each specific process, \cite{PAni-CBP} and need not be the same for every state $i$. Accordingly, each \textquotedblleft site\textquotedblright\ $i$ may be subject to different decay processes $\alpha $: those associated with real leads, $\alpha ={l}$, and those related to decoherent processes (or fictitious probes), $\alpha ={\phi }$. The resulting effective Hamiltonian, $\hat{H}_{\mathrm{eff.}}$, which includes the real and fictitious probes, is non-Hermitian \cite{Rotter2009}: \begin{equation} \hat{H}_{\mathrm{eff.}}=(\hat{H}_{S}-\mathrm{i}\eta \hat{I})+\sum\limits_{\alpha }\sum\limits_{i=1}^{N}\hat{\Sigma}_{\alpha i}. \label{Hamil-efectivo} \end{equation} Here, $\mathrm{Im}\hat{\Sigma}_{\alpha i}\neq 0$ only for those sites $i$ subject to decoherent processes ($\alpha =\phi $) or to escapes into the leads ($\alpha =l$). Trivially, if the imaginary correction were homogeneous (the same value for each state $i$), it would just shift the eigenenergies into the complex plane. In contrast, inhomogeneous corrections might produce spectral bifurcations that result in a quantum dynamical phase transition. \cite{Dente2008} In transport problems, most of the information on the system dynamics is distilled into the retarded and advanced Green's functions. More practical expressions are obtained using their Fourier transform into the energy variable $\varepsilon $, from the effective Hamiltonian given by Eq. \ref{Hamil-efectivo}. 
In matrix representation: \begin{equation} \mathbb{G}^{R}(\varepsilon )=\left[ \varepsilon \mathbb{I}-\mathbb{H}_{\mathrm{eff.}}\right] ^{-1}=\mathbb{G}^{A\dagger }(\varepsilon ). \label{eq:green-def} \end{equation} These Green's functions contain all the information on the quantum system coupled to the leads and the environment, and constitute the kernel to move into the non-equilibrium problem. Also, the diagonal elements provide the \textquotedblleft local\textquotedblright\ density of states \begin{equation} N_{i}(\varepsilon )=-\frac{1}{\pi }\mathrm{Im}G_{i,i}^{R}(\varepsilon )=\frac{1}{2\pi \mathrm{i}}\left[ G_{i,i}^{R}(\varepsilon )-G_{i,i}^{A}(\varepsilon )\right] . \end{equation} In particular, the transmission amplitudes of electronic excitations between the channels identified with process $\alpha $ at site $i$ and process $\beta $ at site $j$ can be evaluated from the generalized form of the Fisher-Lee formula \cite{Pastawski-Medina} \begin{equation} t_{\alpha i,\beta j}(\varepsilon )=\mathrm{i}2~\sqrt{\Gamma _{\beta j}^{{}}(\varepsilon )}~G_{j,i}^{R}(\varepsilon )~\sqrt{\Gamma _{\alpha i}^{{}}(\varepsilon )}, \end{equation} and the transmission probabilities are given by \begin{eqnarray} T_{\alpha i,\beta j}(\varepsilon ) &=&\left\vert t_{\alpha i,\beta j}(\varepsilon )\right\vert ^{2}\text{~~~~}(\alpha i\neq \beta j) \notag \\ &=&4\Gamma _{\beta j}(\varepsilon )G_{j,i}^{R}(\varepsilon )\Gamma _{\alpha i}(\varepsilon )G_{i,j}^{A}(\varepsilon ), \label{Fisher-Lee-generalizada} \end{eqnarray} where ${\Gamma }_{\alpha i}=\mathrm{i}(\Sigma _{\alpha ,i}^{R}-\Sigma _{\alpha ,i}^{A})/2$ is proportional to the escape rate at site $i$ due to a process $\alpha $. \section{\label{sec:DP}Two-terminal D'Amato-Pastawski Model} Retarded and advanced Green's functions and the transmission probabilities associated with them contain the basic quantum dynamics. 
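As a concrete illustration of Eqs. (\ref{eq:green-def}) and (\ref{Fisher-Lee-generalizada}), the following sketch computes the two-lead transmission of a perfect $N$-site chain. For simplicity it assumes wide-band leads (energy-independent $\Gamma $); at the band center the matched coupling $\Gamma =V$ reproduces the exact lead self-energy $\Sigma =-\mathrm{i}V$, so the transmission is unity there:

```python
import numpy as np

# Transmission of an N-site chain from the generalized Fisher-Lee formula,
# T = 4 * Gamma_R * |G^R_{N,1}|^2 * Gamma_L  (Eq. Fisher-Lee-generalizada).
# Wide-band leads (energy-independent Gamma) are an assumption made here.
N, V, Gamma = 5, 1.0, 1.0

def transmission(eps):
    H = -V * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
    H_eff = H.astype(complex)
    H_eff[0, 0] -= 1j * Gamma      # lead self-energy at site 1
    H_eff[-1, -1] -= 1j * Gamma    # lead self-energy at site N
    GR = np.linalg.inv(eps * np.eye(N) - H_eff)
    return 4.0 * Gamma * abs(GR[-1, 0]) ** 2 * Gamma

# Matched coupling gives perfect transmission at the band center,
# and unitarity bounds T by 1 elsewhere.
assert abs(transmission(0.0) - 1.0) < 1e-9
assert 0.0 <= transmission(1.3) <= 1.0
```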
In order to describe the non-equilibrium properties of a system, one has to evaluate the density matrix, or simply the diagonal terms of the non-equilibrium density functions, \begin{equation} G_{j,j}^{<}(\varepsilon )=\mathrm{i}2\pi N_{j}(\varepsilon )\mathrm{f}_{j}(\varepsilon ). \label{eq:Keldyshdensitydef} \end{equation} These, in turn, are determined by the boundary conditions imposed by the external reservoirs $\beta j$ that act as sources or drains of particles. Their occupation is described by a non-equilibrium distribution function approximated by a shifted Fermi distribution $\mathrm{f}_{\beta j}(\varepsilon )=1/(\exp [\left( \varepsilon -\varepsilon _{F}-\delta \mu _{\beta j}\right) /k_{B}T]+1)$. In the Quantum Fields formalism, the $G_{\phi j,\phi j}^{<}(\varepsilon )$ Green's functions result from the quantum evolution in the presence of the boundary conditions. In the time-independent case, energy is conserved, and the non-equilibrium density function takes the form \begin{equation} G_{j,k}^{<}(\varepsilon )=2\mathrm{i}\sum\limits_{\alpha i}G_{j,i}^{R}(\varepsilon )\Gamma _{\alpha i}(\varepsilon )\mathrm{f}_{\alpha i}(\varepsilon )G_{i,k}^{A}(\varepsilon ), \label{eq:Keldyshdensity-integral} \end{equation} i.e. densities and correlations inside the system result from the occupations $\mathrm{f}_{\beta i}(\varepsilon )$ imposed by the experimentalist at the current terminals and by the environment at the \textquotedblleft fictitious\textquotedblright\ probes. The equilibrium density function $G_{j,j}^{(0)<}(\varepsilon )$ results when $\delta \mu _{\beta j}\equiv 0$ for all $\beta j$. The actual observables are evaluated from this non-equilibrium density function. 
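A quick consistency check of Eq. (\ref{eq:Keldyshdensity-integral}): when every reservoir shares the same occupation $f$, the identity $\mathbb{G}^{R}-\mathbb{G}^{A}=-2\mathrm{i}\,\mathbb{G}^{R}\Gamma \mathbb{G}^{A}$ forces the diagonal of $G^{<}$ back to the equilibrium form of Eq. (\ref{eq:Keldyshdensitydef}). A sketch with an arbitrary Hermitian sample and made-up escape rates:

```python
import numpy as np

# If all occupations f_{alpha i} are equal, Eq. (eq:Keldyshdensity-integral)
# must reduce to the equilibrium form G^<_{jj} = i*2*pi*N_j(eps)*f.
rng = np.random.default_rng(0)
n = 4
H = rng.normal(size=(n, n))
H = (H + H.T) / 2                      # Hermitian sample Hamiltonian
Gam = np.diag([0.5, 0.1, 0.0, 0.3])    # escape rates Gamma_{alpha i} (made up)
eps, f = 0.2, 1.0

GR = np.linalg.inv(eps * np.eye(n) - (H - 1j * Gam))
GA = GR.conj().T
G_less = 2j * f * (GR @ Gam @ GA)      # Eq. (eq:Keldyshdensity-integral)
N_loc = -np.imag(np.diag(GR)) / np.pi  # local density of states
assert np.allclose(np.diag(G_less), 2j * np.pi * N_loc * f)
```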
The change with respect to equilibrium in the local density can be expressed in terms of the above boundary conditions as \cite{GLBE2}: \begin{eqnarray} \delta \rho _{j}^{{}} &=&-\frac{\mathrm{i}}{2\pi }\int \left[ G_{j,j}^{<}-G_{j,j}^{(0)<}\right] \mathrm{d}\varepsilon \\ &\simeq &N_{j}(\varepsilon _{F})\delta \mu _{j}, \notag \end{eqnarray} while the currents between sites $i$ and $j$ are given by \begin{equation} I_{i,j}=\int \left[ V_{i,j}G_{j,i}^{<}-V_{j,i}G_{i,j}^{<}\right] \mathrm{d}\varepsilon . \end{equation} These integral expressions of the observables, in the linear response approximation of small biases $e\mathtt{V}_{L}=\mu _{Li}-\varepsilon _{F}\ll \varepsilon _{F}$, become the Generalized Landauer-B\"{u}ttiker Equations (GLBE) that describe the balance of electronic currents. These are none other than the Kirchhoff laws expressed in terms of the generalized Landauer conductances, given by the Fisher-Lee formulas of Eq. \ref{Fisher-Lee-generalizada}. Because of the linear approximation these transmittances are evaluated at the Fermi energy, and the currents become: \begin{equation} I_{\alpha i}=\frac{e}{h}\underset{\text{processes}}{\sum\limits_{\beta =L,\phi }}\underset{\text{sites}}{\sum\limits_{j=1(\alpha i\neq \beta j)}^{N}}\left( T_{\alpha i,\beta j}\delta \mu _{\beta j}-T_{\beta j,\alpha i}\delta \mu _{\alpha i}\right) , \label{eq:Kirchhoff1} \end{equation} where the quantities $\delta \mu _{\alpha i}=\mu _{\alpha i}-\varepsilon _{F}$ are the chemical potentials of the electron reservoirs, at state $i$ for a process $\alpha $. The requirement in the DP model that no net current flows through the decoherent channels imposes \begin{equation} 0\equiv I_{\phi i}. \label{eq:CurrentConservation} \end{equation} These equations imply the self-consistent determination of the internal non-equilibrium chemical potentials $\delta \mu _{\phi i}$. Thus, we are faced with a linear problem. 
Once again, its solution can be laid out as a decimation procedure, as we did to obtain the effective Hamiltonian. Consider the case where two real leads are connected to sites $1$ and $N$ of the system (thus identified as channels $\ell 1$ and $\ell N$), and a \textit{single} decoherent process $\phi k$ is connected to the state $k$. Then, charge conservation implies: \begin{equation} 0=T_{\phi k,\ell 1}\delta \mu _{\ell 1}+T_{\phi k,\ell N}\delta \mu _{\ell N}-(T_{\ell 1,\phi k}+T_{\ell N,\phi k})\delta \mu _{\phi k}, \label{eq:Decim-T_first} \end{equation} which can be rewritten as: \begin{equation} \delta \mu _{\phi k}=\frac{T_{\phi k,\ell N}}{(T_{\ell 1,\phi k}+T_{\ell N,\phi k})}\delta \mu _{\ell N}+\frac{T_{\phi k,\ell 1}}{(T_{\ell 1,\phi k}+T_{\ell N,\phi k})}\delta \mu _{\ell 1}. \end{equation} Using this relation for the current on the real channels we obtain: \begin{equation} I_{\ell N}=-I_{\ell 1}=\frac{e}{h}\tilde{T}_{\ell N,\ell 1}(\delta \mu _{\ell N}-\delta \mu _{\ell 1}), \label{eq:currentDP} \end{equation} where $\tilde{T}_{\ell N,\ell 1}$ represents the \textquotedblleft effective\textquotedblright\ transmission between leads $\ell 1$ and $\ell N$ after the decimation of the incoherent channel associated with $\phi k$, given by: \begin{equation} \tilde{T}_{\ell N,\ell 1}=T_{\ell N,\ell 1}+T_{\ell N,\phi k}\frac{1}{(T_{\ell 1,\phi k}+T_{\ell N,\phi k})}T_{\phi k,\ell 1}. \label{eq:Decim-T} \end{equation} Note that the zero-current constraint at the decoherent channels allows us to pile up (i.e. decimate) those processes into an incoherent contribution to the total transmission. This is why Eq. \ref{eq:CurrentConservation} is the key factor in the computation of the total transmission. At this point one recognizes the analogy of the second term on the right-hand side of Eq. \ref{eq:Decim-T} with the effective interaction shown in Eq. \ref{eq:decim-ex}. 
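The algebra above can be traced numerically. In this sketch the transmittance values are made up for illustration and taken to be reciprocal ($T_{ab}=T_{ba}$); the current at lead $\ell N$ computed directly from the Kirchhoff balance coincides with the one obtained from the decimated transmittance of Eq. (\ref{eq:Decim-T}):

```python
# Illustrative reciprocal transmittances; not values from the paper.
T_NL = T_LN = 0.30         # direct lead-lead term
T_Nphi = T_phiN = 0.20     # lead N <-> probe
T_Lphi = T_phiL = 0.25     # lead L <-> probe
dmu_L, dmu_N = 1.0, 0.0    # applied bias

# Probe potential fixed by zero net current (Eq. eq:Decim-T_first):
dmu_phi = (T_phiN * dmu_N + T_phiL * dmu_L) / (T_Lphi + T_Nphi)

# Current at lead N from the Kirchhoff balance (in units of e/h) ...
I_N = T_NL * (dmu_L - dmu_N) + T_Nphi * (dmu_phi - dmu_N)
# ... agrees with the decimated effective transmittance (Eq. eq:Decim-T):
T_eff = T_NL + T_Nphi * T_phiL / (T_Lphi + T_Nphi)
assert abs(I_N - T_eff * (dmu_L - dmu_N)) < 1e-12
```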
This analogy will be used in the following section to develop a simple matrix solution for the total decoherent transmission in a multi-terminal setup. In the case of two current probes, identifying the index labels $L=\ell 1$ and $R=\ell N$ for the leads, and $\phi k=k$ for the decoherence probes, the total transmission probability is given by:\cite{Damato-Pastawski} \begin{equation} \tilde{T}_{L,R}=T_{L,R}+\sum\limits_{i,j}T_{R,i}\left[ \mathbb{W}^{-1}\right] _{i,j}T_{j,L}. \label{eq:Teff_DP} \end{equation} The elements of the matrix $\mathbb{W}$ are: \begin{equation} W_{ij}=-T_{i,j}+\left( \sum\limits_{k=L,1,\ldots ,N,R}T_{i,k}\right) \delta _{ij}. \end{equation} Eqs. \ref{eq:currentDP} and \ref{eq:Teff_DP} provide the decoherent current and the effective transmission of the DP model for two-terminal setups. However, they need to be reformulated to deal with a multi-terminal setup, i.e. when there are more than two externally controlled chemical potentials, or when one needs to discriminate among the different processes that contribute to the current. \section{\label{sec:Computational}Multi-Terminal D'Amato-Pastawski Model} The two-probe Landauer conductance requires the computation of a single element of the Green's function matrix: that connecting the sites where the leads are attached. In a 1-D case, this is $G_{1N}$ (where $N$ is the number of sites of the system) and can be calculated through a decimation procedure.\cite{Levstein1990} While this can be readily generalized to deal with finite systems of any dimension, not all formulations prove numerically stable in the presence of strong disorder or band gaps.\cite{PastawskiSlutzky} We will present a particular algorithm that is stable under such conditions. The method is applicable to block-tridiagonal Hamiltonians. These are very common in many physically relevant situations, specifically when interactions are truncated, or when the Hamiltonian matrix presents some form of banded structure. 
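In the same spirit, Eq. (\ref{eq:Teff_DP}) can be checked against a direct solution of the current-conservation conditions. The sketch below assumes reciprocal transmittances and builds the diagonal of $\mathbb{W}$ from the total transmittance out of each probe (both leads plus all other probes); all values are illustrative:

```python
import numpy as np

# Two leads (L, R) plus M decoherent probes with made-up reciprocal
# transmittances; W_ij = -T_ij + delta_ij * sum_k T_ik over all channels k.
rng = np.random.default_rng(1)
M = 3
T_pp = rng.uniform(0.0, 0.1, size=(M, M))
T_pp = (T_pp + T_pp.T) / 2
np.fill_diagonal(T_pp, 0.0)              # probe <-> probe
T_pL = rng.uniform(0.1, 0.3, size=M)     # probe <-> lead L
T_pR = rng.uniform(0.1, 0.3, size=M)     # probe <-> lead R
T_RL = 0.2                               # direct coherent transmittance

W = -T_pp + np.diag(T_pL + T_pR + T_pp.sum(axis=1))
T_eff = T_RL + T_pR @ np.linalg.inv(W) @ T_pL   # Eq. (eq:Teff_DP)

# Cross-check: solve the zero-current conditions for the probe potentials
# with dmu_L = 1, dmu_R = 0, and recompute the current at lead R.
dmu = np.linalg.solve(W, T_pL)
I_R = T_RL + T_pR @ dmu
assert abs(I_R - T_eff) < 1e-12
```

The agreement is exact because decimating the probe potentials and inverting $\mathbb{W}$ are the same linear elimination.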
The DP model requires the computation of the transmittances among all possible pairs of fictitious and physical probes, roughly $M(M-1)/2$, where $M~(\leq N)$ is the number of phase-breaking scattering channels. Also, the computation of the effective transmission requires the inversion of $\mathbb{W}$, an $M\times M$ matrix, as expressed in Eq. \ref{eq:Teff_DP}. It is our purpose to extend the scheme of the DP model to account for decoherence in quantum transport problems that involve many terminals. We seek a decoherent transmission analogous to Eq. \ref{eq:Teff_DP} for each pair of physical leads. Thus, the computational approach to the DP model requires an efficient matrix inversion algorithm. In the next subsection, we present a computational procedure that, being based on decimation schemes, preserves the physical meaning of the matrix inversions. This may allow one to take advantage of the system's symmetries, as they can usually be expressed as relations between the elements of $\mathbb{G}$. \subsection{Green's Function and recursive algorithms.} In order to obtain the Green's functions of Eq. \ref{eq:green-def}, a matrix inversion is needed. The \textit{matrix continued fractions} \cite{Butler1973,MCF-Pastawski} scheme offers a decimative approach well suited to perform this task. This procedure can be constructed recalling the well known $2\times 2$ block matrix inversion, \begin{widetext} \begin{equation} \left[ \begin{array}{cc} \mathbb{A} & \mathbb{B} \\ \mathbb{C} & \mathbb{D} \end{array} \right] ^{-1}=\left[ \begin{array}{cc} (\mathbb{A}-\mathbb{BD}^{-1}\mathbb{C})^{-1} & -\mathbb{A}^{-1}\mathbb{B}(\mathbb{D}-\mathbb{CA}^{-1}\mathbb{B})^{-1} \\ -\mathbb{D}^{-1}\mathbb{C}(\mathbb{A}-\mathbb{BD}^{-1}\mathbb{C})^{-1} & (\mathbb{D}-\mathbb{CA}^{-1}\mathbb{B})^{-1} \end{array} \right] , \label{eq:BlockInvert} \end{equation} \end{widetext}where $\mathbb{A}$, $\mathbb{B}$, $\mathbb{C}$ and $\mathbb{D}$ are arbitrary-size subdivisions of the original matrix. 
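Eq. (\ref{eq:BlockInvert}) is straightforward to verify numerically; a minimal sketch on a random, well-conditioned complex matrix with arbitrary (unequal) block sizes:

```python
import numpy as np

# Numerical check of the 2x2 block inversion identity (Eq. eq:BlockInvert).
rng = np.random.default_rng(2)
nA, nD = 3, 4
n = nA + nD
Mfull = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Mfull += n * np.eye(n)          # keep the matrix comfortably invertible
A, B = Mfull[:nA, :nA], Mfull[:nA, nA:]
C, D = Mfull[nA:, :nA], Mfull[nA:, nA:]

SA = np.linalg.inv(A - B @ np.linalg.inv(D) @ C)   # (A - B D^-1 C)^-1
SD = np.linalg.inv(D - C @ np.linalg.inv(A) @ B)   # (D - C A^-1 B)^-1
block_inv = np.block([
    [SA,                         -np.linalg.inv(A) @ B @ SD],
    [-np.linalg.inv(D) @ C @ SA, SD],
])
assert np.allclose(block_inv, np.linalg.inv(Mfull))
```

The Schur complements `SA` and `SD` are precisely the block self-energy corrections used in the decimation scheme that follows.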
Let us assume that we have an effective Hamiltonian, $\hat{H}_{\mathrm{eff.}}$, which has a block-tridiagonal structure. We start by \textquotedblleft partitioning\textquotedblright\ the basis states in two portions: a cluster labeled as $1$ that contains the first block, and the cluster of remaining states of the system, which we label as $B$. Thus, the Green's function matrix in Eq. \ref{eq:green-def} is subdivided into four blocks, $(\varepsilon \mathbb{I}-\mathbb{E}_{1})$, $(\varepsilon \mathbb{I}-\mathbb{E}_{B})$, $-\mathbb{V}_{1B}$, and $-\mathbb{V}_{B1}$, of dimensions $N_{1}\times N_{1}$, $N_{B}\times N_{B}$, $N_{1}\times N_{B}$ and $N_{B}\times N_{1}$ respectively. Thus, \begin{equation} \mathbb{G}(\varepsilon )=\left[ \begin{array}{cc} \mathbb{G}_{11} & \mathbb{G}_{1B} \\ \mathbb{G}_{B1} & \mathbb{G}_{BB} \end{array} \right] =\left[ \begin{array}{cc} \varepsilon \mathbb{I}-\mathbb{E}_{1} & -\mathbb{V}_{1B} \\ -\mathbb{V}_{B1} & \varepsilon \mathbb{I}-\mathbb{E}_{B} \end{array} \right] ^{-1}. \label{eq:Green-block} \end{equation} Here, it is important to recall that the effective Hamiltonian $\hat{H}_{\mathrm{eff.}}$ already includes all the corrections due to fictitious and real probes, by virtue of Eq. \ref{Hamil-efectivo}. In this way, the blocks with energies and interactions, denoted here by $\mathbb{E}_{i}$, contain the self-energies that account for the openness of the system, and may be complex. Combining Eq. \ref{eq:BlockInvert} and Eq. 
\ref{eq:Green-block}, it is easy to show that \begin{equation} \begin{array}{c} \mathbb{G}_{11}=\left( \varepsilon \mathbb{I}-{\mathbb{E}}_{1}-\mathbf{\Sigma }_{1}^{(B)}\right) ^{-1}=\left( \varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{1}\right) ^{-1}, \\ \mathbb{G}_{BB}=\left( \varepsilon \mathbb{I}-{\mathbb{E}}_{B}-\mathbf{\Sigma }_{B}^{(1)}\right) ^{-1}=\left( \varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{B}\right) ^{-1}, \\ \mathbb{G}_{1B}=\mathbb{G}_{11}\mathbb{V}_{1B}(\varepsilon \mathbb{I}-\mathbb{E}_{B})^{-1}=\mathbb{G}_{11}\left[ \mathbf{\Sigma }_{1}^{(B)}\mathbb{V}_{B1}^{-1}\right] ,\text{ and} \\ \mathbb{G}_{B1}=\mathbb{G}_{BB}\mathbb{V}_{B1}(\varepsilon \mathbb{I}-\mathbb{E}_{1})^{-1}=\mathbb{G}_{BB}\left[ \mathbf{\Sigma }_{B}^{(1)}\mathbb{V}_{1B}^{-1}\right] . \end{array} \label{eq:Green-blocks2} \end{equation} Here, the similarity with Eq. \ref{eq:decim-ex} allows us to define the block self-energies, $\mathbf{\Sigma }$'s, which in this simple $2\times 2$ block scheme are given by: \begin{equation} \begin{array}{c} \left[ \mathbf{\Sigma }_{1}^{(B)}\mathbb{V}_{B1}^{-1}\right] =\left[ \mathbb{V}_{1B}(\varepsilon \mathbb{I}-\mathbb{E}_{B})^{-1}\right] , \\ \left[ \mathbf{\Sigma }_{B}^{(1)}\mathbb{V}_{1B}^{-1}\right] =\left[ \mathbb{V}_{B1}(\varepsilon \mathbb{I}-\mathbb{E}_{1})^{-1}\right] . \end{array} \label{eq:Sigmas1} \end{equation} Notice that, in the expressions of Eqs. \ref{eq:Green-blocks2} and \ref{eq:Sigmas1}, the inverse of the hopping matrix must cancel with the hopping that enters in the definition of the self-energies. Since the hoppings may be non-square matrices, this definition is crucial to avoid their inversion. Considering the bracketed factors $\left[ \mathbf{\Sigma }\mathbb{V}^{-1}\right] $ as a single object ensures the stability of the recurrence procedure. The decimation of the degrees of freedom associated with the portion $B$ of the effective Hamiltonian is implied in Eq. 
\ref{eq:Green-blocks2}, where:
\begin{equation}
\tilde{\mathbb{E}}_{1}=\mathbb{E}_{1}+\mathbf{\Sigma }_{1}^{(B)}=\mathbb{E}_{1}+\left[ \mathbb{V}_{1B}(\varepsilon \mathbb{I}-\mathbb{E}_{B})^{-1}\right] \mathbb{V}_{B1}.  \label{eq:decimation_block1+R}
\end{equation}
Likewise, the decimation of block $1$ into $B$ gives the effective block:
\begin{equation}
\tilde{\mathbb{E}}_{B}=\mathbb{E}_{B}+\mathbf{\Sigma }_{B}^{(1)}=\mathbb{E}_{B}+\left[ \mathbb{V}_{B1}(\varepsilon \mathbb{I}-\mathbb{E}_{1})^{-1}\right] \mathbb{V}_{1B}.  \label{eq:decimation_blockR+1}
\end{equation}
Note that, with the adopted notation for the self-energies, $\mathbf{\Sigma }_{i}^{(j)}$ is the correction to block site $i$ when all block sites between $i$ and $j$ (with $j$ included) are decimated. Therefore, the supra-index in parentheses indicates the subspace that has been decimated. Since we are dealing with tridiagonal block matrices, we may resort to a further partition for the matrix inversion involved in Eq. \ref{eq:decimation_block1+R}, i.e., the block $B$ describes states that can be subdivided into two clusters, where the first one, labeled $2$, corresponds to the first tridiagonal block of $(\varepsilon \mathbb{I}-\mathbb{E}_{B})$. The other block, $B^{\prime }$, now satisfies $\mathbb{V}_{1B^{\prime }}\equiv \mathbb{O}$. Then, we have
\begin{equation}
\mathbb{G}(\varepsilon )=\left[
\begin{array}{c|cc}
\varepsilon \mathbb{I}-\mathbb{E}_{1} & -\mathbb{V}_{12} & \mathbb{O} \\ \hline
-\mathbb{V}_{21} & \varepsilon \mathbb{I}-\mathbb{E}_{2} & -\mathbb{V}_{2B^{\prime }} \\
\mathbb{O} & -\mathbb{V}_{B^{\prime }2} & \varepsilon \mathbb{I}-\mathbb{E}_{B^{\prime }}
\end{array}
\right] ^{-1}.
\end{equation}
Again, we can also decimate the degrees of freedom associated with block $2$, taking
\begin{equation}
\begin{array}{cc}
\tilde{\mathbb{E}}_{1}=\mathbb{E}_{1}+\mathbf{\Sigma }_{1}^{(2)}, & \tilde{\mathbb{E}}_{B^{\prime }}=\mathbb{E}_{B^{\prime }}+\mathbf{\Sigma }_{B^{\prime }}^{(2)}, \\
\multicolumn{2}{c}{\tilde{\mathbb{V}}_{1B^{\prime }}=\mathbb{V}_{12}(\varepsilon \mathbb{I}-\mathbb{E}_{2})^{-1}\mathbb{V}_{2B^{\prime }},}
\end{array}
\end{equation}
which leads to an effective equation analogous to Eq. \ref{eq:Green-block}, in terms of the new effective block sites:
\begin{equation}
\left[
\begin{array}{cc}
\mathbb{G}_{11} & \mathbb{G}_{1B^{\prime }} \\
\mathbb{G}_{B^{\prime }1} & \mathbb{G}_{B^{\prime }B^{\prime }}
\end{array}
\right] =\left[
\begin{array}{cc}
\varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{1} & -\tilde{\mathbb{V}}_{1B^{\prime }} \\
-\tilde{\mathbb{V}}_{B^{\prime }1} & \varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{B^{\prime }}
\end{array}
\right] ^{-1}.
\end{equation}
Therefore, an expression analogous to Eq.
\ref{eq:Green-blocks2} is obtained:
\begin{equation}
\begin{array}{c}
\mathbb{G}_{11}=\left( \varepsilon \mathbb{I}-\mathbb{E}_{1}-\mathbf{\Sigma }_{1}^{(B^{\prime })}\right) ^{-1}, \\
\mathbb{G}_{B^{\prime }B^{\prime }}=\left( \varepsilon \mathbb{I}-\mathbb{E}_{B^{\prime }}-\mathbf{\Sigma }_{B^{\prime }}^{(1)}\right) ^{-1}, \\
\mathbb{G}_{1B^{\prime }}=\mathbb{G}_{11}\tilde{\mathbb{V}}_{1B^{\prime }}(\varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{B^{\prime }})^{-1}, \\
\mathbb{G}_{B^{\prime }1}=\mathbb{G}_{B^{\prime }B^{\prime }}\tilde{\mathbb{V}}_{B^{\prime }1}(\varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{1})^{-1},
\end{array}
\label{eq:Green-blocks3}
\end{equation}
where the diagonal blocks of the Green's function matrix involve
\begin{equation}
\begin{array}{c}
\mathbf{\Sigma }_{1}^{(B^{\prime })}=\left[ \mathbb{V}_{12}(\varepsilon \mathbb{I}-\mathbb{E}_{2}-\mathbf{\Sigma }_{2}^{(B^{\prime })})^{-1}\right] \mathbb{V}_{21}, \\
\mathbf{\Sigma }_{B^{\prime }}^{(1)}=\left[ \mathbb{V}_{B^{\prime }2}(\varepsilon \mathbb{I}-\mathbb{E}_{2}-\mathbf{\Sigma }_{2}^{(1)})^{-1}\right] \mathbb{V}_{2B^{\prime }}.
\end{array}
\label{eq:Sigmas_border}
\end{equation}
Note that, in the self-energies of Eq. \ref{eq:Sigmas_border}, the decimated space (denoted by the supra-index) always includes one of the border blocks (in this case, $1$ or $B^{\prime }$).
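As an illustration only (not part of the original derivation), the decimation recursion can be sketched in the scalar case, i.e., blocks of size one with hypothetical chain parameters. All states beyond the first site are folded, one by one, into a self-energy, and the result is checked against a direct inversion of the full tridiagonal matrix:

```python
# Scalar sketch of the decimation recursion (hypothetical parameters):
# all sites to the right of site 1 are folded into a self-energy Sigma,
# so that G_11 = (eps - E_1 - Sigma)^(-1).

def sigma_right(eps, E, V):
    """Self-energy on the first site after decimating the sites with
    energies E[0..], linked by hoppings V[k] between neighbors."""
    s = 0j                                  # nothing beyond the last site
    for Ek, Vk in zip(reversed(E), reversed(V)):
        s = Vk * Vk / (eps - Ek - s)        # [V (eps - E - Sigma)^(-1)] V
    return s

def g11_decimated(eps, E0, E, V):
    return 1.0 / (eps - E0 - sigma_right(eps, E, V))

def g11_direct(eps, E0, E, V):
    """(1,1) element of (eps*I - H)^(-1), via Gaussian elimination."""
    diag = [E0] + E
    n = len(diag)
    M = [[0j] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = eps - diag[i]
    for k, Vk in enumerate(V):
        M[k][k + 1] = M[k + 1][k] = -Vk
    b = [1.0 + 0j] + [0j] * (n - 1)         # solve M x = e_1 -> x[0] = G_11
    for c in range(n):                      # forward elimination
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for cc in range(c, n):
                M[r][cc] -= f * M[c][cc]
            b[r] -= f * b[c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):          # back substitution
        x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x[0]

eps = 0.1 + 0.05j                           # small imaginary part: open system
E0, E, V = 0.0, [0.5, -0.3], [0.2, 0.4]
```

The small imaginary part of the energy plays here the role of the probe self-energies that keep all the inversions non-singular.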
However, as shown hereafter, the non-diagonal terms can also be written in terms of the block self-energies $\mathbf{\Sigma }^{(1)}$'s and $\mathbf{\Sigma }^{(B^{\prime })}$'s:
\begin{equation}
\begin{array}{c}
\mathbb{G}_{1B^{\prime }}=\mathbb{G}_{11}[\mathbf{\Sigma }_{1}^{(B^{\prime })}\mathbb{V}_{21}^{-1}][\mathbf{\Sigma }_{2}^{(B^{\prime })}\mathbb{V}_{B^{\prime }2}^{-1}], \\
\mathbb{G}_{B^{\prime }1}=\mathbb{G}_{B^{\prime }B^{\prime }}[\mathbf{\Sigma }_{B^{\prime }}^{(1)}\mathbb{V}_{2B^{\prime }}^{-1}][\mathbf{\Sigma }_{2}^{(1)}\mathbb{V}_{12}^{-1}].
\end{array}
\label{eq:G1Bprime}
\end{equation}
Both expressions are crucial to visualize the seed of our recursive procedure. The generalization by further partition into an arbitrary number of clusters is straightforward. The Green's functions are expressed as a product of non-singular self-energy blocks that are calculated recursively. Independently of how the effective Hamiltonian is subdivided, if there are $N$ blocks of arbitrary size and the entire system is decimated into the $i$-th and $j$-th blocks, we simply have, as matrix continued fractions, \cite{MCF-Pastawski}
\begin{equation}
\begin{array}{c}
\mathbf{\Sigma }_{i}^{(j)}=\left[ \mathbb{V}_{i,i+1}\left( \varepsilon \mathbb{I}-\mathbb{E}_{i+1}-\mathbf{\Sigma }_{i+1}^{(j)}\right) ^{-1}\right] \mathbb{V}_{i+1,i}, \\
\mathbf{\Sigma }_{j}^{(i)}=\left[ \mathbb{V}_{j,j-1}\left( \varepsilon \mathbb{I}-\mathbb{E}_{j-1}-\mathbf{\Sigma }_{j-1}^{(i)}\right) ^{-1}\right] \mathbb{V}_{j-1,j}, \\
\text{for}~~~~j>i,
\end{array}
\end{equation}
provided that the final structure preserves the block-tridiagonal form. We recall that matrix inversions are further stabilized by the presence of the imaginary site energies imposed by the real and fictitious probes (Eq. \ref{Hamil-efectivo}).
In this way, the decimation of the entire system into the arbitrary \textquotedblleft block\textquotedblright\ sites $i$ and $j$ leads to the effective quantities
\begin{equation}
\begin{array}{c}
\tilde{\mathbb{E}}_{i}=\mathbb{E}_{i}+\mathbf{\Sigma }_{i}^{(1)}+\mathbf{\Sigma }_{i}^{(j)}, \\
\tilde{\mathbb{E}}_{j}=\mathbb{E}_{j}+\mathbf{\Sigma }_{j}^{(i)}+\mathbf{\Sigma }_{j}^{(N)}, \\
\tilde{\mathbb{V}}_{i,j}=\tilde{\mathbb{V}}_{i,j-1}(\varepsilon \mathbb{I}-\mathbb{E}_{j-1}-\mathbf{\Sigma }_{j-1}^{(1)})^{-1}\mathbb{V}_{j-1,j},
\end{array}
\end{equation}
\textit{which determine exactly} each $(i,j)$ element of the total Green's function,
\begin{equation}
\left[
\begin{array}{cc}
\mathbb{G}_{ii} & \mathbb{G}_{ij} \\
\mathbb{G}_{ji} & \mathbb{G}_{jj}
\end{array}
\right] =\left[
\begin{array}{cc}
\varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{i} & -\tilde{\mathbb{V}}_{ij} \\
-\tilde{\mathbb{V}}_{ji} & \varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{j}
\end{array}
\right] ^{-1}.
\end{equation}
The last expression is similar to Eq. \ref{eq:Green-block}, and therefore we have
\begin{equation}
\begin{array}{c}
\mathbb{G}_{ii}=\left[ (\varepsilon \mathbb{I}-{\mathbb{E}}_{i})-\mathbf{\Sigma }_{i}^{(1)}-\mathbf{\Sigma }_{i}^{(N)}\right] ^{-1}, \\
\mathbb{G}_{jj}=\left[ (\varepsilon \mathbb{I}-{\mathbb{E}}_{j})-\mathbf{\Sigma }_{j}^{(1)}-\mathbf{\Sigma }_{j}^{(N)}\right] ^{-1}, \\
\mathbb{G}_{ij}=\mathbb{G}_{ii}\left[ \tilde{\mathbb{V}}_{ij}(\varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{j})^{-1}\right] , \\
\mathbb{G}_{ji}=\mathbb{G}_{jj}\left[ \tilde{\mathbb{V}}_{ji}(\varepsilon \mathbb{I}-\tilde{\mathbb{E}}_{i})^{-1}\right] .
\end{array}
\label{eq:Green-blocksF}
\end{equation}
This procedure is shown diagrammatically in Fig. \ref{Gr:Decimation}.
\begin{figure*}[tbph]
\begin{center}
\includegraphics[width=5.5in]{fig2.eps}
\end{center}
\caption{Decimation scheme for the calculation of the elements of the Green's function matrix.}
\label{Gr:Decimation}
\end{figure*}
Note that the diagonal elements are easily calculated by evaluating $\mathcal{O}(N)$ energy corrections of the form $\mathbf{\Sigma }_{i}^{(1)}$ and $\mathbf{\Sigma }_{i}^{(N)}$, where all the sites have been decimated into site $i$. Also, in order to compute all the non-diagonal elements of the Green's function matrix in Eqs. \ref{eq:Green-blocks3} and \ref{eq:Green-blocksF}, we would need to evaluate $\sim N^{2}$ energy corrections $\mathbf{\Sigma }_{i}^{(j)}$'s. However, following the insight given by Eq. \ref{eq:G1Bprime}, for tridiagonal block Hamiltonians the non-diagonal block matrix elements of the Green's function can be obtained in terms of the diagonal ones, avoiding the evaluation of $\mathcal{O}(N^{2})$ terms $\mathbf{\Sigma }_{i}^{(j)}$'s. In this case, if the Hamiltonian matrix is subdivided into $N$ arbitrary blocks, we have
\begin{eqnarray}
\mathbb{G}_{ij} &=&\mathbb{G}_{ii}\prod\limits_{k=i}^{j-1}\left[ \mathbf{\Sigma }_{k}^{(N)}\mathbb{V}_{k+1,k}^{-1}\right] ,  \label{eq:thoulessR} \\
\mathbb{G}_{ji} &=&\mathbb{G}_{jj}\prod\limits_{k=j}^{i+1}\left[ \mathbf{\Sigma }_{k}^{(1)}\mathbb{V}_{k-1,k}^{-1}\right] ,  \label{eq:thoulessL}
\end{eqnarray}
where $i<j$. Note that now it is not necessary to evaluate any extra $\mathbf{\Sigma }$ in order to calculate $\mathbb{G}_{ij}$ for $i\neq j$, because those self-energies have already been calculated for the diagonal blocks of the Green's function matrix, $\mathbb{G}_{ii}$. This implies that only $\mathcal{O}\left( N\right) $ self-energies are required for the calculation of the whole Green's function. These equations can also help to take advantage of possible symmetries of the $\mathbb{V}$ and $\mathbf{\Sigma }$ matrices to speed up the calculation of the Green's functions even further.
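As a sanity check of this product structure (an illustration with hypothetical parameters, not taken from the original), in the scalar case the bracketed factors reduce to $\Sigma _{k}^{(N)}/V_{k}$, and the whole first row of the Green's function follows from $G_{11}$ and the right self-energies alone:

```python
# Scalar sketch of the product formula for off-diagonal Green's functions:
# G_{1j} = G_{11} * prod_k [ Sigma_k^{(N)} / V_k ], compared against a
# direct inversion of (eps*I - H) for a short chain.

def right_self_energies(eps, E, V):
    """Sigma_k^{(N)}: all sites to the right of k decimated into site k."""
    n = len(E)
    S = [0j] * n
    for k in range(n - 2, -1, -1):
        S[k] = V[k] * V[k] / (eps - E[k + 1] - S[k + 1])
    return S

def G_row1(eps, E, V):
    """G_{1j} for j = 1..n via the product of [Sigma_k^{(N)} / V_k] factors."""
    S = right_self_energies(eps, E, V)
    g = [1.0 / (eps - E[0] - S[0])]        # G_11 (no states to the left)
    for k in range(len(E) - 1):
        g.append(g[-1] * S[k] / V[k])      # one bracketed factor per step
    return g

def G_row1_direct(eps, E, V):
    """First row of (eps*I - H)^(-1) by Gaussian elimination."""
    n = len(E)
    M = [[0j] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = eps - E[i]
    for k, Vk in enumerate(V):
        M[k][k + 1] = M[k + 1][k] = -Vk
    b = [1.0 + 0j] + [0j] * (n - 1)        # M x = e_1  ->  x_j = G_{j1}
    for c in range(n):
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for cc in range(c, n):
                M[r][cc] -= f * M[c][cc]
            b[r] -= f * b[c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x                               # symmetric H: G_{j1} = G_{1j}

eps = 0.1 + 0.05j
E = [0.0, 0.5, -0.3, 0.2]
V = [0.2, 0.4, 0.3]
```

Only the $\mathcal{O}(N)$ self-energies computed for the diagonal elements enter the product, in line with the discussion above.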
Although Eqs. \ref{eq:thoulessR}-\ref{eq:thoulessL} have been formally written in terms of hopping matrix inverses, $\mathbb{V}^{-1}$, these expressions are accurate even when the hopping matrices are singular. This is because the hopping matrix inverse cancels out with the hopping entering the $\mathbf{\Sigma }$ definition, as can be seen, for example, in Eq. \ref{eq:Sigmas1}. In most cases, $\mathbb{G}_{ij}^{R}=\mathbb{G}_{ji}^{R}$, and therefore Eqs. \ref{eq:thoulessR} and \ref{eq:thoulessL} are equivalent. However, both equations are needed in some cases of quantum pumping \cite{LuisMPQP} or in the presence of magnetic fields. The origin of the extraordinary stability of Eqs. \ref{eq:thoulessR}-\ref{eq:thoulessL} can be easily grasped analytically by considering a linear chain with three sites and expressing the self-energies in terms of continued fractions before applying Eq. \ref{eq:G1Bprime}. Explicitly,
\begin{widetext}
\begin{equation}
G_{1,3}(\varepsilon )=\cfrac{1}{\varepsilon -E_{1}-V_{12}\cfrac{1}{\varepsilon -E_{2}-V_{23}\cfrac{1}{\varepsilon -E_{3}}V_{32}}V_{21}}V_{12}\cfrac{1}{\varepsilon -E_{2}-V_{23}\cfrac{1}{\varepsilon -E_{3}}V_{32}}V_{23}\cfrac{1}{\varepsilon -E_{3}}.  \label{Eq-G13}
\end{equation}
\end{widetext}Here, we clearly see that the divergences in the last factor are exactly canceled by the zeros of the second one, while the singularities of the latter are canceled by the zeros of the first factor. This equation still holds when the elements $E$'s and $V$'s are replaced by matrices, with the $\mathbb{V}_{n,n+1}$'s mixing subspaces $\mathbb{E}_{n}$ and $\mathbb{E}_{n+1}$ of different dimensions. In general, the divergences in $\left[ \mathbf{\Sigma }_{k}^{(N)}\mathbb{V}_{k+1,k}^{-1}\right] $ are compensated by the zeros of the previous term, i.e. $\left[ \mathbf{\Sigma }_{k-1}^{(N)}\mathbb{V}_{k,k-1}^{-1}\right] $.
Furthermore, the regularization of poles and divergences imposed by decoherent processes (see below) ensures the numerical precision of this cancellation.

\subsection{Physical Observables in the Multi-Terminal D'Amato-Pastawski model}

The application of the DP model to multi-terminal devices requires a generalization of Eq. \ref{eq:Teff_DP}. To obtain the total transmission at each terminal, we can take advantage of the decimation procedures discussed above. Eq. \ref{eq:Kirchhoff1} is easily rearranged in terms of the transmissivity $(1-R_{\alpha i})$ of each channel $\alpha i$. \cite{Pastawski-Medina} For process $\alpha $ at site $i$, one defines
\begin{eqnarray}
\left\vert t_{\alpha i,\alpha i}\right\vert ^{2}+(1-R_{\alpha i}) &=&\left\vert t_{\alpha i,\alpha i}\right\vert ^{2}+\underset{\left( \beta j\neq \alpha i\right) }{\sum_{\beta ,j}}T_{\beta j,\alpha i}  \label{Eq:1-R} \\
&=&(1/g_{\alpha ,i})=4\pi N_{i}\Gamma _{\alpha i},  \label{Eq:g-T}
\end{eqnarray}
where $N_{i}$ is the density of states at site $i$. The Fisher-Lee formula is extended by defining a \textquotedblleft self-transmission\textquotedblright\ $\left\vert t_{\alpha i,\alpha i}\right\vert ^{2}$ that is not a transmittance in the standard sense, and certainly is not the diagonal term $T_{\alpha i,\alpha i}\equiv R_{\alpha i}-1$. However, it is required to obtain the sum of Eq. \ref{Eq:g-T} as the product of the local density of states and the decay rate. It describes all the electrons that, at a certain instant, are leaving the $\alpha i^{\mathrm{th}}$ reservoir to eventually return after wandering around. The inclusion of this term is important because it contributes to define $(1/g_{\alpha ,i})$, which plays a central role in a Keldysh perturbative expansion \cite{GLBE1,Pastawski-Medina} and in a time-dependent formulation of transport. \cite{GLBE2} Therefore, in a steady-state calculation it is enough to express Eq.
\ref{eq:Kirchhoff1} as:
\begin{equation}
I_{\alpha i}=\frac{\left\vert e\right\vert }{h}\left[ (R_{\alpha i}-1)\delta \mu _{\alpha i}+\sum\limits_{\beta =L,\phi }\underset{\alpha i\neq \beta j}{\sum\limits_{j=1}^{N}}T_{\alpha i,\beta j}\delta \mu _{\beta ,j}\right] .  \label{Eq:Kirchhoff-current}
\end{equation}
It can be arranged in a compact matrix notation, separating the processes associated with the leads from the decoherent ones. The actual currents at the leads are arranged in the vector $\overrightarrow{I}_{\lambda }$, while the vanishing currents at the decoherent channels are arranged in $\overrightarrow{I}_{\phi }\equiv \overrightarrow{0}$. Thus,
\begin{equation}
\left(
\begin{array}{c}
\overrightarrow{I}_{\lambda } \\
\overrightarrow{0}
\end{array}
\right) =\frac{\left\vert e\right\vert }{h}\mathbb{T}\left(
\begin{array}{c}
\delta \overrightarrow{\mu _{\lambda }} \\
\delta \overrightarrow{\mu _{\phi }}
\end{array}
\right) .  \label{eq:CurrentsMatrix}
\end{equation}
Here, the non-diagonal elements of $\mathbb{T}$ are transmission probabilities and thus they are positive. In contrast, the diagonal elements are negative. Moreover, the sum over any column or row vanishes. This matrix can also be subdivided in the same block structure:
\begin{equation}
\mathbb{T}=\left[
\begin{array}{cc}
\mathbb{T}_{\lambda \lambda } & \mathbb{T}_{\lambda \phi } \\
\mathbb{T}_{\phi \lambda } & \mathbb{T}_{\phi \phi }
\end{array}
\right] .
\end{equation}
This notation stresses that $\mathbb{T}_{\lambda \lambda }$ only involves terms that connect real leads, $\mathbb{T}_{\phi \phi }$ only involves transmissions between decoherent channels and, finally, the blocks $\mathbb{T}_{\lambda \phi }$ and $\mathbb{T}_{\phi \lambda }$ connect leads with decoherent processes. Thus, both the $\lambda $ and $\phi $ subscripts may be vectors themselves, indicating processes (current leads $\ell $ or dephasing processes $\phi $) and states in the system ($n=1,...,N$).
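The sign structure of $\mathbb{T}$ is what enforces current conservation. A toy illustration (with hypothetical, made-up transmission values) of this Kirchhoff property:

```python
# Toy illustration of the structure of the matrix T in the current-voltage
# relation: off-diagonal entries are transmission probabilities, diagonal
# entries are R_i - 1, so each row and column sums to zero and the total
# current vanishes for ANY set of chemical potentials.

T_off = [[0.0, 0.3, 0.1],
         [0.3, 0.0, 0.5],
         [0.1, 0.5, 0.0]]                  # symmetric pairwise transmissions
n = len(T_off)
T = [row[:] for row in T_off]
for i in range(n):
    T[i][i] = -sum(T_off[j][i] for j in range(n))   # diagonal = R_i - 1

mu = [1.0, 0.2, -0.4]                      # arbitrary chemical potentials
I = [sum(T[i][j] * mu[j] for j in range(n)) for i in range(n)]  # ~ (h/e) I_i
total_current = sum(I)                     # Kirchhoff: must vanish
```

Because every column of $\mathbb{T}$ sums to zero, the sum of all currents is zero regardless of the potentials, which is the matrix form of Kirchhoff's law invoked in the text.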
For instance, for a system with a single resonant state, identified as $1$, coupled to two terminals and a single decoherent process, $\lambda =(L1,R1)$ and $\phi =\phi 1$. The fact that the on-site chemical potentials at the decoherent channels ensure that no net current flows through them allows us to evaluate $\overrightarrow{\delta \mu }_{\phi }$ from Eq. \ref{eq:CurrentsMatrix}:
\begin{equation}
\overrightarrow{\delta \mu }_{\phi }=\left[ -\mathbb{T}_{\phi \phi }\right] ^{-1}\mathbb{T}_{\phi \lambda }\overrightarrow{\delta \mu }_{\lambda }.
\end{equation}
Here, $\overrightarrow{\delta \mu }_{\phi }$ provides the chemical potential profile at the sites undergoing decoherence. Notice that, if used in a local space representation, these chemical potentials do not distinguish left-going from right-going electrons. Thus, they induce momentum-relaxing decoherence. \cite{GLBE1,Gasparian96,Datta2007} The decimative procedure involves a simple algebraic relation between the real channels of the system and the chemical potentials associated with current drains or sources. From Eq. \ref{eq:CurrentsMatrix}, it is straightforward to isolate $\overrightarrow{I}_{\lambda }$, arriving at the expression
\begin{equation}
\overrightarrow{I}_{\lambda }=\frac{e}{h}\widetilde{\mathbb{T}}_{\lambda \lambda }\overrightarrow{\delta \mu }_{\lambda },  \label{eq:Idec_matrix}
\end{equation}
and therefore the dimensionless effective conductances are the non-diagonal elements of the matrix
\begin{equation}
\tilde{\mathbb{T}}_{\lambda \lambda }=\mathbb{T}_{\lambda \lambda }+\mathbb{T}_{\lambda \phi }[-\mathbb{T}_{\phi \phi }]^{-1}\mathbb{T}_{\phi \lambda },  \label{eq:Teff_matrix}
\end{equation}
where the first term represents the coherent transmissions while the second involves all the possible transmissions undergoing at least one decoherent process. This last term involves the inversion of a typically big $N\times N$ matrix.
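This elimination step can be sketched numerically. The following toy example (hypothetical numbers for a 2-lead, 2-dephasing-channel system, chosen so that all rows and columns of the full matrix sum to zero) computes the chemical potential profile at the dephasing sites and the effective lead-lead block, and verifies that the net current at every decoherent channel indeed vanishes:

```python
# Sketch of the elimination of decoherent channels: their vanishing net
# currents fix the potentials delta-mu_phi, and folding them back yields
# the effective lead-lead transmission matrix.

Tll = [[-0.6, 0.1], [0.1, -0.7]]    # lead-lead block (diagonal: R - 1)
Tlf = [[0.2, 0.3], [0.4, 0.2]]      # lead <-> dephasing blocks
Tfl = [[0.2, 0.4], [0.3, 0.2]]
Tff = [[-0.8, 0.2], [0.2, -0.7]]    # dephasing-dephasing block

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

def inv2(M):                        # closed-form inverse of a 2x2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

mu_l = [1.0, 0.0]                                  # applied lead potentials
W = inv2([[-x for x in row] for row in Tff])       # [-T_ff]^(-1)
mu_f = mat_vec(mat_mul(W, Tfl), mu_l)              # profile at dephasing sites
TW = mat_mul(Tlf, mat_mul(W, Tfl))
Ttilde = [[Tll[i][j] + TW[i][j] for j in range(2)] for i in range(2)]

# sanity: the net current at every dephasing channel vanishes by construction
I_phi = [a + b for a, b in zip(mat_vec(Tfl, mu_l), mat_vec(Tff, mu_f))]
```

Note that the effective block inherits the Kirchhoff property of the full matrix: its columns still sum to zero, so lead currents remain balanced.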
Notice that the matrix in square brackets would correspond to $\mathbb{W}$ in the original D'Amato and Pastawski paper, see Eq. \ref{eq:Teff_DP}. \cite{Damato-Pastawski} However, the matrix inversion can be performed by resorting to a recursive decimation of the ${N}$ dephasing channels, taken one by one. Starting from the first one, at each stage of the decimation all the remaining probes and dephasing channels become renormalized according to the following recursive scheme for the matrix elements of $\tilde{\mathbb{T}}$:
\begin{eqnarray}
\tilde{T}_{ij}^{\left[ 0\right] } &=&T_{ij},  \label{Eq:Trans_order_0} \\
\tilde{T}_{ij}^{\left[ k\right] } &=&\tilde{T}_{ij}^{\left[ k-1\right] }+\tilde{T}_{i,k}^{\left[ k-1\right] }\frac{-1}{\tilde{T}_{k,k}^{\left[ k-1\right] }}\tilde{T}_{k,j}^{\left[ k-1\right] }.  \label{Eq:Trans_recursive}
\end{eqnarray}
Here, $k$ runs over the dephasing channel indices $\phi 1...\phi N$, and $\tilde{T}_{ij}^{\left[ k\right] }$ stands for the matrix element ${i,j}$ (each index taking the values $\{\ell 1...\ell M,\phi 1,...,\phi N\}$) of the matrix $\mathbb{T}$ after the decimation of $k$ incoherent channels. This recursion algorithm can become particularly useful when only the effective transmission among a few external channels is needed. Once all of them have been decimated, we have an effective transmission matrix $\tilde{\mathbb{T}}\equiv \tilde{\mathbb{T}}^{[N]}$ given by:
\begin{equation}
\widetilde{\mathbb{T}}=\left[
\begin{array}{cccc}
\tilde{R}_{\ell 1}-1 & \tilde{T}_{\ell 1,\ell 2} & \cdots & \tilde{T}_{\ell 1,\ell M} \\
\vdots & \vdots & \ddots & \vdots \\
\tilde{T}_{\ell M,\ell 1} & \tilde{T}_{\ell M,\ell 2} & \cdots & \tilde{R}_{\ell M}-1
\end{array}
\right] ,  \label{eq:Weff-matrix}
\end{equation}
which accounts for the overall (coherent plus incoherent) transmission through the system between different current channels.
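The equivalence between the channel-by-channel recursion and the one-shot block inversion can be checked directly. A sketch with the same hypothetical 4-channel numbers as above (two leads, two dephasing channels):

```python
# Sketch of the recursive channel decimation: each step applies
# T_ij -> T_ij + T_ik (-1/T_kk) T_kj and removes channel k. After removing
# both dephasing channels, the surviving 2x2 lead block must coincide with
# the direct block elimination T_ll + T_lf [-T_ff]^(-1) T_fl.

T = [[-0.6,  0.1,  0.2,  0.3],
     [ 0.1, -0.7,  0.4,  0.2],
     [ 0.2,  0.4, -0.8,  0.2],
     [ 0.3,  0.2,  0.2, -0.7]]      # channel order: l1, l2, phi1, phi2

def decimate_channel(T, k):
    """One step of the recursion: renormalize and drop row/column k."""
    keep = [i for i in range(len(T)) if i != k]
    return [[T[i][j] - T[i][k] * T[k][j] / T[k][k] for j in keep]
            for i in keep]

Tt = decimate_channel(decimate_channel(T, 3), 2)    # phi2, then phi1

# reference: eliminate the 2x2 dephasing block in one shot
nTff = [[-T[i][j] for j in (2, 3)] for i in (2, 3)]
det = nTff[0][0] * nTff[1][1] - nTff[0][1] * nTff[1][0]
W = [[ nTff[1][1] / det, -nTff[0][1] / det],
     [-nTff[1][0] / det,  nTff[0][0] / det]]
Tref = [[T[i][j] + sum(T[i][2 + a] * W[a][b] * T[2 + b][j]
                       for a in range(2) for b in range(2))
         for j in range(2)] for i in range(2)]
```

Each decimation step is just one stage of Gaussian elimination on the dephasing block, which is why the recursion reproduces the matrix-inversion result exactly.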
This effective transmission matrix relates the real currents at each site of the sample with the voltages associated with each electron reservoir. It should be noticed that the sums over rows or columns, both of the original $\mathbb{T}$ and of $\tilde{\mathbb{T}}$, must be zero, in accordance with Kirchhoff's law. At this point, there is a particular situation that should be discussed: a unique voltage difference between two sets of channels. This results in a single chemical potential difference. For example, assume that all the channels associated with a current source in the \textquotedblleft left\textquotedblright\ source $L$ have the same chemical potential, $\delta \mu _{L}$, and all those in the current sink $R$ have $\delta \mu _{R}$. We can rewrite the net current as:
\begin{eqnarray}
\mathtt{I} &=&\sum\limits_{i}I_{i}=\frac{e}{h}\sum\limits_{j}^{M_{R}}\sum\limits_{i}^{M_{L}}\tilde{T}_{Rj,Li}(\delta \mu _{L}-\delta \mu _{R})  \label{eq:Curr_2Volt} \\
&=&\mathrm{Tr}\left[ 4\boldsymbol{\Gamma }_{R}\mathbb{G}_{N1}^{R}\boldsymbol{\Gamma }_{L}\mathbb{G}_{1N}^{A}\right] (\delta \mu _{L}-\delta \mu _{R})  \label{Eq. Tr(ImGImG)} \\
&=&\mathtt{GV},  \label{eq:GV}
\end{eqnarray}
where $\mathtt{G}$ is the effective conductance and $\mathtt{V}=(\delta \mu _{L}-\delta \mu _{R})/e$ is the applied voltage. Notice that $\boldsymbol{\Gamma }_{L}$ and $\boldsymbol{\Gamma }_{R}$ are square matrices with dimensions $M_{L}\times M_{L}$ and $M_{R}\times M_{R}$, associated with the $M=M_{L}+M_{R}$ quantum channels at the leads $L$ and $R$. Since the final expression is the trace of a matrix product, the result does not depend on the chosen basis. In the most general case of several chemical potentials, Eqs. \ref{eq:Curr_2Volt}-\ref{eq:GV} cannot be used and one should rely on Eqs. \ref{eq:Idec_matrix} and \ref{eq:Teff_matrix}, which are the general solution of the multi-terminal DP model. These are the main results of this work, together with the algorithms for the Green's functions, Eqs.
\ref{eq:thoulessL} and \ref{eq:thoulessR}, and for the effective transmittances, Eqs. \ref{Eq:Trans_order_0} and \ref{Eq:Trans_recursive}. All of them will be tested in physically relevant situations in the next two sections.

\section{\label{sec:example1} Application: Decoherence in a Model for a SASER}

The explicit description of vibrational degrees of freedom in a transport problem requires a multichannel formulation, even in a two-probe configuration. This is because one must resort to a Fock-space representation of the Hamiltonian describing electrons and phonons. Such situations occur in vibrational spectroscopy,\cite{Stipe98,Park} polaronic models,\cite{BoncaTrugman,BoncaTrugman97} photon-assisted tunneling,\cite{Stafford96,Jauho98} as well as in time-dependent classical electromagnetic fields in Floquet representations.\cite{LuisAPL11} We will analyze a simple model that represents this family of problems: independent electrons tunneling through a resonance where they are strongly coupled to a quantized vibrational mode. In particular, we describe the optical phonon-assisted tunneling in a double barrier device. It manifests as a satellite peak in the I-V curve. This mechanism led to one \cite{Foa2001} of the various proposals for a phonon laser (SASER).\cite{Kent2010} In such a proposal, a substantial part of the electrons contributing to the current emit an optical phonon. This constitutes the basis for a coherent ultrasound source.\cite{ChemPhys2002,Camps2001} The efficiency of the device depends on the contrast between the satellite peak and the valley, which in turn is determined by specific quantum interferences among the participating channels. Thus, we will explore whether these interferences survive the decoherence induced by the acoustic phonons.
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=2.8in]{fig3.eps}
\end{center}
\caption{Fock-space representation of states $|j,n\rangle $.
The middle row represents local electronic states $j$ with $n$ phonons. Lower and upper rows describe the same electronic tight-binding chain but with different numbers of phonons. Vertical lines are local electron-phonon couplings restricted to site $0$.}
\label{Gr:FockSpace}
\end{figure}
\textit{Model.} Consider a \textquotedblleft local\textquotedblright\ electronic resonant state labeled $0$. There, the electron is coupled to a single vibrational mode, with frequency $\omega _{0}$, whose occupation is associated with the bosonic number operator $\hat{b}^{\dagger }\hat{b}$. This is represented by the electron-phonon Hamiltonian
\begin{equation}
\hat{H}_{S}=E_{0}\hat{c}_{0}^{+}\hat{c}_{0}^{{}}+\hbar \omega _{0}\left( \hat{b}^{+}\hat{b}^{{}}+\tfrac{1}{2}\right) +V_{g}(\hat{b}^{+}+\hat{b}^{{}})\hat{c}_{0}^{+}\hat{c}_{0}^{{}}.
\end{equation}
The eigenstates of this Hamiltonian are the polaron states,\cite{BrazJP,Wingreen1988} whose eigenenergies are
\begin{equation}
E_{0,n}=E_{0}+\hbar \omega _{0}\left( n+\frac{1}{2}\right) -\frac{|V_{g}|^{2}}{\hbar \omega _{0}}.
\end{equation}
The electrons can jump \textit{in} and \textit{out} of the resonant state to the left and right leads. They can also suffer decoherent processes, at a rate $2\Gamma _{\phi }/\hbar $, in an FGR approximation. The resulting effective Hamiltonian is:
\begin{equation}
\hat{H}_{\mathrm{eff}}=\hat{H}_{S}+\hat{\Sigma}_{L}+\hat{\Sigma}_{R}+\hat{\Sigma}_{\phi },
\end{equation}
where $\hat{\Sigma}_{L}$ and $\hat{\Sigma}_{R}$ describe the escape to the current leads and $\hat{\Sigma}_{\phi }$ the escape associated with decoherence. They are
\begin{equation}
\hat{\Sigma}_{L}+\hat{\Sigma}_{R}+\hat{\Sigma}_{\phi }=\left[ \Sigma _{L}(\varepsilon )+\Sigma _{R}(\varepsilon )-\mathrm{i}\Gamma _{\phi }\right] \hat{c}_{0}^{+}\hat{c}_{0}^{{}}.
\end{equation}
Notice that these self-energies must account for the high voltage difference required for SASER operation as an offset in the band centers of the left and right leads, $E_{L}-E_{R}=e\mathtt{V}$. We have omitted the real part of the decoherent process, which is not relevant in the present case. As discussed before, \cite{ChemPhys2002} optical phonon absorption and emission can be viewed as \textquotedblleft vertical\textquotedblright\ processes in a two-dimensional network. Thus, transport in the Fock space is computationally equivalent to a tight-binding model with an expanded dimensionality, as shown in Fig. \ref{Gr:FockSpace}.\cite{ChemPhys2002,BrazJP,BoncaTrugman}
\begin{figure*}[tbph]
\begin{center}
\includegraphics[width=5.5in]{fig4.eps}
\end{center}
\caption{Multichannel decoherent transmission for the polaron model, with $\hbar \protect\omega _{0}=0.2$ eV, $E_{0}=-1.5$ eV. (a) Local electronic state without coupling to the phonons ($V_{g}=0$); (b) transmission probability for an electron leaving the sample without a change in the phonon state ($V_{g}=0.1$); (c) transmission probability for an electron that leaves the sample emitting one phonon ($V_{g}=0.1$); (d) total decoherent transmission probability.}
\label{Gr:all}
\end{figure*}
When an electron comes from the left side, it arrives at the resonant site, where it couples to the $n_{0}$ phonons present in the well. It can either keep its original kinetic energy $\varepsilon -\left( n+\tfrac{1}{2}\right) \hbar \omega _{0}$ or change it by emitting or absorbing $\Delta n$ phonons. Thus, the transmission probabilities of each contribution are given by:
\begin{equation}
T_{R(n_{0}+\Delta n),Ln_{0}}^{{}}=2\Gamma _{R(n_{0}+\Delta n)}G_{n_{0}+\Delta n,n_{0}}^{R}2\Gamma _{Ln_{0}}G_{n_{0},n_{0}+\Delta n}^{A}.  \label{eq:T_eph}
\end{equation}
Notice that the subscripts represent channels in the Fock space.
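The polaron eigenenergies form an evenly spaced ladder, rigidly shifted by the polaronic term. A quick numerical check, using the parameter values assumed from the example discussed later in this section ($E_{0}=-1.5$ eV, $\hbar \omega _{0}=0.2$ eV, $V_{g}=-0.1$ eV):

```python
# Quick check of the polaron ladder
#   E_{0,n} = E_0 + hbar*w0*(n + 1/2) - |Vg|^2/(hbar*w0):
# levels are spaced by hbar*w0 and rigidly shifted by the polaron term.
# Parameter values are those assumed for the example in this section.

E0, hw0, Vg = -1.5, 0.2, -0.1                  # eV
shift = abs(Vg) ** 2 / hw0                     # polaronic shift: 0.05 eV
E = [E0 + hw0 * (n + 0.5) - shift for n in range(5)]
spacings = [E[n + 1] - E[n] for n in range(4)]
```

The uniform spacing $\hbar \omega _{0}$ is what places the phonon-emission satellite one optical-phonon energy away from the main resonance.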
As a consequence of the trivial energy shift associated with the presence of phonons,
\begin{equation}
\Gamma _{\alpha n}(\varepsilon )=\Gamma \left( \varepsilon -E_{\alpha }-\left( n+\tfrac{1}{2}\right) \hbar \omega _{0}\right) ,
\end{equation}
for $\alpha =L,R$, as defined in Eqs. \ref{Dyson}-\ref{self_energy}. Voltages are accounted for by $E_{\alpha }$. Each of these processes contributes to the total coherent transmission, which is given by
\begin{equation}
T_{RL}(\varepsilon )=\sum\limits_{\Delta n=-n_{0}}^{\infty }T_{n_{0}+\Delta n,n_{0}}(\varepsilon ).
\end{equation}
In an actual device, the current would be obtained by integrating over $\varepsilon $ with the appropriate Fermi functions. Here, we might recall that Ref. \cite{Emberly2000} suggested that, in the Fock space, \textquotedblleft vertical\textquotedblright\ hoppings could be blocked by the presence of other electrons arriving with different initial energies. However, when the kinetic energy of the incoming electrons satisfies $E_{F}\leq \hbar \omega _{0}\leq e\mathtt{V}$, the applied voltage always enables phonon emission, \cite{ChemPhys2002,BrazJP,BoncaTrugman} ruling out the eventual problem of overflow \cite{Cattena2012} and ensuring the physical significance of our model. The decoherence is induced by the finite lifetime of the polaron states through an imaginary correction in the self-energies of Eq. \ref{sigma-decoher}. The available \textquotedblleft direct\textquotedblright\ channels are associated with the transmission probabilities of Eq. \ref{eq:T_eph}. Because of the wide-band approximation for the dephasing channels, the energy uncertainty is independent of $\varepsilon $:
\begin{equation}
\Gamma _{\phi n}(\varepsilon )\equiv \Gamma _{\phi }.
\end{equation}
Optical phonon emission or absorption processes give rise to decoherent processes, even when $\Gamma _{\phi }=0$.
This leaves us with several possible dephasing channels, whose transmittances are
\begin{equation}
T_{\beta (n_{0}+\Delta n),\alpha n_{0}}^{{}}=2\Gamma _{\beta \left( n_{0}+\Delta n\right) }(\varepsilon )|G_{n_{0}+\Delta n,n_{0}}^{R}(\varepsilon )|^{2}2\Gamma _{\alpha n_{0}}(\varepsilon ).  \label{eq:T_eph_dec}
\end{equation}
Here, $\alpha $ and $\beta $ are either $R$, $L$ or $\phi $. From these transmissions, and using Eqs. \ref{eq:Teff_matrix} and \ref{eq:Weff-matrix}, we obtain the effective transmissions through the available real channels. Instead of using the SASER operation regime ($n_{0}\gg 1$), for pedagogical reasons we will assume that the injected electrons find $n_{0}=0$ phonons, a situation that describes a vibrational spectroscopy experiment. Then, the total transmission is simply
\begin{equation}
\tilde{T}_{LR}(\varepsilon )=\sum\limits_{n=0}^{N}\tilde{T}_{LR}^{(n)}(\varepsilon ),
\end{equation}
where each $\tilde{T}_{LR}^{(n)}$ includes the decimation of the incoherent channels as in Eq. \ref{eq:Decim-T}. In what follows we will analyze $\tilde{T}_{LR}(\varepsilon )$, which is also the relevant quantity for the study of the non-linear response (see Eq. 134 in Ref. \cite{Pastawski-Medina}). The total transmission as a function of energy is shown in Fig. \ref{Gr:all}. The Hamiltonian parameters are roughly representative of a double-barrier resonant tunneling device where electron-phonon interactions manifest as a satellite peak in the conductance. \cite{Foa2001} There, $E_{0}=-1.5$ eV, $V_{R}=V_{L}=-0.1$ eV, $\hbar \omega _{0}=0.2$ eV and $V_{g}\simeq -0.1$ eV. We discriminate among the different vertical processes contributing to the total transmittance. When the coupling between the local electronic state and the phonon mode is neglected, $V_{g}=0$, the problem becomes one-dimensional with a unique resonance, as shown in Fig. \ref{Gr:all}-a. The effect of the environment, accounted for with the DP model, is a broadening of the original resonance.
When the local electronic state is strongly coupled to the phonon field, $\left\vert V_{g}\right\vert \gg 0$, there are extra paths available to the conduction electrons in the Fock space. Different electron pathways in the coherent picture can interfere destructively, e.g., those that traverse the resonance straight away and those that previously emit and absorb a virtual phonon. These give rise to the anti-resonances in Figs. \ref{Gr:all}-b and \ref{Gr:all}-c. Since these are coherent phenomena, they may be destroyed when decoherent events are present. This is made evident in Fig. \ref{Gr:all}-d, where the total electron transmission probability in a multi-phonon process is compared with that of the same configuration with added decoherence, according to the multi-terminal DP model. The energy uncertainty used is $\Gamma _{\phi }=0.026$ eV $\sim k_{B}T_{R}$, where $k_{B}$ is the Boltzmann constant and $T_{R}$ stands for a room temperature of $300$ K. Although one might evaluate $\Gamma _{\phi }$ from the electronic energy uncertainties obtained with the help of ab-initio computations, the behavior of $\tilde{T}$ as a function of $\Gamma _{\phi }$ is smooth, provided that these local uncertainties are small compared with the typical tunneling rates from the local resonances, $\Gamma _{L(R)}\gg \Gamma _{\phi }$. Therefore, small variations in the precise value of $\Gamma _{\phi }$ do not change the general behavior of $\tilde{T}$. This is illustrated in Fig. \ref{Gr:colormap}, where a color map shows how $\Gamma _{\phi }$ affects the total transmission probability in the range [$0$ eV, $0.025$ eV].
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=3.4in]{fig5.eps}
\end{center}
\caption{Multichannel decoherent transmission for the polaron model in a color map. The transmission probability is shown in a color scale, as a function of the incident electron Fermi energy and the strength of the imaginary energy shift $\Gamma _{\protect\phi }$.
The behavior of $\tilde{T}$ is shown to be a smooth function of $\Gamma _{\protect\phi }$.} \label{Gr:colormap} \end{figure} We confirmed the general trend that decoherence broadens and lowers the resonance peaks and raise the tails. But more importantly, valleys are shaped by multi-phonon coherent processes that produce anti-resonances. These resulted very sensitive to decoherence. Thus, these effects should be considered in assessing the efficiency of a SASER. \section{\label{sec:example2} Application: Quantum to Classical transition in a Model for Giant Magnetoresistance.} Spintronics often requires to distinguish how each spin projection contribute to the current and to identify the spin dependent voltage profiles, i.e. the chemical potentials $\delta \mu $'s. These are absent from the original solution of the DP model that just provides the total current, $I_{LR}=(e/h)T_{eff}\delta \mu $ (see section \ref{sec:DP} ). This limitation was overcomed by the previous sections, where a specific current I_{j}$, at spin-channel $j$, can be readily calculated from eq. \re {eq:CurrentsMatrix}, as $I_{j}=e/h\sum_{i}\left( \mathbb{T}\right) _{ji}\delta \mu _{i}$. Spin-dependent electron transport in ferromagnetic metals presents high rates of scattering events that could make a fully coherent treatment somewhat unrealistic. The standard approach is to use the semiclassical Boltzmann equation \cite{Valet-Fert}. However, in these models quantum mechanic effects are completely neglected from the very beginning. These effects can become important and interesting to study. For instance, ref. \cite{FernAlcPast13} shows that spin-dependent transmittances in nanowires with a modulated magnetic field may present Rabi oscillations. In these situations, a Hamiltonian model capable of reaching a semiclassical limit, such as the DP, can be very useful. 
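As a toy illustration of how the channel-resolved currents follow from $\mathbb{T}$ and the chemical potentials, consider the sketch below. The $2\times 2$ matrix is made up for illustration and is not derived from any Hamiltonian; the only property we assume is that its rows sum to zero, so that a uniform shift of all chemical potentials drives no current:

```python
import numpy as np

# Hedged sketch of I_j = (e/h) * sum_i (T)_ji * delta_mu_i for two channels.
# The matrix entries are illustrative placeholders; the only assumed property
# is that each row sums to zero (no current at equal chemical potentials).
e_over_h = 1.0                       # work in units where e/h = 1
T = np.array([[-0.3,  0.3],
              [ 0.3, -0.3]])         # toy effective transmission matrix
delta_mu = np.array([1.0, 0.0])      # bias: mu_L = 1, mu_R = 0
I = e_over_h * T @ delta_mu
assert abs(I.sum()) < 1e-12          # current conservation: I_L = -I_R
```

In the spin-resolved problem discussed next, the same matrix-vector structure holds, with one row and column per spin channel at each lead.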
In this section, we use the multi-terminal DP model to treat one of the paradigmatic phenomena of spintronics, the Giant Magnetoresistance (GMR). We will show that one can go from a purely quantum regime, described by a Hamiltonian, to the (semi)classical limit of GMR, just by varying a single parameter: the `decoherent' scattering rate. Giant Magnetoresistance may occur in systems composed of two layers of a ferromagnetic metal whose relative magnetization can be switched. In these materials, the rate of scattering depends on the electron spin. Thus, the electrical resistance depends on the relative orientation between the spin and the layer's magnetization. If the two layers have their magnetizations aligned, there is a spin orientation with low resistance that dominates transport. On the other hand, when the magnetizations are anti-aligned, both spin channels have high resistance. \cite{Fert08} \textit{Model}. Let us consider a one-dimensional system composed of two adjacent `layers' or portions of a single-domain ferromagnetic metal. We choose the relative magnetization in an anti-aligned configuration (Fig. \ref{fig:ChemPot}-$a$). This system is connected to non-magnetic contacts at each side, labeled by $L$ and $R$. Thus, the current flows perpendicular to the magnetic interface. As usual, each spin is regarded as an independent channel at the contacts. Thus, at the leads, each spin projection is characterized by the chemical potentials $\mu _{L\uparrow }$, $\mu _{L\downarrow }$, $\mu _{R\uparrow }$, and $\mu _{R\downarrow }$. Since we are considering non-ferromagnetic contacts, the chemical potentials at the leads are spin independent. Inside the system, the electrons undergo scattering processes producing the spin-dependent resistance. Since it is fair to neglect Anderson localization, we can use the equivalence between delta function impurities and local decoherent scattering processes.
As in the Ohmic limit of the DP model \cite{Damato-Pastawski,GLBE1}, they can be characterized by the parameter $\Gamma _{\phi }$. This is related to the mean free time, $\tau _{\sigma }$, through $\Gamma _{\sigma }=\hbar /(2\tau _{\sigma })$. Then, the Ohmic conductance is proportional to the mean free path, $\ell _{\sigma }=v_{F}\tau _{\sigma }$. Note that the rate $\Gamma _{\sigma }$ depends on the relative orientation between the spin and the local magnetization. Thus, spin $\uparrow $ has a scattering rate $\Gamma _{\phi 1}$ at the first layer and $\Gamma _{\phi 2}$ at the second one. The opposite spin has the complementary rates. As in previous works,\cite{Gopar,FernAlcPast13} the system's Hamiltonian $\hat{H}_{S}$ is described in a tight-binding approach that includes local spin-reversing interactions: \begin{eqnarray} \hat{H}_{S} &=&{\sum\limits_{i=-N}^{N}}\sum\limits_{\sigma =\uparrow ,\downarrow }\left[ E_{i,\sigma }^{{}}\hat{c}_{i,\sigma }^{\dag }\hat{c}_{i,\sigma }^{{~}}+V\left( \hat{c}_{i\sigma }^{\dagger }\hat{c}_{i+1\sigma }^{{~}}+\mathrm{c.c.}\right) \right] \notag \\ &&+{\sum\limits_{i=-N}^{N}}V_{\downarrow \uparrow }\left[ \hat{c}_{i\downarrow }^{\dagger }\hat{c}_{i\uparrow }^{{~}}+\mathrm{c.c.}\right] . \label{HamGMR} \end{eqnarray} The label $i$ indicates sites on a lattice with unit cell $a$, $E_{i,\sigma }^{{}}$ is the energy at the site $i$ with spin $\sigma $, and the operator $\hat{c}_{i,\sigma }^{\dag }$ ($\hat{c}_{i,\sigma }^{{~}}$) creates (annihilates) a particle at the site $i$ with spin $\sigma $. The first two terms of $\hat{H}_{S}$ account for the site energies and the spin-conserving hopping, $V$, between adjacent sites. $V$ is chosen as the unit of energy. In a graphical representation, each spin orientation is represented by a chain of sites interconnected by $V$. Thus, two chains of sites are needed to represent the spin-dependent transport along this ferromagnetic system (Fig. \ref{fig:ChemPot}-$a$).
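A minimal numerical sketch of Eq. \ref{HamGMR} for a short chain could look as follows. The site count, Zeeman splitting and spin-mixing values are illustrative placeholders in units of $|V|$:

```python
import numpy as np

# Sketch of the tight-binding Hamiltonian of Eq. (HamGMR) for a short chain.
# Basis ordering is (site, spin), spin up = 0, spin down = 1.  The Zeeman
# splitting dEz changes sign at the magnetic interface; all numbers are
# illustrative placeholders.
Nsites, V, Vflip, dEz = 6, -1.0, 0.1, 0.3
H = np.zeros((2 * Nsites, 2 * Nsites))
for i in range(Nsites):
    layer = 1 if i < Nsites // 2 else -1        # first vs. second layer
    H[2*i, 2*i] = layer * dEz                   # E_{i,up}
    H[2*i + 1, 2*i + 1] = -layer * dEz          # E_{i,down}
    H[2*i, 2*i + 1] = H[2*i + 1, 2*i] = Vflip   # spin-mixing term V_{down,up}
    if i + 1 < Nsites:                          # spin-conserving hopping V
        for s in (0, 1):
            H[2*i + s, 2*(i+1) + s] = H[2*(i+1) + s, 2*i + s] = V
assert np.allclose(H, H.T)                      # the Hamiltonian is Hermitian
```

The two spin chains of the graphical representation appear here as the even and odd sublattices of the matrix, connected only through the spin-mixing entries.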
The last term of $\hat{H}_{S}$ models the scattering processes that may change the spin projection, such as scattering with magnetic impurities. Thus, $V_{\downarrow \uparrow }$ is the local spin-reversing or spin-mixing hopping parameter. This is related to a characteristic length scale identified as the spin diffusion length, $L_{sd}$, by \begin{equation} L_{sd}=\frac{\hslash v_{F}}{2\left\vert V_{\downarrow \uparrow }\right\vert }, \end{equation} where $v_{F}$ is the Fermi velocity and $L_{sd}$ is the length scale at which the spin-flipping processes relax the diffusing spin. Thus, within this length, both spin orientations can be considered as independent. $L_{sd}$ is typically much larger than the mean free path. When the electron enters the ferromagnetic material, it undergoes an \textit{exchange interaction} that can be regarded as a Zeeman interaction. Thus, the site energy is $E_{i,\uparrow (\downarrow )}=E_{0}\pm \Delta E_{Z}$, where $i$ is a site of the first layer. As in Eq. \ref{Hamil-efectivo}, the effective Hamiltonian incorporates the leads and the scattering processes through the appropriate self-energies. Now, $\hat{\Sigma}_{L(R)}=\hat{\Sigma}_{L(R)\uparrow }+\hat{\Sigma}_{L(R)\downarrow }$ is the self-energy operator describing the escape to the left (right) lead, given by Eq. \ref{Dyson}, where all hoppings are equal to $V$. Decoherent channels accounting for resistive scattering are associated with each site and included in $\hat{H}$ through the $\hat{\Sigma}_{\phi }$ operator. Thus, $\hat{\Sigma}_{\phi }$ is diagonal in a matrix representation. In the wide band limit, its elements are purely imaginary, i.e. $\left( \hat{\Sigma}_{\phi }\right) _{ii}=-\mathrm{i}\Gamma _{\phi i}$. \begin{figure}[tbh] \begin{center} \includegraphics[width=3.4in]{magnetoresistance.eps} \end{center} \caption{$a)$ On top is a scheme showing the layer's magnetization in the two resistor model for GMR.
Below is a tight-binding representation discriminating the spin projection. The coherence lengths of electrons in the first layer are $\ell _{1}$ and $\ell _{2}$ for up and down spin electrons respectively. Note that the coherence lengths are inverted in the next layer. $\ell _{1}/\ell _{2}=1/2$ in all cases. Figs. $b)$ to $d)$: site-dependent chemical potentials with $\ell _{1}=15~a$ in Fig. $b)$, $\ell _{1}=1500~a$ in Fig. $c)$, and $\ell _{1}=150~a$ in Fig. $d)$. The system length is $1000a$ and $V_{\downarrow \uparrow }=0$ ($L_{sd}\rightarrow \infty $), and the chemical potentials at the leads are $\protect\mu _{L}=\mathrm{V}$ and $\protect\mu _{R}=0$. The Fermi wavelengths at the left side are $\protect\lambda _{F}=45a$, for up spins, and $\protect\lambda _{F}=30a$, for down spins. The opposite holds at the right ferromagnet. The chosen parameters do not represent a specific experimental set up.} \label{fig:ChemPot} \end{figure} \textit{Classical regime of GMR: two resistors model (TRM)}. Here, the system length is much shorter than $L_{sd}$, i.e. $V_{\downarrow \uparrow }\approx 0$ in Eq. \ref{HamGMR}. When electrons enter a ferromagnetic layer they undergo an electrical resistance $\delta R=\mathrm{V}/I_{LR}$ (Ohm's law) that manifests as a linear drop in the chemical potential $\delta \mu $. Therefore, in the anti-aligned configuration, there are two linear potential drops of $\delta \mu $ with slopes proportional to the spin-dependent resistance of each layer. A splitting of the chemical potentials forming a diamond-like figure is then expected. This is precisely what we obtain using the multi-terminal DP method with mean free paths shorter than the system size. Figs. \ref{fig:ChemPot}-$b$ to \ref{fig:ChemPot}-$d$ show this through the site-dependent chemical potential. In contrast, for the quantum limit of long mean free paths, quantum interferences are evident.
However, they are smoothed out by increasing the scattering rate until they reach the expected classical diamond-like figure (Fig. \ref{fig:ChemPot}-$b$). \begin{figure}[tbh] \begin{center} \includegraphics[width=2.5in]{diamond.eps} \end{center} \caption{Upper figure: site-dependent chemical potential $\protect\delta \protect\mu _{i}$ profile for the semiclassical model of GMR with finite spin diffusion length, $L_{sd}=100~a$. Lower figure: the local currents $I_{i}$ for up and down spin electrons. System size is $1000a$, $\ell _{1}/\ell _{2}=1/2$, and $\ell _{1}=15a$.} \label{fig:diamond} \end{figure} \textit{Semiclassical regime of GMR: Valet and Fert theory.} Considering finite values for the spin diffusion length, $L_{sd}$, Valet and Fert \cite{Valet-Fert} showed that the difference of the spin-dependent local chemical potentials decays exponentially with the distance to the magnetic interface, with a length scale given by $L_{sd}$. They also showed that the spin-dependent current is inverted on this length scale. In Fig. \ref{fig:diamond} we show that the multi-terminal DP model is also capable of reproducing these behaviors, provided that we turn on the spin-flip term in Eq. \ref{HamGMR}. In the upper figure we show the spin- and site-dependent chemical potentials. One can see that in regions far from the interface, at distances larger than $L_{sd}$, the chemical potentials are nearly the same. In regions close to the interface, the chemical potential drop forms a diamond-like figure that shows the expected spin-dependent exponential contributions summed up to the trivial mean linear drop. In the lower figure we can observe how the inversion of the currents occurs on the length scale $L_{sd}$. For longer distances, the currents reach a stationary value. All these behaviors are in agreement with Ref. \cite{Valet-Fert}.
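The profiles above are computed from Green's functions of an effective Hamiltonian dressed with self-energies. A minimal sketch of that dressing step, with placeholder values and wide-band (purely imaginary, diagonal) self-energies as described earlier, might read:

```python
import numpy as np

# Hedged sketch: dress a Hermitian tight-binding H with diagonal decoherence
# self-energies (Sigma_phi)_ii = -i*Gamma_phi (wide-band limit) plus escape
# widths on the end sites, then invert for the retarded Green's function.
def retarded_G(H, eps, gamma_phi, gamma_L, gamma_R):
    n = H.shape[0]
    Sigma = -1j * gamma_phi * np.eye(n, dtype=complex)
    Sigma[0, 0] += -1j * gamma_L        # left-lead escape width
    Sigma[-1, -1] += -1j * gamma_R      # right-lead escape width
    return np.linalg.inv(eps * np.eye(n) - H - Sigma)

# Illustrative 3-site chain (placeholder energies and hoppings).
H = np.diag([-1.0, -1.0, -1.0]) + np.diag([0.5, 0.5], 1) + np.diag([0.5, 0.5], -1)
G = retarded_G(H, 0.0, gamma_phi=0.01, gamma_L=0.05, gamma_R=0.05)
# Causality check: the diagonal of a retarded G has negative imaginary part.
assert all(G[i, i].imag < 0 for i in range(3))
```

From such a dressed $G$, transmittances and site-resolved chemical potentials follow through the matrix relations quoted in the previous sections.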
This situation reinforces the descriptive conceptual value of the DP model and the versatility of the numerical algorithms developed in this paper. \section{\label{sec:conclusions}Conclusion} In this work, we first reviewed the original two-terminal DP model, which accounts for decoherent effects in quantum transport. Then, we presented an extension of this model which is capable of dealing with multi-terminal setups. Also, we introduced recursive algorithms that allow us to take advantage of the problem symmetries, in particular in the case of general banded Hamiltonians. A unified notation makes its capabilities more transparent. Using specific Hamiltonian models for the phonon laser and giant magnetoresistance, we exemplified how to treat multi-channel problems in the presence of decoherence. We placed special emphasis on the role of decimation procedures in the context of banded effective Hamiltonians, since they can be used as the basis for efficient computational schemes. In particular, one of the keys is given by Eqs. \ref{eq:thoulessR}-\ref{eq:thoulessL}. Note that, in the very common situation of block tridiagonal (i.e. banded) matrix Hamiltonians, these recursive equations provide an efficient decimation procedure that allows one to obtain all the $N\times (N-1)$ non-diagonal blocks of the whole Green's function matrix, $\mathbb{G}$, in terms of the $N$ diagonal blocks. In turn, the latter can be calculated as matrix continued fractions. \cite{MCF-Pastawski} The idea here is to take advantage of the particular system's symmetries, using these expressions to build an efficient computational approach for the problem under study. Profiting from a parallelism between the computation of $\mathbb{G}$ and the decoherent transmittance $\tilde{\mathbb{T}}$ already hinted at by the DP solution, \cite{GLBE1} we also derived a compact matrix equation for $\tilde{\mathbb{T}}$ in a generalized multi-terminal scheme.
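As a toy illustration of the continued-fraction idea, here in its scalar rather than block-matrix form and with placeholder parameters, the self-energy of a semi-infinite chain can be iterated to a fixed point:

```python
# Toy scalar analogue of the matrix-continued-fraction evaluation: the
# self-energy of a semi-infinite chain obeys Sigma = V^2/(eps - E0 - Sigma),
# iterated here to a fixed point.  Block versions replace the division by a
# matrix inverse.  All parameters are illustrative.
def lead_self_energy(eps, E0=0.0, V=1.0, eta=0.01, n_iter=5000):
    z = eps + 1j * eta          # small positive imaginary part: retarded branch
    sigma = 0.0j
    for _ in range(n_iter):
        sigma = V**2 / (z - E0 - sigma)
    return sigma

# Inside the band (|eps - E0| < 2|V|) the converged self-energy acquires a
# negative imaginary part, i.e. a finite escape rate into the chain.
s = lead_self_energy(0.0)
assert s.imag < -0.9
```

The same fixed-point structure underlies the decimation of the internal blocks mentioned above.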
This recursive algorithm relies on decimation procedures. As a first application, we added decoherent processes to the usual model for phonon-assisted tunneling in the configuration used for a phonon laser or SASER. As is well known, \cite{Foa2001} in the I-V curve of a SASER configuration, the contrast between the valley (out of resonance) and the satellite peak (corresponding to phonon emission) is enhanced by the effect of antiresonances. The latter result from the interference between different paths in Fock space. \cite{ChemPhys2002} Besides the expected smoothing out of the resonances, we found that decoherence degrades this contrast mainly through the suppression of the antiresonances. This could set new bounds for the efficiency of SASER operation. \cite{Camps2001} We also solved a simple multi-terminal DP model representative of the giant magnetoresistance (GMR) phenomenon. There, each spin orientation is a different conduction channel. Thus, the spin-dependent transport is intrinsically multi-terminal. We showed that the main characteristics of the GMR can be well reproduced with this simple model. While preserving a Hamiltonian description, it is able to reach the expected classical and semiclassical regimes by means of a single parameter, the local decoherence rate $\Gamma _{\phi i}$. More importantly, as in Fig. \ref{fig:ChemPot}-$c$ and $d$, it opens the possibility of profiting from situations where quantum interference becomes relevant. \cite{FernAlcPast13,Richter12} With increasing system size, molecular electronics undergoes a paradigm shift in its dominant transport mechanism, from \textquotedblleft coherent tunneling\textquotedblright\ to \textquotedblleft incoherent hopping\textquotedblright.
Within this context, the present work should prove especially helpful in providing a computational bridge between these limiting situations, while maintaining a general, transparent, and efficient approach to quantum transport. \section{Acknowledgments} We acknowledge L. E. F. Foa Torres for his comments and stimulating discussions at an early stage of this work. We received financial support from ANPCyT, CONICET, MiNCyT-Cor, and SeCyT-UNC.
\section{Discussions and outlook}\label{Sec:discussion} In this paper we continued our journey of exploring the structure of five-point functions of $\frac{1}{2}$-BPS operators of 4d $\mathcal{N}=4$ SYM in the strongly coupled regime, which is dual to $AdS_5\times S^5$ IIB supergravity. We improved the bootstrap approach of \cite{Goncalves:2019znr}, which relies only on superconformal symmetry and consistency with factorization. The important difference compared to the old approach is that both constraints are now implemented in Mellin space. Moreover, in the new method we only need to use the Drukker-Plefka twist, and the chiral algebra condition is not needed. Using this approach, we obtained in a closed form the Mellin amplitudes for the infinite family of correlators of the form $\langle pp222 \rangle$. Compared to the simplest $\langle 22222\rangle$ case studied in \cite{Goncalves:2019znr}, the pole structure of the Mellin amplitudes of operators with higher KK levels is in general more complicated. However, an important simplifying feature we observed in this paper is a new type of pole truncation phenomenon. We find that the residues of certain poles associated with conformal descendants vanish. Moreover, in the $\langle pp222\rangle$ case the number of poles does not grow with respect to $p$ when $p$ is large enough. Consequently, the pole structure of the Mellin amplitudes is much simpler than what is naively expected. This property played an important role in obtaining the $\langle pp222\rangle$ amplitudes and also gives us hope to bootstrap in closed form more general families of five-point functions with different KK levels. Note that in deriving the pole truncation conditions, we have only used general properties of Mellin factorization. The same argument holds in many other theories and we expect similar simplifications in the pole structure. This leads to a number of possible extensions of our results in different setups.
A prime example to consider is the gluon sector of certain 4d $\mathcal{N}=2$ SCFTs which is dual to SYM in $AdS_5\times S^3$. The first five-point function for the lowest KK level has been computed in \cite{Alday:2022lkk}. To make further progress in computing amplitudes of higher KK levels, one can adapt the strategy used here. One important ingredient which still needs to be worked out is the set of relations between different component correlators of the super four-point functions (see \cite{Bissi:2022wuh} for progress in this direction). This would be the input for exploiting the full power of the Mellin factorization, and would be a direct generalization of what we have done in Appendix \ref{Sec:higherkkSmult}. Another interesting application is to the 6d $\mathcal{N}=(2,0)$ theory, which is dual to eleven dimensional supergravity in $AdS_7\times S^4$. Going beyond five-point functions, an exciting future direction is to compute the supergraviton six-point function of $AdS_5\times S^5$ IIB supergravity. This will provide a new benchmark for the program of holographic correlators at higher points. The results in this paper can already help us gain a nontrivial amount of knowledge of the structure of this new correlator. Moreover, much of the technology developed here, in particular the Mellin Drukker-Plefka twist, can also be straightforwardly applied to that problem. It appears to be a feasible target and we hope to report progress in this direction in the near future. Finally, let us mention that the $\langle pp222\rangle$ five-point functions we computed in this paper contain a wealth of new data of 4d $\mathcal{N}=4$ SYM. Through the OPE, we can extract various non-protected three- and four-point functions.
In \cite{Goncalves:2019znr} we constructed five-point conformal blocks (see \cite{Bercini:2020msp,Buric:2021ywo,Buric:2020dyz,Antunes:2021kmm,Fortin:2022grf} for progress in higher-point conformal blocks) and explained how to use them to extract data from the $p=2$ five-point correlator. It would be interesting to perform a similar analysis here for the $\langle pp222\rangle$ correlators. The expression we have for general $p$ will be helpful for solving the mixing problem for the CFT data, which is similar to the one appearing in four-point functions. It would also be interesting to extract the chiral algebra correlator from our supergravity result and compare with the field theory calculation. The four-point function case has been analyzed in \cite{Rastelli:2017ymc,Behan:2021pzk}. \section{Higher R-charge super multiplet}\label{Sec:higherkkSmult} A key element of the bootstrap analysis undertaken in the main text is the factorization of Mellin amplitudes into lower-point correlators. As explained in Section \ref{Subsec:Mellinfactorization}, we need as input the explicit expressions for the Mellin amplitudes associated with the four-point functions \begin{align} \langle \mathcal{O}_2\mathcal{O}_2\mathcal{O}_2\mathcal{O}_2\rangle \qquad \langle \mathcal{J}_2\mathcal{O}_2\mathcal{O}_2\mathcal{O}_2\rangle \qquad \langle \mathcal{T}_2\mathcal{O}_2\mathcal{O}_2\mathcal{O}_2\rangle \\ \langle \mathcal{O}_2\mathcal{O}_p\mathcal{O}_p\mathcal{O}_2\rangle \qquad \langle \mathcal{J}_2\mathcal{O}_p\mathcal{O}_p\mathcal{O}_2\rangle \qquad \langle \mathcal{T}_2\mathcal{O}_p\mathcal{O}_p\mathcal{O}_2\rangle \\ \langle \mathcal{O}_p\mathcal{O}_p\mathcal{O}_2\mathcal{O}_2\rangle \qquad \langle \mathcal{J}_p\mathcal{O}_p\mathcal{O}_2\mathcal{O}_2\rangle \qquad \langle \mathcal{T}_p\mathcal{O}_p\mathcal{O}_2\mathcal{O}_2\rangle \end{align} where $\mathcal O_p$, $\mathcal{J}_p$ and $\mathcal{T}_p$ denote the following components of the half-BPS supermultiplet $\mathbb{O}_p$
\begin{align} \mathcal{O}_p\,:&\qquad \Delta=p\,,\,\,\,\,\,\,\,\,\,\,\,\,\,\mathcal{R}=[0,p,0]\,,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{spin $0$}\,, \\ \mathcal{J}_p\,:&\qquad \Delta=p+1\,,\,\,\,\mathcal{R}=[1,p-2,1]\,,\,\,\,\,\text{spin $1$}\,, \\ \mathcal{T}_p\,: &\qquad \Delta=p+2\,,\,\,\,\mathcal{R}=[0,p-2,0]\,,\,\,\,\,\text{spin $2$}\,. \end{align} In the special case $p=2$ they correspond respectively to the $\mathfrak{su}(4)$ current and the stress tensor, hence their names. The first goal of this appendix is to explain how the correlators above can be extracted from the $\langle \mathcal{O}_p\mathcal{O}_p\mathcal{O}_2\mathcal{O}_2\rangle$ component. This is a generalization of what has been done in \cite{Belitsky:2014zha} for the case $p=2$. The second goal of this appendix is to explain how the factorization in Mellin space is implemented in the presence of some global symmetry. This is done in Appendix \ref{Sec:rSymmetryGluing}. A final warning about notation is necessary. In the main text we use the six-component null vectors on which the R-symmetry acts linearly. Here, as is natural from the superspace perspective, we will use four-component R-symmetry variables $y$. The basic two-point invariants are identified as \begin{equation} t_{ij}=y_{ij}^2\,. \end{equation} \subsection{Conventions} In the following we will list all the conventions for raising and lowering indices \begin{equation} y^{a \dot a} = \epsilon^{a b} y_{\dot b b} \epsilon^{\dot b \dot a}\,, \end{equation} where the $\epsilon$ tensor is defined with \begin{equation} \epsilon^{12}=\epsilon_{12}=1\,. \end{equation} It follows that \begin{equation} (y_{1i})_{ \dot a a} (y_{1j})^{a \dot a } = y_{1i}^2 + y_{1j}^2 - y_{ij}^2\,, \end{equation} which, in a particular case, becomes \begin{equation} \det y_{ij} = \frac{1}{2} (y_{ij})_{ \dot a a} (y_{ij})^{a \dot a } = y_{ij}^2 \,.
\end{equation} The Schouten identity can be used to show that \begin{equation} \epsilon^{\dot a \dot b} \epsilon^{ b a} y_{ij}^2 = y_{ij}^{a \dot a } y_{ij}^{b \dot b }-y_{ij}^{b \dot a } y_{ij}^{a \dot b } \,. \end{equation} Finally, the inverse can easily be seen to be \begin{equation} y_{ \dot a a }^{-1} = \frac{y_{\dot a a}}{y^2}\,, \end{equation} and, with these conventions, we also have \begin{equation} \frac{\partial}{\partial y^{a \dot a }} y^2 = y_{ \dot a a}\,. \end{equation} \subsection{Differential Operators} In order to consider different components of the $\frac{1}{2}$-BPS supermultiplets we will work in analytic superspace. The eight bosonic and eight fermionic coordinates of this superspace are packaged in a supermatrix \begin{equation} X^{\mathsf{A} \dot{\mathsf{A}}} = \begin{pmatrix} x^{\alpha \dot \alpha } & \rho^{\alpha \dot a} \\ \bar{\rho}^{a \dot \alpha} & y^{ a \dot a} \end{pmatrix}\,, \end{equation} whose superdeterminant is \begin{equation} \mathrm{sdet} X = \frac{\det \left(x^{\alpha \dot \alpha } - \rho^{ \alpha \dot a} y^{-1}_{\dot a a} \bar{\rho} ^{ a \dot \alpha}\right) }{ \det y^{ a \dot a}}\,. \end{equation} The supersymmetrization of the propagator $d_{ij}= y_{ij}^2 / x_{ij}^2$ is given by \begin{equation}\label{SuperD} \hat d_{ij} = \frac{1}{\mathrm{sdet}(X_{ij})} = \frac{y_{ij}^2}{\hat x_{ij}^2}\,, \end{equation} where we introduce the short-hand notation \begin{equation} \hat x^{\alpha \dot \alpha} = x^{ \alpha \dot \alpha} - \rho^{ \alpha \dot a} y^{-1}_{\dot a a} \bar{\rho} ^{ a \dot\alpha}\,. \end{equation} The two-point function of half-BPS superfields $\mathbb{O}_p$ is then simply \begin{equation} \langle \mathbb{O}_p (X_i) \mathbb{O}_p(X_j) \rangle = (\hat d_{ij})^p\,.
\end{equation} The relevant superdescendants are obtained by extracting the appropriate component, acting with certain differential operators: \begin{align} \mathcal J_p&= \frac{1}{2} \mathcal D^{(J)}\mathbb{O}_p(X)\big{|}_{\rho=\bar{\rho}=0} \,,\nonumber\\ \mathcal T_p &= \frac{1}{4} \mathcal D^{(T)}\mathbb{O}_p(X)\big{|}_{\rho=\bar{\rho}=0}\,. \end{align} Given the charges and symmetries of those operators, the ansatz for the differential operators needs to be\footnote{These differential operators depend on $p$ through the coefficients $\mu$, $\nu_1$, $\nu_2$. This dependence is not explicit in the notation.} \begin{equation}\label{DJ} \mathcal D^{(J)}=\lambda^{\alpha}\bar{\lambda}^{\dot{\alpha}}v^a\bar{v}^{\dot{a}}\Big{(} \frac{\partial}{\partial \bar{\rho}^{ a \dot \alpha}} \frac{\partial}{\partial \rho^{ \alpha \dot a}} + \mu \frac{\partial}{\partial y^{ a \dot a}} \frac{\partial}{\partial x^{ \alpha \dot \alpha}}\Big{)}\,, \end{equation} and \begin{align}\label{DT} \mathcal D^{(T)} =& \lambda^{\alpha_1}\lambda^{\alpha_2}\bar{\lambda}^{\dot{\alpha}_1}\bar{\lambda}^{\dot{\alpha}_2} \epsilon^{\dot a_1 \dot a_2}\epsilon^{a_1a_2} \times \Big{(} \frac{\partial}{\partial \bar{\rho}^{a_1 \dot{\alpha}_1}} \frac{\partial}{\partial \bar{\rho}^{a_2 \dot \alpha_2}} \frac{\partial}{\partial \rho^{\alpha_1 \dot a_1}} \frac{\partial}{\partial \rho^{\alpha_2 \dot a_2}}+ \nonumber \\ + \nu_1\, \frac{\partial}{\partial \bar{\rho}^{a_1 \dot \alpha_1}} \frac{\partial}{\partial\rho^{\alpha_1 \dot{a}_1}} \frac{\partial}{\partial y^{a_2\dot a_2}} \frac{\partial}{\partial x^{\alpha_2 \dot \alpha_2}} +\nu_2\, \frac{\partial}{\partial y^{a_1\dot a_1}} \frac{\partial}{\partial y^{a_2\dot a_2}} \frac{\partial}{\partial x^{\alpha_1 \dot \alpha_1}} \frac{\partial}{\partial x^{\alpha_2 \dot \alpha_2}}\Big{)}\,. \end{align} Before fixing the coefficients, let us quote two simple identities which are very useful in the following\footnote{The second identity is obtained as follows
\begin{equation} 0= \frac{\partial}{\partial X^{\mathsf{A}\dot{\mathsf{A}}}}\, \delta^{\mathsf{C}}_{\mathsf{B}}= \frac{\partial}{\partial X^{\mathsf{A}\dot{\mathsf{A}}}}\, X^{\mathsf{C} \dot{\mathsf{B}}}X^{-1}_{\dot{\mathsf{B}}\mathsf{B}} =\delta^{\mathsf{C}}_{\mathsf{A}}\,\delta^{\dot{\mathsf{B}}}_{\dot{\mathsf{A}}}\, X^{-1}_{\dot{\mathsf{B}}\mathsf{B}} +(-1)^{(|\mathsf{A}|+|\dot{\mathsf{A}}|)(|\mathsf{C}|+|\dot{\mathsf{B}}|)}\, X^{\mathsf{C} \dot{\mathsf{B}}}\, \frac{\partial}{\partial X^{\mathsf{A}\dot{\mathsf{A}}}}\, X^{-1}_{\dot{\mathsf{B}}\mathsf{B}}\,. \end{equation} Multiplying this equation by $(-1)^{(|\mathsf{A}|+|\dot{\mathsf{A}}|)(|\mathsf{C}|+|\dot{\mathsf{B}}|)}$ and $X^{-1}_{\dot{\mathsf{C}}\mathsf{C}}$ from the left (with summation over $\mathsf{C}$) we obtain \eqref{derivativeXinverse}. } \begin{equation} \frac{\partial}{\partial X^{\mathsf{A}\dot{\mathsf{A}}}} \frac{1}{\text{sdet}(X)}=-(-1)^{|\mathsf{A}|}\,\frac{X^{-1}_{\dot{\mathsf{A}}\mathsf{A}} }{\text{sdet}(X)}\,, \end{equation} \begin{equation} \label{derivativeXinverse} \frac{\partial}{\partial X^{\mathsf{A}\dot{\mathsf{A}}}}\, X^{-1}_{\dot{\mathsf{B}}\mathsf{B}}\,=\, -(-1)^{(|\mathsf{A}|+|\dot{\mathsf{A}}|)(|\mathsf{A}|+|\dot{\mathsf{B}}|)}\, X^{-1}_{\dot{\mathsf{B}}\mathsf{A}}\,X^{-1}_{\dot{\mathsf{A}}\mathsf{B}}\,, \end{equation} where $|\alpha|=|\dot{\alpha}|=0\,, \,\,\,\, |a|=|\dot{a}|=1$. In order to fix the coefficients in the ansatz \eqref{DJ}, \eqref{DT} it suffices to impose that two-point functions do not have off-diagonal components between different superdescendants. So we impose \begin{equation} \langle \mathcal J_p(1) \mathcal O_p (2) \rangle = \mathcal D^{(J)}_1 \langle \mathbb{O}_p (X_1) \mathbb{O}_p(X_2) \rangle \big|_{\rho,\bar{\rho}=0} \stackrel{!}{=} 0\,, \end{equation} which fixes the unknown coefficient in $\mathcal D^{(J)}$ to be \begin{equation}\label{mu} \mu=\frac{1}{p}\,. 
\end{equation} The action of the resulting operator on the two-point function is given by\footnote{The fact that it vanishes when $p=1$ is consistent with the fact that in this case the (field strength) supermultiplet is ultrashort and does not possess a $\mathcal{J}$ component.} \begin{equation} \mathcal{D}_1^{(J)}\, (\hat d_{12})^p= (1-p^2)\, \lambda_1^{\alpha}\, \bar{\lambda}_1^{\dot{\alpha}}\, v_1^a\, \bar{v}_1^{\dot{a}}\, X^{-1}_{\dot{\alpha} a}\, X^{-1}_{\dot{a} \alpha}\, (\hat d_{12})^p\,, \end{equation} where $X=X_{12}$, from which one derives the two-point function of the descendant $\mathcal{J}$ using the formula \begin{equation} \mathcal{D}_1^{(J)}\,\mathcal{D}_2^{(J)}\, (\hat d_{12})^p\big|_{\rho,\bar{\rho}=0}\,=\, (1-p^2) \left(\bar{\lambda}_1 x_{12}^{-1}\lambda_2\right) \left(\bar{\lambda}_2 x_{12}^{-1}\lambda_1\right) \left(\bar{v}_1 y_{12}^{-1}v_2\right) \left(\bar{v}_2 y_{12}^{-1}v_1\right) (d_{12})^p\,. \end{equation} From this equation we can extract the normalization of $\mathcal{J}_p$. For the spin 2 operator we need to consider \begin{align} \langle \mathcal T_p(1) \mathcal O_p (2) \rangle &= \mathcal D^{(T)}_1 \langle \mathbb{O}_p (X_1) \mathbb{O}_p(X_2) \rangle \big|_{\rho,\bar{\rho}=0}\stackrel{!}{=} 0 \,,\nonumber\\ \langle \mathcal T_p(1) \mathcal J_p (2) \rangle &= \mathcal D^{(T)}_1 \mathcal D^{(J)}_2 \langle \mathbb{O}_p (X_1) \mathbb{O}_p(X_2) \rangle \big|_{\rho,\bar{\rho}=0}\stackrel{!}{=}0 \,, \end{align} which in turn fixes the coefficients $\nu_i$ to be \begin{equation}\label{nu} \nu_1 = - \frac{4}{2+p} \,,\qquad \nu_2 = - \frac{2}{(1+p)(2+p)}\,. \end{equation} In the case of the stress tensor multiplet, when $p=2$, these coefficients agree with those found in \cite{Belitsky:2014zha}.
The action of the resulting operator on the two-point function is given by \begin{equation} \mathcal{D}_1^{(T)}\, (\hat d_{12})^p= 2p^2(p-1)(p+3) \lambda_1^{\alpha_1}\lambda_1^{\alpha_2}\bar{\lambda}_1^{\dot{\alpha}_1}\bar{\lambda}_1^{\dot{\alpha}_2} \epsilon^{\dot a_1 \dot a_2}\epsilon^{a_1a_2} X_{\dot{\alpha}_1 a_1}^{-1} X_{\dot{\alpha}_2 a_2}^{-1} X_{\dot{a}_1 \alpha_1}^{-1} X_{\dot{a}_2 \alpha_2}^{-1} (\hat d_{12})^p\,, \end{equation} where $X=X_{12}$, from which one derives the two-point function of the descendant $\mathcal{T}$ using the formula \begin{equation} \mathcal{D}_1^{(T)}\,\mathcal{D}_2^{(T)}\, (\hat d_{12})^p\big|_{\rho,\bar{\rho}=0}= 16 p^2(p-1)(p+3)\, \left(\bar{\lambda}_1 x_{12}^{-1}\lambda_2\right)^2 \left(\bar{\lambda}_2 x_{12}^{-1}\lambda_1\right)^2 \frac{(y_{12}^2)^{p-2}}{(x_{12}^2)^{p+2}}\,. \end{equation} Three-point functions with one descendant operator can be obtained using the formulae \begin{align} \mathcal{D}_1^{(J)}\, (\hat d_{12})^{a} (\hat d_{13})^{p-a}\big|_{\rho,\bar{\rho}=0} & =\,A \,\Lambda_{1,23} \,V_{1,23}\, (d_{12})^{a} (d_{13})^{p-a}\,, \\ \mathcal{D}_1^{(T)}\, (\hat d_{12})^{a} (\hat d_{13})^{p-a}\big|_{\rho,\bar{\rho}=0} & =\,B \, \left(\Lambda_{1,23}\right)^2 \,\det(y_{12}^{-1}-y_{13}^{-1}) (d_{12})^{a} (d_{13})^{p-a}\,, \end{align} where \begin{equation} \label{eq:LambdaxandLambdaYdef} \Lambda_{1,23}:=\bar{\lambda}_1 (x_{12}^{-1}-x_{13}^{-1})\lambda_1\,, \qquad V_{1,23}:=\bar{v}_1 (y_{12}^{-1}-y_{13}^{-1})v_1\,, \end{equation} and \begin{equation} A=-\frac{a(p-a)}{p}\,, \qquad B=-\frac{8a(a+1)(p-a)(p-a+1)}{(p+1)(p+2)}\,. \end{equation} \subsection{Four-point functions} The two- and three-point functions of $\mathbb{O}_p$ operators are related in a simple way to those of their superprimaries $\mathcal{O}_p$: they are obtained by replacing the propagators $d_{ij}$ with the super-propagators $\hat{d}_{ij}$.
For four-point functions the situation is more involved due to the presence of cross ratios, but it is still true that the correlator of $\mathbb{O}_p$ is uniquely fixed by that of $\mathcal{O}_p$. This is achieved by replacing the familiar space-time and R-symmetry cross ratios \begin{align} u&= \frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2} = z \bar z\,, & v&= \frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}=(1-z)(1-\bar z)\,, \nonumber\\ \sigma&= \frac{y_{12}^2 y_{34}^2}{y_{13}^2 y_{24}^2} = \alpha \bar \alpha\,, & \tau&= \frac{y_{14}^2 y_{23}^2}{y_{13}^2 y_{24}^2}=(1-\alpha)(1-\bar \alpha)\,, \end{align} with their super-symmetrizations, namely the four eigenvalues of the supermatrix \begin{equation} Z = X^{}_{12} X_{13}^{-1} X^{}_{34} X_{24}^{-1} \,. \end{equation} More explicitly, we can extract the independent superconformal invariants by taking four independent supertraces \begin{equation} \label{superTraces} \widehat{t}_k = \mathrm{Str} (Z^k)\,=\, \hat{z}^k + \hat{\bar z}^k - \hat{\alpha}^k -\hat{\bar\alpha}^k\,, \qquad k=1,2,3,4\,. \end{equation} When all fermionic variables are set to zero the matrix above reduces to \begin{equation} Z \big|_{\rho,\bar{\rho}=0} = \begin{pmatrix} x^{}_{12} x_{13}^{-1} x^{}_{34} x_{24}^{-1} &0 \\ 0& y^{}_{12} y_{13}^{-1} y^{}_{34} y_{24}^{-1} \end{pmatrix}\,, \end{equation} and upon taking the supertrace gives \begin{equation}\label{t0} t_k:=\widehat{t}_k \big|_{\rho,\bar{\rho}=0} = z^k + \bar z^k - \alpha^k -\bar\alpha^k\,, \end{equation} which establishes the relation between the quantities $t_k$ and the cross ratios introduced above. In terms of these cross ratios the four-point function reads \begin{equation} \label{superOfourpoints} \langle \mathbb{O}_{p_1}(X_1) \mathbb{O}_{p_1}(X_2) \mathbb{O}_{p_2}(X_3) \mathbb{O}_{p_2}(X_4) \rangle = (\hat d_{12})^{p_1} \,(\hat d_{34})^{p_2}\; \mathcal G( \hat{z}, \hat{\bar z}; \hat{\alpha},\hat{\bar\alpha})\,.
\end{equation} The function $\mathcal G$ satisfies the superconformal Ward identities and has a specific polynomial dependence on the R-symmetry cross ratios. We will come back to these constraints momentarily. To extract the relevant components from \eqref{superOfourpoints} we need to act with the differential operators $\mathcal D^{(J)}$ and $\mathcal D^{(T)}$ given in \eqref{DJ}, \eqref{DT}. \paragraph{Action of $\mathcal D^{(J)}$, $\mathcal D^{(T)}$ on four-point functions.} The spinning four-point functions are extracted by the action of the differential operators from \eqref{DJ} and \eqref{DT} \begin{align} \langle \mathcal J_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4) \rangle &= \tfrac{1}{2}\mathcal D_1^{(J)} \langle \mathbb{O}_{p_1}(X_1) \mathbb{O}_{p_1}(X_2) \mathbb{O}_{p_2}(X_3)\mathbb{O}_{p_2}(X_4) \rangle \big|_{\rho,\bar{\rho}=0} \,,\nonumber\\ \langle \mathcal T_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4) \rangle &= \tfrac{1}{4}\mathcal D_1^{(T)} \langle \mathbb{O}_{p_1}(X_1) \mathbb{O}_{p_1}(X_2) \mathbb{O}_{p_2}(X_3)\mathbb{O}_{p_2}(X_4)\rangle \big|_{\rho,\bar{\rho} =0} \,, \label{Don4pt} \end{align} with coefficients determined in \eqref{mu} and \eqref{nu} above. In what follows we will always apply the differential operator at point $1$, so we will need to consider two particular cases of the four-point function, either $p_1=2$ and $p_2=p$, or the opposite. The action of derivatives on the superpropagators is discussed in the previous section. The action of derivatives on the $\mathcal{G}$ factor is computed in two steps. First we relate the derivatives with respect to the eigenvalues of the $Z$ matrix \begin{equation} z_1 = \hat{z}\,,\qquad z_2 = \hat{\bar z}\,,\qquad z_3 = \hat{\alpha}\,,\qquad z_4 = \hat{\bar \alpha}\,, \end{equation} to derivatives with respect to the supertraces \eqref{superTraces}.
This is done by using the chain rule \begin{equation} \frac{\partial \mathcal G}{\partial \widehat{t}_j} = \sum_{i=1}^4\, \frac{\partial z_i}{\partial \widehat{t}_j}\, \frac{\partial \mathcal G}{\partial z_i}\,. \end{equation} The Jacobian matrix can be derived easily since the variables are related according to \eqref{t0}, and is given by \begin{equation} \frac{\partial z_i}{\partial t_j} =\frac{(-1)^{j+F_i}}{j} \frac{Q^{(i)}_{4-j}}{\prod_{k\neq i}(z_i-z_k)}\,, \end{equation} where $Q^{(i)}_{4-j}$ are symmetric polynomials formed with the three variables $z_{k\neq i}$ (here written for $i=4$) \begin{align} Q^{(4)}_0 &=1 \,, & Q^{(4)}_1&= z_1 + z_2 + z_3 \,,\nonumber\\ Q^{(4)}_2 &=z_1 z_2 + z_1 z_3 + z_2 z_3 \,,& Q^{(4)}_3&= z_1 z_2 z_3\,, \end{align} and $F_1=F_2=0$ and $F_3=F_4=1$. The second step is to take derivatives of $\widehat{t}_k$ with respect to the supercoordinates $ X_1^{\mathsf{A}\dot{\mathsf{A}}}$ using, for example \begin{equation} \frac{\partial}{\partial Z^{\mathsf{A}}_{\mathsf{B}}} \widehat{t}_k = k\,(-1)^{|\mathsf{A}|}\, (Z^{k-1})_{\mathsf{A}}^{\mathsf{B}}\,, \end{equation} \begin{equation} \frac{\partial}{\partial X_1^{\mathsf{A}\dot{\mathsf{A}}}} \widehat{t}_k = k\,(-1)^{|\mathsf{A}|}\, (X_{12}^{-1}Z^{k}X_{12}^{})_{\dot{\mathsf{A}}}^{\dot{\mathsf{B}}}\, (X_{12}^{-1}-X_{13}^{-1})_{\dot{\mathsf{B}} \mathsf{A}}\,, \end{equation} and similarly for higher derivatives. This procedure is straightforward but tedious; the result takes the schematic form given in \eqref{JTpos}.
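Both ingredients above can be checked numerically. The following is a minimal sketch (not part of the original derivation), assuming the standard $2\times2$ matrix conventions $x_{ij}^2=\det(x_i-x_j)$: part (i) verifies that the eigenvalues of the bosonic cross-ratio matrix $x_{12}x_{13}^{-1}x_{34}x_{24}^{-1}$ reproduce $(z,\bar z)$ via $z\bar z=u$ and $(1-z)(1-\bar z)=v$, and part (ii) compares the closed-form Jacobian $\partial z_i/\partial t_j$ against a direct numerical matrix inverse.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# (i) Eigenvalues of the bosonic cross-ratio matrix.  With 2x2 matrix
# coordinates and x_ij^2 = det(x_i - x_j), the eigenvalues (z, zbar) of
# x_12 x_13^{-1} x_34 x_24^{-1} satisfy z*zbar = u and (1-z)(1-zbar) = v.
x = [rng.standard_normal((2, 2)) for _ in range(4)]
sq = lambda i, j: np.linalg.det(x[i] - x[j])
inv = np.linalg.inv
M = (x[0] - x[1]) @ inv(x[0] - x[2]) @ (x[2] - x[3]) @ inv(x[1] - x[3])
u = sq(0, 1) * sq(2, 3) / (sq(0, 2) * sq(1, 3))
v = sq(0, 3) * sq(1, 2) / (sq(0, 2) * sq(1, 3))
z = np.linalg.eigvals(M)
assert np.isclose(z.prod(), u) and np.isclose((1 - z).prod(), v)

# (ii) Jacobian dz_i/dt_j for t_j = z1^j + z2^j - z3^j - z4^j
# (F_1 = F_2 = 0, F_3 = F_4 = 1), compared against the closed form with
# elementary symmetric polynomials Q^{(i)}_m of the variables z_{k != i}.
zv = rng.uniform(0.1, 2.0, 4)
F = np.array([0, 0, 1, 1])
J = np.array([[(-1.0) ** F[i] * j * zv[i] ** (j - 1) for i in range(4)]
              for j in range(1, 5)])      # J[j-1, i] = dt_j / dz_i
Jinv = np.linalg.inv(J)                   # Jinv[i, j-1] = dz_i / dt_j

def Q(i, m):
    others = [zv[k] for k in range(4) if k != i]
    return sum(np.prod(c) for c in combinations(others, m))

closed = np.array([[(-1.0) ** (j + F[i]) / j * Q(i, 4 - j)
                    / np.prod([zv[i] - zv[k] for k in range(4) if k != i])
                    for j in range(1, 5)] for i in range(4)])
assert np.allclose(Jinv, closed)
print("cross-ratio and Jacobian checks passed")
```

The Jacobian check works because $\sum_{m=0}^{3}(-1)^m Q^{(i)}_m z^{3-m}=\prod_{k\neq i}(z-z_k)$, which vanishes at $z=z_l$ for $l\neq i$ and reproduces the denominator at $z=z_i$.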
\paragraph{General structure of the correlator.} Superconformal Ward identities and polynomiality in the R-symmetry variables imply that \begin{equation}\label{Factorized} \langle \mathcal O_{p}(1) \mathcal O_{p}(2) \mathcal O_{2}(3) \mathcal O_{2}(4) \rangle = G^{\mathrm{free}} + d_{12}^{p-2}\, R\, H_p(u,v)\,, \end{equation} where $R$ is the well-known function \begin{align} R ={}&v \,d_{12}^2 d_{34}^2 + \frac{v}{u} d_{13}^2 d_{24}^2 + \frac{v^2}{u} \,d_{14}^2 d_{23}^2 + \frac{v}{u}(v -u -1 ) d_{12} d_{13} d_{24} d_{34} \nonumber\\ &+\frac{v}{u}( 1- u-v) d_{12} d_{14} d_{23} d_{34}+ \frac{v}{u}(u-1-v)d_{13} d_{14} d_{23} d_{24} \,. \end{align} The free piece of the correlator can be supersymmetrized as shown in the next paragraph, while the supersymmetrization of the anomalous component is achieved with the method described above, where we supersymmetrize the cross ratios. The spinning anomalous functions will then be expressed in terms of derivatives of the dynamical function $H_p(u,v)$. \paragraph{The free theory check.} As a check of the formulae derived in the previous section, we will now consider the case of correlators in the free field theory. In the $SU(N)$ gauge theory, and for the particular configuration we are interested in, the tree-level four-point functions at any value of $N$ are \begin{align}\label{Test1} \langle \mathcal O_p(1) \mathcal O_p (2) \mathcal O_2 (3)\mathcal O_2(4) \rangle^{\text{free}} ={}&d_{12}^p d_{34}^2+\delta_{2p}\left( d_{14}^2 d_{23}^2 + d_{13}^2 d_{24}^2\right) + \frac{2p(p-1)}{N^2-1} d_{12}^{p-2} d_{14} d_{23} d_{13} d_{24} \nonumber\\ &+ \frac{2p}{N^2-1} d_{12}^{p-1} d_{34} (d_{14} d_{23}+ d_{13} d_{24}) \,. \end{align} The four-point function $\langle \mathbb O_p \mathbb O_p \mathbb O_2\mathbb O_2\rangle$ is obtained from the above by simply replacing the propagator $d_{ij}$ with its supersymmetrized version $\hat d_{ij}$ introduced in \eqref{SuperD}.
We can rewrite this expression in terms of cross ratios as \begin{equation} \label{Gpp22} G_{pp22} = 1 + \delta_{2p}\left( \frac{u^2 \tau^2}{v^2 \sigma^2} + \frac{u^2}{\sigma^2}\right)+ \frac{2p}{N^2-1}\left((p-1)\frac{u^2\tau}{v\sigma^2} + \frac{u\tau }{v\sigma}+ \frac{u}{\sigma} \right)\,. \end{equation} In this case, the correlation function of superdescendants can be obtained either by applying the general procedure discussed in the previous paragraph or by replacing the propagator $d_{ij}$ with $\hat d_{ij}$ in \eqref{Test1} and then applying the differential operators $\mathcal{D}^{(J)}$, $\mathcal{D}^{(T)}$. Both procedures give the same result, as they should, providing a check of the general procedure. \paragraph{Frame simplifications.} The computation we described can be simplified by choosing a frame. First, we wish only to apply the differential operator at point $1$ of the four-point function, so we can set to zero from the beginning the fermionic variables associated with the remaining points. Second, the matrix $Z$ is superconformally invariant, so we can take advantage of conformal and $R$-symmetry transformations to send both $x_2$ and $y_2$ to 0, while sending $x_3$ and $y_3$ to infinity.
Effectively the computation simplifies significantly to the evaluation of \begin{equation} \widehat{t}_k= \mathrm{Str} \left( (X_1^{} X_4^{-1})^k\right) \big|_{\rho_{i>1},\bar{\rho}_{i>1}=0}\,, \end{equation} where the matrix $Z$ becomes \begin{equation} \left(X^{}_1 X_4^{-1} \right)^{\mathsf{A}}_{\mathsf{B}} \big|_{\rho_4,\bar{\rho}_4=0}= \begin{pmatrix} (x_1^{} x_4^{-1} )^{\alpha}_{\beta} & (\rho_1^{} y_4^{-1})^{ \alpha}_{b} \\ (\bar{\rho}^{}_1 x_4^{-1})^{a}_{\beta} & (y_1^{} y_4^{-1})^{ a}_{b} \end{pmatrix}\,, \end{equation} and the cross ratios in this frame are given by \begin{align} \frac{x_1^2}{x_4^2} &= z \bar z \,,& \frac{x_{14}^2}{x_4^2}= (1-z)(1-\bar z) \,,\nonumber\\ \frac{y_1^2}{y_4^2} &= \alpha \bar \alpha \,,& \frac{y_{14}^2}{y_4^2}= (1-\alpha)(1-\bar \alpha) \,. \end{align} With a simple calculation we obtain (in this frame) \begin{align} \widehat{t}_1 &= t_1= \mathrm{Tr}(x^{}_1 x_4^{-1}) - \mathrm{Tr}(y^{}_1 y_4^{-1}) = z + \bar z - \alpha - \bar \alpha \\ \widehat{t}_2 &= t_2 - 2 \;\mathrm{Tr}\left(\bar{\rho} x_4^{-1} \rho y_4^{-1}\right)\,, \nonumber \\ \widehat{t}_3 &= t_3 -3 \; \mathrm{Tr}\left(\bar{\rho} x_4^{-1} x_1^{} x_4^{-1} \rho y_4^{-1}\right) -3 \; \mathrm{Tr}\left(\bar{\rho} x_4^{-1}\rho y_4^{-1} y_1^{} y_4^{-1} \right)\,,\nonumber \\ \widehat{t}_4 &= t_4 -4 \; \mathrm{Tr}\left(\bar{\rho} x_4^{-1} (x_1^{} x_4^{-1})^2 \rho y_4^{-1}\right) -4 \; \mathrm{Tr}\left(\bar{\rho} x_4^{-1}\rho y_4^{-1} (y_1^{} y_4^{-1})^2 \right) \nonumber\\ & \hspace{17.5mm} -4 \; \mathrm{Tr}\left(\bar{\rho} x_4^{-1} x_1^{} x_4^{-1} \rho y_4^{-1} y_1^{} y_4^{-1}\right) -2 \; \mathrm{Tr}\left(\bar{\rho} x_4^{-1} \rho y_4^{-1}\bar{\rho} x_4^{-1} \rho y_4^{-1}\right)\,, \end{align} where $\rho=\rho_1$, $\bar{\rho}=\bar{\rho}_1$. \paragraph{Summary.} The final expression for the spinning correlators in \eqref{Don4pt} involves the structures $\Lambda_{1,ij}$ and $V_{1,ij}$ introduced in \eqref{eq:LambdaxandLambdaYdef}. 
These quantities are not independent but satisfy the relation \begin{equation}\label{relation} \Lambda_{1,24}= \Lambda_{1,23} + \Lambda_{1,34}\,, \end{equation} which follows from the telescoping identity $x_{12}^{-1}-x_{14}^{-1}=(x_{12}^{-1}-x_{13}^{-1})+(x_{13}^{-1}-x_{14}^{-1})$, and similarly for $V_{1,ij}$. In particular the correlator involving $\mathcal{J}_p$ is linear in $\Lambda_{1,ij}$ and $V_{1,ij}$, while the one involving $\mathcal{T}_p$ is quadratic in $\Lambda_{1,ij}$ and independent of $V_{1,ij}$. Once the general expression for the correlator is obtained in terms of $\Lambda_{1,ij}$, one can decompose it into elements of the form \begin{equation} \bar{\lambda}_1x_{1k}^{-1}\lambda_1=\frac{z \cdot x_{1k}}{x_{1k}^2}\,, \qquad z^{\mu}=\sigma^{\mu}_{\alpha \dot{\alpha}}\lambda_1^{\alpha}\bar{\lambda}_1^{\dot \alpha}\,, \end{equation} which will have a natural counterpart in the Mellin approach of the next section (compare to \eqref{scalardeltaLR}) \begin{align}\label{JTpos} \langle \mathcal J_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4)\rangle &= \frac{1}{x_{12}^{2 p_1} x_{34}^{2p_2}} \sum_{k=2}^4 \alpha^{(k)}_{p_1,p_2}(u,v; y_{ij},Y_{1,ij})\frac{z \cdot x_{1k}}{x_{1k}^2}\,,\nonumber\\ \langle \mathcal T_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4)\rangle &= \frac{1}{x_{12}^{2p_1} x_{34}^{2p_2}} \sum_{k,l=2}^4 \beta_{p_1,p_2}^{(k,l)}(u,v; y_{ij})\frac{z \cdot x_{1k}}{x_{1k}^2}\frac{z \cdot x_{1l}}{x_{1l}^2}\,, \end{align} where \begin{equation} Y_{1,ij}= y_{1i}^2 y_{1j}^2\, V_{1,ij}\,. \end{equation} \subsection{R-symmetry gluing}\label{Sec:rSymmetryGluing} \paragraph{Realization of $\mathfrak{su}(4)$ R-symmetry in the space of polynomials.} It is convenient to use an index-free notation to implement finite-dimensional representations of $\mathfrak{su}(4)$.
The components of a given representation are packaged in a polynomial $\mathsf{O}_{\mathcal{R}}(y,v,\bar{v})$ in the variables $y^{a\dot a}, v^a, \bar{v}^{\dot{a}}$ (here $a \in \{1,2\}, \dot{a} \in \{\dot{1},\dot{2}\}$) subject to certain constraints that depend on the $\mathfrak{su}(4)$ Dynkin labels $\mathcal{R}=[q,p,r]$. The first constraint states that $\mathsf{O}_{\mathcal{R}}(y,v,\bar{v})$ is homogeneous in $v$ and $\bar{v}$ of degree $q$ and $r$ respectively. The second constraint is slightly more involved. In the case $\mathcal{R}=[0,p,0]$, so that $\mathsf{O}_{\mathcal{R}}$ is independent of $v,\bar{v}$, it reads \begin{equation} \left( w^a \bar{w}^{\dot{a}}\frac{\partial}{\partial y^{a\dot{a}}}\right)^{p+1}\mathsf{O}_{\mathcal{R}}(y)=0\,, \qquad \forall\,\,\,\,w,\bar{w}\,. \end{equation} The case $\mathcal{R}=[1,p-2,1]$ is more involved. Since we will not use it in this work, we will not present the identification of this representation as the kernel of differential operators. Two-point functions take the form \begin{equation} \label{2ptRsymmG} G_{[q,p,r]}(1,2)= (y^2_{12})^p\, (v_1 y_{12} \bar{v}_2)^q\,(v_2 y_{12} \bar{v}_1)^r\,. \end{equation} \paragraph{Projections and \emph{gluing}.} To implement factorization in Mellin space in the presence of some global symmetry (in our case the $\mathfrak{su}(4)$ R-symmetry) it is necessary to take into account this extra structure.
To do so, we introduce a projector that singles out the contribution of a given operator\footnote{Here we use the notation $\mathsf{O}$ instead of $\mathcal{O}$ since we are ignoring the space-time part.} $\mathsf{O}$ which we denote by \begin{equation} \label{ProjectionORsymm} |\mathsf{O}|= \frac{1}{\mathcal{N}_{\mathsf{O}}}\,\, \mathcal{D}^{(\ell,r)}_{\mathcal{R}[\mathsf{O}]} \,|\mathsf{O}(\ell)\rangle \langle \mathsf{O}^*(r)|\,\,\Big{|}_{\ell=r}\,, \end{equation} where $\mathcal{D}$ is a differential operator which is fixed (up to a normalization that will be explained momentarily) by the requirement that \eqref{ProjectionORsymm} is invariant under $\mathfrak{su}(4)$. The notation ${}^*$ denotes conjugation, which acts on representations as $[q,p,r]^*=[r,p,q]$. When we insert the quantity $|\mathsf{O}|$ in an $n$-point correlation function it is understood that we first place $|\mathsf{O}(\ell)\rangle \langle \mathsf{O}^*(r)|$, next act with the differential operator $\mathcal{D}$ on the coordinates $\ell$ and $r$, and finally set the coordinates $\ell$ and $r$ to be equal. To fix the normalization of $\mathcal{D}$ we insert $|\mathsf{O}|$ in the two-point function \begin{equation} \langle \mathsf{O}^*(1) \mathsf{O}(2)\rangle \,=\,\mathcal{N}_{\mathsf{O}}\,G_{\mathcal{R}[\mathsf{O}]}(1,2)\,, \end{equation} where $G_{\mathcal{R}}$ is given in \eqref{2ptRsymmG}, and obtain the condition \begin{equation} \label{eq:DGGisG} \mathcal{D}^{(\ell,r)}_{\mathcal{R}} G_{\mathcal{R}}(1,\ell) G_{\mathcal{R}}(r,2) \,\Big{|}_{\ell=r}\,=\, G_{\mathcal{R}}(1,2)\,. \end{equation} The explicit form of $\mathcal{D}_{\mathcal{R}}$ is slightly complicated.
The simplest one is given by \begin{equation} \mathcal{D}_{[0,p,0]}^{(\ell,r)} =\sum_{n=0}^p \frac{\left(-\partial_{\ell}\cdot \partial_r\right)^{p-n}}{\Gamma(p+1)\Gamma(p+2)} \sum_{k=0}^n\, (-1)^{n-M(k,n-k)} \frac{(p-n+1)_{m(k,n-k)+1}}{\Gamma(m(k,n-k)+1)} (\square_\ell)^k (\square_r)^{n-k}\,, \end{equation} where $(a)_n$ denotes the Pochhammer symbol, $M(a,b)=\text{max}(a,b)$, $m(a,b)=\text{min}(a,b)$ and \begin{equation} \partial_{i}\cdot \partial_j:= \epsilon^{a_1 a_2}\epsilon^{\dot{a}_1\dot{a}_2} \, \frac{\partial}{\partial y_{i}^{a_1\dot{a}_1}} \frac{\partial}{\partial y_{j}^{a_2\dot{a}_2}}\,, \qquad \square_i=\frac{1}{2}\,\partial_{i}\cdot \partial_{i}\,. \end{equation} The general expression for the differential operator $\mathcal{D}_{[1,p-2,1]}$ is more complicated, but it is easy to obtain for fixed $p$ using the defining relation \eqref{eq:DGGisG}. Let us report the simplest member of this family as an example \begin{equation} \mathcal{D}_{[1,0,1]}^{(\ell,r)} = \left(\partial_{v_{\ell}}\partial_{v_{r}}\right) \left(\partial_{\bar{v}_{\ell}}\partial_{\bar{v}_{r}}\right) \left(\tfrac{1}{2}\partial_{\ell}\cdot \partial_r -\square_\ell-\square_r\right) -\frac{3}{16} \left(\partial_{v_{\ell}}\partial_{y_{\ell}}\partial_{\bar{v}_{\ell}}\right) \left(\partial_{v_{r}}\partial_{y_{r}}\partial_{\bar{v}_{r}}\right)\,, \end{equation} where the contraction of indices is understood using the $\epsilon$ tensor. \paragraph{Application to five-point functions.} When we insert the projector \eqref{ProjectionORsymm} in a five-point function we will produce a product of a three-point and a four-point function on which the differential operator $\mathcal{D}$ acts. In the following we denote by $\rightarrow$ the combination of acting with $\mathcal{D}^{(\ell,r)}$ and setting the coordinates $\ell=r$.
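The defining relation \eqref{eq:DGGisG} can be verified explicitly in the simplest case: at $p=1$ the general formula above collapses to $\mathcal{D}_{[0,1,0]}^{(\ell,r)}=-\partial_\ell\cdot\partial_r+\tfrac{1}{2}(\square_\ell+\square_r)$. The following symbolic sketch (not part of the text; it assumes the convention $y_{ij}^2=\det(y_i-y_j)$ for the $2\times2$ harmonic coordinates and $\epsilon^{12}=1$) confirms that acting on $y_{1\ell}^2\,y_{r2}^2$ and setting $\ell=r$ reproduces $y_{12}^2$:

```python
import sympy as sp

# 2x2 matrix coordinates; we assume y_ij^2 = det(y_i - y_j) and eps^{12} = 1.
def mat(name):
    return sp.Matrix(2, 2, sp.symbols(f'{name}0:4'))

y1, y2, yl, yr, w = map(mat, 'abLRw')

eps = sp.Matrix([[0, 1], [-1, 0]])

def dot(F, yi, yj):
    # eps^{a1 a2} eps^{ad1 ad2} d^2 F / (dy_i^{a1 ad1} dy_j^{a2 ad2})
    return sum(eps[A, B] * eps[C, D] * sp.diff(F, yi[A, C], yj[B, D])
               for A in range(2) for B in range(2)
               for C in range(2) for D in range(2))

box = lambda F, yi: sp.Rational(1, 2) * dot(F, yi, yi)

# G_{[0,1,0]}(1,l) G_{[0,1,0]}(r,2) = y_{1l}^2 y_{r2}^2
F = (y1 - yl).det() * (yr - y2).det()

# The general formula at p = 1: D = -(d_l . d_r) + (box_l + box_r)/2
DF = -dot(F, yl, yr) + (box(F, yl) + box(F, yr)) / 2

# set l = r = w and compare with y_12^2
subs = dict(zip(list(yl) + list(yr), list(w) + list(w)))
assert sp.expand(DF.subs(subs) - (y1 - y2).det()) == 0
print("D G G |_{l=r} = G verified for p = 1")
```

The check works because $\square_\ell\,\det(y_1-y_\ell)=2$, while the cross term $-\partial_\ell\cdot\partial_r$ produces exactly the mixed piece $\det(A+B)-\det A-\det B$ with $A=y_1-w$, $B=w-y_2$, so the three contributions assemble into $\det(y_1-y_2)$.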
The case that is relevant for the exchange of $\mathcal{O}_p$, which transforms in the $[0,p,0]$ representation, is \begin{equation} \big{[} (y_{1\ell}^2)\,(y_{2\ell}^2)^{p-1} \big{]} \, \big{[}(y_{ri}^2)^{p-2}(y_{rj}^2)^{}(y_{rk}^2)^{} \big{]} \rightarrow \label{eq:gluingScalar} \end{equation} \begin{equation} \tfrac{1}{p} (y_{2i}^2)^{p-3} \left( y^2_{2i}\,(y^2_{2j}y^2_{1k}+y^2_{1j}y^2_{2k }) +(p-2)y^2_{1i} y^2_{2j}y^2_{2 k} -\tfrac{p-2}{p+1}\, y^2_{12}\,(y^2_{2 k}y^2_{ij}+y^2_{2j}y^2_{ik}) -\tfrac{1}{p+1}\, y^2_{12} y^2_{2i} y^2_{jk} \right)\,. \end{equation} Similarly, using the definitions above, gluing the three- and four-point functions corresponding to the exchange of $\mathcal{J}_p$ (which transforms in the representation $[1,p-2,1]$) is achieved by the substitution \begin{equation} \Big{[} (y_{2\ell}^2)^{p-2}\,Y_{\ell,12}\,\Big{]} \Big{[}\,(y_{ri}^2)^{p-3}(y_{rj}^2)^{}\,Y_{r,kl} \Big{]} \rightarrow \end{equation} \begin{equation} (y_{2i}^2)^{p-4}\left( y^2_{2i}\,y^2_{2j}\,(y^2_{1k}y^2_{2l}-y^2_{1l}y^2_{2k }) +y^2_{12}\left( \tfrac{p-3}{p+2} y^2_{2j} (y^2_{il}y^2_{2k}-y^2_{2l}y^2_{ik }) +\tfrac{1}{p+2} y^2_{2i} (y^2_{jl}y^2_{2k}-y^2_{2l}y^2_{jk }) \right) \right)\,. \end{equation} For the exchange of $\mathcal{T}_p$ we use the same rules as \eqref{eq:gluingScalar} with $p$ replaced by $p-2$. \section{Introduction}\label{Sec:introduction} Recent years have seen significant progress in computing holographic correlators, which are key objects for exploring and exploiting the AdS/CFT correspondence. Traditionally, holographic correlators are computed by diagrammatic expansions in AdS. Such a method works in principle. However, in practice, it requires the precise knowledge of the exceedingly complicated effective Lagrangians and is extremely cumbersome to use. Therefore, for almost twenty years the diagrammatic approach led to only a handful of explicit results. The new developments, on the other hand, are based on a totally different strategy which relies on new principles.
This is the bootstrap approach initiated in \cite{Rastelli:2016nze,Rastelli:2017udc}, which eschews the explicit details of the effective Lagrangian altogether. The new approach works directly with the holographic correlators and uses superconformal symmetry and consistency conditions to fix these objects. The bootstrap strategy has produced an array of impressive results.\footnote{See \cite{Bissi:2022mrs} for a review.} For example, at tree level general four-point functions for $\frac{1}{2}$-BPS operators with arbitrary Kaluza-Klein (KK) levels have been computed in all maximally superconformal theories \cite{Rastelli:2016nze,Rastelli:2017udc,Alday:2020lbp,Alday:2020dtb}, as well as in theories with half the amount of maximal superconformal symmetry \cite{Rastelli:2019gtj,Giusto:2020neo,Alday:2021odx}. Note that these general results are all in the realm of four-point functions. Higher-point functions still mostly remain {\it terra incognita}. In fact, only two five-point functions have been computed in the literature, for IIB supergravity on $AdS_5\times S^5$ \cite{Goncalves:2019znr} and SYM on $AdS_5\times S^3$ \cite{Alday:2022lkk} respectively, and both for the lowest KK modes only. However, studying higher-point holographic correlators is of great importance. Firstly, higher-point correlators allow us to extract new CFT data which is not included in four-point functions. For example, the OPE coefficient of two double-trace operators and one single-trace operator can only be obtained from a five-point function. Moreover, via the AdS unitarity method \cite{Aharony:2016dwx} higher-point correlators are also necessary ingredients for constructing higher-loop correlators. Secondly, via the AdS/CFT correspondence holographic correlators correspond to on-shell scattering amplitudes in AdS.
Recently, there has been a lot of progress in finding AdS generalizations of flat-space properties \cite{Farrow:2018yni,Armstrong:2020woi,Albayrak:2020fyp,Alday:2021odx,Jain:2021qcl,Zhou:2021gnu,Diwakar:2021juk,Alday:2022lkk,Cheung:2022pdk,Herderschee:2022ntr,Drummond:2022dxd,Bissi:2022wuh,Armstrong:2022jsa,Lee:2022fgr,Li:2022tby}. As we know from flat space, many remarkable properties of amplitudes are only visible at higher multiplicities. To further explore the analogy between holographic correlators and scattering amplitudes it is necessary to go to higher points. Finally, it has been observed in \cite{Caron-Huot:2018kta} that a ten-dimensional hidden conformal symmetry is responsible for organizing all tree-level four-point functions for IIB supergravity on $AdS_5\times S^5$. The nature of this hidden structure is still elusive. It is an interesting question whether the 10d hidden symmetry is just a curiosity for four points or whether it persists even at higher points. For these reasons, in this paper we continue to explore the bootstrap strategy for computing higher-point correlators. In particular, we will focus on computing the five-point functions of the form $\langle pp222\rangle$ for IIB supergravity in $AdS_5\times S^5$, where three of the operators have the lowest KK level but the other two have arbitrary KK level $p$. Our strategy will be similar to that of \cite{Goncalves:2019znr}, which computed the $p=2$ case, but with important differences. In \cite{Goncalves:2019znr}, the starting point is an ansatz in position space which is a linear combination of all possible Witten diagrams with unfixed coefficients. To fix the coefficients, one imposes various constraints from superconformal symmetry and consistency conditions. These include factorization in Mellin space \cite{Goncalves:2014rfa}, the chiral algebra constraint \cite{Beem:2013sza} and the Drukker-Plefka twist \cite{Drukker:2009sf}.
The first constraint is the consistency condition for decomposing the five-point function into four-point functions and three-point functions at its singularities. The second and the third conditions come from superconformal symmetry and are the statement that the appropriately twisted five-point function becomes topological. Although these conditions uniquely fix the $p=2$ five-point function, the strategy of \cite{Goncalves:2019znr} suffers from a few drawbacks which make it difficult to apply efficiently to correlators with higher KK levels. Firstly, computing the higher-point Witten diagrams in the ansatz is a nontrivial task. In particular, simplifications used in \cite{Goncalves:2019znr} for computing $p=2$ diagrams no longer exist for $p>2$ and the analysis is in general more complicated. Secondly, the three constraints were implemented in different spaces, which makes the algorithm less efficient. Factorization is most convenient in Mellin space. However, the chiral algebra constraint and the Drukker-Plefka twist were implemented in the original position space. The position space implementation requires computing explicitly a set of five-point contact diagrams, {\it i.e.,} $D$-functions, to which the ansatz reduces. As was shown in \cite{Goncalves:2019znr}, these $D$-functions can further be expressed in terms of one-loop box diagrams which can be written as ${\rm Li}_2$ and logarithms. But the complexity of the expression for each $D$-function is determined by its total external conformal dimension. For $\langle pp222\rangle$ five-point functions, the sum of dimensions grows linearly with respect to $p$. Therefore, it soon becomes computationally very expensive for large enough $p$. We overcome these difficulties by proposing a new algorithm.
It relies on the key observation that a more careful analysis of the Mellin factorization condition together with the Drukker-Plefka twist allows us to completely fix the five-point correlators without using the chiral algebra constraint. Although computing Witten diagrams is difficult in position space, formulating the ansatz in Mellin space is straightforward thanks to the simplified analytic structure of Witten diagrams in Mellin space. This is further aided by a new pole truncation phenomenon which keeps the number of poles fixed irrespective of the KK levels. As a result, we can write down the ansatz for the Mellin amplitude for general $p$. Moreover, we find a way to implement the Drukker-Plefka twist directly in Mellin space. Therefore, we can perform the bootstrap entirely within Mellin space without ever taking the position space detour. This allows us to compute the five-point $\langle pp222\rangle$ Mellin amplitudes for arbitrary $p$ in a closed form. Although in this paper we focus on this particular family of correlators, the strategy applies straightforwardly to more general five-point functions. The rest of the paper is organized as follows. In Sec. \ref{Sec:scfkine} we discuss the superconformal kinematics of the five-point functions. In particular, we will introduce the Drukker-Plefka twist. In Sec. \ref{Sec:Mellin} we review the Mellin space formalism and the factorization of Mellin amplitudes. We also explain how to implement the Drukker-Plefka twist in Mellin space. We bootstrap the five-point functions in Sec. \ref{Sec:bootstrapMellin} and give the general formula for the $\langle pp222\rangle$ Mellin amplitudes. In Sec. \ref{Sec:bootstrapposition} we also comment on how to perform the bootstrap in position space. We conclude in Sec. \ref{Sec:discussion} with an outlook for future directions. Technical details are contained in the two appendices.
In Appendix \ref{Sec:higherkkSmult} we explain how to compute the spinning four-point functions which are needed for factorizing the five-point functions. In Appendix \ref{Sec:rSymmetryGluing} we discuss how to glue together the R-symmetry dependence when performing factorization. For the reader's convenience, we have also included a Mathematica notebook with the arXiv submission which contains various explicit results. \section{Mellin representation}\label{Sec:Mellin} It has been commonly advertised that Mellin space \cite{Mack:2009mi,Penedones:2010ue} is a natural language for discussing holographic correlators. In this formalism, the connected correlators are expressed as a multi-dimensional inverse Mellin transformation \begin{align}\label{defMellin5pt} \langle \mathcal{O} (x_1) \dots \mathcal{O} (x_5) \rangle_{\rm conn} = \int [d\delta] \mathcal{M} (\delta_{ij})\, \prod_{1\leq i<j\leq 5} \Gamma(\delta_{ij})(x_{ij}^2)^{-\delta_{ij}}\;, \end{align} where the Mellin-Mandelstam variables satisfy \begin{equation} \delta_{ij}=\delta_{ji}\;,\quad \delta_{ii}=-\Delta_i\;,\quad\sum_{j}\delta_{ij}=0\;. \label{eq:ConstraintsMellinMand} \end{equation} The function $\mathcal{M}(\delta_{ij})$ encodes the dynamical information and is referred to as the Mellin amplitude. Note that this definition is a bit schematic. To be precise, both the correlator and the Mellin amplitude also depend on R-symmetry structures. However, for the moment we will suppress this dependence to emphasize the analytic structure related to spacetime. One of the reasons that Mellin amplitudes are convenient for describing scattering in AdS is that they are meromorphic functions of the Mellin-Mandelstam variables. This follows directly from the existence of the OPE in CFT. Moreover, in the supergravity limit, the poles are associated with the exchanged single-trace particles in AdS.
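As a side remark on the kinematics, the constraints \eqref{eq:ConstraintsMellinMand} reduce the $n(n-1)/2$ variables $\delta_{ij}$ ($i<j$) to $n(n-3)/2$ independent ones, i.e. five for the five-point function, matching the number of independent conformal cross ratios. A small sketch (with a hypothetical helper name) confirming the count:

```python
import sympy as sp

# Count independent delta_ij for an n-point Mellin amplitude: the unknowns
# are delta_ij (i < j); the n linear constraints sum_{j != i} delta_ij =
# Delta_i follow from delta_ii = -Delta_i and sum_j delta_ij = 0.
def independent_mellin_vars(n):
    deltas = {(i, j): sp.Symbol(f'd_{i}{j}') for i in range(n)
              for j in range(i + 1, n)}
    Delta = sp.symbols(f'Delta0:{n}')
    eqs = [sum(deltas[tuple(sorted((i, j)))] for j in range(n) if j != i)
           - Delta[i] for i in range(n)]
    M, _ = sp.linear_eq_to_matrix(eqs, list(deltas.values()))
    return len(deltas) - M.rank()

assert [independent_mellin_vars(n) for n in (4, 5, 6)] == [2, 5, 9]
print("independent Mellin variables at n=5:", independent_mellin_vars(5))
```

The count $n(n-3)/2$ follows because the $n$ constraints are linearly independent for $n\geq 4$.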
This gives Mellin amplitudes an analytic structure similar to that of tree-level scattering amplitudes in flat space and allows us to apply flat-space intuition in AdS. More precisely, the exchange of a conformal primary operator with spin $J$ and dimension $\Delta=\tau+J$ in a channel is represented by a series of poles in the Mellin amplitude, labelled by $m=0,1,2,\ldots$, starting from the conformal twist $\tau$ \begin{align} \mathcal{M} \approx \frac{\mathcal{Q}_m(\delta_{ij})}{\delta_{LR}-(\tau+2m)}, \ \ \ \ \delta_{LR} = \sum_{a=1}^{q}\sum_{b=q+1}^{n}\delta_{ab}.\label{eq:factorizationequation} \end{align} Here, the exchange channel divides the external particles into two sets which we refer to as L and R. We label the particles in L from $1$ to $q$ and the ones in R from $q+1$ to $n$. $\delta_{LR}$ is the Mandelstam variable in this channel. The residues $\mathcal{Q}_m(\delta_{ij})$ have nontrivial structures. They are related to the lower-point Mellin amplitudes $\mathcal{M}_L$ and $\mathcal{M}_R$ for the $(q+1)$- and $(n-q+1)$-point functions involving particles in L and R respectively (Figure \ref{fig_factorization}). The extra external state in each lower-point amplitude is the exchanged particle which has now been put on-shell. This is the basic idea of Mellin factorization \cite{Fitzpatrick:2011ia,Goncalves:2014rfa}. In fact, it is very similar to the factorization of amplitudes in flat space which has been studied for a long time. However, there are also important differences. In flat space, the poles are located at the squared masses of the exchanged particles. In Mellin space, as already pointed out, the squared mass is replaced by the conformal twist and there is in general a series of poles for each particle which are labelled by $m$ in (\ref{eq:factorizationequation}). These are related to the conformal descendants. However, in theories with special spectra such as $AdS_5\times S^5$ IIB supergravity, the series usually truncates.
For example, for $p=2$ the series truncates at $m=0$ and contains just one term. Moreover, compared to flat-space amplitudes, the lower-point Mellin amplitudes also appear in the residue $\mathcal{Q}_m$ in a more complicated way. The precise expression for the residues depends on the spin of the operator that is exchanged. The goal of the following subsection is to explain all the details of this formula. In particular, we will present the explicit residue formulas for exchanged fields with spins up to 2. We should emphasize that the structure of factorization for the general $\langle pp222\rangle$ five-point functions will turn out to be far richer than for the simple case of $p=2$ which was analyzed in \cite{Goncalves:2019znr}. In particular, we will see poles with $m\geq 1$. Note that for the five-point function $G_p$ with $p>2$ there are three non-equivalent factorization channels which we choose to be \begin{equation}\label{channelsFactorization} \begin{split} {}& (12):\quad\quad \langle pp \star \rangle \, \langle \star222 \rangle \;,\\ {}&(45):\quad\quad \langle 22 \star \rangle \, \langle \star pp2 \rangle\;,\\ {}&(13):\quad\quad \langle 2p \star\rangle \, \langle \star p22\rangle \;. \end{split} \end{equation} In each of them there are exchanged primary operators with spins ranging from $0$ to $2$ as will be discussed in the following subsection. \begin{figure}[h] \centering \includegraphics[scale=0.4]{fig_factorization} \caption{ Mellin amplitudes have poles corresponding to the exchange of single-trace operators. The residues at the poles are associated with lower-point Mellin amplitudes. In the channel depicted in the figure, we have $n=5$ and $q=3$. The Mellin amplitude on the left has four points while the one on the right has only three.
} \label{fig_factorization} \end{figure} \subsection{Mellin factorization}\label{Subsec:Mellinfactorization} To discuss Mellin factorization, we need to be more explicit about what fields can be exchanged as they give rise to different lower-point functions. The problem of enumerating exchanged fields reduces to finding all the possible cubic vertices $s_{k_1}s_{k_2}X$ where $s_k$ is the scalar field dual to the superconformal primary $\mathcal{O}_k$ and $X$ is a field to be determined. This problem already appears in the case of four-point functions and therefore the answer is also the same. The possible cubic vertices are determined by two conditions. The first is the R-symmetry selection rule. The second is the condition that the cubic vertices cannot be extremal\footnote{It also follows that four-point functions cannot be extremal or next-to-extremal. In particular, we do not have the four-point functions $\langle 4222\rangle$ and $\langle 6222\rangle$.}. These determine the possible exchanged fields to be \cite{Rastelli:2016nze,Rastelli:2017udc} \begin{eqnarray}\label{possibleexchanges} \nonumber &&\{k_1,k_2\}=\{p,p\}\;:\quad X=s_2\;,\;A_{2,\mu}\;,\;\varphi_{2,\mu\nu}\;,\\ &&\{k_1,k_2\}=\{2,2\}\;:\quad X=s_2\;,\;A_{2,\mu}\;,\;\varphi_{2,\mu\nu}\;,\\ \nonumber &&\{k_1,k_2\}=\{2,p\}\;:\quad X=s_p\;,\;A_{p,\mu}\;,\;\varphi_{p,\mu\nu}\;. \end{eqnarray} Here $s_k$ is a scalar field and is dual to the superconformal primary $\mathcal{O}_k$ which has dimension $\Delta=k$ and transforms in the $[0,k,0]$ representation of $SU(4)$. $A_{k,\mu}$ is a vector field and is dual to a spin-1 operator $\mathcal{J}_{k,\mu}$ which has dimension $\Delta=k+1$ and transforms in the $[1,k-2,1]$ representation. $\varphi_{k,\mu\nu}$ is a spin-2 tensor field and is dual to a spin-2 operator $\mathcal{T}_{k,\mu\nu}$ which has dimension $\Delta=k+2$ and representation $[0,k-2,0]$. When $k=2$, $A_{2,\mu}$ is the graviphoton and $\varphi_{2,\mu\nu}$ is the graviton.
Their dual operators are correspondingly the R-symmetry current and the stress-energy tensor. Let us emphasize again that in this subsection we will only focus on the Mellin-Mandelstam variable dependence. Both $\mathcal{M}_L$ and $\mathcal{M}_R$ in fact also depend on R-symmetry variables. Therefore in the residues $\mathcal{Q}_m$ there is also a gluing of the lower-point R-symmetry structures. However, this gluing is purely group theoretic. To avoid distracting the reader from the discussion of the dynamics, we will leave the details of R-symmetry gluing to Appendix \ref{Sec:rSymmetryGluing}. Alternatively, we can view the discussion in this subsection as the Mellin factorization for each R-symmetry structure. \subsubsection{Exchange of scalars} The simplest example of factorization is the exchange of a scalar operator with dimension $\Delta$. The resulting $\mathcal{M}_L$ and $\mathcal{M}_R$ are again scalar Mellin amplitudes. Nevertheless, this example contains most of the features we shall need. In particular, the $m$ dependence will be shared by the spinning cases. Therefore, we will first analyze this case in detail. The residue $\mathcal{Q}_m$ introduced in \eqref{eq:factorizationequation} is given in \cite{Goncalves:2014rfa} by \begin{align} &\mathcal{Q}_m = \frac{-2\Gamma(\Delta) m!}{ \left(1+\Delta-\frac{d}{2} \right)_m } L_m R_m\,,\label{eq:QmScalarDefinition} \end{align} where $L_m$ is related to $\mathcal{M}_L$ by\footnote{Notice that $\mathcal{M}_L(\delta_{ab}+n_{ab})$ is well defined when the Mellin-Mandelstam variables satisfy the pole condition \eqref{eq:factorizationequation}, in addition to their constraints \eqref{eq:ConstraintsMellinMand}. The parallel with scattering amplitudes makes this point clear.} \begin{align} &L_m =\sum_{n_{ab}\ge 0 \atop \sum n_{ab}=m} \mathcal{M}_L(\delta_{ab}+n_{ab}) \prod_{1\le a<b\le q} \frac{ \left(\delta_{ab}\right)_{n_{ab}}}{n_{ab}!}\label{eq:Lmdefinition} \end{align} and similarly for $R_m$.
Notice that here and in the following we will often leave the spacetime dimension $d$ unspecified, but it should always be set to $4$. This equation has several interesting consequences, which will become more evident after analyzing a few examples. Let us start with a three-point Mellin amplitude for $\mathcal{M}_L$, which is just a constant $c$. In this case, recalling that $\delta_{12}=\tfrac{1}{2} (\Delta_1+\Delta_2-\delta_{LR})$ and $\delta_{LR}$ is set to $\Delta+2m$ by the pole condition \eqref{eq:factorizationequation}, equation \eqref{eq:QmScalarDefinition} with $q=2$ immediately gives \begin{align} &\mathcal{M}_L^{\textrm{3-pt}}=c\implies L_m = c\frac{\big(\bar{\delta}_{LR}\big)_m}{m!}\,, \qquad \bar{\delta}_{LR}:=\tfrac{1}{2} (\Delta_1+\Delta_2-\Delta)-m\,. \label{eq:threeptFactorization} \end{align} Factorizing a five-point function leads to a three-point function and a four-point function. For $\langle pp222\rangle$, there are three inequivalent factorization channels, which can be chosen to be $(12)$, $(45)$ and $(13)$. From (\ref{possibleexchanges}), we know that the exchanged scalar operators in these three channels have twists $2$, $2$ and $p$ respectively. Thus, $\bar{\delta}_{LR}$ in each case is given by \begin{equation}\label{deltaLR} \begin{split} {}& (12):\quad\quad \bar{\delta}_{LR} = p-1-m\;,\\ {}&(45):\quad\quad \bar{\delta}_{LR} = 1-m\;,\\ {}&(13):\quad\quad\bar{\delta}_{LR} = 1-m\;, \end{split} \end{equation} and the corresponding values of $\delta_{LR}$ are $2+2m$, $2+2m$, $p+2m$. After plugging these values in (\ref{eq:threeptFactorization}), it is straightforward to see that the residue vanishes for $m>0$ in the channels $(13)$ and $(45)$, and for $m\geq p-1$ in the channel $(12)$\footnote{The zeros of these Pochhammer symbols are located precisely so as to avoid a double pole, which would otherwise be formed by a pole from the explicit Gamma functions in the Mellin representation and a pole from the factorization formula (\ref{eq:factorizationequation}). }.
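To make the truncation mechanism concrete, here is a short numerical sketch (our own illustration, not part of the original derivation) of the three-point result \eqref{eq:threeptFactorization}: the Pochhammer symbol kills the residue for $m>0$ when $\bar{\delta}_{LR}=1-m$, and for $m\geq p-1$ when $\bar{\delta}_{LR}=p-1-m$.

```python
from math import factorial

def pochhammer(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    out = 1
    for k in range(n):
        out *= a + k
    return out

def L_m_constant(c, Delta1, Delta2, Delta, m):
    # L_m for a constant three-point amplitude M_L = c:
    # L_m = c (delta_bar)_m / m!, delta_bar = (Delta1 + Delta2 - Delta)/2 - m
    delta_bar = (Delta1 + Delta2 - Delta) / 2 - m
    return c * pochhammer(delta_bar, m) / factorial(m)

# Channels (45) and (13): delta_bar = 1 - m, so only m = 0 survives
assert L_m_constant(1, 2, 2, 2, 0) == 1
assert all(L_m_constant(1, 2, 2, 2, m) == 0 for m in range(1, 6))

# Channel (12) with p = 4 (twist-2 exchange): residues survive only for m < p - 1
p = 4
assert all(L_m_constant(1, p, p, 2, m) != 0 for m in range(p - 1))
assert all(L_m_constant(1, p, p, 2, m) == 0 for m in range(p - 1, p + 3))
```

Running the sketch confirms the pattern of vanishing residues stated above for these representative values.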
Naively, one would conclude that in the $(12)$ channel the number of poles increases with $p$. However, this conclusion is premature, since the other factor $R_m$ gives further constraints. To see this explicitly, let us look at a four-point Mellin amplitude which has the following generic form \begin{align} &\mathcal{M}_R^{\textrm{4pt}}=\frac{c_1\delta_{45}^2+c_2\delta_{45}+c_3}{\delta_{34}-1}+c_4+c_{5}\delta_{34}+c_{6}\delta_{45}\implies R_m = \frac{1}{m! }\bigg[\frac{c_1 m \delta _{45} \left(\delta _{45}+1\right) (3 - m)_{m-1}}{\delta_{34}-1}\nonumber\\ & +\left(\frac{c_1 \delta _{45}^2+c_2 \delta _{45}}{\delta_{34}-1}+c_4 \right) (2 - m)_m+\frac{c_3 (1 - m)_m}{\delta_{34}-1}+(c_5\delta_{34}+c_6\delta_{45})(3-m)_m\bigg]\;.\label{eq:prototypical4Mellin} \end{align} Here we have evaluated the expression at the pole $\delta_{LR}=\tau+2m$. It follows that $R_m$ vanishes for $m\ge 3$ for this four-point Mellin amplitude $\mathcal{M}_R$, and therefore the number of poles does not increase for arbitrary values of $p$. Let us also emphasize that all four-point Mellin amplitudes that appear in the OPE of the correlator $\langle pp222\rangle$ have this structure as can be checked in Appendix \ref{Sec:higherkkSmult}. Let us note that the absence of poles for $m\geq p-1$ can also be understood from the pole structure of the Mellin integrand. The Gamma functions in the definition of the Mellin amplitude already have poles at these locations, and a pole in the Mellin amplitude at $m\geq p-1$ would give rise to a double pole. Such double poles are associated with the appearance of anomalous dimensions \cite{Penedones:2010ue,Rastelli:2016nze,Rastelli:2017udc}, which we do not expect at this order. On the other hand, at the moment we do not have a direct physical argument for the truncation of poles at $m \ge 3$. Finally, this truncation continues to hold for the factorization formulas when the exchanged operators have spins. This will be analyzed in the following subsubsection.
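The truncation at $m\geq 3$ can be checked numerically. The sketch below (an illustration with arbitrarily chosen values of the constants $c_i$ and of the remaining Mellin variables, not taken from any particular correlator) evaluates $R_m$ from \eqref{eq:prototypical4Mellin} and confirms that it vanishes identically for $m\geq 3$.

```python
from math import factorial

def poch(a, n):
    """Rising factorial (a)_n."""
    out = 1
    for k in range(n):
        out *= a + k
    return out

def R_m_fourpt(m, d34, d45, c):
    # R_m extracted from the generic four-point Mellin amplitude,
    # evaluated at the pole delta_LR = tau + 2m; c = (c1, ..., c6)
    c1, c2, c3, c4, c5, c6 = c
    return (
        c1 * m * d45 * (d45 + 1) * poch(3 - m, m - 1) / (d34 - 1)
        + ((c1 * d45**2 + c2 * d45) / (d34 - 1) + c4) * poch(2 - m, m)
        + c3 * poch(1 - m, m) / (d34 - 1)
        + (c5 * d34 + c6 * d45) * poch(3 - m, m)
    ) / factorial(m)

# generic values for the four-point constants and the remaining Mellin variables
c = (1.3, -0.7, 2.1, 0.4, -1.9, 0.8)
d34, d45 = 2.37, 0.91

assert R_m_fourpt(2, d34, d45, c) != 0                            # m = 2 survives
assert all(R_m_fourpt(m, d34, d45, c) == 0 for m in range(3, 8))  # truncation
```

Every term for $m\geq 3$ carries a Pochhammer factor that passes through zero, so the vanishing is exact rather than a numerical cancellation.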
\subsubsection{Exchange of operators with spins 1 and 2} In this subsubsection we study the contributions of operators with spin. As it turns out, the analysis of the scalar case straightforwardly generalizes to the spinning case. It is convenient to get rid of the Lorentz indices of these operators by contracting them with null polarization vectors \begin{align} \mathcal{O}(x,z) = \mathcal{O}^{a_1\dots a_J}(x) z^{a_1}\dots z^{a_J}\;, \end{align} where $z^2=0$ ensures the operator is traceless (we refer the reader to Section $3$ of \cite{Goncalves:2014rfa} for a more detailed review). The Mellin amplitude of a correlator of one spinning operator and $n$ scalar operators is defined by \cite{Goncalves:2014rfa} \begin{align}\label{scalardeltaLR} \langle \mathcal{O}(x_0,z_0) \dots \mathcal{O}_n \rangle =&\sum_{a_1,\dots,a_J=1}^{n} \prod_{i=1}^{J}(z_0\cdot x_{a_i0}) \int [d\delta] \mathcal{M}^{\{ a\} }(\delta_{ij}) \prod_{i=1}^n \frac{\Gamma(\delta_i+\{a \}_i )}{(x_{i0}^2) ^{\delta_i+\{a \}_i } } \prod_{1\leq i<j\leq n} \frac{\Gamma(\delta_{ij})}{(x_{ij}^2)^{\delta_{ij}}}, \end{align} where \begin{equation} \{a \}_i = \textrm{{\fancy{$\delta$}}}_{i}^{a_1}+\dots+\textrm{{\fancy{$\delta$}}}_{i}^{a_J}, \ \ \delta_{i}= -\sum_{j=1}^n\delta_{ij} ,\, \ \ \ \sum_{i,j=1}^n\delta_{ij} =J-\Delta_0\;. \end{equation} We have used $\textrm{{\fancy{$\delta$}}}$ to denote the Kronecker delta so that it can be distinguished from the Mellin-Mandelstam variables $\delta$. The Mellin amplitudes $\mathcal{M}^{\{ a\} }$ satisfy certain linear relations that follow from the conformal invariance of the correlator, see equation (46) in \cite{Goncalves:2014rfa}. Let us first focus on the spinning generalization of (\ref{eq:QmScalarDefinition}) for the conserved currents which reside in the $k=2$ supermultiplet.
For exchanging the graviphoton, the residues are given by\footnote{As above we write $d$ to denote the dimension of space-time and we will always set $d=4$.} \begin{align} &\mathcal{Q}_m = \frac{(d-1)\Gamma(d-2) m!}{ \left(\frac{d}{2} \right)_m } \sum_{a=1}^{q}\sum_{b=q+1}^n \delta_{ab} L_m^{a} R_m^{b}\;. \end{align} For exchanging the graviton, the residues are \begin{equation} \mathcal{Q}_m = \frac{-(d+1)\Gamma(d-1)m!}{2\left(\frac{d}{2} +1\right)_m}\bigg[\mathcal{Q}_{m}^{(1)} -\left(\frac{1}{2m}+\frac{1}{d} \right)\tilde{L}_m \tilde{R}_m \bigg]\;, \end{equation} where \begin{align} &\mathcal{Q}_{m}^{(1)} = \sum_{a,b=1}^{q}\sum_{i,j=q+1}^{n}\delta_{ai}(\delta_{bj}+\textrm{{\fancy{$\delta$}}} _{b}^{a}\textrm{{\fancy{$\delta$}}}_{j}^{i} )L_m^{ab}R_{m}^{ij},\,\\ &\tilde{L}_m = \sum_{a,b=1}^{q}\delta_{ab} [ L_{m-1}^{ab} ]^{ab}\;,\quad \tilde{R}_m = \sum_{i,j=q+1}^{n}\delta_{ij} [ R_{m-1}^{ij} ]^{ij}\;. \nonumber \end{align} Here we used the notation $[f(\delta_{ij})]^{ab} = f(\delta_{ij}+\textrm{{\fancy{$\delta$}}}_{i}^{a}\textrm{{\fancy{$\delta$}}}_{j}^{b}+\textrm{{\fancy{$\delta$}}}_{j}^{a}\textrm{{\fancy{$\delta$}}}_{i}^{b})$. The functions $L_m^{a}$ and $L_{m}^{ab}$ (and analogously $R_m^{a}$, $R_m^{ab}$) are defined in the same way as in (\ref{eq:Lmdefinition}). Let us also add that for $m=0$ the second term in $\mathcal{Q}_m$ for spin $2$ is zero, since both $\tilde{L}_0$ and $\tilde{R}_0$ vanish by definition. Therefore, the explicit $1/(2m)$ factor does not lead to a divergence at $m=0$. These residue formulas for spinning operators are clearly not the full story, as there are also non-conserved currents in the multiplets with $k>2$. However, from (\ref{possibleexchanges}) we can see that such non-conserved currents only appear in the channel with one $s_2$ and one $s_p$. Similarly to the scalar case (\ref{deltaLR}), the analysis of the three-point functions implies the truncation of the poles at $m=0$.
The residues are \begin{align} \mathcal{Q}_{0} &= -\Delta\Gamma(\Delta-1) \sum_{a=1}^{q}\sum_{b=q+1}^n \delta_{ab} L_0^{a} R_0^{b}, & \quad \textrm{for spin $1$}\;, \\ \mathcal{Q}_{0} &= -\frac{(\Delta+1)\Gamma(\Delta-1)}{2}\sum_{a,b=1}^q\sum_{i,j=q+1}^{n}\delta_{ai}(\delta_{bj}+\textrm{{\fancy{$\delta$}}}_{b}^{a}\textrm{{\fancy{$\delta$}}}_{j}^{i} )L_0^{ab}R_{0}^{ij} & \quad \textrm{for spin $2$}\;. \end{align} The most general expressions for factorization with arbitrary external and internal dimensions and $m$ can be found in \cite{Goncalves:2014rfa}, but they are not needed in this paper. As in the scalar case, the truncation of poles also relies on the form of the spinning four-point amplitudes. They are given in Appendix \ref{Sec:strongcouplingcorr} (see (\ref{eq:currentMellinexample}) and (\ref{eq:StressMellinexample}) for explicit expressions). In particular, they have the same analytic structure as the scalar four-point amplitude (\ref{eq:prototypical4Mellin}) except that now they carry additional indices. As a result, the truncation of poles also holds for the exchange of spinning operators. More precisely, we have the same pole locations as in (\ref{deltaLR}), where the allowed values for $m$ are $m=0,1,2$ for $(12)$ and $m=0$ for $(45)$, $(13)$. To summarize, the Mellin factorization formulas allow us to reconstruct the full polar part of the amplitude from the lower-point Mellin amplitudes. Furthermore, the spectrum of the theory gives rise to a further simplification where the poles truncate to a finite range independent of $p$. \subsection{Drukker-Plefka twist in Mellin space}\label{subsec:DrukkerPlefkaTwist} As we reviewed in the introduction, the two superconformal constraints, namely the chiral algebra and the Drukker-Plefka twist, were both formulated and implemented in position space \cite{Goncalves:2019znr}. To have a more streamlined algorithm, we would like to perform the bootstrap entirely within Mellin space and therefore need to translate such position space constraints into Mellin space.
Let us first define the Mellin amplitude more precisely by restoring the R-symmetry dependence suppressed in the definition (\ref{defMellin5pt}). For the $\langle pp222\rangle$ correlator, we have \begin{align}\label{defMellinRsymm} &G_p(x_i,t_i) = \int [d\delta] \mathcal{M} (\delta_{ij},t_{ij})\, \prod_{1\leq i<j\leq 5} \Gamma(\delta_{ij})(x_{ij}^2)^{-\delta_{ij}}\;, \end{align} where $\mathcal{M} (\delta_{ij},t_{ij})$ is a linear combination of the 29 R-symmetry structures listed in (\ref{indepRstructures}). Usually the implementation of the twists in Mellin space is achieved by using the observation that $x_{ij}^2$ monomials multiplying the Mellin transform (\ref{defMellinRsymm}) can be absorbed into the definition by shifting the Mellin-Mandelstam variables. This gives rise to difference equations in Mellin space. This strategy has been used, for example, in \cite{Zhou:2017zaw,Zhou:2018ofp} to rewrite the superconformal Ward identities in Mellin space for four-point functions. In our case, there are extra complications. The issue is that the chiral algebra constraint requires all the operators to be on a two-dimensional plane. When the number of operators $n>4$, this cannot be achieved by a conformal transformation and there are relations among the cross ratios.\footnote{In two dimensions, the number of independent cross ratios is $2n-6$ for $n\geq 3$. However, in high enough spacetime dimensions, the number of independent cross ratios is $\frac{n(n-3)}{2}$. The relation for the cross ratios can be written in the form ${\rm det}\,M=0$ where the matrix $M$ has elements $M_{ij}=x_{ij}^2$. } The meromorphy of the correlator after the chiral algebra twist depends crucially on these relations. On the other hand, these relations do not hold in the definition of the Mellin amplitude where the locations of the operators are assumed to be general. Therefore, the position space chiral algebra condition cannot be translated into Mellin space using the same strategy.
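The coplanarity relation quoted in the footnote, ${\rm det}\,M=0$ with $M_{ij}=x_{ij}^2$, can be verified numerically. The following sketch (our own illustration, not part of the original argument) checks that the determinant vanishes for five points on a two-dimensional plane but not for five generic points in four dimensions.

```python
import random

def det_xij_matrix(points):
    """Determinant of the n x n matrix M with M[i][j] = |x_i - x_j|^2,
    via Gaussian elimination with partial pivoting (standard library only)."""
    n = len(points)
    M = [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
          for j in range(n)] for i in range(n)]
    det = 1.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-13:
            return 0.0  # numerically rank deficient
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n):
                M[r][k] -= f * M[col][k]
    return det

random.seed(0)
# five points confined to a two-dimensional plane inside R^4: det M = 0
planar = [(random.random(), random.random(), 0.0, 0.0) for _ in range(5)]
assert abs(det_xij_matrix(planar)) < 1e-8
# five generic points in R^4: det M is nonzero
generic = [tuple(random.random() for _ in range(4)) for _ in range(5)]
assert abs(det_xij_matrix(generic)) > 1e-8
```

The vanishing follows because $x_{ij}^2=x_i^2+x_j^2-2x_i\cdot x_j$ gives $M$ rank at most $4$ for coplanar points, so a $5\times 5$ determinant must vanish.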
By contrast, the Drukker-Plefka twist only imposes conditions on the R-symmetry polarizations and has no restriction on the operator insertions. Therefore, we can use the same trick to implement the Drukker-Plefka twist in Mellin space. More precisely, we can extract a kinematic factor and rewrite (\ref{defMellinRsymm}) in terms of cross ratios (\ref{eq:crossratiosPos}), (\ref{eq:crossratiosRCharge}) \begin{equation} G_p(x_i,t_i) = K_p \int d\delta_{ij} \mathcal{M} (\delta_{ij},\sigma_i) \Gamma_{\textrm {pp222}} \,u_{1}^{p-\delta_{12}}u_{2}^{-\delta_{23}}\, u_{3}^{2-\delta_{34}}\, u_{4}^{-\delta_{45}}\, u_{5}^{1-\delta_{15}} \;. \end{equation} Here $K_p$ is a kinematic factor \begin{equation} K_p=\frac{x_{13}^2t_{12}^pt_{34}^2 t_{15} t_{35}}{(x_{12}^2)^{p} (x_{34}^2)^{2}(x_{15}^2x_{35}^2) t_{13}}\;, \end{equation} and \begin{align} &\Gamma_{\textrm {pp222}} = \Gamma \left(\delta _{12}\right) \Gamma \left(\delta _{15}\right) \Gamma \left(\delta _{23}\right) \Gamma \left(\delta _{15}-\delta _{23}-\delta _{34}+1\right) \Gamma \left(\delta _{34}\right) \Gamma \left(\delta _{23}+1-\delta _{15}-\delta _{45}\right) \Gamma \left(\delta _{45}\right)\nonumber\\ & \Gamma \left(p-\delta _{12}-\delta _{15}+\delta _{34}-1\right) \Gamma \left(\delta _{12}-p-\delta _{34}-\delta _{45}+3\right) \Gamma \left(p-\delta _{12}-\delta _{23}+\delta _{45}-1\right)\;. \end{align} Moreover, we have chosen $\delta_{12},\delta_{23},\delta_{34},\delta_{45}$ and $\delta_{15}$ as the independent Mellin variables. Performing the Drukker-Plefka twist amounts to setting $t_{ij}\rightarrow x_{ij}^2$, or equivalently $\sigma_i \rightarrow u_i$ for the cross ratios.
To implement this in practice, we notice that performing the twist reduces to multiplying the Mellin representations of the different terms of the correlator $K_p^{-1}G_p(x_i,t_i)$ by monomials $u_1^{n_1}\,u_2^{n_2}u_3^{n_3}u_4^{n_4}u_5^{n_5}$ \begin{equation} \mathcal{M}(\delta_{ij},\sigma_i)=\sum_{\{n_i\}}\sigma_1^{n_1}\,\sigma_2^{n_2}\sigma_3^{n_3}\sigma_4^{n_4}\sigma_5^{n_5}\mathcal{M}_{\{n_i\}}(\delta_{ij})\to \sum_{\{n_i\}}u_1^{n_1}\,u_2^{n_2}u_3^{n_3}u_4^{n_4}u_5^{n_5}\mathcal{M}_{\{n_i\}}(\delta_{ij})\;. \end{equation} We can absorb these monomials by shifting $\delta_{ij}$, which acts on the Mellin amplitudes as a difference operator \begin{equation} u_1^{n_1}\,u_2^{n_2}u_3^{n_3}u_4^{n_4}u_5^{n_5}\mathcal{M}_{\{n_i\}}(\delta_{ij}) \rightarrow \mathbb{D}_{n_1,\ldots,n_5}\circ \mathcal{M}_{\{n_i\}}(\delta_{ij})\;, \end{equation} where the explicit action of $\mathbb{D}_{n_1,\ldots,n_5}$ reads \begin{align} &\mathbb{D}_{n_1,\ldots,n_5}\circ \mathcal{M}_{\{n_i\}}(\delta_{ij}) = \mathcal{M}_{\{n_i\}}(\delta_{12} +n_1,\delta_{23}+n_2,\dots ) \times \left(\delta _{12}\right)_{n_1} \left(\delta _{15}\right)_{n_5} \left(\delta _{23}\right)_{n_2} \left(\delta _{34}\right)_{n_3} \left(\delta _{45}\right)_{n_4}\nonumber\\ &\left(\delta _{15}-\delta _{23}-\delta _{34}+1\right)_{n_5-n_2-n_3} \left(\delta _{23}-\delta _{15}-\delta _{45}+1\right)_{n_2-n_4-n_5} \left(p-\delta _{12}-\delta _{15}+\delta _{34}-1\right)_{n_3-n_1-n_5} \nonumber\\ &\left(\delta _{12}-p-\delta _{34}-\delta _{45}+3\right)_{n_1-n_3-n_4} \left(p-\delta _{12}-\delta _{23}+\delta _{45}-1\right)_{n_4-n_1-n_2}\;. \end{align} The various Pochhammer symbols come from comparing the shifted Gamma factors with those in the definition of the Mellin representation. The full difference operator from the Drukker-Plefka twist, denoted as $\mathbb{D}_{\rm DP}$, is then a sum of such operators acting on different R-symmetry structures. As we explained in Sec.
\ref{Subsec:DrukkerPlefkatwist}, the twisted correlator is just a constant in position space. Following \cite{Rastelli:2016nze,Rastelli:2017udc}, we should interpret its Mellin amplitude as zero. Therefore, the Drukker-Plefka twist condition becomes in Mellin space \begin{equation} \mathbb{D}_{\rm DP}\circ \mathcal{M}(\delta_{ij},\sigma_i)=0\;,\label{eq:DrukkerPlefkadifferenceOPE} \end{equation} which explicitly reads \begin{equation} \sum_{\{n_i\}}\mathbb{D}_{n_1,\ldots,n_5}\circ \mathcal{M}_{\{n_i\}}(\delta_{ij})=0\;. \end{equation} The implications of this equation are discussed in the following section. \section{Superconformal kinematics of five-point functions}\label{Sec:scfkine} We consider the correlation functions of the super primaries of the $\frac{1}{2}$-BPS multiplets. These are scalar operators $\mathcal{O}_k^{I_1\ldots I_k}$ with $I=1,\ldots,6$, $k=2,3,\ldots$, transforming in the rank-$k$ symmetric traceless representation of the $SO(6)$ R-symmetry group. Their conformal dimensions are protected by supersymmetry and are determined by the R-symmetry representation to be $\Delta=k$. Via the AdS/CFT correspondence, they are dual to scalar fields in AdS with KK level $k$ and are usually referred to as supergravitons. A convenient way to keep track of the R-symmetry information is to contract the indices with null polarization vectors \begin{equation} \mathcal{O}_k(x;t)=\mathcal{O}_k^{I_1\ldots I_k}t_{I_1}\ldots t_{I_k}\;,\quad t\cdot t=0\;. \end{equation} Our main target in this paper is the following five-point correlator \begin{equation} G_p(x_i;t_i)=\langle \mathcal{O}_p(x_1;t_1)\mathcal{O}_p(x_2;t_2) \mathcal{O}_2(x_3;t_3) \mathcal{O}_2(x_4;t_4) \mathcal{O}_2(x_5;t_5) \rangle\;.\label{eq:fivePtkk222def} \end{equation} More precisely, we will compute the leading connected contribution which is of order $1/N^3$ and corresponds to tree-level scattering in AdS.
The disconnected piece factorizes into a three-point function and a two-point function, and is protected because the lower-point functions are. Symmetry imposes strong constraints on the form of the correlator. For example, conformal symmetry allows us to write the five-point function as a function of five conformal cross ratios after extracting an overall kinematic factor\footnote{We are using a different, but equivalent, set of cross ratios here compared to \cite{Goncalves:2019znr}. These new cross ratios have appeared before in \cite{Bercini:2020msp}. One reason why these variables are nice is that it is possible to associate some $x_{ij}^2$ to $u_k$, for example $x_{12}^2$ only appears in $u_1$. Another interesting property is that they are cyclically related to each other. } \begin{equation} u_1 = \frac{x_{12}^2 x_{35}^2}{x_{13}^2 x_{25}^2}, \ \ u_2=\frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}, \ \ u_3=\frac{x_{25}^2 x_{34}^2}{x_{24}^2 x_{35}^2}, \ \ u_4=\frac{x_{13}^2 x_{45}^2}{x_{14}^2 x_{35}^2}, \ \ u_5=\frac{x_{15}^2 x_{24}^2}{x_{14}^2 x_{25}^2}\label{eq:crossratiosPos} \end{equation} where we have defined $x_{ij}=x_i-x_j$. Similarly, extracting a kinematic factor also allows us to express the R-symmetry dependence as a function of the following five R-symmetry cross ratios \begin{equation} \sigma_1 = \frac{t_{12}t_{35}}{t_{13}t_{25}}, \ \ \sigma_2=\frac{t_{14}t_{23}}{t_{13}t_{24}}, \ \ \sigma_3=\frac{t_{25}t_{34}}{t_{24}t_{35}}, \ \ \sigma_4=\frac{t_{13}t_{45}}{t_{14}t_{35}}, \ \ \sigma_5=\frac{t_{15}t_{24}}{t_{14}t_{25}}\label{eq:crossratiosRCharge} \end{equation} where we have introduced the shorthand notation $t_{ij}=t_i\cdot t_j$. However, there is more we can say about the R-symmetry dependence. Since the polarization vectors $t_i$ are just multiplied to saturate the R-symmetry indices, they must appear in $G_p$ with positive powers.
Therefore, $G_p$ must be a collection of monomials of the form $\prod_{i<j}t_{ij}^{a_{ij}}$, with the conditions \begin{equation} a_{ij}=a_{ji}\geq 0\;,\quad \sum_{j\neq i}a_{ij}=k_i\;, \end{equation} where $k_1=k_2=p$, $k_3=k_4=k_5=2$ are the weights of the external operators. Note that the number of these monomials is finite and we will refer to them as different R-symmetry structures. In Section \ref{Subsec:Rsymm}, we will explicitly write down these structures. The considerations so far have only used the bosonic symmetries in the full superconformal group. The dependences on the spacetime variables $x_{ij}^2$ and on the R-symmetry variables $t_{ij}$ are not related. However, the fermionic generators in the superconformal group will impose further constraints which correlate the $x_{ij}^2$ and $t_{ij}$ dependence. For five-point functions, a thorough analysis of the full consequences of the fermionic symmetries has not been performed in the literature. However, two classes of such constraints are known. The first is the chiral algebra construction \cite{Beem:2013sza} which constrains the five-point function when all the operators are inserted on a two-dimensional plane. The other is the Drukker-Plefka twist \cite{Drukker:2009sf} which imposes constraints on the correlator with generic insertion positions. In this paper, we will only need the latter. We will review these conditions in Section \ref{Subsec:DrukkerPlefkatwist}. \subsection{R-symmetry}\label{Subsec:Rsymm} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{fig_Rsymmstructures} \caption{Inequivalent R-symmetry structures in the $\langle pp222\rangle$ five-point function. Here $(a_1,a_2)$ is $(1,2)$ or $(2,1)$ and $(a_3,a_4,a_5)$ can be any permutation of $(3,4,5)$. Each thin line represents a single contraction. The thick line represents the multi-contraction $t_{12}^a$ with the power $a$ given by the number next to the line.
The R-symmetry structures in the first row have counterparts in the $\langle22222\rangle$ five-point correlator. For $\langle pp222\rangle$ they are simply obtained by multiplying the $p=2$ structures with $t_{12}^{p-2}$. The R-symmetry structures in the second row are new and do not appear in $\langle22222\rangle$.} \label{fig:Rsymmstructures} \end{figure} A systematic way to enumerate the R-symmetry structures of the $\langle pp222\rangle$ five-point function is to consider the Wick contractions. Different Wick contractions are illustrated in Fig. \ref{fig:Rsymmstructures} and the corresponding R-symmetry structures are explicitly given by \begin{equation} \begin{split} P^{({\rm I})}_{a_3a_4a_5}={}&t_{12}^{p-1}t_{2a_3}t_{a_3a_4}t_{a_4a_5}t_{1a_5}\;,\\ P^{({\rm II})}_{a_3a_4a_5}={}&t_{12}^{p-2}t_{1a_3}t_{2a_3}t_{2a_4}t_{a_4a_5}t_{1a_5}\;,\\ T^{({\rm I})}_{a_3a_4a_5}={}&t_{12}^pt_{a_3a_4}t_{a_4a_5}t_{a_3a_5}\;,\\ T^{({\rm II})}_{a_3a_4a_5}={}&t_{12}^{p-1}t_{2a_3}t_{1a_3}t_{a_4a_5}^2\;,\\ T^{({\rm III})}_{a_1a_2a_3a_4a_5}={}&t_{a_1a_2}^{p-2}t_{a_1a_4}t_{a_4a_5}t_{a_1a_5}t_{a_2a_3}^2\;,\\ N^{({\rm I})}_{a_3a_4a_5}={}&t_{12}^{p-3}t_{1a_3}t_{1a_5}t_{2a_3}t_{2a_5}t_{1a_4}t_{2a_4}\;,\\ N^{({\rm II})}_{a_3a_4a_5}={}&t_{12}^{p-3}t_{1a_3}^2t_{2a_4}^2t_{1a_5}t_{2a_5}\;. \end{split} \end{equation} Here $(a_1,a_2)$ is $(1,2)$ or $(2,1)$ and $(a_3,a_4,a_5)$ can be any permutation of $(3,4,5)$. The Wick contractions in the first row of Fig. \ref{fig:Rsymmstructures} exist for all $p\geq 2$ while those in the second row are only possible when $p\geq 3$. This is a new phenomenon that arises at the level of five-point functions and should be contrasted with the four-point function case. In the four-point function $\langle pp22\rangle$, the number of Wick contractions is the same irrespective of the Kaluza-Klein weight $p$.\footnote{In fact, this is true even in the more general case $\langle pqrs \rangle$ as long as the extremality $E$ of the correlator remains the same.
Here extremality is defined as $E=s-p-q-r$ and we have assumed that $s$ is the largest weight of them.} For $p=2$, all the five points are on the same footing and there is no distinction between $P^{({\rm I})}_{a_3a_4a_5}$, $P^{({\rm II})}_{a_3a_4a_5}$ and among $T^{({\rm I})}_{a_3a_4a_5}$, $T^{({\rm II})}_{a_3a_4a_5}$, $T^{({\rm III})}_{a_1a_2a_3a_4a_5}$. Multiplying them by $t_{12}^{p-2}$ gives the corresponding structures when $p>2$. Note that even when $p\geq 3$, some of these R-symmetry structures in Fig. \ref{fig:Rsymmstructures} still have residual symmetries and are invariant under certain permutations of $\{a_3,a_4,a_5\}$. For example, $T^{({\rm I})}_{a_3a_4a_5}=T^{({\rm I})}_{a_4a_3a_5}=T^{({\rm I})}_{a_3a_5a_4}$ and $T^{({\rm II})}_{a_3a_4a_5}=T^{({\rm II})}_{a_3a_5a_4}$. We choose the independent R-symmetry structures to be \begin{equation}\label{indepRstructures} \begin{split} {}&P^{({\rm I,II})}_{a_3a_4a_5}:\quad\quad (a_3,a_4,a_5)\in\{(3,4,5),(3,5,4),(4,3,5),(4,5,3),(5,3,4),(5,4,3)\}\;,\\ {}& T^{({\rm I})}_{a_3a_4a_5}:\quad\quad (a_3,a_4,a_5)\in\{(3,4,5)\}\;,\\ {}& T^{({\rm II})}_{a_3a_4a_5}:\quad\quad (a_3,a_4,a_5)\in\{(3,4,5),(4,3,5),(5,3,4)\}\;,\\ {}& T^{({\rm III})}_{a_1a_2a_3a_4a_5}:\;\;\; (a_1,a_2,a_3,a_4,a_5)\in\{(1,2,3,4,5),(1,2,4,3,5),(1,2,5,3,4),(2,1,3,4,5),\\ {}&\quad\quad\quad\quad\quad\quad\quad(2,1,4,3,5),(2,1,5,3,4)\}\;,\\ {}& N^{({\rm I})}_{a_3a_4a_5}:\quad\quad (a_3,a_4,a_5)\in\{(3,4,5)\}\;,\\ {}& N^{({\rm II})}_{a_3a_4a_5}:\quad\quad (a_3,a_4,a_5)\in\{(3,4,5),(3,5,4),(4,3,5),(4,5,3),(5,3,4),(5,4,3)\}\;. \end{split} \end{equation} This gives in total 29 independent R-symmetry structures. When $p=2$, $N^{({\rm I})}_{a_3a_4a_5}$ and $N^{({\rm II})}_{a_3a_4a_5}$ do not exist and we have 22 structures. 
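The counting of independent R-symmetry structures can be confirmed by a brute-force enumeration of the contraction matrices $a_{ij}$ introduced above (an illustrative sketch, not part of the original text):

```python
from itertools import product

def count_structures(p):
    # count symmetric non-negative integer contraction matrices a_ij with
    # a_ii = 0 and row sums (p, p, 2, 2, 2) -- one per R-symmetry structure
    pairs = [(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5),
             (3, 4), (3, 5), (4, 5)]
    count = 0
    for vals in product(range(3), repeat=len(pairs)):
        a = dict(zip(pairs, vals))
        # each weight-2 operator must be fully contracted
        if any(sum(v for pair, v in a.items() if i in pair) != 2
               for i in (3, 4, 5)):
            continue
        # a_12 is fixed by the weight of operator 1 ...
        a12 = p - sum(v for pair, v in a.items() if 1 in pair)
        # ... and must be non-negative and consistent with the weight of operator 2
        if a12 >= 0 and a12 + sum(v for pair, v in a.items() if 2 in pair) == p:
            count += 1
    return count

assert count_structures(2) == 22   # the N-type structures drop out at p = 2
assert all(count_structures(p) == 29 for p in (3, 4, 7))
```

The enumeration reproduces the $29$ structures for $p\geq 3$ and the $22$ structures for $p=2$ quoted above; the seven missing structures at $p=2$ are exactly those requiring $a_{12}=p-3\geq 0$.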
\subsection{Drukker-Plefka twist and chiral algebra}\label{Subsec:DrukkerPlefkatwist} A highly nontrivial constraint from superconformal symmetry is given by the topological twist discovered in \cite{Drukker:2009sf}, which we will refer to as the Drukker-Plefka twist. In \cite{Drukker:2009sf}, it was found that when the operators have the following position-dependent polarization vectors (commonly referred to as a twist) \begin{equation} \bar{t}_i=(i x_i^1,i x_i^2,i x_i^3,i x_i^4,\frac{i}{2}(1-x_i^2),\frac{1}{2}(1+x_i^2))\;, \end{equation} the twisted correlator preserves a certain nilpotent supercharge. The twisted operators are in its cohomology. More importantly, translations of the operators, with the polarizations kept twisted, are exact. It then follows that the twisted correlators are topological, {\it i.e.}, independent of the insertion locations \begin{equation}\label{twistGp} G_p(x_i;\bar{t}_i)={\rm constant}\;. \end{equation} Note that in terms of the variables $x_{ij}^2$ and $t_{ij}$, the twist condition can also be written as $t_{ij}=x_{ij}^2$. Let us also mention another twist for contrast, namely the chiral algebra \cite{Beem:2013sza}. However, we will not exploit this twist in this paper. The chiral algebra twist requires that all the operators are inserted on a two-dimensional plane. The coordinates can therefore be parameterized by the complex coordinates $z$, $\bar{z}$. Furthermore, the polarization vectors need to be restricted to be four-dimensional \begin{equation} t_i=(t_i^\mu,0,0)\;,\quad \mu=1,2,3,4\;, \end{equation} where $t^\mu$ can be written in terms of two-component spinors \begin{equation} t_i^\mu=\sigma^\mu_{\alpha\dot{\alpha}}v^\alpha\bar{v}^{\dot{\alpha}}\;. \end{equation} Using the rescaling freedom of the polarization vector, we can write $v$ and $\bar{v}$ as \begin{equation} v_i=(1,w_i)\;,\quad \bar{v}_i=(1,\bar{w}_i)\;.
\end{equation} When we twist the operators by setting $\bar{w}_i=\bar{z}_i$, the correlator also preserves a certain nilpotent supercharge. The twisted operators are in its cohomology while the antiholomorphic twisted translations are exact. Therefore, the twisted correlator is a meromorphic function of the $z_i$ only. \section{Bootstrapping five-point Mellin amplitudes}\label{Sec:bootstrapMellin} \subsection{Strategy and ansatz} After introducing all the necessary ingredients, we are now ready to state our strategy, which consists of three steps. First, we start by formulating an ansatz in Mellin space which is based on our analysis of the analytic structure of the Mellin amplitudes. Second, we impose the Mellin factorization condition which is the statement that the pole residues should be correctly reproduced by the lower-point amplitudes. Finally, we implement the Drukker-Plefka twist in Mellin space and completely fix the ansatz. In the following, we explain the details of each step. \vspace{0.3cm} \noindent{\it Step 1: Ansatz} \vspace{0.2cm} \noindent As we emphasized in the previous section, Mellin amplitudes are meromorphic functions with simple poles corresponding to the exchange of single-trace operators and residues related to lower-point amplitudes via factorization. Based on this, we have the following ansatz for the $\langle pp222\rangle$ Mellin amplitude \begin{align} \nonumber \mathcal{M}(\delta_{ij},t_{ij}) =& \sum_{m=0}^{2} \frac{A_{m}(\delta_{ij},t_{ij})}{(\delta_{12} +1+m-p)}+\sum_{\bar{a}=1,2,a=3,4,5} \frac{B_{\bar{a}a}(\delta_{ij},t_{ij})}{\delta_{\bar{a}a}-1}+ \sum_{3\leq a < b \leq5} \frac{C_{ab}(\delta_{ij},t_{ij})}{\delta_{ab}-1}\\ &+D(\delta_{ij},t_{ij})\;.\label{eq:ansatzMellin} \end{align} Here $A_{m}(\delta_{ij},t_{ij})$ is a rational function with possible poles in $\delta_{34}$, $\delta_{35}$, $\delta_{45}$.
In particular, it includes simultaneous poles which correspond to double exchange processes in the (12), (34) channels {\it etc}. Similarly, $B_{\bar{a}a}(\delta_{ij},t_{ij})$ is a rational function with possible poles in $\delta_{kl}$ at $\delta_{kl}=1$. The labels $k$, $l$ need to satisfy $k,l \neq \bar{a},a$ but can be both from the set $\{3,4,5\}$, or belong to different sets $\{1,2\}$ and $\{3,4,5\}$, see equation \eqref{othertermsinAnsatz}. To avoid double counting, $C_{ab}(\delta_{ij},t_{ij})$ and $D(\delta_{ij},t_{ij})$ do not have poles; they are polynomial functions of the Mellin-Mandelstam variables. Note that here we have also used our Mellin factorization analysis for the subleading poles from Section \ref{Subsec:Mellinfactorization}, which imposes that the poles in the (12) channel truncate at $m=0,1,2$. More concretely, the function $A_m(\delta_{ij},t_{ij})$ in the ansatz has the following form \begin{align} \label{eq:P12form} A_{m} (\delta_{ij},t_{ij}) = \frac{A_{34,m}(\delta_{ij},t_{ij})}{\delta_{34}-1}+\frac{A_{35,m}(\delta_{ij},t_{ij})}{\delta_{35}-1}+\frac{A_{45,m}(\delta_{ij},t_{ij})}{\delta_{45}-1}+ A_{\emptyset,m}(\delta_{ij},t_{ij})\;, \end{align} where $A_{34,m}$, $A_{35,m}$ and $A_{45,m}$ are polynomials of degree $2$ and $A_{\emptyset,m}$ is a polynomial of degree $1$. Written explicitly, $A_{34,m}$ reads \begin{align} A_{34,m}(\delta_{ij},t_{ij}) = \sum_{\alpha_i}^{\alpha_1+\alpha_2+\alpha_3\leq 2} \sum_{I=1}^{29} a^I_{34,m,\{\alpha_i\}} \delta_{23}^{\alpha_1}\delta_{25}^{\alpha_2}\delta_{45}^{\alpha_3} \mathcal{T}_I, \end{align} where $\{\delta_{23},\delta_{25},\delta_{45}\}$ are chosen to be the independent Mellin-Mandelstam variables in addition to $\delta_{12}$ and $\delta_{34}$ which already appear in the poles. We have also used $\{\mathcal{T}_I\}$ to denote collectively the 29 independent R-symmetry structures in (\ref{indepRstructures}). The expressions for $A_{35,m}$, $A_{45,m}$ are similar.
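To get a sense of the size of this ansatz, one can count its free parameters: a polynomial of total degree at most $2$ in the three variables $\{\delta_{23},\delta_{25},\delta_{45}\}$ has $10$ monomials, so each $A_{34,m}$ carries $10\times 29=290$ coefficients $a^I_{34,m,\{\alpha_i\}}$, while a degree-$1$ polynomial in four Mellin-Mandelstam variables has $5$ monomials, giving $145$ coefficients. A minimal counting sketch (the helper \texttt{num\_monomials} is ours, not notation from the paper):

```python
from itertools import product

def num_monomials(n_vars, max_deg):
    """Number of monomials of total degree <= max_deg in n_vars variables."""
    return sum(1 for exps in product(range(max_deg + 1), repeat=n_vars)
               if sum(exps) <= max_deg)

n_structures = 29  # independent R-symmetry structures T_I

# A_{34,m}: degree <= 2 in {delta_23, delta_25, delta_45}
coeffs_A34m = num_monomials(3, 2) * n_structures
# A_{empty,m}: degree <= 1 in four Mellin-Mandelstam variables
coeffs_Aemptym = num_monomials(4, 1) * n_structures

print(coeffs_A34m, coeffs_Aemptym)  # 290 145
```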
The polynomial $A_{\emptyset,m}$ is given by \begin{align} A_{\emptyset,m} = \sum_{\alpha_i}^{\alpha_1+\alpha_2+\alpha_3+\alpha_4\leq 1} \sum_{I=1}^{29} a^I_{\emptyset,m,\{\alpha_i\}} \delta_{23}^{\alpha_1}\delta_{25}^{\alpha_2}\delta_{45}^{\alpha_3}\delta_{34}^{\alpha_4} \mathcal{T}_I. \end{align} The other terms in the ansatz are similar and are given by \begin{align} \nonumber B_{13} =& \sum_{I=1}^{29}\bigg(\sum_{\alpha_i}^{\alpha_1+\dots+\alpha_3\leq 2}\bigg[\frac{ b^I_{13,24,\{\alpha_i\}} \delta_{15}^{\alpha_1} \delta_{23}^{\alpha_2}\delta_{45}^{\alpha_3} }{\delta_{24}-1} +\frac{b^I_{13,45,\{\alpha_i\}}\delta_{15}^{\alpha_1} \delta_{23}^{\alpha_2}\delta_{34}^{\alpha_3}}{\delta_{45}-1} +\frac{b^I_{13,25,\{\alpha_i\}}\delta_{23}^{\alpha_1} \delta_{34}^{\alpha_2}\delta_{45}^{\alpha_3}}{\delta_{25}-1}\bigg] \label{othertermsinAnsatz} \\ &+ \sum_{\alpha_i}^{\alpha_1+\dots+\alpha_4\leq 1} b^I_{13,\emptyset,\{\alpha_i\}}\delta_{23}^{\alpha_1}\delta_{34}^{\alpha_2} \delta_{45}^{\alpha_3}\delta_{15}^{\alpha_4}\bigg)\mathcal{T}_I\;,\\ \nonumber C_{34} =& \sum_{\alpha_i}^{\alpha_1+\dots+\alpha_4\leq 1}\sum_{I=1}^{29} c^I_{34,\{\alpha_i\}} \delta_{12}^{\alpha_1}\delta_{23}^{\alpha_2}\delta_{45}^{\alpha_3}\delta_{15}^{\alpha_4}\mathcal{T}_I\;,\\ \nonumber D =& \sum_{I=1}^{29} d^I \mathcal{T}_I \;. \end{align} In making the ansatz we have assumed that the degrees of the various polynomials are the same as in the $p=2$ correlator. This is expected from the flat-space limit, which is related to the high-energy limit of the Mellin amplitude \cite{Penedones:2010ue}. This can also be confirmed by Mellin factorization, which will be used in greater detail in the next step.\footnote{For example, it is straightforward to see that these are the correct degrees when exchanging scalar operators. Exchanging vector or tensor fields is a bit more nontrivial, but it is possible to check that the degrees are correct.
The only subtle point which evades the factorization argument is the degree of the regular piece. However, it is natural to assume that the degree is the same as in the $p=2$ case so that it has the same high-energy growth as the other terms.} \vspace{0.3cm} \noindent{\it Step 2: Mellin factorization} \vspace{0.2cm} \noindent The second step of our strategy is to impose Mellin factorization. As explained in the previous section, all the polar terms of the Mellin amplitude can be completely fixed in terms of the lower-point Mellin amplitudes. For the $\langle pp222\rangle$ five-point function, all these lower-point amplitudes are known and are given in Appendix \ref{Sec:strongcouplingcorr}. These lower-point functions depend on $R$-symmetry polarization vectors. One important detail which we did not discuss is how to glue together the $R$-symmetry structures in the lower-point functions using the representation of the exchanged fields. This step is explained in detail in Appendix \ref{Sec:rSymmetryGluing}. Thus all terms in the ansatz (\ref{eq:ansatzMellin}), except for the regular term $D$, can be fixed by using this factorization procedure. Note that the number of coefficients that remain unfixed in the ansatz is quite low, as $D$ is just a constant with respect to the Mellin-Mandelstam variables. It can depend only on the coefficients of the linear combination of the 29 $R$-symmetry structures. \vspace{0.3cm} \noindent{\it Step 3: Drukker-Plefka twist} \vspace{0.2cm} \noindent The final step is to impose the Drukker-Plefka twist. As explained in Section \ref{Subsec:DrukkerPlefkatwist}, this twist can be phrased in terms of a difference operator $\mathbb{D}_{\rm DP}$ acting on the Mellin amplitude, see (\ref{eq:DrukkerPlefkadifferenceOPE}).
This relates the regular part with the singular part already fixed by factorization and completely fixes the remaining coefficients\footnote{At the same time, the Drukker-Plefka twist provides a very nontrivial consistency check for the procedure of extracting correlation functions of super-descendants and the gluing of R-symmetry structures described in the appendices.}. Using this strategy, we obtain the $\langle pp222\rangle$ Mellin amplitudes in a closed form for arbitrary $p$. The final result for the Mellin amplitudes will be presented in the following subsections\footnote{It would also be interesting to extend this analysis to the first correction in $\alpha'$. One promising candidate is the $p=2$ case since it is more symmetric and we can also use the known results for the four-point function as an input \cite{Goncalves:2014ffa}. }. \subsection{Mellin amplitude for $p=2$} Due to the many R-symmetry structures involved, the expression for the full Mellin amplitude appears to be quite complicated at first sight. Therefore, before we present the Mellin amplitude for general $p$, let us first revisit the $p=2$ result of \cite{Goncalves:2019znr} and present it in a simpler way. When $p=2$, the amplitude is symmetric under permutations of all five external points. The 22 R-symmetry structures also split into two classes, and within each class the structures are related by permutations. The first class consists of the pentagon contractions \begin{equation} P_a=\{t_{12}t_{23}t_{34}t_{45}t_{15},\ldots\}\;,\quad a=1,2,\ldots, 12\;, \end{equation} which includes $P^{({\rm I,II})}_{a_3a_4a_5}$ in (\ref{indepRstructures}). The second class consists of contractions of three points times the contraction of the remaining two points \begin{equation} T_a=\{t_{12}t_{23}t_{13}t_{45}^2,\ldots\}\;,\quad a=1,2,\ldots, 10\;, \end{equation} which includes $T^{({\rm I,II,III})}_{a_3a_4a_5}$ in (\ref{indepRstructures}).
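The counts of the two classes follow from simple combinatorics: a pentagon contraction is a $5$-cycle on the five labeled points, of which there are $4!/2=12$ (modulo rotations and reflections), while a triangle-times-pair contraction is fixed by the choice of the $3$-point subset, giving $\binom{5}{3}=10$. A quick enumeration confirming $12+10=22$:

```python
from itertools import combinations, permutations

points = (1, 2, 3, 4, 5)

# Pentagon contractions t_{i1 i2} t_{i2 i3} ... t_{i5 i1}: distinct 5-cycles,
# identified up to rotations and reflections via their edge sets.
pentagons = {frozenset(frozenset((perm[k], perm[(k + 1) % 5]))
                       for k in range(5))
             for perm in permutations(points)}

# Triangle-times-pair contractions t_{ij} t_{jk} t_{ik} t_{lm}^2: determined
# by the triangle, since the remaining pair is then fixed.
triangles = list(combinations(points, 3))

print(len(pentagons), len(triangles))  # 12 10
```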
The full amplitude can be written as \begin{equation} \mathcal{M}_{p=2}=\sum_{a=1}^{12} \mathcal{M}^P_a P_a+\sum_{a=1}^{10} \mathcal{M}^T_a T_a\;. \end{equation} It is sufficient to determine the coefficient amplitudes $\mathcal{M}^P_1$ and $\mathcal{M}^T_1$, as the rest can be obtained by permutations. We find \begin{equation}\label{M1p} \begin{split} \mathcal{M}^P_1={}&4\sqrt{2} \bigg\{\frac{(\delta_{14}+\delta_{24})(\delta_{13}+\delta_{14})}{(\delta_{12}-1)(\delta_{34}-1)}+\frac{(\delta_{14}+\delta_{24})(\delta_{24}+\delta_{25})}{(\delta_{12}-1)(\delta_{45}-1)}+\frac{(\delta_{25}+\delta_{35})(\delta_{24}+\delta_{25})}{(\delta_{23}-1)(\delta_{45}-1)}\\ {}& +\frac{(\delta_{25}+\delta_{35})(\delta_{13}+\delta_{35})}{(\delta_{23}-1)(\delta_{15}-1)}+\frac{(\delta_{13}+\delta_{35})(\delta_{13}+\delta_{14})}{(\delta_{15}-1)(\delta_{34}-1)}+\frac{1}{2}\bigg(\frac{\delta_{35}}{\delta_{12}-1}+\frac{\delta_{14}}{\delta_{23}-1}\\ {}&+\frac{\delta_{25}}{\delta_{34}-1}+\frac{\delta_{13}}{\delta_{45}-1}+\frac{\delta_{24}}{\delta_{15}-1}\bigg)-2\bigg\}\;, \end{split} \end{equation} \begin{equation}\label{M1T} \begin{split} \mathcal{M}^T_1={}&-2\sqrt{2}\bigg(\frac{(\delta_{13}+\delta_{14})(\delta_{23}+\delta_{24})}{(\delta_{12}-1)(\delta_{34}-1)}+\frac{(\delta_{13}+\delta_{15})(\delta_{23}+\delta_{25})}{(\delta_{12}-1)(\delta_{35}-1)}+\frac{(\delta_{14}+\delta_{15})(\delta_{24}+\delta_{25})}{(\delta_{12}-1)(\delta_{45}-1)}\bigg)\;. \end{split} \end{equation} Note that within each coefficient amplitude, the terms are related by the permutations which preserve the corresponding R-symmetry structure. We will see that the Mellin amplitude for general $p$ also has similar structures. \subsection{Mellin amplitudes for general $p$} For $p>2$, we no longer have the full permutation symmetry and there are seven types of R-symmetry structures, as we discussed in Section \ref{Subsec:Rsymm}.
The Mellin amplitude can be written as a sum over all the inequivalent R-symmetry structures \begin{align}\label{M5ptgeneralp} \mathcal{M}_p={}&\sum_{\mathcal{I}_1}\mathcal{M}^{P, ({\rm I})}_{a_3a_4a_5} P^{({\rm I})}_{a_3a_4a_5}+\sum_{\mathcal{I}_1}\mathcal{M}^{P, ({\rm II})}_{a_3a_4a_5} P^{({\rm II})}_{a_3a_4a_5}+\mathcal{M}^{T,({\rm I})}_{345} T^{({\rm I})}_{345}+\sum_{\mathcal{I}_2}\mathcal{M}^{T, ({\rm II})}_{a_3a_4a_5} T^{({\rm II})}_{a_3a_4a_5}\nonumber\\ {}&+\sum_{\mathcal{I}_3}\mathcal{M}^{T, ({\rm III})}_{a_1a_2a_3a_4a_5} T^{({\rm III})}_{a_1a_2a_3a_4a_5}+\mathcal{M}^{N,({\rm I})}_{345} N^{({\rm I})}_{345}+\sum_{\mathcal{I}_1}\mathcal{M}^{N, ({\rm II})}_{a_3a_4a_5} N^{({\rm II})}_{a_3a_4a_5}\;, \end{align} where the sets $\mathcal{I}_{1,2,3}$ contain the following permutations \begin{align} \mathcal{I}_1={}&\{(3,4,5),(3,5,4),(4,3,5),(4,5,3),(5,3,4),(5,4,3)\}\;,\nonumber\\ \mathcal{I}_2={}&\{(3,4,5),(4,3,5),(5,3,4)\}\;,\\ \mathcal{I}_3={}&\{(1,2,3,4,5),(1,2,4,3,5),(1,2,5,3,4),(2,1,3,4,5),(2,1,4,3,5),(2,1,5,3,4)\}\nonumber \end{align} The coefficient Mellin amplitudes are given as follows. 
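Before listing the coefficient amplitudes, note a quick consistency check on the counting: the permutation orbits in (\ref{M5ptgeneralp}) account for all 29 independent R-symmetry structures, since $P^{({\rm I})}$, $P^{({\rm II})}$ and $N^{({\rm II})}$ each run over the six elements of $\mathcal{I}_1$, $T^{({\rm II})}$ over the three elements of $\mathcal{I}_2$, $T^{({\rm III})}$ over the six elements of $\mathcal{I}_3$, and $T^{({\rm I})}_{345}$, $N^{({\rm I})}_{345}$ are single structures. In code:

```python
from itertools import permutations

I1 = list(permutations((3, 4, 5)))            # all 6 orderings of {3,4,5}
I2 = [(3, 4, 5), (4, 3, 5), (5, 3, 4)]
I3 = [(1, 2) + p for p in I2] + [(2, 1) + p for p in I2]

# P^(I), P^(II), N^(II) each run over I1; T^(II) over I2; T^(III) over I3;
# T^(I)_{345} and N^(I)_{345} are single structures.
n_structures = 3 * len(I1) + len(I2) + len(I3) + 2
print(n_structures)  # 29
```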
For the structures of $P^{({\rm I})}_{a_3a_4a_5}$, $P^{({\rm II})}_{a_3a_4a_5}$, the coefficients are \begin{align} \mathcal{M}^{P, ({\rm I})}_{a_3a_4a_5}={}&2\sqrt{2}p\bigg\{\frac{2}{p}\frac{\delta_{1a_4}+\delta_{2a_4}}{\delta_{12}-p+1}\bigg(\frac{\delta_{1a_3}+\delta_{1a_4}}{\delta_{a_3a_4}-1}+\frac{\delta_{2a_4}+\delta_{2a_5}}{\delta_{a_4a_5}-1}\bigg)+\frac{1}{p}\frac{\delta_{a_3a_5}}{\delta_{12}-p+1}\nonumber\\ {}&+\frac{p-2}{p}\frac{\delta_{1a_4}+\delta_{2a_4}-1}{\delta_{12}-p+2}\bigg(\frac{\delta_{1a_3}+\delta_{1a_4}}{\delta_{a_3a_4}-1}+\frac{\delta_{2a_4}+\delta_{2a_5}}{\delta_{a_4a_5}-1}-1\bigg)\nonumber\\ {}&-\frac{(p-2)(p-3)}{2p}\frac{\delta_{a_3a_5}}{\delta_{12}-p+3}+\frac{(\delta_{2a_5}+\delta_{a_3a_5})(\delta_{2a_4}+\delta_{2a_5})}{(\delta_{2a_3}-1)(\delta_{a_4a_5}-1)}+\frac{(\delta_{1a_3}+\delta_{a_3a_5})(\delta_{1a_3}+\delta_{1a_4})}{(\delta_{1a_5}-1)(\delta_{a_3a_4}-1)}\nonumber\\ {}&+\frac{p}{2}\frac{(\delta_{1a_3}+\delta_{a_3a_5})(\delta_{2a_5}+\delta_{a_3a_5})}{(\delta_{1a_5}-1)(\delta_{2a_3}-1)}+\frac{1}{2}\bigg(\frac{\delta_{2a_4}}{\delta_{1a_5}-1}+\frac{\delta_{1a_4}}{\delta_{2a_3}-1}\bigg)\nonumber\\ {}&+\frac{p-1}{p}\bigg(\frac{\delta_{2a_5}}{\delta_{a_3a_4}-1}+\frac{\delta_{1a_3}}{\delta_{a_4a_5}-1}\bigg)+\frac{6-7p}{2p}\bigg\}\;, \end{align} \begin{align} \mathcal{M}^{P, ({\rm II})}_{a_3a_4a_5}={}&\sqrt{2}p^2\bigg\{\frac{(\delta_{12}+\delta_{2a_5})(\delta_{2a_5}+\delta_{a_3a_5})}{(\delta_{1a_5}-1)(\delta_{2a_3}-1)}+\frac{(\delta_{12}+\delta_{1a_4})(\delta_{1a_4}+\delta_{a_3a_4})}{(\delta_{2a_4}-1)(\delta_{1a_3}-1)}\nonumber\\ {}&+\frac{(\delta_{12}+\delta_{2a_5})(\delta_{12}+\delta_{1a_4})}{(\delta_{1a_5}-1)(\delta_{2a_4}-1)}-\frac{(p-2)\delta_{12}}{(\delta_{1a_5}-1)(\delta_{2a_4}-1)}+\frac{2\delta_{12}}{p(\delta_{a_4a_5}-1)}\bigg(\frac{\delta_{1a_4}}{\delta_{2a_3}-1}+\frac{\delta_{2a_5}}{\delta_{1a_3}-1}\bigg)\nonumber\\
{}&-\frac{2(p+1)\delta_{12}}{p^2(\delta_{a_4a_5}-1)}+\frac{(p-2)(\delta_{12}+1)+\delta_{a_3a_4}}{p(\delta_{1a_5}-1)}+\frac{(p-2)(\delta_{12}+1)+\delta_{a_3a_5}}{p(\delta_{2a_4}-1)}\nonumber\\ {}&+\frac{1-p}{p}\bigg(\frac{\delta_{1a_4}}{\delta_{2a_3}-1}+\frac{\delta_{2a_5}}{\delta_{1a_3}-1}\bigg)+\frac{p-2}{p}\bigg\}\;. \end{align} Upon setting $p=2$, the two coefficient amplitudes become degenerate up to permutations and reproduce $\mathcal{M}_1^P$ in (\ref{M1p}). The coefficient Mellin amplitudes of $T^{({\rm I})}_{345}$, $T^{({\rm II})}_{a_3a_4a_5}$ and $T^{({\rm III})}_{a_1a_2a_3a_4a_5}$ are given by \begin{equation} \begin{split} \mathcal{M}^{T, ({\rm I})}_{345}={}&-2\sqrt{2}\bigg\{\frac{1}{\delta_{12}-p+1}\bigg(\frac{(\delta_{1a_3}+\delta_{1a_4})(\delta_{2a_3}+\delta_{2a_4})}{\delta_{a_3a_4}-1}+\frac{(\delta_{1a_3}+\delta_{1a_5})(\delta_{2a_3}+\delta_{2a_5})}{\delta_{a_3a_5}-1}\\ {}&+\frac{(\delta_{1a_4}+\delta_{1a_5})(\delta_{2a_4}+\delta_{2a_5})}{\delta_{a_4a_5}-1}\bigg)+\frac{(p-2)(\delta_{12}-p)}{\delta_{12}-p+2}\bigg\}\;, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{M}^{T, ({\rm II})}_{a_3a_4a_5}={}&-\frac{\sqrt{2}p(p-1)}{\delta_{a_4a_5}-1}\bigg\{\frac{(\delta_{2a_4}+\delta_{3a_4})(\delta_{2a_5}+\delta_{3a_5})}{\delta_{2a_3}-1}+\frac{(\delta_{1a_4}+\delta_{3a_4})(\delta_{1a_5}+\delta_{3a_5})}{\delta_{1a_3}-1}\\ {}&+\frac{2}{p(p-1)}\frac{(\delta_{1a_4}+\delta_{2a_4})(\delta_{1a_5}+\delta_{2a_5})}{\delta_{12}-p+1}+\frac{4(p-2)}{p(p-1)}\bigg(\frac{(\delta_{1a_4}+\delta_{2a_4}-1)(\delta_{1a_5}+\delta_{2a_5}-1)}{\delta_{12}-p+2}\\ {}&+\frac{p-3}{4}\frac{(\delta_{1a_4}+\delta_{2a_4}-2)(\delta_{1a_5}+\delta_{2a_5}-2)}{\delta_{12}-p+3}-\frac{1}{2}(p\delta_{a_4a_5}-p-1)\bigg)\bigg\}\;, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{M}^{T, ({\rm 
III})}_{a_1a_2a_3a_4a_5}={}&-\frac{\sqrt{2}p(p-1)}{\delta_{a_2a_3}-1}\bigg\{\frac{(\delta_{a_1a_2}+\delta_{a_2a_5})(\delta_{a_1a_3}+\delta_{a_3a_5})}{\delta_{a_1a_5}-1}+\frac{(\delta_{a_1a_2}+\delta_{a_2a_4})(\delta_{a_1a_3}+\delta_{a_3a_4})}{\delta_{a_1a_4}-1}\\ {}&+\frac{2}{p(p-1)}\frac{(\delta_{a_2a_4}+\delta_{a_2a_5})(\delta_{a_3a_4}+\delta_{a_3a_5}+p-2)}{\delta_{a_4a_5}-1}-\frac{p-2}{p-1}\bigg(\frac{\delta_{a_1a_2}\delta_{a_2a_4}}{\delta_{a_1a_5}-1}+\frac{\delta_{a_1a_2}\delta_{a_2a_5}}{\delta_{a_1a_4}-1}\\ {}&-\frac{1+2p}{p}\delta_{a_1a_2}-\frac{\delta_{a_2a_3}}{p}+1\bigg)\bigg\}\;. \end{split} \end{equation} They become $\mathcal{M}_1^T$ in (\ref{M1T}) when $p=2$. Finally, the coefficients of the two new structures $N^{({\rm I})}_{345}$, $N^{({\rm II})}_{a_3a_4a_5}$ are \begin{equation} \begin{split} \mathcal{M}^{N, ({\rm I})}_{345}={}&\sqrt{2}p^2(p-2)\delta_{12}\bigg\{\frac{1}{(\delta_{15}-1)(\delta_{23}-1)}+\frac{1}{(\delta_{15}-1)(\delta_{24}-1)}+\frac{1}{(\delta_{13}-1)(\delta_{24}-1)}\\ {}&+\frac{1}{(\delta_{13}-1)(\delta_{25}-1)}+\frac{1}{(\delta_{14}-1)(\delta_{23}-1)}+\frac{1}{(\delta_{14}-1)(\delta_{25}-1)}\\ {}&-\frac{2}{p}\bigg(\frac{1}{\delta_{15}-1}+\frac{1}{\delta_{25}-1}+\frac{1}{\delta_{13}-1}+\frac{1}{\delta_{23}-1}+\frac{1}{\delta_{14}-1}+\frac{1}{\delta_{24}-1}\bigg)\bigg\}\;, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{M}^{N, ({\rm II})}_{a_3a_4a_5}={}&-\sqrt{2}p(p-2)\delta_{12}\bigg\{\frac{\delta_{2a_3}}{(\delta_{1a_5}-1)(\delta_{2a_4}-1)}+\frac{\delta_{1a_4}}{(\delta_{2a_5}-1)(\delta_{1a_3}-1)}\\ {}&+\frac{1+\delta_{12}-p(\delta_{1a_3}+\delta_{2a_4}+\delta_{a_3a_4})}{p(\delta_{2a_4}-1)(\delta_{1a_3}-1)}\bigg\}\;. \end{split} \end{equation} Note that they are proportional to $p-2$ and therefore vanish for $p=2$. Let us also make a comment regarding the seemingly confusing behavior in the flat-space limit. The flat-space amplitude which one obtains from holographic correlators corresponds to that of gravitons.
In general, one expects that the dependence on the KK levels should factorize as different KK modes all correspond to the same particle in flat space. However, this is not the case if we naively take the high energy limit of the Mellin amplitudes. Clearly, the $p$-dependence is not factored out as the component amplitudes of the new R-symmetry structures for $p>2$ have the same high energy scaling behavior as the other component amplitudes. To understand this, it is important to note that the flat-space amplitude from AdS is in a special kinematic configuration where the polarizations of the gravitons are perpendicular to all the momenta \cite{Alday:2021odx}. However, such an amplitude for five points is zero in flat space.\footnote{This is easiest to see using double copy. The gluon five-point amplitude with orthogonal polarizations vanishes because it is impossible to contract five polarization vectors among themselves. By double copy, the graviton five-point amplitude also vanishes.} Therefore, the high energy limit of the Mellin amplitudes is not the flat-space amplitude as one might have naively expected. In fact, in applying the prescription of \cite{Penedones:2010ue}, there is an additional power of the inverse AdS radius $1/R$ which renders the flat-space limit zero. In other words, the high energy limit of the Mellin amplitudes computes only the $1/R$ corrections. We expect these corrections to have the same power counting for different KK modes. However, we do not expect their explicit expressions to be universal. \subsection{A comment on consistency} Let us make a comment regarding the consistency of our result. In Section \ref{Sec:Mellin} we proved the truncation of the poles in $\delta_{12}$ by using factorization in the $(12)$ channel which only exploits the general analytic structure of the resulting four-point amplitude. 
Here we point out that the truncation can also be seen from a different point of view when it involves simultaneous poles with another channel. For concreteness, let us focus on the residue of the amplitude at the pole $\delta_{45}=1$. The residue is, via the factorization in the (45) channel, related to a four-point function $\langle pp 2 X\rangle$, where the first three operators are those at points 1, 2, 3 respectively. As we know from (\ref{possibleexchanges}), the operator $X$ belongs to the $k=2$ multiplet and can be the superprimary $\mathcal{O}_2$, the R-symmetry current $\mathcal{J}_\mu$ or the stress tensor $\mathcal{T}_{\mu\nu}$. The Mellin amplitude of $\langle pp 2 X\rangle$ contains poles in $\delta_{12}$ due to the operator exchanges in the $(12)$ channel. These four-point Mellin amplitudes are given explicitly in Appendix \ref{Sec:strongcouplingcorr}, and we observe that the subleading poles in $\delta_{12}$ truncate, being absent for $m\geq 3$. This gives another derivation of the structure of the simultaneous poles in $\delta_{12}$ and $\delta_{45}$. Similar consistency checks have also been performed in other channels ({\it e.g.}, in the $(13)$ and $(45)$ channels), as well as for the R-symmetry gluing (see Appendix \ref{Sec:rSymmetryGluing} for details). \subsection{Comments on position space}\label{Sec:bootstrapposition} Up to this point, all of our discussions have been exclusively in Mellin space. This is mainly because of the simplified analytic structure of Mellin amplitudes, as can be seen from our main result (\ref{M5ptgeneralp}). However, it is also sometimes convenient to have position space expressions, as some information is difficult to extract from the Mellin space representation. This has to do with the fact that certain nonzero expressions in position space may naively vanish in Mellin space. More precisely, different inverse Mellin transforms can only be added up if their contours can be smoothly deformed from one to another.
Usually the contour part is ignored for simplicity and one just adds up the Mellin amplitudes. This causes some information to be lost in the process. In fact, we have already encountered such an example in this paper: the Drukker-Plefka twisted correlator is a constant in position space but has zero Mellin amplitude.\footnote{See also \cite{Rastelli:2017udc} for more examples in four-point functions.} The existence of these ambiguities makes a direct translation of Mellin space results into position space difficult. One could also try to directly extend the position space algorithm of \cite{Goncalves:2019znr} to the $\langle pp222\rangle$ correlators. However, as explained in the introduction, this is technically difficult. Here we propose a hybrid approach. As explained in \cite{Goncalves:2019znr,Alday:2022lkk}, all five-point Witten diagrams can be expressed as linear combinations of five-point $D$-functions by using integrated vertex identities\footnote{It has been known for some time \cite{DHoker:1999mqo} that four-point exchange Witten diagrams can be expressed in terms of $D$-functions when certain conditions on the dimensions of the operators are met, which is often the case in $\mathcal{N}=4$ SYM.}. It is then natural to construct an ansatz in position space directly in terms of the $D$-functions. This avoids directly computing the Witten diagrams, which is a nontrivial task. More concretely, we propose that the ansatz for $G_p$ in position space should have the following form \begin{align} A_{\Delta_1\dots \Delta_5}(x_i) = \sum_{\{\beta\}}c_{\{\beta\}}(t_{ij}) \prod_{i<j}(x_{ij}^2)^{-\beta_{ij}} D_{\tilde{\Delta}_1\dots \tilde{\Delta}_5} (x_i)\;,\label{eq:positionspaceansatz} \end{align} where the coefficients $c_{\{\beta\}}(t_{ij})$ are linear combinations of all possible R-symmetry structures.
The summation over $\beta_{ij}$ is subject to the constraints \begin{align} &\tilde{\Delta}_{i}+\sum_{j}\beta_{ij} = \Delta_i \label{betaconstraints1},\\ &\sum_{i}\tilde{\Delta}_i\leq 2+\sum_{i} \Delta_i, \label{betaconstraints2}\\ & \beta_{ij}>0\;,\;\; \beta_{kl}>0\;,\quad \text{only if } \{i,j\}\cap \{k,l\}=\emptyset\label{betacompatible}\\ &\beta_{ij}\ge -2, \, \, \, \label{betaconstraints3}\\ &\beta_{12} \le p-1, \,\quad \beta_{ij} \leq 1\;\;(i,j\neq 1,2)\label{betaconstraints4}\;. \end{align} Let us now unpack these constraints a little. The first condition (\ref{betaconstraints1}) ensures that the external operators have the correct weights under conformal transformations. The constraint (\ref{betaconstraints2}) imposes a bound on the sum of weights in each $D$-function.\footnote{One can see explicitly that this is the case for the $p=2$ five-point function. Moreover, the same bound also holds for four-point functions of higher KK modes.} This is expected if we use the integrated vertex identities\footnote{These will generalize the ones presented in Appendix A of \cite{Goncalves:2019znr} for $p=2$.} to reduce the exchange Witten diagrams to contact Witten diagrams. Exchanging single-trace operators leads to singularities in position space. The condition (\ref{betacompatible}) is the statement that particle exchanges have to be in compatible channels. The constraint (\ref{betaconstraints3}) arises because the exchanged single-trace operators have maximal spin 2.
To understand this more precisely, let us note the following translation between position and Mellin space \begin{align} \prod_{1\leq i<j\leq 5}(x_{ij}^2)^{-\alpha_{ij}} D_{\tilde{\Delta}_1\dots \tilde{\Delta}_5} \rightarrow \mathcal{M}^{\alpha_{ij}}(\delta)= \frac{\pi^\frac{d}{2} \Gamma\left(\frac{\sum_i \tilde{\Delta}_i-d}{2}\right)}{\prod_{i}\Gamma(\tilde{\Delta}_i)} \prod_{i<j} \frac{\Gamma(\delta_{ij}-\alpha_{ij})}{\Gamma(\delta_{ij})}\;.\label{eq:fromPositionToMellin} \end{align} The condition (\ref{betaconstraints3}) then ensures in Mellin space that the numerator associated with an exchange pole is at most quadratic. Finally, the constraint (\ref{betaconstraints4}) controls the twists of the exchanged single-trace operators. Let us emphasize that this position space ansatz, as it stands, does not manifest the truncation of poles seen in (\ref{eq:ansatzMellin}). Nevertheless, this truncation can still be imposed in position space, though in a more intricate manner (in stark contrast with Mellin space). We notice that a given negative power $(x_{12}^2)^{-\alpha}$ will lead to poles in Mellin space at all the locations $\delta_{12}=1,2,\dots ,\alpha$. Therefore, even though the $\delta_{12}$ poles in Mellin space truncate according to (\ref{eq:ansatzMellin}), in position space the result will necessarily involve all the negative powers $(x_{12}^2)^{-\alpha}$ with $\alpha=1,2,\ldots,p-1$. Truncation only implies relations among the coefficients of these powers; it does not simply eliminate a subset of them. This is another instance where we can see explicitly that Mellin space is simpler. To fix the coefficients in the ansatz, one can translate it back into Mellin space and compare with the Mellin amplitude (\ref{M5ptgeneralp}). This can be achieved by using the rule (\ref{eq:fromPositionToMellin}). However, as explained above, only some of the coefficients can be fixed due to the ambiguities of the translation.
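The statement about the pole locations can be seen directly from (\ref{eq:fromPositionToMellin}): for a positive integer $\alpha$, the ratio $\Gamma(\delta_{12}-\alpha)/\Gamma(\delta_{12})=1/\big((\delta_{12}-1)\cdots(\delta_{12}-\alpha)\big)$ has simple poles exactly at $\delta_{12}=1,2,\dots,\alpha$. A numerical sanity check of this identity (standard library only):

```python
from math import gamma, isclose

def gamma_ratio(d, alpha):
    """Gamma(d - alpha) / Gamma(d): the Mellin-space factor from (x_12^2)^(-alpha)."""
    return gamma(d - alpha) / gamma(d)

def pole_product(d, alpha):
    """1 / ((d-1)(d-2)...(d-alpha)): simple poles at d = 1, 2, ..., alpha."""
    prod = 1.0
    for k in range(1, alpha + 1):
        prod *= d - k
    return 1.0 / prod

# spot-check the identity away from the pole locations
for alpha in (1, 2, 3):
    for d in (0.3, 4.7, 6.25):
        assert isclose(gamma_ratio(d, alpha), pole_product(d, alpha), rel_tol=1e-10)
```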
One may wonder if implementing the Drukker-Plefka twist and the chiral algebra condition in position space\footnote{See Appendix D of \cite{Goncalves:2019znr} for more details on how to obtain explicit expressions for $D$-functions.} will give rise to additional constraints. But unfortunately we find that this is not the case. There still remains the possibility that one can fix the remaining coefficients using the recently derived higher-point lightcone conformal blocks \cite{Bercini:2021jti} to impose factorization in position space. But we have not found a very efficient way to implement this. Therefore, we will postpone the task of finding the expressions in position space and leave it to future work. \section{Strong coupling correlators} \label{Sec:strongcouplingcorr} We can define the inverse Mellin transform of the scalar correlator as \begin{equation} \langle \mathcal O_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4) \rangle = d_{12}^{p_1} \, d_{34}^{p_2}\; G(z_k)= \int \mathrm d \delta_{ij} \mathcal{M}(\delta_{ij},y_{ij})\prod_{i<j} \frac{\Gamma(\delta_{ij})}{x_{ij}^{2\delta_{ij}}}\,. \end{equation} Conformal symmetry requires the Mellin variables $\delta_{ij}$ to obey the following equations \begin{equation} \sum_{j\neq i} \delta_{ij} = \Delta_i\,, \end{equation} effectively leaving only two degrees of freedom for four-point functions. It is useful to consider the following parametrization \begin{equation}\label{sij1} \delta_{ij}= \frac{\Delta_i + \Delta_j - s_{ij}}{2}\,, \end{equation} so that the solution is given simply as \begin{equation}\label{sij2} s_{12}=s_{34}=s \,,\qquad s_{14}= s_{23} = t\,,\qquad s_{13}=s_{24}= 2 (p_1+p_2) - s -t\,. \end{equation} For the configuration we are interested in we can then write the inverse Mellin transform as \begin{equation} G(u,v;\sigma,\tau) = \int \frac{\mathrm d s \mathrm d t}{4} u^{\frac{s}{2}} v^{\frac{t-p_1-p_2}{2}} \mathcal{M}(s,t;\sigma,\tau) \prod_{i<j} \Gamma(\delta_{ij}(s,t))\,. 
\end{equation} Equivalently, the Mellin transform of the spacetime correlator is \begin{equation} \mathcal{M}(s,t;\sigma,\tau) \prod_{i<j} \Gamma(\delta_{ij}(s,t)) = \int_0^\infty \mathrm{d} u \int_0^\infty \mathrm{d}v \;u^{-\frac{s}{2}-1} v^{\frac{p_1+p_2-t}{2}-1} G(u,v;\sigma,\tau) \,. \end{equation} When the correlator has a factorized form as in \eqref{Factorized}, it is convenient to introduce the Mellin transform of the dynamical function $H_p(u,v)$ \begin{equation} \widetilde{\mathcal{M}}_p(s,t) \prod_{i<j} \Gamma(\tilde\delta_{ij}(s,t)) = \int_0^\infty \mathrm{d} u \int_0^\infty \mathrm{d}v \;u^{-\frac{s}{2}-1} v^{\frac{p-t}{2}} H_p(u,v) \,, \end{equation} where the shifted variables are defined as \begin{align} \tilde\delta_{13} &= \delta_{13}+2 \,,\qquad \tilde \delta_{24}=\delta_{24}+2\,,\nonumber\\ \tilde\delta_{ij}&=\delta_{ij} \quad\mathrm{otherwise,} \end{align} and make the crossing properties of the Mellin amplitude simpler. At strong coupling the Mellin space version of the correlator was found to have a particularly simple structure \cite{Rastelli:2016nze,Rastelli:2017udc}, and in the case under consideration it reduces to \begin{equation} \widetilde{\mathcal{M}}_p(s,t) = \frac{32}{(s-2)(t-p)(p-s-t)}\,.
\end{equation} For the spinning correlators we can also write inverse Mellin transforms as follows \begin{align} \langle \mathcal J_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4) \rangle &= \sum_{k=2}^4 \frac{z \cdot x_{1k}}{x_{1k}^2}\int \left[\mathrm d \delta\right] \mathcal{M}_{p_1,p_2}^{k} \prod_{i=2}^4 \frac{\Gamma(\delta_i + \textrm{{\fancy{$\delta$}}}^k_i)}{x_{1i}^{2\delta_i}}\prod_{i<j} \frac{\Gamma(\delta_{ij})}{x_{ij}^{2\delta_{ij}}}\,,\nonumber\\ \langle \mathcal T_{p_1}(1) \mathcal O_{p_1}(2) \mathcal O_{p_2}(3) \mathcal O_{p_2}(4) \rangle &= \sum_{k,l=2}^4 \frac{z \cdot x_{1k}}{x_{1k}^2}\frac{z \cdot x_{1l}}{x_{1l}^2}\int \left[\mathrm d \delta\right] \mathcal{M}_{p_1,p_2}^{kl} \prod_{i=2}^4 \frac{\Gamma(\delta_i + \textrm{{\fancy{$\delta$}}}^k_i+\textrm{{\fancy{$\delta$}}}^l_i)}{x_{1i}^{2\delta_i}}\prod_{i<j} \frac{\Gamma(\delta_{ij})}{x_{ij}^{2\delta_{ij}}}\,, \end{align} with $\textrm{{\fancy{$\delta$}}}^k_i$ the Kronecker-delta, and the Mellin variables are constrained by \begin{equation} \delta_i= - \sum_{j=2}^4 \delta_{ij}\,,\qquad \delta_{ii}= - \Delta_i\,, \qquad \sum_{i,j=2}^4 \delta_{ij}= S-\Delta_1 \,. \end{equation} In the two cases of interest we have $S-\Delta_1=p_1$, so the $\delta_{ij}$ variables have the same solution as in the scalar case, see \eqref{sij1} and \eqref{sij2}. 
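As a small consistency check, the solution \eqref{sij1}, \eqref{sij2} can be verified to satisfy the constraints $\sum_{j\neq i}\delta_{ij}=\Delta_i$ for external weights $(\Delta_1,\dots,\Delta_4)=(p_1,p_1,p_2,p_2)$. A minimal sketch, checked at integer sample points of $s$ and $t$ (exact rational arithmetic, so no symbolic package is needed):

```python
from fractions import Fraction

def check_mellin_constraints(p1, p2, s, t):
    """Check sum_{j != i} delta_ij = Delta_i for
    delta_ij = (Delta_i + Delta_j - s_ij)/2 with
    s_12 = s_34 = s, s_14 = s_23 = t, s_13 = s_24 = 2(p1+p2) - s - t."""
    Delta = {1: p1, 2: p1, 3: p2, 4: p2}
    s_ij = {(1, 2): s, (3, 4): s, (1, 4): t, (2, 3): t,
            (1, 3): 2 * (p1 + p2) - s - t, (2, 4): 2 * (p1 + p2) - s - t}
    delta = {pair: Fraction(Delta[pair[0]] + Delta[pair[1]] - val, 2)
             for pair, val in s_ij.items()}
    for i in range(1, 5):
        assert sum(v for pair, v in delta.items() if i in pair) == Delta[i]

for (p1, p2, s, t) in [(2, 5, 3, 4), (3, 3, 1, 2), (4, 7, 6, 5)]:
    check_mellin_constraints(p1, p2, s, t)
```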
Comparing with the form of the correlators obtained in the previous section, we can see that the inverse Mellin transforms of the functions introduced in \eqref{JTpos} are exactly the $\mathcal{M}^{k}$ and $\mathcal{M}^{kl}$ above \begin{align} &\alpha_{p_1,p_2}^{(k)}(u,v;y_{ij},Y_{1,ij}) = \int \frac{\mathrm d s \mathrm d t}{4} u^{\frac{s}{2}} v^{\frac{t-p_1-p_2}{2}} \mathcal{M}_{p_1,p_2}^{k}(s,t;y_{ij},Y_{1,ij}) \prod_{i=2}^4 \Gamma(\delta_i + \textrm{{\fancy{$\delta$}}}^k_i) \prod_{i<j} \Gamma(\delta_{ij}) \,, \nonumber\\ &\beta_{p_1,p_2}^{(k,l)}(u,v;y_{ij}) = \int \frac{\mathrm d s \mathrm d t}{4} u^{\frac{s}{2}} v^{\frac{t-p_1-p_2}{2}} \mathcal{M}_{p_1,p_2}^{kl}(s,t;y_{ij}) \prod_{i=2}^4 \Gamma(\delta_i + \textrm{{\fancy{$\delta$}}}^k_i + \textrm{{\fancy{$\delta$}}}^l_i) \prod_{i<j} \Gamma(\delta_{ij}) \,. \end{align} Inverting the logic we then have \begin{align} \mathcal{M}_{p_1,p_2}^{k}(s,t;y_{ij}, Y_{1,ij}) \prod_{i=2}^4 \Gamma(\delta_i + \textrm{{\fancy{$\delta$}}}^k_i) \prod_{i<j} \Gamma(\delta_{ij}) &= \int_0^\infty \mathrm{d} u \, \mathrm{d}v \;u^{-\frac{s}{2}-1} v^{\frac{p_1+p_2-t}{2}-1} \alpha_{p_1,p_2}^{(k)}(u,v;y_{ij}, Y_{1,ij} ) \,, \nonumber\\ \mathcal{M}_{p_1,p_2}^{kl}(s,t;y_{ij}) \prod_{i=2}^4 \Gamma(\delta_i + \textrm{{\fancy{$\delta$}}}^k_i +\textrm{{\fancy{$\delta$}}}^l_i) \prod_{i<j} \Gamma(\delta_{ij}) &= \int_0^\infty \mathrm{d} u \, \mathrm{d}v \;u^{-\frac{s}{2}-1} v^{\frac{p_1+p_2-t}{2}-1} \beta_{p_1,p_2}^{(k,l)}(u,v;y_{ij}) \,. \end{align} As explained in the previous section, the functions $\alpha^{(k)}_{p_1,p_2}$ and $\beta^{(k,l)}_{p_1,p_2}$ are given in terms of derivatives of the dynamical function from the scalar correlator.
When $p_1=2$ and $p_2=p$, or $p_1=p$ and $p_2=2$, we are then relating to $H_p$ from \eqref{Factorized}, and so we should use \begin{align} &\int_0^\infty \mathrm d u \int_0^\infty \mathrm d v \;u^{-\frac{s}{2}-1} v^{\frac{p+2-t}{2}-1} u^m v^n \frac{\partial^a}{\partial u^a} \frac{\partial^b}{\partial v^b} H_p(u,v) =\widetilde{\mathcal{M}}_p(s-2m+2a,t-2n+2b) \nonumber\\ &\times (-1)^{a+b} \left(m-a-\frac{s}{2}\right)_a \left(n-b+\frac{p+2-t}{2}\right)_b\prod_{i<j} \Gamma(\tilde \delta_{ij}(s-2m+2a,t-2n+2b))\,, \end{align} which allows us to write $\mathcal{M}_{p_1,p_2}^{k}$ and $\mathcal{M}_{p_1,p_2}^{kl}$ for those two configurations in terms of the scalar Mellin amplitude $\widetilde{\mathcal{M}}_p(s,t)$. At the end of the day, the Mellin amplitudes for $\langle \mathcal J_2 \mathcal O_2 \mathcal O_p \mathcal O_p \rangle$ are \begin{align} \mathcal{M}_{2,p}^{2} &= -2(t-p-2)\left(\frac{2(p-2)}{s-4} + \frac{2}{s-2}+ \frac{p}{4+p-s-t}\right)y_{24}^2\, y_{34}^{2(p-1)}\, Y_{1,23} \nonumber\\ &\quad- 2(2+p-s-t)\left(\frac{2(p-2)}{s-4}+ \frac{2}{s-2}+\frac{p}{t-p}\right)y_{23}^2 \,y_{34}^{2(p-1)} \,Y_{1,24} \nonumber\\ &\quad - 2p(s-2p) \left(\frac{1}{t-p} - \frac{1}{4+p-s-t}\right) y_{23}^2 \,y_{24}^2\, y_{34}^{2(p-2)} \,Y_{1,34} \,, \nonumber \end{align} \begin{align} \mathcal{M}_{2,p}^{3} &= 2(t-p-2)\left(\frac{2}{s-2}+ \frac{p}{4+p-s-t}\right) y_{24}^2\, y_{34}^{2(p-1)}\, Y_{1,23} \nonumber\\ &\quad+ 2(2+p-s-t)\left(\frac{2}{s-2}-\frac{p}{t-p}\right)y_{23}^2 \,y_{34}^{2(p-1)}\, Y_{1,24} \nonumber\\ &\quad- 2p(s-2p) \left(\frac{1}{t-p}+\frac{1}{4+p-s-t}\right)y_{23}^2 \,y_{24}^2\, y_{34}^{2(p-2)} \,Y_{1,34} \label{eq:currentMellinexample} \,. \end{align} Note that in general we expect poles only at $s-2$, $t-p$ and $4+p-s-t$. However, in the $\mathcal{M}_{2,p}^{2}$ component we also see a pole at $s-4$. While this might appear unexpected at first, it is in fact due to the shift in the Gamma functions of spinning correlators.
When $p_1=2$ and $p_2=p$ the relevant factors are \begin{equation} \Gamma(\delta_2 + 1) \Gamma(\delta_{34}) = \Gamma\left(3-\frac{s}{2}\right) \Gamma\left(p-\frac{s}{2}\right)\,. \end{equation} It is then evident that the Gamma functions do not prohibit the satellite pole at $s-4$ (unless $p=2$, in which case the residue vanishes). Meanwhile for $\langle \mathcal J_p \mathcal O_p \mathcal O_2 \mathcal O_2 \rangle$ we have \begin{align} \mathcal{M}_{p,2}^{2} &= \frac{2(p-2)s}{p} \left[ \frac{t-p-2}{4+p-s-t} y_{12}^{2(p-3)} y_{13}^2 \,y_{24}^4 \, Y_{1,23} + \frac{2+p-s-t}{t-p} y_{12}^{2(p-3)} y_{14}^2 \,y_{23}^4 \, Y_{1,24}\right. \nonumber\\ &\quad\left.+ 2 \left(1 + \frac{p}{t-p} + \frac{p}{4+p-s-t}\right) y_{12}^{2(p-3)} y_{14}^2 \,y_{23}^2 \, y_{24}^2 \, Y_{1,23}\right] \nonumber\\ &\quad+ \frac{2(t-p-2)}{p}\left(p-2-\frac{4}{s-2}- \frac{2p}{4+p-s-t}\right) y_{12}^{2(p-2)} y_{24}^2 \,y_{34}^2 \, Y_{1,23} \nonumber\\ &\quad+ \frac{2(2+p-s-t)}{p}\left(p-2-\frac{4}{s-2}- \frac{2p}{t-p}\right)y_{12}^{2(p-2)} y_{23}^2 \,y_{34}^2 \, Y_{1,24} \nonumber\\ &\quad+ \frac{2}{p} \left(s (p-2) - \frac{2p(s-2p)}{t-p} + \frac{2p(s (p-1) - 2p)}{4+p-s-t}\right)\;y_{12}^{2(p-2)} y_{23}^2 \,y_{24}^2 \, Y_{1,34}\,, \nonumber \end{align} \begin{align} \mathcal{M}_{p,2}^{3} &= \frac{2(p-2)(s-2p)}{p} \left[ \frac{t-p-2}{4+p-s-t} y_{12}^{2(p-3)} y_{13}^2 \,y_{24}^4 \, Y_{1,23} + \frac{2+p-s-t}{t-p} y_{12}^{2(p-3)} y_{14}^2 \,y_{23}^4 \, Y_{1,24} \right.\nonumber\\ &\quad \left.+ 2\left(1 + \frac{p}{t-p} + \frac{p}{4+p-s-t}\right) y_{12}^{2(p-3)} y_{14}^2 \,y_{23}^2 \, y_{24}^2 \, Y_{1,23} \right]\nonumber\\ &\quad+ \frac{2(t-p-2)}{p}\left(p-2+\frac{4(p-1)}{s-2}+ \frac{2p(p-1)}{4+p-s-t}\right) y_{12}^{2(p-2)} y_{24}^2 \,y_{34}^2 \, Y_{1,23} \nonumber\\ &\quad+ \frac{2(2+p-s-t)}{p}\left(p-2+\frac{4(p-1)}{s-2} - \frac{2p}{t-p}\right)y_{12}^{2(p-2)} y_{23}^2 \,y_{34}^2 \, Y_{1,24} \nonumber\\ &\quad+\frac{2(s-2p)}{p}\left(p-2-\frac{2p}{t-p} - \frac{2p}{4+p-s-t}\right) y_{12}^{2(p-2)} 
y_{23}^2 \,y_{24}^2 \, Y_{1,34}\,. \end{align} In this case the Gamma factors for $\mathcal{M}_{p,2}^{2}$ are \begin{equation} \Gamma(\delta_2 + 1) \Gamma(\delta_{34}) = \Gamma\left(p+1-\frac{s}{2}\right) \Gamma\left(2-\frac{s}{2}\right)\,, \end{equation} and that is why the shift does not lead to any unexpected pole. For the other Mellin components $\mathcal{M}_{p_1,p_2}^{3}$ and $\mathcal{M}_{p_1,p_2}^{4}$ we have \begin{align} \Gamma(\delta_3 + 1) \Gamma(\delta_{24}) &= \Gamma\left(\frac{s+t-p}{2}\right) \Gamma\left(\frac{s+t-p-2}{2}\right)\,,\nonumber\\ \Gamma(\delta_4 +1) \Gamma(\delta_{23}) &= \Gamma\left(\frac{4+p-t}{2}\right) \Gamma\left(\frac{2+p-t}{2}\right)\,, \end{align} which explains why there cannot be any new poles in these channels for either of the two configurations considered. Moving on to the spin-2 case, the Mellin amplitudes for the $\langle \mathcal T_2 \mathcal O_2 \mathcal O_p \mathcal O_p \rangle$ correlator are \begin{align} \mathcal{M}_{2,p}^{2,2} &= \frac{16}{3}\left(1-p+6(p-2)\left(\frac{p-3}{s-6} +\frac{2}{s-4}\right) + \frac{2}{s-2}+ \frac{p(p-1)}{t-p} + \frac{p(p-1)}{4+p-s-t}\right) y_{23}^2 \,y_{24}^2 \, y_{34}^{2(p-1)} \,, \nonumber \end{align} \begin{align} \mathcal{M}_{2,p}^{2,3} &= \frac{16}{3}\left(1-p-\frac{6(p-2)}{s-4}-\frac{4}{s-2}+ \frac{p(p-1)}{t-p}-\frac{2p(p-1)}{4+p-s-t}\right)y_{23}^2 \,y_{24}^2 \, y_{34}^{2(p-1)} \,, \nonumber\\ \mathcal{M}_{2,p}^{3,3} &= \frac{16}{3}\left(1-p+\frac{2}{s-2}+ \frac{p(p-1)}{t-p}+\frac{p(p-1)}{4+p-s-t}\right)y_{23}^2 \,y_{24}^2 \, y_{34}^{2(p-1)} \label{eq:StressMellinexample}\,. \end{align} There are once again some satellite poles, but the explanation follows exactly the same reasoning as before. The relevant Gamma factors in $\mathcal{M}_{2,p}^{2,2}$ are in this case \begin{equation} \Gamma(\delta_2 + 2) \Gamma(\delta_{34}) = \Gamma\left(4-\frac{s}{2}\right) \Gamma\left(p-\frac{s}{2}\right)\,, \end{equation} thus allowing poles both at $s-4$ and $s-6$ (except when $p=2,3$).
Meanwhile, for $\mathcal{M}^{2,3}$ (and also $\mathcal{M}^{2,4}$) the relevant Gammas are \begin{equation} \Gamma(\delta_2 + \textrm{{\fancy{$\delta$}}}^2_2) \Gamma(\delta_{34}) = \Gamma\left(3-\frac{s}{2}\right) \Gamma\left(p-\frac{s}{2}\right)\,, \end{equation} and so the only satellite pole in those Mellin components is at $s-4$. At last, for the correlator $\langle \mathcal T_p \mathcal O_p \mathcal O_2 \mathcal O_2 \rangle$ we have \begin{align} \mathcal{M}_{p,2}^{2,2} &= \frac{8(p-2)(s+2)}{(p+1)(p+2)}\left[\left(\frac{s(p-1)-2p}{t-p}+\frac{2p}{4+p-s-t}\right) y_{12}^{2(p-3)} \, y_{14}^2\, y_{23}^4 \,y_{24}^2 \right. \nonumber\\ &\qquad\qquad\qquad\qquad \left.+\left(\frac{2p}{t-p}+\frac{s(p-1)-2p}{4+p-s-t}\right) y_{12}^{2(p-3)} \, y_{13}^2\, y_{23}^2 \,y_{24}^4 \right] \nonumber\\ &\quad +\frac{8 \,y_{12}^{2(p-2)} y_{23}^2\, y_{24}^2 \,y_{34}^2}{(p+1)(p+2)} \left((p-1)(p-2)s - 4(p^2-2)+\frac{16}{s-2}\right.\nonumber\\ &\qquad\qquad\qquad\qquad\qquad \left.-\frac{2p(s(p-2)-2p)}{t-p}-\frac{2p(s(p-2)-2p)}{4+p-s-t} \right)\,, \nonumber \end{align} \begin{align} \mathcal{M}_{p,2}^{2,3} &= \frac{8 (p-2)}{(p+1)(p+2)}\left[\left(\frac{s^2(p-1)-2p^2 s-2(p+2)(p-1)}{t-p}-\frac{p(s(p-1)+6p+2)}{4+p-s-t}\right) y_{12}^{2(p-3)} \, y_{14}^2\, y_{23}^4 \,y_{24}^2 \right. \nonumber\\ &\qquad\qquad\qquad\qquad\left. +\left(\frac{2p(s-p+1)}{t-p}+\frac{s^2(p-1)-s p (p-1)+2(p^2+p+2)}{4+p-s-t}\right) y_{12}^{2(p-3)} \, y_{13}^2\, y_{23}^2 \,y_{24}^4 \right] \nonumber\\ &\quad +\frac{8\, y_{12}^{2(p-2)} \, y_{23}^2\, y_{24}^2 \,y_{34}^2 }{(p+1)(p+2)}\left((p-1)(p-2)s-2(p+2)(p-1)-\frac{16p}{s-2}\right.\nonumber\\ &\qquad\qquad\qquad \left.-\frac{2p(s(p-2)-(p-1)(p+2))}{t-p}+\frac{p(s(p-1)(p-2)-2(p^2+p+2))}{4+p-s-t}\right) \,, \nonumber \end{align} \begin{align} \mathcal{M}_{p,2}^{3,3} &= \frac{8 (p-2)(s-2p)}{(p+1)(p+2)}\left[\left(\frac{2p^2-(s-2)(p-1)}{t-p}-\frac{2p^2}{4+p-s-t}\right) y_{12}^{2(p-3)} \, y_{14}^2\, y_{23}^4 \,y_{24}^2 \right. 
\nonumber\\ &\qquad\qquad\qquad\qquad \left.+\left(\frac{2p}{t-p}+\frac{s(p-1)+2}{4+p-s-t}\right) y_{12}^{2(p-3)} \, y_{13}^2\, y_{23}^2 \,y_{24}^4 \right] \nonumber\\ &\quad+ \frac{8\, y_{12}^{2(p-2)} \, y_{23}^2\, y_{24}^2 \,y_{34}^2 }{(p+1)(p+2)} \left((p-1)(p-2)s-4p+\frac{8p(p-1)}{s-2}\right.\nonumber\\ &\qquad\qquad\qquad\qquad\qquad \left. +\frac{2p(2(p^2-2)-s(p-2))}{t-p}+\frac{2p^2(s(p-2)+2)}{4+p-s-t}\right) \,. \nonumber\\ \end{align} Note that in the final expressions above we omit the $\mathcal{M}_{p_1,p_2}^{4}$ and $\mathcal{M}_{p_1,p_2}^{k,4}$ cases, but they can be easily obtained from the equations relating different Mellin components \begin{align} \sum_k \delta_{k} \mathcal{M}^{k} &= 0\,, \nonumber\\ \sum_k (\delta_k + \textrm{{\fancy{$\delta$}}}^l_k) \mathcal{M}^{kl} &=0 \,, \end{align} which play a similar role to the equation \eqref{relation} relating the tensor structures in position space. Finally, note that for the particular case of $p_1=p_2=2$ the expressions above simplify and agree with those found in our earlier work \cite{Goncalves:2019znr}. \subsection{Example of factorization} The goal of this subsection is to show explicitly how to use factorization, lower-point Mellin amplitudes and the $R$-symmetry gluing rules from Appendix \ref{Sec:rSymmetryGluing} to recover part of the five-point function. To simplify the presentation we will focus on the factorization of the scalar $\textrm{{\bf{20}}}'$ operator exchanged in the channel $(45)$. 
The building blocks for the factorization are the Mellin amplitude of the four-point function $\langle\mathcal{O}_p\mathcal{O}_p \mathcal{O}_2\mathcal{O}_2\rangle$ and the three-point function $\langle \mathcal{O}_2\mathcal{O}_2 \mathcal{O}_2 \rangle$ \begin{align} \mathcal{M}_{pp22} =& \frac{4 t_{01} t_{23} t_{12}^{p-2} \left(\delta _{12} \left(p \left(t_{02} t_{13}-t_{03} t_{12}\right)-(p-1) t_{01} t_{23}\right)+(p-1) p t_{03} t_{12}+\delta _{12}^2 t_{01} t_{23}\right)}{\delta _{23}-1}+\dots\nonumber\\ \mathcal{M}_{222}=& \,C_{OOO}\,t_{45}t_{40}t_{50} \end{align} where, to simplify the analysis further, we write explicitly only part of the four-point function. The label $0$ in the formula is associated with the operator being exchanged in the factorization channel. Now we can borrow the formulas from (\ref{eq:factorizationequation}) and (\ref{eq:QmScalarDefinition}) to obtain \begin{align} \mathcal{M}_{pp222} = 2\Gamma(2) \frac{ \mathcal{M}_{pp22} \mathcal{M}_{222}}{(2\delta_{45}-2)} + \dots \end{align} where the $\dots$ stand for other poles and contributions of other operators. The gluing in R-symmetry space gives, implementing\footnote{Recall that $t_{ij}=y_{ij}^2$.} (\ref{eq:gluingScalar}) for $p=2$, \begin{align} t_{\ell 4}t_{\ell 5} \,t_{ri_1}t_{ri_2} &\rightarrow \frac{1}{2}\left((t_{4i_1}t_{5i_2}+t_{4i_2}t_{5i_1})-\frac{t_{45}t_{i_1i_2}}{3}\right)\;,\\ t_{\ell 4}t_{\ell 5} t_{ri_1}^2 &\rightarrow \, t_{4i_1}t_{5i_1}\;.
\end{align} Thus we obtain \begin{align} &\mathcal{M}_{pp222} = \frac{2 C_{OOO} t_{23} t_{45} t_{12}^{p-2} }{3 \left(\delta _{23}-1\right) \left(\delta _{45}-1\right)} \big[\left(3 \delta _{12} \left(p\, t_{15} \left(t_{13} t_{24}-t_{12} t_{34}\right)+t_{14} \left(p \left(t_{13} t_{25}-t_{12} t_{35}\right)\right.\right.\right.\nonumber\\ &\left.\left.\left.-2 (p-1) t_{15} t_{23}\right)\right)+(p-1) p t_{12} \left(3 t_{15} t_{34}+3 t_{14} t_{35}-t_{13} t_{45}\right)+6 \delta _{12}^2 t_{14} t_{15} t_{23}\right) \big]+\dots \end{align} where, again, the dots stand for other poles and contributions of other operators. In particular, this formula can be compared with our previous result for the five-point function of $\textrm{{\bf{20}}}'$ operators.
\section{Introduction} The atmospheric interaction of large meteoroids provides our primary tool to characterize their population, physical and chemical properties, and dynamical evolution. In turn, this can lead to a better understanding of the diverse populations of small bodies of the Solar System. Currently, our knowledge is still quite limited, although, especially after the impact of comet D/Shoemaker--Levy~9 on Jupiter, the research efforts in this field have been intensified. In particular, in 1994 the US Department of Defense made publicly available its records on energetic bolides over a time span of about twenty years (Tagliaferri et al. \cite{DOD}). These data indicate that, from 1975 to 1992, there were 136 airbursts of energy greater than 1 kton, but the real number was probably at least 10 times higher, because the satellite system does not cover the entire Earth's surface. Both data and theories are required to assess the impact hazard and to understand the very bright bolides. From this point of view, the Lugo bolide is a very interesting event, because the airburst was detected by several seismic stations. The corresponding data allow us to characterize the meteoroid and to draw some tentative inferences about its nature and origin. We have carried out a reanalysis of this event and found that the data are most consistent with the hypothesis that the involved meteoroid was a porous carbonaceous chondrite, somewhat similar to the asteroid 253 Mathilde. \section{The Lugo bolide} On 1993 January 19 at 00:33:29 UT a large meteoroid entered the atmosphere approximately over the town of Lugo, in Emilia Romagna, Italy. The impact was recorded by the National Research Council (CNR) forward--scatter meteor radar and by six seismic stations, three belonging to the Microseismic Network of Ferrara (Pontisette, C\`{a} Fornasina, Fiorile d'Albero) and the others to the National Institute of Geophysics (Barisano, Santa Sofia, Poggio Sodo).
The event was also observed by several eyewitnesses, as it lit an extremely large area (almost all of Italy), and they reported a visual magnitude in the range $-22$ to $-25$. Preliminary calculations were carried out based on the eyewitness reports, although they were fragmentary and sometimes contradictory (Cevolani et al. \cite{LUGO1}, Korlevi\'{c} \cite{LUGO3}). Only at a later time did we find seismic data, which enabled us to infer the location of the explosion (Cevolani et al. \cite{LUGO2}). This analysis indicated that a meteoroid of initial radius in the range $1.5\div 3$~m impacted the Earth's atmosphere at a velocity of about $26$~km/s, with an inclination of the trajectory to the horizon of $8\degr\div 20\degr$. By means of the seismic data, it was possible to calculate the height ($30\pm 3$~km), latitude ($44{\degr}.48\pm 0{\degr}.01$~N) and longitude ($11{\degr}.91\pm 0{\degr}.01$~E) of the explosion. \section{The reanalysis: Aerodynamics} Here, we will assume that the only reliable data are those recorded by the seismic stations, which in general are a very useful tool for understanding this kind of airburst (e.g. Ben-Menahem \cite{TUNGUSKA1}). Therefore, we assume as valid the height, latitude and longitude of the explosion only, i.e. those data calculated from seismic data (Cevolani et al. \cite{LUGO2}). The aerodynamics of large meteoroid/small asteroid impacts has been studied by several authors, sometimes with special reference to the 1908 Tunguska explosion (e.g. Ceplecha and McCrosky \cite{PE}, Ceplecha et al. \cite{CEPLECHA1}, Chyba et al. \cite{TUNGUSKA2}, Hills and Goda \cite{HILLS}, Lyne et al. \cite{TUNGUSKA3}). Although the details may vary, there is a consensus that a 30~km explosion height is typical for a carbonaceous chondrite or a cometary body.
In the theory of Hills and Goda (\cite{HILLS}) the height of first fragmentation is calculated by comparing the stagnation pressure in front of the meteoroid ($P_{max}=\rho_{0}V_{e}^{2}$) to the mechanical strength $S$ of the cosmic body. We rearrange the formula to evaluate the meteoroid speed ($V_{e}$): \begin{equation} V_{e}=\sqrt{\frac{S}{\rho_{0}}\exp\left[\frac{h_{e}}{H}\right]}\ \label{e:velo} \end{equation} \noindent where $\rho_{0}$ is the atmospheric density at the sea level [kg/m$^{3}$], $h_{e}$ is the height of first fragmentation [km] and $H$ is the atmospheric scale height (about 8~km). For the strength, we assume $S=10^{7}$~Pa, an intermediate value between those appropriate for carbonaceous chondrites and for cometary bodies. We obtain $V_{e}=18\pm 3$~km/s, a value much lower than that derived previously (about 26~km/s). Observing the seismic plots (e.g.~Fig.\ref{pont}), we can conclude that there was a single explosion (for a comparison with nuclear explosions, see Pierce et al. \cite{NUCLEAR}). There is no evidence of multiple explosions, as would occur during multiple fragmentation. Thus, for the Lugo bolide, Eq.~(\ref{e:velo}) can be used by assuming that the first fragmentation corresponded to the airburst. \begin{figure}[t] \centering \includegraphics{lugo_fig1.eps} \caption{Seismic plot recorded at the Pontisette station. Time starts at 00:36:37.3 UT. Further plots of this type can be found in Cevolani et al.
(\cite{LUGO2}).} \label{pont} \end{figure} In order to calculate the flight path angle, we have to solve two equations: \begin{equation} \frac{dh}{dt}=V\cdot \sin \theta\, , \label{e:path1} \end{equation} \begin{equation} \frac{d\theta}{dt}=-\frac{\cos \theta}{V}\left(g-\frac{V^{2}}{R+h}\right)\, , \label{e:path2} \end{equation} \noindent where $g$ is the gravity acceleration [m/s$^{2}$], $R$ is the Earth's radius (we assume $R=6367$ km, for about $45\degr$ latitude), and $\theta$ is the flight path angle, measured from the horizontal. We assume that the meteoroid lift can be neglected. For the Tunguska cosmic body, Chyba et al. (\cite{TUNGUSKA2}) assumed a lift value of $10^{-3}$ and found that its influence on the results of these calculations is only about $1\%$. With all these assumptions, we obtain that the flight path angle during the final part of the atmospheric trajectory was $\theta=5.0\degr\pm 0.3\degr$. Again, we have some disagreement with the previous results ($8\degr\div 20\degr$). This is probably due to the uncertainty of visual observations in these conditions: for such an event, surprise can significantly reduce the skills and reliability of eyewitnesses. \section{The reanalysis: Explosion energy} To obtain an estimate of the explosion energy, we can use the relationship for the maximum velocity of displacement of the solid rocks, obtained from studies on underground nuclear explosions (Adushkin and Nemchinov \cite{IMPACT}). We rearrange their equation in order to calculate the energy, when the distance and the displacement velocity are known: \begin{equation} E = k\cdot D^{3}\left(\frac{v}{240}\right)^{12/7}\, , \label{e:kton} \end{equation} \noindent where $E$ is the explosion energy in kton of TNT; $D$ is the distance of the sensor from the explosion [km]; $v$ is the displacement velocity [mm/s]. This formula is valid for $D<100$ km: in our case, seismic stations were located at distances smaller than 70 km.
The coupling coefficient $k$ is introduced to take into account that, in order to produce rock displacements, an airburst is less effective than an underground nuclear explosion (at least by a factor 100). Moreover, there is a difference in the effective energy, because the explosion of a meteoroid in the atmosphere does not involve nuclear fission, and this contributes about another factor 10. Finally, there is some increase of the wave amplitude with the height of burst up to 40~km (Pierce et al. \cite{NUCLEAR}), which typically exceeds a factor 2; we assume a power increase by a factor 5. Overall, we estimate $k=100\cdot 10\cdot \frac{1}{5}=200$. We have data from six seismic stations (for a complete set of plots and other information, see Cevolani et al. \cite{LUGO2}), but transfer functions are available only for the three stations belonging to the Microseismic Network of Ferrara. We have performed a Fourier analysis of the waveform and found a peak at 1.4~Hz, for both Pontisette and C\`{a} Fornasina, corresponding to the airburst (see Figs.~\ref{pont-f} and \ref{forn-f}). We have not taken into account data from the Fiorile d'Albero station, because they show a strong background noise overlapping the shock wave and preventing a reliable Fourier analysis. \begin{figure}[t] \centering \includegraphics{lugo_fig2.eps} \caption{Fourier analysis of the Pontisette seismic plot.} \label{pont-f} \end{figure} \begin{figure}[t] \centering \includegraphics{lugo_fig3.eps} \caption{Fourier analysis of the C\`{a} Fornasina seismic plot.} \label{forn-f} \end{figure} The transfer function has a nominal value of 175 mV$\cdot$s/mm for all stations and for frequencies greater than 2~Hz. Below the cutoff frequency, the transfer function is drastically reduced, down to a value of 10 mV$\cdot$s/mm for 0.5~Hz. For a frequency of 1.4~Hz, we have a transduction factor of 52 mV$\cdot$s/mm. 
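Equation~(\ref{e:kton}) with the estimated coupling coefficient $k=200$ can be evaluated directly from the measured displacement velocities. A minimal numerical sketch, using the Pontisette and C\`{a} Fornasina values quoted in Table~\ref{energy} (converted from $\mu$m/s to mm/s):

```python
K = 200.0  # coupling coefficient k = 100 * 10 / 5 estimated in the text

def burst_energy(distance_km, velocity_mm_s):
    """Explosion energy [kton TNT] from the Adushkin-Nemchinov scaling:
    E = k * D^3 * (v/240)^(12/7)."""
    return K * distance_km**3 * (velocity_mm_s / 240.0)**(12.0 / 7.0)

# Table entries: D [km] and v [mm/s] (41.040 and 35.369 um/s)
E_pontisette = burst_energy(59.0, 0.041040)
E_fornasina = burst_energy(63.0, 0.035369)
print(round(E_pontisette, 1), round(E_fornasina, 1))  # ~14.3 and ~13.5 kton
```

Both values reproduce, within the quoted uncertainties, the $14\pm 2$ and $13\pm 2$ kton entries of Table~\ref{energy}.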
The final results of our calculations for the explosion energy from the seismic data with Eq.~(\ref{e:kton}) are shown in Table~\ref{energy}. \begin{table}[h] \centering \caption{Explosion energy calculated from the seismic data.} \begin{tabular}{|l|c|c|c|} \hline Station & $D$ [km] & $v$ [$\mu$m/s] & $E$ [kton]\\ \hline Pontisette & $59\pm 3$ & $41.040\pm 0.002$ & $14\pm 2$\\ \hline C\`{a} Fornasina & $63\pm 3$ & $35.369\pm 0.002$ & $13\pm 2$\\ \hline \end{tabular} \label{energy} \end{table} We consider a mean value of $14\pm 2$~kton, that is $(5.9\pm 0.8)\times 10^{13}$~J. It is worth noting that we might have obtained more accurate values, but the saturation of the Barisano sensor introduced an error of $9\%$ in the burst height calculations (Cevolani et al. \cite{LUGO2}), which propagates to our results. On the other hand, had we not considered the Barisano data, the available data would have been insufficient for a meaningful analysis. When a cometary body or a carbonaceous chondrite enters the atmosphere, almost all the kinetic energy is released in the explosion. Then we can calculate the meteoroid mass, taking into account that during the path preceding the explosion the cosmic body undergoes a limited mass loss: \begin{equation} m = \frac{2E}{V^{2}}=(4\pm 1)\cdot 10^{5} \ [\mathrm{kg}]\, . \label{e:mass} \end{equation} In order to calculate the visual magnitude of the airburst, we have to solve the equation: \begin{equation} L = -\tau \frac{dm}{dt}\frac{V^{2}}{2}\, , \label{e:luminosity} \end{equation} \noindent where $\tau$ is the dimensionless coefficient for the meteor luminous efficiency. This coefficient mainly depends on the meteoroid speed and is quite uncertain (Ceplecha and McCrosky \cite{PE}). Some authors think that for very bright bolides $\tau$ ranges from $10$ to $30\%$ (Brown et al. \cite{STROBERTS}, McCord et al., \cite{MARSHALL1}). 
Others assume $\tau$ values between $1.5$ and $6.1\%$ (Borovi\v{c}ka and Spurn\'{y} \cite{SL9}, Ceplecha \cite{TAU}). Here we assume $\tau = 4.5\%$. Moreover, we assume that the meteoroid dissipated almost all of its energy within a scale height. Then, solving Eq.~(\ref{e:path1}) for the time during which the meteoroid exploded, we obtain $t=5.1\pm 0.8$~s. The corresponding value for the airburst luminosity is $(5\pm 1)\cdot 10^{11}$ J/s. In order to express the luminosity in terms of absolute magnitude (i.e., the magnitude as observed at a 100~km distance), we can use the equation: \begin{equation} M = -2.5\cdot (\log_{10} L - 2.63)\, , \label{e:magn} \end{equation} \noindent where we have rearranged the classical relationship in order to use the SI unit system. From Eq.~(\ref{e:magn}) we obtain $M = -22.7\pm 0.5$, a value consistent with visual observations ($-22\div -25$). We stress the importance of the coefficient $\tau$: assuming a value of $10\%$, as suggested by McCord et al. (\cite{MARSHALL1}), we would obtain $M\simeq -24$. \section{Further results and discussion} It is also interesting to check how sensitive the results are to the assumed value of the strength $S$. If we take $S=10^{6}$~Pa, a value typical for cometary bodies, we end up with a cosmic body with a speed of about 6~km/s and an inclination of $2\degr$. The mass would be about $3\times 10^{6}$~kg and the absolute visual magnitude $-21$. The airburst would have been 31~s long. These values appear unlikely. Note that a final velocity of 6~km/s is very close to 4~km/s, which Ceplecha (\cite{CEPLECHA}) indicated as necessary to have a meteorite fall. But for Lugo no meteorite was recovered. We can summarize some features of the Lugo bolide: it had a grazing trajectory in the atmosphere, it was probably a carbonaceous chondrite, but it exploded at a height higher than usual and with a single airburst, without fragmentation.
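The chain of estimates leading from the explosion energy to the absolute magnitude, Eqs.~(\ref{e:mass})--(\ref{e:magn}), can be reproduced in a few lines. A minimal numerical sketch, using the nominal values $V=18$~km/s, $\theta=5.0\degr$, $E=5.9\times 10^{13}$~J, $\tau=4.5\%$ and a scale height of 8~km:

```python
import math

E = 5.9e13                 # explosion energy [J] (14 kton)
V = 18.0e3                 # entry speed [m/s]
theta = math.radians(5.0)  # flight path angle
H = 8.0e3                  # atmospheric scale height [m]
tau = 0.045                # assumed luminous efficiency

m = 2.0 * E / V**2                 # pre-burst mass [kg]
t = H / (V * math.sin(theta))      # time to cross one scale height [s]
L = tau * E / t                    # mean airburst luminosity [W]
M = -2.5 * (math.log10(L) - 2.63)  # absolute visual magnitude

print(f"m = {m:.1e} kg, t = {t:.1f} s, L = {L:.1e} W, M = {M:.1f}")
# -> m = 3.6e+05 kg, t = 5.1 s, L = 5.2e+11 W, M = -22.7
```

These values match the quoted $(4\pm 1)\cdot 10^{5}$~kg, $5.1\pm 0.8$~s, $(5\pm 1)\cdot 10^{11}$~J/s and $M=-22.7\pm 0.5$.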
The recent discovery by the NEAR probe of a carbonaceous asteroid (253 Mathilde) with a very low density (about $1300$~kg/m$^{3}$) suggests the existence of porous bodies (i.e. bodies with internal cavities) among asteroids (Yeomans et al. \cite{MATHILDE}). If we assume that the Lugo bolide was a porous carbonaceous chondrite, we have a body which was probably stronger than a cometary fragment, but which could explode at a higher altitude than those typical for stony objects, because of its porosity. It is very likely that porosity increases the burst efficiency: when ablation removes the surface of the body, cavities may appear which increase the aerobraking and generate a sudden deceleration. The kinetic energy then is rapidly transformed into heat, so that the body bursts within a scale height. This is consistent with a single explosion, without multiple fragmentation, as indicated by seismic plots (see Fig.~\ref{pont}). \section{Conclusions} The Lugo bolide has been reanalysed by taking into account only the data recorded by seismic stations. We summarize the main inferred properties of the bolide in the following Table~\ref{summary}. \begin{table}[h] \centering \caption{Summary on the properties of the Lugo bolide.} \begin{tabular}{|l|r|} \hline Apparition time (UT) & 1993 01 19 00:33:29 $\pm 1$ s\\ \hline Latitude of airburst$^{\mathrm{a}}$ & $44.48\degr \pm 0.01\degr$ N\\ \hline Longitude of airburst$^{\mathrm{a}}$ & $11.91\degr \pm 0.01\degr$ E\\ \hline Airburst height$^{\mathrm{a}}$ & $30\pm 3$ km\\ \hline Explosion Energy & $14\pm 2$ kton\\ \hline Mass & $(4\pm 1)\cdot 10^{5}$ kg\\ \hline Abs. Visual Magnitude & $-22.7\pm 0.5$\\ \hline Velocity & $18\pm 3$ km/s\\ \hline Inclination$^{\mathrm{b}}$ & $5.0\degr \pm 0.3\degr$\\ \hline Path azimuth$^{\mathrm{a,c}}$ & $146.5\degr \pm 0.5\degr$\\ \hline \end{tabular} \label{summary} \begin{list}{}{} \item[$^{\mathrm{a}}$] Calculated in Cevolani et al. (\cite{LUGO2}). \item[$^{\mathrm{b}}$] Over the horizon. 
\item[$^{\mathrm{c}}$] Clockwise from North. \end{list} \end{table} We are now carrying out calculations on the orbit and the dynamical evolution of this bolide, whose results will be available soon. However, from the analysis described here it appears likely that the meteoroid was a porous carbonaceous chondrite, somewhat similar in constitution to the asteroid 253 Mathilde. The porosity would have increased the braking and, as a consequence, the airburst occurred at a greater height than for a compact carbonaceous chondrite object. \begin{acknowledgements} I wish to thank P.~Farinella, who drew my attention to the special properties of the asteroid 253~Mathilde, for his constructive review. I also wish to thank T.J.~Jopek and Z.~Ceplecha, for valuable discussions concerning the velocity calculations, and a referee, M.P.~Goda, for useful comments. \end{acknowledgements}
\section{Estimation of Airbridge Critical Current} \begin{figure} \begin{centering} \includegraphics{supplementGLFit.eps} \par\end{centering} \caption{(Color online): Inset: Ten airbridges fabricated in series for a four terminal measurement of the resistance. Main panel: Critical current as a function of reduced temperature $T/T_{c}$. The fit is to Eq.~1, with $I_0=462$\,mA.} \label{figure:IV} \end{figure} As detailed in the main paper, we fabricated 10 airbridges in series and measured them in a four terminal configuration. Each airbridge had a width of 8\,$\mu$m, a length of 28\,$\mu$m, and a thickness of 300\,nm. At room temperature, we measured a resistance of 6\,$\Omega$. For a standard aluminum resistivity of 2.7$\times 10^{-6} \, \Omega$-cm, the expected resistance at room temperature is 3.15\,$\Omega$; this estimate does not take into account the curvature of the bridges or the 6\,$\mu$m distance between the pads of the bridges. At 100\,mK, we were limited to 10\,mA of drive current, which was not enough to drive the airbridges normal. Instead, we slowly cooled the sample through $T_{c}=1.2$\,K and measured the critical current $I_c$ as a function of temperature just below $T_{c}$, with the results shown in Fig. 1. The data match the expected Ginzburg-Landau behavior, which predicts the following relation for the critical current of a thin superconducting wire \cite{tinkham2004introduction} \begin{equation} I_c=I_0 \left(1-T/T_c \right)^{3/2} \end{equation} where $I_0$ is the critical current at temperatures well below $T_c$. By fitting to this equation, we extracted a low temperature critical current of 462\,mA. However, this result does not take into account the width of our airbridges. From previous works, we estimate that there is a decrease in $I_0$ by a factor of order 3 or 4 for an 8\,$\mu$m wire,\cite{skocpol1976critical} giving a critical current of around 100\,mA.
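The fitting procedure can be sketched as follows, with synthetic $I_c(T)$ points standing in for the measured data (the temperature grid, noise level and seed are hypothetical); the model is the thin-wire Ginzburg--Landau form $I_c=I_0(1-T/T_c)^{3/2}$:

```python
import numpy as np
from scipy.optimize import curve_fit

TC = 1.2  # critical temperature [K]

def ic_model(T, I0):
    """Thin-wire Ginzburg-Landau critical current near Tc."""
    return I0 * (1.0 - T / TC)**1.5

# Synthetic data standing in for the measured points (hypothetical noise/grid)
rng = np.random.default_rng(0)
T = np.linspace(1.00, 1.19, 15)
Ic = ic_model(T, 462.0) * (1.0 + 0.02 * rng.standard_normal(T.size))

(I0_fit,), _ = curve_fit(ic_model, T, Ic, p0=[100.0])
print(round(I0_fit))  # recovers a value close to the quoted 462 mA
```

With a few percent of scatter on points spanning the last 0.2\,K below $T_c$, the one-parameter fit recovers $I_0$ to within a few percent; the quoted value of course comes from the measured points in Fig.~1.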
\section{Shifts in Resonant Frequency Due to Airbridges} \begin{figure} \begin{centering} \includegraphics{supplementLCData.eps} \par\end{centering} \caption{(Color online): (a) Resonant frequencies for resonators with variable numbers of airbridges in red squares, compared with the frequencies of their corresponding controls, which are designed to have the same length. As the number of bridges increases, the resonators shift lower in frequency compared to their controls. (b) Percent change in $LC$, the product of the inductance per length and capacitance per length, as a function of the percentage of the resonator covered by airbridges. The dashed blue line is a linear fit to the data, with slope 12.7\% and intercept 0.35\%. The offset from the origin is within normal chip-to-chip variations in our measured resonators. The red line is a prediction based on the additional capacitance of the airbridge. The slopes differ due to the decrease in inductance from the airbridge.} \label{figure:LC} \end{figure} Compared to more conventional crossovers, which are supported by dielectrics, airbridges have a much smaller impact on the capacitance of a CPW line. However, this additional capacitance due to an airbridge is not negligible and should be accounted for. For example, in our experiment to test the microwave loss of airbridges using ten different resonators, we designed the resonators such that the density of airbridges increased with decreasing frequency, as shown in Fig. 2(a). A higher density of airbridges increases the capacitance of the resonator and decreases the resonant frequency. Thus, in our experiment, the resonant frequencies shifted further apart rather than closer together, avoiding any frequency collisions. We note here that, from our control data, we found no significant correlation of the high- or low-power quality factor with the frequency of the resonator over the range we considered, which validated this particular design choice.
If we assume the airbridge acts like a parallel plate capacitor between the center trace and ground, we can estimate the additional capacitance per unit length due to the airbridge as $C=\epsilon_0 w/d$, where $w$ is the width of the center trace and $d$ is the height of the airbridge. For the geometry in our experiment, $w=10 \mu$m and $d=3 \mu$m, giving $C=29.5$ pF/m. We can also numerically calculate the additional capacitance due to the airbridge using COMSOL. We simulated the cross-section of a CPW line with a 10 $\mu$m center trace and 5 $\mu$m gap with a substrate dielectric constant of 11.6, and found the capacitance per length to be 175.25 pF/m. After adding an airbridge, the capacitance increased to 204.03 pF/m giving an increase of 28.78 pF/m due to the airbridge, showing remarkable agreement with the parallel plate estimate. From these values, we predict that the capacitance of a resonator covered completely by airbridges should increase by 17\%. From the frequency data shown in Fig. 2(a), we can determine the actual effect of placing an airbridge over a CPW line. As the number of airbridges increased, the frequency of the resonator shifted further below the frequency of its corresponding control. Since each resonator and its control are designed to have the same wavelength, we can interpret the change in frequency as a change in the phase velocity of light $v_p=1/\sqrt{LC}$, where $L$ and $C$ are the inductance and capacitance per unit length. Given the total length of the resonator and the number of airbridges, we can also determine the percentage of the line covered by airbridges. The percent coverage should be linearly related to the change in the product of the inductance and capacitance per unit length, $LC$, which is shown in Fig. 2(b). The slope of the linear fit in Fig. 2(b) indicates that the $LC$ product for a section of line covered by airbridge differs from the bare line by 12.7\%. 
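These numbers can be tied together in a short calculation; a minimal sketch, assuming (as a rough bookkeeping device, not part of the original analysis) that the fractional change in the $LC$ product factorizes into separate capacitance and inductance contributions:

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

# Parallel-plate estimate of the added capacitance per unit length
w, d = 10e-6, 3e-6                # center-trace width, airbridge height [m]
C_bridge = EPS0 * w / d           # compare to the COMSOL value, 28.78 pF/m
print(round(C_bridge * 1e12, 1))  # -> 29.5 [pF/m]

# Predicted fractional capacitance increase for a fully covered line
C_bare = 175.25e-12               # COMSOL bare-line capacitance [F/m]
dC = 28.78e-12 / C_bare           # ~0.164, i.e. the quoted ~17%

# The measured 12.7% slope in LC then implies a small inductance reduction
dL = 1.127 / (1.0 + dC) - 1.0
print(round(100 * dC, 1), round(100 * dL, 1))  # -> 16.4 -3.2 [%]
```

The few-percent inductance decrease inferred here is only indicative, since edge effects make the inductance harder to model than the capacitance.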
The discrepancy between our prediction and our data is most likely due to changes in the inductance of the resonator. Each airbridge adds additional pathways for current to flow, which decreases the inductance of the CPW line and compensates in part for the increase in capacitance. However, the inductance is not as easily modeled as the capacitance since edge effects are important. In other words, a single wide airbridge spanning a CPW line does not have the same effect as multiple narrower airbridges, because they contain different current paths. \section{Participation Ratio of the Airbridge Interface} \begin{figure} \begin{centering} \includegraphics{supplementParticipation.eps} \par\end{centering} \caption{(Color online): (a) Cross section of an airbridge spanning a CPW line. (b) Close-up of the interface for which we calculate the participation ratio. This interface is a possible source of loss because the layer of aluminum is deposited on photoresist that has been crosslinked by an ion mill. The thickness and dielectric constant are variable. (c) Participation ratio as a function of thickness for various dielectric constants at the interface. We numerically calculate the participation ratio using COMSOL by setting the potential of the center trace of the CPW to 1\,V, solving for the electric fields, then numerically computing the integral in Eq. 1 in the interface region. We obtain the total energy $W$ by performing the same integral for all of the cross section.} \label{figure:participation} \end{figure} The interface underneath the airbridge is a potential source of loss, since this is the interface at which we deposited aluminum on photoresist that has been crosslinked by the argon ion mill. To understand the additional surface loss due to this interface, we calculate the participation ratio of a lossy dielectric at this metal-air interface following Ref \onlinecite{wenner2011surface}.
We consider the resonator and airbridge structure in cross section as shown in Fig. 3(a). The participation ratio $p$ of any isotropic region of space in this cross-section is simply given by the ratio of energy stored in the region to the total energy stored in the entire cross-section \begin{equation} p=W^{-1} \epsilon_r \epsilon_0 \int \!\!\! \int dA \,\frac{\vert E \vert ^2}{2} \end{equation} where $W$ is the total energy in the cross-section which may be obtained by performing the same integral over all space, and $\epsilon_r$ is the dielectric constant in the region. Assuming that the region is thin, as it is in the case of our interface of interest, we can replace an integral over the thickness by a product, turning the double integral into a line integral over the boundary of the interface. We can also simplify the equation using the boundary conditions on our interface. The metal boundary allows us to approximate the electric field as normal to the metal, while the continuity of the displacement field at the air interface gives us the relation $\epsilon_r E_{i \perp}=E_{a \perp}$, where $E_{i}$ is the electric field in the interface and $E_{a}$ is the electric field in air. Combining these simplifications we obtain \begin{equation} p=W^{-1} t_{i} \epsilon_r ^{-1} \epsilon_0 \int dS \,\frac{\vert E_{a \perp} \vert ^2}{2} \end{equation} where $t_{i}$ is the small thickness of the interface. Assuming the contribution to the total energy $W$ of the interface is small, the participation ratio is proportional to the thickness and inversely proportional to the dielectric constant. We can estimate the value of the line integral by again modeling the airbridge as a parallel plate. If we assume a 1\,V difference in potential between center trace and ground, then from the calculation of total capacitance above, we know the value of $W=\frac{1}{2}C V^2$. 
The electric field is given by 1\,V divided by the separation distance of 3 $\mu$m, and we may replace the integral with a multiplication by the length, about 10 $\mu$m. We then obtain the following approximate formula: \begin{equation} p=4.8 \times 10^{-5}\,\textrm{nm}^{-1}\, \frac{t_i}{\epsilon_r} \end{equation} Alternatively, we can also numerically evaluate Eq. 1. We constructed the geometry of an airbridge spanning a CPW and included a thin dielectric interface on the underside of the bridge as shown in Fig. 3(b). After applying a potential of 1\,V to the center trace, we solved for the electric fields and numerically integrated Eq. 1 to determine the total energy in the cross-section and the energy in the interface, giving us the participation ratio. We calculated the participation ratio as a function of interface thickness and dielectric constant, producing the plot shown in Fig. 3(c). We see that the results follow the scaling expected from Eqs. 2 and 3. Furthermore, we can more accurately determine the coefficient in Eq. 3 from the slopes of the lines, and we find that the coefficient is $6.34 \times 10^{-5}\,\textrm{nm}^{-1}$, which is within 30\% of our parallel plate estimate. Given the participation ratio, we can estimate the loss due to this interface. For the dielectric constant, we estimate a value of 4 based on data pertaining to other photoresists \cite{pierce1965dielectric}. SEM images of the interface were inconclusive for determining the thickness, but it is certainly bounded above by 100\,nm. Finally, since there is little data on the loss tangent of resist at cryogenic temperatures, we estimate it to be $10^{-3}$ based on the measured loss tangents of amorphous oxides.\cite{oconnell2008microwave} Using these numbers, we obtain a participation ratio of $1.6 \times 10^{-3}$ and a loss due to airbridges of $1.6 \times 10^{-6}$, or a $Q_i$ of 630,000.
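The chain of estimates above can be condensed into a few lines. The numbers below are the ones quoted in the text (COMSOL coefficient, thickness bound, assumed dielectric constant and loss tangent); the script itself is our own sketch:

```python
# Numerical sketch of the interface-loss estimate described in the text.
coeff = 6.34e-5   # participation ratio per nm of thickness (from COMSOL fit)
t_i = 100.0       # interface thickness upper bound, nm
eps_r = 4.0       # estimated dielectric constant of crosslinked resist
tan_delta = 1e-3  # assumed loss tangent of the resist layer

p = coeff * t_i / eps_r  # participation ratio, ~1.6e-3
loss = p * tan_delta     # contribution to 1/Q, ~1.6e-6
Q_i = 1.0 / loss         # ~630,000
```

Since the participation ratio is linear in $t_i/\epsilon_r$, tightening either the thickness bound or the dielectric-constant estimate rescales the predicted $Q_i$ directly.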
\section{Loss due to Inline Airbridges} \begin{figure} \begin{centering} \includegraphics{supplementCTABData.eps} \par\end{centering} \caption{(Color online): Insets: (a) Airbridges connecting together CPW lines within a resonator. (b) A second style of airbridge connection, where the ground plane is threaded underneath the airbridge. Main panel: Internal quality factor of resonators as a function of average photon population for many different styles of resonators. We show two witness resonators to demonstrate the typical spread in measured $Q_{i}$. Lines are guides for the eye.} \label{figure:CTAB} \end{figure} Given the high critical currents through the airbridges, we know that the airbridges provide a good connection at DC. In order to test connectivity at microwave frequencies, we fabricated airbridges as a part of the center trace of the quarter wave resonators described in the main paper. We considered two styles of inline airbridges. In both styles, we design the center trace to have a 20\,$\mu$m break, then connect together the two traces with an airbridge. In one style, shown in Fig. 4(a), the ground plane is left unconnected, while in the other style, shown in Fig. 4(b), the ground plane is connected through the break in the center trace and underneath the bridge. We tested one and ten inline airbridges placed inside quarter wave CPW resonators in both styles. In addition, we tested a quarter wave resonator with an inline airbridge acting as the short to ground, since this configuration gave the largest current loading of the airbridge. Based on loss results in the main paper, we were confident that the airbridge processing did not degrade the quality factors of our resonators and used witness resonators fabricated on the same chip as the control resonators. All resonators had a larger center trace of 15\,$\mu$m to accommodate the pads of the bridge, a gap of 10\,$\mu$m, and were fabricated using aluminum deposited on a sapphire substrate.
We performed quality factor measurements as described in the main paper, producing the results shown in Fig. 4. The two witness resonators shown in Fig. 4 represent the best and worst measured quality factors for our witness resonators. On average, the witness resonators show a low power $Q_{i}$ of around 800,000, and a high power $Q_{i}$ of around $5 \times 10^6$. All resonators which have a single inline airbridge, including the resonator shorted to ground by the airbridge, do not show substantial degradation in $Q_{i}$. On the other hand, ten inline airbridges show some degradation at high power, corresponding to an additional loss of $3 \times 10^{-7}$. Ten inline airbridges with threaded ground planes also showed significant loss at lower power, with an additional loss of $1.3 \times 10^{-6}$, or about $10^{-7}$ per bridge.
\section{Introduction} Quantum walks constitute a promising route to the development of algorithms for quantum information processing. Successful applications of quantum walks include a variety of search algorithms \cite{Shenvi2003,Childs2004a,Ambainis2004a,Childs2004b}, graph hitting problems \cite{Childs2002,Childs2003,Kempe03a}, Boolean function evaluation \cite{Farhi2008, Childs2009a}, among others. Quantum walks are in fact a universal paradigm for quantum computation \cite{Childs2009, Lovett2010}. The simplest and most dramatic improvement demonstrated by quantum walks, when compared with the corresponding classical random walks, is the hitting probability of a walk on certain graphs. Two graphs in particular have demonstrated an exponential separation between the quantum and classical walks: the glued binary trees \cite{Childs2002} and the hypercube \cite{ Kempe03a}. An example of the former is shown in Fig. \ref{fig1}; this graph is formed by connecting the leaves of two binary trees of depth $d$. Due to the graph's symmetry, the quantum walk can be restricted to a subspace of the total (exponentially large) Hilbert space, known as the ``column space'' (also shown in Fig. 1). It has been argued that this symmetry is the heart of the quantum speed-up \cite{Krovi2007}. A quantum walk algorithm on a slight modification of this graph, to be described below, was proven to have an exponential speedup over any classical algorithm \cite{Childs2003}. Keating {{\it et al.}} have argued that physical implementations of these walks will not be able to maintain this quantum speed-up for large graphs \cite{Keating07}. Indeed, one expects that physical systems will be subject to decoherence and disorder. While one can always argue that future quantum computers will be protected from these effects by error correction, it is more likely that near term demonstrations of quantum walks will use a physical network. 
In fact, many of the recent experiments on quantum walks involve an encoding of the degrees of freedom that is not, strictly speaking, computationally useful \cite{Kendon11}. Nevertheless, these experiments demonstrate the dynamical speed-up characteristic of quantum walks. Determining when disorder or decoherence will limit the observability of this speed-up is an important theoretical problem \cite{Kendon07}. This problem may already have been encountered by nature. Recent evidence has shown that photosynthetic complexes operate using a type of quantum walk \cite{AGuzik08}, where the interplay of decoherence with disorder plays a crucial role in their energy harvesting efficiency. This phenomenon, known as dephasing or environmentally assisted transport, has relations to earlier studies of phonon effects on electron transport in disordered solids \cite{Jonson79}. Understanding and potentially reverse-engineering this efficiency is a topic of great scientific and practical importance. As the complexes and potential technologies under study can be far from the thermodynamic limit, examining quantum transport on graphs of modest size may reveal new surprises. In this paper, we examine the continuous-time quantum walk on the glued trees graph with static disorder, corresponding to the usual Anderson model with disordered on-site energies \cite{Anderson58}. Previous authors \cite{Keating07} introduced disorder through an effective one-dimensional representation (the column space). Using the fact that all eigenstates are localized in one-dimension with arbitrary disorder, they concluded that Anderson localization would cause an exponential suppression of the hitting probability. From a physical perspective, however, this model has some flaws. 
First, their model introduces a highly correlated form of disorder, in that all of the on-site energies on a given column are the same. Second, the glued trees graph is quite similar to a Cayley tree, whose dimensionality is formally infinite, as far as Anderson localization is concerned \cite{Evers2008}. The Cayley tree exhibits a localization transition at large disorder, and thus significant speed-up may still be possible for weak disorder. We have performed a numerical study of this problem for the full Hilbert space of the glued trees graph and modifications thereof. Analysis of the eigenvalues and eigenvectors agrees with previous studies of localization on the Cayley tree. We pay special attention to the less well studied regime of weak disorder, using dynamical simulations to find a type of quantum decay of the quantum walk from the column space. The resulting hitting probability is well simulated by a model with local decay from each column state. We further consider the crossover from wavelike to diffusive transport for intermediate levels of disorder. A scattering theory analysis of this regime indicates a transition from the quantum walk to a classical random walk. Our results augment the many results for one-dimensional quantum walks with disorder. These include theoretical studies for the continuous-time quantum walk \cite{Yin2008} and the discrete-time quantum walk \cite{Ahlbrecht2011}, and the many recent experiments on atoms \cite{Karski2009}, ions \cite{Schmitz2009,Zahringer2010}, and photons \cite{Broome2010, Schreiber2011}. Higher-dimensional quantum walks could be realized in these or other systems, such as networks of superconducting circuits \cite{Strauch2008, Chudzicki10}. In particular, our simulations show that the effects of disorder on quantum transport can be identified in systems of 10-100 lattice sites. This paper is organized as follows.
In Section II we review the known results for Anderson localization on the Cayley tree, and in Section III numerically study the phase diagram for the two types of glued trees, and find evidence of the localization transition. In Section IV we study the time-dependence of the quantum walk with disorder, and introduce the local decay model for weak disorder. In Section V we consider a scattering theory model for transport through the glued trees graph, and find evidence for the transition to a classical random walk. We conclude in Section VI, comparing our results with the hypercube and highlighting the major open questions. Results for the quantum scattering and (classical) diffusive transport through the glued trees are found in the Appendices. \begin{figure}[t] \begin{center} \includegraphics[width=3.5in]{fig1} \caption{Glued binary trees graph $G_4$, showing the reduction to the column space for both the classical (top) and quantum (bottom) random walks. } \label{fig1} \end{center} \end{figure} \section{Localization and Diffusion on the Binary Tree} Anderson localization is the phenomenon that, for a sufficiently large amount of disorder, the eigenstates of a quantum system become exponentially localized about the nodes in a graph \cite{Anderson58}. Localization transitions are a rich field of study \cite{Evers2008}, for which both symmetry and dimensionality play key roles. The quantum walk we consider corresponds to the simplest such model, a tight-binding Hamiltonian with random on-site energies \begin{equation} \mathcal{H} = - \gamma \sum_{\langle j,k \rangle} \left(c_j^{\dagger} c_k + c_k^{\dagger} c_j \right) + \sum_{j} \epsilon_j c_j^{\dagger} c_j, \label{andersonmodel} \end{equation} where $\gamma$ is the hopping rate, the $\epsilon_j$ are random variables uniformly distributed in the range $[-W/2,W/2]$, $c_j^{\dagger}$ is the creation operator for an excitation at site $j$, and the sum is over neighboring sites.
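In the single-excitation sector, the Hamiltonian above is just the graph adjacency matrix plus a random diagonal. As a concrete illustration (our sketch, not the paper's code), it can be assembled for any graph; here we build it for a short 1D chain and check that the $W=0$ spectrum reproduces the clean chain eigenvalues $-2\gamma\cos[k\pi/(n+1)]$:

```python
import numpy as np

def anderson_hamiltonian(adjacency, W, gamma=1.0, rng=None):
    """Tight-binding Hamiltonian with i.i.d. on-site disorder in [-W/2, W/2]."""
    rng = np.random.default_rng() if rng is None else rng
    n = adjacency.shape[0]
    eps = rng.uniform(-W / 2, W / 2, size=n)
    return -gamma * adjacency + np.diag(eps)

# 1D chain adjacency matrix (nearest-neighbor hopping)
n = 8
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

H = anderson_hamiltonian(A, W=2.0, rng=np.random.default_rng(0))
assert np.allclose(H, H.T)  # Hermitian by construction

# W = 0 recovers the clean chain spectrum -2*gamma*cos(k*pi/(n+1))
E0 = np.linalg.eigvalsh(anderson_hamiltonian(A, W=0.0))
k = np.arange(1, n + 1)
assert np.allclose(np.sort(E0), np.sort(-2 * np.cos(k * np.pi / (n + 1))))
```

The same function applies unchanged to the glued trees graph once its adjacency matrix is constructed.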
This model, originally inspired by electron transport in solids, can be used for many physical systems, such as quantum state transfer of a single excitation on a qubit network \cite{Bose2003,Bose2008,Strauch2008} or exciton transfer in a photosynthetic complex \cite{AGuzik08}. Much is known about the Anderson model given by Eq. \ref{andersonmodel} \cite{Evers2008}. For example, for a one-dimensional infinite lattice, localization occurs for arbitrarily small amounts of disorder \cite{Borland63}. It was argued that this property was generic to quantum walks \cite{Keating07}. However, it is known that for systems with dimension greater than two, localization requires a sufficiently large amount of disorder, i.e. there is a localization transition as the disorder is increased \cite{Abrahams79}. At first glance, the glued trees graph of Fig. 1 might appear to be a subset of a two-dimensional system, and thus one might expect localization for even small amounts of disorder. In fact, the infinite binary tree, or more generally the infinite Cayley tree (also known as the Bethe lattice \cite{Ostilli12}) has been used as a model for an infinite-dimensional system. This has been studied extensively, and exhibits a localization transition found numerically \cite {Jonson79,Miller94,Monthus2009,Biroli2010} and by an analytical mean field calculation \cite{AbC73,AbC74}. For this system, there is a mobility edge in the energy spectrum, such that eigenstates inside the mobility edge are extended, while those outside are localized. For sufficiently large values of disorder, there is a localization transition beyond which all eigenstates are localized. For a binary tree (or Cayley tree with $K=1$), this transition occurs for $W_c \approx 17$ \cite{AbC74,Jonson79,Miller94,Monthus2009,Biroli2010}. The description of localization given above, in which the eigenstates of the system exhibit a transition from extended to localized states, can be called eigenstate localization. 
There are two other ways to identify localization that we will encounter in this paper. The first is dynamical localization, in which the spreading of a wavepacket shows a saturation as a function of time. The second is spectral localization, in which the eigenvalues of the system move from an absolutely continuous spectrum (corresponding to extended states) to a pure point spectrum (corresponding to localized states). Both of these indicators have also been studied extensively, and we now briefly summarize the results. The spectral properties of the system were studied numerically \cite{Sade2003,Biroli2010}, and found to exhibit a transition in agreement with the self-consistent approaches described above. These numerical studies are sensitive to the handling of the boundary of the tree \cite{Aizenman2006}, a fact to which we will return in Section III. While there are some subtleties regarding the phase diagram at weak disorder \cite{Aizenman2011a,Aizenman2011b}, the existence of a localization transition for large disorder is well established. For small disorder, the existence of extended eigenstates has been proven \cite{Klein98}. It has also been proven that, for small disorder, states that are initially localized spread ballistically \cite{Klein95}. The ballistic spreading does not preclude classical behavior, however. The random walk on the Bethe lattice has been studied, and can be mapped onto an asymmetric random walk on a one-dimensional half-line \cite{Hughes82}, as indicated in Fig. \ref{fig1} ({\it e.g.} the left half). A classical walker is twice as likely to move right as left, and this asymmetry leads to the peculiar fact that the classical walk also spreads ballistically. Exact results and limits have been established for this and other properties of the classical walk \cite{Cassi89, Monthus96}.
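This mapping is simple to simulate. The following Monte Carlo sketch (our code; the bias of 2/3 follows from each vertex having twice as many forward as backward neighbors) illustrates the ballistic spreading of the classical walk on the half-line:

```python
import numpy as np

# Classical random walk on the (left half of the) column space: at each
# interior column the walker moves away from the root with probability 2/3
# and toward it with probability 1/3; at the root it must move away.
# The mean displacement grows linearly, with drift ~ 2/3 - 1/3 = 1/3 per step.
rng = np.random.default_rng(1)
walkers, steps = 5000, 300
pos = np.zeros(walkers, dtype=int)
for _ in range(steps):
    step = np.where(rng.random(walkers) < 2 / 3, 1, -1)
    step[pos == 0] = 1  # the root has only forward neighbors
    pos += step

drift = pos.mean() / steps  # close to 1/3, i.e. ballistic spreading
```

The linear growth of the mean position with time is the classical counterpart of the ballistic quantum spreading discussed above.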
Note that the asymmetry is linked to the exponential growth of sites away from the origin, such that one often calls the Bethe lattice a graph of infinite dimensionality. \section{Quantum Walk Eigenstates} The continuous-time quantum walk \cite{Farhi98,Mulken2011} is precisely the quantum dynamics of a single particle moving on a graph. This is given by the Schr{\"o}dinger equation \begin{equation} i \frac{d \psi_j}{d t} = - \gamma \sum_{k} A_{jk} \psi_k, \end{equation} where $A_{jk}$ is the adjacency matrix for the graph and $\gamma$ is the hopping rate. One could also use the Laplacian matrix, or introduce potentials to implement search algorithms \cite{Childs2004a,Childs2004b}, but here we consider the addition of static disorder, such that \begin{equation} i \frac{d \psi_j}{d t} = - \gamma \sum_{k} A_{jk} \psi_k + \epsilon_j \psi_j, \end{equation} where the on-site energies $\epsilon_j$ are i.i.d. random variables uniformly distributed in the range $[-W/2,W/2]$. In this section we consider the nature of the eigenstates of the Hamiltonian of the quantum walk with disorder, namely $\mathcal{H} |\Psi\rangle = E |\Psi\rangle$ with $\mathcal{H} = \mathcal{H}_0 + \mathcal{H}'$, where the unperturbed Hamiltonian is given by \begin{equation} \mathcal{H}_0 = -\gamma \sum_{j,k=1}^{N_d} A_{jk} |j\rangle \langle k|, \end{equation} and the diagonal static disorder by \begin{equation} \mathcal{H}' = \sum_{j=1}^{N_d} \epsilon_j |j\rangle \langle j|, \label{hprime} \end{equation} where $N_d = 3 \times 2^d -2$ is the number of vertices for the glued trees graph $G_d$. We begin by analyzing $\mathcal{H}_0$, and follow by studying the eigenvalues and eigenstates of $\mathcal{H}$. \subsection{Eigenstates without disorder} As described above, there is a great deal of symmetry in the glued trees graph $G_d$, and a great deal of structure in the eigenstates and eigenvalues of the system. We begin by providing a notation for the graph.
We consider a labeling along the ``columns'' and ``rows'' of the graph, of the form $(j,n)$, where $j = 0 \to 2d$ indicates the distance from the left root along the graph, and $n = 0 \to N_{j,d}-1$ is the location within a given column. Here $N_{j,d}$ is the number of sites in a given column $j$, given by $N_{j,d} = 2^j$ for $j \le d$, and $N_{j,d} = 2^{2d-j}$ for $j > d$. The coordinates $(j,n)$ can be combined into a single coordinate $v$ by the following rule \begin{equation} v = \left\{ \begin{array}{ll} 2^j + n & \mbox{for} \ 0 \le j \le d, \\ 3 \times 2^d - 2^{2d + 1 - j} + n & \mbox{for} \ d < j \le 2 d. \end{array} \right. \end{equation} Note that $v$ ranges from $1 \to 3 \times 2^d -2$. The adjacency matrix elements $A_{v,w}$ are equal to one if vertices $v$ and $w$ are connected, and zero otherwise. This can be given in terms of the coordinates $(j,n)$ by observing that $(j,n)$ is connected to \begin{equation} (j-1, \lfloor n/2 \rfloor), (j+1,2n), (j+1,2n+1) \ \mbox{for} \ j < d, \end{equation} and \begin{equation} (j+1, \lfloor n/2 \rfloor), (j-1,2n), (j-1,2n+1) \ \mbox{for} \ j > d, \end{equation} where entries referring to columns outside the graph are omitted at the roots $j=0$ and $j=2d$, and the central column $j=d$ inherits its two neighbors, $(d \mp 1, \lfloor n/2 \rfloor)$, from these rules applied at columns $d \mp 1$. What is most important is that a given vertex is symmetrically coupled to those sites one step further from the left root (or closer to the right root). This allows us to express the eigenstates of the system in terms of ``column-states'' that are equal superpositions of states on a given column $j$. This column-space reduction is well-known for its utility in the analysis of the quantum walk \cite{Childs2002,Childs2003, Krovi2007}. Letting $|j,n\rangle$ denote the Hilbert-space vector associated with vertex $(j,n)$, we define the column-space vector $|\mbox{col} \ j \rangle$ by \begin{equation} |\mbox{col} \ j \rangle = \frac{1}{\sqrt{N_{j,d}}} \sum_{n=0}^{N_{j,d}-1} |j,n\rangle.
\end{equation} By the symmetry noted above, the Hamiltonian $\mathcal{H}_0 = -\gamma A$ acts on the column-space states as \begin{equation} \mathcal{H}_0 |\mbox{col} \ j \rangle = -\sqrt{2} \gamma | \mbox{col} \ j-1 \rangle - \sqrt{2} \gamma | \mbox{col} \ j+1\rangle, \end{equation} where one factor of $\sqrt{2}$ is due to the number of connections to a neighboring column, and the other due to the normalization of the column states. Hence, we can reduce the dynamics to a quantum walk on a finite line, whose eigenstates are equally well-known \begin{equation} \ket{\Psi_{k,d}} = \frac{1}{\sqrt{d+1}}\sum_{j=0}^{2d} \,\sin\!\left(\frac{k\,(j+1)\,\pi}{2(d+1)}\right)\ket{ \mbox{col} \ j}, \label{ColSpEs} \end{equation} with energies \begin{equation} E_{k,d} = - 2\sqrt{2}\,\gamma \,\cos\!\left(\frac{k\,\pi}{2(d+1)}\right), \label{ColspEn} \end{equation} and $k = 1 \to 2d+1$. We have annotated the states by the depth of the graph, as there are in fact many more eigenstates for $G_d$, whose enumeration we now consider. The glued trees graph is self-similar, in that $G_d$ contains $2^\nu$ subgraphs, each equivalent to $G_{d-\nu}$. These subgraphs can be grouped into $2^{\nu-1}$ pairs, each pair formed by removing the roots of a larger graph equivalent to $G_{d-\nu+1}$. That is, we can repeatedly split the glued trees graph by removing the left and right roots, such that $G_d$ contains 2 copies of $G_{d-1}$, 4 copies of $G_{d-2}$, and so forth, formed by removing the roots until we are left with $2^d$ copies of $G_0$ (the isolated vertices at the center of the graph). Some representative subgraphs of $G_4$ are shown in Fig. \ref{subgraphs}. On their own, each subgraph would have eigenstates of the form of Eq. (\ref{ColSpEs}). To find how these subgraphs contribute to the spectrum of $G_d$, we observe that an equal but opposite-signed superposition over two paired subgraph eigenstates is an eigenstate of $G_d$. 
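The labeling and connectivity rules above can be checked directly. The sketch below (our code, for the simple gluing of Fig. 1 with $\gamma=1$) builds the adjacency matrix of $G_d$ and verifies that the column-space energies $E_{k,d}$ appear in the full spectrum:

```python
import numpy as np

def glued_trees_adjacency(d):
    """Adjacency matrix of the glued binary trees graph G_d, built from
    the (j, n) column labeling described in the text."""
    N = 3 * 2**d - 2

    def idx(j, n):  # (j, n) -> flat vertex index, 0-based
        return (2**j + n if j <= d else 3 * 2**d - 2**(2*d + 1 - j) + n) - 1

    A = np.zeros((N, N))
    for j in range(d):                     # left tree: (j, n) -> two children
        for n in range(2**j):
            for c in (2 * n, 2 * n + 1):
                A[idx(j, n), idx(j + 1, c)] = A[idx(j + 1, c), idx(j, n)] = 1
    for j in range(d, 2 * d):              # right tree, built from column j+1
        for m in range(2**(2*d - j - 1)):
            for c in (2 * m, 2 * m + 1):
                A[idx(j + 1, m), idx(j, c)] = A[idx(j, c), idx(j + 1, m)] = 1
    return A

d = 3
A = glued_trees_adjacency(d)
eigs = np.linalg.eigvalsh(-A)              # H0 = -gamma * A with gamma = 1
# Column-space energies E_{k,d} = -2*sqrt(2)*cos(k*pi/(2(d+1))), k = 1..2d+1
E = -2 * np.sqrt(2) * np.cos(np.arange(1, 2*d + 2) * np.pi / (2 * (d + 1)))
assert all(np.min(np.abs(eigs - e)) < 1e-9 for e in E)
```

The remaining eigenvalues of $-A$ come from the subgraph pairing construction described next.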
This occurs because the components with opposite phase, when acted upon by $\mathcal{H}_0$, will interfere destructively on the two nodes to which they are connected on the larger graph. By this pairing, we can thus construct the complete set of eigenstates for $G_d$. \begin{figure}[t] \begin{center} \includegraphics[width=3in]{fig2} \caption{(Color online) Glued binary trees graph $G_4$, with several highlighted subgraphs $G_3$ (top, in gray) and two copies of $G_2$ (middle, in red and bottom, in blue). } \label{subgraphs} \end{center} \end{figure} To see this more clearly, we define a set of ``sub-column'' states whose elements combine the paired subgraphs described above. These are given by \begin{equation} |\mbox{scol} \ j; \alpha, \nu \rangle = \sum_{n= 2 \alpha N_{j,d-\nu}}^{ (2 \alpha +1) N_{j,d-\nu} - 1 } \frac{ |j+\nu,n\rangle - |j+\nu, n + N_{j,d-\nu} \rangle}{\sqrt{2 N_{j,d-\nu}}}, \end{equation} where $j = 0 \to 2(d-\nu)$ indicates the column in $G_{d-\nu}$, $\alpha = 0 \to 2^{\nu-1}-1$ labels the paired subgraphs, and $\nu = 1 \to d$ indicates the depth of the subgraphs' left root. Using these states, the remaining eigenstates of $G_d$ can be obtained by using the eigenstates $|\Psi_{k,d-\nu}\rangle$ from Eq. (\ref{ColSpEs}) with $|\mbox{col} \ j \rangle$ replaced by $|\mbox{scol} \ j; \alpha, \nu\rangle$ and eigenvalues $E_{k,\,d-\nu}$ from Eq. (\ref{ColspEn}), where $k = 1 \to 2(d-\nu)+1$. Defining \begin{equation} \sigma_d = \{ E_{k,d}, k = 1 \to 2d+1 \}, \end{equation} the total spectrum can then be written as \begin{equation} \sigma = \sigma_d + \sum_{\nu=1}^{d} 2^{\nu-1} \sigma_{d-\nu}. \end{equation} This spectrum exhibits a very large degeneracy, especially for $E=0$, which has a multiplicity of $2^d$ (one zero from $\sigma_d$ and one from each copy of $\sigma_{d-\nu}$; half of these come from the $2^{d-1}$ copies of $\sigma_0$).
This large degeneracy (also observed in \cite{Aizenman2006}) occurs for both glued and unglued trees, and can make numerical analysis of the disordered system problematic, as will be described below. \subsection{Eigenstates with disorder} The introduction of static disorder changes both the eigenvalues and eigenvectors of the system. For the Cayley tree, early studies used a self-consistent approach \cite{AbC74,Miller94} to find the localization transition and a phase diagram between extended and localized eigenstates. Recent numerical studies \cite{Monthus2009,Biroli2010} have confirmed these earlier results, which we now summarize. The eigenvalue spectrum has been studied numerically for a single Cayley tree \cite{Sade2003} through the use of spectral statistics. A transition of the distribution of the energy level spacings from Wigner to Poisson (indicative of a localization transition) was observed when the tree was modified so that the leaves are randomly connected to each other. This random connection presumably reduces the prevalence of the zero eigenvalue described above, which would otherwise lead to a Poisson distribution for the spectral statistics \cite{Aizenman2006}. We performed a similar analysis for the glued trees graph, with the leaves connected as in Fig. \ref{fig1}, which we call the simple glued trees (SGT) graph, or interrupted by a random cycle (as in \cite{Childs2003}), which we call the modified glued trees (MGT) graph. An example of the MGT (with a regular connection \cite{Douglas09}) is shown in Fig. 9. A Wigner to Poisson transition was observed for the MGT, while the SGT exhibited a spectrum that was always far from Wigner. For the remainder of this section, our results were obtained using the MGT.
The localization of the eigenstates and the localization transition can be found by studying a simple property of the eigenstates, namely the inverse participation ratio (IPR) $I_2$: \begin{equation} I_2(\psi) = \sum_{j} |\psi_j|^4, \end{equation} where we assume that the states are normalized with $I_1(\psi) = \sum_{j} |\psi_j|^2 = 1$. This quantity has the property that an eigenstate localized to a single site has $I_2(\psi) = 1$, while an eigenstate extended over $N$ sites has $I_2(\psi) = 1/N$. We further define an averaged value of this quantity \begin{equation} I_2(E) = \frac{1}{N(E;\Delta E)} \sum_{ |\langle \psi | \ham | \psi \rangle - E| < \Delta E} I_2(\psi), \end{equation} where $N(E;\Delta E)$ is the number of eigenstates found to have eigenvalue $E_j$ within the range $E-\Delta E < E_j < E+\Delta E$, leaving the dependence on $\Delta E$ implicit. By calculating $I_2(E)$ for the eigenstates of the system as a function of energy and disorder, the phase diagram and localization transition can be visualized. For the SGT, we again encounter a difficulty in that there are a large number of states with an $I_2(\psi) = 1/2$, associated with the $2^{d-1}$ states in $\sigma_0$ (with $E=0$). However, by using the MGT, we can calculate meaningful results for $I_2(E)$ for various disorder strengths $W$ to obtain the diagram in Fig. \ref{IPRfig1}. Here we have let $d = 8$ and set $\Delta E = 0.15 \gamma$ and averaged over 500 realizations of $\mathcal{H}$. We observe a gradual movement of extended states (with small $I_2$) to localized states (with $I_2 > 1/2$) as disorder is increased. Also shown are the expected results for an infinite Bethe lattice, obtained using the self-consistent method of Miller and Derrida \cite{Miller94}. To obtain an estimate of the localization transition, we restrict our attention to states near $E=0$, and repeat the calculation of the averaged IPR for graphs of various sizes.
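As an illustration of this diagnostic (our sketch, run on a disordered 1D chain rather than the MGT), the IPR cleanly separates extended from localized eigenstates:

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio I_2 of a normalized state."""
    return np.sum(np.abs(psi) ** 4)

rng = np.random.default_rng(2)
N = 200
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

def mean_ipr(W):
    """Mean IPR over all eigenstates of a chain with disorder strength W."""
    H = -A + np.diag(rng.uniform(-W / 2, W / 2, size=N))
    _, vecs = np.linalg.eigh(H)
    return np.mean([ipr(vecs[:, i]) for i in range(N)])

i2_clean = mean_ipr(0.0)    # extended states: I_2 ~ 3/(2N)
i2_strong = mean_ipr(15.0)  # strong disorder: I_2 of order one
assert i2_strong > 10 * i2_clean
```

The same computation on the MGT Hamiltonian, binned in energy and averaged over realizations, yields the phase diagram discussed in the text.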
As expected, $I_2(0)$ exhibits a small size-dependent value for $W=0$, which increases to approximately 0.5 for large $W$. At a certain value, the averaged IPRs for all of the graphs coalesce, from which we estimate $W_c \approx 17$, in agreement with recent results for the Cayley tree \cite{Monthus2009,Biroli2010}. \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig3} \caption{Inverse participation ratio $I_2(E)$ as a function of energy and disorder, for $d=8$. For each value of disorder the IPR was averaged over 500 realizations of $\mathcal{H}$ with $\Delta E = 0.15$ (with $\gamma = 1$). Also shown is the mobility edge from the self-consistent theory, predicting a localization transition with $W_c \approx 17$.} \label{IPRfig1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig4} \caption{(Color online) Inverse participation ratio $I_2(E)$ at the band center ($E=0$) as a function of disorder for various depths. For each value of disorder the IPR was averaged over the middle 100 eigenvalues and a number of realizations given by $500, 250, 125, \mbox{and} \ 50$ for $d = 5, 6, 7, \mbox{and} \ 8$, respectively. The localization transition occurs when the curves collapse near $W \approx 17$.} \label{IPRfig2} \end{center} \end{figure} \section{Quantum Walk Dynamics} The dynamics of a quantum walk on the glued trees graph with disorder has been studied using the one-dimensional column-space model by Keating {{\it et al.}} \cite{Keating07}. In the previous section, however, we have seen that the eigenstates of the MGT undergo a localization transition at large disorder. This leaves open the possibility that a dynamical speed-up can be observed for small disorder. To explore this possibility, we again turn to numerical studies. There are three issues to be studied: first, what is the probability to hit the right root should the quantum walk begin at the left root? 
Second, how does this probability decay as a function of disorder and size, and by what mechanism? Finally, if the disordered quantum walk does not hit the right root, how far does it get? The first question can be answered by calculating the average of \begin{equation} p_{\subs{hit}}(t) = |\langle \mbox{col} \ 2d | e^{-i \mathcal{H} t} |\mbox{col} \ 0\rangle|^2 \end{equation} for instances of the SGT Hamiltonian with disorder strength $W$ and for various sizes $d$. This corresponds to the single-shot measurement procedure \cite{Kempe03a} for the quantum walk, and a representative set of $p_{\subs{hit}}(t)$ as a function of time and values of $W$ is shown in Fig. \ref{phit}. The first thing to observe is the oscillatory structure, due to the Bessel function structure for $p_{\subs{hit}}(t)$ \cite{Childs2002,Bose2003}, from which we can see that probability is maximized at the hitting time \begin{equation} t_{\subs{hit}} \approx \frac{1}{2\sqrt{2} \gamma} \left[ 2 d + 1 + 1.0188 \left(d + \frac{1}{2}\right)^{1/3} \right], \end{equation} with a value $p_d \sim d^{-2/3}$ for large $d$. Second, as disorder increases the maximum of the hitting probability is seen to decrease. This hitting probability can be approximated by \begin{equation} p_{\subs{hit}}(t_{\subs{hit}}) \approx p_d \exp \left[ - \frac{1}{16} (d - \frac{1}{2}) W^2 \right]. \label{phitapprox} \end{equation} The exponential decay of $p_{\subs{hit}}/p_d$ is shown in the inset to Fig. \ref{phit}. \begin{figure}[t] \begin{center} \includegraphics[width=3.5in]{fig5} \caption{(Color online) Hitting probability $p_{\subs{hit}}(t)$ as a function of time for the quantum walk on the glued trees graph with $d=15$ and various values of disorder. The symbols were calculated using exact numerical simulations with (from top to bottom) $W = 0, 0.2, 0.4, 0.6, 0.8, 1.0, \mbox{and} \ 1.2$, with 10 realizations of $\mathcal{H}$ for each value of $W$. The solid curves were calculated using the local decay model (see text).
The inset shows the hitting probability $p_{\subs{hit}}(t_{\subs{hit}})/p_d$ as a function of $d$ with symbols from numerical simulations and the curve from the approximation of Eq. (\ref{phitapprox}) (see text). } \label{phit} \end{center} \end{figure} By what mechanism does this decay occur? It is not associated with eigenstate localization, as the eigenstates of the system are well within the extended regime in the phase diagram of Fig. \ref{IPRfig1}. To understand this, we turn to another quantity of interest, the column-space probability \begin{equation} p_{\subs{col}}(t) = \sum_{j=0}^{2d} \; \left| \,\braket{\mbox{col }j}{\psi(t)} \,\right|^2. \end{equation} Quantum walks on the glued trees graph that are initially in a column space state $| \mbox{col} \ j_0 \rangle$ will remain in the column space in the absence of disorder. Once static disorder is introduced, the eigenstates lying within the column space become coupled to the other eigenstates of $G_d$. These are associated with the subgraphs of $G_d$, and have zero amplitude on the graph's two roots. The resulting decay of $p_{\subs{col}}(t)$, shown in Fig. \ref{pcol}, leads to the decay of $p_{\subs{hit}}(t)$. \begin{figure}[b] \begin{center} \includegraphics[width=3.5in]{fig6} \caption{(Color online) Column space probability $p_{\subs{col}}(t)$ as a function of time for the quantum walk on the glued trees graph with $d=15$ and various values of disorder. The symbols were calculated using exact numerical simulations with (from top to bottom) $W = 0, 0.2, 0.4, 0.6, 0.8, 1.0, \mbox{and} \ 1.2$, with 10 realizations of $\mathcal{H}$ for each value of $W$. The error bars indicate the standard deviation of each average. The solid curves were calculated using the local decay model (see text). } \label{pcol} \end{center} \end{figure} We can provide an analytical estimate of this decay through perturbation theory.
Assuming the walk begins in a column state $\kc{j_0}$, taking the second-order expansion of $p_{\subs{col}}(t)$ gives \begin{eqnarray} p_{\subs{col}}(t) &=& \sum_{j=0}^{2d} \; \left| \,\bc{j} \exp(-i\,\ham\,t)\kc{j_0} \,\right|^2 \nonumber \\ &\approx& \sum_{j=0}^{2d} \; |\braket{\mbox{col $j$}}{\mbox{col $j_0$}}|^2 + t^2 |\bc{j}\ham\kc{j_0}|^2 \nonumber \\ & & \qquad - t^2 \left( \braket{\mbox{col $j$}}{\mbox{col $j_0$}} \bc{j_0}\ham^2\kc{j} \right), \nonumber \end{eqnarray} where the Hamiltonian is given by $\ham = \ham_0 + \ham'$. It is straightforward to show that $\ham_0$ contributes nothing to the quadratic term, and for $\ham'$ given by Eq. (\ref{hprime}) we find \begin{equation} p_{\subs{col}}(t) = 1 - \frac{t^2}{N_{j_0,d}^2} \left[ (N_{j_0,d} - 1) \sum_i {\epsilon_i}^2 - \sum_{i \ne j} \epsilon_i \, \epsilon_j \right]. \end{equation} Averaging $p_{\subs{col}}(t)$ over the disorder we find \begin{equation} \langle p_{\subs{col}}(t) \rangle = 1 - \frac{1}{12} t^2 W^2 \left( 1 -\frac{1}{N_{j_0,d}} \right), \end{equation} where we have used $\langle \epsilon_i \epsilon_j \rangle = \frac{1}{12} W^2 \delta_{ij}$. This result for the short-time decay applies to any graph for which the quantum walk can be projected onto a column space. On longer timescales, we conjecture that the decay will have an exponential character, while retaining the position dependence. Hence, extrapolating from the short-time result, we postulate a model of ``local (exponential) decay'', in which the walk evolution is computed as in the ideal column space representation, but the probability at each site $j$ is allowed to decay: \begin{equation} p_{j}(t) = p_0 \exp\left[ - \frac{t}{12} \frac{W^2}{\gamma} \left(1-\frac{1}{N_{j,d}} \right) \right].
\label{loc_gamma} \end{equation} This is equivalent to applying the mapping \begin{equation} \ham \quad\mapsto \quad \hcol - \, i\Gamma/2, \label{ldm} \end{equation} where $\hcol$ is the projection of the graph Hamiltonian onto the column space, and $\Gamma$ is given by \begin{equation} \Gamma = \frac{1}{12} \frac{W^2}{\gamma}\, \sum_{j=0}^{2d}\left(1-\frac{1}{N_{j,d}} \right) \kc{j}\bc{j}, \end{equation} with $\gamma$ denoting the unit time hopping probability from $\mathcal{H}_0$. This expression for $\Gamma$ can be anticipated by applying Fermi's Golden Rule to $\mathcal{H}'$ in the eigenstate basis, but the presence of a large discrete component to the spectrum ($\sigma_0$) prevents us from making a precise derivation. Numerical evidence, however, suggests that this is in fact the correct mechanism. Along with the exact numerical results for $p_{\subs{hit}}$ and $p_{\subs{col}}$ in Figs. \ref{phit} and \ref{pcol}, we have included results from the local-decay model calculated in the (exponentially smaller) column-space representation for $\mathcal{H}_0$. We observe that the local decay model predictions (solid lines) agree well with the results of simulations (points), to within one standard deviation. Our model accurately reproduces many features of the exact simulations, such as the variation in the decay rate of $p_{\subs{col}}$ and the oscillations in $p_{\subs{hit}}$. Furthermore, if we use the column space probability to predict the hitting probability at the target node (opposite root), the agreement is quite good. This supports the claim that the disorder-induced reduction in quantum transport is primarily explained by decay from the column space. 
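For concreteness, the local decay model admits a compact numerical implementation. The sketch below is our own illustration, not the authors' code: it assumes glued \emph{binary} trees, for which the column sizes are $N_{j,d}=2^{\min(j,\,2d-j)}$ and the column-space hopping takes the standard value $\sqrt{2}\gamma$, and it builds the non-Hermitian matrix $\hcol - i\Gamma/2$ directly.

```python
import numpy as np

def column_sizes(d):
    """Column sizes N_{j,d} of the glued binary trees graph:
    2^j in the left tree, mirrored in the right tree (assumption)."""
    return np.array([2.0**min(j, 2*d - j) for j in range(2*d + 1)])

def local_decay_propagator(d, W, t, gamma=1.0):
    """exp(-i H_eff t) with H_eff = H_col - i*Gamma/2,
    in the (2d+1)-dimensional column space."""
    n = 2*d + 1
    H = np.zeros((n, n), dtype=complex)
    idx = np.arange(n - 1)
    H[idx, idx + 1] = H[idx + 1, idx] = np.sqrt(2) * gamma  # column-space hopping
    N = column_sizes(d)
    H -= 0.5j * np.diag(W**2 / (12 * gamma) * (1 - 1/N))    # local decay term
    evals, V = np.linalg.eig(H)                             # non-Hermitian diagonalization
    return V @ np.diag(np.exp(-1j * evals * t)) @ np.linalg.inv(V)

def p_hit(d, W, t, gamma=1.0):
    """Hitting probability |<col 2d| exp(-i H_eff t) |col 0>|^2."""
    return abs(local_decay_propagator(d, W, t, gamma)[2*d, 0])**2
```

For $W=0$ this reduces to the ideal unitary column-space walk; for $W>0$ the total column-space probability decays in time, mimicking the behavior of $p_{\subs{col}}(t)$ in Fig. \ref{pcol}.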
So far we have looked at the probability at the right root and the column space, but a more global characterization of the walk propagation can be found by analyzing the average depth reached by the quantum walk, \begin{equation} r(t) = \bra{\psi(t)} \;\hat{r}\; \ket{\psi(t)}, \end{equation} where the column position operator $\hat{r}$ is defined as \begin{equation} \hat{r} = \sum_{j=0}^{2d} \sum_{n=0}^{N_{j,d}-1} j |j,n\rangle \langle j,n| \end{equation} with the property $\hat{r} \kc{j} = j \kc{j}$. The value of $r(t)$ thus gives a snapshot of the expected position of the walk along the graph at any given time. This is displayed in Fig. \ref{rtime} as a function of time and disorder. In the absence of disorder, the average depth shows an oscillatory character consistent with ballistic propagation of the wavepacket and reflections at the two ends of the graph. Disorder-induced decay from the column space suppresses the amplitude of these oscillations relative to the ideal case, ultimately ``damping'' them out. Therefore, the walk has a reduced probability of traversing the graph, and substantial probability is instead deposited in the center of the graph, where the concentration of nodes is highest. \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig7} \caption{(Color online) Average distance $r(t)$ for $d=15$ as a function of time and disorder $W$. For each $W$, the simulations were averaged over $10$ realizations of $\mathcal{H}$. The quantum oscillations decay in time for small disorder, with a critical damping near $W \approx 2$, indicating a type of quantum-to-classical transition.} \label{rtime} \end{center} \end{figure} In Fig. \ref{rmax} we plot the maximum value of $r(t)$ in the range $t < 3 t_{\subs{hit}}$ against the strength of disorder, illustrating the localization transition on the glued trees graph.
We see that for small disorder ($W < 2$), the maximum value of $r$ is high, corresponding to the quantum walk hitting the right root, for which $r = 2d$. As disorder increases, the curve falls, and begins to level off at the graph center ($r \approx d$ for $2 < W < 4$). For larger amounts of disorder ($W > 4$), the curves continue to decrease, converging on a single value around $W = 16$, precisely when we expect all eigenstates to be localized. \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig8} \caption{(Color online) Maximum average distance $r(t)$ as a function of disorder $W$ for various depths. For each value of disorder the maximum of $r(t)$ was averaged over $1000, 100$, and $10$ realizations for $d=5, 10$, and $15$, respectively. For intermediate disorder ($2 < W < 4$), there is a quantum-to-classical transition, while for large disorder ($W>15$) a localization transition is observed.} \label{rmax} \end{center} \end{figure} This analysis of the quantum walk dynamics strongly suggests a type of quantum-to-classical or ``wavelike-to-diffusive'' crossover at weak disorder. Note that this is not a ``ballistic-to-diffusive'' crossover, and does not conflict with the ballistic spreading of wavepackets found by Klein \cite{Klein95} for the Bethe lattice at weak disorder, as classical diffusion also leads to ballistic spreading ($ r(t) \sim t$). It does, however, suggest that there may be a length scale (the mean-free-path) that limits the size of graphs for which a speedup could occur in the presence of disorder. That is, the exponential decay of the hitting probability (in $d$) seen in Fig. \ref{phit} may be interpreted as the classical probability for a walker to traverse $d$ sites given a mean-free-path of order $\ell \sim 1/W^2$. To understand this crossover in more detail we extend our analysis of the quantum walk to a transport model.
\section{Quantum Walk Transport} To identify the quantum-to-classical transition, we consider the quantum transmission through the glued trees graph, subject to disorder. To transform the quantum walk into a transmission problem, we attach ``tails'' to the input and output nodes, and look at the transmission coefficient through the graph for a wavefunction of the form \begin{equation} \Psi(n) = \left\{ \begin{array}{ll} e^{i k n} + \mathcal{R} e^{-i k n} & \mbox{for} \ n < 0 \\ \mathcal{T} e^{i k n} & \mbox{for} \ n > 2d+1 \end{array} \right. \end{equation} This represents an ingoing wave that is reflected and transmitted through the graph, as illustrated in Fig. \ref{scatterfig}. This type of quantum walk has been used to develop a quantum algorithm for NAND-tree evaluation \cite{Farhi2008}, and has been generally analyzed in \cite{Varbonov2009}. We consider the transmission probability $T = |\mathcal{T}|^2$ as a function of depth and disorder. \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig9} \caption{Scattering approach to the quantum walk, in which an incident wavepacket is transmitted and reflected along the ``tails'' connected to the modified glued trees graph.} \label{scatterfig} \end{center} \end{figure} Using a standard analysis for transmission in tight-binding lattices \cite{Sadreev2003}, we find the transmission amplitude \begin{equation} \mathcal{T} = \langle \mbox{col} \ 0 | \frac{2 i \sin k}{\tilde{\mathcal{H}} - 2 \cos k} | \mbox{col} \ 2 d+1\rangle \end{equation} where \begin{equation} \tilde{\mathcal{H}} = \mathcal{H} + e^{i k} \left( | \mbox{col} \ 0\rangle \langle \mbox{col} \ 0| + |\mbox{col} \ 2d+1 \rangle \langle \mbox{col} \ 2d+1| \right) \end{equation} is an effective Hamiltonian for the MGT graph alone (note that this graph has $2d+2$ columns, labeled $0$ through $2d+1$). This quantity can be calculated by diagonalizing the non-Hermitian Hamiltonian $\tilde{\mathcal{H}}$, and forming the appropriate matrix elements in $\mathcal{T}$.
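Numerically, $\mathcal{T}$ is most easily obtained by a linear solve rather than a full diagonalization. The sketch below is our own illustration: it assumes tight-binding tails with hopping $\gamma$ and dispersion $E = 2\gamma\cos k$, so that the self-energy added to each root is $\gamma e^{ik}$, matching the effective Hamiltonian above.

```python
import numpy as np

def transmission(H, k, gamma=1.0):
    """Transmission amplitude T = <0| 2i*gamma*sin(k) * G |n-1>, with
    G = (H_tilde - 2*gamma*cos(k))^{-1} and H_tilde the effective
    Hamiltonian carrying the tail self-energy gamma*e^{ik} on both ends."""
    n = H.shape[0]
    Ht = H.astype(complex)
    Ht[0, 0] += gamma * np.exp(1j * k)
    Ht[-1, -1] += gamma * np.exp(1j * k)
    G = np.linalg.inv(Ht - 2 * gamma * np.cos(k) * np.eye(n))
    return 2j * gamma * np.sin(k) * G[0, -1]

# Sanity check: a clean uniform chain with matched tails transmits perfectly.
chain = np.diag(np.ones(9), 1) + np.diag(np.ones(9), -1)
T2 = abs(transmission(chain, k=np.pi/3))**2   # equals 1 for any 0 < k < pi
```

Diagonal disorder enters by adding random on-site energies to $H$ before the solve; averaging $|\mathcal{T}|^2$ over realizations then yields the disorder-averaged transmission.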
The resulting transmission probability $T = |\mathcal{T}|^2$ is shown as a function of momentum $k$ and disorder $W$ in Figs. \ref{transfig1} and \ref{transd6}, for depths $d=5$ and $d=6$, respectively. For small disorder, there is a size-dependent oscillatory structure as a function of $k$ due to resonances, much like those found in \cite{Sadreev2003} (and briefly described in Appendix A). These oscillations in $T$ disappear once the disorder strength reaches $W \approx 2$, after which the transmission decays monotonically. Figure \ref{transmission_fit} shows $T$ for $k=\pi/2$ as a function of disorder for many graph sizes, which all exhibit the same behavior for $W > 2$, indicative of a transition in $T$. \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig10} \caption{Transmission probability $T$ as a function of momentum $k$ and disorder $W$ for a modified glued trees graph of depth $d=5$. For each value of momentum and disorder, the transmission was averaged over 250 realizations of $\tilde{\mathcal{H}}$. } \label{transfig1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig11} \caption{Transmission probability $T$ as a function of momentum $k$ and disorder $W$ for a modified glued trees graph of depth $d=6$. For each value of momentum and disorder, the transmission was averaged over 100 realizations of $\tilde{\mathcal{H}}$.} \label{transd6} \end{center} \end{figure} The decay of the transmission probability with disorder can be understood using a classical model, described in Appendix B.
This model uses a diffusion constant proportional to the mean-free-path $\lambda \sim \ell \sim W^{-2}$ and leads to a classical transmission probability of the form \begin{equation} T_c = \frac{T_0}{1 + c (W/\gamma)^2}, \label{T_c} \end{equation} where the coefficients $T_0$ and $c$ presumably depend on the exact mapping of the disordered quantum walk to the diffusion equation, such as the method of \cite{Amir2009} or the results of \cite{Erdos2005}. This expression, Eq. (\ref{T_c}), with fitted values $T_0 = 0.8$ and $c = 0.2$, is shown in Fig. \ref{transmission_fit}. This agreement, for intermediate values of disorder, provides confirmation of the quantum-to-classical crossover observed in the dynamical studies of the previous section. \begin{figure} \begin{center} \includegraphics[width = 3.5in]{fig12} \caption{(Color online) Transmission probability $T$ for $k=\pi/2$ as a function of disorder and for various depths. The various symbols are $d = 7$ (blue circles), $d=8$ (red upward triangles), $d=9$ (blue squares), and $d=10$ (red downward triangles), averaged over $500, 200, 100, \mbox{and} \ 50$ realizations of $\tilde{\mathcal{H}}$, respectively. The solid black curve is the transmission probability for the classical random walk $T_c$ (see text).} \label{transmission_fit} \end{center} \end{figure} \section{Conclusion} In this paper, we have carried out an investigation of the effects of diagonal disorder on quantum walks on the glued trees graph. While disorder does lead to localization in the strong disorder limit, we find the primary effect in the case of small disorder to be quantum decay out of the column space. The quantum decay can be accurately modeled in the column space by a non-unitary mapping that enforces position-dependent decay of the probability.
This local decay model is efficient to compute, owing to the exponential reduction in size of the representation, yet it allows prediction of the end-to-end hitting probability, and should be extendable to other graphs with similar symmetries. One such graph is the hypercube. This problem has been previously studied for quantum state transfer \cite{Strauch2008}, where the effects of off-diagonal disorder were emphasized. Numerical simulations for diagonal disorder, however, provide very similar results to those found in Sec. IV, with one important difference. The hitting time for the hypercube is independent of the dimension $d$ (which is analogous to the depth of the glued binary trees graph). The local decay model then predicts that the hitting probability should decay as $e^{-W^2/12}$, a result borne out by simulations. Thus, for the hypercube, quantum transport outperforms classical transport for small $W$. Such a result is also possible for the glued trees graph. The exponential suppression of the hitting probability occurs for the specific case of a state initially localized to the left root of the graph. The results of Sec. V, which use a graph with ``tails'', show that appreciable transport is possible for $W<2$, provided there is no disorder in the tails and an appropriate initial state can be found. In addition, alternative measurement strategies \cite{Varbonov2008} or the inclusion of traps \cite{Mulken2011} could provide opportunities for speedup. Exploring this possibility could provide additional context for understanding environmentally assisted quantum transport \cite{AGuzik08}. In summary, we have performed an analysis of the effect of static disorder on a quantum walk on the glued trees graph. For small disorder, we find that the dominant effect is a type of quantum decay, and not quantum localization.
For intermediate disorder, there is a crossover to diffusive transport, while a localization transition is observed at large disorder, in agreement with Anderson localization on the Cayley tree. Our results suggest that intermediate disorder will inhibit any quantum speedup, but also that large speedups are possible for quantum walks on complex networks with small disorder. \begin{acknowledgments} We thank A. Aspuru-Guzik, S. M. Girvin, T. Kottos, and S. Lloyd for helpful discussions. FWS was supported by the Research Corporation for Science Advancement. \end{acknowledgments}
\section{Introduction} As conversational agents and dialog systems are deployed to real-world scenarios, these systems require data-efficient personalization paradigms so that language systems such as conversational agents can be effectively adapted on-device. The benefits of on-device optimization are two-fold: (1) swift adaptation of model behavior based on human interactions \citep{dudy2021refocusing}; and (2) privacy protection by means of retaining all data related to the user on-device \citep{li2020secure}. One of the prevailing paradigms for learning from and engaging with end-users is \textit{federated learning}. Federated learning is an inherently decentralized learning paradigm that assumes no access to a large labeled dataset and instead leverages averaged parameter updates across all users of the system \citep{mcmahan2017communication}. Such averaged updates invariably dilute individual preferences or deviations from the mean, resulting in a model that works well for the average user while failing to appropriately capture under-represented preferences or sub-groups within the data. In this work, we present a novel approach (FedPC) to personalizing federated learning with personal and context embeddings (collectively called ``preference embeddings''), adapting more efficiently and effectively than prior work with respect to both data and compute on-device. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{fig/overview_diagram.png} \caption{Overview of our personalized federated learning setup, FedPC. Language models within client devices, such as individual agents deployed to communicate with people at hospitals, homes, or construction sites, pull down global model parameters and context embeddings.
Local, on-device data is then paired with both personal and context embeddings to produce personalized predictions with global model parameters.} \label{fig:pipeline} \end{figure} We leverage the insight that a client's data distribution is informed by both individual preferences and additional contextual information. For example, while each user may have their own \textit{individual} style, there may be more general \textit{population-wide} trends that inform the style of personalized predictions (e.g., dialogue assistants helping patients with cognitive disorders, whereby agents can personalize to individual patients and broader condition-wide trends). While individual preferences may be unique to each client (e.g. a user's taste or affect), we can more accurately personalize to client preferences with the addition of context, as shared-context parameters carry beneficial stylistic information across clients \citep{dudy2021refocusing,jones1999automatic}. Stylistic or situational context provides additional information to curate relevant language outputs that can be shared across users. In this work, we contribute a new approach to personalized federated learning that is both easier to learn and more effective than prior work, and investigate the utility of personalization via individual preferences and contexts. While prior language generation approaches have developed personal or persona-based generative systems \citep{wu2021personalized,zhang2018personalizing} or context-based generative systems \citep{cheng2019dynamic, lin2019moel} individually, none have combined them to personalize outputs in a low-data setting under stylized preferences. We show that our approach is more sample-efficient than state-of-the-art baselines, while requiring less time to train. We additionally present an inference-only version of our approach, personalizing without backpropagation for new users. 
Finally, we directly test the potential for personalization with users who have been held-out from training (i.e., testing with new users). An overview of our approach is given in Figure \ref{fig:pipeline}. \section{Related Work} Federated learning enables machine learning at scale for a diverse population of end-users without first collecting a large, labeled dataset for all possible tasks. After the introduction of \textit{federated averaging} \citep{mcmahan2017communication}, focus has shifted to different ways of personalizing to individual users. Prior personalization approaches for federated learning have typically involved learning personal network heads and a shared global encoder (i.e., ``split-learning'' approaches \citep{gupta2018split}), or learning a separate local model from a global initialization (i.e., a ``meta-learning'' approach \citep{finn2017maml,nichol2018reptile}). \paragraph{Learning Personal Model Heads} \label{subsec:personal-heads} The most prevalent approach to personalization in federated learning is through personalized model heads. Such approaches share gradient information to learn a global feature encoder, but retain user-specific classification-head gradients on-device. Approaches such as FedRep \citep{collins2021fedrep} solely separate out local and global gradients, while other methods such as PFedMe \citep{dinh2020pfedme} enforce constraints on model-divergence (such as via FedProx \citep{li2020federated}). Other approaches, such as FedMD \citep{li2019fedmd}, enable clients to adopt any desired architecture, sharing a common backbone but allowing for completely divergent model heads \citep{arivazhagan2019federated,kim2021spatio,rudovic2021personalized,paulik2021federated}. Finally, there has recently been increased effort on identifying clusters of related users to share model heads, such as with K-Means clustering in PFedKM \citep{tang2021pfedkm} or through clustered personal embeddings in FedEmbed \citep{silva2022fedembed}.
Notably, there is no prior work which learns both personal \textit{and} contextual model heads for personalization within federated learning. \paragraph{Meta-Learning Global Models} \label{subsec:meta-learned} An alternate approach to personalizing federated learning models is through the adoption of meta-learning \citep{jiang2019improving, fallah2020personalized}, in which a global model is learned prior to fine-tuning on client data. After cloning the global model as an initialization from all clients' updates, local, client-side models are permitted to diverge and fine-tune to a user's individual preferences or data distribution \citep{fallah2020personalized,deng2020apfl,hanzely2020federated,hanzely2020lower, lin2019personalizing}. However, computing and applying gradients for a full model often requires too much time, power, and memory. As such, expensive full-model gradients can often only be computed and applied when a device is not actively in use. As in the split-learning literature, there are no meta-learning approaches for disentangling personal and contextual preferences within personalized federated learning. \paragraph{Learning with Personal Embeddings} Our work leverages the insight that personal preferences can be represented using a personalized embedding, allowing the model to condition output predictions on personal preferences without requiring completely re-trained classification heads or networks. Personal embeddings have been used in prior work to capture an individual's ``style,'' often in imitation learning settings \citep{tamar2018imitation,hsiao2019learning,paleja2020interpretable,schrum2022mind}. Treating personal embeddings as neural network parameters that are updated on-device, these approaches learn to embed preferences and condition network output over both input data and preference embeddings.
Most closely related to our work are FedNLG \citep{lu2021federated}, which predicts ``persona'' parameters for users, and the Global+ model in FedEmbed~\citep{silva2022fedembed}, which learns a personal embedding for each user. However, FedNLG requires access to a user's entire history of language and demographic data in order to produce a ``persona'' for each user, informing the generation of a ``persona'' embedding, and Global+ incorporates supervised style feedback. Prior embedding-based approaches solely learn \textit{personal} embeddings, neglecting stylization through context. In our work, we explore the utility of incorporating context in addition to personal preferences, and all preference embeddings are updated solely via a self-supervised language-modeling loss. \paragraph{Personalization in Language} Personalization for language generation systems seeks to produce grounded systems that can efficiently adapt to end-user needs \citep{yang2021towards, dudy2021refocusing}. One such approach to personalization is by learning a ``persona'' for each user and conditioning the language model on the embeddings or representation for the persona via a memory network \citep{zhang2018personalizing, wu2021personalized, lu2021federated}. ``Personas'' are generally short sequences of 5-6 sentences which contain information about an individual such as ``I have blonde hair'' or ``My mom is a doctor.'' Similar approaches leverage Bayesian inference methods to infer context \citep{majumder2020like} or persona \citep{kim2020will}, and then condition the language generation on the inferred context. However, such approaches involve collecting and maintaining user profiles on a central server, which may violate user confidentiality. Alternate approaches seek to bypass this issue by enabling dynamic speaker modeling through context-based fine-tuning rather than conditioning on profile information \citep{cheng2019dynamic, li-liang-2021-prefix}.
FedPC leverages a similar design to dynamically learn personal and context embeddings from small datasets for a given user, while also preserving user confidentiality via federated learning. FedPC represents a new direction in personalized federated learning research, enabling personal and stylized language generation with a fraction of the memory, data, and compute costs of prior approaches without requiring access to pre-made personal profiles. \section{Approach} In this section, we present our novel approach to personalization in federated learning with FedPC. FedPC produces personal and contextual preference embeddings either via backpropagation (i.e., learning preference embeddings), or by inference (i.e., predicting preference embeddings). A visual overview of our federated learning architecture is in Figure \ref{fig:architecture}, and a step-by-step walk-through of our training algorithm is given in Algorithm \ref{alg:training_loop}. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{fig/model_diagram.png} \caption{The FedPC model architecture. Input data, such as on-device conversation data for a user, is passed into the language model in addition to personal and context labels specifying the user's preferences. The personal and context labels are embedded through a preference embedding layer to produce a single preference embedding. This preference embedding is combined with the word embeddings for the input sequence and passed into the DistilGPT2 model to predict the next word.} \label{fig:architecture} \end{figure} \subsection{Personalization via Embeddings} Personalization in FedPC is achieved entirely through preference embeddings. Every input sample (e.g., an incomplete sentence) is accompanied by both a personal preference embedding, representing the user, and a contextual preference embedding, representing the context or style of the prediction.
These two embeddings are combined via an element-wise multiplication to produce a single preference embedding that accompanies the input sample. By leveraging both personal and context embeddings, FedPC considers the individual user \textit{and} the broader context of an utterance, enabling personal, stylized prediction. In the language-modeling domain, the unified preference embedding is prepended to the input utterance, providing a prefix for the model to consider \citep{li-liang-2021-prefix}. The model then predicts the next token of the utterance, and a language-modeling loss is calculated by comparing the prediction to the user's actual next token. The next token is then appended to the sequence, and preference embeddings are again prepended to the new input sequence, and the process repeats. After completing a full utterance, preference embeddings may be updated, either through backpropagation or by using an embedding-generator to predict new personal and contextual embeddings for the client. \subsection{Federated Learning Algorithm} To begin, all clients initialize their own personal embedding on-device, and the server initializes a set of $C$ context vectors for each relevant setting given the target task. We additionally assume that all data points on a client device have an associated context, $c$, being derived from the contextual information of the client device when the data point was captured (e.g., time of day, location, etc.). Training begins by distributing all the requisite information to client-devices. Client devices pull down the global model parameters, $\theta$, and the global context embedding parameters, $\phi$, making local copies, $\theta_d$ and $\phi_d$ (line 6). Unlike the global model parameters and context embeddings, the personal embeddings, $\psi_d$ do not need to be copied from the server as they are kept on client-devices. 
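To make the combination step of the previous subsection concrete, here is a minimal sketch of our own (with hypothetical embedding tables and dimensions; in the actual system the word embeddings come from the language model, and the prefixed sequence feeds the frozen LM):

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, n_users, n_contexts = 16, 4, 3

# Hypothetical embedding tables; in FedPC each personal row lives on-device.
personal_embeds = rng.normal(size=(n_users, embed_dim))
context_embeds = rng.normal(size=(n_contexts, embed_dim))

def preference_prefix(user_id, context_id, word_embeds):
    """Combine personal and context embeddings element-wise and prepend
    the result to the token-embedding sequence as a one-token prefix."""
    pref = personal_embeds[user_id] * context_embeds[context_id]
    return np.vstack([pref[None, :], word_embeds])

tokens = rng.normal(size=(10, embed_dim))  # stand-in word embeddings
seq = preference_prefix(2, 1, tokens)      # shape (11, embed_dim)
```

The single prefix row is all that distinguishes one (user, context) pair from another at inference time; the model parameters themselves are shared.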
Client devices then take $K$ gradient steps using their own on-device data, where each input sample is paired with the client's on-device embedding, $\psi_d$, and the context embedding for the particular sample, $\phi_{d,c}$, assuming the data point was drawn under context $c \in C$. Gradients are calculated using a language-modeling objective, though any objective could theoretically be applied. If preference embeddings are being generated via forward-propagation rather than learned via backpropagation, contextual and personal preference embeddings will also be predicted by an embedding-generator at this stage (note: the parameters of the embedding-generator are shared globally, being a part of $\theta$). Gradients are applied to the shared-model parameters, $\theta_d$, and are then used to update preference embeddings (line 9). If preference embeddings are being predicted, these gradient steps are also applied to the shared embedding-generator, and preference embeddings (i.e., context embeddings $\phi_d$ and personal embeddings $\psi_d$) are overwritten with their latest predicted values (lines 10-11). If preference embeddings are being learned via backpropagation, gradient steps are applied to $\phi_d$ and $\psi_d$ using Equation \ref{eqn:backprop_embeds} (lines 10-11). After $K$ steps, gradients for $\theta_d$ and $\phi_d$ are sent back to the server, while $\psi_d$ remains on-device (lines 13 - 15). The server computes a single update for the global model and context embeddings by averaging across all clients (lines 17-18). The server applies the averaged update to $\theta$ and $\phi$, and the process repeats (lines 19-21).
\begin{algorithm}[t] \caption{FedPC Training Loop} \label{alg:training_loop} \begin{algorithmic}[1] \STATE {\bfseries Given:} Training objective $\mathcal{L}$, Client devices $D$, \# client steps, $K$, \# global steps, $N$ \STATE {\bfseries Initialize:} Global model $\theta$, Context embeds $\phi$ \STATE {\bfseries Initialize:} Personal embeddings on-device $\psi$ \FOR{$n \in N$} \FOR{$d \in D$} \STATE $\theta_d = \theta, \phi_d = \phi$ \FOR{$k \in K$} \STATE Sample $B_d$ from user's on-device data \STATE $\theta_d \leftarrow \theta_d + \nabla_\theta\mathcal{L}(\theta_d, \phi_{d,c}, \psi_d, B_d)$ \STATE $\phi_d \leftarrow \phi_d + \nabla_{\phi_d}\mathcal{L}(\theta_d, \phi_{d,c}, \psi_d, B_d)$ \STATE $\psi_d \leftarrow \psi_d + \nabla_{\psi_d}\mathcal{L}(\theta_d, \phi_{d,c}, \psi_d, B_d)$ \ENDFOR \STATE $\nabla_{\theta_d} \leftarrow \theta-\theta_d$ \STATE $\nabla_{\phi_d} \leftarrow \phi - \phi_d$ \STATE Return $\nabla_{\theta_d}$ and $\nabla_{\phi_d}$ to the server \ENDFOR \STATE $\nabla_\theta \leftarrow \frac{1}{D} \sum_d^D \nabla_{\theta_d} $ \STATE $\nabla_\phi \leftarrow \frac{1}{D} \sum_d^D \nabla_{\phi_d} $ \STATE $\theta \leftarrow \theta + \nabla_{\theta}$ \STATE $\phi \leftarrow \phi + \nabla_{\phi}$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{equation} \label{eqn:backprop_embeds} \begin{aligned} &\phi_d = \phi_d + \nabla_\phi\mathcal{L}(\theta_d, \phi_{d,c}, \psi_d, B_d) \\ &\psi_d = \psi_d + \nabla_\psi \mathcal{L}(\theta_d, \phi_{d,c}, \psi_d, B_d) \end{aligned} \end{equation} In a typical federated averaging deployment, client devices will pull down global parameters, fine-tune on local datasets, and then test on held-out, local data. With FedPC, the majority of the network's parameters, $\theta$, are frozen, reflecting a federated-learning setup with a more constrained computational budget when deploying large language models.
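Algorithm \ref{alg:training_loop} can be sketched compactly in a few lines. The quadratic loss below is a hypothetical stand-in for the language-modeling objective, and the updates are written as plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients, n_contexts, K, n_rounds, lr = 8, 5, 2, 3, 10, 0.05

theta = rng.normal(size=dim)              # global model parameters
phi = rng.normal(size=(n_contexts, dim))  # global context embeddings
psi = rng.normal(size=(n_clients, dim))   # personal embeddings, kept on-device

def grads(th, ph, ps, batch):
    """Hypothetical quadratic-loss gradients; a real deployment would
    backpropagate the language-modeling loss through the (frozen) LM."""
    r = th + ph + ps - batch
    return r, r, r

for _ in range(n_rounds):                          # global steps
    d_theta, d_phi = [], []
    for c in range(n_clients):                     # loop over clients
        th, ph = theta.copy(), phi.copy()          # pull global params
        ctx = c % n_contexts                       # context of this client's data
        for _ in range(K):                         # K local steps
            batch = rng.normal(size=dim)           # on-device data
            g_th, g_ph, g_ps = grads(th, ph[ctx], psi[c], batch)
            th -= lr * g_th
            ph[ctx] -= lr * g_ph
            psi[c] -= lr * g_ps                    # psi never leaves the device
        d_theta.append(th - theta)                 # deltas returned to server
        d_phi.append(ph - phi)
    theta = theta + np.mean(d_theta, axis=0)       # server-side averaging
    phi = phi + np.mean(d_phi, axis=0)
```

Note that only the $\theta$ and $\phi$ deltas cross the network; the per-client `psi` rows are read and written exclusively inside the client loop.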
Using FedPC, clients pull down and subsequently freeze global parameters, $\theta$, and either generate preference embeddings from observation, or compute and apply gradients only to context embeddings, $\phi$, and their local personal embedding, $\psi$. By relying on forward-propagation calls rather than backpropagation, or by computing gradients over only these embeddings, we reduce the computational overhead of FedPC while preserving or even improving upon accuracy relative to fine-tuning an entire model. When testing over local data, updates to context embeddings, $\nabla_\phi$, are not sent to the central server. Rather, these gradients are directly applied to the context embeddings for the current user, and then discarded. When instantiating a new embedding for a previously unseen user, we set the user's embedding to the noisy average of all known user embeddings. \subsubsection{Generating Preference Embeddings} To generate embeddings, we adopt a procedure similar to HyperNetworks \citep{ha2016hypernetworks,shamsian2021personalized}, in which a neural network is trained to predict the parameters of another network. In FedPC, an embedding-generator is trained to predict the parameters of preference embeddings (either personal or context). To generate embeddings, we apply an additional transformer decoder block~\citep{vaswani2017attention} that uses a randomly-initialized personal embedding and a known context embedding as the queries, along with the word embeddings for the utterance as the keys and values, to update the given preference embeddings. We utilize separate generators to predict the personal embedding, $\psi_d$, and the context embedding, $\phi_d$. Specific training details for the embedding-generator applied to language-modeling are given in the appendix.
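The generator's core mechanism can be sketched as a single head of cross-attention in numpy; the actual generator is a full transformer decoder block with separate generators for $\psi_d$ and $\phi_d$, and all weights, dimensions, and names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, KV, Wq, Wk, Wv):
    # Queries attend over the utterance's token embeddings (keys = values).
    q, k, v = Q @ Wq, KV @ Wk, KV @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def generate_preference_embeddings(word_embeds, context_embed, params, rng):
    """Predict (psi, phi) for one utterance.

    The queries are a randomly-initialized personal embedding and the
    known context embedding; the attention output is added residually
    to produce the updated preference embeddings.
    """
    psi_init = rng.normal(scale=0.02, size=context_embed.shape)
    queries = np.stack([psi_init, context_embed])
    updated = queries + cross_attention(queries, word_embeds, *params)
    psi, phi = updated
    return psi, phi
```

Here everything is randomly initialized for illustration; in FedPC the generator weights are trained as part of the globally shared $\theta$.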
While the embedding-generator must be learned from scratch during training, this method of predicting preference embeddings allows us to generate personal embeddings for previously \textit{unseen} users when testing. By predicting preference embeddings, we circumvent the need for expensive gradient calculation and on-device learning. Instead, new users can quickly reap the benefits of personalized predictions via a trained preference prediction module (i.e., the embedding generator), as opposed to conventional personalized federated learning methods that require slow and sample-inefficient on-device learning. \section{Experiments} We conduct several experiments to evaluate the sample efficiency, generalization, and runtime of our approach relative to baseline federated learning frameworks. In our experiments, we compare: \begin{itemize} \item FedPC -- Learning personal and context embeddings jointly with a global feature encoder, and performing local fine-tuning of personal and context embeddings on-device. \item FedPC (Frozen) -- As above but without local fine-tuning. \item FedPC (Generated) -- Learning an embedding generator and global feature encoder, and then using only generated embeddings at test-time. \item Split-Learning -- Learning personal and context-specific model-heads jointly with a global feature encoder, and performing local fine-tuning of the personal and context-specific model heads on-device \citep{dinh2020pfedme,collins2021fedrep}. \item Meta-Learning -- Learning a single global model for all users and contexts, and fine-tuning the shared model-head on-device \citep{finn2017maml,nichol2018reptile}. \end{itemize} We conduct two sets of experiments to compare the above approaches on both sample efficiency and runtime efficiency. For the sample efficiency experiments, we present perplexity numbers for all methods across two versions of the dataset: known users and withheld users. 
For our known user experiments, all users are present in the training and testing set. For our withheld user experiments, a subset of users from each dataset is withheld entirely from training, and performance results are presented only for the held-out users. Perplexity is calculated over unseen utterances with the first three tokens of each utterance given as a prompt. Finally, we present qualitative results from our method, demonstrating the power of stylized generation for individual users. All models are initialized with the DistilGPT2 pre-trained model from Huggingface \citep{wolf2019huggingface}, with all layers frozen. For Split-Learning and Meta-Learning, the last layer of the model is unfrozen. Training details are in the appendix. \begin{table*}[t] \caption{Perplexity Showing Sample Efficiency Across All Methods for Known Users. Lower is Better.} \label{tab:known_results} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lc|ccccc} & \# Samples & FedPC & FedPC (Frozen) & FedPC (Generated) & Split-Learning & Meta-Learning \\ \hline \parbox[t]{0mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Reddit}}} &&&&&& \\ &1 & 219.5 $\pm$ 35.7 & 146.2 $\pm$ 2.3 & \textbf{120.2 $\pm$ 1.4} & 1297.5 $\pm$ 21.9 & 226.2 $\pm$ 3.7 \\ &5 & 131.6 $\pm$ 10.1 & 136.9 $\pm$ 3.4 & \textbf{123.3 $\pm$ 2.8} & 994.3 $\pm$ 27.8 & 234.7 $\pm$ 5.1 \\ &15 & \textbf{111.4 $\pm$ 3.5} & 132.6 $\pm$ 4.7 & 120.0 $\pm$ 3.3 & 691.3 $\pm$ 34.3 & 227.1 $\pm$ 8.4 \\ &All & 189.5 $\pm$ 6.7 & 167.9 $\pm$ 2.0 & \textbf{124.9 $\pm$ 1.3} & 930.4 $\pm$ 30.9 & 241.4 $\pm$ 2.1 \\ \hline \hline \parbox[t]{0mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{TV Shows}}} &&&&&&\\ &1 & 57.2 $\pm$ 3.6 & \textbf{50.3 $\pm$ 1.6} & 51.6 $\pm$ 1.3 & 359.4 $\pm$ 28.2 & 111.7 $\pm$ 4.6 \\ &5 & 51.5$\pm$ 1.5 & \textbf{50.7 $\pm$ 2.1} & 51.7 $\pm$ 2.0 & 244.5 $\pm$ 15.1 & 110.0 $\pm$ 6.5 \\ &15 & \textbf{48.8 $\pm$ 1.7} & 51.0 $\pm$ 2.1 & 51.7 $\pm$ 2.0 & 167.7 $\pm$ 8.6 & 111.9 $\pm$ 6.1 \\ &All & \textbf{46.7 $\pm$ 1.7} 
& 51.2 $\pm$ 2.0 & 52.1 $\pm$ 2.6 & 82.1 $\pm$ 3.3 & 113.0 $\pm$ 4.7 \\ \end{tabular}} \end{table*} \subsection{Datasets} We conduct our experiments using two datasets, a smaller dataset of TV Show scripts (``Friends'' \citep{chen2016character} and ``Game of Thrones'' \citep{got_data}) and a larger dataset of Reddit posts \citep{convokit}. Each dataset has a diverse set of individuals as well as clearly defined contexts/styles (i.e., TV shows or subreddits). These properties enable us not only to compare our approach against baselines for personalized prediction, but also to move users between contexts or styles (e.g., producing text for a ``Friends'' character under a ``Game of Thrones'' context). By generating sequences for different users under new styles, we demonstrate the power of FedPC for personal, stylized prediction. Additional information about the datasets used in this work is given in the appendix. For both datasets, we treat each sentence from a speaker (i.e., TV Show character or Reddit user) as an independent utterance and we only consider utterances with at least three tokens. For experiments on known users, we perform a 60/20/20 Train/Validation/Test data split. For experiments on novel, unseen users, we perform a 70/15/15 split of Reddit users, and we manually select the ``Friends'' and ``Game of Thrones'' users to include in each data fold. For both sets of experiments, all contexts are seen during training. \begin{table*}[t] \caption{Perplexity Showing Sample Efficiency Across All Methods for Withheld Users.
Lower is Better} \vspace{2mm} \label{tab:withheld_results} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lc|ccccc} & \# Samples & FedPC & FedPC (Frozen) & FedPC (Generated) & Split-Learning & Meta-Learning \\ \hline \parbox[t]{0mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Reddit}}} &&&&&& \\ & 1 & 594.3 $\pm$ 973.8 & 202.0 $\pm$ 5.9 & \textbf{117.3 $\pm$ 1.8} & 922.9 $\pm$ 27.8 & 213.9 $\pm$ 6.0 \\ & 5 & 139.4 $\pm$ 4.4 & 202.9 $\pm$ 10.9 & \textbf{117.5 $\pm$ 2.7} & 655.9 $\pm$ 18.8 & 212.2 $\pm$ 5.4 \\ & 15 & 117.4 $\pm$ 1.9 & 203.6 $\pm$ 11.2 & \textbf{116.6 $\pm$ 2.6} & 449.2 $\pm$ 11.4 & 211.7 $\pm$ 3.7 \\ & All & \textbf{101.1$\pm$ 2.2} & 202.2 $\pm$ 7.6 & 117.9 $\pm$ 2.8 & 309.3 $\pm$ 8.3 & 212.8 $\pm$ 5.2 \\ \hline \hline \parbox[t]{0mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{TV Shows}}} &&&&&&\\ & 1 & 205.1 $\pm$ 292.2 & 96.4 $\pm$ 10.4 & \textbf{68.7 $\pm$ 5.9} & 283.6 $\pm$ 30.9 & 113.5 $\pm$ 13.1 \\ & 5 & 68.6$\pm$ 5.6 & 90.1 $\pm$ 4.9 & \textbf{66.7 $\pm$ 6.3} & 220.7 $\pm$ 29.2 & 111.4 $\pm$ 13.3 \\ & 15 & \textbf{62.1 $\pm$ 5.0} & 97.6 $\pm$ 6.8 & 66.1 $\pm$ 5.5 & 158.1 $\pm$ 20.0 & 117.3 $\pm$ 10.5 \\ & All & \textbf{52.3 $\pm$ 3.3} & 98.2 $\pm$ 9.5 & 68.6 $\pm$ 5.1 & 96.7 $\pm$ 14.5 & 114.2 $\pm$ 17.0\\ \end{tabular}} \end{table*} \subsection{Results and Discussion} \label{subsec:results} All experiments are repeated fifteen times, and means and standard deviations for performance and runtime results are presented in Tables \ref{tab:known_results}, \ref{tab:withheld_results}, and \ref{table:runtime_results}. Tables \ref{tab:known_results} and \ref{tab:withheld_results} show that our approach is able to generate sensible language for both held-out user instances and known users. Both embedding-based approaches presented in this paper (i.e., FedPC with generated or learned embeddings) show drastic improvements over baselines in terms of both sample- and runtime-efficiency, and are more suitable for real-world on-device language models. 
\paragraph{Summary} With known users, FedPC achieves perplexity as low as 46.7 and 100.3, on the TV Show and Reddit datasets, respectively, compared to the best baseline perplexities of 82.1 and 233.2 (a 45-50\% improvement). For unknown users, FedPC achieves perplexities of 52.3 and 97.6, respectively, compared to baselines at 96.7 and 212.7 (a 45-55\% improvement). FedPC training times are between 25-400\% faster than baseline training times. Finally, FedPC uses 0.001\% of the memory that baseline methods use for stylized personalization. \paragraph{Memory Costs} FedPC incurs a significantly lower memory cost than prior Split-Learning based approaches \citep{li2019fedmd,collins2021fedrep,dinh2020pfedme,tang2021pfedkm,rudovic2021personalized,gupta2018split}. The Split-Learning baselines require maintaining a model-head for each user and context present in the dataset, and the size of these model heads is proportional to the size of the vocabulary. On each client-device, a user's personal model head and all context heads need to be stored in memory and used in forward passes. In our work, every GPT model head is approximately 154 MB (being $768 \times 50257$ parameters). To update the model on-device, one would need to store a model head corresponding to every possible context. Our Reddit dataset involves 57 contexts, totalling an additional $\sim$ 8GB of data in memory. This memory requirement for personalized heads could become infeasible for real-world tasks, particularly for on-device inference or backpropagation on mobile devices. Using FedPC, which only requires the addition of a drastically smaller preference embedding, the total amount of memory required on device to store the embeddings is only $\sim$171 KB (0.001\% of the memory required by separate model heads). \begin{table*} \caption{Training and Testing run-time for FedPC and our baselines, in milliseconds. 
Lower is better.} \vspace{2mm} \label{table:runtime_results} \centering \begin{tabular}{l|ccccc} Method & FedPC & FedPC (Frozen) & FedPC (Generated) & Split-Learning & Meta-Learning \\ \hline \hline Train Pass Time & 88.18 $\pm$ 24.10 & \textbf{43.57 $\pm$ 11.99} & 55.96 $\pm$ 12.41 & 222.08 $\pm$ 37.55 & 111.81 $\pm$ 22.33 \\ Test Pass Time & 40.37 $\pm$ 11.76 & 40.25 $\pm$ 12.10 & 47.02 $\pm$ 12.63 & 65.42 $\pm$ 16.49 & \textbf{36.77 $\pm$ 8.95} \end{tabular} \end{table*} \paragraph{Sample Efficiency} FedPC is able to outperform Split-Learning and Meta-Learning models with significantly fewer samples across both experiments and both datasets. This trend is reflected regardless of whether embeddings are generated or learned through backpropagation. When embeddings are learned, FedPC improves with online data to more effectively model the given user's style as more data is made available to the model. Conversely, while the generated embeddings perform better given only a single sample, they are unable to improve with more data. For both known and withheld users, FedPC with generated embeddings is unable to effectively update the preference embedding to improve generation performance. Finally, we see an increase in perplexity for Reddit users with all available data when using FedPC. This result suggests that it is possible to \textit{overfit} preference embeddings, as we see an increase in perplexity from 15 to ``All'' samples (Table \ref{tab:known_results}). We observe no improvement for the Meta-Learning baseline, regardless of how much data is available for each user. This lack of improvement suggests that the model is not capable of rapidly personalizing to a single user or context with only a handful of available samples. Only updating the model head may be insufficient when the base, shared model head must generalize across all possible contexts and characters.
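The memory figures quoted in the memory-cost paragraph, and the gradient sizes discussed under runtime efficiency, follow from a few lines of arithmetic (assuming float32 parameters, 4 bytes each; all figures are for the Reddit dataset with 57 contexts):

```python
VOCAB, DIM, N_CONTEXTS = 50257, 768, 57  # Reddit dataset figures from the text

# Memory: model heads vs. preference embeddings (float32 = 4 bytes each).
head_bytes = DIM * VOCAB * 4                  # one GPT-2 output head: ~154 MB
all_heads_bytes = N_CONTEXTS * head_bytes     # all context heads on-device: ~8.8 GB
embed_bytes = N_CONTEXTS * DIM * 4            # one embedding per context: ~171 KB
memory_ratio = embed_bytes / all_heads_bytes  # on the order of 1e-5

# Gradient sizes per backward pass.
grads_split = 2 * DIM * VOCAB   # Split-Learning: personal + one context head
grads_fedpc = 2 * DIM           # FedPC: personal + context embedding only
```

The embedding-only update is smaller than the Split-Learning update by a factor equal to the vocabulary size.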
The Split-Learning baseline, on the other hand, does show significant improvement with increasing amounts of data for withheld and known users. In our known user experiments, all personal model heads should have already been well-tuned to personal preferences. Our result therefore suggests that context-specific model heads are over-generalized to their respective contexts, and must be refined to better-align with individual users. \begin{table*}[t] \caption{Generated Examples using Arya, from ``Game of Thrones'' (GoT) and Chandler, from ``Friends''.} \vspace{2mm} \label{tab:qualitative} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|l|l} Character & Show & ``We Must'' & ``I think'' \\ \hline \hline \multirow{3}{*}{Chandler} & \multirow{3}{*}{Friends} & be careful! I'm not going to get a divorce. & I'll be able to do this.\\ & & be a little bit more relaxed than we're here. & I'm a good man \\ & & be the one who's the one who's the one... & I'm a big fan of you\\ \hline \multirow{3}{*}{Chandler} & \multirow{3}{*}{GoT} & be honest with you. & I'm going to be a little more serious\\ & & be very nervous about the possibility of a bomb attack. & I'm going to be a little bit of a jerk \\ & & be a little nervous about the situation & I'm going to have a big secret.\\ \hline \multirow{3}{*}{Arya} & \multirow{3}{*}{GoT} & be a little more careful. & you can't help me \\ & & be careful about the dangers of the sea. & of the people\\ & & be wary of the possibility of a coup. & you're not going to be a hero?\\ \hline \multirow{3}{*}{Arya} & \multirow{3}{*}{Friends} & be a thief & I'm not a bad person \\ & & be a hero. & I can do it\\ & & be a little girl. & I should have a chance to do something\\ \end{tabular}} \end{table*} \paragraph{Runtime Efficiency} FedPC incurs significantly lower training costs than both Split-Learning and Meta-Learning approaches to personalization. 
While the Meta-Learning baseline does not have the memory constraints of the Split-Learning model in terms of storing \textit{additional} model heads, training the Meta-Learning baseline still involves computing gradients over all $768 \times 50257$ parameters in the shared output layer. As we show in Table~\ref{table:runtime_results}, this leads to a significantly more costly training time for each user. Similarly, the Split-Learning baseline must update \textit{at least} two model heads for each backward pass, requiring gradient computation for $2 \times 768 \times 50257$ parameters. If a user is active in multiple contexts, then additional context model-heads must be used, further exacerbating the training cost of the Split-Learning approach. The Split-Learning approach must also leverage these additional context model-heads at test-time, resulting in the slowest forward-passes of any baseline. In contrast to prior approaches, training for FedPC only requires updating $2 \times 768$ parameters. This reduced computation results in significantly lower training times. When we train an embedding-generator, there is an increase in training times reflecting the added cost of computing gradients for the embedding generator. Additionally, there is a test-time penalty incurred by the additional parameters in the forward pass. When running inference with any version of FedPC, preference embeddings are combined and then prepended to the input utterance. This process results in marginally slower test times with FedPC relative to the Meta-Learning baseline. \paragraph{Qualitative Results} Our qualitative results in Table \ref{tab:qualitative} demonstrate the power of FedPC. Not only is our model able to complete sequences for a character in their ``home'' context (i.e., the context from which all of their data is drawn), but we are also able to stylize generation for characters, bringing them into \textit{new} contexts.
We present generated samples from a ``Game of Thrones'' (GoT) character (Arya) with a ``Friends'' context embedding and a GoT context embedding. We see that Arya's generated sequences are distinct under the two different contexts. Under the GoT context, Arya's utterances match the theme of the show, suggesting danger and revolution. Under the ``Friends'' context, Arya's utterances change to instead reflect more mundane, modern language while still preserving personal attributes of the character. Across all of our experiments, particularly the novel experimental evaluation on held-out user-instances, our results provide evidence that embedding-conditioned personalization within federated learning can be effectively applied to real-world use-cases. FedPC offers a promising avenue of future work towards on-device language models, capable of efficient language generation with respect to compute-power and data availability. \section{Limitations} Firstly, although our embedding-generator offers a promising avenue for personalization without any on-device gradient computation, our generator is currently unable to improve on its generated embeddings given more examples for a given user. As shown in our results from Sec~\ref{subsec:results}, while the model can generate an effective preference embedding for a user with a single sample, it is unable to improve with more data. In future work, we hope to explore approaches to facilitate a generator which can effectively modify embeddings given additional data. Secondly, our approach caters to confidentiality by ensuring that user-data and embeddings remain on-device; however, we have not incorporated differential privacy in our experiments \citep{li2020secure}. Future work may apply differential privacy to guarantee user privacy while personalizing and contributing feature encoder information to a central server. Finally, it is important to note that FedPC does not solve all problems within the scope of language generation models.
As FedPC offers a path forward to facilitate privacy protection and efficient on-device learning for large language models, future work may extend FedPC to additional problems (e.g., language summarization or turn-based dialogue generation). \section{Conclusion} We present FedPC, a new approach to personalized federated learning, enabling efficient and high-performance personalization on client devices by leveraging preference embeddings. Combining context with individual personal preferences, FedPC outperforms baselines even when allotted a lower computational budget. We also provide a method of generating preference embeddings through inference alone, providing personalization with no on-device gradient computation, and we show performance comparable to FedPC with learned embeddings. We present experiments on two datasets, TV Show scripts and Reddit user data, offering empirical evidence of the utility of FedPC for personalizing to unseen users in a federated learning setting, i.e., roughly a 50\% improvement in both perplexity and the runtime required to fine-tune on new users. We also demonstrate qualitative results, showing the power of separate personal and context embeddings and enabling stylization of users in new contexts. Our results show that FedPC offers a promising path forward for personalization within federated learning, achieving superior quantitative results while requiring significantly less training time relative to baseline approaches. \section{Acknowledgements} This work was supported by the Office of Naval Research (ONR) under award N00014-19-1-2076. Andrew Silva was supported by the Apple Scholars in AI/ML PhD fellowship. \bibliographystyle{unsrtnat}
\section{Introduction} In widely-used machine learning (ML) systems such as TensorFlow~\cite{tensorflow} and PyTorch~\cite{pytorch}, computations are usually specified operationally: a sequence of mathematical operations such as matrix multiplications, activation functions (ReLU, tanh), convolutions, etc., are applied to a set of input tensors to define a computation. HPC systems such as ScaLAPACK~\cite{choi1992scalapack} and distributed analytic packages such as Dask~\cite{dask} offer a similar, operational interface. Operations are ``black boxes'' in the sense that the internals of the operation are mostly opaque to the system. An operation such as a matrix multiply is not a logical operation that the ML system figures out how to best optimize at runtime. Instead, it is a physical operation that has to run somewhere, on some hardware, via the invocation of a computational kernel. This operational approach has certain drawbacks, namely that the system has limited latitude for automatic optimization. The programmer is responsible for making sure that the operations can run successfully using available resources (such as the amount of RAM on each GPU), and if the operations cannot run successfully, the programmer must figure out how to break the operations up into smaller pieces that can run. Tasks such as getting a computation to run on multiple GPUs or on multiple machines in a distributed cluster so as to minimize communication are left to the programmer. \vspace{5 pt} \noindent \textbf{Toward declarative tensor programming.} There has been work on programming models that are more declarative. PyTorch and TensorFlow now both support variants of Einstein notation---a classical notation for specifying operations over tensors, and work on optimizing such computations has made its way into commercial systems~\cite{abo2016faq}. Researchers have proposed variants of the Ricci calculus as a specification language for ML~\cite{laue2020simple}. 
There have been other proposals for declarative tensor programming languages that allow for automatic generation of compute kernels that can be optimized to handle the data at hand, such as Tensor Comprehensions~\cite{vasilache2018tensor}. Nevertheless, while there has been attention paid to the question of how to specify ML computations declaratively, there has been little attention paid to the question of what the correct implementation abstraction for ML system design should be. That is, what target should an ML system \emph{back-end} present to the \emph{front-end}?\footnote{Throughout the paper, we use the term \emph{front-end} to refer to the programmer-facing API and compiler; in PyTorch, for example, this would be the part of the system that accepts Einstein notation and transforms it into a set of executable operations. We use the term \emph{back-end} to refer to the sub-system that actually executes the computation.} There are several requirements for such a back-end interface. It should be able to express most/all important ML or numerical computations. It should be hardware agnostic, but computations specified using the interface should be easily scaled to multiple ASICs and multiple machines. It should allow for computations over very large input data. It should facilitate easy, automatic optimization. And it should provide for execution times that are as fast as a carefully-designed, ``hand-built'' computation on top of the very best tools such as ScaLAPACK, Dask, TensorFlow, or PyTorch. \vspace{5 pt} \noindent \textbf{The tensor relational algebra.} In this paper, we argue that the implementation abstraction that should be offered by an ML system back-end is the \emph{tensor relational algebra}, or TRA for short. The TRA is a simple variant of the classical relational algebra (RA), which serves as the standard implementation abstraction for modern relational database system design.
The key difference is that in the TRA, the relations over which computations are performed are always binary relations between $k$-dimensional vectors (keys) and $r$-dimensional tensors. Of course, it is widely understood that a $k$-dimensional tensor can be stored as a binary relation between $k$-dimensional keys and real number values, e.g.~\cite{hutchison2017laradb}. Thus, why not use classical RA as the implementation abstraction? There is good evidence that a compute engine based upon such an abstraction will perform very poorly over the dense-tensor computations common in deep learning~\cite{luo2018scalable}: the overhead associated with storing each entry in a high-dimensional tensor as a separate tuple and moving billions of tuples through a system can result in poor performance, especially when the competition is a high-performance CPU or GPU kernel. This is why the TRA specifically allows for tensors or ``chunks'' of a larger tensor to be stored in each tuple---the fixed, per-tuple cost is incurred by a much smaller number of tuples. We argue that the TRA has three key advantages as an implementation abstraction: expressivity, easy optimization, and high performance. The joins and aggregations offered by the TRA can implement the indexed operations required by the Einstein notation and related specification languages. It is easy to implement and optimize relational operations in a distributed environment, as the decades of success enjoyed by relational database systems have demonstrated. And by design, an ML system back-end implementing the TRA is able to run classical HPC algorithms which rely on decomposing matrices into chunks, and it can run them almost as well as special-purpose HPC software such as ScaLAPACK. \vspace{5 pt} \noindent \textbf{Our Contributions.} We propose the TRA as well as an implementation algebra (IA) which is easy to implement in a distributed system.
We consider how computations in the TRA can be transformed into computations in the IA, and propose a set of transformations or equivalences that allow re-writes of computations in the IA. Finally, we implement a prototype of the IA, and show that it can enable efficient, distributed implementations of ML computations, which can reach or even significantly outperform the existing HPC and ML systems including ScaLAPACK, Dask, TensorFlow, and PyTorch. \section{TRA Overview} \subsection{Motivation} There has been significant interest in tensor manipulation languages, which allow for declarative specification of ML and numerical computations. The simplest of these is the Einstein notation, which provides a way to write a tensor computation of the form: $$\forall i,j: \textrm{ }\textbf{C}_{i,j} \leftarrow \sum_k \textbf{A}_{i,k} \times \textbf{B}_{k,j}.$$ \noindent This example describes matrix multiplication. The value in the $i$-th row and $j$-th column of the output is the dot product of the $i$-th row of input matrix $\textbf{A}$ and the $j$-th column of input matrix $\textbf{B}$. Different proposals have different variations on this idea~\cite{abo2016faq,laue2020simple,vasilache2018tensor}, but the common features are that (1) values from different tensors are fed into a scalar function (such as the multiplication above) by matching indices, and (2) those dimensions are summed over. Languages such as the Einstein notation provide an excellent, declarative interface for programming an ML system---much as SQL provides the interface for relational systems. But the question remains: what is the correct implementation abstraction for ML systems? That is, what is the interface that the back-end should export, which can be targeted by an ML programming system compiler and auto-differentiation engine that make up the ML system's front-end? \subsection{TRA: The Basics} We propose the TRA as this ML implementation abstraction.
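As a concrete illustration, the Einstein-notation matrix multiply above maps directly onto a single \texttt{einsum} call in numpy, with the subscript string spelling out the same index matching and summation:

```python
import numpy as np

# C_{i,j} = sum_k A_{i,k} * B_{k,j}, written exactly as in the Einstein notation
A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
C = np.einsum('ik,kj->ij', A, B)

assert np.array_equal(C, A @ B)  # identical to an ordinary matrix multiply
```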
The TRA operates over \emph{tensor relations} containing pairs of the form: $$\texttt{(key, array)}.$$ Conceptually, these tensor relations store sets of arrays. Each key value serves, not surprisingly, as the key for the pair. Tensors are decomposed into sets of sub-tensors to represent them as tensor relations. For example, consider the matrix \textbf{A}, $$\textbf{A} = \begin{bmatrix} 1 & 2 & 5 & 6 \\ 3 & 4 & 7 & 8 \\ 9 & 10 & 13 & 14 \\ 11 & 12 & 15 & 16 \end{bmatrix},$$ we may store this as a tensor relation \begin{align} \texttt{R}_A = \Bigg\{& \left( \langle 0, 0 \rangle, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \right), \left( \langle 0, 1 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right), \nonumber \\ &\left( \langle 1, 0 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right), \left( \langle 1, 1 \rangle, \begin{bmatrix} 13 & 14 \\ 15 & 16 \end{bmatrix} \right) \nonumber \Bigg\}.\end{align} The TRA offers a similar set of operations to the RA: joins, aggregations, selections. It makes an excellent compilation target for tensor manipulation languages like the Einstein notation for several reasons. First, these languages typically match elements in tensors based on shared index values (which can be implemented as relational joins) and then sum out various dimensions (implemented as aggregations). The problem with using the RA as an implementation abstraction for tensors, where tensors are stored relationally as keys identifying a single non-zero value in a tensor, is performance. Tensors can have millions or billions of entries, and it seems impossible to build a back-end with acceptable performance if it has to process one tuple per non-zero value. Thus, the TRA operates over sub-tensors or ``chunks'', and expects the ML system front-end to supply a kernel function (typically a high-performance CPU or GPU operation) to operate over the sub-tensors themselves. 
This does complicate the TRA compared to the RA, but as we show experimentally, this modification makes it possible for a TRA-based back-end to significantly outperform the back-ends offered by TensorFlow and PyTorch. \subsection{TRA: the Implementation Algebra} One of the key advantages of the TRA is that like the RA, there are a number of re-writes, inherited from the RA (such as push-down of selections) that can be used to optimize TRA computations. However, when designing an ML system back-end, one of our key goals is to distribute ML computations across multiple ASICs or multiple machines. Such distributed computations have many different implementations. Consider the matrix multiply required to push a batch of feature vectors (represented as one matrix) through the links into a fully connected layer in a neural network (the weights are represented as a second matrix). This could be implemented by decomposing the feature matrix into sub-matrices at each site (each containing a subset of the feature vectors) and broadcasting the weight matrix to all sites. This is the common ``data parallel'' implementation, in ML system parlance. Or, one could fully decompose both matrices and apply a complex distributed algorithm, such as the 3D algorithm~\cite{agarwal1995three}. This would be a combined data and ``model parallel'' implementation in ML system parlance. Crucially, the TRA cannot express the distinctions among such implementations. As such, we also propose an \emph{implementation algebra} (IA) that can express these different distributed computations, as well as a simple compilation strategy from the TRA to the IA. Further, we propose an extensive set of re-writes for the IA that allow such disparate implementations to be reached from an initial TRA computation, and a simple cost model. Given this, an ML system back-end would export a simple, TRA-based interface which says nothing about distribution to multiple machines or ASICs. 
A TRA computation can then be compiled into a computation in the IA, and optimized into a high-performance, distributed computation. \section{Tensor Relational Algebra} The TRA operates over \texttt{(key, array)} pairs. Define an \emph{array type} that consists of: \begin{enumerate} \item A \emph{rank} $r \in \mathbb{Z}^{*}$. \item A \emph{bound} $\textbf{b} \in (\mathbb{Z}^{*})^r$. \end{enumerate} \noindent For two vectors $\textbf{u} = \langle u_i \rangle$ and $\textbf{v} = \langle v_i \rangle$, define $\textbf{u} \leq \textbf{v} \equiv \wedge_i \left( u_i \leq v_i \right)$. Define $\textbf{u} < \textbf{v}$ similarly. Informally, we say that an array of rank $r$ is \emph{bounded} by vector \textbf{b} if the array is $r$-dimensional, and for any $r$-dimensional vector \textbf{i} that is less than the bound, $\texttt{array}_{\textbf{i}}$ returns a real number. Formally: \begin{enumerate} \item For any index $\textbf{i} \in (\mathbb{Z}^*)^r$, $\vec{0} \leq \textbf{i} < \textbf{b} \implies \texttt{array}_{\textbf{i}} \in \mathbb{R}$. \item $\neg (\vec{0} \leq \textbf{i} < \textbf{b}) \implies \texttt{array}_{\textbf{i}} = \bot$. That is, for any index $\textbf{i}$ outside of the bound $[\vec{0}, \textbf{b})$, $\texttt{array}_{\textbf{i}}$ is undefined. \end{enumerate} Subsequently, we denote the set of all arrays of rank $r$ and bound $\textbf{b}$ as $T^{(r, \textbf{b})}$. Thus, $T^{(r, \textbf{b})}$ defines an array type. We denote the power set of $(\mathbb{Z}^{*})^k \times T^{(r, \textbf{b})}$ as $R^{(k, r, \textbf{b})}$; this is the set of all possible tensor relations with $k$-dimensional keys, storing arrays of type $T^{(r, \textbf{b})}$. \subsection{Operations in Tensor Relational Algebra} Given this, the TRA is essentially a set of higher-order functions over tensor relations.
That is, each operation takes as input a kernel function defined over multi-dimensional arrays (in practice, this function is likely to be an array-based MKL, CUDA, or Verilog kernel) and returns a function over tensor relations. We begin by giving an overview of the higher-order functions taking binary functions as input: aggregation (denoted using $\Sigma$) and join (denoted using $\Join$). \noindent (1) \textit{Aggregation} is a function: \begin{align} \Sigma : \left( (\mathbb{Z}^{*})^g \times \left(T^{(r, \textbf{b})} \times T^{(r, \textbf{b})} \rightarrow T^{(r, \textbf{b})}\right)\right) \nonumber \\ \rightarrow \left(R^{(k, r, \textbf{b})} \rightarrow R^{(g, r, \textbf{b})} \right) \nonumber \end{align} \noindent$\Sigma _{\texttt{(groupByKeys, aggOp)}}$ takes as input a list of key dimensions to aggregate according to \texttt{groupByKeys} as well as an array kernel operation \texttt{aggOp}, and then returns a function that takes as input a tensor relation, groups the arrays in the relation based upon the indicated key values, and applies \texttt{aggOp} to the arrays in each group. Consider the matrix $\textbf{A}$ from the last section. We can sum up the individual arrays vertically using $$\Sigma_{(\langle 1 \rangle, \texttt{matAdd})}(\texttt{R}_A)$$ \noindent which gives: $$\left\{ \left( \langle 0 \rangle, \begin{bmatrix} 10 & 12 \\ 14 & 16 \end{bmatrix} \right),\left( \langle 1 \rangle, \begin{bmatrix} 18 & 20 \\ 22 & 24 \end{bmatrix}\right)\right\}.$$ Because of the argument $\langle 1 \rangle$, the call $\Sigma _{(\langle 1 \rangle, \texttt{matAdd})}$ constructs an aggregation function that groups all pairs having the same value for the key in position $1$, and sums them.
Or we could sum up the individual arrays into a single array using: $$\Sigma _{(\langle \rangle, \texttt{matAdd})}(\texttt{R}_A)$$ which gives: $$\left\{ \left( \langle \rangle, \begin{bmatrix} 28 & 32 \\ 36 & 40 \end{bmatrix} \right) \right\}.$$ \noindent (2) \textit{Join} is a function: \begin{align} \Join : \bigg(& (\mathbb{Z}^{*})^{g} \times (\mathbb{Z}^{*})^{g} \times \left(T^{(r_l, \textbf{b}_l)} \times T^{(r_r, \textbf{b}_r)} \rightarrow T^{(r_o, \textbf{b}_o)}\right)\bigg) \nonumber \\ &\rightarrow \left(R^{(k_l, r_l, \textbf{b}_l)} \times R^{(k_r, r_r, \textbf{b}_r)} \rightarrow R^{(k_l + k_r - g, r_o, \textbf{b}_o)} \right) \nonumber \end{align} \noindent $\Join _{\texttt{(joinKeysL, joinKeysR, projOp)}}$ takes as input a set of key dimensions to join on from the left and from the right, as well as an operation to run over all \texttt{(leftArray, rightArray)} pairs that are created during the join, and returns a function that performs the join and applies \texttt{projOp} to all pairs. Similar to a natural join in classical database systems, the output key consists of all of the key values from the left input, with the key values from the right input appended, except that the dimensions in \texttt{joinKeysR} are dropped (they would merely duplicate the corresponding \texttt{joinKeysL} values). With join and aggregation we may implement matrix multiply over two matrices stored as tensor relations. Imagine that we want to implement $\textbf{A} \times \textbf{A}$ for the matrix \textbf{A} defined previously, where \textbf{A} is stored as a tensor relation $\texttt{R}_A$.
This can be written as: $$\Sigma _{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)} \left(\Join _{\left(\langle 1 \rangle, \langle 0 \rangle, \texttt{matMul}\right)} \left( \texttt{R}_A, \texttt{R}_A \right) \right)$$ \noindent This computes a matrix multiply of the matrix \textbf{A} because all of the pairs in $\texttt{R}_A$ are first joined on key index $1$ from the first instance of $\texttt{R}_A$ equaling key index $0$ from the second instance of $\texttt{R}_A$. Each pair of arrays is then multiplied using the kernel $\texttt{matMul}$. For example, $$\left( \langle 0, 1 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right) \textrm{ and } \left( \langle 1, 0 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right)$$ \noindent are joined to produce $$\left( \langle 0, 1, 0 \rangle, \begin{bmatrix} 111 & 122 \\ 151 & 166 \end{bmatrix} \right).$$ \noindent The index $\langle 0, 1, 0 \rangle$ in this output pair is a combination of $\langle 0, 1 \rangle$ and $ \langle 1, 0 \rangle$ from the two input pairs, with the redundant index entry dropped (redundant because the entries in positions 1 and 0, respectively, must be equal due to the join). Next, the arrays are aggregated using $\texttt{matAdd}$, summing out index $1$ (keeping indices $\langle 0, 2 \rangle$ as \texttt{groupByKeys}), to complete the matrix multiply. In contrast to join and aggregation, rekey, filter and transform are higher-order functions taking a unary function as input. \noindent (3) \textit{ReKey} allows manipulation of keys: \begin{align} {\textsc{ReKey}}: \left((\mathbb{Z}^{*})^{k_i} \rightarrow (\mathbb{Z}^{*})^{k_o} \right) \rightarrow \left(R^{(k_i, r, \textbf{b})} \rightarrow R^{(k_o, r, \textbf{b})} \right) \nonumber \end{align} \noindent ${\textsc{ReKey}}_{\texttt{(keyFunc)}}$ applies $\texttt{keyFunc}$ to every key in the relation and generates a new key.
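To make the join-then-aggregate matrix multiply above concrete, here is a small Python sketch in which the kernels \texttt{matMul} and \texttt{matAdd} are played by NumPy's \texttt{@} and \texttt{+}. The dictionary encoding of tensor relations is ours and is for illustration only, not the system's implementation.

```python
import numpy as np
from collections import defaultdict

# The matrix A from the text, decomposed into 2x2 blocks keyed by <i, j>.
A = np.array([[ 1,  2,  5,  6],
              [ 3,  4,  7,  8],
              [ 9, 10, 13, 14],
              [11, 12, 15, 16]])
R_A = {(i, j): A[2*i:2*i+2, 2*j:2*j+2] for i in range(2) for j in range(2)}

# Join on key index 1 (left) = key index 0 (right), applying matMul;
# the redundant right key entry is dropped, giving 3-D output keys <i, k, j>.
joined = {(i, k, j): bl @ br
          for (i, k), bl in R_A.items()
          for (k2, j), br in R_A.items() if k == k2}

# Aggregate with matAdd, keeping key indices <0, 2> and summing out index 1.
product = defaultdict(lambda: np.zeros((2, 2), dtype=int))
for (i, k, j), block in joined.items():
    product[(i, j)] += block
```

Reassembling the blocks of \texttt{product} yields the same matrix as computing $\textbf{A} \times \textbf{A}$ directly, and the joined pair with key $\langle 0, 1, 0 \rangle$ is exactly the block shown in the example.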
\noindent (4) \textit{Filter} is a function: \begin{align} \sigma : \Big((\mathbb{Z}^{*})^k \rightarrow \{\texttt{true}, \texttt{false}\} \Big) \nonumber \rightarrow \left(R^{(k, r, \textbf{b})} \rightarrow R^{(k, r, \textbf{b})} \right) \nonumber \end{align} \noindent $\sigma_{\texttt{(boolFunc)}}$ returns a function that accepts a tensor relation and filters each of the tuples in the tensor relation by applying \texttt{boolFunc} to the keys in the tuples. \noindent (5) \textit{Transform} is a function: \begin{align}\lambda : \Big( T^{(r_i, \textbf{b}_i)} \rightarrow T^{(r_o, \textbf{b}_o)} \Big) \rightarrow \left(R^{(k, r_i, \textbf{b}_i)} \rightarrow R^{(k, r_o, \textbf{b}_o)} \right) \nonumber \end{align} \noindent $\lambda _{\texttt{(transformFunc)}}$ returns a function that accepts a tensor relation and applies the kernel function \texttt{transformFunc} to the array in each tuple from the tensor relation. For an example of the rekey, filter and transform operations, assume we have a kernel operation \texttt{diag} that diagonalizes a matrix block, a function \texttt{isEq}$(\langle \texttt{k}_0, \texttt{k}_1 \rangle) \mapsto \texttt{k}_0 = \texttt{k}_1$ that accepts a key and returns true if the entries in positions $0$ and $1$ in the key are identical to one another, and a function \texttt{getKey0}$(\langle \texttt{k}_0, \texttt{k}_1 \rangle) \mapsto \langle \texttt{k}_0 \rangle$ that returns the first dimension of a key.
We can use these functions along with filter, rekey, and transform to diagonalize a matrix \textbf{A} represented as a tensor relation $\texttt{R}_A$, by first examining the keys to remove all pairs that do not contain entries along the diagonal, and then diagonalizing the resulting arrays: $$\lambda_{\left(\texttt{diag}\right)} \left( {\textsc{ReKey}}_{\left(\texttt{getKey0}\right)} \left( \sigma _{\left(\texttt{isEq}\right)} \left( \texttt{R}_A\right)\right)\right).$$ In addition, there are a number of operations that can be used to alter the organization of arrays within a tensor relation. This allows the manipulation of how a tensor is represented as a tensor relation. For this purpose, we have tile and concat: \noindent (6) \textit{Tile}: \begin{align} {\textsc{Tile}} : \left(\mathbb{Z}^{*} \times \mathbb{Z}^{*} \right) \rightarrow \left( R^{(k, r, \textbf{b})} \rightarrow R^{(k+1, r, \textbf{b}')}\right) \nonumber \end{align} ${\textsc{Tile}} _{\texttt{(tileDim, tileSize)}}$ returns a function that decomposes (or tiles) each array along dimension \texttt{tileDim} into arrays of the target \texttt{tileSize} (by applying the \texttt{arrayTileOp} kernel function to the array). As a result, a new key dimension is created that effectively counts which tile each tuple holds along the tiling dimension. For example, consider the matrix \textbf{B}, $$\textbf{B} = \begin{bmatrix} 1 & 2 & 5 & 6 & 9 & 10 & 13 & 14 \\ 3 & 4 & 7 & 8 & 11 & 12 & 15 & 16\\ \end{bmatrix},$$ partitioned by columns and stored in the tensor relation: \begin{align} \texttt{R}_B = \left\{ \left( \langle 0 \rangle, \begin{bmatrix} 1 & 2 & 5 & 6 \\ 3 & 4 & 7 & 8 \end{bmatrix} \right), \left( \langle 1 \rangle, \begin{bmatrix} 9 & 10 & 13 & 14 \\ 11 & 12 & 15 & 16 \end{bmatrix} \right) \right\} \nonumber \end{align} If we make the call ${\textsc{Tile}}_{\left(1, 2\right)} \left( \texttt{R}_B\right)$, we will decompose each array along dimension $1$, creating one new array for every two columns.
In addition, a new key dimension is created that effectively counts which tile the pair holds along the tiling dimension: \begin{align} {\textsc{Tile}}_{\left(1, 2\right)} \left( \texttt{R}_B\right) = \Bigg\{ &\left( \langle 0, 0 \rangle, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \right), \left( \langle 0, 1 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right), \nonumber \\ &\left( \langle 1, 0 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right), \left( \langle 1, 1 \rangle, \begin{bmatrix} 13 & 14 \\ 15 & 16 \end{bmatrix} \right) \Bigg\}. \nonumber \end{align} \noindent We may sometimes find ourselves in a situation where it is necessary to manipulate the key in each pair in a tensor relation so that the key is consistent with the desired interpretation. For example, the tensor relation $\texttt{R}_B$ defined above can represent a matrix with eight columns and two rows, so ${\textsc{Tile}}_{\left(1, 2\right)} \left( \texttt{R}_B\right)$ is inconsistent with this, logically representing a matrix having four columns and four rows. For this purpose, we can leverage the ${\textsc{ReKey}}$ operator defined before. For example, we can rekey the output of ${\textsc{Tile}}_{\left(1, 2\right)} \left( \texttt{R}_B\right)$ so that, logically, it corresponds to a two-by-eight matrix: $${\textsc{ReKey}}_{\left( \langle \texttt{k}_0, \texttt{k}_1\rangle \mapsto \langle 2\texttt{k}_0+\texttt{k}_1 \rangle\right)} \left({\textsc{Tile}}_{\left(1, 2\right)} \left( \texttt{R}_B\right) \right)$$ This will result in: \begin{align}\Bigg\{ &\left( \langle 0 \rangle, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \right), \left( \langle 1 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right), \left( \langle 2 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right), \left( \langle 3 \rangle, \begin{bmatrix} 13 & 14 \\ 15 & 16 \end{bmatrix} \right) \Bigg\} \nonumber \end{align} Finally, we have the ability to undo a tiling.
\noindent (7) \textit{Concat}: \begin{align} {\textsc{Concat}} : \left(\mathbb{Z}^{*} \times \mathbb{Z}^{*} \right) \rightarrow \left( R^{(k, r, \textbf{b})} \rightarrow R^{(k-1, r, \textbf{b'})}\right)\nonumber \end{align} \noindent ${\textsc{Concat}}_{\texttt{(keyDim, arrayDim)}}$ is an inverse to tile, which first groups all pairs in the relation using all of the key dimensions \emph{other than} \texttt{keyDim}, then concatenates all of the arrays in each group along \texttt{arrayDim}, with the concatenation ordering provided by \texttt{keyDim}. A call to ${\textsc{Concat}}_{\left (1, 1\right)} \left({\textsc{Tile}}_{\left(1, 2\right)} \left( \texttt{R}_B\right)\right)$ first groups all pairs in ${\textsc{Tile}} _{\left(1, 2\right)} \left( \texttt{R}_B\right)$ using all of the key dimensions other than key dimension $1$, and then concatenates the arrays in each group along array dimension $1$, with the ordering provided by key dimension $1$. Hence, this computation simply results in the recovery of $\texttt{R}_B.$ \subsection{Integrity Constraints and Closedness} \label{sec:tra_discussion} There are two important integrity constraints that each tensor relation must follow: uniqueness of keys, and a lack of ``holes'' in the tensor relation. The primary reason for defining these constraints is to facilitate easy, cost-based optimization. With such constraints, cardinality estimation, one of the most vexing problems in relational optimization, goes away---see Section \ref{sec:cost}. Further, neither is particularly burdensome when expressing computations using the TRA. In fact, if the interpretation of a tensor relation of type $R^{(k, r, \textbf{b})}$ is that it represents an $r$-dimensional tensor decomposed into chunks, these constraints are quite natural: \begin{itemize} \item \emph{Uniqueness}: every key should be unique in a tensor relation. \item \emph{Continuity}: there are no ``holes''.
Given a tensor relation $\texttt{R}$ of key-arity $k$, define the \emph{frontier} of $\texttt{R}$ as ${\textsc{Front}}(\texttt{R})$. $\textbf{f} = {\textsc{Front}}(\texttt{R})$ is a $k$-dimensional vector that bounds all keys in $\texttt{R}$. That is, for each key vector $\textbf{k}$ in $\texttt{R}$, $\textbf{k} < \textbf{f}$. Further, the frontier is the ``smallest'' vector bounding $\texttt{R}$, in that for any other vector $\textbf{f}'$ bounding $\texttt{R}$, $\textbf{f} \leq \textbf{f}'$. \emph{Continuity} requires that for any vector $\vec{0} \leq \textbf{k} < \textbf{f}$, some tuple in $\texttt{R}$ has the key $\textbf{k}$. \end{itemize} It is easy to show that the majority of TRA operations---the exceptions being the rekey and filter operations---are closed over tensor relations. That is, if the input(s) are tensor relation(s) that obey uniqueness and continuity, then the output must be a tensor relation that obeys these constraints. Filtering a tensor relation or altering the keys can obviously violate the constraints: the former can leave holes in the resulting relation, and the latter can result in repeated key values. Analyzing a TRA expression to automatically detect whether it can violate these constraints is left as future work; we conjecture that if the filtering predicate (or re-keying computation) is limited to simple arithmetic expressions, it may be possible to check for closedness using an SMT solver \cite{de2008z3}. \section{Implementation Algebra} We now describe the TRA's \textit{implementation algebra} (IA), which is suitable for execution in a parallel/distributed environment. In the IA, we extend each \texttt{(key, array)} tuple in a tensor relation with an additional \texttt{site} attribute, so that a \emph{physical tensor relation} $\phys{R}$ will consist of triples: $$\texttt{(key, array, site)}.$$ The \texttt{site} attribute takes a value in $\{1,2,...,s\}$ where $s$ is the number of computation sites.
Conceptually, the \texttt{site} value indicates the location where the tuple is stored; this could be a machine in a distributed cluster, or a compute unit like a GPU. Each physical tensor relation can map a particular $\texttt{(key, array)}$ pair to one or more sites. There are a few especially important mappings, recognized by the predicates $\textsc{All}()$ and $\textsc{Part}_D()$: \begin{enumerate} \item If $\textsc{All}(\phys{R}) = \textrm{ true}$, it indicates that if we project away the \texttt{array} attribute, the resulting set will take the value: $$\{\textbf{k} \textrm{ s.t. } \vec{0} \leq \textbf{k} < \textsc{Front}(\phys{R})\} \times \{1...s\}$$ where $ \textsc{Front}(\phys{R})$ is the frontier of $\phys{R}$ (the frontier of a physical tensor relation is defined as in a ``regular'' tensor relation). In other words, this means that each possible \texttt{(key, array)} tuple in $\phys{R}$ appears at all sites. \item If $\textsc{Part}_D(\phys{R}) = \textrm{ true}$ for some set $D \subseteq \{1...k\}$, it indicates that: (i) for a given \texttt{key} value, there is only one tuple in $\phys{R}$ and (ii) two tuples with the same key values for all dimensions in $D$ must be found on the same site. In other words, $\phys{R}$ is partitioned according to the key dimensions in the set $D$. \end{enumerate} We are ready to describe the IA. Let $\mathcal{R}^{(k, r, \textbf{b}, s)}$ specify the set of all valid physical tensor relations with key-arity of dimension $k$, storing arrays of type $T^{(r, \textbf{b})}$, and partitioned across $s$ sites. The first two operations are concerned with manipulating the assignment of tuples in a physical relation to sites, while the latter four operations operate over the \texttt{key} and \texttt{array} attributes.
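For intuition, the $\textsc{Part}_D$ predicate can be sketched in Python over a toy physical relation. Here a physical relation is encoded as a set of (key, site) pairs with the arrays elided; the encoding and the helper name are ours, for illustration only.

```python
# A toy physical tensor relation: keys <0,0>..<1,1>, arrays elided,
# with key dimension 0 determining the site.
TRIPLES = {((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)}

def part(triples, D):
    """Check Part_D: keys are unique, and any two tuples that agree on
    the key dimensions in D live on the same site."""
    keys = [k for k, _ in triples]
    if len(keys) != len(set(keys)):
        return False
    site_of = {}
    for key, site in triples:
        proj = tuple(key[d] for d in D)
        if site_of.setdefault(proj, site) != site:
            return False
    return True
```

On this relation, `part(TRIPLES, [0])` holds (tuples sharing key dimension 0 are co-located) while `part(TRIPLES, [1])` does not, since keys $\langle 0,0 \rangle$ and $\langle 1,0 \rangle$ agree on dimension 1 but sit on different sites.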
\noindent (1) \textit{Broadcast} is defined as \begin{align} {\textsc{Bcast}} : \mathcal{R}^{(k, r, \textbf{b}, s)} \rightarrow \mathcal{R}^{(k, r, \textbf{b}, s)} \nonumber \end{align} \noindent Given a physical tensor relation, ${\textsc{Bcast}}$ simply ensures that each tuple takes each site value, so that (i) the set of \texttt{(key, array)} pairs is unchanged after ${\textsc{Bcast}}$, but (ii) in any physical relation $\phys{R}$ output from a broadcast, $\textsc{All}(\phys{R}) = \textrm{ true}$. \noindent (2) \textit{Shuffle} is defined as: \begin{align} {\textsc{Shuf}} : 2^{\{1...k\}} \rightarrow \left(\mathcal{R}^{(k, r, \textbf{b}, s)} \rightarrow \mathcal{R}^{(k, r, \textbf{b}, s)}\right)\nonumber \end{align} \noindent ${\textsc{Shuf}}_{(\texttt{partDims})}$ is a function that accepts a set of key dimensions, and returns a function that accepts a physical tensor relation and repartitions it, so that (i) the set of \texttt{(key, array)} pairs is unchanged after ${\textsc{Shuf}}$, but (ii) in any physical relation $\phys{R}$ output from a shuffle, $\textsc{Part}_{\texttt{partDims}}(\phys{R}) = \textrm{ true}$. \noindent (3) \textit{Local join} is an extension of the TRA's join operation: \begin{align} &{\textsc{$\Join^{L}$}} : \bigg( (\mathbb{Z}^{*})^{g} \times (\mathbb{Z}^{*})^{g} \times \left(T^{(r_l, \textbf{b}_l)} \times T^{(r_r, \textbf{b}_r)} \rightarrow T^{(r_o, \textbf{b}_o)}\right)\bigg) \nonumber \\ &\rightarrow \left(\mathcal{R}^{(k_l, r_l, \textbf{b}_l, s)} \times \mathcal{R}^{(k_r, r_r, \textbf{b}_r, s)} \rightarrow \mathcal{R}^{(k_l + k_r - g, r_o, \textbf{b}_o, s)} \right) \nonumber \end{align} \noindent Similar to TRA join ($\Join$), ${\textsc{$\Join^{L}$}}_{\texttt{(joinKeysL, joinKeysR, projOp)}}$ takes as input a set of key dimensions to join on from the left and from the right, as well as a kernel operation to run over all \texttt{(leftArray, rightArray)} pairs that are created during the join.
The key difference is that the local join combines \emph{only} pairs from the left and right inputs that have the \textbf{same} \texttt{site} values. If two tuples successfully join, the corresponding output tuple will have the same \texttt{site} value as those input tuples. \noindent (4) \textit{Local aggregation} is an extension of TRA aggregation: \begin{align} {\textsc{$\Sigma^{L}$}} : \left( (\mathbb{Z}^{*})^g \times \left(T^{(r, \textbf{b})} \times T^{(r, \textbf{b})} \rightarrow T^{(r, \textbf{b})}\right)\right) \nonumber \\ \rightarrow \left(\mathcal{R}^{(k, r, \textbf{b}, s)} \rightarrow \mathcal{R}^{(g, r, \textbf{b}, s)} \right) \nonumber \end{align} \noindent Like TRA aggregation ($\Sigma$), ${\textsc{$\Sigma^{L}$}}_{\texttt{(groupByKeys, aggOp)}}$ takes as input a list of key dimensions to aggregate according to \texttt{groupByKeys} as well as a kernel function \texttt{aggOp}. However, it returns a function that takes as input a physical tensor relation, groups the arrays in the relation based upon the indicated key values \emph{and} the site value, and applies \texttt{aggOp} to the arrays in each group. Each output tuple in the resulting physical tensor relation will take its \texttt{site} value from the \texttt{site} value of the set of input tuples that were aggregated to produce it. \noindent (5) IA has a filter: \begin{align} {\textsc{$\sigma^{L}$}} : \Big((\mathbb{Z}^{*})^k \rightarrow \{\texttt{true}, \texttt{false}\} \Big) \rightarrow \left(\mathcal{R}^{(k, r, \textbf{b}, s)} \rightarrow \mathcal{R}^{(k, r, \textbf{b}, s)} \right) \nonumber \end{align} \noindent The only difference from the TRA filter is that each accepted input tuple's \texttt{site} value is carried through the filter.
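The site-restricted behavior of the local join can be sketched as follows; the list-of-triples encoding and the function are ours, for illustration, with small NumPy matrices standing in for the arrays and NumPy's matmul playing the role of \texttt{projOp}.

```python
import numpy as np

# Physical relations as lists of (key, array, site) triples.
left  = [((0, 0), np.eye(2), 0), ((0, 1), np.eye(2), 1)]
right = [((0, 5), 2 * np.eye(2), 0), ((1, 7), 3 * np.eye(2), 1)]

def local_join(L, R, jl, jr, proj_op):
    """Join only tuples with equal site values; the output tuple inherits
    that site, and joinKeysR dimensions are dropped from the output key."""
    out = []
    for kl, al, sl in L:
        for kr, ar, sr in R:
            if sl == sr and all(kl[a] == kr[b] for a, b in zip(jl, jr)):
                keep = tuple(kr[d] for d in range(len(kr)) if d not in jr)
                out.append((kl + keep, proj_op(al, ar), sl))
    return out

res = local_join(left, right, jl=(1,), jr=(0,), proj_op=np.matmul)
```

Note that the pair with left key $\langle 0, 0 \rangle$ at site 0 joins only the site-0 tuple on the right, even though a cross-site match on keys alone would also exist; this is precisely what distinguishes $\Join^{L}$ from the TRA join.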
\noindent (6) \textit{Map} provides two functionalities, one for keys and one for arrays: \begin{align} {\textsc{$\lambda^{L}$}} : {\Big( \left((\mathbb{Z}^{*})^{k_i} \rightarrow \big((\mathbb{Z}^{*})^{k_o} \big)^m \right) \times \left(T^{(r_i, \textbf{b}_i)} \rightarrow \big( T^{(r_o, \textbf{b}_o)}\big)^m\right) \Big) } \nonumber\\ \rightarrow \left(\mathcal{R}^{(k_i, r_i, \textbf{b}_i, s)} \rightarrow \mathcal{R}^{(k_o, r_o, \textbf{b}_o, s)} \right) \nonumber \end{align} \noindent ${\textsc{$\lambda^{L}$}}_{\texttt{(keyMapFunc,arrayMapFunc)}}$ is a multi-map. It returns a function that applies \texttt{keyMapFunc} to each \texttt{key} value in the input and applies \texttt{arrayMapFunc} to each \texttt{array} value in the input. Both \texttt{keyMapFunc} and \texttt{arrayMapFunc} return $m$ output tuples per input tuple; the \texttt{site} value is simply copied from input to output. We subsequently call $m$ the \emph{arity} of \texttt{keyMapFunc}/\texttt{arrayMapFunc}. In most cases the arity of these functions will be one, but in some cases (such as replication-based matrix multiply, see Section 4.2.2), the arity will be greater. \section{Compilation and Optimization} Distributing tensor-based computations so that they run efficiently requires an optimization framework. We consider three core questions related to actually distributing a computation specified in the TRA: (1) How is that TRA computation compiled into an equivalent statement in the IA? (2) What is a set of equivalence rules that allow computations in the IA to be re-written, so as to produce different implementations that are known to produce the same results, but may run more efficiently? And (3) How can those different, equivalent implementations be costed, so that a search strategy can be used to choose the most efficient one?
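As an illustration of the first question, the broadcast-then-local-join compilation of the TRA join can be sketched on toy data. Physical relations are encoded as lists of (key, site) pairs with the arrays elided; the encoding is ours, for illustration only.

```python
# Two sites; the left relation is partitioned across them, while the
# right relation stays put. Arrays are elided: only keys and sites matter.
sites = (0, 1)
R_l = [((0, 1), 0), ((1, 0), 1)]
R_r = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

def bcast(rel):
    # After Bcast, every (key, array) pair appears at every site.
    return [(k, s) for (k, _) in rel for s in sites]

def local_join_keys(L, R, jl, jr):
    # Keys produced by the local join: same-site matches only,
    # with the joinKeysR dimensions dropped from the right key.
    return {kl + tuple(kr[d] for d in range(len(kr)) if d not in jr)
            for kl, sl in L for kr, sr in R
            if sl == sr and all(kl[a] == kr[b] for a, b in zip(jl, jr))}

# Bcast(R_l) joined locally with R_r yields exactly the keys of the
# (site-oblivious) TRA join of R_l and R_r.
keys = local_join_keys(bcast(R_l), R_r, jl=(1,), jr=(0,))
```

Without the broadcast, the local join on the original partitioning would miss the cross-site matches (e.g., left key $\langle 0, 1 \rangle$ at site 0 against right keys $\langle 1, \cdot \rangle$ at site 1), which is why the compilation rule inserts $\textsc{Bcast}$ before $\Join^{L}$.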
{ \begin{table*}[] \small \centering \caption{Translation from TRA to IA.} \begin{tabular}{|l|l|} \hline \textbf{TRA expression} & \textbf{Corresponding IA}\\\hline $\Sigma _{\texttt{(groupByKeys, aggOp)}}\left(\texttt{R}\right)$ & ${\textsc{$\Sigma^{L}$}}_{\texttt{(groupByKeys, aggOp)}}\left({\textsc{Shuf}}_{\texttt{(groupByKeys)}}\left(\phys{R}\right)\right)$ \\\hline $\Join_{\texttt{(joinKeysL, joinKeysR, projOp)}}\left(\texttt{R}_l, \texttt{R}_r\right)$ & ${\textsc{$\Join^{L}$}}_{\texttt{(joinKeysL, joinKeysR, projOp)}}\left({\textsc{Bcast}}\left(\phys{R}_l\right),\phys{R}_r\right)$ \\\hline ${\textsc{ReKey}}_{\texttt{(keyFunc)}}\left(\texttt{R}\right)$ & ${\textsc{$\lambda^{L}$}}_{\texttt{(keyFunc,idOp)}}\left(\phys{R}\right)$\\\hline $\sigma_{\texttt{(boolFunc)}}\left(\texttt{R}\right)$ & ${\textsc{$\sigma^{L}$}}_{(\texttt{boolFunc})}\left(\phys{R}\right)$\\\hline $\lambda _{\texttt{(transformFunc)}}\left(\texttt{R}\right)$ & ${\textsc{$\lambda^{L}$}}_{\texttt{(idOp,transformFunc)}}\left(\phys{R}\right)$\\\hline ${\textsc{Tile}} _{\texttt{(tileDim, tileSize)}}\left(\texttt{R}\right)$ & ${\textsc{$\lambda^{L}$}}_{\texttt{(keyTileOp(tileDim, tileSize), arrayTileOp(tileDim, tileSize))}}\left(\phys{R}\right)$ \\\hline ${\textsc{Concat}}_{\texttt{(keyDim, arrayDim)}}\left(\texttt{R}\right)$ & ${\textsc{$\Sigma^{L}$}}_{(\langle \texttt{keyDim}\rangle^c, \texttt{arrayConcatOp})}\left({\textsc{Shuf}}_{\langle \texttt{keyDim}\rangle^c}\left(\phys{R}\right)\right)$ \\\hline \end{tabular} \label{tab:compile} \end{table*} } \subsection{Compiling the TRA} A complete set of rules mapping from TRA operations to IA operations are listed in Table \ref{tab:compile}\footnote{In Table \ref{tab:compile}, tensor relations \texttt{R}, $\texttt{R}_l$ and $\texttt{R}_r$ are stored as the corresponding physical tensor relations $\phys{R}$, $\phys{R}_l$ and $\phys{R}_r$; \texttt{idOp} represents an identity map for key or array; \texttt{arrayTileOp} is a kernel function splitting the 
array into chunks of the indicated size on the indicated dimension; $\texttt{arrayConcatOp}$ reverses this. \texttt{keyTileOp} is similar to \texttt{arrayTileOp}, but operates on keys; $\langle \texttt{keyDim}\rangle^c$ represents the complement set of $\langle \texttt{keyDim}\rangle$.}. Note that though there can be multiple IA computations for a given TRA computation, the compiler will generate one such IA computation as the initial plan, and an optimizer will typically be responsible for applying a search algorithm to produce an optimized physical plan in the IA. \subsection{Equivalence Rules} Once a (possibly inefficient) computation in IA is produced, it can be optimized via the application of a set of equivalence rules. An \emph{equivalence rule} for IA expressions holds if, for any input physical tensor relations, the two expressions produce equivalent outputs---two physical tensor relations are said to be equivalent if they contain the same set of \texttt{(key, array)} pairs after projecting away the \texttt{site} attribute. There are two classes of equivalence rules we consider: \emph{simple equivalence rules}, which are often extensions of classic relational equivalence rules (e.g., the commutativity of selections), and \emph{domain-specific equivalence rules}, which are more complex transformations that always hold, and tend to be useful for mathematical computations such as matrix multiplications. \subsubsection{Simple Equivalence Rules} In Table \ref{tab:rules}, we give an extensive list of two types of simple equivalence rules: (i) rules based on kernel function composition, and (ii) rules based on optimization of re-partitions. Kernel function composition targets the order or location of the application of kernel functions in order to reduce computation load and memory consumption.
This is closely related to the idea of ML operator fusion, which has been explored in systems such as TVM \cite{chen2018tvm} (though TVM does not consider the sort of distributed computations considered here). Re-partition rules formalize notions of distributed query optimization over tensor relations, and are primarily designed to reduce communication. { \begin{table*}[] \small \centering \caption{Simple equivalence rules for kernel function composition and re-partition enumeration.} \begin{tabular}{|l|} \hline \textbf{Kernel function composition based rules}:\\\hline \textbf{R1-1.} Filter operations can be merged. \\ For a physical relation $\phys{R}$: \\ ${\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc1}\right)}\left({\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc2}\right)} \left(\phys{R}\right)\right) \equiv {\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc1}\wedge\texttt{boolFunc2}\right)} \left(\phys{R}\right).$ \\\hline \textbf{R1-2.} Map operations can be merged, if the output arity of the key and array mapping functions is one. \\ For a physical relation $\phys{R}$: \\ ${\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc1, arrayMapFunc1}\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc2, arrayMapFunc2}\right)} \left(\phys{R}\right)\right) \equiv {\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc1}\circ\texttt{keyMapFunc2}, \texttt{arrayMapFunc1}\circ\texttt{arrayMapFunc2}\right)} \left(\phys{R}\right)$. \\\hline \textbf{R1-3.} Map and filter are commutative if \texttt{keyMapFunc} is an identity function (\texttt{idOp}).
\\ For a physical relation $\phys{R} \in \mathcal{R}^{(k, r, \textbf{b}, s)}$, if $\forall \ \texttt{key} \in \left(\mathbb{Z}^{*}\right)^k$, $\texttt{keyMapFunc}(\texttt{key})=\texttt{key}$: \\ ${\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc, arrayMapFunc}\right)}\left({\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)} \left(\phys{R}\right)\right) \equiv {\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc, arrayMapFunc}\right)}\left(\phys{R}\right)\right)$. \\\hline \textbf{R1-4.} The \texttt{arrayMapFunc} in map can be composed with local aggregation if \texttt{keyMapFunc} is an identity function (\texttt{idOp}): \\ For a physical relation $\phys{R} \in \mathcal{R}^{(k, r, \textbf{b}, s)}$, if $\forall \ \texttt{key} \in \left(\mathbb{Z}^{*}\right)^k$, $\texttt{keyMapFunc}(\texttt{key})=\texttt{key}$:\\ ${\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc, arrayMapFunc}\right)}\left({\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)}\left(\phys{R}\right)\right) \equiv {\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, arrayMapFunc} \circ \texttt{aggOp}\right)}\left(\phys{R}\right)$ \\ And if the kernel functions \texttt{arrayMapFunc} and \texttt{aggOp} satisfy the distributive property that $\forall \ \texttt{array}_1,\texttt{array}_2 \in T^{(r, \textbf{b})}$, \\ $\texttt{arrayMapFunc} \left( \texttt{aggOp}\left(\texttt{array}_1,\texttt{array}_2\right)\right)=\texttt{aggOp} \left( \texttt{arrayMapFunc}\left(\texttt{array}_1\right),\texttt{arrayMapFunc}\left(\texttt{array}_2\right)\right)$: \\ ${\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc, arrayMapFunc}\right)}\left({\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)}\left(\phys{R}\right)\right) \equiv {\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp} \circ \texttt{arrayMapFunc}\right)}\left(\phys{R}\right)$ \\\hline \textbf{R1-5.} The \texttt{boolFunc} in filter can be composed with local aggregation if the kernel
function only depends on \texttt{groupByKeys}.\\ For a physical relation $\phys{R} \in \mathcal{R}^{(k, r, \textbf{b}, s)}$, $\forall \ \texttt{key}_1,\texttt{key}_2 \in \left(\mathbb{Z}^{*}\right)^k$, if \\ $\texttt{boolFunc}\left(\Pi_{\texttt{groupByKeys}}\left(\texttt{key}_1\right)\right)=\texttt{boolFunc}\left(\Pi_{\texttt{groupByKeys}}\left(\texttt{key}_2\right)\right)\Rightarrow \texttt{boolFunc}\left(\texttt{key}_1\right)=\texttt{boolFunc}\left(\texttt{key}_2\right)$:\\ ${\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left({\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)}\left(\phys{R}\right)\right) \equiv {\textsc{$\Sigma^{L}$}}_{\left(\texttt{boolFunc(groupByKeys), aggOp}\right)}\left(\phys{R}\right)$ \\\hline \textbf{R1-6.} The local filter can be pushed down through local join if \texttt{boolFunc} only checks the joined keys. \\ For physical relations $\phys{R}_l$ and $\phys{R}_r$:\\ ${\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left(\phys{R}_l,\phys{R}_r\right)\right) \equiv {\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left({\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left(\phys{R}_l\right),{\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left(\phys{R}_r\right)\right)$.\\ \hline \textbf{R1-7.} The kernel function in local map can be composed with local join, if the output arity of the key and array mapping functions is one.\\ For physical relations $\phys{R}_l$ and $\phys{R}_r$:\\ ${\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc, arrayMapFunc}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left(\phys{R}_l,\phys{R}_r\right)\right) \equiv {\textsc{$\Join^{L}$}}_{\left(\texttt{keyMapFunc(joinKeysL), keyMapFunc(joinKeysR), arrayMapFunc} \circ \texttt{projOp}\right)}\left(\phys{R}_l,\phys{R}_r\right)$.\\ And if the kernel
functions \texttt{arrayMapFunc} and \texttt{projOp} are distributive; that is, $\forall \ \texttt{array}_1,\texttt{array}_2 \in T^{(r, \textbf{b})}$, \\ $\texttt{arrayMapFunc} \left( \texttt{projOp}\left(\texttt{array}_1,\texttt{array}_2\right)\right)=\texttt{projOp} \left( \texttt{arrayMapFunc}\left(\texttt{array}_1\right),\texttt{arrayMapFunc}\left(\texttt{array}_2\right)\right)$: \\ ${\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc, arrayMapFunc}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left(\phys{R}_l,\phys{R}_r\right)\right) \equiv {\textsc{$\Join^{L}$}}_{\left(\texttt{keyMapFunc(joinKeysL), keyMapFunc(joinKeysR), projOp} \circ \texttt{arrayMapFunc}\right)}\left(\phys{R}_l,\phys{R}_r\right)$.\\ \hline \textbf{Re-partition based rules}: \\\hline \textbf{R2-1.} Only the final broadcast/shuffle in a sequence of broadcast/shuffle operations is needed.\\ For a physical relation $\phys{R}$: \\ ${\textsc{Bcast}}\left({\textsc{Bcast}}\left(...{\textsc{Bcast}}\left(\phys{R}\right)\right)\right) \equiv {\textsc{Bcast}}\left(\phys{R}\right)$\\ ${\textsc{Shuf}}_{\left(\texttt{partDims}_n\right)}\left(...{\textsc{Shuf}}_{\left(\texttt{partDims}_2\right)}\left({\textsc{Shuf}}_{\left(\texttt{partDims}_1\right)}\left(\phys{R}\right)\right)\right) \equiv {\textsc{Shuf}}_{\left(\texttt{partDims}_n\right)}\left(\phys{R}\right)$.\\\hline \textbf{R2-2.} The re-partition operations are commutative with the local filter operation.\\ For a physical relation $\phys{R}$: \\ ${\textsc{Bcast}}\left({\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left(\phys{R}\right) \right) \equiv {\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left({\textsc{Bcast}}\left(\phys{R}\right)\right)$; \\ ${\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left({\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left(\phys{R}\right) \right) \equiv
{\textsc{$\sigma^{L}$}}_{\left(\texttt{boolFunc}\right)}\left({\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left(\phys{R}\right)\right)$. \\\hline \textbf{R2-3.} The re-partition operations are commutative with the local map operation. \\ For a physical relation $\phys{R}$: \\ ${\textsc{Bcast}}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc,arrayMapFunc}\right)}\left(\phys{R}\right) \right) \equiv {\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc,arrayMapFunc}\right)}\left({\textsc{Bcast}}\left(\phys{R}\right)\right)$;\\ And, if \texttt{keyMapFunc} is the identity function:\\ ${\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc,arrayMapFunc}\right)}\left(\phys{R}\right) \right) \equiv {\textsc{$\lambda^{L}$}}_{\left(\texttt{keyMapFunc,arrayMapFunc}\right)}\left({\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left(\phys{R}\right)\right)$.\\\hline \textbf{R2-4.} A shuffle can be avoided if the physical relation is already partitioned by a local aggregation's \texttt{groupByKeys}. \\ For a physical relation $\phys{R}$, if $\texttt{partDims} \subseteq \texttt{groupByKeys}$: \\ ${\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)}\left( {\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left(\phys{R}\right)\right) \equiv {\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)}\left(\phys{R}\right)$ \\ \hline \textbf{R2-5.} An aggregation can be split into two phases if the physical relation is only partially partitioned.
\\ For a physical relation $\phys{R}$, if $\texttt{groupByKeys} \subset \texttt{partDims}$:\\ ${\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)} \left( {\textsc{Shuf}} _{\left(\texttt{partDims}\right)}\left(\phys{R}\right)\right) \equiv {\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)}\left({\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left({\textsc{$\Sigma^{L}$}}_{\left(\texttt{groupByKeys, aggOp}\right)} \left(\phys{R}\right)\right)\right)$. \\ \hline \textbf{R2-6.} A join $\Join$ defined by the TRA can be implemented in the following equivalent ways.\\ For physical relations $\phys{R}_l$ and $\phys{R}_r$:\\ ${\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left({\textsc{Bcast}}\left(\phys{R}_l \right),\phys{R}_r\right)$ $\equiv {\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left(\phys{R}_l,{\textsc{Bcast}}\left(\phys{R}_r\right)\right)$ \\ $\equiv {\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left({\textsc{Shuf}}_{\left(\texttt{joinKeysL}\right)}\left(\phys{R}_l \right),{\textsc{Shuf}}_{\left(\texttt{joinKeysR}\right)}\left(\phys{R}_r\right)\right)$.\\ \hline \textbf{R2-7.} A shuffle after a local join can be elided if the join output is already partitioned on the shuffle dimensions.\\ For physical relations $\phys{R}_l \in \mathcal{R}^{(k_l, r_l, \textbf{b}_l, s)}$ and $\phys{R}_r \in \mathcal{R}^{(k_r, r_r, \textbf{b}_r, s)}$, if $\texttt{partDims} \subseteq \texttt{joinKeysL}$:\\ ${\textsc{Shuf}}_{\left(\texttt{partDims}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left({\textsc{Shuf}}_{\left(\texttt{joinKeysL}\right)}\left(\phys{R}_l\right), {\textsc{Shuf}}_{\left(\texttt{joinKeysR}\right)}\left(\phys{R}_r\right) \right) \right) \equiv$ \\ ${\textsc{$\Join^{L}$}}_{\left(\texttt{joinKeysL, joinKeysR, projOp}\right)}\left({\textsc{Shuf}}_{\left(\texttt{joinKeysL}\right)}\left(\phys{R}_l\right),
{\textsc{Shuf}}_{\left(\texttt{joinKeysR}\right)}\left(\phys{R}_r\right)\right)$\\\hline \end{tabular} \label{tab:rules} \end{table*} } Such rules are surprisingly effective for optimizing distributed tensor manipulations. Consider the example of extracting the diagonal elements of matrix $\textbf{X}$ plus matrix $\textbf{Y}$: $\texttt{diag}(\textbf{X}+\textbf{Y})$, where matrices $\textbf{X}$ and $\textbf{Y}$ are stored in physical tensor relations $\phys{R}_X$ and $\phys{R}_Y$. This computation can be represented by the following TRA expression, where $\texttt{isEq}(\langle \texttt{k}_0, \texttt{k}_1 \rangle) \mapsto \texttt{k}_0 = \texttt{k}_1$, $\texttt{merge}(\langle \texttt{k}_0, \texttt{k}_1 \rangle) \mapsto \langle \texttt{k}_0 \rangle$, \texttt{matAdd} is an element-wise sum of two arrays, and \texttt{diag} extracts the diagonal of the array. Then $\texttt{diag}(\textbf{X}+\textbf{Y})$ can be written as: \begin{footnotesize} $$\lambda_{\left(\texttt{diag}\right)} \left({\textsc{ReKey}}_{(\texttt{merge})}\left( \sigma_{\left(\texttt{isEq}\right)} \left(\Join _{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{matAdd}\right)} \left(\texttt{R}_{X}, \texttt{R}_{Y} \right) \right)\right)\right).$$ \end{footnotesize} This TRA expression can be translated to the IA expression: \begin{footnotesize} $${\textsc{$\lambda^{L}$}}_{\left(\texttt{idOp,diag}\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{merge,idOp}\right)}\left({\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left({\textsc{$\Join^{L}$}}_{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{matAdd}\right)}\left({\textsc{Bcast}}\left(\phys{R}_{X}\right), \phys{R}_{Y}\right)\right)\right)\right).$$ \end{footnotesize} We can apply the following equivalence rules for the above IA expression: \begin{footnotesize} \begin{align*} &{\textsc{$\lambda^{L}$}}_{\left(\texttt{idOp,diag}\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{merge,idOp}\right)}\left({\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)}
\left({\textsc{$\Join^{L}$}}_{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{matAdd}\right)}\left({\textsc{Bcast}}\left(\phys{R}_{X}\right), \phys{R}_{Y}\right)\right)\right)\right)\\ &\overset{R1-2}{\equiv}\: {\textsc{$\lambda^{L}$}}_{\left(\texttt{merge,diag}\right)}\left({\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left({\textsc{$\Join^{L}$}}_{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{matAdd}\right)}\left({\textsc{Bcast}}\left(\phys{R}_{X}\right), \phys{R}_{Y}\right)\right)\right) \\ &\overset{R1-6}{\equiv}\: {\textsc{$\lambda^{L}$}}_{\left(\texttt{merge,diag}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{matAdd}\right)}\left({\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left({\textsc{Bcast}}\left(\phys{R}_{X}\right)\right), {\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left(\phys{R}_{Y}\right)\right)\right) \\ &\overset{R2-2}{\equiv}\: {\textsc{$\lambda^{L}$}}_{\left(\texttt{merge,diag}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{matAdd}\right)}\left({\textsc{Bcast}}\left({\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left(\phys{R}_{X}\right)\right), {\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left(\phys{R}_{Y}\right)\right)\right) \\ &\overset{R1-7}{\equiv}\: {\textsc{$\Join^{L}$}}_{\left(\texttt{merge}\left(\langle0,1\rangle\right),\texttt{merge}\left(\langle0,1\rangle\right), \texttt{matAdd}\circ\texttt{diag} \right)}\left({\textsc{Bcast}}\left({\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left(\phys{R}_{X}\right)\right), {\textsc{$\sigma^{L}$}} _{\left(\texttt{isEq}\right)} \left(\phys{R}_{Y}\right)\right). 
\end{align*} \end{footnotesize} This transformation significantly reduces both communication overhead and computation load. Applying R1-6 pushes the \texttt{isEq} filters below the join; this not only reduces the number of input tuple pairs on which the join must execute the \texttt{matAdd} function, but also enables a further reduction in communication overhead when the filter operation is commuted with the broadcast operation by R2-2. Lastly, R1-7 exploits the property that the kernel functions \texttt{diag} and \texttt{matAdd} are distributive; as a result, after kernel function composition, the addition is applied only to the diagonal elements of the paired blocks. \subsubsection{Domain-Specific Equivalence Rules} \label{sec:dom_rule} Such rules encode specific knowledge from parallel and distributed computing algorithms. Adding such rules to a system allows IA to have at its disposal common implementation strategies that it can choose from in a cost-based manner. We do not attempt to produce an exhaustive list of such rules, but rather consider one example in detail: distributed matrix multiplication over tensor relations $\texttt{R}_X$ and $\texttt{R}_Y$: \begin{footnotesize} $$\Sigma _{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)} \left(\Join _{\left(\langle 1 \rangle, \langle 0 \rangle, \texttt{matMul}\right)} \left(\texttt{R}_X, \texttt{R}_Y \right) \right).$$ \end{footnotesize}% For physical tensor relations $\phys{R}_X$ and $\phys{R}_Y$, using the rules of Table \ref{tab:rules}, this would be compiled into: \begin{footnotesize} $${\textsc{$\Sigma^{L}$}}_{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)}\left({\textsc{Shuf}}_{\left(\langle0,2\rangle\right)}\left( {\textsc{$\Join^{L}$}}_{\left(\langle1\rangle,\langle0\rangle,\texttt{matMul}\right)}\left({\textsc{Bcast}}\left(\phys{R}_X\right), \phys{R}_Y\right)\right)\right).$$ \end{footnotesize}% This is a simple, broadcast-based matrix multiply.
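Returning briefly to the $\texttt{diag}(\textbf{X}+\textbf{Y})$ rewrite chain above, its payoff can be checked concretely. The following is a minimal single-process sketch (our own illustration, not the system's implementation), which models a tensor relation as a Python dict from integer key tuples to NumPy blocks and compares the unoptimized plan (join every matching block pair, then filter) against the optimized one (filter first):

```python
import numpy as np

BS = 2  # block size; X and Y are 4 x 4, stored as 2 x 2 grids of blocks
X = np.arange(16.0).reshape(4, 4)
Y = np.ones((4, 4))

def blocks(M):
    # Model a tensor relation as a dict: key <i, j> -> BS x BS block.
    n = M.shape[0] // BS
    return {(i, j): M[i*BS:(i+1)*BS, j*BS:(j+1)*BS]
            for i in range(n) for j in range(n)}

R_X, R_Y = blocks(X), blocks(Y)
adds = [0]  # counts matAdd kernel invocations

def mat_add(a, b):
    adds[0] += 1
    return a + b

def plan(push_filter_down):
    lhs, rhs = R_X, R_Y
    if push_filter_down:  # R1-6 / R2-2: filter the blocks before the join
        lhs = {k: v for k, v in lhs.items() if k[0] == k[1]}
        rhs = {k: v for k, v in rhs.items() if k[0] == k[1]}
    # Join on the full key <0, 1> with matAdd as the kernel.
    joined = {k: mat_add(lhs[k], rhs[k]) for k in lhs if k in rhs}
    # isEq keeps diagonal blocks; merge rekeys <k0, k0> -> <k0>;
    # diag extracts each surviving block's diagonal.
    kept = {k: v for k, v in joined.items() if k[0] == k[1]}
    return {(k[0],): np.diag(v) for k, v in kept.items()}

adds[0] = 0; slow = plan(False); n_slow = adds[0]
adds[0] = 0; fast = plan(True); n_fast = adds[0]
```

Both plans produce the same output relation, but the optimized one invokes \texttt{matAdd} only on the two diagonal block pairs instead of all four.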
Applying simple equivalence rules brings us to cross-product-based matrix multiplication, which partitions $\phys{R}_X$ on columns, and $\phys{R}_Y$ on rows. The IA program is: \begin{footnotesize} \begin{align*} {\textsc{$\Sigma^{L}$}}_{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)}\left({\textsc{Shuf}}_{\left(\langle0,2\rangle\right)}\left( {\textsc{$\Join^{L}$}}_{\left(\langle1\rangle,\langle0\rangle,\texttt{matMul}\right)}\left({\textsc{Shuf}}_{\left(\langle1\rangle\right)}\left(\phys{R}_X\right), {\textsc{Shuf}}_{\left(\langle0\rangle\right)}\left(\phys{R}_Y\right)\right)\right)\right). \end{align*} \end{footnotesize}% However, more complicated schemes are possible, which are expressible in IA, but not derivable using the simple equivalence rules. For example, replication-based matrix multiplication can be viewed as a relational version of the 3D parallel matrix multiplication \cite{agarwal1995three}. The algorithm first replicates the blocks of matrices \textbf{X} and \textbf{Y} multiple times, viewing the result as a 3-D array, and shuffles them using the index of the corresponding voxel as a key; then each site joins the tuples with the same keys and performs local multiplications, aggregating to obtain the final results.
If \texttt{xDups} is defined as ${\textsc{Front}}(\phys{R}_Y)[1]$ and \texttt{yDups} is ${\textsc{Front}}(\phys{R}_X)[0]$, the shuffle stage can be implemented in IA as: \begin{footnotesize} \begin{align*} \phys{R}_X^{*} = {\textsc{Shuf}}_{\left(\langle0,2\rangle\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{insertDim}(2,\texttt{xDups}),\texttt{duplicate}(\texttt{xDups})\right)}\left(\phys{R}_X\right)\right)\\ \phys{R}_Y^{*} = {\textsc{Shuf}}_{\left(\langle0,2\rangle\right)}\left({\textsc{$\lambda^{L}$}}_{\left(\texttt{insertDim}(0,\texttt{yDups}),\texttt{duplicate}(\texttt{yDups})\right)}\left(\phys{R}_Y\right)\right) \end{align*} \end{footnotesize}% \noindent where kernel functions \texttt{insertDim} and \texttt{duplicate} add a new dimension, and duplicate each existing array the specified number of times. For example, applying ${\textsc{$\lambda^{L}$}}_{\left(\texttt{insertDim}(2,\texttt{xDups}),\texttt{duplicate}(\texttt{xDups})\right)}$ to the tensor relation \begin{align}\Bigg\{ &\left( \langle 0,0 \rangle, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \right), \left( \langle 0,1 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right), \nonumber \\ &\left( \langle 1,0 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right), \left( \langle 1,1 \rangle, \begin{bmatrix} 13 & 14 \\ 15 & 16 \end{bmatrix} \right) \Bigg\} \nonumber \end{align} will produce: \begin{align}\Bigg\{ &\left( \langle 0,0,0 \rangle, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \right), \left( \langle 0,0,1 \rangle, \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \right), \nonumber \\ &\left( \langle 0,1,0 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right), \left( \langle 0,1,1 \rangle, \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \right), \nonumber \\ &\left( \langle 1,0,0 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right), \left( \langle 1,0,1 \rangle, \begin{bmatrix} 9 & 10 \\ 11 & 12 \end{bmatrix} \right), \nonumber \\ &\left( \langle 1,1,0 
\rangle, \begin{bmatrix} 13 & 14 \\ 15 & 16 \end{bmatrix} \right), \left( \langle 1,1,1 \rangle, \begin{bmatrix} 13 & 14 \\ 15 & 16 \end{bmatrix} \right) \Bigg\} \nonumber \end{align} \noindent Next we execute: \begin{footnotesize} $${\textsc{$\Sigma^{L}$}}_{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)}\left({\textsc{$\Join^{L}$}}_{\left(\langle0,1,2\rangle,\langle0,1,2\rangle,\texttt{matMul}\right)}\left( \phys{R}_X^{*}, \phys{R}_Y^{*} \right)\right).$$ \end{footnotesize}% The equivalence of these three implementations is an example of a set of domain-specific equivalence rules. \subsection{Cost Model} \label{sec:cost} One of the beneficial aspects of the TRA is that cost-based optimization is much easier than that for classical relational algebra: if the uniqueness and continuity constraints hold, tuple counts need not be estimated and can be computed exactly. In the simple cost model presented here, we use the number of floating point values that must be transferred between sites as the cost metric. There are two reasons for this decision. First, the number of floating point operations in a distributed ML computation is fixed. For example, all of the classical distributed matrix multiply algorithms---2.5D \cite{solomonik2011communication}, SUMMA \cite{van1997summa}, etc.---have the same floating point cost. While this is not a hard and fast rule---it is possible to push the filtering of tuples in a tensor relation past the application of a kernel function, which would change the number of floating-point operations---in many applications, network transfer is the dominant cost, and is a reasonable cost metric. Second, skew, which could slow down a computation with low network cost, is generally not an issue in a TRA computation, unlike in classical relational database systems. The TRA continuity and uniqueness constraints imply that joins and aggregations cannot encounter skew. Consider a join.
For two tuples $t_1$ and $t_2$ in tensor relation \texttt{R}, when that relation is joined with another relation \texttt{S}, the number of tuples from \texttt{S} that join with $t_1$ and $t_2$ must be the same. This, along with the fact that the TRA requires all arrays in a tensor relation to be of the same type, implies that skew will be very rare. In a TRA implementation, the only likely source of delay that blocks the entire computation on one machine is a machine that is simply slower to perform its computations than the others in the cluster. Such slowness may be due to hardware heterogeneity---a challenging issue for future work---or to unpredictable reasons, such as other workloads running on the machine, which are eventualities that cannot be planned for and must be handled by the runtime. To compute the network transfer cost for a plan in IA, we need to be able to compute the frontier of each physical relation $\phys{R}$: $\textbf{f} = \textsc{Front}(\phys{R})$. The reason is that, assuming that uniqueness and continuity constraints hold, we can compute the number of floating point numbers in $\phys{R}$ using $\textbf{f}$. If $\phys{R}$ is of type $\mathcal{R}^{(k, r, \textbf{b}, s)}$, and $\textbf{f} = \textsc{Front}(\phys{R})$, then the number of tuples in the corresponding tensor relation is $n = \prod_i \textbf{f}_i$, and the number of floating point numbers in the tensor relation is $n \times \prod_i \textbf{b}_i$. Once the frontier is known, it is used to compute the transfer cost for each ${\textsc{Bcast}}$ and ${\textsc{Shuf}}$ operation. The cost to broadcast a tensor relation of type $\mathcal{R}^{(k, r, \textbf{b}, s)}$ and having $f$ floating point numbers is simply $f \times s$. The cost to shuffle a tensor relation of $f$ floating point numbers is simply $f$. Thus, the task of costing a physical TRA plan reduces to computing the type of each intermediate relation, as well as its frontier.
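This counting argument translates directly into code. A small sketch (the function and parameter names are ours, assuming frontiers and block shapes are plain Python tuples):

```python
from math import prod

def floats_in(frontier, block_shape):
    # Under the uniqueness and continuity constraints, the tuple count is
    # exactly prod(frontier); each tuple carries prod(block_shape) floats.
    return prod(frontier) * prod(block_shape)

def bcast_cost(frontier, block_shape, num_sites):
    # Broadcasting sends every float to each of the s sites.
    return floats_in(frontier, block_shape) * num_sites

def shuf_cost(frontier, block_shape):
    # A shuffle moves each float at most once; we charge for all of them.
    return floats_in(frontier, block_shape)
```

For example, a relation with frontier $\langle 2, 2 \rangle$ of $1000 \times 1000$ blocks holds four million floats, so broadcasting it to five sites costs twenty million, while shuffling it costs four million.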
Computing the type is relatively easy: we work up from the leaves to the end result(s) of a physical plan, using the type signature of each of the physical operations (Section 4) to infer the type of each intermediate physical relation. Computing the frontier in this way, working up from leaves to outputs, is also possible, but it requires a bit more thought. We now consider how the frontier of an output is computed for each of the various operations in the physical algebra: \begin{enumerate} \item ${\textsc{$\Join^{L}$}}_{\texttt{(joinKeysL, joinKeysR, projOp)}} \left(\phys{R}_l, \phys{R}_r\right)$. For local join, assuming that $\phys{R}_l$ and $\phys{R}_r$ have an appropriate partitioning to sites, let $\textbf{f}_l$ and $\textbf{f}_r$ be the left and right input frontiers of dimensionality $k_l$ and $k_r$, respectively. Then the output frontier $\textbf{f}$ is computed as follows. For $k < k_l$ and $k$ not in \texttt{joinKeysL}, $\textbf{f}[k] = \textbf{f}_l[k]$, as the frontier value for that dimension is inherited from the left. For $k < k_l$ and where $k =$ \texttt{joinKeysL}$[i]$, $\textbf{f}[k] = \min(\textbf{f}_l[k], \textbf{f}_r[i])$, as the frontier value for that dimension results from the join of the two relations. And finally, for all other $k$, $\textbf{f}[k]$ is inherited from the corresponding dimension in the right frontier. \item ${\textsc{$\Sigma^{L}$}}_{\texttt{(groupByKeys, aggOp)}} (\phys{R})$. For local aggregation, assuming an appropriate partitioning, let $\textbf{f}_i$ denote the input frontier, and let $n$ be the number of key dimensions in \texttt{groupByKeys}. In this case, for $k < n$, $\textbf{f}[k] = \textbf{f}_i[\texttt{groupByKeys}[k]]$. \item ${\textsc{$\sigma^{L}$}}_{\texttt{(boolFunc)}}(\phys{R})$. This performs a filter on the physical tensor relation $\phys{R}$.
For an $n$-dimensional input frontier $\textbf{f}_i$, for any $k < n$, by definition: $$\qquad \quad \textbf{f}[k] = 1 + \textrm{max}\left\{\textbf{k}[k] \textrm{ s. t. } \textbf{k} < \textbf{f}_i \textrm{ and } \texttt{boolFunc}(\textbf{k}) = \textrm{ true}\right\}.$$ That is, the $k$-th dimension in the frontier is inherited from the largest key value in that dimension accepted by \texttt{boolFunc}. In many cases, especially if \texttt{boolFunc} consists of simple arithmetic expressions and comparisons, symbolic methods can be used to compute this. But in practice, it may simply be easier to use a brute-force approach, where each key value is fed into \texttt{boolFunc} to compute the required maximum. Since the size of a tensor relation is typically small---tens of thousands of tuples would be very large---this is a very practical approach. \item ${\textsc{$\lambda^{L}$}}_{\texttt{(keyMapFunc,arrayMapFunc)}}(\phys{R})$. Similarly, for an $n$-dimensional input frontier $\textbf{f}_i$, for any $k < n$, by definition: $$\qquad \quad \textbf{f}[k]=1 + \textrm{max}\left\{\texttt{keyMapFunc}(\textbf{k})[k] \textrm{ s. t. } \textbf{k} < \textbf{f}_i\right\}.$$ Again, a brute-force approach is appropriate for computing the frontier in this case.
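A sketch of this brute-force approach, covering both the filter and map cases (our own illustration, assuming keys are plain Python tuples):

```python
from itertools import product

def filter_frontier(f_in, bool_func):
    # Feed every key below the input frontier into boolFunc; per dimension,
    # the output frontier is one plus the largest accepted key value.
    kept = [k for k in product(*(range(f) for f in f_in)) if bool_func(k)]
    if not kept:
        return (0,) * len(f_in)
    return tuple(1 + max(k[d] for k in kept) for d in range(len(f_in)))

def map_frontier(f_in, key_map_func):
    # Same idea, but the maximum is taken over the images of keyMapFunc.
    mapped = [key_map_func(k) for k in product(*(range(f) for f in f_in))]
    return tuple(1 + max(k[d] for k in mapped) for d in range(len(mapped[0])))
```

For instance, filtering a $\langle 4, 4 \rangle$ frontier with \texttt{isEq} leaves the frontier at $\langle 4, 4 \rangle$ (key $\langle 3, 3 \rangle$ survives), while mapping it with \texttt{merge} yields $\langle 4 \rangle$.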
\end{enumerate} \section{Evaluation} { \begin{table*}[t] \small \begin{center} \caption{Distributed matrix multiply runtimes.} \begin{tabular}{|c||c|c|c||c|c|c||c|c|c|} \hline & \multicolumn{3}{|c||}{\texttt{Two General Matrices}} & \multicolumn{3}{|c||}{\texttt{A Common Large Dim}} & \multicolumn{3}{|c|}{\texttt{Two Large Dims}} \\ \hline Cluster Size & 5 & 10 & 15 & 5 & 10 & 15 & 5 & 10 & 15 \\ \hline BMM & 61.18s & 46.48s & 38.54s & 106.24s & 104.67s & 101.63s & 57.23s & 37.60s & 31.64s \\ CMM & 63.14s & 40.08s & 29.38s & 51.52s & 30.58s & 23.09s & 106.82s & 82.72s & 75.63s \\ RMM & 60.71s & 43.56s & 44.55s & 91.19s & 74.40s & 68.43s & 59.91s & 41.12s & 33.26s \\ ScaLAPACK & 66.11s & 37.05s & 28.30s & 83.96s & 58.17s & 35.45s & 53.06s & 28.13s & 22.34s \\ 2.5D & 62.93s & 29.60s & 23.11s & 83.59s & 46.43s & 34.82s & 61.13s & 28.36s & 21.21s \\ Dask & 200.64s & 161.23s & 104.12s & Fail & Fail & Fail & Fail & Fail & Fail \\ PETSc & 1034.40s & 535.85s & 430.80s & 1071.62s & 801.26s & 550.74s & 1051.71s & 810.61s & 598.24s \\ \hline \end{tabular} \label{table:mm} \end{center} \end{table*} } { \begin{table}[t] \small \begin{center} \caption{Predicted costs for a 10-node cluster.} \begin{tabular}{|c||c|c|c|} \hline & BMM & CMM & RMM \\ \hline \texttt{Two General} & $1.6\times10^{10}$ & $1.6\times10^{10}$ & $1.6\times10^{10}$ \\ \texttt{A Common Large Dim} & $6.4\times10^{10}$ & $1.0\times10^{9}$ & $6.4\times10^{10}$ \\ \texttt{Two Large Dims} & $8.0\times10^{9}$ & $6.4\times10^{10}$ & $8.0\times10^{9}$ \\ \hline \end{tabular} \label{table:mm_cost} \end{center} \end{table} } { \begin{table*}[] \small \begin{center} \caption{Nearest neighbor search runtimes.} \begin{tabular}{|c||c|c|c|c|c|c||c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{ \texttt{Wide ($N=1.5\times10^5$)}} & \multicolumn{3}{|c||}{\texttt{Wide ($N=1.5\times10^6$)}}& \multicolumn{3}{|c|}{ \texttt{Large ($D=3\times10^4$)}} & \multicolumn{3}{|c|}{\texttt{Large ($D=10^5$)}}\\ \hline Cluster Size & 4 &
8 & 12 & 4 & 8 & 12 & 4 & 8 & 12\\ \hline \texttt{Opt4Horizontal} & 5.64s & 4.71s & 3.36s & 55.62s & 39.24s & 24.63s & 13.26s & 17.71s & 28.25s & 159.69s & 229.40s & 315.17s\\ \texttt{Opt4Vertical} & 10.44s & 9.26s & 9.88s & 120.09s & 112.69s & 108.59s & 5.93s & 4.52s & 3.82s & 57.81s & 35.61s & 26.43s\\ Single big machine & 5.31s & 4.65s & 3.33s & 51.54s & 35.82s & 23.01s & 5.21s & 4.41s & 3.79s & 48.22s & 31.12s & 24.88s \\ Dask & 485.87s & 289.50s & 223.55s & Fail & Fail & Fail & 437.31s & 420.25s & 381.10s & Fail & Fail & Fail\\ \hline \end{tabular} \label{table:nns} \end{center} \end{table*} } { \begin{table}[] \small \begin{center} \caption{Predicted nearest neighbor search costs, 8 machines.} \begin{tabular}{|c||c|c|} \hline & \texttt{Opt4Horizontal} & \texttt{Opt4Vertical}\\ \hline \texttt{Wide data set} & $2.9 \times 10^8$ & $8.0 \times 10^{10}$ \\ \texttt{Large data set} & $7.2 \times 10^{10}$ & $4.8\times10^9$ \\ \hline \end{tabular} \label{table:prednn} \end{center} \end{table} } The goal of our paper is to design a computational abstraction that could be exported by the back-end of a machine learning system. To be specific, we have argued that: (1) such an abstraction should be expressive enough to encode a variety of computations; (2) the computations expressed using the abstraction should be competitive with hand-coded or special-purpose solutions; and (3) the abstraction should be amenable to automatic optimization. To determine whether the TRA meets these goals, we have implemented a simple Python back-end that exports the TRA/IA interface. Across three different large-scale ML computations, we experimentally evaluate this TRA-based back-end. To see whether a TRA-based back-end can provide suitable performance, we compare with a number of other options, including hand-coded MPI solutions, high-performance libraries such as ScaLAPACK, distributed data science tools such as Dask, and ML systems such as TensorFlow and PyTorch.
To see whether it is amenable to automatic optimization, for each ML computation, we apply a series of transformations to obtain multiple implementations in the IA, and evaluate whether the cost model is able to predict which is preferable. \noindent \textbf{Benchmark Tasks.} (i) distributed matrix multiplication, (ii) distributed nearest neighbor search in a Riemannian metric space, and (iii) distributed stochastic gradient descent (SGD) in a two-layer, feed-forward neural network (FFNN). \noindent \textbf{TRA Implementation.} We implement an execution engine for the IA in Python. While it may seem surprising that Python is appropriate for implementing a relational engine, for even very large ML problems, the number of tuples in a TRA computation is small; most data are stored in the large arrays. Our Python execution engine makes heavy use of PyTorch to handle those arrays. PyTorch is used to actually execute the compute kernels on the various sites in a compute cluster, and our IA implementation uses PyTorch's optimized communication library to move the arrays stored in tensor relations between machines. \subsection{Matrix Multiplication} Multiplication of $\textbf{A} \in \mathbb{R}^{I \times K }$ and $\textbf{B} \in \mathbb{R}^{K \times J}$ can be formalized as: \begin{footnotesize} \begin{align*} \Sigma _{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)} \left(\Join _{\left(\langle 1 \rangle, \langle 0 \rangle, \texttt{matMul}\right)} \left( \texttt{R}_\textbf{A}, \texttt{R}_\textbf{B} \right) \right) \end{align*} \end{footnotesize} \noindent where matrix $\textbf{A}$ and $\textbf{B}$ are stored in tensor relations $\texttt{R}_{\textbf{A}}$ and $\texttt{R}_{\textbf{B}}$.
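Ignoring distribution, the join-then-aggregate structure of this expression can be sketched in a few lines of Python (a single-process illustration of the semantics, with a simplified key layout for the join output; this is our own sketch, not the actual engine):

```python
import numpy as np

def local_join(Rl, Rr, keys_l, keys_r, proj_op):
    # Pair tuples whose selected key dimensions match; the output key is the
    # left key followed by the right key's non-join dimensions (simplified).
    out = {}
    for kl, al in Rl.items():
        for kr, ar in Rr.items():
            if tuple(kl[d] for d in keys_l) == tuple(kr[d] for d in keys_r):
                out_key = kl + tuple(v for d, v in enumerate(kr)
                                     if d not in keys_r)
                out[out_key] = proj_op(al, ar)
    return out

def local_agg(R, group_by, agg_op):
    # Fold together all tuples that share the projected key.
    out = {}
    for k, a in R.items():
        g = tuple(k[d] for d in group_by)
        out[g] = a if g not in out else agg_op(out[g], a)
    return out

BS = 2
rng = np.random.default_rng(7)
A = rng.standard_normal((4, 6))  # I x K
B = rng.standard_normal((6, 4))  # K x J
R_A = {(i, k): A[i*BS:(i+1)*BS, k*BS:(k+1)*BS]
       for i in range(2) for k in range(3)}
R_B = {(k, j): B[k*BS:(k+1)*BS, j*BS:(j+1)*BS]
       for k in range(3) for j in range(2)}

# Join on A's key dimension 1 and B's key dimension 0 with matMul,
# then aggregate on <0, 2> with matAdd.
R_C = local_agg(local_join(R_A, R_B, (1,), (0,), np.matmul), (0, 2), np.add)
C = np.block([[R_C[(i, j)] for j in range(2)] for i in range(2)])
```

The assembled result matches the dense product, which is exactly what the TRA expression promises for any blocking.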
To test the effectiveness of IA optimization, as others have done \cite{gu2017improving, han2019distme}, we consider three different multiplications: (i) general matrices ($I=K=J=4\times10^4$), (ii) matrices with a common large dimension ($K=6.4\times10^5$, $I=J=10^4$), and (iii) matrices with two large dimensions ($I=J=8\times10^4$, $K=10^4$). Matrices are filled with random data following the uniform distribution $\mathcal{U}(-1, 1)$. As discussed, the above TRA expression has three equivalent IA plans: broadcast-based matrix multiplication (BMM), cross-product-based matrix multiplication (CMM), and replication-based matrix multiplication (RMM). We compare these three IA implementations with Intel's version of ScaLAPACK \cite{choi1992scalapack}, which realizes the classic SUMMA \cite{van1997summa}. We also compare with our own, hand-coded version of the classical 2.5D matrix multiply algorithm \cite{solomonik2011communication}, implemented on top of MPI \cite{barker2015message}; with Dask \cite{dask}, a popular distributed analytic tool with a Python interface \cite{rocklin2015dask}; and with PETSc \cite{petsc}, a popular high-performance distributed computing library \cite{balay2019petsc}. All methods are benchmarked over Amazon EC2 clusters with 5, 10 or 15 \texttt{r5d.2xlarge} instances (each with 8 vCPU, 64 GB RAM, and connected by up to 10 Gb/s interconnect). Note that we have made a reasonable amount of effort to tune the hyper-parameters in the alternative solutions (e.g., grid size, thread number, initial layout, etc.) and report the best results. Results are in Table \ref{table:mm}. In Table \ref{table:mm_cost}, we report the IA cost (as computed in Section \ref{sec:cost}) predicted for a 10-node cluster.
\subsection{Nearest Neighbor Search} We use TRA to implement a nearest neighbor search problem in a Riemannian metric space encoded by matrix $\textbf{A} \in \mathbb{R}^{D \times D}$, where given a query vector $\textbf{x}_q \in \mathbb{R}^{1 \times D}$ and a candidate set $\textbf{X} \in \mathbb{R}^{N \times D}$, the goal is to find the row $\textbf{x}_i$ of $\textbf{X}$ that minimizes: $d_{\textbf{A}}\left(\textbf{x}_i, \textbf{x}_q\right) = \left(\textbf{x}_i - \textbf{x}_q\right)\textbf{A}\left(\textbf{x}_i - \textbf{x}_q\right)^T$. Suppose $\textbf{x}_q$, $\textbf{X}$, and $\textbf{A}$ are stored in tensor relations $\texttt{R}_{\textbf{x}_q}$, $\texttt{R}_{\textbf{X}}$ and $\texttt{R}_{\textbf{A}}$; then the corresponding TRA program can be encoded as: {\small \begin{align*} \texttt{R}_{\textbf{diff}} = &\Join_{\left(\langle 1 \rangle, \langle 1 \rangle, \texttt{matVecSub}\right)} \left( \texttt{R}_{\textbf{x}_q}, \texttt{R}_{\textbf{X}} \right) \\ \texttt{R}_{\textbf{proj}} = &\Sigma_{\left(\langle 0, 2 \rangle, \texttt{matAdd}\right)} \left( \Join_{\left(\langle 1 \rangle, \langle 0 \rangle, \texttt{matMul}\right)} \left(\texttt{R}_{\textbf{diff}} , \texttt{R}_{\textbf{A}}\right)\right) \\ \texttt{R}_{\textbf{dist}} =&\lambda_{\left(\texttt{rowSum}\right)}\left(\Sigma_{\left(\langle 0 \rangle, \texttt{matAdd}\right)} \left( \Join_{\left(\langle 0,1 \rangle, \langle 0,1 \rangle, \texttt{elemMul}\right)} \left(\texttt{R}_{\textbf{proj}} , \texttt{R}_{\textbf{diff}}\right)\right)\right)\\ \texttt{R}_{\textbf{min}} = &\Sigma_{\left(\langle \rangle, \texttt{minIndex}\right)}\left( \texttt{R}_{\textbf{dist}} \right). \end{align*} }% \noindent where $\texttt{matVecSub}$ is matrix-vector subtraction, $\texttt{elemMul}$ is element-wise matrix multiplication (Hadamard product), and $\texttt{minIndex}$ returns the index of the minimal element.
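For reference, the distance computation that this TRA program encodes is easy to state densely on a single machine. The sketch below (our own illustration, with synthetic data; the metric is made positive semi-definite purely for the example) follows the same stages as the TRA program:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 100, 8
X = rng.standard_normal((N, D))      # candidate set, one point per row
x_q = rng.standard_normal((1, D))    # query vector
M = rng.standard_normal((D, D))
A = M @ M.T                          # a random positive semi-definite metric

diff = X - x_q                       # matVecSub, vectorized over all rows
proj = diff @ A                      # the matMul / matAdd stage
dist = np.sum(proj * diff, axis=1)   # elemMul followed by rowSum
best = int(np.argmin(dist))          # minIndex
```

Each entry of `dist` is exactly $(\textbf{x}_i - \textbf{x}_q)\textbf{A}(\textbf{x}_i - \textbf{x}_q)^T$, and `best` is the index of the nearest neighbor.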
We hand-compile this into an expression in the IA, and then use the various equivalence rules to produce two different implementations: \texttt{Opt4Horizontal} and \texttt{Opt4Vertical}. \texttt{Opt4Horizontal} will broadcast $\texttt{R}_{\textbf{x}_q}$ and $\texttt{R}_{\textbf{A}}$ to each compute site and partition $\texttt{R}_{\textbf{X}}$ by dimension $0$; then the computation of $\texttt{R}_{\textbf{diff}}$, $\texttt{R}_{\textbf{proj}}$, and $\texttt{R}_{\textbf{dist}}$ will be conducted by local operations. \texttt{Opt4Vertical} will first broadcast $\texttt{R}_{\textbf{x}_q}$ to each site and compute $\texttt{R}_{\textbf{diff}}$, then partition $\texttt{R}_{\textbf{diff}}$ by dimension $1$ and partition $\texttt{R}_{\textbf{A}}$ by dimension $0$ so that $\texttt{R}_{\textbf{proj}}$ is computed in a cross-product-based matrix multiplication. For the \texttt{Opt4Horizontal} IA implementation, $\phys{R}_{\textbf{x}_q}$, $\phys{R}_{\textbf{X}}$ and $\phys{R}_{\textbf{A}}$ are initially partitioned by dimension $0$. For the \texttt{Opt4Vertical} IA implementation, $\phys{R}_{\textbf{x}_q}$ and $\phys{R}_{\textbf{A}}$ are initially partitioned by dimension $0$, while $\phys{R}_{\textbf{X}}$ is initially partitioned by dimension $1$. We generate two data sets: (i) \texttt{Wide}, with a large number of data points ($N=1.5\times10^5, 1.5\times10^6$) but a small feature space ($D=6\times10^3$); and (ii) \texttt{Large}, with a small number of data points ($N=6\times10^3$) but a large feature space ($D=3\times10^4, 10^5$). We execute this computation on compute clusters with 4, 8 or 12 \texttt{r5d.2xlarge} instances. We also implemented the same computation using Dask \cite{dask}.
As a baseline, we compare the execution time with a PyTorch implementation that runs on a single site equipped with the same computing power as the TRA cluster: an \texttt{r5d.8xlarge} instance (with 32 vCPU, 256 GB RAM), an \texttt{r5d.16xlarge} instance (with 64 vCPU, 512 GB RAM), and an \texttt{r5d.24xlarge} instance (with 96 vCPU, 768 GB RAM), respectively. Since the single-site implementation has zero communication overhead, it should serve as something of a lower bound on the time required to run the computation. The results and predicted costs are enumerated in Table \ref{table:nns} and Table \ref{table:prednn}. \subsection{Feed-Forward Neural Network} Lastly, we benchmark a training iteration of a two-layer FFNN for multi-label classification, computed over an input matrix. \begin{comment} Performing forward and backward propagation of SGD requires the following computations: {\small \begin{align*} &\textbf{a}_1 = f_{\textit{relu}}\left(\textbf{X} \textbf{W}^{(i)}_1\right); \textrm{ } \textbf{a}_2 = f_{\textit{sigmoid}}\left(\textbf{a}_1 \textbf{W}^{(i)}_2\right); \nonumber\\ & \nabla_{\textbf{a}_2}=\textbf{a}_2-\textbf{Y}; \textrm{ } \nabla_{\textbf{W}_2}^{(i)}= \textbf{a}_1^{T} \nabla_{\textbf{a}_2}; \textrm{ } \nonumber\\ &\nabla_{\textbf{a}_1} = f'_{\textit{relu}}\left(\textbf{a}_1\right) \circ \left(\nabla_{\textbf{a}_2} {\textbf{W}^{(i)}_2}^T \right) ; \textrm{ } \nabla_{\textbf{W}_1}^{(i)} = \textbf{X}^T \nabla_{\textbf{a}_1}; \nonumber\\ &\textbf{W}^{(i + 1)}_1 = \textbf{W}^{(i)}_1 - \eta \nabla_{\textbf{W}_1}^{(i)}; \textrm{ } \textbf{W}^{(i + 1)}_2 = \textbf{W}^{(i)}_2 - \eta \nabla_{\textbf{W}_2}^{(i)}. \nonumber\\ \end{align*} }% \noindent Here, $\circ$ is the Hadamard product.
Suppose the batch $(\textbf{X},\textbf{Y})$ is stored in tensor relations $\texttt{R}_{\textbf{X}}$ and $\texttt{R}_{\textbf{Y}}$, and the weight matrices $\textbf{W}_1^{(i)}$ and $\textbf{W}_2^{(i)}$ are stored in tensor relations $\texttt{R}_{\textbf{W}_{1}^{(i)}}$ and $\texttt{R}_{\textbf{W}_{2}^{(i)}}$. \end{comment} Again, we compile the TRA program for FFNN learning by hand into the IA, and use the equivalence rules to produce two implementations. The first, called \texttt{TRA-DP}, resembles the classic data parallel implementation. The second, called \texttt{TRA-MP}, corresponds to an intra-operation model parallel plan. We compare these two IA plans with the state-of-the-art data parallel implementations provided by PyTorch 1.7.1 \cite{li2020pytorch} and TensorFlow 2.4.1 \cite{abadi2016tensorflow}. We also compare with the same computation written on top of Dask \cite{dask}, and hand-coded using ScaLAPACK \cite{choi1992scalapack}. Note that these two options do not fully support GPUs. Two data sets are considered. First, the data from the Google speech recognition task \cite{warden2018speech}, where a $1600$-dimensional feature vector is extracted from audio wave-forms; the goal is to identify 10 keywords ($D=1600$ and $L=10$). For this task, we train a very wide hidden layer with a large number of neurons, where $H = 1\times10^5$, $1.5\times10^5$, or $2\times10^5$; a batch size of $10^4$ ($N=10^4$) is used for mini-batch SGD. Second, we consider the AmazonCat-14K \cite{mcauley2015image, mcauley2015inferring} benchmark, an extreme multi-label (XML) classification dataset with a large number of features ($D = 597540$) and labels ($L = 14588$). Here we train a relatively narrow network with $H = 0.5\times10^3$, $1\times10^3$, $3\times10^3$, $5\times10^3$, or $7\times10^3$; a batch size of $10^3$ ($N=10^3$) is used for mini-batch SGD.
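The forward and backward passes of this two-layer FFNN can be sketched densely in NumPy as a single SGD step (a minimal single-site illustration; the function names and toy shapes are ours, not the TRA, PyTorch, or TensorFlow implementations):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(X, Y, W1, W2, eta=0.1):
    """One forward/backward pass: a1 = relu(X W1), a2 = sigmoid(a1 W2)."""
    a1 = relu(X @ W1)
    a2 = sigmoid(a1 @ W2)
    g_a2 = a2 - Y                                    # gradient at the output
    g_W2 = a1.T @ g_a2
    g_a1 = (a1 > 0).astype(X.dtype) * (g_a2 @ W2.T)  # relu' is the 0/1 mask
    g_W1 = X.T @ g_a1
    return W1 - eta * g_W1, W2 - eta * g_W2

rng = np.random.default_rng(0)
N, D, H, L = 8, 4, 16, 3  # toy sizes, in the paper's notation
X = rng.normal(size=(N, D))
Y = rng.integers(0, 2, size=(N, L)).astype(float)
W1 = rng.normal(size=(D, H)) * 0.1
W2 = rng.normal(size=(H, L)) * 0.1
W1, W2 = sgd_step(X, Y, W1, W2)
print(W1.shape, W2.shape)  # (4, 16) (16, 3)
```

In \texttt{TRA-DP} the batch $\textbf{X}$ is partitioned across sites with the weights replicated; in \texttt{TRA-MP} the (possibly huge) weight matrices are partitioned instead, which is why the two plans behave so differently on the two data sets.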
Each is executed on CPU clusters with 2, 5 or 10 \texttt{r5dn.2xlarge} instances (connected by up to a 25 Gb/s interconnect) and GPU clusters with 2, 5 or 10 \texttt{p3.2xlarge} instances (each with an NVIDIA Tesla V100 GPU, and connected by a 10 Gb/s interconnect). The results for Google speech are listed in Table \ref{table:nn_speech}; for Amazon-XML in Table \ref{table:nn_xml}. Predicted costs are given in Table \ref{table:ffnn_cost_pred}. { \begin{table}[] \small \begin{center} \caption{SGD iteration time: FFNN for Google Speech.} \begin{tabular}{|c||c|c|c||c|c|c|} \hline Cluster & \multicolumn{3}{c||}{CPU} & \multicolumn{3}{c|}{GPU} \\ \hline Nodes & 2 & 5 & 10 & 2 & 5 & 10 \\ \hline \multicolumn{7}{|c|}{\texttt{100k Neurons}} \\ \hline PyTorch-DP & 11.16s & 6.15s & 4.75s & 0.99s & 1.19s & 1.27s \\ TF-DP & 11.93s & 7.32s & 5.51s & 0.87s & 1.13s & 1.17s \\ ScaLAPACK & 8.52s & 4.97s & 2.79s & NA & NA & NA \\ Dask & 62.57s & 56.57s & 49.63s & NA & NA & NA \\ TRA-DP & 11.62s & 6.51s & 5.20s & 1.49s & 1.59s & 1.63s \\ TRA-MP & 26.56s & 28.71s & 29.09s & 7.01s & 11.56s & Fail \\ \hline \multicolumn{7}{|c|}{\texttt{150k Neurons}} \\ \hline PyTorch-DP & 14.28s & 9.46s & 6.54s & 1.18s & 1.65s & 1.78s \\ TF-DP & 16.68s & 10.69s & 8.43s & 1.16s & 1.62s & 1.75s \\ ScaLAPACK & 13.45s & 7.48s & 3.87s & NA & NA & NA \\ Dask & 96.56s & 85.32s & 77.07s & NA & NA & NA \\ TRA-DP & 14.52s & 9.68s & 7.56s & 2.15s & 2.22s & 2.23s \\ TRA-MP & 33.20s & 42.80s & 43.10s & Fail & Fail & Fail \\ \hline \multicolumn{7}{|c|}{\texttt{200k Neurons}} \\ \hline PyTorch-DP & 17.25s & 11.94s & 9.30s & Fail & 2.09s & 2.42s \\ TF-DP & 21.36s & 13.21s & 11.21s & 1.52s & 2.12s & 2.46s \\ ScaLAPACK & 17.18s & 10.05s & 5.06s & NA & NA & NA \\ Dask & 136.66s & 112.72s & 104.01s & NA & NA & NA \\ TRA-DP & 17.89s & 12.51s & 9.67s & 2.94s & 2.80s & 2.85s \\ TRA-MP & 37.82s & 54.23s & 59.84s & Fail & Fail & Fail \\ \hline \end{tabular} \label{table:nn_speech} \end{center} \end{table} } { \begin{table}[]
\begin{center} \small \caption{SGD iteration time: FFNN for Amazon-XML.} \begin{tabular}{|c||c|c|c||c|c|c|} \hline Cluster & \multicolumn{3}{c||}{CPU} & \multicolumn{3}{c|}{GPU} \\ \hline Nodes & 2 & 5 & 10 & 2 & 5 & 10 \\ \hline \multicolumn{7}{|c|}{ \texttt{0.5k Neurons}} \\ \hline PyTorch-DP & 3.58s & 4.51s & 6.41s & 1.46s & 2.11s & 2.19s \\ TF-DP & 5.94s & 7.81s & 8.96s & 1.21s & 1.85s & 2.11s \\ ScaLAPACK & 4.92s & 2.91s & 1.73s & NA & NA & NA \\ Dask & 27.96s & 27.01s & 22.69s & NA & NA & NA \\ TRA-DP & 4.77s & 5.13s & 7.84s & 2.42s & 2.48s & 2.61s \\ TRA-MP & 2.18s & 1.41s & 0.83s & 0.15s & 0.12s & 0.09s \\ \hline \multicolumn{7}{|c|}{\texttt{1k Neurons}} \\ \hline PyTorch-DP & 9.74s & 10.29s & 10.34s & 2.67s & 3.76s & 4.20s \\ TF-DP & Fail & Fail & Fail & Fail & Fail & Fail \\ ScaLAPACK & 8.16s & 6.65s & 2.47s & NA & NA & NA \\ Dask & 45.40s & 42.15s & 29.34s & NA & NA & NA \\ TRA-DP & 12.50s & 14.29s & 15.68s & 4.67s & 4.69s & 4.73s \\ TRA-MP & 3.86s & 2.79s & 1.70s & 0.40s & 0.37s & 0.35s \\ \hline \multicolumn{7}{|c|}{\texttt{3k Neurons}} \\ \hline PyTorch-DP & 25.46s & 29.04s & 30.51s & Fail & Fail & Fail \\ TF-DP & Fail & Fail & Fail & Fail & Fail & Fail \\ ScaLAPACK & 17.56s & 9.59s & 7.91s & NA & NA & NA \\ Dask & 103.83s & 89.09s & 81.56s & NA & NA & NA \\ TRA-DP & 26.59s & 38.15s & 46.06s & Fail & 12.74s & 13.13s \\ TRA-MP & 10.57s & 6.36s & 3.88s & Fail & 0.54s & 0.44s \\ \hline \multicolumn{7}{|c|}{\texttt{5k Neurons}} \\ \hline PyTorch-DP & 34.05s & 46.53s & 50.17s & Fail & Fail & Fail \\ TF-DP & Fail & Fail & Fail & Fail & Fail & Fail \\ ScaLAPACK & 23.21s & 11.65s & 8.33s & NA & NA & NA \\ Dask & 246.56s & 143.86s & 127.26s & NA & NA & NA \\ TRA-DP & 44.12s & 68.54s & 75.15s & Fail & Fail & Fail \\ TRA-MP & 18.59s & 8.07s & 5.75s & Fail & 0.59s & 0.48s \\ \hline \multicolumn{7}{|c|}{\texttt{7k Neurons}} \\ \hline PyTorch-DP & Fail & Fail & Fail & Fail & Fail & Fail \\ TF-DP & Fail & Fail & Fail & Fail & Fail & Fail \\ ScaLAPACK & 29.19s & 14.04s &
9.57s & NA & NA \\ Dask & Fail & Fail & Fail & NA & NA & NA \\ TRA-DP & 60.28s & 89.36s & 107.86s & Fail & Fail & Fail \\ TRA-MP & 21.35s & 12.12s & 7.854s & Fail & Fail & 0.73s \\ \hline \end{tabular} \label{table:nn_xml} \end{center} \end{table} } { \begin{table}[t] \small \begin{center} \caption{Predicted FFNN costs for a 5-node cluster.} \begin{tabular}{|c||c|c|} \hline & TRA-DP & TRA-MP \\ \hline \texttt{Google speech 100k} & $9.7 \times 10^{8}$ & $1.0 \times 10^{10}$ \\ \texttt{Google speech 150k} & $1.5 \times 10^{9}$ & $1.5 \times 10^{10}$ \\ \texttt{Google speech 200k} & $1.9 \times 10^{9}$ & $2.0 \times 10^{10}$ \\ \hline \texttt{Amazon XML 1k} & $3.7 \times 10^{9}$ & $1.0 \times 10^{7}$ \\ \texttt{Amazon XML 3k} & $1.1 \times 10^{10}$ & $3.0 \times 10^{7}$ \\ \texttt{Amazon XML 5k} & $1.8 \times 10^{10}$ & $5.0 \times 10^{7}$ \\ \texttt{Amazon XML 7k} & $2.6 \times 10^{10}$ & $7.0 \times 10^{7}$ \\ \hline \end{tabular} \label{table:ffnn_cost_pred} \end{center} \end{table} } \subsection{Discussion} \label{sec:exp_disc} First, the experiments do seem to show that the TRA provides an abstraction upon which a variety of ML computations can be mapped. Further, the TRA seems to provide good performance. On the matrix multiplication experiments, the best TRA-based implementations were at least competitive with ScaLAPACK as well as our hand-coded MPI-based implementation (we observed 29s for the TRA-based CMM vs. 23s for hand-coded MPI in the ``two general matrices'' case, and 31s for the TRA-based BMM vs. 21s for hand-coded MPI in the ``two large dims'' case), even beating them both (23s for the TRA-based CMM vs. 35s for hand-coded MPI) in the ``common large dim'' case. On the FFNN experiments, the best TRA-based implementation for each task was about 1.5$\times$ slower than the hand-constructed ScaLAPACK implementation for the Google data set, but considerably faster than ScaLAPACK for the more challenging Amazon data set.
In general, it is fair to say that the best TRA-based implementation for each task was at least competitive with the ScaLAPACK and MPI-based codes. The fact that there is no significant performance hit in moving from a special-purpose tool requiring significant programmer expertise to a general-purpose implementation abstraction argues that a TRA-based back-end can indeed provide state-of-the-art performance. It is also instructive to compare our TRA-based implementations with the other, more user-friendly tools tested, which in practice would be more reasonable alternatives to an ML system with a TRA-based back-end. Dask was not competitive, and was often one or two orders of magnitude slower. On the FFNN experiments, PyTorch generally performed better than TensorFlow on CPU clusters, while both systems performed almost identically on GPU clusters. For Google speech, the optimal partitioning scheme is identical to data parallelism, and the best TRA-based option is able to closely match PyTorch's speed. Further, while PyTorch failed on the larger Google computations in a 2-GPU cluster, the TRA implementation was able to run to completion. On the even larger, extreme classification problem, the TRA-MP (model parallel) IA was much faster than PyTorch (TensorFlow fails in most cases, since it does not allow a parameter matrix to exceed 2 GB), and much more scalable. PyTorch also cannot handle the huge matrices required to power this computation in some settings. The final question we wanted to address was whether the TRA is amenable to automatic optimization. Note that in each case, there was one IA implementation that was suitable for the input data, and one that was not; the difference between the two was often significant. In a system based upon the TRA, it would be crucial to automatically choose the suitable implementation.
We found that in each case, the simple cost model from Section \ref{sec:cost} would have chosen the correct implementation. For example, consider Table \ref{table:ffnn_cost_pred}. In each case, the cost metric correctly assigns the lower cost to the appropriate IA computation: TRA-DP for the smaller Google problem, and TRA-MP for the larger extreme classification problem. These results suggest that it should be straightforward to perform cost-based optimization over the IA. \section{Related Work} \label{sec_rel} \noindent Our focus has been on the proper implementation abstraction for ML systems. The TRA is ``front-end agnostic.'' Still, there has been considerable interest in programming and compilation for such systems. FAQ, by Khamis et al. \cite{abo2016faq}, considers how to compute Einstein-notation-like expressions over semi-rings. Effectively, FAQ rewrites such expressions so that they can easily be computed using the ``OutsideIn'' method, first determining the non-zero entries using a series of joins, followed by the computation of the values. Laue et al. \cite{laue2020simple} propose a variant of the Ricci calculus for computing tensor-based derivatives using the Einstein notation. Tensor Comprehensions are an Einstein-like programming language and associated compiler that is able to produce efficient CUDA kernels \cite{vasilache2018tensor}; the tensor algebra compiler is a similar effort \cite{kjolstad2017tensor}. Our efforts are complementary. One could imagine, for example, using FAQ-like algorithms along with a compiler for high-performance kernels to generate expressions for a TRA-based back-end. Classic data-flow systems have been modified to support distributed machine learning. Both Spark \cite{zaharia2010spark} and SystemML \cite{ghoting2011systemml} provide native libraries for deep learning.
A number of deep learning frameworks can run on top of Spark, such as TensorFrames \cite{hunter2016tensorframes}, Deeplearning4j \cite{team2016deeplearning4j}, SparkNet \cite{moritz2015sparknet}, and BigDL \cite{dai2019bigdl}. Conceptually, these deep learning frameworks are related to the TRA, as they allow the distribution of ML computations. Consider TensorFrames. TensorFrames allows the items in a Spark DataFrame to be operated on by a TensorFlow computation. One could view those TensorFlow computations as being similar to the kernels applied by the TRA, and the Spark operations used to manipulate the data as being similar to the joins, aggregations, and so on offered by the TRA. The key difference is that while these systems are each significant engineering efforts aimed at marrying different technologies (TensorFlow and Spark in the case of TensorFrames), the TRA is designed as a generic back-end. In fact, a TensorFrames-like programming model could easily be mapped onto the TRA, with \texttt{mapRows}, \texttt{aggregate}, etc., being mapped to the appropriate TRA operations, and the TensorFlow computations run as kernels. Relational systems have long been proposed for ML. MLog \cite{li2017mlog} is a declarative relational system managing data movement, data persistency, and training batch generation. Similar ideas have been applied in \cite{abo2018database} for feature extraction queries over multi-relation databases, and in \cite{khamis2018ac} for optimizing sparse tensor computations constructed from relational tables. Recently, relational systems have also been considered as a runtime engine (instead of merely an efficient data loader) for distributed ML. DB4ML \cite{jasny2020db4ml} proposes user-defined iterative transactions. Multi-dimensional recursion has been built on top of SimSQL \cite{cai2013simulation}, a distributed analytic database system, to support neural network training \cite{jankov2019declarative}.
The idea of moving past relations to arrays as a database data model is long-standing (e.g., consider Baumann's work on Rasdaman \cite{baumann1998multidimensional}). SciDB \cite{brown2010overview} is a well-known system following this idea. LevelHeaded \cite{aberger2018levelheaded} uses a special key-value structure to support linear operations. MATLANG \cite{barcelo2019expressiveness} introduces a language for matrix manipulation. TensorDB \cite{kim2014tensordb,kim2014efficient} is a database system that can perform tensor manipulation. LARA \cite{hutchison2017laradb} proposes an algebra with tuple-wise operators, attribute-wise operators, and tuple extensions, then defines linear and relational algebra operations using these primitives. RMA \cite{dolmatova2020relational} attempts to bridge the gap between relations and matrices. While related, these systems attempt to implement tensor computations as algebraic expressions (e.g., a join followed by an aggregation) over relations of \texttt{(key, value)} pairs. This requires pushing a huge number of pairs through the system, which introduces significant overhead. \section{Conclusion} We have introduced the tensor relational algebra (TRA), and suggested it as the interface that could be exported by the back-end of a machine learning system. We have shown through extensive experimentation that a computation expressed in the TRA, then transformed into the implementation algebra and optimized, is competitive with (and often faster than) other options, including HPC software such as ScaLAPACK, and ML software such as TensorFlow and PyTorch. There are many avenues for future work. The TRA is not meant to be a user-facing programming language. Thus, a key question is: can a language such as Tensor Comprehensions or Einstein notation be compiled into the TRA?
At a high level, this should not be too difficult, as these languages match indices in different tensors (which is easily implemented as a join) and then sum out dimensions (aggregation). But there are many details to consider. The TRA uses arrays or ``chunks'' for speed. How should a tensor computation be automatically blocked or chunked? How should the compute kernels be generated automatically? Sparsity is also an important issue. A compiler could decide to store a sparse tensor using arrays that do not have zero dimensions, but where those arrays are themselves stored sparsely, with a high-performance kernel generated to handle the specific sparsity pattern.
\section{Introduction}\label{sec:infinite} In this paper we discuss the approximation of transport maps on infinite dimensional domains. Our main motivation is inference problems, in which the unknown belongs to a Banach space $Y$. Two examples are the following: \begin{itemize} \item {\bf Groundwater flow}: Consider a porous medium in a domain $\mathrm{D}\subseteq{\mathbb R}^3$. Given observations of the subsurface flow, we are interested in the permeability (hydraulic conductivity) of the medium in $\mathrm{D}$. The physical system is described by an elliptic partial differential equation, and the unknown quantity describing the permeability can be modelled as a function $\psi\in L^\infty(\mathrm{D})=Y$ \cite{groundwater}. \item {\bf Inverse scattering}: Suppose that $\mathrm{D}_{\rm scat}\subseteq{\mathbb R}^3$ is filled by a perfect conductor and illuminated by an electromagnetic wave. Given measurements of the scattered wave, we are interested in the shape of the scatterer $\mathrm{D}_{\rm scat}$. Assume that this domain can be described as the image of some bounded reference domain $\mathrm{D}\subseteq{\mathbb R}^3$ under a bi-Lipschitz transformation $\psi:\mathrm{D}\to{\mathbb R}^3$, i.e., $\mathrm{D}_{\rm scat}=\psi(\mathrm{D})$. The unknown is then the function $\psi\in W^{1,\infty}(\mathrm{D})=Y$. We describe the forward model in \cite{JSZ16}. \end{itemize} The Bayesian approach to these problems is to model $\psi$ as a $Y$-valued random variable and determine the distribution of $\psi$ conditioned on a noisy observation of the system.
Bayes' theorem can be used to specify this ``posterior'' distribution via the prior and the likelihood. The prior is a measure on $Y$ that represents our information on $\psi\in Y$ before making an observation. Mathematically speaking, assuming that the observation and the unknown follow some joint distribution, the prior is the marginal distribution of the unknown $\psi$. The goal is to explore the posterior and in this way to make inferences about $\psi$. We refer to \cite{MR3839555} for more details on the general methodology of Bayesian inversion in Banach spaces. For the analysis and implementation of such methods, instead of working with (prior and posterior) measures on the Banach space $Y$, it can be convenient to parameterize the problem and work with measures on ${\mathbb R}^\N$. To demonstrate this, choose a sequence $(\psi_j)_{j\in\N}$ in $Y$ and a measure $\mu$ on ${\mathbb R}^\N$. With ${\boldsymbol y}\vcentcolon= (y_j)_{j\in\N}\in{\mathbb R}^\N$ and \begin{equation}\label{eq:prior} \Phi({\boldsymbol y})\vcentcolon= \sum_{j\in\N}y_j\psi_j \end{equation} we can formally define a prior measure on $Y$ as the pushforward $\Phi_\sharp\mu$. Instead of inferring $\psi\in Y$ directly, we may instead infer the coefficient sequence ${\boldsymbol y}=(y_j)_{j\in\N}\in{\mathbb R}^\N$, in which case $\mu$ holds the prior information on the unknown coefficients. These viewpoints are equivalent in the sense that the conditional distribution of $\psi$ given an observation is the pushforward, under $\Phi$, of the conditional distribution of ${\boldsymbol y}$ given the observation.
Under certain assumptions on the prior and the space $Y$, the construction \eqref{eq:prior} arises naturally through the Karhunen--Lo\`eve expansion; see, e.g., \cite{MR3308418,1509.07526}. In this case the $y_j\in{\mathbb R}$ are uncorrelated random variables with unit variance, and the $\psi_j$ are eigenvectors of the prior covariance operator, with their norms equal to the square root of the corresponding eigenvalues. In this paper we concentrate on the special case where the coefficients $y_j$ are known to belong to a bounded interval. Up to a shift and a scaling this is equivalent to $y_j\in [-1,1]$, which will be assumed throughout. We refer to \cite[Sec.~2]{MR3839555} for the construction and further discussion of such (bounded) priors. The goal then becomes to determine and explore the posterior measure on $U\vcentcolon= [-1,1]^\N$. Denote this measure by ${\pi}$ and let $\mu$ be the prior measure on $U$ such that ${\pi}\ll\mu$. Then the Radon-Nikodym derivative $f_{\pi}\vcentcolon= \frac{\mathrm{d}{\pi}}{\mathrm{d}\mu}:U\to [0,\infty)$ exists. Since the forward model (and thus the likelihood) only depends on $\Phi({\boldsymbol y})$ in the Banach space $Y$, $f_{\pi}$ must be of the type \begin{equation}\label{eq:posterior} f_{\pi}({\boldsymbol y}) = \mathfrak{f}_{{\pi}}(\Phi({\boldsymbol y}))=\mathfrak{f}_{{\pi}}\Big(\sum_{j\in\N}y_j\psi_j\Big) \end{equation} for some $\mathfrak{f}_{\pi}:Y\to [0,\infty)$. We give a concrete example in Ex.~\ref{ex:bayes} where this relation holds. ``Exploring'' the posterior refers to computing expectations and variances w.r.t.\ ${\pi}$, or detecting areas of high probability w.r.t.\ ${\pi}$.
A standard technique to do so in high dimensions is Monte Carlo---or in this context Markov chain Monte Carlo---sampling, e.g., \cite{10.5555/1051451}. Another approach is via transport maps \cite{MR3821485}. Let $\measi$ be another measure on $U$ from which it is easy to sample. Then, a map $T:U\to U$ satisfying $T_\sharp\measi={\pi}$ (i.e., ${\pi}(A)=\measi(\set{{\boldsymbol y}}{T({\boldsymbol y})\in A})$ for all measurable $A$) is called a transport map that pushes forward $\measi$ to ${\pi}$. Such a $T$ has the property that if ${\boldsymbol y}\sim\measi$ then $T({\boldsymbol y})\sim{\pi}$, and thus samples from ${\pi}$ can easily be generated once $T$ has been computed. Observe that $\Phi\circ T:U\to Y$ will then transform a sample from $\measi$ to a sample from $\Phi_\sharp T_\sharp\measi=\Phi_\sharp{\pi}$, which is the posterior in the Banach space $Y$. Thus, given $T$, we can perform inference on the quantity in the Banach space. This motivates the setting we are investigating in this paper: for two measures $\measi$ and ${\pi}$ on $U$ whose densities are of the type \eqref{eq:posterior} for a smooth (see Sec.~\ref{sec:main}) function $\mathfrak{f}_{\pi}$, we are interested in the approximation of $T:U\to U$ such that $T_\sharp\measi={\pi}$. More precisely, we will discuss the approximation of the so-called Knothe--Rosenblatt (KR) transport by rational functions. The reason for using rational functions (rather than polynomials) is to guarantee that the resulting approximate transport is a bijection from $U$ to $U$. The rate of convergence will in particular depend on the decay rate of the functions $\psi_j$. If \eqref{eq:prior} is a Karhunen--Lo\`eve expansion, this is the decay rate of the square root of the eigenvalues of the covariance operator of the prior.
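To make the pushforward property concrete, consider a one-dimensional sketch on $[-1,1]$: with CDFs $F_\rho$ and $F_\pi$, the monotone map $T=F_\pi^{-1}\circ F_\rho$ satisfies $T_\sharp\rho=\pi$. The reference and target below are our toy choices (uniform reference, linear target density), not densities from this paper:

```python
import numpy as np

# Reference rho: uniform on [-1, 1], so F_rho(y) = (y + 1) / 2.
# Target pi: density f(x) = (x + 1)/2 on [-1, 1], so
# F_pi(x) = (x + 1)^2 / 4 and F_pi^{-1}(u) = 2*sqrt(u) - 1.
def T(y):
    u = (y + 1.0) / 2.0
    return 2.0 * np.sqrt(u) - 1.0

rng = np.random.default_rng(1)
samples = T(rng.uniform(-1.0, 1.0, size=200_000))
# E_pi[x] = \int_{-1}^{1} x (x + 1)/2 dx = 1/3; the pushed samples match it.
print(abs(samples.mean() - 1/3) < 0.01)  # True
```

The KR transport generalizes exactly this inverse-CDF construction coordinate by coordinate, with each coordinate conditioned on the previous ones.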
The faster this decay, the larger the convergence rate will be. The reason for analyzing the triangular KR transport is its wide use in practical algorithms \cite{MR2972870, spantini2018inference,jaini2019sum,wehenkel2019unconstrained}, and the fact that its concrete construction makes it amenable to a rigorous analysis. Sampling from high-dimensional distributions by transforming a (usually lower-dimensional) ``latent'' variable into a sample from the desired distribution is a standard problem in machine learning. It is tackled by methods such as generative adversarial networks \cite{goodfellow2014generative} and variational autoencoders \cite{doersch2016tutorial}. In the setting above, the high-dimensional distribution is the posterior on $Y$. We will show that under the assumptions of this paper, it is possible to approximately sample from this distribution by transforming a low-dimensional latent variable, and without suffering from the curse of dimensionality. While Bayesian inference is our motivation, for the rest of the manuscript the presentation remains in an abstract setting, and our results therefore have ramifications for the broader task of transforming high-dimensional distributions. \subsection{Contributions and outline} In this manuscript we generalize the analysis of \cite{zm1} to the infinite dimensional case. Part of the proofs are based on the results in \cite{zm1}, which we recall in the appendix where appropriate to improve readability. In Sec.~\ref{sec:main} we provide a short description of our main result. Sec.~\ref{SEC:Tinf} discusses the KR map in infinite dimensions. Its well-definedness in infinite dimensions has been established in \cite{bogachevtri}. In Thm.~\ref{THM:KNOTHEINF} we additionally give a formula for the pushforward density assuming continuity of the densities w.r.t.\ the product topology. In Sec.~\ref{sec:infanalyticity} we analyze the regularity of the KR transport.
The fact that a transport inherits the smoothness of the densities is known for certain function classes: for example, in the case of $C^k$ densities, \cite{MR3349831} shows that the optimal transport also belongs to $C^k$, and a similar statement holds for the KR transport; see for example \cite[Remark 2.19]{santambrogio}. In Prop.~\ref{PROP:COR:DININF}, assuming analytic densities we show analyticity of the KR transport. Furthermore, and more importantly, we carefully examine the domain of holomorphic extension to the complex numbers. These results are exploited in Sec.~\ref{sec:polinfty} to show convergence of rational function approximations to $T$ in Thm.~\ref{THM:TINF}. This result proves a \emph{dimension-independent} higher-order convergence rate for the transport of measures supported on infinite dimensional spaces (which need not be supported on finite dimensional subspaces). In this result, \emph{all} occurring constants (not just the convergence rate) are controlled independently of the dimension. In Sec.~\ref{sec:measinfty} we show that this implies convergence of the pushforward measures (on $U$ and on the Banach space $Y$) in the Hellinger distance, the total variation distance, the KL divergence, and the Wasserstein distance. These results are formulated in Thm.~\ref{THM:MEASCONVINF} and Thm.~\ref{thm:wassersteinconv}. To prove the latter, in Prop.~\ref{PROP:WASSERSTEIN} we slightly extend a statement from \cite{MR4120535} to compact Polish spaces, showing that the Wasserstein distance between two pushforward measures can be bounded by the maximal distance of the two maps pushing forward the initial measure. Finally, we show that it is possible to compute approximate samples from the pushforward measure in the Banach space $Y$ by mapping a low-dimensional reference sample to the Banach space; see Cor.~\ref{COR:MEASCONVINF}. All proofs can be found in the appendix.
\section{Main result}\label{sec:main} For $k\in\N$ let \begin{equation}\label{eq:U} \U{k}\vcentcolon= [-1,1]^k\qquad\text{and}\qquad U\vcentcolon= [-1,1]^\N \end{equation} where these sets are equipped with the product topology and the Borel $\sigma$-algebra, which coincides with the product $\sigma$-algebra \cite[Lemma 6.4.2 (ii)]{bogachev}. Additionally, let $U_0\vcentcolon= \emptyset$. Denote by $\lambda$ the Lebesgue measure on $[-1,1]$ and by \begin{equation}\label{eq:mu} \mu=\bigotimes_{j\in\N}\frac{\lambda}{2} \end{equation} the infinite product measure. Then $\mu$ is a (uniform) probability measure on $U$. By abuse of notation, for $k\in\N$ we additionally denote $\mu=\otimes_{j=1}^k\frac{\lambda}{2}$, where $k$ will always be clear from context. For a \emph{reference} measure $\measi\ll\mu$ and a \emph{target} measure ${\pi}\ll\mu$ on $U$, we investigate the smoothness and approximability of the KR transport $T:U\to U$ satisfying $T_\sharp\measi={\pi}$; the notation $T_\sharp\measi$ refers to the pushforward measure defined by $T_\sharp\measi(A)\vcentcolon= \measi(\set{T({\boldsymbol y})\in A}{{\boldsymbol y}\in U})$ for all measurable $A\subseteq U$. While in general there exist multiple maps $T:U\to U$ pushing forward $\measi$ to ${\pi}$, the KR transport is the unique such map satisfying \emph{triangularity} and \emph{monotonicity}. Triangularity refers to the $k$th component $T_k$ of $T=(T_k)_{k\in\N}$ being a function of the variables $x_1,\dots,x_k$ only, i.e., $T_k:\U{k}\to \U{1}$ for all $k\in\N$. Monotonicity means that $x_k\mapsto T_k(x_1,\dots,x_{k-1},x_k)$ is monotonically increasing on $\U{1}$ for every $k\in\N$ and every fixed $(x_1,\dots,x_{k-1})\in \U{k-1}$.
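For intuition, in the finite dimensional case the KR components can be written explicitly in terms of (conditional) CDFs; this standard construction (the notation $F_{\measi,k}$, $F_{{\pi},k}$ is ours, introduced only for this illustration) reads
\begin{align*}
T_1(x_1) &= F_{{\pi},1}^{-1}\bigl(F_{\measi,1}(x_1)\bigr),\\
T_k(x_1,\dots,x_k) &= F_{{\pi},k}^{-1}\Bigl(F_{\measi,k}\bigl(x_k \,\big|\, x_1,\dots,x_{k-1}\bigr) \;\Big|\; T_1(x_1),\dots,T_{k-1}(x_1,\dots,x_{k-1})\Bigr),
\end{align*}
where $F_{\measi,1}$, $F_{{\pi},1}$ denote the CDFs of the first marginals, and $F_{\measi,k}(\cdot\,|\,x_1,\dots,x_{k-1})$, $F_{{\pi},k}(\cdot\,|\,\cdot)$ the conditional CDFs of the $k$th coordinate given the first $k-1$. Monotonicity in $x_k$ is immediate since each conditional CDF is increasing, and triangularity holds by construction.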
Absolute continuity of $\measi$ and ${\pi}$ w.r.t.\ $\mu$ implies existence of the Radon-Nikodym derivatives \begin{equation} f_\measi\vcentcolon= \frac{\mathrm{d}\measi}{\mathrm{d}\mu}\qquad\text{and}\qquad f_{\pi}\vcentcolon= \frac{\mathrm{d}{\pi}}{\mathrm{d}\mu} \end{equation} which will also be referred to as the densities of these measures. Assuming for the moment existence of the KR transport $T$, approximating $T$ requires approximating the \emph{infinitely many} functions $T_k:\U{k}\to \U{1}$, $k\in\N$. This, and the fact that the domain $\U{k}$ of $T_k$ becomes increasingly high dimensional as $k\to\infty$, makes the problem quite challenging. For these reasons, further assumptions on $\measi$ and ${\pi}$ are necessary. Typical requirements imposed on the measures guarantee some form of intrinsic low dimensionality. Examples include densities belonging to certain reproducing kernel Hilbert spaces, or to other function classes of sufficient regularity. In this paper we concentrate on the latter. As is well-known, if $T_k:\U{k}\to \U{1}$ belongs to $C^k$, then it can be uniformly approximated with the $k$-independent convergence rate of $1$, for instance with multivariate polynomials. The convergence rate to approximate $T_k$ then does not deteriorate with increasing $k$, but the constants in such error bounds usually still depend exponentially on $k$. Moreover, as $k\to\infty$, this line of argument requires the components of the map to become arbitrarily regular. For this reason, in the present work, where $T=(T_k)_{k\in\N}$, it is not unnatural to restrict ourselves to transports that are $C^\infty$. More precisely, we in particular assume \emph{analyticity} of the densities $f_\measi$ and $f_{\pi}$, which in turn implies analyticity of $T$ as we shall see.
This will allow us to control all occurring constants \emph{independently} of the dimension, and to approximate the whole map $T:U\to U$ using only finitely many degrees of freedom in our approximation. Assume in the following that $Z$ is a Banach space with complexification $Z_{\mathbb C}$; see, e.g., \cite{padraig,munoz99} for the complexification of Banach spaces. We may think of $Z$ and $Z_{\mathbb C}$ as real and complex valued function spaces, e.g., $Z=L^2([0,1];{\mathbb R})$ and $Z_{\mathbb C}=L^2([0,1];{\mathbb C})$. To guarantee analyticity and the structure in \eqref{eq:prior} we consider densities $f$ of the following type: \begin{assumption}\label{ass:density} For constants $p\in (0,1)$, $0<M\le L<\infty$, a sequence $(\psi_j)_{j\in\N}\subseteq Z$, and a differentiable function $\mathfrak{f}:O_Z\to {\mathbb C}$ with $O_Z\subseteq Z_{\mathbb C}$ open, the following hold: \begin{enumerate}[label=(\alph*)] \item $\sum_{j\in\N}\norm[Z]{\psi_{j}}^p<\infty$, \item $\sum_{j\in\N}y_j\psi_{j}\in O_Z$ for all ${\boldsymbol y}\in U$, \item $\mathfrak{f}(\sum_{j\in\N}y_j\psi_{j})\in{\mathbb R}$ for all ${\boldsymbol y}\in U$, \item\label{item:densityML} $M= \inf_{\psi\in O_Z}|\mathfrak{f}(\psi)|\le \sup_{\psi\in O_Z}|\mathfrak{f}(\psi)| = L$. \end{enumerate} The function $f:U\to{\mathbb R}$ given by \begin{equation}\label{eq:density} f({\boldsymbol y})\vcentcolon= \mathfrak{f}\bigg(\sum_{j\in\N}\psi_{j}y_j\bigg) \end{equation} satisfies $\int_U f({\boldsymbol y})\;\mathrm{d}\mu({\boldsymbol y})=1$.
\end{assumption} \begin{assumption}\label{ass:densities} For two sequences $(\psi_{*,j})_{j\in\N}\subseteq Z$ with $(*,Z)\in \{(\measi,X),({\pi},Y)\}$, the functions \begin{equation*} f_\measi({\boldsymbol y})=\mathfrak{f}_\measi\bigg(\sum_{j\in\N}y_j\psi_{\measi,j}\bigg),\qquad f_{\pi}({\boldsymbol y})=\mathfrak{f}_{\pi}\bigg(\sum_{j\in\N}y_j\psi_{{\pi},j}\bigg) \end{equation*} both satisfy Assumption \ref{ass:density} for some fixed constants $p\in (0,1)$ and $0<M\le L<\infty$. \end{assumption} The summability parameter $p$ determines the decay rate of the functions $\psi_j$---the smaller $p$, the stronger the decay of the $\psi_j$. Because $p<1$, the argument of $\mathfrak{f}$ in \eqref{eq:density} is well-defined for ${\boldsymbol y}\in U$ since $\sum_{j\in\N}|y_j|\norm[Z]{\psi_{j}}<\infty$. Our main result is about the existence and approximation of the KR transport $T:U\to U$ satisfying $T_\sharp\measi={\pi}$. We state the result here in a simplified form; more details will be given in Thm.~\ref{THM:TINF}, Thm.~\ref{THM:MEASCONVINF}, and Thm.~\ref{thm:wassersteinconv}. We only mention that the trivial approximation $T_k(x_1,\dots,x_k)\simeq x_k$ is interpreted as not requiring any degrees of freedom in the following theorem. \begin{theorem}\label{thm:main} Let $f_\measi:U\to (0,\infty)$ and $f_{\pi}:U\to (0,\infty)$ be two probability densities as in Assumption \ref{ass:densities} for some $p\in (0,1)$. Then there exists a unique triangular, monotone, and bijective map $T:U\to U$ satisfying $T_\sharp\measi={\pi}$. Moreover, for $N\in\N$ there exists a space of rational functions employing $N$ degrees of freedom, and a bijective, monotone, and triangular $\tilde T:U\to U$ in this space such that \begin{equation}\label{eq:error} {\rm dist}(\tilde T_\sharp\measi,{\pi})\le C N^{-\frac{1}{p}+1}.
\end{equation} Here $C$ is a constant independent of $N$, and ``${\rm dist}$'' may refer to the total variation distance, the Hellinger distance, the KL divergence, or the Wasserstein distance. \end{theorem} Equation \eqref{eq:error} shows a dimension-independent convergence rate (indeed, our transport is defined on the infinite dimensional domain $U=[-1,1]^\N$), so that the curse of dimensionality is overcome. The rate of algebraic convergence becomes arbitrarily large as $p\in (0,1)$ in Assumption \ref{ass:density} becomes small. The convergence rate $\frac{1}{p}-1$ in Thm.~\ref{thm:main} is well-known for the approximation of functions as in \eqref{eq:density} by sparse polynomials, e.g., \cite{CDS10,CDS11,CCS15}; see also Rmk.~\ref{rmk:bpe}. There is a key difference to earlier results dealing with the approximation of such functions: we do not approximate the function $f:U\to {\mathbb R}$ in \eqref{eq:density}, but instead we approximate the transport $T:U\to U$, i.e., an infinite number of functions. Our main observation in this paper is that the sparsity of the densities $f_\measi$ and $f_{\pi}$ carries over to the transport. Even though it has infinitely many components, $T$ can still be approximated very efficiently if the ansatz space is carefully chosen and tailored to the specific densities. In addition to showing the error convergence \eqref{eq:error}, in Thm.~\ref{THM:TINF} we give concrete ansatz spaces achieving this convergence rate. These ansatz spaces can be computed in linear complexity and may be used in applications. The main application of our result is to provide a method to sample from the target ${\pi}$ or the pushforward $\Phi_\sharp{\pi}$ in the Banach space $Y$, where $\Phi({\boldsymbol y})=\sum_{j\in\N}y_j\psi_{{\pi},j}$.
Given an approximation $\tilde T=(\tilde T_j)_{j\in\N}$ to $T$, this is achieved via $\Phi(\tilde T({\boldsymbol y}))$ for ${\boldsymbol y}\sim\measi$. It is natural to truncate this expansion, which yields \begin{equation*} \sum_{j=1}^s \tilde T_j(y_1,\dots,y_j)\psi_{{\pi},j} \end{equation*} for some truncation parameter $s\in\N$ and $(y_1,\dots,y_s)\in\U{s}$. This map transforms a sample from a distribution on the $s$-dimensional space $\U{s}$ to a sample from an infinite dimensional distribution on $Y$. In Cor.~\ref{COR:MEASCONVINF} we show that the error of this truncated representation in the Wasserstein distance converges with the same rate as given in Thm.~\ref{thm:main}. \begin{remark} The reference $\measi$ is a ``simple'' measure whose main purpose is to allow for easy sampling. One possible choice for $\measi$ (that we have in mind throughout this paper) is the uniform measure $\mu$. It trivially satisfies Assumption \ref{ass:density} with $\mathfrak{f}_\measi:{\mathbb C}\to{\mathbb C}$ being the constant $1$ function (and, e.g., $\psi_{\measi,j}=0\in{\mathbb C}$). \end{remark} \begin{remark} Even though we can think of $\measi$ as being $\mu$, we formulated Thm.~\ref{thm:main} in more generality, mainly for the following reason: since the assumptions on $\measi$ and ${\pi}$ are the same, we may switch their roles. Thus Thm.~\ref{thm:main} can be turned into a statement about the inverse transport $S\vcentcolon= T^{-1}:U\to U$, which can also be approximated at the rate $\frac{1}{p}-1$.
\end{remark} \begin{example}[Bayesian inference]\label{ex:bayes} For a Banach space $Y$ (``parameter space'') and a Banach space $\mathcal{X}$ (``solution space''), let $\scr{u}:O_Y\to \mathcal{X}_{\mathbb C}$ be a complex differentiable \emph{forward operator} that takes values in (the ${\mathbb R}$-vector space) $\mathcal{X}$ for inputs in (the open subset of the ${\mathbb R}$-vector space) $Y\cap O_Y$. Here $O_Y\subseteq Y_{\mathbb C}$ is some nonempty open set. Let $G:\mathcal{X}\to {\mathbb R}^m$ be a bounded linear \emph{observation operator}. For some unknown $\psi\in Y$ we are given a noisy observation of the system in the form \begin{equation*} \varsigma = G(\scr{u}(\psi))+\eta\in{\mathbb R}^m, \end{equation*} where $\eta\sim\mathcal{N}(0,\Gamma)$ is a centered Gaussian random variable with symmetric positive definite covariance $\Gamma\in{\mathbb R}^{m\times m}$. The goal is to recover $\psi$ given the measurement $\varsigma$. To formulate the Bayesian inverse problem, we first fix a prior: let $(\psi_{j})_{j\in\N}$ be a summable sequence of linearly independent elements in $Y$. With \begin{equation*} \Phi({\boldsymbol y})\vcentcolon= \sum_{j\in\N}y_j\psi_j \end{equation*} and the uniform measure $\mu$ on $U$, we choose the prior $\Phi_\sharp\mu$ on $Y$. Determining $\psi$ within the set $\set{\Phi({\boldsymbol y})}{{\boldsymbol y}\in U}\subseteq Y$ is equivalent to determining the coefficient sequence ${\boldsymbol y}\in U$.
Assuming independence of ${\boldsymbol y}\sim \mu$ and $\eta\sim{\mathcal N}(0,\Gamma)$, the distribution of ${\boldsymbol y}$ given $\varsigma$ (the posterior) can then be characterized by its density w.r.t.\ $\mu$, which, up to a normalization constant, equals \begin{equation}\label{eq:post} \exp\left(-\frac{1}{2}\Bigg(\varsigma-G\Big(\scr{u}\Big(\sum_{j\in\N}y_j\psi_j\Big)\Big)\Bigg)^\top\Gamma^{-1}\Bigg(\varsigma-G\Big(\scr{u}\Big(\sum_{j\in\N}y_j\psi_j\Big)\Big)\Bigg)\right). \end{equation} This posterior density is of the form \eqref{eq:density}, and the corresponding measure ${\pi}$ can be chosen as a target in Thm.~\ref{thm:main}. Given $T$ satisfying $T_\sharp\measi={\pi}$, we may then explore ${\pi}$ to perform inference on the unknown ${\boldsymbol y}$ (or its image $\Phi({\boldsymbol y})$ in the Banach space $Y$); see for instance \cite[Sec.~7.4]{zm1}. For more details on the rigorous derivation of \eqref{eq:post} we refer to \cite{CSStBIP2012} and in particular \cite[Sec.~3]{MR3839555}. \end{example} \begin{remark}\label{rmk:bpe} Functions as in Assumption \ref{ass:density} belong to the set of so-called ``$(\bsb,p,\varepsilon)$-holomorphic'' functions; see, e.g., \cite{CCS15}. This class contains infinite parametric functions that are holomorphic in each argument $y_j$ and exhibit some growth in the domain of holomorphic extension as $j\to\infty$. The results of the present paper and the key arguments remain valid if we replace Assumption \ref{ass:density} with the $(\bsb,p,\varepsilon)$-holomorphy assumption. Since most relevant examples of such functions are of the specific type \eqref{eq:density}, we restrict the discussion to this case in order to avoid technicalities.
\end{remark} } \section{The Knothe--Rosenblatt transport in infinite dimensions}\label{SEC:Tinf} Recall that we consider the product topology on $U=[-1,1]^\N$. Assume that $f_\measi\in C^0(U;{\mathbb R}_+)$ and $f_{\pi}\in C^0(U;{\mathbb R}_+)$ are two positive probability densities. Here ${\mathbb R}_+\vcentcolon= (0,\infty)$, and $C^0(U;{\mathbb R}_+)$ denotes the space of continuous functions from $U$ to ${\mathbb R}_+$. We now recall the construction of the KR map. For ${\boldsymbol y}=(y_j)_{j\in\N}\in {\mathbb C}^\N$ and $1\le k\le n<\infty$ let \begin{equation}\label{eq:slices} {\boldsymbol y}_{[k]}\vcentcolon= (y_j)_{j=1}^k,\qquad {\boldsymbol y}_{[k:n]}\vcentcolon= (y_j)_{j=k}^n,\qquad {\boldsymbol y}_{[n:]}\vcentcolon= (y_j)_{j\ge n}.
\end{equation} For $*\in\{\measi,{\pi}\}$ and ${\boldsymbol y}\in U$ define \begin{subequations}\label{eq:fk2} \begin{equation} \hat f_{*,0}({\boldsymbol y})\vcentcolon= 1 \end{equation} and for $k\in\N$ \begin{equation} \hat f_{*,k}({\boldsymbol y}_{[k]})\vcentcolon= \int_{U}f_*({\boldsymbol y}_{[k]},\bst) \;\mathrm{d}\mu(\bst)>0,\qquad f_{*,k}({\boldsymbol y}_{[k]})\vcentcolon= \frac{\hat f_{*,k}({\boldsymbol y}_{[k]})}{\hat f_{*,k-1}({\boldsymbol y}_{[k-1]})}>0. \end{equation} \end{subequations} Then, ${\boldsymbol y}_{[k]}\mapsto \hat f_{\measi,k}({\boldsymbol y}_{[k]})$ is the marginal density of $\measi$ in the first $k$ variables ${\boldsymbol y}_{[k]}\in \U{k}$, and we denote the corresponding measure on $\U{k}$ by $\measi_k$. Similarly, $y_k\mapsto f_{\measi,k}({\boldsymbol y}_{[k-1]},y_k)$ is the conditional density of $y_k$ given ${\boldsymbol y}_{[k-1]}$, and the corresponding measure on $\U{1}$ is denoted by $\measi_k^{{\boldsymbol y}_{[k-1]}}$.
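For a finite dimensional toy density these marginal and conditional densities can be tabulated numerically. The following sketch (our own illustration with a hypothetical density $f(y_1,y_2)=1+\frac{y_1y_2}{2}$, not taken from the text) checks that each conditional $f_2(y_1,\cdot)$ integrates to one against $\mu$:

```python
import numpy as np

# Hypothetical toy density on U_2 = [-1,1]^2 (our illustration):
# f(y1,y2) = 1 + y1*y2/2 is positive and integrates to 1 against
# mu = (dy1/2) x (dy2/2).
def f(y1, y2):
    return 1.0 + 0.5 * y1 * y2

n = 2001
t = np.linspace(-1.0, 1.0, n)            # quadrature grid on [-1,1]
h = t[1] - t[0]
w = np.full(n, h)
w[0] = w[-1] = h / 2.0                   # trapezoid weights for dy ...
w /= 2.0                                 # ... turned into weights for mu

def fhat1(y1):
    # marginal density \hat f_1(y1) = int f(y1, t) dmu(t)
    return float(np.sum(f(y1, t) * w))

def f2(y1, y2):
    # conditional density f_2(y1, y2) = f(y1, y2) / \hat f_1(y1)
    return f(y1, y2) / fhat1(y1)

print(fhat1(0.7))                        # ~ 1.0 (the y2-term averages out)
print(float(np.sum(f2(0.7, t) * w)))     # ~ 1.0 (a probability density in y2)
```

The trapezoid rule is exact for this density, which is affine in each variable, so both printed values equal one up to rounding.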
The same holds for the densities of ${\pi}$, and we use the analogous notation ${\pi}_k$ and ${\pi}_k^{{\boldsymbol y}_{[k-1]}}$ for the marginal and conditional measures. Recall that for two atomless measures $\eta$ and $\nu$ on $\U{1}$ with distribution functions $F_\eta:\U{1}\to [0,1]$ and $F_\nu:\U{1}\to [0,1]$, the map $F_\eta^{-1}\circ F_\nu:\U{1}\to \U{1}$ pushes forward $\nu$ to $\eta$, as is easily checked; see, e.g., \cite[Thm.~2.5]{santambrogio}. In case $\eta$ and $\nu$ have positive densities on $\U{1}$, this map is the unique strictly monotonically increasing such function. With this in mind, the KR transport can be constructed as follows: let $T_1:\U{1}\to \U{1}$ be the (unique) monotonically increasing transport satisfying \begin{subequations}\label{eq:T} \begin{equation} (T_1)_\sharp \measi_1 = {\pi}_1. \end{equation} Analogous to \eqref{eq:slices} denote $T_{[k]}\vcentcolon= (T_j)_{j=1}^k:\U{k}\to \U{k}$. Inductively, for any ${\boldsymbol y}\in U$ let $T_{k+1}({\boldsymbol y}_{[k]},\cdot):\U{1}\to \U{1}$ be the (unique) monotonically increasing transport such that \begin{equation} (T_{k+1}({\boldsymbol y}_{[k]},\cdot))_\sharp \measi_{k+1}^{{\boldsymbol y}_{[k]}} ={\pi}_{k+1}^{T_{[k]}({\boldsymbol y}_{[k]})}. \end{equation} \end{subequations} Note that $T_{k+1}:\U{{k+1}}\to \U{1}$ and thus $T_{[k+1]}=(T_j)_{j=1}^{k+1}:\U{{k+1}}\to \U{{k+1}}$. It can then be shown that for any $k\in\N$ \cite[Prop.~2.18]{santambrogio} \begin{equation}\label{eq:Tfinite} (T_{[k]})_\sharp \measi_k = {\pi}_k.
\end{equation} By induction this construction yields a map $T\vcentcolon= (T_k)_{k\in\N}$ where each $T_k:\U{k}\to \U{1}$ satisfies that $T_k({\boldsymbol y}_{[k-1]},\cdot):\U{1}\to \U{1}$ is strictly monotonically increasing and bijective. This implies that $T:U\to U$ is bijective, as follows. First, to show \emph{injectivity}: let ${\boldsymbol x}\neq {\boldsymbol y}\in U$ and $j=\argmin\set{i}{x_i\neq y_i}$. Since $t\mapsto T_j(x_1,\dots,x_{j-1},t)$ is bijective, $T_j(x_1,\dots,x_{j-1},x_j)\neq T_j(x_1,\dots,x_{j-1},y_j)$ and thus $T({\boldsymbol x})\neq T({\boldsymbol y})$. Next, to show \emph{surjectivity}: fix ${\boldsymbol y}\in U$. Bijectivity of $T_1:\U{1}\to \U{1}$ implies the existence of $x_1\in\U{1}$ such that $T_1(x_1)=y_1$. Inductively choose $x_j$ such that $T_j(x_1,\dots,x_j)=y_j$. Then $T({\boldsymbol x})={\boldsymbol y}$. Thus: \begin{lemma}\label{lemma:bijective} Let $T=(T_k)_{k\in\N}:U\to U$ be triangular. If $t\mapsto T_k({\boldsymbol y}_{[k-1]},t)$ is bijective from $\U{1}$ to $\U{1}$ for every ${\boldsymbol y}\in U$ and $k\in\N$, then $T:U\to U$ is bijective. \end{lemma} The continuity assumption on the densities guarantees that the marginal densities on $\U{k}$ converge uniformly to the full density, as we show next. This indicates that it is in principle possible to approximate the infinite dimensional transport map by restricting to finitely many dimensions. \begin{lemma}\label{LEMMA:FC0} Let $f\in C^0(U;{\mathbb R}_+)$, and let $\hat f_k$ and $f_k$ be as in \eqref{eq:fk2}.
Then \begin{enumerate} \item $f$ is measurable and $f\in L^2(U,\mu)$, \item $\hat f_{k}\in C^0(\U{k};{\mathbb R}_+)$ and $f_{k}\in C^0(\U{k};{\mathbb R}_+)$ for every $k\in\N$, \item it holds that \begin{equation}\label{eq:unifconv} \lim_{k\to\infty}\sup_{{\boldsymbol y}\in U}|\hat f_{k}({\boldsymbol y}_{[k]})-f({\boldsymbol y})|=0. \end{equation} \end{enumerate} \end{lemma} Throughout what follows, $T$ always stands for the KR transport defined in \eqref{eq:T}. Next we show that $T$ indeed pushes forward $\measi$ to ${\pi}$, and additionally we provide a formula for the transformation of densities. In the following, $\partial_jg({\boldsymbol x})\vcentcolon= \frac{\partial}{\partial x_j}g({\boldsymbol x})$. Furthermore, we call $f:U\to{\mathbb R}$ a \emph{positive probability density} if $f({\boldsymbol y})>0$ for all ${\boldsymbol y}\in U$ and $\int_U f({\boldsymbol y})\;\mathrm{d}\mu({\boldsymbol y})=1$. \begin{theorem}\label{THM:KNOTHEINF} Let $f_{\pi}$, $f_\measi \in C^0(U;{\mathbb R}_+)$ be two positive probability densities.
Then \begin{enumerate} \item\label{item:KT} $T=(T_j)_{j\in\N}:U\to U$ is measurable, bijective, and satisfies $T_\sharp\measi={\pi}$, \item\label{item:rdder} for each $k\in\N$ it holds that $\partial_kT_k({\boldsymbol y}_{[k]})\in C^0(\U{k};{\mathbb R}_+)$ and \begin{equation}\label{eq:detinf} \det dT({\boldsymbol y})\vcentcolon= \lim_{n\to\infty}\prod_{j=1}^n\partial_jT_j({\boldsymbol y}_{[j]})\in C^0(U;{\mathbb R}_+) \end{equation} is well-defined (i.e., converges in $C^0(U;{\mathbb R}_+)$). Moreover \begin{equation} f_{{\pi}}(T({\boldsymbol y}))\det dT({\boldsymbol y})=f_\measi({\boldsymbol y}) \qquad\forall{\boldsymbol y}\in U. \end{equation} \end{enumerate} \end{theorem} \begin{remark}\label{rmk:knotheS} Switching the roles of $f_\measi$ and $f_{\pi}$, for $S\vcentcolon= T^{-1}$ it holds that $f_\measi(S({\boldsymbol y}))\det dS({\boldsymbol y})=f_{\pi}({\boldsymbol y})$ for all ${\boldsymbol y}\in U$, where $\det dS({\boldsymbol y})\vcentcolon= \lim_{n\to\infty}\prod_{j=1}^n\partial_jS_j({\boldsymbol y}_{[j]})$ is well-defined. \end{remark} \section{Analyticity of $T$}\label{sec:infanalyticity} In this section we investigate the domain of analytic extension of $T$.
To state our results, for $\delta>0$ and $D\subseteq{\mathbb C}$ we introduce the complex sets \begin{equation*} \cB_\delta\vcentcolon= \set{z\in{\mathbb C}}{|z|<\delta}\qquad\text{and}\qquad \cB_\delta(D)\vcentcolon= \set{z+y}{z\in\cB_\delta,~y\in D}, \end{equation*} and for $k\in\N$ and ${\bm \delta}\in (0,\infty)^k$ \begin{equation*} \cB_{\bm \delta}\vcentcolon= \bigtimes_{j=1}^k\cB_{\delta_j}\qquad\text{and}\qquad \cB_{\bm \delta}(D) \vcentcolon= \bigtimes_{j=1}^k \cB_{\delta_j}(D), \end{equation*} which are subsets of ${\mathbb C}^k$. Their closures will be denoted by $\bar\cB_\delta$, etc. If we write $\cB_{\bm \delta}(\U{1})\times U$ for ${\bm \delta}\in (0,\infty)^k$, we mean the elements ${\boldsymbol y}\in{\mathbb C}^\N$ with $y_j\in \cB_{\delta_j}(\U{1})$ for $j\le k$ and $y_j\in \U{1}$ otherwise. Subsets of ${\mathbb C}^\N$ are always equipped with the product topology. In this section we analyze the domain of holomorphic extension of each component $T_k:\U{k}\to\U{1}$ of $T$ to subsets of ${\mathbb C}^k$. The reason we are interested in such statements is that they allow us to upper bound the expansion coefficients w.r.t.\ certain polynomial bases: for a multiindex ${\bm \nu}\in \N_0^k$ (where $\N_0=\{0,1,2,\dots\}$), let $L_{\bm \nu}({\boldsymbol y})=\prod_{j=1}^kL_{\nu_j}(y_j)$ be the product of the one dimensional Legendre polynomials normalized in $L^2(\U{1},\mu)$. Then $(L_{\bm \nu})_{{\bm \nu}\in\N_0^k}$ forms an orthonormal basis of $L^2(\U{k},\mu)$.
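Concretely, the normalization w.r.t.\ $\mu$ means $L_n=\sqrt{2n+1}\,P_n$ with $P_n$ the classical Legendre polynomials, since $\int_{-1}^1 P_n(y)^2\,\mathrm{d}y=\frac{2}{2n+1}$. A short numerical check of this orthonormality in $L^2(\U{1},\mu)$ (our own sketch, using NumPy):

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Classical Legendre polynomials P_n satisfy int_{-1}^1 P_n^2 dy = 2/(2n+1),
# so L_n := sqrt(2n+1) * P_n is normalized in L^2(U_1, mu) with mu = dy/2.
nodes, weights = leg.leggauss(50)    # Gauss-Legendre rule for int_{-1}^1 . dy
weights = weights / 2.0              # turn it into integration against mu

def L_n(n, y):
    c = np.zeros(n + 1)
    c[n] = 1.0                       # coefficient vector selecting P_n
    return np.sqrt(2 * n + 1) * leg.legval(y, c)

# Gram matrix of L_0,...,L_4 w.r.t. mu: should be the identity
G = np.array([[np.sum(L_n(m, nodes) * L_n(n, nodes) * weights)
               for n in range(5)]
              for m in range(5)])
print(np.allclose(G, np.eye(5)))     # -> True
```

Since the 50-point Gauss rule is exact for polynomials of degree up to 99, the Gram matrix is the identity up to rounding.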
Hence we can expand $\partial_kT_k({\boldsymbol y}_{[k]})=\sum_{{\bm \nu}\in\N_0^k}l_{k,{\bm \nu}} L_{\bm \nu}({\boldsymbol y}_{[k]})$ for ${\boldsymbol y}\in U$, with the Legendre coefficients \begin{equation}\label{eq:lkbsnu} l_{k,{\bm \nu}}=\int_{\U{k}}\partial_kT_k({\boldsymbol y}_{[k]})L_{\bm \nu}({\boldsymbol y}_{[k]})\;\mathrm{d}\mu({\boldsymbol y}_{[k]})\in{\mathbb R}. \end{equation} Analyticity of $T_k$ (and thus of $\partial_kT_k$) on the set $\cB_{\bm \delta}(\U{1})$ implies bounds of the type (see Lemma \ref{lemma:legest}) \begin{equation}\label{eq:legbound} |l_{k,{\bm \nu}}|\le C\norm[{L^\infty(\cB_{\bm \delta}(\U{1}))}]{\partial_kT_k}\prod_{j=1}^k (1+\delta_j)^{-\nu_j}. \end{equation} Here $C$ in particular depends on $\min_j \delta_j>0$. The exponential decay in each $\nu_j$ leads to exponential convergence of truncated sparse Legendre expansions. Once we have approximated $\partial_kT_k$, we integrate this term in $y_k$ to obtain an approximation to $T_k$. The reason for not approximating $T_k$ directly is explained after Prop.~\ref{PROP:COR:DININF} below; see \eqref{eq:l0trivial}. The size of the holomorphy domain (the size of ${\bm \delta}$) determines the constants in these estimates---the larger the entries of ${\bm \delta}$, the smaller the upper bound \eqref{eq:legbound} and the faster the convergence. We are now in position to present our main technical tool for finding suitable holomorphy domains of each $T_k$ (or equivalently $\partial_kT_k$). We will work under the following assumption on the two densities $f_\measi:U\to (0,\infty)$ and $f_{\pi}:U\to (0,\infty)$.
The assumption is a modification of \cite[Assumption 3.5]{zm1}. \begin{assumption}\label{ass:infdens} For constants $C_1$, $M>0$, $L<\infty$, $k\in\N$, and ${\bm \delta}\in (0,\infty)^k$, the following hold: \begin{enumerate}[label=(\alph*)] \item\label{item:cordinN:0inf} $f\in C^0(\cB_{{\bm \delta}}(\U{1})\times U;{\mathbb C})$ and $f:U\to{\mathbb R}_+$ is a probability density, \item\label{item:cordinN:1inf} ${\boldsymbol x}\mapsto f({\boldsymbol x},{\boldsymbol y})\in C^1(\cB_{{\bm \delta}}(\U{1});{\mathbb C})$ for all ${\boldsymbol y}\in U$, \item\label{item:cordinN:2inf} $M\le |f({\boldsymbol y})|\le L$ for all ${\boldsymbol y}\in \cB_{{\bm \delta}}(\U{1})\times U$, \item\label{item:cordinN:3inf} $\sup_{{\boldsymbol y}\in \cB_{{\bm \delta}}\times\{0\}^\N}|f({\boldsymbol x}+{\boldsymbol y})-f({\boldsymbol x})| \le C_1$ for all ${\boldsymbol x}\in U$, \item\label{item:cordinN:4inf} $\sup_{{\boldsymbol y}\in \cB_{{\bm \delta}_{[j]}}\times \{0\}^{\N}}|f({\boldsymbol x}+{\boldsymbol y})-f({\boldsymbol x})|\le C_1 \delta_{j+1}$ for all ${\boldsymbol x}\in U$ and $j\in\{1,\dots,k-1\}$. \end{enumerate} \end{assumption} Such densities yield certain holomorphy domains for $T_k$, as we show in the next proposition, which is an infinite dimensional version of \cite[Theorem 3.6]{zm1}.
\begin{proposition}\label{PROP:COR:DININF} Let $k\in\N$, ${\bm \delta}\in (0,\infty)^k$ and $0< M\le L<\infty$. There exist $C_1>0$, $C_2\in (0,1]$ and $C_3>0$ solely depending on $M$ and $L$ (but not on $k$ or ${\bm \delta}$) such that if $f_\measi$ and $f_{\pi}$ satisfy Assumption \ref{ass:infdens} with $C_1$, $M$, $L$ and ${\bm \delta}$, then: with ${\boldsymbol \zeta}=(\zeta_j)_{j=1}^k$ defined by \begin{equation}\label{eq:zetak2inf} \zeta_{j}\vcentcolon= C_2 \delta_{j} \qquad \forall j\in\{1,\dots,k\}, \end{equation} it holds for all $j\in\{1,\dots,k\}$ with $R_j\vcentcolon= \partial_jT_j$ (with $T$ as in \eqref{eq:T}) that \begin{enumerate} \item\label{item:cordinN:ainf} $R_j\in C^1(\cB_{{\boldsymbol \zeta}_{[j]}}(\U{1});\cB_{ C_3}(1))$ and $\Re(R_j({\boldsymbol x}))\ge \frac{1}{C_3}$ for all ${\boldsymbol x}\in \cB_{{\boldsymbol \zeta}_{[j]}}(\U{1})$, \item\label{item:cordinN:binf} if $j\ge 2$, then $R_j:\cB_{{\boldsymbol \zeta}_{[j-1]}}(\U{1})\times \U{1}\to \cB_{\frac{C_3}{\delta_j}}(1)$. \end{enumerate} \end{proposition} Let us sketch how this result can be used to show that $T_k$ can be approximated by polynomial expansions. In Appendix \ref{app:lemma:scrfhol} we will verify Assumption \ref{ass:infdens} for densities as in \eqref{eq:density}. Prop.~\ref{PROP:COR:DININF} \ref{item:cordinN:ainf} then provides a holomorphy domain for $\partial_kT_k$, and together with \eqref{eq:legbound} we can bound the expansion coefficients $l_{k,{\bm \nu}}$ of $\partial_kT_k=\sum_{{\bm \nu}\in\N_0^k}l_{k,{\bm \nu}} L_{\bm \nu}({\boldsymbol y})$. However, there is a catch: in general one can find different ${\bm \delta}$ such that Assumption \ref{ass:infdens} holds.
The difficulty is to choose ${\bm \delta}$ in a way that depends on ${\bm \nu}$ to obtain a possibly sharp bound in \eqref{eq:legbound}. To do so we will use ideas from, e.g., \cite{CCS15}, where similar calculations were made. The outlined argument based on Prop.~\ref{PROP:COR:DININF} \ref{item:cordinN:ainf} suffices to prove convergence of sparse polynomial expansions in the finite dimensional case; see \cite[Thm.~4.6]{zm1}. In the infinite dimensional case, where we want to approximate $T=(T_k)_{k\in\N}$ with only finitely many degrees of freedom, we additionally need to employ Prop.~\ref{PROP:COR:DININF} \ref{item:cordinN:binf}: for ${\bm \nu}\in\N_0^k$ such that ${\bm \nu}\neq{\bm 0}\vcentcolon= (0)_{j=1}^k$ but $\nu_k=0$, Prop.~\ref{PROP:COR:DININF} \ref{item:cordinN:binf} together with \eqref{eq:legbound} implies a bound of the type \begin{equation}\label{eq:legbound2} |l_{k,{\bm \nu}}|=\left|\int_{\U{k}} (\partial_k T_k({\boldsymbol y}_{[k]})-1)L_{\bm \nu}({\boldsymbol y}_{[k]})\;\mathrm{d}\mu({\boldsymbol y}_{[k]}) \right|\le C \frac{1}{\delta_k} \prod_{j=1}^k (1+\delta_j)^{-\nu_j}, \end{equation} where the additional factor $\frac{1}{\delta_k}$ stems from $\norm[{L^\infty(\cB_{{\boldsymbol \zeta}_{[k-1]}}(\U{1})\times \U{1})}]{\partial_kT_k-1}\le \frac{C_3}{\delta_k}$.
Here we used the fact that $\int_{\U{k}} L_{\bm \nu}({\boldsymbol y}_{[k]})\;\mathrm{d}\mu({\boldsymbol y}_{[k]})=0$ for all ${\bm \nu}\neq{\bm 0}$, by orthogonality of the $(L_{\bm \nu})_{{\bm \nu}\in\N_0^k}$ and because $L_{\bm 0}\equiv 1$. In case $\nu_k\neq 0$, the factor $\frac{1}{1+\delta_k}$ occurs on the right-hand side of \eqref{eq:legbound}. Hence, \emph{all} coefficients $l_{k,{\bm \nu}}$ for which ${\bm \nu}\neq{\bm 0}$ are of size $O(\frac{1}{\delta_k})$. In fact one can show that even $\sum_{{\bm \nu}\neq{\bm 0}}|l_{k,{\bm \nu}}||L_{\bm \nu}({\boldsymbol y}_{[k]})|$ is of size $O(\frac{1}{\delta_k})$. Thus \begin{equation*} \partial_kT_k({\boldsymbol y}_{[k]})=\sum_{{\bm \nu}\in\N_0^k}l_{k,{\bm \nu}} L_{\bm \nu}({\boldsymbol y}_{[k]})=l_{k,{\bm 0}} L_{\bm 0}({\boldsymbol y}_{[k]})+O\left(\frac{1}{\delta_k}\right).
\end{equation*} Using $L_{\bm 0}\equiv 1$, \begin{equation*} l_{k,{\bm 0}} = \int_{\U{k}} \partial_kT_k({\boldsymbol y}_{[k]}) L_{\bm 0}({\boldsymbol y}_{[k]})\;\mathrm{d}\mu({\boldsymbol y}_{[k]}) =\int_{\U{{k-1}}} T_k({\boldsymbol y}_{[k-1]},1)-T_k({\boldsymbol y}_{[k-1]},-1)\;\mathrm{d}\mu({\boldsymbol y}_{[k-1]}) =2, \end{equation*} and therefore, if $\delta_k$ is very large, \begin{equation}\label{eq:l0trivial} T_k({\boldsymbol y}_{[k]})=-1+\int_{-1}^{y_k}\partial_kT_k({\boldsymbol y}_{[k-1]},t)\;\mathrm{d}\mu(t)\simeq -1+\int_{-1}^{y_k}l_{k,{\bm 0}}L_{\bm 0}({\boldsymbol y}_{[k]})\;\mathrm{d}\mu(t)=y_k. \end{equation} Hence, for large $\delta_k$ we can use the trivial approximation $T_k({\boldsymbol y}_{[k]})\simeq y_k$. To address this special role played by the $k$th variable in the $k$th component we introduce \begin{equation}\label{eq:gamma} \gamma(\bsvarrho,{\bm \nu})\vcentcolon= \varrho_k^{-\max\{1,\nu_k\}}\prod_{j=1}^{k-1}\varrho_j^{-\nu_j} \qquad\qquad\forall \bsvarrho\in (1,\infty)^\N,~{\bm \nu}\in\N_0^k, \end{equation} which, up to constants, corresponds to the minimum of \eqref{eq:legbound} and \eqref{eq:legbound2}.
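For concreteness, \eqref{eq:gamma} is elementary to evaluate. The following Python sketch (with hypothetical weights $\varrho_j$, chosen purely for illustration) also makes visible that a multi-index with $\nu_k=0$ is still penalized by one factor $\varrho_k^{-1}$, mirroring the extra $\frac{1}{\delta_k}$ in \eqref{eq:legbound2}:

```python
# Illustrative sketch (not the paper's implementation): the weight
# gamma(rho, nu) of eq. (eq:gamma) for a multi-index nu = (nu_1, ..., nu_k).

def gamma(rho, nu):
    """gamma(rho, nu) = rho_k^{-max(1, nu_k)} * prod_{j<k} rho_j^{-nu_j}."""
    k = len(nu)
    assert len(rho) >= k and all(r > 1 for r in rho)
    val = rho[k - 1] ** (-max(1, nu[k - 1]))   # k-th variable: always penalized
    for j in range(k - 1):
        val *= rho[j] ** (-nu[j])
    return val

rho = [2.0, 4.0, 8.0]          # hypothetical rho_j = 1 + alpha/b_j values
print(gamma(rho, (0, 0, 0)))   # nu = 0: weight rho_3^{-1} = 0.125, not 1
print(gamma(rho, (1, 0, 2)))   # 2^{-1} * 8^{-2} = 1/128
```

Larger $\varrho_j$ (i.e., smaller $b_j$) suppress the corresponding directions more strongly, which is what the ansatz-space construction below exploits.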
This quantity can be interpreted as measuring the importance of the monomial ${\boldsymbol y}^{\bm \nu}$ in the ansatz space used for the approximation of $T_k$, and we will use it to construct such ansatz spaces. \begin{remark} To explain the key ideas, in this section we presented the approximation of $T_k$ via a Legendre expansion of $\partial_k T_k$. For the proofs of our approximation results in Sec.~\ref{sec:polinfty} we instead approximate $\sqrt{\partial_k T_k}-1$ with truncated Legendre expansions. This guarantees that the approximate transport satisfies the monotonicity property, as explained in Sec.~\ref{sec:polinfty}. \end{remark} \section{Convergence of the transport}\label{sec:polinfty} { We are now in a position to state an algebraic convergence result for approximations of infinite dimensional transport maps $T:U\to U$ associated to densities of the type \eqref{eq:density}. For a triangular approximation $\tilde T=(\tilde T_k)_{k\in\N}$ to $T$ it is desirable that it retain the monotonicity and bijectivity properties, i.e., that $\partial_k\tilde T_k>0$ and $\tilde T:U\to U$ is bijective. The first guarantees that $\tilde T$ is injective and easy to invert (by successively solving the one dimensional equations $x_k=\tilde T_k(y_1,\dots,y_k)$ for $y_k$, starting with $k=1$); for the purpose of generating samples, the second property ensures that for ${\boldsymbol y}\sim \measi$, the transformed sample $\tilde T({\boldsymbol y})\sim \tilde T_\sharp\measi$ also belongs to $U$. These constraints are hard to enforce for polynomial approximations.
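As a one-dimensional illustration of this difficulty (a toy example of ours, not taken from the formal development): even the $L^2$-best polynomial truncation of a monotone bijection of $[-1,1]$ need not map $[-1,1]$ onto itself.

```python
# T(y) = (y + y^3)/2 is a monotone bijection of [-1,1]. In the Legendre
# basis, T = 0.8*P_1 + 0.2*P_3 (since y^3 = (2*P_3 + 3*P_1)/5), so the
# L^2-best degree-1 truncation is p(y) = 0.8*y: still monotone, but
# p([-1,1]) = [-0.8, 0.8], so p is no longer a bijection of [-1,1].
T = lambda y: (y + y**3) / 2
p = lambda y: 0.8 * y

print(T(1.0), p(1.0))  # 1.0 vs 0.8: the range shrinks under truncation
```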
For this reason, we use the same rational parametrization we introduced in \cite{zm1} for the finite dimensional case: For a set of $k$-dimensional multi-indices $\Lambda\subseteq \N_0^k$, define \begin{equation*} \bbP_\Lambda\vcentcolon= {\rm span}\set{{\boldsymbol y}^{\bm \nu}}{{\bm \nu}\in\Lambda}. \end{equation*} The dimension of this space equals the cardinality of $\Lambda$, which we denote by $|\Lambda|$. Let $p_k\in \bbP_\Lambda$ (where $\Lambda$ remains to be chosen) be a polynomial approximation to $\sqrt{\partial_kT_k}-1$. For ${\boldsymbol y}\in \U{k}$ set \begin{equation}\label{eq:tTk} \tilde T_k({\boldsymbol y}) \vcentcolon= -1 + 2 \frac{\int_{-1}^{y_k}\int_{\U{{k-1}}}(p_k({\boldsymbol y}_{[k-1]},t)+1)^2\;\mathrm{d}\mu({\boldsymbol y}_{[k-1]})\;\mathrm{d}\mu(t)}{\int_{\U{k}}(p_k({\boldsymbol y})+1)^2\;\mathrm{d}\mu({\boldsymbol y})}. \end{equation} It is easily checked that $\tilde T_k$ satisfies both monotonicity and bijectivity as long as $p_k\neq -1$. Thus we end up with a rational function $\tilde T_k$, but we emphasize that the use of rational functions instead of polynomials is not due to better approximation capabilities; it solely serves to guarantee bijectivity of $\tilde T:U\to U$. \begin{remark} Observe that $\Lambda=\emptyset$ gives the trivial approximation $p_k\vcentcolon= 0\in\bbP_\emptyset$ and $\tilde T_k({\boldsymbol y})=y_k$.
\end{remark} The following theorem yields an algebraic convergence rate \emph{independent of the dimension} (since the dimension is infinite) in terms of the total number of degrees of freedom for the approximation of $T$. Therefore the curse of dimensionality is overcome for densities as in Assumption \ref{ass:densities}.} \begin{theorem}\label{THM:TINF} Let $f_\measi$, $f_{\pi}:U\to (0,\infty)$ be two probability densities satisfying Assumption \ref{ass:densities} for some $p\in (0,1)$. Set $b_j\vcentcolon= \max\{\norm[X]{\psi_{\measi,j}},\norm[Y]{\psi_{{\pi},j}}\}$, $j\in\N$. There exist $\alpha>0$ and $C>0$ such that the following holds: For $j\in\N$ set \begin{equation}\label{eq:xij} \varrho_j\vcentcolon= 1+ \frac{\alpha}{b_j}, \end{equation} and with $\gamma(\bsvarrho,{\bm \nu})$ as in \eqref{eq:gamma} define \begin{equation*} \Lambda_{\varepsilon,k}\vcentcolon= \set{{\bm \nu}\in\N_0^k}{\gamma(\bsvarrho,{\bm \nu})\ge\varepsilon}\qquad\forall k\in\N. \end{equation*} For each $k\in\N$ there exists a polynomial $p_k\in\bbP_{\Lambda_{\varepsilon,k}}$ such that, with the components $\tilde T_{\varepsilon,k}$ as in \eqref{eq:tTk}, $\tilde T_\varepsilon=(\tilde T_{\varepsilon,k})_{k\in\N}:U\to U$ is a monotone triangular bijection. For all $\varepsilon>0$ it holds that $N_\varepsilon\vcentcolon= \sum_{k\in\N} |\Lambda_{\varepsilon,k}|<\infty$ and \begin{subequations}\label{eq:Talgebraic} \begin{equation}\label{eq:Talgebraica} \sum_{k\in\N}\norm[{L^{\infty}(\U{k})}]{T_k-\tilde T_{\varepsilon,k}}\le C N_\varepsilon^{-\frac{1}{p}+1} \end{equation} and \begin{equation}\label{eq:Talgebraicb} \sum_{k\in\N}\norm[{L^{\infty}(\U{k})}]{\partial_kT_k- \partial_k\tilde T_{\varepsilon,k}}\le C N_\varepsilon^{-\frac{1}{p}+1}.
\end{equation} \end{subequations} \end{theorem} \begin{remark}\label{rmk:kidentity} Fix $\varepsilon>0$. Since $N_\varepsilon<\infty$, there exists $k_0\in\N$ such that $\Lambda_{\varepsilon,k}=\emptyset$, and thus $\tilde T_{\varepsilon,k}({\boldsymbol y}_{[k]})=y_k$, for all $k\ge k_0$. \end{remark} Switching the roles of $\measi$ and ${\pi}$, Thm.~\ref{THM:TINF} also yields an approximation result for the inverse transport $S=T^{-1}$ by rational functions $\tilde S_k$ as in \eqref{eq:tTk}. Moreover, if $\tilde T$ is the rational approximation from Thm.~\ref{THM:TINF}, then its inverse $\tilde T^{-1}:U\to U$ (whose components are not necessarily rational functions) also satisfies an error bound of the type \eqref{eq:Talgebraic}, as we show next. { \begin{corollary}\label{COR:SINF} Consider the setting of Thm.~\ref{THM:TINF}. Denote $S\vcentcolon= T^{-1}:U\to U$ and $\tilde S_\varepsilon\vcentcolon= \tilde T_\varepsilon^{-1}:U\to U$. Then there exists a constant $C$ such that for all $\varepsilon>0$ \begin{subequations}\label{eq:Salgebraic} \begin{equation}\label{eq:Ssumbound} \sum_{k\in\N}\norm[{L^{\infty}(\U{k})}]{S_k-\tilde S_{\varepsilon,k}}\le C N_\varepsilon^{-\frac{1}{p}+1} \end{equation} and \begin{equation}\label{eq:dSsumbound} \sum_{k\in\N}\norm[{L^{\infty}(\U{k})}]{\partial_kS_k- \partial_k\tilde S_{\varepsilon,k}}\le C N_\varepsilon^{-\frac{1}{p}+1}. \end{equation} \end{subequations} \end{corollary} Note that both $S$ and $\tilde S_\varepsilon$ in Cor.~\ref{COR:SINF} are monotone triangular bijections, as they are inverses of such maps.} \section{Convergence of the pushforward measures}\label{sec:measinfty} Thm.~\ref{THM:TINF} established smallness of $\sum_{k\in\N}|\partial_k(T_k-\tilde T_k)|$.
The relevance of this term stems from the formal calculation (cp.~\eqref{eq:detinf}) \begin{equation*} |\det dT-\det d\tilde T|=\left|\prod_{k\in\N}\partial_kT_k- \prod_{k\in\N}\partial_k\tilde T_k\right| \le \sum_{k\in\N}|\partial_kT_k-\partial_k\tilde T_k| \prod_{j<k}|\partial_j T_j|\prod_{i>k}|\partial_i\tilde T_i|. \end{equation*} Assuming that we can bound the last two products, the determinant $\det d\tilde T$ converges to $\det d T$ at the rate given in Thm.~\ref{THM:TINF}. This allows us to bound the Hellinger distance (H), the total variation distance (TV), and the Kullback--Leibler divergence (KL) between $\tilde T_\sharp\measi$ and ${\pi}$, as we show in the following theorem. Recall that for two probability measures $\nu\ll\mu$, $\eta\ll\mu$ on $U$ with densities $f_\nu=\frac{\mathrm{d}\nu}{\mathrm{d}\mu}$, $f_\eta=\frac{\mathrm{d}\eta}{\mathrm{d}\mu}$, \begin{equation*} {\rm H}(\nu,\eta)=\frac{1}{\sqrt{2}}\norm[L^2(U,\mu)]{\sqrt{f_\nu}-\sqrt{f_\eta}},\qquad {\rm TV}(\nu,\eta)=\frac{1}{2}\norm[L^1(U,\mu)]{f_\nu-f_\eta},\qquad {\rm KL}(\nu,\eta)=\int_U \log\left(\frac{f_\nu}{f_\eta}\right)\;\mathrm{d}\nu. \end{equation*} \begin{theorem}\label{THM:MEASCONVINF} Let $f_\measi$, $f_{\pi}$ satisfy Assumption \ref{ass:densities} for some $p\in (0,1)$, and let $\tilde T_\varepsilon:U\to U$ be the approximate transport from Thm.~\ref{THM:TINF}. Then there exists $C>0$ such that for ${\rm dist}\in\{{\rm H},{\rm TV},{\rm KL}\}$ and every $\varepsilon>0$ \begin{equation}\label{eq:measdiffinf} {\rm dist}((\tilde T_\varepsilon)_\sharp \mu,{\pi}) \le C N_\varepsilon^{-\frac{1}{p}+1}. \end{equation} \end{theorem} {Next we treat the Wasserstein distance.
Recall that for a Polish space $(M,d)$ (i.e., $M$ is separable and complete with respect to the metric $d$) and $q\in [1,\infty)$, the $q$-Wasserstein distance between two probability measures $\nu$ and $\eta$ on $M$ (equipped with the Borel $\sigma$-algebra) is defined as \cite[Def.~6.1]{MR2459454} \begin{equation*} W_q(\nu,\eta)\vcentcolon= \inf_{\gamma\in\Gamma} \left(\int_{M\times M} d(x,y)^q\;\mathrm{d}\gamma(x,y)\right)^{1/q}, \end{equation*} where $\Gamma$ stands for the set of couplings of $\nu$ and $\eta$, i.e., the set of probability measures on $M\times M$ with marginals $\nu$ and $\eta$, cp.~\cite[Def.~1.1]{MR2459454}. To bound the Wasserstein distance, we employ the following proposition. It has been similarly stated in \cite[Theorem 2]{MR4120535}, but for measures on ${\mathbb R}^d$. To fit our setting, we extend the result to compact metric spaces,\footnote{The author of \cite{MR2459454} mentions that such a result is already known, but without providing a reference. For completeness we have added the proof.} but emphasize that the argument closely follows the proof of \cite[Theorem 2]{MR4120535}. As pointed out in \cite{MR4120535}, the bound in the proposition is sharp. \begin{proposition}\label{PROP:WASSERSTEIN} Let $(M,d)$ be a compact Polish space. Let $T:M\to M$ and $\tilde T:M\to M$ be two continuous functions and let $\nu$ be a probability measure on $M$ equipped with the Borel $\sigma$-algebra. Then for every $q\in [1,\infty)$ \begin{equation} W_q(T_\sharp\nu,\tilde T_\sharp\nu)\le \sup_{x \in M}d(T(x),\tilde T(x))<\infty. \end{equation} \end{proposition} To apply Prop.~\ref{PROP:WASSERSTEIN} we first have to equip $U$ with a metric.
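Before doing so, the proposition can be sanity-checked numerically in a simple one-dimensional instance (our own sketch, not part of the formal development): take $M=[0,1]$ with $d(x,y)=|x-y|$ and $\nu$ the uniform measure. For increasing maps, the quantile functions of the pushforwards are the maps themselves, so $W_q$ reduces to an $L^q$ norm of their difference.

```python
import numpy as np

# M = [0,1], d(x,y) = |x-y|, nu = uniform. For increasing maps T, Ttil,
# W_q(T#nu, Ttil#nu) = ( int_0^1 |T(u)-Ttil(u)|^q du )^(1/q),
# which is at most sup |T - Ttil|, as the proposition asserts.
T    = lambda u: u**2
Ttil = lambda u: u**2 + 0.05 * np.sin(np.pi * u)   # perturbed increasing map

u   = np.linspace(0.0, 1.0, 200001)
q   = 2.0
Wq  = np.mean(np.abs(T(u) - Ttil(u))**q) ** (1.0 / q)   # Riemann-sum quadrature
sup = np.max(np.abs(T(u) - Ttil(u)))
print(Wq, sup)   # Wq is below the sup distance
```

As expected, $W_q$ lies below the sup distance; equality can occur, which is the sharpness noted above.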
For a sequence $(c_j)_{j\in\N}\in\ell^1(\N)$ of positive numbers set \begin{equation}\label{eq:prodmet} d({\boldsymbol x},{\boldsymbol y})\vcentcolon= \sum_{j\in\N}c_j|x_j-y_j|\qquad\forall\;{\boldsymbol x},{\boldsymbol y}\in U. \end{equation} By Lemma \ref{lemma:producttop}, $d$ defines a metric that induces the product topology on $U$. Since $U$ with the product topology is compact by Tychonoff's theorem \cite[Thm.~37.3]{munkres}, $(U,d)$ is a compact Polish space. Moreover: \begin{lemma}\label{LEMMA:TCONT} Let $f_\measi$, $f_{\pi}$ satisfy Assumption \ref{ass:densities} and consider the metric \eqref{eq:prodmet} on $U$. Then $T:U\to U$ and the approximation $\tilde T_\varepsilon:U\to U$ from Thm.~\ref{THM:TINF} are continuous with respect to $d$. Moreover, if there exists $C>0$ such that, with \begin{equation}\label{eq:bj} b_j\vcentcolon= \max\{\norm[X]{\psi_{\measi,j}},\norm[Y]{\psi_{{\pi},j}}\}, \end{equation} $b_j\le Cc_j$ holds for all $j\in\N$ (cp.~Assumption \ref{ass:densities}), then $T$ and $\tilde T_\varepsilon$ are Lipschitz continuous. \end{lemma} With $d:U\times U\to{\mathbb R}$ as in Lemma \ref{LEMMA:TCONT}, $(U,d)$ is a compact Polish space and $T$ and $\tilde T_\varepsilon$ are continuous, so that we can apply Prop.~\ref{PROP:WASSERSTEIN}. Using Thm.~\ref{THM:TINF} and $\sup_jc_j\in (0,\infty)$, \begin{equation}\label{eq:Wq} W_q(T_\sharp\mu,(\tilde T_\varepsilon)_\sharp\mu)\le \sup_{{\boldsymbol y}\in U} d(T({\boldsymbol y}),\tilde T_\varepsilon({\boldsymbol y}))\le \sum_{k\in\N}c_k\norm[{L^\infty(\U{k})}]{T_k-\tilde T_{\varepsilon,k}} \le CN_\varepsilon^{-\frac{1}{p}+1}.
\end{equation} Next let us discuss why $c_j\vcentcolon= b_j$ as in \eqref{eq:bj} is a natural choice in our setting. Let $\Phi:U\to Y$ be the map $\Phi({\boldsymbol y})=\sum_{j\in\N}y_j\psi_{{\pi},j}\in Y$. In the inverse problem discussed in Ex.~\ref{ex:bayes}, we try to recover an element $\Phi({\boldsymbol y})\in Y$. For computational purposes, the problem is set up to recover instead the expansion coefficients ${\boldsymbol y}\in U$. Now suppose that ${\pi}$ is the posterior measure on $U$. Then $\Phi_\sharp{\pi}=(\Phi\circ T)_\sharp \measi$ is the corresponding posterior measure on $Y$ (the space we are actually interested in). The map $\Phi:U\to Y$ is Lipschitz continuous w.r.t.\ the metric $d$ on $U$, since for ${\boldsymbol x}$, ${\boldsymbol y}\in U$, due to $\norm[Y]{\psi_{{\pi},j}}\le b_j$, \begin{equation}\label{eq:before} \norm[Y]{\Phi({\boldsymbol x})-\Phi({\boldsymbol y})} =\normc[Y]{\sum_{j\in\N}(x_j-y_j)\psi_{{\pi},j}} \le \sum_{j\in\N}|x_j-y_j|b_j=d({\boldsymbol x},{\boldsymbol y}). \end{equation} Therefore, $\Phi\circ T:U\to Y$ and $\Phi\circ\tilde T_\varepsilon:U\to Y$ are Lipschitz continuous by Lemma \ref{LEMMA:TCONT}. Moreover, compactness of $U$ and continuity of $\Phi:U\to Y$ imply that $\Phi(U)\subseteq Y$ is compact. Hence we may apply Prop.~\ref{PROP:WASSERSTEIN} also to the maps $\Phi\circ T:U\to Y$ and $\Phi\circ\tilde T_\varepsilon:U\to Y$. This gives a bound for the pushforward measures on the Banach space $Y$.
Specifically, since $\norm[Y]{\Phi(T({\boldsymbol y}))-\Phi(\tilde T_\varepsilon({\boldsymbol y}))}\le d(T({\boldsymbol y}),\tilde T_\varepsilon({\boldsymbol y}))$, which can be bounded as in \eqref{eq:Wq}, we have shown: \begin{theorem}\label{thm:wassersteinconv} Let $f_\measi$, $f_{\pi}$ satisfy Assumption \ref{ass:densities} for some $p\in (0,1)$, let $\tilde T_\varepsilon:U\to U$ be the approximate transport and let $N_\varepsilon\in\N$ be the number of degrees of freedom as in Thm.~\ref{THM:TINF}. Then there exists $C>0$ such that for every $q\in [1,\infty)$ and every $\varepsilon>0$ \begin{equation*} W_q((\tilde T_\varepsilon)_\sharp \mu,{\pi}) \le C N_\varepsilon^{-\frac{1}{p}+1}, \end{equation*} and for the pushforward measures on the Banach space $Y$ \begin{equation}\label{eq:convpwY} W_q((\Phi\circ \tilde T_\varepsilon)_\sharp \mu,\Phi_\sharp {\pi}) \le C N_\varepsilon^{-\frac{1}{p}+1}. \end{equation} \end{theorem} Finally, let us discuss how to efficiently sample from the measure $\Phi_\sharp {\pi}$ on the Banach space $Y$. As explained in the introduction, for a sample ${\boldsymbol y}\sim\measi$ we have $T({\boldsymbol y})\sim {\pi}$ and $\Phi(T({\boldsymbol y}))=\sum_{j\in\N}T_j({\boldsymbol y}_{[j]})\psi_{{\pi},j}\sim \Phi_\sharp{\pi}$. To truncate this series, introduce $\Phi_s({\boldsymbol y}_{[s]})\vcentcolon= \sum_{j=1}^s y_j\psi_{{\pi},j}$. As earlier, denote by $\measi_s$ the marginal measure of $\measi$ on $\U{s}$.
For ${\boldsymbol y}_{[s]}\sim\measi_s$, the sample \begin{equation*} \Phi_{s}(\tilde T_{\varepsilon,[s]}({\boldsymbol y}_{[s]})) =\sum_{j=1}^{s}\tilde T_{\varepsilon,j}({\boldsymbol y}_{[j]})\psi_{{\pi},j} \end{equation*} follows the distribution $(\Phi_s\circ \tilde T_{\varepsilon,[s]})_\sharp\measi_s$, where $\tilde T_{\varepsilon,[s]}\vcentcolon= (\tilde T_{\varepsilon,k})_{k=1}^s:\U{s}\to\U{s}$. In the next corollary we bound the Wasserstein distance between $(\Phi_s\circ \tilde T_{\varepsilon,[s]})_\sharp\measi_s$ and $\Phi_\sharp{\pi}$. Note that the former is a measure on $Y$ which, in contrast to the latter, is supported on an $s$-dimensional subspace. Thus in general neither of these two measures needs to be absolutely continuous w.r.t.\ the other. This implies that the KL divergence, the total variation distance, and the Hellinger distance, in contrast with the Wasserstein distance, need not tend to $0$ as $\varepsilon\to 0$ and $s\to\infty$. The corollary shows that the convergence rate in \eqref{eq:convpwY} can be retained by choosing the truncation parameter $s$ equal to $N_\varepsilon$ (the number of degrees of freedom in Thm.~\ref{THM:TINF}); in fact, it even suffices to truncate after the maximal $k$ such that $\Lambda_{\varepsilon,k}\neq\emptyset$, as described in Rmk.~\ref{rmk:dNeps}. \begin{corollary}\label{COR:MEASCONVINF} Consider the setting of Thm.~\ref{thm:wassersteinconv} and assume that $(b_j)_{j\in\N}$ in \eqref{eq:bj} is monotonically decreasing. Then there exists $C>0$ such that for every $q\in [1,\infty)$ and $\varepsilon>0$ \begin{equation*} W_q((\Phi_{N_\varepsilon}\circ \tilde T_{\varepsilon,[N_\varepsilon]})_\sharp \measi_{N_\varepsilon},\Phi_\sharp {\pi}) \le C N_\varepsilon^{-\frac{1}{p}+1}.
\end{equation*} \end{corollary} \begin{remark} Convergence in $W_q$ implies weak convergence \cite[Theorem 6.9]{MR2459454}. \end{remark} \begin{remark}\label{rmk:dNeps} Inspecting the proof of Thm.~\ref{THM:TINF}, we have $N_\varepsilon\le C\varepsilon^{-p}$, cp.~\eqref{eq:nepsest}. Thus the maximal activated dimension (represented by the truncation parameter $s=N_\varepsilon$) increases only algebraically as $\varepsilon\to 0$. The approximation error also decreases algebraically, like $\varepsilon^{1-p}$, as $\varepsilon\to 0$, cp.~\eqref{eq:sumest}. Moreover, the function $\Phi_{s_\varepsilon}\circ \tilde T_{\varepsilon,[s_\varepsilon]}$ with $s_\varepsilon\vcentcolon= \max\set{k\in\N}{\Lambda_{\varepsilon,k}\neq\emptyset}$ leads to the same convergence rate in Cor.~\ref{COR:MEASCONVINF}. In other words, we only need to use the components $\tilde T_{\varepsilon,k}$ for which $\Lambda_{\varepsilon,k}\neq \emptyset$. \end{remark} } \section{Conclusions}\label{sec:conclusions} {The use of transportation methods to sample from high-dimensional distributions is becoming increasingly popular for solving inference problems and performing other machine learning tasks. Therefore, questions of when and how these methods can be successful are of great importance, but are thus far not well understood. In the present paper we analyze the approximation of the KR transport in the high- (or infinite-) dimensional regime on the bounded domain $[-1,1]^\N$. Under the setting presented in Sec.~\ref{sec:main}, it is shown that the transport can be approximated without suffering from the curse of dimensionality. Our approximation is based on polynomial and rational functions, and we provide an explicit \textit{a priori} construction of the ansatz space. Moreover, we show how these results imply that it is possible to efficiently sample from certain high dimensional distributions by transforming a lower dimensional latent variable.
As we have discussed in the finite dimensional case \cite[Sec.~5]{zm1}, from an approximation viewpoint there is also a link to neural networks, which can be established via \cite{yarotsky,pmlr-v70-telgarsky17a}, where it is proven that ReLU neural networks are efficient at emulating polynomials and rational functions. While we have not developed this aspect further in the present manuscript, we mention that neural networks are used in the form of normalizing flows \cite{pmlr-v37-rezende15,papamakarios2019normalizing} to couple distributions in spaces of equal dimension, and, for example, in the form of generative adversarial networks \cite{NIPS2014_5ca3e9b1,pmlr-v70-arjovsky17a} and, more recently, injective flows \cite{2002.08927,2102.10461}, to map lower-dimensional latent variables to samples from a high-dimensional distribution. In Sec.~\ref{sec:measinfty} we provided some insight (for the present setting, motivated by inverse problems in science and engineering) into how low-dimensional the latent variable can be, and how expressive the transport should be, to achieve a certain accuracy in the Wasserstein distance (see Cor.~\ref{COR:MEASCONVINF}). Further examining this connection and generalizing our results to distributions on unbounded domains (such as ${\mathbb R}^\N$ instead of $[-1,1]^\N$) will be the topic of future research.}
\section{Introduction} Smartphones are one of the most important inventions of modern society. In February 2015, smartphone penetration was about 75\% in the U.S. \cite{Report:smartphonepenetration}, and this figure is still growing. With improvements in hardware processing capability and software development environments, the smartphone is no longer just a handset for making phone calls: it also lets the user play games, watch movies, browse web pages, and so on. On the other hand, users are often frustrated by limited battery capacity -- applications running in parallel can easily drain a fully-charged battery within 24 hours. Software optimization by the compiler achieves very little energy-saving for mobile devices, since besides energy efficiency, a compiler for a mobile device has to consider many other important factors, such as limited memory usage and quick responses to user interactions. The Android platform, for instance, employs the Just-In-Time (JIT) compiler \cite{justintime}, also known as the dynamic compiler. Its optimization window is generally as small as one or two basic blocks, in order to use less memory and to deliver the performance boost quickly. However, the small window largely restricts the space of energy-saving strategies. More powerful code refactoring is needed, but this is beyond the scope of compilers and relies more on developers. Unfortunately, current software development is performed in an energy-oblivious manner: throughout the engineering life cycle, most developers and designers are blind to the energy usage of the code they write. Yet developers are eager for knowledge of energy-aware programming techniques. In the most popular software development forum, \textsc{StackOverFlow} \cite{stackoverflow}, energy-related questions are marked as favorites 3.89 times more often than the average question \cite{Pinto_miningqusetion}.
Furthermore, among the energy-related questions, code-design-related ones are prominent. Moreover, it has been estimated that energy savings by a factor of as much as three to five could be achieved solely by software optimization \cite{Edwards:lssmlps}. To realize this, the first step is to analyze the energy attributes of source code at different levels of granularity and from different points of view. In order to expose the energy attributes of code, energy modeling of code is needed to bridge the gap between high-level source code and the low-level hardware, where energy is actually consumed. However, traditional bottom-up modeling techniques \cite{Tiwari:power_analysis_embedded,bran:instruction-level_model,gangqu:function-level_powermodel,Simunic:2000:source_code_optimization} face obstacles when the software stack of the system consists of a number of abstraction layers. On the Android platform, for instance, the source code is written in Java, translated to Java byte-code, further to Dalvik \cite{Android:Dalvik} byte-code, native code and machine code, and finally executes on the processors, where it dissipates energy. Consequently, the modeling task has to describe the links between all these layers. Instead of building a software energy model layer by layer, another approach to acquiring software-level energy information is to use hardware readings -- such as CPU state residency, CPU utilization, L1/L2 cache misses and battery traces -- as predictors of software energy use \cite{Dong_selfconstructivemodel,Pathak_whereisenergy,Zhang_onlinepowerestimation,Wang_batterytrace}. However, these approaches are only capable of obtaining energy information at a coarse level of granularity, such as methods or even whole applications. Two pieces of work \cite{HaoShuai:2013:EstMobileApp,sourceline_energy} produce source-line energy information: the former requires low-level energy profiles, while the latter employs accurate measurement to obtain the energy dissipation of source lines.
Energy information on blocks or more coarse-grained units can identify the hot spots in the code, but it gives few clues about how to change the code to improve it. The source line is also not an appropriate level of granularity at which to provide energy information. For instance, the header of a \texttt{for} loop contains three segments -- \textit{initialization}, \textit{boolean} and \textit{update} -- in the same source line, but these usually have distinct numbers of executions. Our latest work constructs a source-level energy model based on ``energy operations'', which is more fine-grained and gives more valuable information for code optimization. By ``source-level'' we mean that the energy costs of running a program are all attributed to source code constructs, despite the fact that much of the energy consumed is actually accounted for by things outside the source code, such as the operating system. Thus the model is bound to be an approximation, yet as our results show, it is precise enough to give useful information. Compared with coarse-grained techniques, the operation-based model has several advantages in guiding energy-aware programming: \begin{itemize} \item The energy operations are the basic units that constitute the energy consumption of the entire application. Thus, using the energy estimates of the operations, developers can assess the effects of code changes on the energy consumption of the code. \item It provides more valuable information for refactoring. For example, our experiment shows that method invocation is one of the most expensive operations, suggesting that in some cases we may inline thin methods, at the cost of losing some of the structural integrity of the code. \end{itemize} In this paper, we propose an energy-aware programming approach guided by a fine-grained energy model of source code.
The procedure of the approach is summarized as follows: \begin{itemize} \item We build an operation-based source-level energy model, obtained by analyzing the data produced in a range of well-designed execution cases. \item We perform energy accounting based on the model, at operation and block level, to capture the key energy characteristics of the code. \item We focus our efforts on the most costly blocks, where we refactor the code to remove, reduce or replace the expensive operations, while maintaining its logical consistency with the original code. \end{itemize} Our target platform is an Android development board with two ARM quad-core CPUs, and the source code in our study is a game engine used in games, demos and other interactive applications. We evaluate the approach in three game scenarios, and the experimental results show that it can reduce energy consumption by between 6.4\% and 50.2\%, depending on the scenario. The generality of the approach goes beyond the boundaries of the case study described here. Firstly, the energy-aware programming approach can be used in developing the large class of applications that are based on the game engine, comprising many interactive applications with rich user interfaces. Secondly, the approach is applicable to other kinds of applications: the choice of energy operations depends only on the Java source language, and the techniques for designing test cases, regression analysis and code optimization can be applied to other application domains. In the rest of this paper, we begin with the identification of energy operations in Section \ref{Section_basicEnergyOp}. In Sections \ref{Section_experimentsetup} and \ref{Section:Model}, we briefly summarize (to make the paper self-contained) the setup and construction of the energy model.
Based on the model we are able to capture energy characteristics and optimize the source code in three different scenarios, \texttt{Click \& Move}, \texttt{Orbit} and \texttt{Waves}, as seen in Sections \ref{Section_clickmove}, \ref{Section_orbit} and \ref{Section_waves} respectively. \iffalse \begin{figure} \centering \includegraphics[width = 0.44\textwidth]{framework} \caption{Framework}\label{fig:framework} \end{figure} \fi \iffalse \begin{table*}\label{EquOcrCmp} \centering \caption{Equations for Occurrence Computing} \begin{tabular}{|c|c|} \hline\hline \textbf{if} & N_e(If\_pred) = N_e(Log\_ins \quad in \quad \rho(If\_block)) \\ & N_e(If\_block) = N_e(Log\_ins \quad in \quad If\_block) \\ \hline & N_e(For\_init) = N_e(Log\_ins \quad in \quad \rho(For\_block)) \\ \textbf{for} & N_e(For\_pred) \approx N_e(Log\_ins \quad in \quad For\_block) \\ & + N_e(Log\_ins \quad in \quad \rho(For\_block)) \times 50\% \\ & N_e(For\_updt) = N_e(Log\_ins \quad in \quad For\_block) \\ \hline & N_e(While\_pred) \approx N_e(Log\_ins \quad in \quad While\_block) \\ \textbf{while} & + N_e(Log\_ins \quad in \quad \rho(While\_block)) \times 50\% \\ & N_e(While\_block) = N_e(Log\_ins \quad in \quad While\_block) \\ \hline \textbf{switch}& N_e(Switch\_pred) = N_e(Log\_ins \quad in \quad \rho(Switch\_block)) \\ & N_e(Switch\_block) = N_e(Log\_ins \quad in \quad Switch\_block) \\ \hline\hline\end{tabular} \end{table*} \fi \section{Basic Energy Operations}\label{Section_basicEnergyOp} \iffalse \begin{table} \centering \caption{Basic Energy Operations\label{EnOps}} \begin{tabular}{cc} \hline Type & Operation \\ \hline \cellcolor{Gray} & \cellcolor{Gray} Addition, Subtraction, Increment\\ \multirow{-2}*{ \cellcolor{Gray} Arithmetic}& \cellcolor{Gray} Multiplication, Division, Decrement\\ Boolean & And, Or, Not\\ \cellcolor{Gray} & \cellcolor{Gray} Greater, Less, Equal\\ \multirow{-2}*{ \cellcolor{Gray} Comparison} & \cellcolor{Gray} Greater or equal, Less or equal \\ & BitAnd, BitOr, 
SignedBitShiftRight \\ \multirow{-2}{*}{ Bitwise} & SignedBitShiftLeft \\ \rowcolor{Gray} Reference & Array reference, Field reference\\ Function & Argument passing, Returning value\\ \rowcolor{Gray} Control & Block goto, Function Invocation\\ Others & Declaration, Type conversion\\\hline \end{tabular} \end{table} \fi \begin{table} \centering \caption{Examples of Energy Operations\label{EnOps}} \begin{tabular}{ll} \hline Operation & \textit{Identified where:} \\ \hline Method Invocation & \textit{ one method is called}\\ Parameter\_Object & \textit{ Object is one parameter of the method}\\ Return\_Object & \textit{ the method returns an Object}\\ Addition\_int\_int & \textit{ addition's operands are integers }\\ Multi\_float\_float & \textit{ multiplication's operands are floats }\\ Increment & \textit{ symbol "$++$" appears in code}\\ And & \textit{ symbol "$\&\&$" appears in code}\\ Less\_int\_float & \textit{ "$<$"'s operands are integer and float}\\ Equal\_Object\_null & \textit{ "$==$"'s operands are Object and null}\\ Declaration\_int & \textit{ one integer is declared}\\ Assign\_Object\_null & \textit{ assignment's operands are Object and null}\\ Assign\_char[]\_char[] & \textit{ assignment's operands are arrays of chars}\\ Array Reference & \textit{ one array element is referred to}\\ Block Goto & \textit{ the code execution goes to a new block}\\ \hline \end{tabular} \end{table} Energy operations are identified directly from the source code. The enumeration of the operations is inspired by the semantics of Java \cite{Bogdanas_Semantics}, which specifies the operational meaning, or behavior, of Java, the target language in our experiment. We identify the semantic operations that act on the program state and are potentially energy-consuming, and take these as our energy operations; operations with little or no energy effect are automatically identified as such by the regression analysis at a later stage.
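To make the mapping from source text to energy operations concrete, consider the short Java fragment below. It is our own illustration, not code from the engine; the comments name the operations, in the style of Table \ref{EnOps}, that each statement would be charged with.

```java
// Hypothetical Java fragment annotated with the energy operations
// (in the style of Table 1) that each statement triggers when executed.
public class EnergyOpsExample {

    // Calling this method costs one Method Invocation, plus argument
    // passing for (xs, i, factor) and a return of the float value.
    public static float scale(float[] xs, int i, float factor) {
        float x = xs[i];            // declaration, Array Reference, assignment
        i++;                        // Increment
        if (factor > 1.0f && i < xs.length) {   // comparisons, And, Block Goto
            x = x * factor;         // Multi_float_float, assignment
        }
        return x;                   // returning a value
    }

    public static void main(String[] args) {
        System.out.println(scale(new float[]{2.0f, 3.0f}, 0, 2.0f));
    }
}
```

Summing, per block, how often each such operation occurs is what the later accounting stages build on.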
Table \ref{EnOps} lists 14 representative operations out of a total of 120 used in the experiment. They include arithmetic calculations such as \textit{Multi\_float\_float} and \textit{Addition\_int\_int}, in which the operand types are explicit, as well as \textit{Increment}, whose operand is implicitly an integer. Boolean operations and comparisons, such as \textit{And}, \textit{Less\_int\_float} and \textit{Equal\_Object\_null}, form another major part. \textit{Method Invocation} and \textit{Block Goto} matter for the control flow, which plays a key role in the execution of the code. Assignments and \textit{Array Reference} turn out, perhaps unexpectedly, to account for a significant share of the application's energy consumption, as will be shown in Section \ref{Section_Analysis}. \begin{table} \centering \caption{Examples of Library Functions\label{Libaray_functions}} \begin{tabular}{ll} \hline Class & $\qquad$Function \\ \hline ArrayList & \textit{add, get, size, isEmpty, remove} \\ & \textit{glBindTexture, glDisableClientState } \\ & \textit{glDrawElements, glEnableClientState} \\ GL10 & \textit{glMultMatrixf, glTexCoordPointer} \\ & \textit{glPopMatrix, glPushMatrix} \\ & \textit{glTexParameterx, glVertexPointer}\\ Math & \textit{max, pow, sqrt, random} \\ FloatBuffer & \textit{position, put} \\ \hline \end{tabular} \end{table} The game engine application, like many others, also employs a diversity of library functions. Unlike ordinary source code, which is interpreted at run-time, the key parts of the library code are compiled into native code before execution, and some parts are written in other languages at lower levels of the software stack. On the other hand, usually only a limited number of library functions (67 in our experiment) are frequently called in one application, so we treat each of them as a basic modeling unit. Examples of heavily used library functions in the experiment are shown in Table \ref{Libaray_functions}.
For instance, the functions in the \textit{GL10} class are responsible for graphics computation. \section{Experimental Setup}\label{Section_experimentsetup} In this section and the next, we summarize the construction of the energy model, including the setup of the target device and the design principles of the execution cases. Further details can be found in our recent work\footnote{To preserve anonymity in the review process, the citation is hidden}. \subsection{Target Device}\label{Section_target_measurement} We employ an Odroid-XU+E development board \cite{target:odroid} as the target device. It has two ARM quad-core CPUs: a Cortex-A15 with a 2.0 GHz clock rate and a Cortex-A7 at 1.5 GHz. The eight cores are logically grouped into four pairs, each consisting of one big and one small core, so from the operating system's point of view there are four logical cores. In our experiment, we turn off the small cores and run the workload on the big cores at a fixed clock frequency of 1.1 GHz, in order to remove the influence of voltage, clock rate and CPU performance on the power usage. The Odroid-XU+E has a built-in power monitoring tool that measures the voltage and current of the CPUs at a sampling frequency of 30 Hz. \subsection{Target Source Code} The target source code is the Cocos2d-Android \cite{code:cocos2d} game engine, a framework for building games, demos and other interactive applications such as virtual reality; it also implements a fully-featured physics engine. Games are increasingly popular on mobile phones and include ever more elaborate, energy-consuming features that demand high CPU performance. This paper demonstrates energy modeling, accounting and improvement for the source code of the game engine, and evaluates the improvement in three game scenarios.
\subsection{Design of Execution Cases}\label{Section_sourcecode_casedesign} The execution cases whose energy usage is measured and analyzed represent typical sequences of actions during gameplay, including user inputs. We focus on three scenarios: \texttt{Click \& Move}, \texttt{Orbit} and \texttt{Waves}. In the \texttt{Click \& Move} scenario, the sprite (the character in the game) moves to the position where a tap occurs. In the \texttt{Orbit} scenario, the sprite, together with the grid background, spins in three-dimensional space. In the \texttt{Waves} scenario, the sprite scales up and down while the grid background undulates like a flowing wave. In both the \texttt{Orbit} and \texttt{Waves} scenarios, the animation restarts from the beginning whenever and wherever a tap occurs. To simulate the game scenarios under different sequences of user inputs, we write scripts with the Android Debug Bridge (ADB) \cite{adb:android}, a command-line tool connecting the host to the target device, to automatically feed the input sequences to the target device. In order to obtain a more varied set of execution cases, and thus a more precise model, we vary the executions of individual basic blocks in the code. This is achieved by systematically removing a set of blocks for each execution case, guided by the control flow graph extracted with the Soot tool \cite{soot:callgraph}; we ensure that each block is removed in some execution case. Thus an execution case is made up of one user input sequence and one set of basic blocks.
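The pairing of input sequences with removed block sets can be sketched as follows. The class and method names are ours, for illustration only, and the partition-based removal scheme shown is one simple way to guarantee that each block is removed in some execution case; our actual tooling may choose block sets differently.

```java
import java.util.*;

// Illustrative sketch (hypothetical names): an execution case pairs one
// scripted input sequence with one set of basic blocks removed from the code.
public class ExecutionCases {
    public record Case(String inputSequence, Set<Integer> removedBlocks) {}

    // Partition block ids 0..numBlocks-1 into k groups and remove one group
    // per case, so that every block is removed in some execution case.
    public static List<Case> generate(List<String> inputSequences, int numBlocks, int k) {
        List<Case> cases = new ArrayList<>();
        for (String seq : inputSequences) {
            for (int g = 0; g < k; g++) {
                Set<Integer> removed = new HashSet<>();
                for (int b = g; b < numBlocks; b += k) removed.add(b);
                cases.add(new Case(seq, removed));
            }
        }
        return cases;
    }
}
```

Varying which blocks execute in each case is what later lets the regression separate the costs of the individual operations.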
\iffalse \section{Data Collection}\label{Section_dataCollection} \begin{figure} \centering \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=\textwidth]{for_cfg_new.pdf} \caption{\texttt{for} loop} \label{fig:for_loop} \end{subfigure}% \begin{subfigure}[b]{0.24\textwidth} \centering \raisebox{5mm}{\includegraphics[width=\textwidth]{while_cfg_new.pdf}} \caption{\texttt{while} loop} \label{fig:while_loop} \end{subfigure} \caption{Block division of \texttt{for} and \texttt{while} loops in control flow graph.}\label{fig:block_division} \end{figure} In this section, we describe the collection of data on the number of times each operation executes and the energy consumption of an execution case, based on which we construct the energy model. \iffalse \newtheorem{theorem}{Definition} \begin{theorem}\label{theorem_1} A block is a set of gathered statements. Each node (statement) in a block has only one in-edge and one out-edge in the control flow graph, but the start point of the block could have more than one in-edge, the end point could have more than one out-edge. \end{theorem} \fi \subsection{Number of Executions of Operations} To obtain the number of times that each operation executes in an execution case, we need to determine at which level of granularity to track the execution. We choose the level of "blocks". A block is a sequence of consecutive statements, without loops or branches. It is sufficient to track block executions, since if one part of a block is processed, the rest certainly will be processed as well. We could consider collecting data at other levels of granularity. Tracing individual statements might overload the capacity of the target device. On the other hand, methods or classes are unsuitable execution units, since we cannot determine which parts of the method or class will be active during the execution, and this information about energy operations is lost. We then divide the source code into blocks. 
For individual syntactic structures, we deal with block division case by case. \texttt{For} loops and \texttt{while} loops are handled as shown in Figure \ref{fig:block_division}. In a \texttt{for} loop, the header usually has three segments which are \textit{initialization}, \textit{boolean} and \textit{update}. They are divided into three different blocks. Similarly, we set the \texttt{while} header itself as a block ("block 2" in Figure \ref{fig:while_loop}). In order to build the log, we instrument the source code with a log instruction at the beginning of each block. \begin{figure} \centering \includegraphics[width = 0.35\textwidth]{operation_list.pdf} \caption{The flow of the operation-execution data collection.}\label{fig:data_collection_flow} \end{figure} The generic view of the collection of the operation-execution data is displayed in Figure \ref{fig:data_collection_flow}. We build a dictionary showing, for each block, the number of occurrences within it of each energy operation, such as those in Table \ref{EnOps}. This dictionary is built using a parser that traverses all the blocks in the code. Then, using the log file recording the processed blocks, together with the dictionary, we can sum up the number of times that each energy operation is executed during an execution case. To be more precise, let $B_i$ be the number of times that the $i^{th}$ block is executed (this is obtained from the log file). Let $O_{i,j}$ be the number of occurrences of operation $j$ in block $i$ (this is obtained from the dictionary). Then the total number of executions of the $j^{th}$ operation is $\sum_{i=1}^{n} (B_i * O_{i,j})$, where $n$ is the total number of blocks. 
\iffalse \begin{equation}\label{op_count_eqt}N_e(op_i)=\sum_{}^{block_j\, \in\, BLK(op_i)}\{N_e(block_j)\;\cdot\;N_o(op_i,\; block_j)\}\end{equation} \begin{displaymath}where\quad op_i\, \in\, Energy\; Ops\end{displaymath} \fi \iffalse \begin{algorithm}[H] previous code\; \If{boolean}{ body\; } following code; } \end{algorithm}\\ \begin{algorithm}[H] previous code\; \While{boolean}{ body\; } following code; } \end{algorithm}\\ \begin{algorithm}[H] previous code\; \For{initialization; boolean; update}{ body\; } following code; } \end{algorithm}\\ \fi \subsection{Energy Approximation from Power Samples} We write a script to obtain the power samples from the built-in measurement component with a frequency of 30 Hz. The power samples are the discrete values sampled from the power trace; we approximate energy consumption by calculating Equation (\ref{Energy_Equation}): $p=\emph{power}(t)$ is the power trace, that is, the continuous power-vs-time function; $\emph{power}(t_i)$ is the power sample at time-stamp $t_i$; $\Delta_i$ equals to $t_i - t_{i-1}$, which is the interval between two sequential samples. \begin{equation}\label{Energy_Equation}E=\int_{t_0}^{t_n}\textit{power}(t)\,dt \;\approx\sum_{i=1}^{n}\textit{power}(t_i)\cdot\Delta_i\; \end{equation} \begin{displaymath}\quad where \quad t_0\le t_1\le t_2 \cdots \le t_{n-1}\le t_{n} \end{displaymath} \fi \iffalse \subsection{Challenges in Practice } \textbf{Measurement limitation:} the sampling rate of the built-in power monitor is 30 Hz. However, the instruction execution rate is about several million per second. That means, one power sample measures the energy cost of hundreds of thousand instructions. Even though the state of the art of the power measurement can reach a sampling rate of tens of KHz \cite{Jiang_bestpowermeter}, one power sample still includes up to thousands of instructions. 
To deal with this problem, we first lengthen the sessions of all the execution cases to above 100 seconds, and then run each case for ten times to calculate their average energy cost. Compared with the execution cases that only run once with sessions around one second, this approach can reduce the error of measuring energy consumption of the code by three orders of magnitude. \\\\ \textbf{Run-time context:} during the running of the application, the Dalvik virtual machine performs garbage collection, which is not part of the application and still could be included in the power samples. The Dalvik virtual machine produce time-stamp logs when launching the garbage collection procedure. We consider the garbage collection as one library function, so it will be integrated in the model. \\\\ \textbf{Code instrumentation and power reading script:} although the instrumentation is at block level rather than statement level, its impact on energy consumption is still not negligible and its cost is as much as 50\% of the application's energy consumption itself. Also, the energy cost of the power reading script is up to 5\% of the application's consumption. We followed three experimental principles to address this problem. Firstly, for each execution case, the log of the execution path and of the power samples are separated into two separate runs. In the first round, we record the execution path without reading power samples. In the second round, we only trace power and disable the instrumented log instructions. So for each execution case, the instrumentation for logging the execution path will not influence the power samples. Secondly, in each of the two runs, the main process of the application is allocated to one CPU core, while the thread logging execution path or power samples is allocated to another CPU core, minimizing effects due to interaction of the threads. 
Thirdly, we design one "idle execution case" paired with each execution case; this only runs the power reading script without the application. By this means we can get the energy consumption of the main application process by excluding the cost of the "idle execution case" from the execution case. Note that the durations of execution cases are different, so we need to have a distinct "idle execution case" for each execution case. In summary, each execution case will be run 21 times: once for tracing the execution path; ten times for calculating the average energy consumption of the "idle execution case", and ten times for calculating average energy consumption of the execution case. \begin{figure*} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{training_case_wo_fs.pdf} \label{fig:train_case_wo_fs} \end{subfigure}% \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{test_case_wo_fs.pdf} \label{fig:test_case_wo_fs} \end{subfigure} \caption{The predicted, observed energy consumption in training and test cases BEFORE feature selection. The bars show the errors of predicted values }\label{fig:data_wo_fs} \end{figure*} \fi \section{Model Construction}\label{Section:Model} The entire energy use is composed of three parts: the cost of the energy operations, the cost of the library functions, and the idle cost. The model is formalized in Equation (\ref{Energy_Model}): \begin{equation}\label{Energy_Model} E = \sum_{op_i \in Energy\,Ops} Cost_{op_i} \cdot N_e(op_i) \qquad\qquad \end{equation} \begin{displaymath} + \sum_{func_i \in Lib\,Funcs} Cost_{func_i} \cdot N_e(func_i) + Idle\;Cost \end{displaymath} The cost of the energy operations is the sum of $Cost_{op_i} \cdot N_e(op_i)$ (the cost of one execution of operation $op_i$ multiplied by the number of its executions) over $op_i \in Energy\,Ops$, the set of all energy operations.
The cost of the library functions is, similarly, the sum of $Cost_{func_i} \cdot N_e(func_i)$ (the cost of one call of library function $func_i$ multiplied by the number of its calls) over $func_i \in Lib\,Funcs$, the set of library functions. The $Idle\;Cost$ is the energy consumed by the device when it runs no application, only the Android system; since the input sequences give the execution cases sessions of different lengths, the $Idle\;Cost$ differs from case to case. The model is constructed by regression analysis, which estimates the cost of each operation from the data obtained in the execution cases. \iffalse In the experiment, the three scenarios (\texttt{Click \& Move}, \texttt{Orbit} and \texttt{Waves}) separately have their own processes of data collection and model construction as different scenarios may have different sets of parameters (costs of operations) for the model (Equation (\ref{Energy_Model})). The cost of the same operation is not absolutely constant in certain cases, one of the reasons is that the values of operands influence the energy consumption of operations, as seen in \cite{Steve_data_dependent}. Our modeling approach is trying to make a good approximation of the costs of operations for individual scenarios. We set out the collected data in the matrices in Equation (\ref{equation_matrices}). The leftmost matrix ($N$) contains the execution numbers of $l$ operations (including energy operations and library functions) in $m$ execution cases, acquired as shown in Section \ref{Section_dataCollection}. Each row indicates one execution case. Each column represents one operation. The vector ($\vec{cost}$) in the middle contains the costs of $l$ operations, which are the values we are aiming to estimate. The vector ($\vec{e}$) on the right of the equal mark contains the measured entire energy costs of the execution cases.
So for each execution case, the entire energy cost is the sum of the costs of operations. It should be noticed that the energy costs $\vec{e}$ exclude the $Idle\; Cost$ which is measured when no application workload is being processed. \\ \begin{equation}\label{equation_matrices} \begin{pmatrix} n_1^{(1)} & n_2^{(1)} & ... & n_l^{(1)}\\ n_1^{(2)} & n_2^{(2)} & ... & n_l^{(2)}\\ & ...& ... & \\ n_1^{(m-1)} & n_2^{(m-1)} & ... & n_l^{(m-1)}\\ n_1^{(m)} & n_2^{(m)} & ... & n_l^{(m)} \end{pmatrix} \times \begin{pmatrix} cost_1\\ cost_2\\ ...\\ cost_l \end{pmatrix} = \begin{pmatrix} e_1\\ e_2\\ ...\\ e_{m-1}\\ e_m \end{pmatrix}\\ \end{equation} Inevitably, the power samples are not absolutely accurate. Furthermore, the energy model in reality is unlikely to be completely linear. For these reasons Equation (\ref{equation_matrices}) may be unsolvable, that is, the vector $\vec{e}$ is out of the column space of $N$. We thus employ the gradient descent algorithm \cite{Ng:Machinelearning} to compute the approximate values of $\vec{cost}$. The elements of $\vec{cost}$ are randomly initialized and then improved by the gradient descent algorithm iteratively. We first introduce the error function $J$ (computed by Equation (\ref{equation_ErrorFunction})) which indicates the quality of the model. The smaller $J$ is, the better the model is. $\vec{n^{(i)}}$ is the $i^{th}$ row in $N$, $\vec{cost}$ is the middle vector above. $\vec{n^{(i)}}\times\vec{cost}$ is the estimated energy cost for the $i^{th}$execution case, $e^{(i)}$ is its observed energy cost. $J$ first computes the sum of the squared values of the estimate errors of all the execution cases, which is afterwards divided by $2m$ to get the average value. 
\begin{equation}\label{equation_ErrorFunction} J(cost_1, cost_2,...cost_l)=\frac{1}{2m}\sum_{i=1}^{m}(\vec{n^{(i)}}\times\vec{cost}-e^{(i)})^2 \end{equation} \begin{equation}\label{equation_updatecost} cost_j := cost_j - \alpha\frac{\partial J(cost_1,...cost_j,...cost_l)}{\partial cost_j} \end{equation} \begin{displaymath} = cost_j - \alpha \frac{1}{m} \sum_{i=1}^{m}(\vec{n^{(i)}}\times\vec{cost}) \cdot n_j^{(i)} \end{displaymath} \begin{displaymath} j = 1,2,...l \end{displaymath} The idea of gradient descent is to minimize $J$ by repeatedly updating all the elements in $\vec{cost}$ with Equation (\ref{equation_updatecost}) until convergence. The partial derivative of the function $J$ on $cost_j$ gives the direction in which increasing or decreasing $cost_j$ will reduce $J$. Every element ($cost_j$) of $\vec{cost}$ is updated one by one in each iteration. The value $\alpha$ determines how large the step of each iteration is. If it is too large, the extremum value will possibly be missed; if too small, the minimizing process will be rather time-consuming. It needs to be manually tuned. Theoretically, the gradient descent algorithm could only find the local optima. In practice, we randomly set the values in $\vec{cost}$ and restart the entire gradient decent procedure for several times to look for the global optima. \fi \iffalse \begin{table} \centering \caption{NMAE in Four-Round Cross Validation\label{Cross Validation}} \begin{tabular}{cccccc} \hline Scen. & Set & 1st & 2nd & 3rd & 4th \\ \hline & Trai. & 15.7\% & 14.1\% & 16.3\% & 15.2\% \\ \multirow{-2}{*}{\small C\&M } & Vali. & 9.3\% & 15.7\% & 9.5\% & 11.6\% \\ \hline & Trai. & 19.9\% & 17.9\% & 14.4\% & 16.8\% \\ \multirow{-2}{*}{Orbit } & Vali. & 11.7\% & 17.0\% & 18.0\% & 15.0\% \\ \hline & Trai. & 13.9\% & 14.1\% & 14.8\% & 15.0\% \\ \multirow{-2}{*}{Waves } & Vali. 
& 16.8\% & 16.7\% & 16.1\% & 17.2\% \\ \hline \end{tabular} \end{table} \fi \iffalse To validate the reliability of model, we apply the four-round cross validation. If the model is proved to be reliable, then we use it for the energy accounting in later stages, otherwise we try other solutions to improve the model. The four-round cross validation procedure is as follows: the set of execution cases are randomly divided into four subsets; in each round, one of them is chosen to be the validation set and the others together to be the training set. \begin{equation}\label{equation_NMAE} NMAE= \frac{1}{n} \sum_{i=1}^{n} | \frac{\hat{e^{(i)}} - e^{(i)}}{e^{(i)}} | \end{equation} In Table \ref{Cross Validation}, we can see the Normalized Mean Absolute Error (NMAE) of the model in three scenarios in training and validation sets in the four rounds. The NMAE is a well-known statistical criterion that shows how well the estimated value matches the measured one. It is computed by Equation (\ref{equation_NMAE}), the mean value of normalized difference between the predicted energy cost $\hat{e}$ and the measured cost $e$. The lower the ratio the better the result. In the three scenarios, the NMAE in training sets ranges from 13.9\% to 19.9\%, and in validation sets from 9.3\% to 18.0\%. For the three scenarios, the sets of parameters respectively generated in the 2nd, 4th and 3rd rounds of cross validation are chosen to help analyze the energy property of the code in Section \ref{Section_Analysis}, because they have good balance on both training and validation sets. Their NMAEs are around 15.0\%, which means the model's inference accuracy is around 85.0\%. 
\fi \iffalse \begin{equation}\label{equation_ErrorFunctionAddition} J=\frac{1}{2m}\sum_{i=1}^{m}(\vec{n^{(i)}}\times\vec{cost}-e^{(i)})^2 + \lambda \frac{1}{l} \sum_{j=1}^{l} \rho^{- cost_j} \end{equation} \begin{displaymath} \rho > 1 \end{displaymath} \begin{equation}\label{equation_newupdate} cost_j := cost_j - \alpha \frac{1}{m} \sum_{i=1}^{m}(\vec{n^{(i)}}\times\vec{cost}) \cdot n_j^{(i)} + \alpha \lambda \frac{ln(\rho) }{l} \rho^{-cost_j} \end{equation} \begin{displaymath} j = 1,2,...l \end{displaymath} \begin{table*}\label{execution_cases} \centering \caption{Execution Cases} \begin{tabular}{>{\centering\arraybackslash}m{3cm} >{\centering\arraybackslash}m{4cm} >{\centering\arraybackslash}m{2cm} >{\centering\arraybackslash}m{3cm} >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{1cm} >{\centering\arraybackslash}m{1cm}} \hline\hline Scenario & Description & Input Type & Input Function & \#Sqc & \#Rpt & \#Cases \\ \hline Behind tree & Sprite moves through the forest & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Bezier & Sprite moves in the Bezier Curve & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Blink & Sprite disappears and appears frequently & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Click \& move & Sprite moves to where the tap occurs & Tap & Lead the motion & 70 & 3 & 210 \\ \hline Ease exponential & Speed of sprite motion accelerates exponentially & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Ease normal & Sprite moves in a constant speed & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Ease sine & The acceleration changes in a sine function of time & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Ease elastic & Sprite moves like the end of a released elastic & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Jump & Sprite jumps & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Scale & Sprite scales lager and smaller periodically & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Rotate & Sprite rotates & Tap & Restart Scenario & 
10 & 3 & 30 \\ \hline Motion streak & Sprites leaves a streak on the trace of motion & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Move \& fire & Sprites moves with fire effect following it & Tap & Restart Scenario & 10 & 3 & 30 \\ \hline Orbit & Sprite moves in orbit around the vertical axis of screen & Tap & Restart Scenario & 10 & 3 & 30\\ \hline Particle fire & Start fire effect where tap occurs & Tap & Add more fire & 50 & 3 & 150\\ \hline Particle galaxy & Start galaxy effect where tap occurs & Tap & Add more galaxy & 50 & 3 & 150\\ \hline Particle meteor & Start meteor effect where tap occurs & Tap & Add more meteor & 50 & 3 & 150\\ \hline Particle radius spiral & Start spiral effect where tap occurs & Tap & Add more spiral & 50 & 3 & 150\\ \hline Particle rain & Start rain effect where tap occurs & Tap & Restart rain effect & 50 & 3 & 150\\ \hline Particle ring & Start light-ring effect where tap occurs & Tap & Add more light-ring & 50 & 3 & 150\\ \hline Particle smoke & Start smoke effect where tap occurs & Tap & Add more smoke & 50 & 3 & 150\\ \hline Particle snow & Start snow effect where tap occurs & Tap & Add more snow & 50 & 3 & 150\\ \hline Particle spinning & Start spinning stars effect where tap occurs & Tap & Add more stars & 50 & 3 & 150\\ \hline Particle sun & Start sun effect where tap occurs & Tap & Add more sun effect & 50 & 3 & 150\\ \hline \hline\hline\end{tabular} \end{table*} \fi \iffalse \begin{figure*} \centering \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width = \textwidth]{training_case_w_fs.pdf} \label{fig:train_case_w_fs} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{test_case_w_fs.pdf} \label{fig:test_case_w_fs} \end{subfigure} \caption{The predicted, observed energy consumption in training and test cases. The bars show the inference error. 
}\label{fig:data_w_fs} \end{figure*} \fi \iffalse \subsection{Inference Accuracy} \begin{figure} \centering \includegraphics[width = 0.48\textwidth]{Threshold_for_op_grouping.pdf} \caption{The effect of correlation threshold on the inference error. }\label{fig:threshold} \end{figure} \begin{algorithm} \caption{The algorithm for grouping energy operations} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Set of tuples, $T$: \{ $< op_i, op_j, corr> |$ $op_i$ and $op_j$ are each pair of different energy operations, $corr$ is the correlation between them\}} \Output{Set of groups, $G$: \{ $g\, | \; g$ contains the operations with ``strong" correlation\}} Initialize $G$ as $\emptyset$.\\ \ForEach{$<op_i, op_j, corr> \in T$} { \eIf{$corr > $ \texttt{THRESHOLD}} { \SetAlgoLined \uIf{$op_i, op_j \notin$ any group in $G$} {Add a new group to $G$, containing $op_i$, $op_j$} \uElseIf{$op_i \in g_k$ in $G$ \textbf{and} $op_j \notin$ any group in $G$ }{Put $op_j$ into $g_k$} \uElseIf{$op_i \notin$ any group in $G$ \textbf{and} $op_j \in g_k$ in $G$}{Put $op_i$ into $g_k$} \uElseIf{$op_i \in g_k$ in $G$ \textbf{and} $op_j \in g_l$ in $G$ }{ Merge $g_k$ and $g_l$ as one group} } { \SetAlgoLined \uIf{$op_i, op_j \notin$ any group in $G$} {Add two new groups to $G$, separately containing $op_i$, $op_j$} \uElseIf{$op_i \in g_k$ in $G$ \textbf{and} $op_j \notin$ any group in $G$ }{Add a new group to $G$, containing $op_j$} \uElseIf{$op_i \notin$ any group in $G$ \textbf{and} $op_j \in g_k$ in $G$}{Add a new group to $G$, containing $op_i$} } } \end{algorithm} The key point of case-design is to vary the executions of individual blocks. By doing this, we are able to enlarge the column space of the $N$ matrix (in Section \ref{Section:Model}) to raise the possibilities to solve all the values in $\vec{cost}$. We try to achieve it by commenting out different sets of blocks in each execution case. With the collected data in training cases, we obtain the approximate $\vec{cost}$. 
The execution cases are divided to training and test cases, which apply the same design principle, varying the executions of blocks. The data collection stage is quite time-consuming. 200 execution cases need about 70 hours, so we set the sessions of test cases shorter than those of training cases to cover more potential corners. In the implementation, we have 190 training cases and 300 test cases. In the beginning, the model performs much better in training cases than in test cases. We find that in most situations a set of energy operations are executed sequentially, for example, the \textit{comparison} operations are always followed by a \texttt{block goto} operation, which causes redundant inputs for the model. To settle this, we apply an feature selection procedure. According to the training data, we put the operation with strong positive linear execution correlations in the same group. The correlation is $\frac{cov(x,y)}{std(x) \cdot std(y)} $, which is the covariance over the product of the standard deviations of two variables (operation executions). The correlation closer to 1 means stronger linear relation between two variables. Operations above an correlation threshold are grouped together and treated as one operation for the model. In Figure \ref{fig:threshold}, we can see the effect of correlation threshold on the inference error in training and test sets. When the grouping condition (the threshold) is more strict (higher), less operations are grouped, the error rate is decreasing all along, because more inputs always mean more information for prediction. On the other hand, the inference error in test set goes down first and inclines up later and exceeds the other line from the threshold of 0.97 due to the reason we discussed above. We choose the grouping threshold as the crossing point (0.97) in Figure \ref{fig:threshold}. Figure \ref{fig:data_w_fs} demonstrates the inference error in training and test sets, which is within 10\%. 
\fi \section{The \texttt{Click \& Move} Scenario}\label{Section_clickmove} In this section, we begin with energy accounting at operation and block level for the \texttt{Click \& Move} scenario, after which we refactor the most costly blocks, focusing on the most expensive operations. We apply a similar approach to the other scenarios, \texttt{Orbit} and \texttt{Waves}, in Section \ref{Section_orbit} and Section \ref{Section_waves}; however, for those scenarios we only briefly summarize the energy accounting and concentrate on the code improvements. \subsection{Energy Accounting}\label{Section_Analysis} The operation-based energy model of the application source code facilitates comprehensive energy accounting, from the operation level up to the source level. In this section, we present the ranking of the most expensive operations and the contributions of different operations to the energy consumption of each block. \begin{figure} \centering \includegraphics[width = 0.44\textwidth]{op_cost_rank.pdf} \caption{The top 30 energy consuming operations in the \texttt{Click \& Move} scenario.}\label{fig:op_rank} \end{figure} \paragraph{Operation Level} Figure \ref{fig:op_rank} shows the top 30 energy consuming operations in the model, ranked by their single-execution energy costs. The line marked "71.3\% Energy Consumption" indicates the percentage of the energy cost of the execution cases for the \texttt{Click \& Move} scenario contributed by the top 10 operations. Similarly, "26.1\% Energy Consumption" shows the contribution of the operations ranked 11th to 30th. We can see that the energy usage of the code is largely (97.4\%) determined by a relatively small number of operations, both because these operations are frequently executed and because they are expensive in themselves. It might be supposed that sophisticated arithmetic operations, such as multiplications and divisions, would be the most costly; however, the results show that \textit{Method Invocation} ranks highest.
This is due to the sequence of complex processes needed to fulfill a \textit{Method Invocation}: most method calls in Java are virtual invocations, which are dispatched on the type of the object at run-time and are always implicitly passed a "this" reference as their first parameter, not to mention other operations such as storing the return address and managing the stack frame. This suggests a trade-off between code structure and energy saving when writing code: in certain cases, we could inline some thin and highly-invoked methods, at the cost of some loss of structural integrity in the code. Only one arithmetic operation, namely \textit{Multi\_float\_float}, is a member of the top 10, and there are only six arithmetic operations in the top 30. Together they cost only 6.1\% of the overall energy consumption of the application, which is somewhat unexpected. Later, in the block-level energy accounting, we will see that assignments, comparisons and \textit{Array Reference} play significant roles in the overall energy consumption. This is not only because they are frequently used, but also because they are costly as operations themselves, as shown in Figure \ref{fig:op_rank}. \textit{Block Goto} operations are expensive as well. Based on the types of conditionals and loops where "Block Goto" occurs, they are classified into \textit{BlockGoto\_if}, \textit{BlockGoto\_for} and \textit{BlockGoto\_while}. The result shows that they cost different amounts of energy as operations, respectively 6.7 $\mu$J, 4.1 $\mu$J and 1.1 $\mu$J. Together with \textit{Method Invocation}, they take up 37.6\% of the total energy consumption of the application. 
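The accounting described above reduces to a simple roll-up: each operation has a single-execution cost, a block's single-execution cost is the count-weighted sum of its operations, and the "In Application" cost multiplies that by the run-time execution count. The sketch below is illustrative: the operation names and counts are made up, and the only measured numbers taken from the text are the \textit{Block Goto} unit costs (6.7, 4.1, 1.1 $\mu$J) and the 30.6 $\mu$J block executed 3000 times.

```java
import java.util.Map;

public class EnergyAccounting {
    /** Single-execution cost of a block: count-weighted sum of operation costs (in uJ). */
    static double blockCost(Map<String, Integer> opCounts, Map<String, Double> unitCostUJ) {
        double total = 0.0;
        for (Map.Entry<String, Integer> e : opCounts.entrySet())
            total += e.getValue() * unitCostUJ.getOrDefault(e.getKey(), 0.0);
        return total;
    }

    /** "In Application" cost: single-execution cost times run-time execution count. */
    static double inApplicationCost(double singleExecCostUJ, long executions) {
        return singleExecCostUJ * executions;
    }

    public static void main(String[] args) {
        // BlockGoto unit costs (uJ) as measured; the counts here are made up.
        Map<String, Double> unit = Map.of(
            "BlockGoto_if", 6.7, "BlockGoto_for", 4.1, "BlockGoto_while", 1.1);
        Map<String, Integer> counts = Map.of("BlockGoto_if", 2, "BlockGoto_for", 1);
        System.out.println(blockCost(counts, unit));       // ~17.5 uJ per execution
        System.out.println(inApplicationCost(30.6, 3000)); // ~91800 uJ = 91.8 mJ
    }
}
```

This is exactly how the "3000-Times-Execution" bars are obtained: single-execution cost times a fixed count, so blocks become comparable independent of how often they run.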
\paragraph{Block Level} \begin{figure*} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width = \textwidth]{blocks_in_program.pdf} \label{fig:blocks_in_program} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{op_in_block.pdf} \label{fig:op_in_block} \end{subfigure} \caption{Energy distribution in \texttt{Click \& Move}. Blocks are sorted by their run-time energy costs "In Application".}\label{fig:block} \end{figure*} In the execution cases, we have 108 active blocks with a wide diversity of energy usage. In Figure \ref{fig:blocks_in_program}, "In Application" means running the \texttt{Click \& Move} scenario with the full set of blocks (that is, ignoring the execution cases described in Section \ref{Section_sourcecode_casedesign} in which some blocks are removed). The total cost of a block "In Application" is plotted as an orange bar, reflecting both its cost and the number of times it is executed. The cost of a fixed number (3000) of executions of one block is calculated by multiplying its single-execution cost by 3000; this helps us to compare the single-execution costs of different blocks. The costs of blocks at "3000-Times-Execution" are plotted as green bars. Similar to the energy distribution over operations, only a small number of blocks (11) use up nearly half of the entire cost, which indicates that focusing optimization effort on a small group of blocks can achieve significant energy savings. There are two factors that make a block costly "In Application". The first factor is a large number of executions. For example, the most costly block "In Application" (the rightmost orange bar in Figure \ref{fig:blocks_in_program}) is executed a large number of times: it takes only 30.6 $\mu$J for a single execution but 2.1 Joules when running "In Application". The second factor is the energy consumption of the block itself. 
For example, we can see three prominent green bars in Figure \ref{fig:blocks_in_program}, whose single-execution costs are 201.5 $\mu$J, 146.9 $\mu$J and 142.8 $\mu$J. We will later zoom in on these three blocks to see which operations contribute to their energy costs. We can further observe the energy proportions of operations in each block in Figure \ref{fig:op_in_block}. To illustrate, operations are grouped into eight classes. Specifically, the "Block Goto" operations and \textit{Method Invocation} are gathered in \textit{Control Ops}; the parameter passing and the value returns of methods are in \textit{Function Ops}; the comparisons and Booleans are in \textit{Boolean Ops}; all the arithmetic computations are in \textit{Arithmetic Ops}; all the library functions are in \textit{Lib Functions}. Most of the blocks cost less than 25 $\mu$J for a single execution. In these blocks, \textit{Control Ops} occupy the major part of the energy consumption; in contrast, \textit{Arithmetic Ops} take only a tiny proportion. For the three most prominent blocks, assignments and \textit{Array Reference} are the biggest energy consumers. Furthermore, one of the three blocks has the largest proportion of \textit{Arithmetic Ops} among all the blocks. The most expensive block "In Application" splits roughly evenly across \textit{Control Ops}, \textit{Function Ops} and \textit{Boolean Ops}. This block is the main entrance of the game engine for drawing and displaying frames, so its work is dominated by conditional judgments and method invocations. \subsection{Code Optimization} \begin{table} \centering \caption{The top 10 most costly blocks in \texttt{Click \& Move}. 
\label{table_topblocks}} \begin{tabular}{lr} \hline \quad Block ID & Energy Cost (mJ)\\ \hline CCNode.visit() & 2128.6 \qquad\quad \\ CCNode.transform() & 1648.4 \qquad\quad\\ CCTextureAtlas.putVertex() & 1494.4 \qquad\quad\\ CCNode.visit().if\_4.for\_1 & 1426.8 \qquad\quad\\ CCNode.transform().if\_1 & 1426.3 \qquad\quad\\ CCTextureAtlas.putTexCoords() & 1107.8 \qquad\quad\\ CCAtlas.updateValues().for\_1 & 1018.7 \qquad\quad\\ CCNode.visit().if\_3.for\_1 & 915.7 \qquad\quad\\ CCSprite.draw() & 766.9 \qquad\quad\\ CCTexture2D.name() & 537.5 \qquad\quad\\ \hline \end{tabular} \end{table} The most important consideration of app developers is to guarantee the correctness of the software, followed by energy efficiency. Our energy-aware programming approach is therefore adopted at the end of the software engineering life cycle, when the software system is largely complete. We look into the top 10 most costly blocks "In Application" (see Table \ref{table_topblocks}). For example, \textit{CCNode.visit()} is the entrance block of the \textit{visit()} function; \textit{CCNode.visit().if\_4.for\_1} is the body block of the \texttt{for} loop. These 10 blocks are distributed across seven methods, so the code review is straightforward. We find four easy opportunities to improve the energy efficiency of the blocks \textit{CCNode.visit()}, \textit{CCNode.visit().if\_3.for\_1}, \textit{CCNode.visit().if\_4.for\_1} and \textit{CCTexture2D.name()}. Other blocks might also allow savings, but these would require more effort and gain little. For example, \textit{CCAtlas.\\updateValues().for\_1} has several busy arithmetic expressions. One might expect that replacing a busy expression with a variable would reduce energy; in this case, however, the overhead of the variable declaration counteracts the savings. The four code changes are very simple and effective, but can only be discovered with the operation-level information. 
The changes are shown as follows: \begin{program} \begin{lstlisting}
if (children_ != null) { if_body1; }
draw(gl);
if (children_ != null) { if_body2; }
\end{lstlisting} \caption{Simplified parts of \textbf{original} code in \textit{CCNode.visit()}} \label{program1} \end{program} \begin{program} \begin{lstlisting}
if (children_ != null) {
  if_body1;
  draw(gl);
  if_body2;
} else {draw(gl);}
\end{lstlisting} \caption{The changed Program \ref{program1}} \label{program2} \end{program} \paragraph{If Combination} This change is made in the most costly block \textit{CCNode.visit()}, which has two comparisons, two Boolean operations, one \textit{Method Invocation} and one parameter passing. In fact, the two \texttt{if} headers make the same comparison, as shown in Program \ref{program1}. We change the code to Program \ref{program2}, which combines the two \texttt{if} statements while keeping the logic consistent with Program \ref{program1}. This saves one comparison per execution of the block and, when the condition is false, one additional \textit{BlockGoto\_if}. \begin{program} \begin{lstlisting}
public void visit(GL10 gl) {
  ......
  transform(gl);
  ......
}
public void transform(GL10 gl) {
  transform_body;
}
\end{lstlisting} \caption{Simplified parts of \textbf{original} code in the \textit{CCNode} class} \label{program3} \end{program} \begin{program} \begin{lstlisting}
public void visit(GL10 gl) {
  ......
  transform_body;
  ......
}
public void transform(GL10 gl) {
  transform_body;
}
\end{lstlisting} \caption{The changed Program \ref{program3}} \label{program4} \end{program} \paragraph{Inner-Class Method Inline} When running "In Application", the \textit{transform()} function is invoked 18903 times, mostly by the \textit{visit()} function. 
We change Program \ref{program3} to Program \ref{program4} by replacing the call to \textit{transform()} in \textit{visit()} with the body of \textit{transform()}, while retaining the original definition of \textit{transform()} in case other parts of the code call it. This change greatly decreases the number of calls to \textit{transform()} and thus the number of costly \textit{Method Invocation}s. However, it may come at the cost of reduced readability of the code (which might be partly compensated by adding explanatory comments). \paragraph{Loop-Invariant Code Motion} \textit{CCNode.visit().if\_3.for\_1} and \textit{CCNode.visit().if\_4.for\_1} are the body blocks of the two \texttt{for} loops seen in Program \ref{program5}. These two loops share a quantity, \textit{children\_.size()}, which is computed in each iteration but is actually constant. We thus hoist it outside the loop, as shown in Program \ref{program6}, which saves the energy of invoking and executing the \textit{size()} function in every iteration. Meanwhile, we move the declaration of \textit{child} outside the loop, since \textit{Declaration\_Object} costs about 2.97 $\mu$J and is also in the top 30. \\ \paragraph{Inter-Class Method Inline} \textit{CCTexture2D.name()} is the 10th most costly block and costs 537.5 mJ "In Application". However, its job is simply to get the value of the private member variable \textit{\_name} of the class \textit{CCTexture2D}. This method has only two callers in the code, so we make this variable public and let the two callers access it directly, which avoids the cost of \textit{Method Invocation}. This change may harm the encapsulation of data; however, only one member of one class is changed. The trade-off between energy saving and data encapsulation is ultimately left to developers. 
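The inter-class inline amounts to trading a trivial getter for a public field. A minimal before/after sketch follows; the class and field names mirror the \textit{CCTexture2D.\_name} example above, but the code is illustrative rather than the engine's actual source.

```java
public class GetterInline {
    // Original style: a private field behind a trivial getter, so every read
    // pays a Method Invocation.
    public static class CCTexture2DOriginal {
        private int _name = 7;
        public int name() { return _name; }
    }

    // Changed style: the field is made public and the two call sites read it
    // directly, avoiding the invocation at the cost of weaker encapsulation.
    public static class CCTexture2DChanged {
        public int _name = 7;
    }

    public static void main(String[] args) {
        CCTexture2DOriginal a = new CCTexture2DOriginal();
        CCTexture2DChanged b = new CCTexture2DChanged();
        System.out.println(a.name() == b._name); // prints "true": same value, one fewer call
    }
}
```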
\begin{program} \begin{lstlisting}
if (children_ != null) {
  for (int i=0; i<children_.size(); ++i) {
    CCNode child = children_.get(i);
    if (child.zOrder_ < 0) { child.visit(gl); }
    else break;
  }
  draw(gl);
  for (int i=0; i<children_.size(); ++i) {
    CCNode child = children_.get(i);
    if (child.zOrder_ >= 0) { child.visit(gl); }
  }
} else {draw(gl);}
\end{lstlisting} \caption{The full version of Program \ref{program2}} \label{program5} \end{program} \begin{program} \begin{lstlisting}
if (children_ != null) {
  CCNode child;                          //added
  int children_size = children_.size(); //added
  for (int i=0; i<children_size; ++i) { //changed
    child = children_.get(i);           //changed
    if (child.zOrder_ < 0) { child.visit(gl); }
    else break;
  }
  draw(gl);
  for (int i=0; i<children_size; ++i) { //changed
    child = children_.get(i);           //changed
    if (child.zOrder_ >= 0) { child.visit(gl); }
  }
} else {draw(gl);}
\end{lstlisting} \caption{The changed Program \ref{program5}. The hoisted declarations are placed after the null check, so \textit{children\_.size()} is never called on a null reference.} \label{program6} \end{program} \subsection{Evaluation} Figure \ref{fig:energy_saving_candm} illustrates the energy dissipation of the software without and with the changes introduced in the previous section. From left to right, the bars indicate the cumulative effects of the changes. For example, "\textit{+ If Comn}" is the energy consumption of the original code with the "If Combination" change; "\textit{+ Inner-Class MI}" is the energy consumption of the code with both the "If Combination" and "Inner-Class Method Inline" changes. In total, these four simple changes save 6.4\% of the entire energy consumption without influencing the functionality of the code. These changes are made in the basic part of the game engine, which most applications will be based on, so any gain here can have fundamental impact. 
Furthermore, these changes were made with little knowledge of the algorithms in the code; the developers who designed the code would surely be able to improve it much further and achieve far more energy saving if the energy model were available to them. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{EnergySaving.pdf} \caption{Energy consumption of the code without and with the changes in \texttt{Click \& Move}.}\label{fig:energy_saving_candm} \end{figure} \section{The \texttt{Orbit} Scenario}\label{Section_orbit} In this section, we briefly describe the energy accounting for the \texttt{Orbit} scenario. Then we improve the most costly blocks, focusing on the expensive operations. In Section \ref{orbit_evaluation}, the experimental result shows that the improvement can save 50.2\% of the overall energy consumption. \subsection{Energy Accounting} \begin{figure} \centering \includegraphics[width = 0.38\textwidth]{lensorbit_blocks.pdf} \caption{The energy proportions of blocks "In Application" in the \texttt{Orbit} scenario.}\label{fig:lensorbit_blocks} \end{figure} In the \texttt{Orbit} scenario, the block \textit{CCGrid3d.blit().for\_1} dominates the overall energy consumption. As shown in Figure \ref{fig:lensorbit_blocks}, 80.9\% of the entire cost is consumed by this block; the second most costly block consumes only 1.3\%. "In Application" here means running the \texttt{Orbit} scenario without removing any block. In Section \ref{section:orbit_code_optn}, we therefore focus on this single block. \subsection{Code Optimization}\label{section:orbit_code_optn} Program \ref{program7} shows the original code of \textit{CCGrid3D.blit().\\for\_1}. In this block, the \textit{Control Ops} (\textit{BlockGoto\_for} and \textit{Field Reference}) use up 35.6\% of the energy; \textit{Boolean Ops} use up 20.5\%; the assignments use up 16.7\%; \textit{Arithmetic Ops} use up 14.0\%; \textit{Lib Functions} use up 13.3\%. 
We find three easy changes to reduce or replace the expensive operations.\\ \paragraph{Loop-Invariant Code Motion} In this block, the value of \textit{vertices.limit()} is the constant 2112; we therefore hoist it outside the loop and replace it with the variable \textit{limit}, as shown in Program \ref{program8}. This change avoids the invocations and executions of \textit{vertices.limit()} and at the same time removes a small number of \textit{Field Reference} operations. \begin{program} \begin{lstlisting}
for (int i = 0; i < vertices.limit(); i=i+3) {
  mVertexBuffer.put(vertices.get(i));
  mVertexBuffer.put(vertices.get(i+1));
  mVertexBuffer.put(vertices.get(i+2));
}
\end{lstlisting} \caption{The \textbf{original} code of \textit{CCGrid3D.blit().for\_1}} \label{program7} \end{program} \begin{program} \begin{lstlisting}
int limit = vertices.limit();            //added
for (int i = 0; i < limit; i=i+24) {     //changed
  mVertexBuffer.put(vertices.get(i));
  mVertexBuffer.put(vertices.get(i+1));
  mVertexBuffer.put(vertices.get(i+2));
  ...
  mVertexBuffer.put(vertices.get(i+23)); //added
}
\end{lstlisting} \caption{The changed Program \ref{program7}} \label{program8} \end{program} \paragraph{Loop Unrolling} As also shown in Program \ref{program8}, we duplicate the loop body eight times, reducing the number of comparisons, \textit{BlockGoto\_for}s, assignments and additions. We set the increment to 24 since 24 is a factor of the \textit{limit}, 2112. \paragraph{Full Use of Library Function} The job of Program \ref{program7} (and Program \ref{program8}) is to fetch the elements of \textit{vertices} one by one and put them one by one into \textit{mVertexBuffer}. Program \ref{program7} can simply be replaced by one line: \textit{mVertexBuffer.put(vertices.asReadOnlyBuffer())}, which puts all the elements of \textit{vertices} into \textit{mVertexBuffer}. 
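The unrolling is only safe because the unroll factor matches the data: with a step of 24, the loop covers every index exactly once precisely because 24 divides the limit ($2112 = 24 \times 88$). A small self-contained check of the equivalence of the rolled and unrolled traversals (buffers are simplified to an array and a list; the names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class UnrollCheck {
    // Rolled form (as in the original loop): step 3, one put per element.
    static List<Integer> rolled(int[] vertices) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < vertices.length; i = i + 3) {
            out.add(vertices[i]);
            out.add(vertices[i + 1]);
            out.add(vertices[i + 2]);
        }
        return out;
    }

    // Unrolled form (as in the changed loop): step 24, 24 puts per iteration.
    static List<Integer> unrolled(int[] vertices) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < vertices.length; i = i + 24)
            for (int k = 0; k < 24; k++) // stands in for the 24 duplicated lines
                out.add(vertices[i + k]);
        return out;
    }

    public static void main(String[] args) {
        int[] vertices = new int[2112]; // the constant limit observed above
        for (int i = 0; i < vertices.length; i++) vertices[i] = i;
        System.out.println(rolled(vertices).equals(unrolled(vertices))
            && 2112 % 24 == 0); // prints "true"
    }
}
```

With a limit that 24 did not divide, the unrolled loop would read past the end; picking a factor of the limit is what makes the transformation behavior-preserving.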
The library-function replacement realizes the same functionality using an already existing library function, one of the key library functions already compiled into native code. \subsection{Evaluation}\label{orbit_evaluation} \begin{figure} \centering \includegraphics[width = 0.41\textwidth]{energy_saving_orbit.pdf} \caption{Energy consumption of the code without and with the changes in \texttt{Orbit}.}\label{fig:energy_saving_orbit} \end{figure} Figure \ref{fig:energy_saving_orbit} shows the cumulative effects of the code changes on energy consumption. In contrast to the other columns, "\textit{Full-Use LF}" does not take the previous changes into account; it means only replacing Program \ref{program7} with the built-in library function as stated above. The figure shows that loop-invariant code motion does not gain much energy saving, because \textit{vertices.limit()} is a library function and in addition accounts for a very small percentage of the energy consumption. On the other hand, loop unrolling achieves 25.8\% energy saving due to the reduction in the number of \textit{Control Ops}, comparisons and assignments, which occupy most of the cost. The most effective change is the replacement with the library function, avoiding 50.2\% of the energy use: this library function is compiled into native code before execution, whereas the Java source code needs run-time interpretation, which incurs an energy cost. The result implies that developers should make good use of library functions rather than implementing the same functionality in Java source code. The discovery of this source of inefficiency was assisted by the energy accounting. \begin{table*} \centering \caption{Top 10 most costly blocks "In Application" in the \texttt{Waves} scenario and the energy percentages of different kinds of operations in each block.}\label{table_waves_topblocks} \begin{tabular}{lrrrrrrrrr} \hline \quad Block ID & \#Executions & Energy Cost (mJ) & Assi. & Decl. 
& Cont. & Func. & Bool. & Arit. & Libr. \\ \hline CCGrid3D.blit().for\_1 & 112193 \quad\quad & 8094.1 \qquad\quad & 16.7\% & 0\% & 35.6\% & 0\% & 20.5\% & 14.0\% & 13.3\% \\ CCVertex3D.CCVertex3D() & 40219 \quad\quad & 5232.0 \qquad\quad & 27.2\% & 0\% & 10.0\% & 62.8\% & 0\% & 0\% & 0\%\\ CCWaves3D.update().for\_1.for\_1 & 34604 \quad\quad& 4088.7 \qquad\quad & 10.7\% & 0\% & 32.1\% & 0\% & 14.7\% & 39.0\% & 2.2\%\\ ccGridSize.ccg() & 42275 \quad\quad& 3769.1 \qquad\quad & 0\% & 0\% & 32.1\% & 67.9\% & 0\% & 0\% & 0\%\\ CCGrid3DAction.setVertex() & 31856 \quad\quad & 3285.4 \qquad\quad & 14.6\% & 7.8\% & 30.9\% & 46.7\% & 0\% & 0\% & 0\%\\ CCGrid3DAction.originalVertex() & 36566 \quad\quad & 2891.3 \qquad\quad & 19.1\% & 10.2\% & 40.3\% & 30.4\% & 0\% & 0\% & 0\%\\ CCNode.getGrid() & 49119 \quad\quad& 2145.1 \qquad\quad & 0\% & 0\% & 58.1\% & 41.9\% & 0\% & 0\% & 0\%\\ ccGridSize.ccGridSize() & 10570 \quad\quad & 1173.8 \qquad\quad & 30.3\% & 0\% & 31.6\% & 38.0\% & 0\% & 0\% & 0\%\\ CCGrid3D.setVertex() & 3944 \quad\quad& 657.2 \qquad\quad & 10.1\% & 1.6\% & 32.8\% & 28.9\% & 0\% & 26.4\% & 0.2\%\\ CCGrid3D.originalVertex() & 2785 \quad\quad& 374.2 \qquad\quad & 14.0\% & 1.9\% & 33.4\% & 17.9\% & 0\% & 32.8\% & 0\%\\ \hline \end{tabular} \end{table*} \section{The \texttt{Waves} Scenario}\label{Section_waves} In this section, we similarly first analyze the energy features of the blocks in the \texttt{Waves} scenario, based on which we modify the code and then evaluate the effects of the changes on energy consumption. \subsection{Energy Accounting} Unlike the \texttt{Orbit} scenario, where only one block dominates the energy cost, in the \texttt{Waves} scenario the costs of the top eight blocks are of the same order of magnitude (several joules), as listed in Table \ref{table_waves_topblocks}. The block \textit{CCGrid3D.blit().for\_1} is also executed in this scenario and is again the most costly of all blocks. 
The majority of the blocks in Table \ref{table_waves_topblocks} are directly or indirectly invoked by \textit{CCWaves3D.update().for\_1.\\for\_1}, as shown in Program \ref{program9}. The purpose of these methods is mainly to set or get the values of member variables, so a large part of the energy consumption goes to assignments, \textit{Function Ops} and \textit{Control Ops}. We did not expect the code to spend such a large amount of energy on simple set and get functions. \subsection{Code Optimization} \paragraph{Full Use of Library Function} We mentioned in Section \ref{section:orbit_code_optn} the optimization for \textit{CCGrid3D.blit().for\_1}, in which we replace the entire Program \ref{program7} with one line of code that makes use of a library function. We keep this change in this scenario. For the other blocks, we introduce one modification, described below.\\ \begin{program} \begin{lstlisting}
int i, j;
for( i = 0; i < (gridSize.x+1); i++ ) {
  for( j = 0; j <(gridSize.y+1); j++ ) {
    CCVertex3D v=originalVertex(ccGridSize.ccg(i,j));
    ...
    setVertex(ccGridSize.ccg(i,j), v);
  }
}
\end{lstlisting} \caption{The \textbf{original} code in \textit{CCWaves3D.update()}} \label{program9} \end{program} \normalsize \begin{program} \begin{lstlisting}
ccGridSize ccgridsize = new ccGridSize(0,0);     //added
CCGrid3D ccgrid3d = (CCGrid3D) target.getGrid(); //added
CCVertex3D v = new CCVertex3D(0,0,0);            //added
int i, j;
for( i = 0; i < (gridSize.x+1); i++ ) {
  for( j = 0; j <(gridSize.y+1); j++ ) {
    ccgridsize.x=i;ccgridsize.y=j;          //added
    v =ccgrid3d.originalVertex(ccgridsize); //changed
    ...
    ccgrid3d.setVertex(ccgridsize, v);      //changed
  }
}
\end{lstlisting} \caption{Program \ref{program9} after Method Inline \& Code Motion} \label{program10} \end{program} \iffalse \begin{program} \begin{lstlisting}
...
for( i = 0; i < (gridSize.x+1); i++ ) {
  ccgridsize.x=i;ccgridsize.y=0;
  ...
  ccgrid3d.setVertex(ccgridsize, v);
  ccgridsize.x=i;ccgridsize.y=1; //added
  ...
  ccgrid3d.setVertex(ccgridsize, v); //added
  ...
  ...
  ccgridsize.x=i;ccgridsize.y=10; //added
  ...
  ccgrid3d.setVertex(ccgridsize, v); //added
}
\end{lstlisting} \caption{Program \ref{program10} after Loop Unrolling} \label{program11} \end{program} \fi \paragraph{Method Inline \& Code Motion} As shown in Program \ref{program9}, the three functions called in the inner loop body are \textit{CCGrid3DAction.originalVertex()}, \textit{ccGridSize.ccg()} and \textit{CCGrid3DAction.setVertex()}, which respectively cost 2891.3 mJ, 3769.1 mJ and 3285.4 mJ "In Application". Note that \textit{CCGrid3DAction} is the parent class of \textit{CCWaves3D}, so in Program \ref{program9} \textit{originalVertex()} and \textit{setVertex()} can be called directly without referring to their class names. As seen in Program \ref{program10}, we inline these three methods in this block: the first and fourth "added" lines are the inlined \textit{ccGridSize.ccg()}; the second "added" and first "changed" lines are the inlined \textit{CCGrid3DAction.originalVertex()}; the second "added" and second "changed" lines are the inlined \textit{CCGrid3DAction.setVertex()}. This change removes all the \textit{Method Invocation}s, parameter passing and value returns related to these three functions in this block. Note that the first three "added" lines are placed outside the loop to avoid repeatedly initializing objects and calling \textit{CCNode.getGrid()}. 
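The code-motion part of this change reduces to a checkable pattern: allocate the helper object once before the nested loops and mutate it per iteration, instead of constructing a fresh grid-size object in every call. The sketch below assumes the helper is a plain mutable pair, which the real \textit{ccGridSize} class need not be; it records the visited coordinates to show the two forms traverse the grid identically.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CodeMotion {
    static class GridSize {
        int x, y;
        GridSize(int x, int y) { this.x = x; this.y = y; }
    }

    // Original pattern: a new GridSize per inner iteration (like ccGridSize.ccg(i, j)).
    static List<int[]> original(int nx, int ny) {
        List<int[]> visited = new ArrayList<>();
        for (int i = 0; i < nx; i++)
            for (int j = 0; j < ny; j++) {
                GridSize g = new GridSize(i, j); // per-iteration allocation
                visited.add(new int[]{g.x, g.y});
            }
        return visited;
    }

    // Changed pattern: one GridSize hoisted out and mutated; no allocation in the loop.
    static List<int[]> hoisted(int nx, int ny) {
        List<int[]> visited = new ArrayList<>();
        GridSize g = new GridSize(0, 0); // added, outside the loops
        for (int i = 0; i < nx; i++)
            for (int j = 0; j < ny; j++) {
                g.x = i; g.y = j; // added, replaces the allocation
                visited.add(new int[]{g.x, g.y});
            }
        return visited;
    }

    public static void main(String[] args) {
        List<int[]> a = original(2, 3), b = hoisted(2, 3);
        boolean same = a.size() == b.size();
        for (int k = 0; k < a.size(); k++) same &= Arrays.equals(a.get(k), b.get(k));
        System.out.println(same); // prints "true"
    }
}
```

The caveat is the usual one for this transformation: reusing a mutable object is only safe if no callee retains a reference to it across iterations.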
\subsection{Evaluation} \begin{figure} \centering \includegraphics[width = 0.38\textwidth]{energy_saving_waves.pdf} \caption{CPU and GPU energy consumption of the code without and with the changes in \texttt{Waves}.}\label{fig:energy_saving_waves} \end{figure} Figure \ref{fig:energy_saving_waves} shows the cumulative effects of the changes on the energy consumption of the CPU and GPU (previous figures only showed the CPU energy consumption because the GPU energy consumption did not vary noticeably); the dashed line indicates the linear trend of the GPU energy consumption. In games, the target frame rate is usually 60 Hz; when the game overloads the CPU the rate decreases, and when the workload is light the rate is generally capped at 60 Hz. The rate in "\textit{Original}" is around 36 Hz; in "\textit{+ Full-Use LF}" it is around 50 Hz; in "\textit{+ Method Inline \& CM}" it is around 60 Hz. The \textit{Full-Use LF} (full use of library function) change does not save CPU energy because the original \texttt{Waves} actually overloads the CPU, so the improved code enables the device to generate more frames per second. Consequently, the CPU does the same volume of work and consumes an equal amount of energy, while the GPU does more work and consumes more energy, as seen in Figure \ref{fig:energy_saving_waves}. On top of this change, applying the method inline and code motion saves 27.7\% of the overall CPU energy, and for the same reason the GPU consumes slightly more. This experimental result indicates that our approach not only saves energy but also potentially boosts performance, which benefits the user doubly. 
\section{Related Work} \paragraph{Energy Modeling} From the hardware side, research on energy modeling has been done at the circuit level (see the survey \cite{Najm_VLSI_level}), the gate level \cite{Najm_gate_level, Marcu_gate_level} and the register-transfer level \cite{cheng_RT_level}. Later, the research focus shifted towards high-level modeling, such as the software and behavioral levels \cite{Macii_high_level}. Energy modeling techniques for software start at the basic instruction level, summing the energy consumption of basic instructions and transition overheads \cite{Tiwari:power_analysis_embedded,bran:instruction-level_model}. Gang et al. \cite{gangqu:function-level_powermodel} base their model at the function level while considering the effects of cache misses and pipeline stalls on functions. T. K. Tan et al. \cite{Tan:2001:high-level_softwaremodel} utilize regression analysis for high-level software energy modeling. However, the run-time context considered in the above works is simple, free of user inputs, virtual machines and dynamic compilation. Furthermore, the software stack below the level they deal with (such as the basic or assembly instruction level) is relatively thin. As research focus moved to the energy use of mobile applications, the granularity of the techniques increased as well. An important part of such efforts is the use of operating system and hardware features as predictors to estimate the energy consumption at the component, virtual machine and application level \cite{Dong_selfconstructivemodel, Kansal_powerofvm, Pathak_whereisenergy, Zhang_onlinepowerestimation, Wang_batterytrace, Shye_intowild}. Shuai et al. \cite{HaoShuai:2013:EstMobileApp} and Ding et al. \cite{sourceline_energy} propose approaches to obtain source-line energy information. The former requires the specific energy profile of the target system, and the workload is fine-tuned. 
The latter utilizes advanced measurement techniques to obtain the source-line energy cost. Compared with the approaches above, our work explores the idea of identifying energy operations and constructing a fine-grained model based on operations, which captures energy information at a level finer than the source line. \paragraph{Energy-Saving Techniques} A large amount of research effort on energy saving for mobile devices has been focused on the main hardware components, such as the CPU, display and network interface. The CPU-related techniques involve dynamic voltage and frequency scaling \cite{anotherDVFS} and heterogeneous architectures \cite{Reflex_Lin, GreeDroid_Goulding}. Techniques targeting the display include dynamic back-light dimming \cite{dimming_backlight, dimming_backlight2} and tone-mapping based back-light scaling \cite{tone_mapping,tone_mapping2}. Network-related techniques try to exploit idle and deep sleep opportunities \cite{network1,network2}, shape the traffic patterns \cite{network_3,network4}, and so on. Such work attempts to reduce energy dissipation by optimizing hardware usage; on the other hand, several pieces of work aim at designing new hardware and devices \cite{hardwaredisign1, hardwaredesign2}. There is also a significant research focus on software optimization for saving energy. The basic work seeks to understand how the different methods, algorithms and design patterns of software influence energy consumption. For example, \cite{newRoutingTech1,newRoutingTech2,newRoutingTech3} propose new routing techniques and protocols that are aware of energy consumption, evaluated by comparison with traditional techniques. As another example, \cite{choosesortalgorithm} investigates the effects of different sorting algorithms on energy consumption with respect to the algorithm's input size. Considering design patterns, Litke et al. 
\cite{desiangPattern1} conduct an experiment showing how large the difference in energy consumption is before and after the application of design patterns, such as the \textit{factory method} and \textit{observer} patterns. The result reveals that, except for one example, the use of design patterns does not increase the energy use noticeably. Work comparable to \cite{desiangPattern1} is done by \cite{desiangPattern2}; they explore more design patterns and arrive at the conclusion that applying design patterns can both increase and decrease energy dissipation, so design-level artifacts cannot be used to estimate the impacts of design patterns on energy use. Vetr\`{o} et al. \cite{code_smell} define the concept of "energy code smells": code patterns (such as self-assignment, repeated conditionals and useless control flow) that suggest energy inefficiency. However, the code patterns selected in \cite{code_smell} have very little influence (less than 1.0\%) on energy consumption. Regarding code refactoring for energy saving, Ding et al. \cite{energy_saving_programming} perform a small-scale evaluation of several commonly suggested programming practices that may reduce energy. Their results show that reading array lengths, accessing class fields and method invocation all cost noticeable energy. However, this work only provides a small number of tips to developers on how to make code more energy-efficient. To the best of our knowledge, the state of the art before this paper did not connect the understanding of energy consumption with refactoring of high-level source code such as Java. Our research proposes an energy-aware programming approach guided by an operation-based source-level energy model. The experimental evaluation demonstrates that it is an effective and practical approach to energy-aware mobile application development. 
\section{Conclusion} In this paper, we propose an energy-aware programming approach for mobile app development, guided by an operation-based source-level energy model. The approach consists of 1) constructing an operation-based energy model by mining the data generated in a range of well-designed execution cases; 2) capturing the energy characteristics of the code based on the model; and 3) improving the code by removing, reducing or replacing the expensive operations in the costly blocks. We evaluate this approach on a physical Android development board with two ARM quad-core CPUs and on a real-world game engine. In this case study, our approach has a significantly positive impact on energy saving: depending on the scenario, it saves between 6.4\% and 50.2\% of the energy consumption. The findings also indicate that improved performance is a potential by-product of this approach, further benefiting the user experience. \bibliographystyle{abbrv}
\section{Introduction} The incompressible Navier-Stokes equations describe the motion of incompressible viscous flow. Whether singular solutions exist in 3D is the subject of one of the famous seven Millennium Prize problems. In terms of singular solutions, the study of the incompressible Euler equations is most probably the first major step. For the sake of convenience, we will omit {\it incompressible} from now on. The first nonlocal model was constructed by Constantin, Lax and Majda \cite{CLM}. They constructed a one-dimensional model and obtained its singular solutions explicitly. The motivation is the vorticity formulation. There have been many developments following this model (\cite{Sc, CCH, De} and others). In this paper, we give some high dimensional generalizations of the Constantin-Lax-Majda model. The study of them might help the understanding of singular solutions to the Euler and Navier-Stokes equations. The motivation is still the vorticity formulation. We first present a two-dimensional zero order scalar model. One may think of it as a nonlocal ODE. A good understanding of it is possibly among the first steps toward singular solutions of the Euler and Navier-Stokes equations. Then we give further models. In some sense, the vorticity formulation provides an explanation of why singular solutions to the Navier-Stokes equations are so hard. Roughly speaking, the zero order term is a pro-singularity term. The first and second order terms are perturbations. To be able to reconstruct a solution of the original equations from the vorticity, we need the initial vorticity to be divergence-free. Any of these is hard to handle. The combination of them composes one of the most difficult problems in mathematics. One potential major application of singular solutions of the Navier-Stokes equations concerns turbulence. There is a prevailing viewpoint that the theory of turbulence is deeply connected with the solutions of the Navier-Stokes equations.
Indeed, one can interpret the behaviour of turbulence through the conjectured properties of singular solutions to the Navier-Stokes equations. This paper is organized as follows. In section two, we discuss the two-dimensional zero order scalar models. In section three, further models are given. The possible connection between singular solutions of the Navier-Stokes equations and turbulence theory is presented in section four. The notations we use are standard ones. \section{Zero order scalar models} The vorticity formulation of the three-dimensional Euler equations is the following: \begin{equation}\label{2.1} w_{t}+u \cdot \nabla w-\nabla u\, w=0. \end{equation} In~$R^{3}$, the velocity~$u$~is given by the Biot-Savart law: \begin{equation}\label{1.4} u=\mbox{\rm curl}~\Delta^{-1}w. \end{equation} Note that the equations (\ref{2.1}) and (\ref{3.1}) are well defined even when the vorticity is not divergence-free. Define \begin{equation}\label{2.2} Z_{ij}=\partial _{ij}\Delta^{-1},\quad x\in\Omega, \end{equation} where $\Omega$ is a bounded domain in $R^n$ or $R^n$ itself, $n \geq 2$. If the domain is bounded, then the boundary condition for $\Delta$ is the homogeneous Dirichlet boundary condition: \begin{equation}\label{2.3} \Delta^{-1}w\mid_{\partial\Omega}=0. \end{equation} In $R^3$, the term~$\nabla u$ can be rewritten as \begin{equation}\label{2.4} \begin{split}\nabla u&=\nabla \mbox{\rm curl}\,\Delta^{-1}w\\ &=\left( \begin{array}{ccc} Z_{21}w_{3}-Z_{31}w_{2}&Z_{31}w_{1}-Z_{11}w_{3}&Z_{11}w_{2}-Z_{21}w_{1}\\ Z_{22}w_{3}-Z_{32}w_{2}&Z_{32}w_{1}-Z_{12}w_{3}&Z_{12}w_{2}-Z_{22}w_{1}\\ Z_{23}w_{3}-Z_{33}w_{2}&Z_{33}w_{1}-Z_{13}w_{3}&Z_{13}w_{2}-Z_{23}w_{1}\\ \end{array} \right). \end{split} \end{equation} \begin{remark} If the domain has a boundary, then in general (\ref{1.4}) is not valid. Neither is \eqref{2.4}. But conceptually, the equality (\ref{2.4}) is still close to being true.
We refer to \cite{Dur} and references therein for more details about the reconstruction of the velocity from the vorticity. \end{remark} The Constantin-Lax-Majda model has the following form: \begin{equation}\label{1.1} \theta_{t}=H(\theta)\theta, \end{equation} where $H$ is the Hilbert transform. One natural generalization of the Constantin-Lax-Majda model is:\\ {\bf Model 1.} \begin{equation}\label{2.5} w_{t}=Z_{11}w\, w,\quad x\in\Omega\subset R^{2}. \end{equation} \begin{claim} The model equation is locally well-posed in $W^{1,p}$, $p>2$; i.e.\ for every $w_{0}\in W^{1,p}$ there exist $T>0$ and $w\in C((0,T),W^{1,p})$ such that $w(x,0)=w_{0}$ and $w$ satisfies (\ref{2.5}). \end{claim} The proof is standard and we omit it. Next we present some elementary singular solutions of Model 1. One feature of the zero order model is that self-similar singular solutions can be considered in bounded domains. Let \begin{equation}\label{2.6} w=\frac{1}{T-t}Q(\frac{x}{T-t}). \end{equation} Then the equation for $Q$ is \begin{equation}\label{2.7} Z_{11}Q~Q=Q. \end{equation} The interesting point is that when the domain is an ellipse, the equation (\ref{2.7}) has a constant solution. \begin{theorem}\label{claim2.3} Assume $\Omega=\{ax_{1}^{2}+bx_{1}x_{2}+cx_{2}^{2}<1\},a,c>0,b^{2}-4ac<0$; then the equation (\ref{2.7}) has the constant solution $Q=1+\frac{c}{a}$. \end{theorem} \begin{proof} Since the domain is the ellipse above, we have \begin{equation*} \Delta^{-1}~1=\frac{1}{2(a+c)}(ax_{1}^{2}+bx_{1}x_{2}+cx_{2}^{2}-1). \end{equation*} So \begin{equation*} Z_{11}~1=\frac{a}{a+c}. \end{equation*} Therefore $Q=1+\frac{c}{a}$ solves (\ref{2.7}). The theorem is proven. \end{proof} Going back to the original equation, we see that $w=\frac{a+c}{a} \cdot \frac{1}{T-t}$ is a singular solution to Model 1 if the domain is an ellipse. Theorem 2.1 seems to be the first singular-solution result for two-dimensional nonlocal models. Consider the following simpler version of Model 1.
\begin{equation}\label{a} (Z_{11}+aZ_{22})w\, w=w. \end{equation} \begin{claim} Assume $a>0$ and the domain $\Omega$ is a rectangle or the whole space. Then for any measurable set $E\subset \Omega$, (\ref{a}) has a solution $w\in L^2(\Omega)$ such that \begin{equation*} (Z_{11}+aZ_{22})w\big|_{\Omega \setminus E}=1. \end{equation*} \end{claim} \begin{proof} Without loss of generality, we may assume the rectangle is $(0,\pi) \times (0,\pi)$. In this case $\sin k\cdot x$, $k=(k_{1},k_{2})$ with $k_{i}$ positive integers, form a complete orthogonal basis, and we may write $w=\sum_{k}\lambda_{k} \sin k \cdot x$. So we have \begin{equation*} Z_{11}w=\sum_{k_{1},k_{2}=1}^{\infty}\frac{k_{1}^{2}}{k_{1}^{2}+k_{2}^{2}}\lambda_{k}\sin k\cdot x. \end{equation*} Therefore \begin{equation}\label{1.5} \int_{(0,\pi)\times (0,\pi)}Z_{11}w\, w \,\mbox{\rm d}x\geq0. \end{equation} In the whole space case, similarly we have \begin{equation}\label{1.6} \int_{R^{2}}Z_{11}w\, w \,\mbox{\rm d}x=\int_{R^{2}}\frac{k_{1}^{2}}{k_{1}^{2}+k_{2}^{2}}|\tilde{w}|^{2}~\mbox{\rm d}k\geq0, \end{equation} where~$\tilde{w}$~is the Fourier transform of~$w$. Also note that $Z_{11}$ is self-adjoint. Define \begin{equation*} L_a=Z_{11}+aZ_{22}. \end{equation*} So under the assumptions of the current claim, $L_a$ is coercive: \begin{equation} \langle L_a w,w\rangle_{L^2}\ge a\parallel w\parallel_{L^2}^{2}. \end{equation} The proof mainly comes from the coercivity of $L_a$. Below are the details. Note $$(\ref{a})\Longleftrightarrow (L_aw-1)w=0. $$ So solving (\ref{a}) reduces to finding $w$ such that \begin{equation*} \begin{cases} w=0, \quad x\in E,\\ L_aw=1,\quad x\notin E. \end{cases} \end{equation*} Define \begin{equation}\label{b} \tilde L_a\tilde w=L_a w\big|_{\Omega\setminus E}, \end{equation} \begin{equation*} w=\begin{cases}0, & x\in E,\\ \tilde w, & x\notin E, \end{cases} \qquad \tilde w\in L^2(\Omega\setminus E). \end{equation*} In some sense $\tilde L_a$ is the restriction of $L_a$ to $L^2(\Omega\setminus E)$. Note $\tilde L_a$ is also self-adjoint.
Below we show that $\tilde L_a$ is one-to-one and that its inverse is bounded.\\ 1) One-to-one. \\ Assume $\tilde w\in L^2(\Omega\setminus E)$ and $\tilde L_a \tilde w=0$. Define \begin{equation*} w=\begin{cases} 0,\quad x\in E,\\ \tilde w,\quad x\in \Omega\setminus E, \end{cases} \mbox{as in (\ref{b})}. \end{equation*} Then we have \begin{equation}\label{c} \langle L_a w, w\rangle_{L^2(\Omega)}=\langle \tilde L_a\tilde w, \tilde w\rangle_{L^2(\Omega\setminus E)}=0. \end{equation} By coercivity, \begin{equation*} w\equiv 0, \end{equation*} which implies $\tilde w\equiv 0$. So $\tilde L_a$ is one-to-one.\\ 2) The inverse is bounded.\\ Using (\ref{b}) and (\ref{c}), and noting that $\parallel w\parallel_{L^2(\Omega)}=\parallel \tilde w\parallel_{L^2(\Omega\setminus E)}$, we get \begin{eqnarray*} \langle\tilde L_a\tilde w, \tilde w\rangle&=&\langle L_a w, w\rangle\nonumber\\&\ge& a\parallel w\parallel_{L^2}^2\nonumber\\&= &a\parallel \tilde w\parallel_{L^2}^{2}. \end{eqnarray*} So, by the Cauchy-Schwarz inequality, \begin{eqnarray*} a\parallel\tilde w\parallel_{L^2}^{2}\le \parallel\tilde w\parallel_{L^2}\parallel\tilde L_a\tilde w\parallel_{L^2}, \end{eqnarray*} i.e. \begin{eqnarray*} \parallel\tilde w\parallel_{L^2}\le a^{-1}\parallel\tilde L_a\tilde w\parallel_{L^2}. \end{eqnarray*} Hence the inverse of $\tilde L_a$ is bounded. \\ Next we show that $\tilde L_a$ is onto.\\ Assume the contrary. Then there exists $\tilde w\notin \tilde L_a(L^2(\Omega\setminus E))$. Since the inverse of $\tilde L_a$ is bounded, the space $\tilde L_a(L^2(\Omega\setminus E))$ is closed. Denote by $\tilde w_1$ the projection of $\tilde w$ on $\tilde L_a(L^2(\Omega\setminus E))$. Then $$\langle \tilde w-\tilde w_1, \tilde L_a(L^2(\Omega\setminus E))\rangle=0.$$ So for all $\tilde u \in L^2(\Omega\setminus E)$, by self-adjointness, $$\langle \tilde L_a(\tilde w-\tilde w_1),\tilde u \rangle=0.$$ This means $\tilde L_a(\tilde w-\tilde w_1)=0$, and by injectivity $\tilde w=\tilde w_1$. A contradiction. Therefore $\tilde L_a$ is onto.\\ Having proved that $\tilde L_a$ is onto, the proof is essentially finished.
Let $\tilde w=\tilde L_a^{-1}1$ and \begin{eqnarray*}w=\begin{cases}0,\quad x\in E,\\ \tilde w, \quad x\notin E. \end{cases}\end{eqnarray*} Then $w$ is a solution to (\ref{a}). The claim is proven. \end{proof} \begin{remark} The claim above might help a little in the study of singular solutions of Model 1. \end{remark} There is a small generalization of Model 1. \\ {\bf Model $1'$.}\\ \begin{equation}\label{2.8} w_{t}=Z_{12}w~w,\quad x\in\Omega\subset R^{2}. \end{equation} Model $1'$ seems a little harder than Model 1. There is some related evidence. For instance, assume $\varphi\in L^{2}(R^{2})$; then $$\int_{R^{2}}Z_{12}\varphi\,\varphi \,\mbox{\rm d}x=\int_{R^{2}}\frac{k_{1}k_{2}}{k_{1}^{2}+k_{2}^{2}}|\tilde{\varphi}|^{2}\, \mbox{\rm d}k $$ can change sign, where $\tilde{\varphi}$ is the Fourier transform of $\varphi$. Also, for the simple singular solution in the ellipse, we need $b$ to be non-zero. More precisely, we have the following theorem. \begin{theorem} Assume $\Omega=\{ax_{1}^{2}+bx_{1}x_{2}+cx_{2}^{2}<1\},a,c>0,b^{2}-4ac<0,b\neq0$; then \begin{equation*} w(t)=\frac{1}{T-t}\cdot\frac{2a+2c}{b} \end{equation*} is a self-similar singular solution to (\ref{2.8}). \end{theorem} \begin{proof} The proof is essentially the same as that of Theorem 2.1. \end{proof} For zero order models, the bounded domain case might be simpler than the whole space case since the former is a compact region. \section{Further models} The vorticity formulation of the three-dimensional Navier-Stokes equations is the following: \begin{equation}\label{3.1} w_{t}-\Delta w+u\cdot\nabla w-\nabla u\, w=0. \end{equation} By simplifying the zero order term, or removing the first or second order terms, we can obtain various model equations for the Navier-Stokes equations. The usual way of simplifying the zero order term is to replace $\nabla u$ with a simpler zero order operator.
{\bf Examples.} \begin{equation}\label{3.2} w_{t}= \left( \begin{array}{cc} w_{1}+Z_{11}w_{1}&\frac{1}{2}w_{1}\\ \frac{1}{2}w_{1}&w_{1}+Z_{11}w_{1}\\ \end{array} \right)w, ~x\in\Omega\subset R^{2},w\in R^{2}, \end{equation} \begin{equation}\label{3.3} w_{t}=\nabla u~w,~x\in\Omega\subset R^{3},w\in R^{3}, \end{equation} \begin{equation}\label{3.4} w_{t}+u\cdot\nabla w-Z_{11}w~w=0,~x\in R^{2}, \end{equation} \begin{equation}\label{3.5} w_{t}-\Delta w-Z_{11}w~w=0,~x\in R^{2}, \end{equation} \begin{equation}\label{3.6} w_{t}-\Delta w+u\cdot\nabla w-Z_{11}w~w=0,~x\in R^{2}. \end{equation} In 2D,~$u=(-\partial_{x_{2}}\Delta^{-1}w,\partial_{x_{1}}\Delta^{-1}w)$. Roughly speaking, equation (\ref{3.2}) is a simple case in which the model is a system (the matrix in (\ref{3.2}) is symmetric, and the equation also has simple singular solutions similar to Theorem 2.1). Equation (\ref{3.3}) models the pro-singularity effect of the zero order term in the vorticity formulation. Equations (\ref{3.4}), (\ref{3.5}) and (\ref{3.6}) model the effects of the first order perturbation, the second order perturbation, and the first and second order perturbations combined, for scalar equations. \begin{remark} Unlike the zero order models, for first and second order models the whole space case should probably be considered first. One reason is that self-similar singular solutions for PDEs only occur in the whole space. \end{remark} Next we make some further discussions. {\bf Divergence-free requirement on initial vorticity.} First note that if we have solutions to the vorticity equations in 3D and the initial vorticity is divergence-free, then we can construct solutions to the original equations. \begin{lemma} Assume $w$ satisfies (\ref{2.1}),(\ref{2.2})~or~(\ref{3.1}),(\ref{2.2}) in $R^{3}$, $\mbox{\rm div}~w(x,0)=0$, and $w$~is regular enough and has good decay at infinity. Then $u=\mbox{\rm curl}~\Delta^{-1}w$ solves the original Euler or Navier-Stokes equations.
\end{lemma} \begin{proof} We will mainly present the proof for the Euler equations. The situation for the Navier-Stokes equations is very similar. Let $w\in C((0,T),~H^{2}(R^{3}))$. Let $u_{0}=\mbox{\rm curl}~\Delta^{-1}w(x,0)$; then $u_{0}\in W^{2,6}$. Standard local existence theory for the Euler equations (for instance, \cite{MB}) implies there exists a solution $u^{(1)}\in C((0,T_{1}),W^{2,6}),~T_{1}=T_{1}(\parallel u_{0}\parallel_{W^{2,6}})$. Set $w^{(1)}=\mbox{\rm curl}~u^{(1)}$. Note that $\mbox{\rm curl}~\Delta^{-1}w^{(1)}$ is the unique solution to\\ \begin{equation}\label{3.15} \begin{cases} \mbox{\rm curl}~v=w^{(1)},~~x\in R^{3},\\ \mbox{\rm div}~v=0. \\ \end{cases} \end{equation} Then $w^{(1)}$ satisfies (\ref{2.1})-(\ref{2.2}) and $w^{(1)}(x,0)=w(x,0)$. Take $w^{(2)}=w^{(1)}-w$. Standard energy estimates imply $w^{(2)}\equiv 0$. Now the Euler equations case is finished by a standard continuation argument. For the Navier-Stokes equations, we can assume $w\in C((0,T),L^{2}(R^{3}))$. The rest is essentially the same. \end{proof} Note that singular solutions with well-posed initial data are regular before the singularities occur. So Lemma 3.1 implies that singular solutions to the vorticity equations will generate singular solutions to the original equations. Therefore, as long as a model is a system, there are two situations: the initial data is divergence-free or not. In general, the divergence-free requirement makes the situation harder. If the dimension is higher than three, it is convenient to think of the velocity as a 1-form and the vorticity as a 2-form. In this case, the divergence-free requirement becomes $\mbox{\rm d} w_0=0$, where $\mbox{\rm d}$ is the exterior derivative and $w_0$ is the initial vorticity. We refer to \cite{Se} for more details in the case of $R^n, n>3$. {\bf Skew-symmetry of the zero order term.} In the roughest sense, one may think of the zero order term $\nabla u\, w$ as $w^2$, $w\in R$. Therefore one might expect that it has some pro-singularity effect.
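The $w^{2}$ caricature indeed blows up in finite time: $\dot w=w^{2}$ with $w(0)=w_{0}>0$ gives $w(t)=w_{0}/(1-w_{0}t)$, which is singular at $T=1/w_{0}$, matching the $1/(T-t)$ profiles of the zero order models. A minimal numerical check (an illustration added here, not part of the original argument; the time step and blow-up threshold are arbitrary):

```python
def euler_blowup(w0, dt=1e-6, w_max=1e6):
    """Integrate dw/dt = w^2 by explicit Euler until w exceeds w_max.
    The exact solution w(t) = w0/(1 - w0*t) blows up at T = 1/w0."""
    w, t = w0, 0.0
    while w < w_max:
        w += dt * w * w
        t += dt
    return t

T_exact = 1.0 / 2.0            # blow-up time of the exact solution for w0 = 2
T_num = euler_blowup(2.0)      # numerical time needed to reach the threshold
assert abs(T_num - T_exact) < 1e-2
```

The numerical escape time approaches $1/w_{0}$ as the threshold grows, reflecting the finite-time blow-up that the quadratic zero order term is expected to drive.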
\begin{claim} Singular solutions generated from constants, as in Theorem 2.1, do not exist for equation (\ref{3.3}). \end{claim} \begin{proof} Define the generalized Kronecker symbol: \begin{equation*} \delta_{jl}^{i}= \begin{cases} ~1,&\mbox{$(i,j,l)$ is an even permutation of $(1,2,3)$}, \\ -1,&\mbox{$(i,j,l)$ is an odd permutation of $(1,2,3)$},\\ ~0,&\mbox{otherwise}. \\ \end{cases} \end{equation*} So \begin{equation*} \begin{split}(\nabla u)_{mi}&=\partial_{m}u_{i}\\ &= \partial_{m}\delta_{jl}^{i}\partial_{j}\Delta^{-1}w_{l}\\ &=\delta_{jl}^{i}Z_{jm}w_{l}, \end{split} \end{equation*} and \begin{equation}\label{3.7} (\nabla u~w)_{m}=\delta_{jl}^{i}Z_{jm}w_{l}w_{i}. \end{equation} Here the domain is the ellipsoid~$\{a_{ij}x_{i}x_{j}<1\}$ with $a_{ij}=a_{ji}$, $\sum\limits_{i=1}^{3} a_{ii}=1$, and $(a_{ij})$~positive definite, so that $Z_{jm}~c=a_{jm}c$ for any constant $c$. Therefore, for any constant vector $c\in R^{3}$, \begin{equation}\label{3.8} \begin{split}&(\nabla u\,w)_{m}\mid_{w=c}\\ =&\delta_{jl}^{i}Z_{jm}c_{l}c_{i}\\ =&\delta_{jl}^{i}a_{jm}c_{l}c_{i}\\ =&0, \end{split} \end{equation} since the contraction of $\delta_{jl}^{i}$ (antisymmetric in $i$ and $l$) with the symmetric product $c_{l}c_{i}$ vanishes. The claim is proven. \end{proof} The claim above suggests that the zero order term has a certain algebraic skew-symmetry, which may cause additional trouble in the study of singular solutions. {\bf Possible steps toward the Euler equations.} In the most optimistic scenario, the study of model equations might lead to the existence of singular solutions of the Euler equations and even the Navier-Stokes equations. The following are possible steps toward the Euler equations. \begin{enumerate} \item Model 1, \item \eqref{3.3}, \item \eqref{3.4}, \item The whole Euler equations. \end{enumerate} \begin{remark} It was suggested in \cite[p.~3]{Du05} that the degree of difficulty for singular solutions to the Navier-Stokes equations may decrease a lot in higher dimensions. Probably this scenario will also hold true for certain second order models.
\end{remark} \begin{remark} For zero order models, if there is no divergence-free requirement on the initial data, the self-similar singular solutions probably exist. But for more complicated situations, one might have to work on singular solutions of general form. One piece of evidence is that the Navier-Stokes equations do not have self-similar singular solutions in any dimension \cite{NRS,Ts}. There is also no reliable numerical evidence that the Euler equations have self-similar singular solutions. \end{remark} \section{Possible connection with turbulence} It is well accepted that the main features of turbulence are irregularity, randomness, and chaos. Based on what is known about the Navier-Stokes equations and the features of turbulence, it seems reasonable to make the following guess. \noindent {\bf Conjecture 4.1. } {\it The singular solutions of the three-dimensional Navier-Stokes equations are generically fluctuating.} Using the conjecture above, we can interpret turbulence in the following way. Since the solution is a fluctuating singular one, its average is irregular. The randomness comes from the infinite amplification, by a fluctuating singular solution, of arbitrarily small experimental errors. The chaotic behavior can be explained in a similar way. \begin{remark} The difficulties for singular solutions of the Navier-Stokes equations might be viewed as the combination of the difficulties for local convection-diffusion equations with energy conservation and for the Euler equations. The results on model equations \cite{DL, PS} and numerical simulations for the Euler equations suggest that in dimension five and higher the usual singular solutions are possibly also typical for the Navier-Stokes equations. There is no information in dimension four so far. \end{remark} At this stage little is known regarding the singular solutions of the Navier-Stokes equations. Therefore the applications to turbulence theory are limited so far.
With the development of the mathematical theory of singular solutions, more and more applications can be expected. To some degree, a good understanding of turbulence may depend on a good understanding of singular solutions to the three-dimensional Navier-Stokes equations. \section*{Acknowledgements} The author wishes to thank Hongjie Dong and Vladimir Sverak for valuable comments. The author would also like to thank Yipeng Shi for wonderful discussions on turbulence. The author was partially supported by NSFC grant No. 11571066. \bibliographystyle{amsplain}
\section{Introduction} \label{sec:intro} High redshift star-forming galaxies (SFGs) show increasingly irregular and clumpy morphologies compared to the local Universe. This was first observed at rest-frame UV wavelengths with the Hubble Space Telescope (HST) \citep{Griffiths1994,vandenbergh1996,Elmegreen2007}, and has since been confirmed in rest-frame optical light \citep{Elmegreen2009,Forster2011,Guo2012}, H$\alpha$ emission \citep{Swinbank2009,Jones2010, Genzel2011, Livermore2012, Wisnioski2012}, and sub-millimeter emission \citep{Swinbank2010}. Locally, such irregular morphologies are associated with mergers, but a range of recent observational findings point toward this not being the common explanation at high redshift. The existence of a tight correlation between star formation rate (SFR) and stellar mass out to at least $z\sim2.5$, referred to as the main-sequence of star formation, favors continuous star formation activity over a series of rapid, luminous merger-driven bursts \citep{Noeske2007,Elbaz2007,Daddi2007,Wuyts2011}. Outliers of the main-sequence account for only 10\% of the cosmic star formation density at $z\sim2$ \citep{Rodighiero2011}. Counts of both galaxy pairs and disturbed morphologies \citep{Conselice2009,Lotz2011,Kaviraj2013}, as well as studies of gas-phase kinematics \citep{Forster2009,Epinat2012}, limit the $z\sim2$ merger rate to at most 30\%. Based on the high gas fractions of 30-80\% \citep{Daddi2010,Tacconi2010,Tacconi2012} and high velocity dispersions of 50-100~km/s \citep{Forster2009, Law2009, Wisnioski2011} of $z\sim2$ SFGs, an alternative picture has developed where luminous kpc-sized clumps are formed through gravitational instabilities in a dynamically unstable, gas-rich, turbulent disk \citep{Noguchi1999, Immeli2004a, Immeli2004b, Bournaud2007, Dekel2009, Genel2012}.
If they can survive long enough, these clumps are expected to migrate towards the center of the galaxy due to dynamical friction and coalesce into a young bulge on timescales of $\sim0.5$~Gyr. In this scenario the galaxy's gas reservoir is continuously replenished by the accretion of gas from the halo through minor mergers and cold flows, to sustain the observed large gas fractions and strong star formation activity \citep{Keres2005, Keres2009, Bournaud2009, Dekel2009}. These observational and theoretical results favour internal secular evolution over major mergers to explain the high star formation density and clumpy morphology of SFGs at $z\sim2$. However, merging is still thought to play an important role in the cosmological mass assembly of galaxies, the quenching of star formation and the morphological transitions of galaxies from late to early-type (e.g. \citealt{Springel2005, Naab2003, Naab2007, Guo2008, Hopkins2010}). \par \vspace{2.3ex plus 0.3ex minus 0.3ex} Current observational studies of the kinematics of high-z SFGs and the properties of their individual star-forming regions are limited by the available spatial resolution. At FWHM $\sim 0\farcs1$, which corresponds to roughly 1~kpc at $z=2$, both HST imaging and adaptive optics (AO) assisted integral field spectroscopy (IFS) at 8-10~m class telescopes barely resolve the largest clumps. Beam-smearing of the velocity field can compromise the kinematic classification of merger signatures as well as the analysis of turbulence, shocks and outflows from the velocity dispersion map (e.g. \citealt{Kronberger2007, Davies2011}). Strong gravitational lensing of high redshift background galaxies by foreground galaxy clusters can increase their apparent size by an order of magnitude or more. 
IFS studies of lensed galaxies have already provided detailed views of galaxy kinematics up to $z=4.9$ \citep{Nesvadba2006, Swinbank2009, Jones2010, Jones2013, Yuan2011, Yuan2012, Shirazi2013}, confirming the common occurrence of rotating disks at high redshift. The improved spatial resolution has allowed these studies to specifically address shocks and outflows \citep{Yuan2012}, metallicity gradients \citep{Yuan2011, Jones2013}, and the properties of star-forming clumps \citep{Swinbank2009, Jones2010}. In this paper we present a detailed analysis of HST/WFC3 optical/near-IR imaging and AO-assisted Keck/OSIRIS IFS data for the brightest distant lensed galaxy currently known in the Universe, RCSGA 032727-132609 at $z=1.7$ \citep{me2010}. This combined dataset probes the galaxy kinematics as well as the morphology of both the ongoing star formation and the established stellar population at spatial resolutions down to $\sim100$pc in the galaxy's source plane. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} The paper is organised as follows. \S\ref{sec:data} describes the main Keck/OSIRIS integral field spectroscopy observations and data reduction as well as supporting datasets from HST/WFC3, Keck/NIRSPEC and Magellan/FIRE. The kinematic analysis of the IFS data is presented in \S\ref{sec:kin} and \S\ref{sec:clumps} reports on the physical properties of the individual star-forming clumps that can be identified in both datasets. The radial variations of these properties across the galaxy are studied in \S\ref{subsec:radial} and the clumps are compared to scaling relations of local H~II regions in \S\ref{subsec:scaling}. We present spatially resolved SED modelling of the system in \S\ref{sec:spatialsed}. \S\ref{sec:disc} summarises the observational results and discusses what we can infer for the physics governing the morphology and kinematics of the system. Throughout this work, we adopt a flat cosmology with $\Omega_M = 0.3$ and H$_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$. 
All magnitudes are quoted in the AB system. \section{Observations and Data Reduction} \label{sec:data} RCSGA 032727-132609, hereafter RCSGA0327, at $z=1.703$ is the brightest and most obvious strong lensing system found in the Second Red-Sequence Cluster Survey (RCS2; \citealt{Gilbank2011}). Its discovery, preliminary lensing analysis and global galaxy properties are presented in \cite{me2010,me2012a}. The system consists of a counter-image, and a giant arc which extends over 38\arcsec\ on the sky and is made up of three merged images of the background source. Its intrinsic stellar mass of $6.3\pm0.7 \times 10^9$~M$_\odot$ and SFR of 30-50~M$_\odot$~yr$^{-1}$ \citep{me2012a} translate into a specific star formation rate $\log(\mathrm{sSFR}/\mathrm{yr}^{-1})=-8.3$, which lies a factor of 3 above the main sequence of star formation at $z\sim2$ \citep{Daddi2007}. This paper focuses on the joint analysis of HST/WFC3 imaging and Keck/OSIRIS integral field spectroscopy observations of the arc. To support the analysis, we fold in additional long-slit near-infrared spectroscopy from the NIRSPEC instrument on Keck, and the FIRE instrument on Magellan. \begin{figure*} \centering \includegraphics[width=\textwidth]{clumps_eva.eps} \caption{Identification of substructure in the giant arc and counter-image of RCSGA0327 from HST/WFC3 imaging; North is up and East is left. The color rendition is composed of F160W+F125W+F098M (red); F814W+F606W (green); and F390W (blue), to highlight color gradients in the arc. The left panel extends over 35\arcsec\ $\times$ 15\arcsec. The dashed lines approximately enclose the parts of the arc that compose each of the three merged images, indicated by numbers 1-3; image 4 corresponds to the counter-image, which is a relatively undistorted image of the source-plane galaxy. Individual star-forming regions are labeled with letters $a$ through $k$; the two cluster galaxies that fall on top of image 3 are labeled $G1$ and $G2$ in green. 
Due to the location of the source galaxy with respect to the lensing caustic, the western side (clumps $a$-$f$; red labels) appears in all of the four images, while the brightest knot of the source (labeled $g$) and the eastern ``arm'' (clumps $h$-$k$; blue labels) only appear in images 3 and 4. In image 3, the lensing perturbation by cluster galaxy $G1$ results in another instance of clump $g$. The two OSIRIS pointings with a $1\farcs6 \times 6\farcs4$ field of view and position angles of 297\arcdeg\ and 352\arcdeg\ East of North are overlaid in yellow. Figure adapted from \cite{Sharon2012}. \label{fig:implane}} \end{figure*} \subsection{HST/WFC3 Imaging} \label{subsec:data-hst} RCSGA0327 was imaged with the Wide Field Camera 3 (WFC3) on HST under GO program 12267 (PI: Rigby). In four orbits, the source was observed with one narrow-band filter targeting H$\beta$ (F132N) and six broad-band filters (F390W-F606W-F814W in the UVIS channel, F098M-F125W-F160W in the IR channel). The imaging strategy consisted of four sub-pixel dither positions in each filter to reconstruct the PSF, reject cosmic rays and compensate for the chip-gap. Individual frames were processed with the standard WFC3 calibration pipeline, and combined using the Multidrizzle routine \citep{Koekemoer2002}. Based on the HST data, \cite{Sharon2012} have matched the substructure in the four different images of the source galaxy to create a robust and well-constrained lens model with magnification uncertainties $\le$10\%. We adopt the naming convention introduced there and shown in Figure~\ref{fig:implane} to identify the star-forming clumps. Source-plane reconstructions of the galaxy are created by ray-tracing the image pixels through deflection maps generated from the lens model. 
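The ray-tracing step just described follows the lens equation $\beta=\theta-\alpha(\theta)$. A toy sketch (illustrative only: a singular isothermal sphere stands in for the actual cluster deflection maps of the lens model, and the grid size, Einstein radius and mock flux distribution are arbitrary choices):

```python
import numpy as np

# Toy source-plane reconstruction: trace image-plane pixels through a
# deflection field via the lens equation  beta = theta - alpha(theta).
theta_E = 1.0                                    # Einstein radius (arcsec), assumed
n = 200
x = np.linspace(-3, 3, n)
tx, ty = np.meshgrid(x, x)                       # image-plane angles theta
r = np.hypot(tx, ty) + 1e-12                     # avoid division by zero at origin
ax, ay = theta_E * tx / r, theta_E * ty / r      # SIS deflection alpha(theta)
bx, by = tx - ax, ty - ay                        # source-plane positions beta

# Deposit image-plane flux onto a source-plane grid (nearest-pixel scatter).
image_flux = np.exp(-(r - 1.5) ** 2 / 0.1)       # mock ring of emission
src = np.zeros((n, n))
ix = np.clip(((bx + 3) / 6 * (n - 1)).round().astype(int), 0, n - 1)
iy = np.clip(((by + 3) / 6 * (n - 1)).round().astype(int), 0, n - 1)
np.add.at(src, (iy, ix), image_flux)             # unbuffered scatter-add
```

For the SIS the deflection is purely radial, so rays at the Einstein radius map onto the source-plane origin and $|\beta|=|r-\theta_E|$; in the real analysis the analytic deflection is simply replaced by the deflection maps derived from the cluster lens model.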
\subsection{Keck/OSIRIS observations} \label{subsec:data-osiris} We observed RCSGA0327 with the OH-Suppressing Infrared Imaging Spectrograph (OSIRIS; \citealt{Larkin2006}) on the Keck~II telescope on two half nights of 2011, October 17-18 UT. Conditions were good with seeing measured at 0.7-1.2\arcsec. OSIRIS is an integral field spectrograph with a lenslet array to provide simultaneous near-IR spectra at spectral resolution $R\sim3600$ for up to 3000~pixels in a rectangular field of view up to $4\farcs8 \times 6\farcs4$. Due to a technical difficulty with OSIRIS at the time of observations, only the broadband filters were available. We targeted the H$\alpha$ emission line with the Hbb filter in the $0\farcs1$ pixel$^{-1}$ scale, which limits the field of view to $1\farcs6 \times 6\farcs4$. To correct for atmospheric distortion, the laser guide star adaptive optics system (LGSAO; \citealt{Wizinowich2006,vanDam2006}) was applied with a tip-tilt star of $R=17.0$ at a distance of $\sim40$\arcsec\ from the giant arc. This delivers a Strehl ratio of $\sim0.2$. Unfortunately, no suitable tip-tilt star is available for LGSAO observations of the counter-image. Due to the limited field of view of the spectrograph and the large size of the giant arc, we were not able to observe its full extent. Figure~\ref{fig:implane} shows the two pointings we selected in the image plane; we will refer to them as pointing 2 and 3 since they target parts of the arc that correspond to images 2 and 3 of the source galaxy. Pointing 2 at a position angle of 297\arcdeg\ East of North corresponds to one of the most highly magnified regions of the giant arc \citep{Sharon2012}, where we can take maximal advantage of the increased brightness and spatial resolution to study clumps $a$ through $f$. Pointing 3 at a position angle of 352\arcdeg\ East of North was chosen because it covers the full extent of the source.
The early-type cluster galaxies $G1$ and $G2$ that fall on top of the arc in this pointing are not expected to show any emission lines, so they should not contaminate the H$\alpha$ flux. The observations started with short 30\,s exposures of the tip-tilt star to center the pointing. These exposures are also used to calculate the point spread function (PSF): 2D Gaussian fits to the tip-tilt exposures yield a FWHM resolution of $\sim0.15$\arcsec. From the tip-tilt star, we applied a blind offset to acquire each of the two pointings. Individual science exposures have an integration time of 900\,s and are dithered by up to $0\farcs5$ around the base position to remove bad pixels and cosmic rays. Off-source sky-frames were necessary because the source fills most of the narrow $1\farcs6 \times 6\farcs4$ field of view. We achieved a total on-source integration time of 1.5~hours for pointing 2 and 3.5~hours for pointing 3. Data reduction was carried out with the OSIRIS data reduction pipeline (version 2.3)\footnotemark[1], which removes crosstalk, detector glitches, and cosmic rays and performs a scaled sky subtraction based on \cite{Davies2007}. Individual datacubes are mosaicked using a 3$\sigma$ clipping average and the final datacube is flux calibrated based on the 2MASS $H$-band magnitude of the tip-tilt star. \cite{Law2009} estimate a 30\% systematic uncertainty in the fluxing, largely from rapid variations of the AO-corrected PSF. \footnotetext[1]{http://irlab.astro.ucla.edu/osiris/} \subsection{Magellan/FIRE observations} \label{subsec:data-fire} \begin{figure*} \centering \includegraphics[width=14cm]{fire_nirspec_slits.eps} \caption{32\arcsec\ $\times$ 16 \arcsec\ HST/WFC3 image of RCSGA0327 in the F390W band. The black boxes approximately enclose each of the three images of the source galaxy which together make up the giant arc (as in Figure~\ref{fig:implane}). The $0\farcs76 \times 42$\arcsec\ NIRSPEC slit is shown in blue at PA=134\arcdeg\ East of North. 
Three pointings of the 1\arcsec\ $\times$ 6\arcsec\ FIRE slit are shown in red, targeting clumps $d1$-$e1$-$b1$ in image 1, clump $u$ and clump $b2$ in image 2 respectively. For simplicity the slits are shown centered on the arc; in reality we placed the source on the left and right sides of the slit for an ABBA nod pattern. \label{fig:slits}} \end{figure*} We observed RCSGA0327 with the Folded-port InfraRed Echellette (FIRE; \citealt{Simcoe2013}) at the Magellan Baade telescope in Chile on 2010 October 14-15 UT. The echelle mode delivers a continuous spectrum from 0.82-2.5$\micron$ at a spectral resolution $R=3600$ for the widest 1\arcsec\ $\times$ 6\arcsec\ slit. The seeing was monitored at the telescope and ranged from $0\farcs8$ to $1\farcs2$ on both nights. Based only on ground-based imaging, we chose three separate positions of the arc as shown in Figure~\ref{fig:slits}. With the HST imaging, we now know they correspond to clumps $b1$-$d1$-$e1$ in image 1, clump $u$, and clump $b2$ in image 2. The pointings were acquired through a blind offset from a nearby cluster galaxy; source acquisition was verified with the near-IR slit-viewing camera. The observations consisted of four individual 600~s exposures for each pointing, nodded along the slit in an ABBA pattern. The telluric star HD21875 was observed every hour for flux calibration purposes. We reduced the data using the custom pipeline provided by R. Simcoe \citep{Simcoe2013}, which uses OH skylines for wavelength calibration and performs sky subtraction using the techniques presented by \cite{Kelson2003}. The extracted spectra for clumps $u$ and $b2$ are shown in Figure~\ref{fig:spec}. We cannot spatially resolve clumps $b1$, $d1$ and $e1$ covered by the pointing in image 1. Since clumps $b1$ and $e1$ are of comparable brightness, we cannot derive line fluxes for individual clumps from this pointing and do not consider it further.
For each spectrum, we simultaneously fit all emission lines with a multi-component Gaussian model, using the IDL Levenberg-Marquardt least-squares fitting code MPFITFUN \citep{mpfitfun}. We obtain an initial fit of the bright H$\alpha$ emission line to establish a first guess for the redshift and linewidth. For the combined fit, we set the initial wavelength centroids of all lines based on their NIST rest wavelengths\footnotemark[2] and let them vary by up to three times the 1$\sigma$ uncertainty from the initial fit to accommodate errors in the wavelength calibration. For each spectrum, all lines are forced to share a common velocity width, since the nebular emission is expected to originate from the same physical region within the galaxy. We report the flux measurements in Table~\ref{tab:fluxes}. In the FIRE spectrum of clump $u$, the [Ne~III]~$\lambda$3869 emission line is not detected. We derive an upper limit as the flux contained within a Gaussian with the common linewidth and a peak value equal to twice the noise at the expected line center. \footnotetext[2]{http://www.pa.uky.edu/$\sim$peter/atomic/} \subsection{Keck/NIRSPEC observations} \label{subsec:data-nirspec} \begin{figure} \centering \includegraphics[width=8.5cm]{nirspec2d.eps} \caption{Two-dimensional sky-subtracted, nod-subtracted NIRSPEC spectra of RCSGA0327. Each frame is a subtraction of two nods, with one nod appearing positive (light) and the other negative (dark). In each panel, wavelength increases from bottom to top. The four panels show: \textit{(top left)} the N6 spectrum, with H$\alpha$, the [N~II] doublet, and [S~II]; \textit{(top right)} the N3 spectrum, with H$\beta$ and the [O~III]~$\lambda$4959,5007 doublet; \textit{(bottom left)} same as top right, but showing two additional nods; \textit{(bottom right)} the N1 spectrum, with [O~II]~$\lambda$3727.
\label{fig:nirspec2d}} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth,angle=90]{plotspec_fire_nirspec_forpaper.eps} \vspace{0.15in} \caption{Extracted NIRSPEC spectra for clumps $u$ and $e2$ and extracted FIRE spectra for clumps $u$ and $b2$. Three wavelength ranges are chosen to show the [O~II]~$\lambda$3727, H$\gamma$, H$\beta$, [O~III]~$\lambda$4959,5007, H$\alpha$ and [N~II] emission lines. The 1$\sigma$ error spectra are shown in red. The y-axis shows specific flux density in units of $10^{-16}$~erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$. \label{fig:spec}} \end{figure*} We obtained a total of 1.3~hr of near-IR long-slit spectroscopy of RCSGA0327 with Keck/NIRSPEC on 2010 February 4 UT, targeting the brightest 10\arcsec\ of the arc at a PA of 134\arcdeg\ East of North (Figure~\ref{fig:slits}) with three grating settings (NIRSPEC filters N1, N3 and N6). Detailed analysis of the collapsed one-dimensional spectra was published in \cite{Rigby2011}, hereafter R11. The subsequently obtained HST/WFC3 imaging revealed that the slit position covered several individual star-forming regions: clumps $f2$, $e2$ and $d2$ in image 2 and clump $u$ shared between image 1 and 2. Here we reanalyse the spectra to quantify the spatial variation of line fluxes and line ratios across the clumps. Details of the observations and the data reduction can be found in R11. Figure~\ref{fig:nirspec2d} shows the two-dimensional spectra for the main emission lines. The spectra are dominated by emission from clumps $u$ and $e2$, which are clearly resolved. Clump $e2$ has a $\sim20$~\% contribution from clumps $f2$ and $d2$, which cannot be separated. To compensate for differential seeing and slit losses, we apply the same bulk scaling of the N1 and N6 filters relative to the N3 filter as in R11. We again note that the relative fluxing should be excellent within a single grating setting, but from one grating setting to another the relative flux offsets may be large. 
The extracted one-dimensional spectra of clumps $u$ and $e2$ are plotted in Figure~\ref{fig:spec}. For each clump and grating setting, we simultaneously fit all emission lines as described above for the FIRE spectra; line fluxes can be found in Table~\ref{tab:fluxes}. Summing the line fluxes of clumps $u$ and $e2$ yields values that generally agree within 2$\sigma$ with the line fluxes published in R11. The absolute flux values of clump $u$ should not be compared between the FIRE and NIRSPEC spectra, since we have not applied any aperture correction between the two slits. We can address the relative fluxing of the different gratings by comparing line ratios for clump $u$ to the ratios measured in the FIRE spectrum of this clump. Using [O~II]~$\lambda$3727 in N1, H$\gamma$, H$\beta$ and [O~III]~$\lambda$4959 in N3, and H$\alpha$ in N6, we find weighted mean offsets of 9\%, 35\% and 40\% for N6-N3, N3-N1 and N6-N1 respectively. We caution that the only bright line in N1, the [O~II]~$\lambda$3727 doublet, is difficult to measure accurately. \begin{deluxetable}{lll|ll} \tabletypesize{\footnotesize} \tablewidth{0pc} \tablecaption{Measured line fluxes from NIRSPEC and FIRE.
\label{tab:fluxes}} \tablehead{ \colhead{} & \multicolumn{2}{c}{NIRSPEC} & \multicolumn{2}{c}{FIRE} \\ \colhead{emission line} & \colhead{clump $u$} & \colhead{clump $e2$} & \colhead{clump $u$} & \colhead{clump $b2$}} \startdata $[$O~II]~$\lambda$3727 & $544\pm13$ & $189\pm8$ & $12.6\pm1.1$ & $2.3\pm0.2$ \\ $[$Ne~III]~$\lambda$3869 & $64\pm5$ & $30\pm3$ & $<1.8$ & $0.69\pm0.05$ \\ H$\gamma$ & $68\pm3$ & $50\pm2$ & $2.4\pm0.3$ & $0.64\pm0.04$ \\ H$\beta$ & $171\pm3$ & $119\pm2$ & $5.8\pm0.3$ & $1.40\pm0.04$ \\ $[$O~III]~$\lambda$4959 & $275\pm2$ & $189\pm2$ & $10.1\pm0.5$ & $2.99\pm0.07$ \\ $[$O~III]~$\lambda$5007 & $909\pm4$ & $611\pm3$ & $44\pm1$ & $7.28\pm0.12$ \\ H$\alpha$ & $675\pm3$ & $388\pm2$ & $26.1\pm0.5$ & $4.82\pm0.06$ \\ $[$N~II]~$\lambda$6583 & $34\pm4$ & $51\pm3$ & $1.3\pm0.2$ & $0.20\pm0.02$ \\ \enddata \tablecomments{Fluxes are in units of $10^{-17}$~erg~s$^{-1}$~cm$^{-2}$.} \end{deluxetable} \section{Galaxy Kinematics} \label{sec:kin} \subsection{Mapping the Velocity Field} \label{subsec:kinmaps} \begin{figure*} \centering \vspace*{-10cm} \includegraphics[width=17cm]{kin3s_resubmission.eps} \vspace{-0.8cm} \caption{Source-plane maps of the F390W flux, H$\alpha$ flux, velocity, and velocity dispersion for image 3. The latter three maps are smoothed with a boxcar average of 3 pixels for the purpose of visualisation. The x- and y-axes are centered on clump $g$; clumps $g$, $e$, and $b$ are marked by black crosses (from left to right). The kinematic axis and the axis along the ``arm'' which extends to the North-East from clump $g$ are overlaid on the velocity map as dashed and solid lines respectively. \label{fig:kin3}} \vspace{-10cm} \includegraphics[width=17cm]{kin1s_resubmission.eps} \vspace{-0.8cm} \caption{Source-plane maps for image 2, which contains a smaller, but more highly-magnified part of the source-plane galaxy. The size, centering and scaling of the maps is identical to Figure~\ref{fig:kin3}.
Clumps $e$ and $b$ are marked by black crosses (from left to right). \label{fig:kin1}} \end{figure*} We create spatial and kinematic maps of the H$\alpha$ emission in RCSGA0327 by fitting a Gaussian profile to the H$\alpha$ emission line for every spatial pixel. The noise is estimated separately from a blank region of sky in each of the pointings. A minimum detection significance of $4.5\sigma$ is required for H$\alpha$; if this criterion is not met, the surrounding $3 \times 3$ spatial pixels are averaged and the fit is re-attempted. Formal uncertainties are derived for each spatial pixel from fitting 100 mock spectra consistent with the noise. The relative shifts of the wavelength centroid of the H$\alpha$ emission line translate into a velocity map of the ionized gas. We define an absolute velocity zeropoint based on the H$\alpha$ centroid determined for clump $e$ from the NIRSPEC spectra (\S~\ref{subsec:data-nirspec}), which corresponds to $\lambda_{H\alpha} = 1.774903\pm0.000003$\micron\ or $z=1.7037455\pm0.000005$. After applying a barycentric correction to the OSIRIS wavelength calibration, we find a wavelength shift of 1.1\AA\ for pointing 2 and 0.7\AA\ for pointing 3, well within the OSIRIS wavelength calibration uncertainty of up to 5\AA\footnotemark[3]. We correct the calibration of both pointings for these wavelength shifts. \footnotetext[3]{http://irlab.astro.ucla.edu/osiriswiki/} The H$\alpha$ flux, velocity and velocity dispersion maps are transformed into the source plane and shown in Figures~\ref{fig:kin3} and \ref{fig:kin1}. The maps are smoothed with a boxcar average of 3 pixels for the purpose of visualisation. The velocity dispersion is corrected for the instrument response function by subtracting the instrumental resolution in quadrature. This is determined from Gaussian fits to the OH sky lines and corresponds to a FWHM of 5.6\AA. Typical uncertainties in the velocity and velocity dispersion maps are $\sim10$~km/s.
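The wavelength-to-velocity bookkeeping above is compact enough to sketch. The snippet below is an illustration, not our reduction code: the speed of light and the vacuum H$\alpha$ rest wavelength are standard values, and the function names are invented for this example.

```python
import numpy as np

C_KMS = 299792.458          # speed of light [km/s]
LAM_REST_UM = 0.6564614     # vacuum rest wavelength of H-alpha [micron]
LAM0_UM = 1.774903          # H-alpha centroid of clump e (velocity zero-point)

# Redshift implied by the zero-point centroid (~1.7037).
z = LAM0_UM / LAM_REST_UM - 1.0

def velocity_kms(lam_obs_um):
    """Velocity of a fitted H-alpha centroid relative to the zero-point."""
    return C_KMS * (lam_obs_um - LAM0_UM) / LAM0_UM

def corrected_sigma_kms(fwhm_obs_angstrom, fwhm_inst_angstrom=5.6):
    """Remove the instrumental resolution in quadrature; FWHM [A] -> sigma [km/s]."""
    fwhm_intrinsic = np.sqrt(fwhm_obs_angstrom**2 - fwhm_inst_angstrom**2)
    sigma_angstrom = fwhm_intrinsic / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return C_KMS * (sigma_angstrom * 1.0e-4) / LAM0_UM   # 1 A = 1e-4 micron
```

By construction, a line whose observed width equals the 5.6\AA\ instrumental FWHM has zero intrinsic dispersion, and a 1.1\AA\ centroid shift (the barycentric correction for pointing 2) corresponds to roughly 19~km/s.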
The x- and y-axes of the maps have been centered on the brightest clump, clump $g$. To allow the source-plane transformations, the OSIRIS maps have to be aligned with the HST images on which the lens model is based. This is done visually, using the positions of the various clumps in both datasets. While this alignment does not achieve global spatial accuracy better than one OSIRIS pixel ($0\farcs1$), this does not affect the comparison of the relative locations of emission features within each pointing. The velocity map agrees with the velocity offsets between clumps measured in the FIRE and NIRSPEC data. Based on the H$\alpha$ emission line centroid, we find $\delta v = 15\pm1$~km~s$^{-1}$ between clumps $u$ and $e2$ from the NIRSPEC spectra and $\delta v = 44\pm3$~km~s$^{-1}$ between clumps $u$ and $b2$ from the FIRE spectra. \subsection{Analyzing the Velocity Field} \label{subsec:kinan} \begin{figure} \centering \includegraphics[width=9cm]{projectvel_resubmission.eps} \caption{\textit{(Top)} 1D velocity profile of the source-plane galaxy along its kinematic axis, constructed from the median and standard deviation of the velocity map in 0.7~kpc wide bins. The locations of clumps $g$, $e$ and $b$ are indicated for reference. \textit{(Bottom)} 1D velocity profile along the ``arm'' extending towards the North-East from clump $g$, indicated by the solid line in Figure~\ref{fig:kin3}. The peak in velocity near the middle of the arm is typical for a tidal tail. In both panels, the gray diamonds indicate the coarser spatial resolution available without lensing magnification; the points are shifted down in velocity by 40~km/s and 20~km/s in the top and bottom panels respectively to improve the clarity of the figure.\label{fig:kin1d}} \end{figure} The velocity map shown in Figure~\ref{fig:kin3} is well-structured and contains a strong velocity gradient.
We define a kinematic axis connecting the maximum and minimum velocity regions and extract a 1D velocity profile as the median and standard deviation within bins of 0.7~kpc width along this axis (Figure~\ref{fig:kin1d}; top panel). The bin width was matched to the FWHM resolution of the OSIRIS datacubes, such that individual datapoints in the 1D profile represent independent measurements. The locations of clumps $g$, $e$ and $b$ are shown for reference. This 1D profile does not display the smooth S-shaped curve expected from a rotating disk but instead levels off to a plateau between clumps $g$ and $e$. This suggests the system is currently undergoing an interaction. The bottom panel in Figure~\ref{fig:kin1d} shows the 1D velocity profile extracted along the solid line in Figure~\ref{fig:kin3} (PA=40\arcdeg\ East of North). This corresponds to a region of clumpy emission (clumps $h$ through $k$, see Figure~\ref{fig:implane}) which extends from clump $g$ towards the North-East and shows blue broad-band rest-frame UV colors. The velocity profile agrees with the expectation for a tidal tail swinging away from the observer and curving back into the plane of the sky: the velocity peaks in the middle and falls off to either side. Within this interpretation, separate star-forming knots in the tail (clumps $h$-$k$) might evolve into tidal dwarf galaxies (e.g. \citealt{Mihos1998, Hibbard1994, Duc2000}). The gray diamonds in Figure~\ref{fig:kin1d} (shifted down in velocity to improve the clarity of the figure) illustrate the coarser spatial resolution that would be obtained without lensing magnification. The velocity peak in the tidal tail remains visible, but the plateau between clumps $g$ and $e$ would be largely smoothed out, resulting in an ambiguous velocity profile which could easily be interpreted as a single rotating disk. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} The velocity dispersion maps contain additional kinematic information.
For a rotating disk at higher redshift, the dispersion is typically seen to increase towards the center due to beam-smearing of the velocity gradient (e.g. \citealt{Epinat2010}). Additionally, areas of active star formation, like clumps, can show elevated velocity dispersion because of outflows \citep{Newman2012a}. RCSGA0327 shows a noticeable peak in velocity dispersion at the location of clump $e$, which agrees with the detection of a strong outflow from this clump presented in \S~\ref{subsec:clumposiris}. The other two peaks in velocity dispersion are located between clump $g$ and the tidal tail and likely originate from increased turbulence due to the ongoing interaction. Additionally, elevated velocity dispersions could be caused by overlapping components along the line of sight, where the H$\alpha$ profile consists of two emission lines separated slightly in velocity. Attempting to fit the H$\alpha$ emission with a single Gaussian function will result in an overestimate of the linewidth. This has been seen in local mergers (e.g. \citealt{Mihos1998}), but we lack sufficient signal-to-noise to robustly identify any such multiple H$\alpha$ emission peaks in the OSIRIS data. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} The physical extent of RCSGA0327 further strengthens the kinematic arguments for an ongoing interaction. We estimate the size of the system from a segmentation map of all pixels $>3\sigma$ in the source-plane F160W image, including the arm which extends to the North-East. With a radius $r_e=\sqrt{A/\pi}=7.1$~kpc and a stellar mass of $6 \times 10^9$~M$_\odot$ \citep{me2012a}, RCSGA0327 lies near the upper extreme of the size-mass relation of star-forming galaxies at $z\sim2$, even when taking into account the considerable scatter in this relation \citep{Franx2008,Williams2010,Wuyts2011,Barro2012}. Thus, as an isolated galaxy this system would be unusually large for its stellar mass. 
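The size estimate above follows directly from the segmentation area via $r_e=\sqrt{A/\pi}$. The toy sketch below illustrates the procedure; the image, the noise units, and the source-plane pixel scale are all invented for this example (the real measurement uses the source-plane F160W image).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source-plane image in units of the background sigma; the bright
# rectangular "galaxy" footprint and the pixel scale are illustrative only.
img = rng.normal(0.0, 1.0, (200, 200))
img[80:120, 60:160] += 10.0        # toy galaxy region, well above the noise

PIX_KPC = 0.1                      # assumed source-plane pixel scale [kpc/pixel]

seg = img > 3.0                    # segmentation map of all pixels > 3 sigma
area_kpc2 = seg.sum() * PIX_KPC**2 # enclosed area A
r_e = np.sqrt(area_kpc2 / np.pi)   # effective radius r_e = sqrt(A/pi)
```

Because the radius scales as the square root of the area, a few spurious noise pixels in the segmentation map change $r_e$ only marginally.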
\section{Physical Properties of Clumps} \label{sec:clumps} This section covers measurements of the individual star-forming regions that can be identified in RCSGA0327. From the WFC3 imaging, we measure broad-band photometry and use spectral energy distribution (SED) modelling to constrain the stellar populations of the clumps: age, stellar mass, extinction and SFR. Integrated H$\alpha$ spectra are created for each clump from the OSIRIS data, from which velocity dispersion, SFR, metallicity and outflow properties can be derived. \subsection{Broad-band Photometry} \label{subsec:clumpphot} The photometric analysis of the clumps is performed in the image plane because the PSF is not well defined in the source-plane reconstructions. It varies across the source plane depending on the location with respect to the lensing caustic, and has an elliptical shape due to the non-isotropic magnification (see \citealt{Sharon2012} for more details). The clumps are identified in the higher resolution WFC3/UVIS images; we transform the WFC3/IR images to the same reference frame for a uniform photometry measurement. A series of elliptical apertures of increasing radial extent is defined for each region. The ellipticity changes according to how significantly the clumps are stretched by the lensing. We measure the flux at a radius of roughly twice the FWHM and use the zeropoint and aperture corrections (typically 15-25\%) as defined in the WFC3 handbook\footnotemark[4]. \footnotetext[4]{http://www.stsci.edu/hst/wfc3/phot\_zp\_lbn} The clumps are embedded in the galaxy, and the removal of underlying galaxy background light has to be handled carefully. At rest-frame UV wavelengths, the contribution of the diffuse galaxy background is often ignored since the clumps are typically 2-4 times as bright as their surroundings \citep{Elmegreen2009}.
However, the lensing magnification of RCSGA0327 allows us to study smaller and fainter clumps; we find that the galaxy background accounts for 30-80\% of the total flux in the clump apertures. Additionally, the contrast between the clumps and the background is lower at rest-frame optical wavelengths, where the emission of bright, young stars is less dominant. Following \cite{Guo2012}, we determine the background for each WFC3 band as all pixels that belong to the galaxy (i.e. lie above a threshold of 3$\sigma$) and do not fall within one of the photometric clump apertures. The \textit{global} background is then simply the average value of these background pixels. We also define a \textit{local} background for each clump as the mean and standard deviation of the background pixels within an annulus of width $\sim0.4$\arcsec\ outside the clump aperture. For RCSGA0327, the local backgrounds generally agree with the global estimate. Similarly, \cite{Guo2012} find that the change in rest-frame UV colors between a global and local background subtraction for $z\sim2$ SFGs in the Hubble Ultra Deep Field is not statistically significant. However, it remains important to check this consistently in future studies since an incorrect assumption of a constant global background across the galaxy can introduce false radial variations in clump color and related properties. Outliers within the clump apertures, such as contributions from neighbouring clumps, are masked by hand. Identifying exactly which pixels should be masked can be uncertain in the IR images where the larger PSF blends close overlapping neighbours such as clumps $d$, $e$ and $f$ in image 2 and clumps $d$, $e$ and $b$ in image 3. In those cases, the added flux uncertainty is typically 10\%. In image 3, the two cluster galaxies that fall on top of the arc are removed before measuring the clump photometry, using the technique presented in \cite{me2010}. 
In short, for each WFC3 band we subtract a scaled F390W image to remove the arc. Any remaining positive flux at the positions of the cluster galaxies is then subtracted and the original frame is restored by adding the scaled F390W image back in. Final magnitudes are corrected for galactic extinction \citep{Schlegel1998}. The photometric uncertainties include Poisson noise, an absolute WFC3 zeropoint uncertainty of 1\%, an uncertainty in the background subtraction determined from the standard deviation of the local background value, and an uncertainty from the masking of neighbouring clumps. The background subtraction dominates the total uncertainty. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} For the galaxy-integrated photometry measurement of RCSGA0327, the contribution of rest-frame optical nebular emission lines to the near-IR photometry was estimated at 5-10\%, which is negligible compared to the photometric uncertainties \citep{me2012a}. However, individual star-forming regions have much higher star formation surface densities than the host galaxy as a whole, such that the nebular emission adds a sizeable, and possibly dominant, contribution to the near-IR light. At the redshift of RCSGA0327, the F098M band includes the [O~II]~$\lambda$3727 emission line, while H$\beta$ and the [O~III]~$\lambda$4959,5007 doublet are the most important contaminating lines for the F125W band. H$\alpha$ falls just redward of the F160W band. The removal of this nebular line emission from the broad-band photometry is not trivial. We can use the F132N narrowband image to measure the H$\beta$ line emission within each of the clump apertures used for the broad-band photometry. Knowledge of the line ratios of H$\beta$ to the [O~II] and [O~III] emission lines then allows removal of the line contamination from the F098M and F125W fluxes. However, the NIRSPEC and FIRE data show that these line ratios can vary across the arc by more than a factor of two (Table~\ref{tab:fluxes}). 
Correcting individual clumps for emission line contamination therefore adds large uncertainties to the F098M and F125W photometry, which results in minimal constraints on the observed spectral energy distribution. For this work, the F098M and F125W bands remain uncorrected and are not included in the SED modelling described below. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} Finally, the photometry for each clump needs to be corrected for the lensing magnification. Using the magnification map presented in \cite{Sharon2012}, we derive individual clump magnifications as the flux-weighted mean magnification within the photometric clump aperture. Figure~\ref{fig:clumpmag} shows the demagnified AB magnitudes for each of the clumps in the multiple images of the source galaxy. Not all clumps are present in all images and some of the clumps are too faint and/or blended for an accurate measurement, especially in the WFC3/IR bands. There is a general agreement between the independent clump measurements from the multiple images, which confirms the robustness of the photometry method. The weighted mean and uncertainty in the mean derived from the available measurements for each clump are shown with black datapoints. We use this weighted mean in all subsequent analysis. Image 3 is complicated by the presence of cluster member $G1$ (see Figure~\ref{fig:implane}), which appears to split clump $g$ into two images. The photometry of those two images (shown with open and filled orange circles in the bottom left panel of Figure~\ref{fig:clumpmag}) does not agree with the well-measured appearance of clump $g$ in image 4. Given this lensing complexity, we discard the measurements of clump $g$ from image 3. \begin{figure} \centering \includegraphics[width=9cm]{clump_sed_resubmission.eps} \caption{Final photometry for the clumps as measured from the six WFC3 UVIS and IR bands, corrected for the lensing magnification.
The measurements from the multiple images of the source are in reasonable agreement. The black squares correspond to the weighted mean and uncertainty in the mean of the photometry for each clump; they are shifted slightly in wavelength to improve the clarity of the figure. The F098M and F125W magnitudes are not corrected for nebular line emission; they are shown with large grey crosses in all panels and not included in the SED fit. The best-fit SED to the weighted mean UVIS+F160W photometry is shown in black, with a solid line for the default Calzetti extinction law and a dashed line for the SMC extinction law. \label{fig:clumpmag}} \end{figure} \subsection{SED Modelling} \label{subsec:clumpsed} The stellar populations of the clumps can be constrained with spectral energy distribution modelling of the observed clump photometry. We use the SED fitting code FAST \citep{Kriek2009} at fixed spectroscopic redshift with the \cite{Bruzual2003} stellar population synthesis models (BC03), a \cite{Chabrier2003} IMF and the \cite{Calzetti2000} dust extinction law. The metallicity is restricted to 0.2 or 0.4\,Z$_\odot$ for all clumps, consistent with the oxygen abundance measured from the integrated NIRSPEC spectrum (R11), as well as the metallicity measurements of individual clumps from the OSIRIS, FIRE and NIRSPEC data (see \S~\ref{subsec:clumposiris}). We initially adopt exponentially decreasing star formation histories (SFHs) with minimum $e$-folding time $\log(\tau/{\rm yr})=8.5$ and a minimum age limit of 50~Myr. \cite{Wuyts2011} show good agreement between SFR estimates based on these assumptions and other multi-wavelength SFR indicators out to $z\sim3$. This default fit returns best-fit models at the age cut-off of 50~Myr for all clumps except clump $a$. Removing the age limit significantly improves the fit and returns ages between 3 and 16~Myr, albeit with large uncertainties.
The 50~Myr age limit roughly corresponds to the dynamical time scale of a $z\sim2$ galaxy and is typically included to avoid a luminosity bias from the most recently formed O and B stars in the SED fit. However, star-forming clumps have much shorter dynamical time scales ($t_{dyn} \sim r/v \sim 5$~Myr) and in the local Universe, star clusters are often dated between a few and a few tens of Myr (e.g. \citealt{Bastian2006}). We caution that these age estimates should be interpreted as the age of the current episode of star formation within the regions, which outshines the contribution of a possible underlying older stellar population (age $>100$~Myr). The best-fit SED models for the weighted mean photometry of each clump are shown in Figure~\ref{fig:clumpmag} and 68\% confidence intervals for the stellar population parameters are reported in Table~\ref{tab:clumpsed}. The stellar masses fall below the typical clump masses of $10^8-10^{10}$~M$_\odot$ reported in non-lensed studies \citep{Forster2011, Guo2012}, and lie closer to the range of $10^6-10^{8}$~M$_\odot$ found for a lensed spiral galaxy at $z=1.5$ \citep{Adamo2013}. As is often seen in SED modelling, the $e$-folding time $\tau$ is not constrained by the fit. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} The SFRs derived from the best-fit models are very high, especially for clumps $e$ and $g$ with 110 and 60~M$_\odot$~yr$^{-1}$ respectively. From a range of different SFR indicators, \citet{me2012a} estimate a galaxy-integrated SFR for RCSGA0327 of 30-50~M$_\odot$~yr$^{-1}$. This suggests that the SED fit is overestimating the SFR of individual clumps. We experimented with alternative SFHs by including a) exponentially declining models with shorter $e$-folding times $\tau$ down to 10~Myr; b) delayed histories with $SFR \sim t \exp(-t/\tau)$; and c) inverted models with $SFR \sim \exp(t/\tau)$.
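Written out, with $t$ the time since the onset of star formation and $\tau$ the $e$-folding time, the three trial histories are
\[
{\rm SFR}_{\rm a}(t) \propto e^{-t/\tau}, \qquad {\rm SFR}_{\rm b}(t) \propto t\, e^{-t/\tau}, \qquad {\rm SFR}_{\rm c}(t) \propto e^{+t/\tau};
\]
for clump ages $t\ll\tau$ the exponential factors are all near unity, so the fits probe essentially the same recent star formation.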
The stellar population parameters remain within the $1\sigma$ uncertainties of the default fit for all these alternative SFH templates. At the young clump ages of $\lesssim20$~Myr, the shape of the SFH has negligible impact on the best-fit stellar population parameters and is therefore not causing the high clump SFRs. The SED-derived SFR estimates can be reduced by assuming a different dust extinction law. It has been suggested that the assumption of a patchy dust distribution inherent in the Calzetti dust extinction law might not be a good representation of the dust geometry in young star-forming galaxies at $z\sim2$ and could significantly overpredict their dust extinction \citep{Reddy2006,Siana2009,me2012a}. The large covering fraction of outflowing gas observed for two $z\sim2$ lensed SFGs, inferred from the presence of opaque interstellar absorption lines in their rest-frame UV spectra \citep{Siana2008,Siana2009}, is indicative of a more uniform foreground sheet of dust. This results in a steeper extinction law, such as the one derived for the Small Magellanic Cloud \citep{Prevot1984}. Rest-frame UV spectra of clumps $e2$ and $u$ in RCSGA0327, taken with the MagE spectrograph at Magellan, show similar opaque absorption lines (J.~R.~Rigby et al. 2014, in preparation). When adopting the SMC extinction law in the SED fit, we cannot distinguish the best-fit models from the Calzetti result in terms of $\chi^2$-statistics. The best-fit models and derived stellar population parameters are included in Figure~\ref{fig:clumpmag} and Table~\ref{tab:clumpsed}. The SFRs of clumps $e$ and $g$ are now 5 and 3.1~M$_\odot$~yr$^{-1}$ respectively, a more plausible result in light of both the galaxy-integrated SFR and the H$\alpha$-derived clump SFRs (see \S~\ref{subsec:clumposiris}). The reddening is lower due to the steeper extinction curve, and the stellar ages are overall higher than for the Calzetti fit.
The stellar mass is consistent within the $1\sigma$ uncertainties and therefore not included in Table~\ref{tab:clumpsed}. \begin{deluxetable*}{lcccc|ccc|c} \tabletypesize{\footnotesize} \tablewidth{0pc} \tablecaption{Clump Stellar Population Parameters and Radii. \label{tab:clumpsed}} \tablehead{ \colhead{} & \multicolumn{4}{c}{Calzetti extinction} & \multicolumn{3}{c}{SMC extinction} & \colhead{} \\ \colhead{} & \colhead{Age} & \colhead{$E(B-V)_s$} & \colhead{$\log(M_*/\mathrm{M}_\odot)$} & \colhead{$SFR$} & \colhead{Age} & \colhead{$E(B-V)_s$} & \colhead{$SFR$} & \colhead{$r_{cl}$} \\ \colhead{} & \colhead{(Myr)} & \colhead{} & \colhead{} & \colhead{(M$_\odot$~yr$^{-1}$)} & \colhead{(Myr)} & \colhead{} & \colhead{(M$_\odot$~yr$^{-1}$)} & \colhead{(pc)}} \startdata $a$ & $3200^{+0}_{-2800}$ & $0.02^{+0.13}_{-0.02}$ & $8.1^{+0.1}_{-0.5}$ & $0.05^{+0.10}_{-0.02}$ & $2500^{+1250}_{-1350}$ & $0.00^{+0.04}_{-0.00}$ & $0.05^{+0.02}_{-0.01}$ & 250$\pm$120 \\ $b$ & $6^{+21}_{-5}$ & $0.17^{+0.10}_{-0.10}$ & $7.7^{+0.5}_{-0.2}$ & $8^{+124}_{-6}$ & $8^{+1}_{-2}$ & $0.07^{+0.01}_{-0.01}$ & $3.3^{+0.9}_{-1.0}$ & 180$\pm$100 \\ $c$ & $10^{+1140}_{-9}$ & $0.25^{+0.2}_{-0.25}$ & $6.8^{+0.8}_{-0.2}$ & $1^{+24}_{-1}$ & $90^{+420}_{-70}$ & $0.07^{+0.04}_{-0.07}$ & $0.05^{+0.09}_{-0.04}$ & 270$\pm$160 \\ $d$ & $16^{+573}_{-13}$ & $0.17^{+0.14}_{-0.17}$ & $7.5^{+0.6}_{-0.2}$ & $2^{+12}_{-2}$ & $100^{+220}_{-90}$ & $0.04^{+0.04}_{-0.00}$ & $0.4^{+0.3}_{-0.2}$ & 320$\pm$190 \\ $e$ & $3^{+1}_{-2}$ & $0.32^{+0.03}_{-0.08}$ & $8.5^{+0.1}_{-0.4}$ & $110^{+220}_{-90}$ & $11^{+9}_{-5}$ & $0.11^{+0.04}_{-0.00}$ & $5.0^{+8.3}_{-1.0}$ & 170$\pm$120 \\ $f$ & $5^{+2}_{-4}$ & $0.37^{+0.08}_{-0.06}$ & $7.7^{+0.4}_{-0.2}$ & $9^{+91}_{-4}$ & $22^{+18}_{-14}$ & $0.15^{+0.00}_{-0.04}$ & $0.6^{+0.4}_{-0.4}$ & 160$\pm$80 \\ $g$ & $8^{+65}_{-7}$ & $0.37^{+0.13}_{-0.12}$ & $8.7^{+0.5}_{-0.1}$ & $60^{+1320}_{-50}$ & $64^{+116}_{-28}$ & $0.11^{+0.04}_{-0.00}$ & $3.1^{+2.1}_{-2.4}$ & 200$\pm$140 \\ \enddata
\end{deluxetable*} \subsection{Clump Size} \label{subsec:clumpsize} Accurately determining the size of the individual clumps is not trivial. In the literature, region size is often measured from the area above a chosen surface brightness level. This isophote method has a few important problems. First of all, the surface brightness threshold is typically determined visually, from a trade-off between identifying a maximum number of regions while minimizing blending between individual regions. The chosen isophote is thus subjective and difficult to compare between studies, especially over a range of redshifts. Secondly, this method can be influenced significantly by local background variations, especially at high redshifts where undetected low surface brightness regions or overlap light from regions that are only separated by a few pixels can enhance the local background level. Finally, and most importantly, the isophote method will automatically find brighter regions to be larger, since more of the diffuse outskirts of the region will fall above the chosen isophote. A more robust way to measure region size is provided by the core method, in which a 2D light profile is fitted to the surface brightness profile of each region (e.g. \citealt{Wisnioski2012}). The local background is a free parameter in the profile fit, thus minimizing its influence on the size measurement. Most commonly, a 2D Gaussian light profile is used, which probes primarily the central ionized core of the H~II regions. Adopting this method, we use GALFIT (version 3.0, \citealt{Peng2010}) to create a model for each individual star-forming region in the F390W image, using a Tiny Tim model for the PSF \citep{tinytim}. The F390W image is chosen for the profile fit because the contrast between the clumps and the background is largest in this bluest HST band. A single Gaussian profile provides an adequate fit for most regions, except for the brightest ones, clumps $b$, $e$ and $g$. 
There, an additional Gaussian component is required to model the surrounding diffuse nebula, which is bright enough to rise above the background level. The best-fit GALFIT models are deconvolved and mapped back to the source plane. We define the clump radius $r_{cl}$ as the effective 1$\sigma$ Gaussian width of the source-plane model. Table~\ref{tab:clumpsed} reports the weighted mean of the radii measured for the multiple images of each clump. The uncertainties of 50-70\% reflect the scatter in size measurements for the same clump in the multiple images of the source as well as a systematic uncertainty of 30\%, to account for uncertainties introduced when the clumps are not truly Gaussian as well as resolution effects \citep{Wisnioski2012}. The clumps in RCSGA0327 range in size from $\sim$300 to 600~pc (quoted here as the clump diameter). Clumps studied in other lensed galaxies in the literature occupy a similar size range of $\sim300-1000$~pc \citep{Swinbank2009, Jones2010, Livermore2012}. This is significantly smaller than the clump sizes of 1-2~kpc typically measured for non-lensed galaxies (e.g. \citealt{Genzel2011, Wisnioski2012}), where the limited available resolution can blend neighbouring clumps into larger, more luminous regions. Additionally, since lensing studies typically target less massive galaxies, they are expected to probe less massive, smaller clumps. \subsection{Clumps in OSIRIS} \label{subsec:clumposiris} \begin{figure} \centering \includegraphics[width=8cm]{clumps.eps} \caption{Image-plane H$\alpha$ emission maps for pointing 2 \textit{(left)} and pointing 3 \textit{(right)}, overlaid with F390W flux contours in green. The maps show the total H$\alpha$ flux in each spatial pixel in the OSIRIS data cubes as derived from Gaussian fits to the line profile (see \S~\ref{subsec:kinmaps}). The clump apertures are shown in red.
These result from the simultaneous fit of multiple 2D Gaussians to the H$\alpha$ emission maps and correspond to the 1$\sigma$ extent of the best-fit 2D Gaussians. \label{fig:clumps}} \end{figure} The OSIRIS data provide additional information on the individual star-forming regions. To avoid the ill-defined source-plane PSF as well as uncertainties introduced when mapping the OSIRIS data to the HST reference frame and subsequently to the galaxy source plane, the clumps are identified and characterized in the image-plane datacubes. Implementing the core method for region identification, we fit multiple 2D Gaussian profiles simultaneously to the H$\alpha$ intensity maps. This identifies clumps $b$ and $e$ in both pointings, as well as both appearances of clump $g$ in pointing 3. Clump apertures are defined as the 1$\sigma$ extent of the best-fit Gaussians. Through a comparison with the F390W image, we additionally identify clumps $d$ and $f$ in pointing 2. These two clumps can also be seen in pointing 3, but there they only extend over one or two pixels, which introduces large uncertainties. Figure~\ref{fig:clumps} shows the image-plane H$\alpha$ emission maps for both pointings, overlaid with the F390W contours in green. The clump apertures are marked in red. There is good agreement between the rest-frame UV and H$\alpha$ morphology. An integrated spectrum is constructed for each clump by adding all the spatial pixels within its aperture. We note that typical rotation signatures across individual clumps are insignificant compared to the velocity uncertainties; shifting all pixels to a common central wavelength to correct for velocity broadening has negligible influence on the shape of the integrated spectra. As discussed above, the core method provides a more robust region identification compared to the subjective isophote method.
The total H$\alpha$ emission will be somewhat underestimated due to diffuse emission beyond the clump aperture, which is aggravated by an imperfect Strehl ratio for the adaptive optics correction. The wavelength centroid and linewidth do not depend critically on the clump aperture. \begin{deluxetable*}{lccccccc} \tabletypesize{\footnotesize} \tablewidth{0pc} \tablecaption{Clump Properties from the OSIRIS Data. \label{tab:clumposiris}} \tablehead{ \colhead{clump} & \colhead{$\Delta \chi^2$} & \colhead{$L_{H\alpha}^{\mathrm{narrow}}$} & \colhead{FWHM$^{\mathrm{narrow}}$} & \colhead{FWHM$^{\mathrm{broad}}$} & \colhead{$\Delta v$} & \colhead{F$^{\mathrm{broad}}$/F$^{\mathrm{tot}}$} & \colhead{$12+\log(O/H)$} \\ \colhead{} & \colhead{} & \colhead{($10^{41}$~erg/s)} & \colhead{(km/s)} & \colhead{(km/s)} & \colhead{(km/s)} & \colhead{} & \colhead{}} \startdata $b$ & 0.17 & 1.7$\pm$0.1 & 81$\pm$6 & 270$\pm$33 & 23$\pm$11 & 0.35$\pm$0.12 & 8.02$\pm$0.08 \\ $d$ & 0.11 & 1.6$\pm$0.1 & 99$\pm$7 & 145$\pm$30 & -123$\pm$20 & 0.28$\pm$0.06 & 8.11$\pm$0.07 \\ $e$ & 0.50 & 4.0$\pm$0.2 & 109$\pm$4 & 315$\pm$10 & -23$\pm$3 & 0.54$\pm$0.05 & 8.28$\pm$0.02 \\ $f$ & 0.12 & 0.9$\pm$0.1 & 84$\pm$8 & 270$\pm$41 & 6$\pm$9 & 0.40$\pm$0.17 & 8.08$\pm$0.06 \\ $g$ & 0.05 & 14.3$\pm$0.3 & 160$\pm$5 & & & & 8.07$\pm$0.06 \\ \enddata \tablecomments{Columns are the improvement in the reduced $\chi^2$ between the single and double component Gaussian fits ($\sim$145 degrees of freedom); narrow-component de-lensed H$\alpha$ luminosity; FWHM of the narrow and broad component; velocity offset between both components; the ratio of broad to total flux; and oxygen abundance derived from the [N~II]/H$\alpha$ ratio.
The quoted uncertainties are based on random errors; the metallicity calibration from \cite{pp04} has a 0.2~dex systematic uncertainty, and for the H$\alpha$ luminosity one should take into account an additional systematic flux uncertainty of $\sim30$\%.} \end{deluxetable*} \begin{figure*} \centering \vspace{-2.5cm} \includegraphics[width=\textwidth]{winds_forpaper_resubmission.eps} \caption{Integrated spectra for the clumps in pointing 2, all with robust wind detections. For every clump, the best-fit single component Gaussian model is shown in red in the left panel. The right panel shows the double component model in blue; the narrow and broad components are shown separately in grey. The residuals are shown at the bottom. From these, it becomes clear that a single Gaussian component fails to fit the broad wings of the H$\alpha$ emission line; the double-component residual is significantly reduced at the location of the wings. The vertical dotted lines note the expected wavelength positions of the [N~II]~$\lambda$6548,6584 doublet. \label{fig:winds}} \end{figure*} \par \vspace{2.3ex plus 0.3ex minus 0.3ex} In pointing 2, single Gaussian fits to the integrated spectra fail to fit the broad wings of the H$\alpha$ emission line, as can be seen in Figure~\ref{fig:winds}. The line profiles include a broad underlying component, which signifies the presence of outflows. Star formation driven galactic winds are seen in most high redshift SFGs (e.g. \citealt{Shapiro2009,Weiner2009,Rubin2010,Steidel2010}) and have recently been localized for a handful of massive, individual star-forming clumps \citep{Genzel2011,Newman2012a,Wisnioski2012}. Following \cite{Newman2012a}, we fit the H$\alpha$ line profile with a double-component Gaussian model when it improves the reduced $\chi^2$ of the fit over the H$\alpha$ and [N~II] region ($\sim$145 degrees of freedom) by at least 10\%.
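This selection step can be sketched on synthetic data as follows; the sketch fits a single emission line for simplicity (the actual fits cover the full H$\alpha$ + [N~II] region), and all profile parameters below are illustrative, not our measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, cen, sig):
    return amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)

def two_gauss(v, a1, c1, s1, a2, c2, s2):
    return gauss(v, a1, c1, s1) + gauss(v, a2, c2, s2)

# Synthetic line: narrow core (FWHM ~100 km/s) plus a weak broad wing
# (FWHM ~300 km/s) with Gaussian noise.  Illustrative values only.
rng = np.random.default_rng(1)
v = np.linspace(-600.0, 600.0, 150)              # velocity grid [km/s]
noise = 0.02
flux = (gauss(v, 1.0, 0.0, 100 / 2.355)
        + gauss(v, 0.2, -20.0, 300 / 2.355)
        + rng.normal(0.0, noise, v.size))

def reduced_chi2(model, n_par):
    return np.sum(((flux - model) / noise) ** 2) / (v.size - n_par)

p1, _ = curve_fit(gauss, v, flux, p0=[1.0, 0.0, 50.0])
p2, _ = curve_fit(two_gauss, v, flux, p0=[1.0, 0.0, 50.0, 0.1, 0.0, 150.0])
chi2_single = reduced_chi2(gauss(v, *p1), 3)
chi2_double = reduced_chi2(two_gauss(v, *p2), 6)

# Keep the broad component only if it improves the reduced chi^2 by >= 10%
keep_broad = (chi2_single - chi2_double) / chi2_single >= 0.10
```

With a genuine broad wing in the data, the single-Gaussian residuals are dominated by the wings and the double model easily clears the 10\% criterion.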
This is the case for all clumps in pointing 2; none of the line profiles in pointing 3 have sufficient signal-to-noise to detect an underlying broad component. The line profile parameters are reported in Table~\ref{tab:clumposiris}. The luminosities are corrected for the flux-weighted mean magnification within the OSIRIS clump aperture, and the linewidths are corrected for instrumental broadening. Clump star formation rates are derived from the narrow-component H$\alpha$ luminosities with the \cite{Kennicutt1998} conversion, corrected to the Chabrier IMF, and range from 0.4 to 6.6~M$_\odot$~yr$^{-1}$ (uncorrected for dust extinction). We find winds with a FWHM$^{\mathrm{broad}}=150-320$~km/s which account for 30-55\% of the total H$\alpha$ flux. These are somewhat less broad than the FWHM$^{\mathrm{broad}}\sim500$~km/s reported so far for a handful of more massive clumps in more massive $z\sim1-2$ SFGs \citep{Newman2012a,Wisnioski2012}. This is expected given the positive correlation between wind velocity/FWHM and host galaxy stellar mass \citep{Shapiro2009, Weiner2009,Newman2012b}. Both the galaxy-integrated mass and clump masses of RCSGA0327 are more than an order of magnitude lower than those of the $z\sim2$ SFGs for which such winds have been resolved so far. Estimates of the mass-loading factors of outflows are highly uncertain due to the necessary assumptions on the geometry, rate and physical extent of the outflow. Following the assumptions made in \cite{Newman2012b} for a warm ionized outflow with radially constant outflow velocity, we find mass-loading factors (the ratio of mass outflow rate to SFR) of 1-4. AGN feedback is a common explanation for the presence of broad emission lines.
The outflows detected in RCSGA0327 are unlikely to be AGN-powered for the following reasons: 1) the diagnostic BPT diagram (as derived from the long-slit NIRSPEC and FIRE data as well as HST grism data) does not show the extreme line ratios expected for an AGN origin of the emission lines (K.~Whitaker et al. 2014, in preparation); 2) we see no evidence for point-source AGN activity in Mg~II 2800\AA\ emission or Chandra X-ray data (J.~R.~Rigby et al. 2014, in preparation); and 3) the broad component is spatially extended: it can be identified for multiple spatial pixels within the clump apertures. Unfortunately the OSIRIS data have insufficient signal-to-noise to fit a double component Gaussian model to individual spatial pixels and spatially map the strength of the outflow. We note that the kinematic maps presented in \S\ref{subsec:kinmaps} result from single Gaussian fits. The presence of outflows will not affect the velocity map, since the velocity shift between the single best-fit model and the narrow component of a double model is negligible. The velocity dispersion is overestimated by 30-40\% in the clump regions. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} \cite{Newman2012b} present evidence for a strong dependence of the strength of outflows on the star formation surface density ($\Sigma_{SFR}$) of the galaxy or clump from which they originate. They propose a threshold of $\Sigma_{SFR} > 1$~M$_\odot$~yr$^{-1}$~kpc$^{-2}$ to power a strong wind at $z\sim2$, i.e. where the broad component accounts for at least one third of the total flux. This is an order of magnitude higher than the SF surface density threshold found in the local Universe \citep{Heckman2002}. Since gravitational lensing conserves surface brightness, $\Sigma_{SFR}$ for the clumps in RCSGA0327 can be estimated in the image plane from the observed narrow-component H$\alpha$ luminosity and the size of the OSIRIS clump apertures.
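Schematically, the estimate combines the \cite{Kennicutt1998} H$\alpha$ calibration with a circular-aperture area; the sketch below assumes the standard Kennicutt coefficient and an approximate factor-1.7 Salpeter-to-Chabrier correction, and the luminosity and radius are placeholder numbers of the order reported for the clumps, not the actual image-plane values:

```python
import math

# Assumed constants (standard values, not taken from this paper's tables):
# Kennicutt (1998) Halpha calibration (Salpeter IMF) and an approximate
# factor-1.7 reduction to convert to a Chabrier IMF.
K98 = 7.9e-42                # Msun/yr per (erg/s)
SALPETER_TO_CHABRIER = 1.7

def sfr_halpha(L_halpha):
    """SFR [Msun/yr] from an Halpha luminosity [erg/s], Chabrier IMF,
    with no dust correction applied yet."""
    return K98 * L_halpha / SALPETER_TO_CHABRIER

def sigma_sfr(L_halpha, r_kpc):
    """SF surface density [Msun/yr/kpc^2] for a circular clump aperture."""
    return sfr_halpha(L_halpha) / (math.pi * r_kpc ** 2)

# Placeholder example: L(Halpha) = 1.7e41 erg/s inside a 180 pc radius.
print(sfr_halpha(1.7e41))        # ~0.8 Msun/yr
print(sigma_sfr(1.7e41, 0.18))   # ~8 Msun/yr/kpc^2, above the threshold
```

Even before dust correction, clump surface densities of this order sit well above the proposed 1~M$_\odot$~yr$^{-1}$~kpc$^{-2}$ threshold.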
A dust correction is applied based on the SED-derived reddening, using either the Calzetti or SMC extinction law and assuming that the nebular emission lines and stellar continuum suffer the same amount of extinction (see \S\ref{subsec:clumpfirenirspec}). Figure~\ref{fig:treshold} shows that the clumps all have high SF surface densities significantly above the proposed threshold $\Sigma_{SFR} > 1$~M$_\odot$~yr$^{-1}$~kpc$^{-2}$. \begin{figure} \centering \includegraphics[width=9.5cm]{treshold.eps} \caption{Dependence of the outflow strength, characterized as the fraction of H$\alpha$ flux contained in the broad versus narrow component, on star formation surface density. The grey circles present stacked results for $z\sim2$ SFGs from the SINS survey \citep{Newman2012b}. The black squares correspond to outflows detected for 3 individual clumps within two of the most massive SINS galaxies \citep{Newman2012a}. The clumps in RCSGA0327 are shown in red and blue for pointing 2 and 3 respectively. The clump SFR is derived from the narrow-component H$\alpha$ luminosity and corrected for dust extinction using the SED-derived reddening $E(B-V)_s$ from either the Calzetti (filled squares) or the SMC (open squares) dust extinction law. We assume no additional extinction towards the nebular emission lines. \label{fig:treshold}} \end{figure} \par \vspace{2.3ex plus 0.3ex minus 0.3ex} The signal-to-noise of the OSIRIS observations is insufficient to detect [N~II] emission in most individual spatial pixels, but the lines are detected at S/N$>2$ in the integrated spectra of all clumps in pointing 2, as well as clumps $e$ and $g$ in pointing 3. We measure the [N~II] flux by fitting a multi-component Gaussian model to H$\alpha$ and both [N~II] lines using MPFITFUN. We fix the linewidths to a common value and constrain the ratio of the [N~II] doublet to its theoretical value of 3.071 \citep{Storey2000}. 
For pointing 2, a second broad component is included for all lines, also with a common linewidth. Additionally, we constrain the flux ratio of the narrow and broad component of each line to a common value. We estimate the metallicity of each clump from the ratio of [N~II]~$\lambda$6585 to H$\alpha$, the N2 index, as empirically calibrated by \cite{pp04}, and report the results in Table~\ref{tab:clumposiris}. The calibration of strong-line metallicity indicators remains uncertain at high redshift (e.g. \citealt{Kewley2008, me2012b}), but here we are mainly concerned with the relative variation in abundance between the clumps. The \cite{pp04} calibration has a systematic uncertainty of 0.2~dex. \subsection{Clumps in FIRE and NIRSPEC} \label{subsec:clumpfirenirspec} This section explores additional information that can be learned from the long-slit near-IR spectra taken with Magellan/FIRE for clumps $u$ and $b2$ and Keck/NIRSPEC for clumps $u$ and $e2$ presented in \S\ref{subsec:data-fire} and \S\ref{subsec:data-nirspec}. \subsubsection{Extinction} Following R11, we measure extinctions from the NIRSPEC spectra of clumps $u$ and $e2$ from the H$\beta$/H$\gamma$ ratio, which is the brightest pair of Balmer lines covered within a single grating setting. Using the Calzetti extinction law, the measured reddening is $E(B-V)_g = 0.34\pm0.10$ for clump $u$, and $E(B-V)_g = 0.23\pm0.09$ for clump $e2$. This is a more accurate measurement than in R11, mostly because we have switched to fitting a common continuum level and linewidth for all lines in each grating setting. For FIRE, we can use both the H$\alpha$/H$\beta$ and H$\beta$/H$\gamma$ ratios. We find $E(B-V)_g^{\alpha \beta} = 0.38\pm0.04$ and $E(B-V)_g^{\beta \gamma} = 0.32\pm0.30$ for clump $u$ and $E(B-V)_g^{\alpha \beta} = 0.16\pm0.03$ and $E(B-V)_g^{\beta \gamma} = 0.07\pm0.15$ for clump $b2$. For clump $u$, the reddening estimates derived from NIRSPEC and FIRE are consistent. 
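The Balmer-decrement reddening follows from the difference of the attenuation curve evaluated at the two line wavelengths. A minimal sketch, assuming Case B intrinsic ratios (H$\alpha$/H$\beta$ = 2.86, H$\beta$/H$\gamma$ $\approx$ 2.14) and the published Calzetti (2000) fitting formula; the observed ratios below are placeholders, not our measurements:

```python
import math

def k_calzetti(lam_um):
    """Calzetti et al. (2000) starburst attenuation curve k(lambda)."""
    x = 1.0 / lam_um
    if lam_um < 0.63:   # 0.12-0.63 micron branch
        return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05
    return 2.659 * (-1.857 + 1.040 * x) + 4.05   # 0.63-2.2 micron branch

def ebv_balmer(ratio_obs, ratio_int, lam_short_um, lam_long_um):
    """E(B-V) from an observed vs. intrinsic Balmer line ratio
    F(long wavelength) / F(short wavelength)."""
    dk = k_calzetti(lam_short_um) - k_calzetti(lam_long_um)
    return 2.5 / dk * math.log10(ratio_obs / ratio_int)

# Placeholder observed ratios:
print(ebv_balmer(3.9, 2.86, 0.4861, 0.6563))   # Halpha/Hbeta, ~0.26
print(ebv_balmer(2.5, 2.14, 0.4340, 0.4861))   # Hbeta/Hgamma, ~0.3
```

The smaller $k(\mathrm{H}\gamma)-k(\mathrm{H}\beta)$ lever arm is why the H$\beta$/H$\gamma$ estimates carry the larger error bars quoted above.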
We can compare these reddening measures to the reddening of the stellar light as derived from the best-fit SED model and reported in Table~\ref{tab:clumpsed}. This comparison is visualised in the bottom left panel of Figure~\ref{fig:radial} and suggests there is no need for additional extinction towards the ionized gas when the Calzetti extinction law is applied, while $E(B-V)_g$ is significantly higher than $E(B-V)_s$ for the SMC law. \subsubsection{Electron density} We use the [O~II]~$\lambda$3727 doublet to constrain the electron density, using the task \textit{stsdas.analysis.temden} in IRAF\footnotemark[5] with $T_e = 10^4$~K as done in R11. From the NIRSPEC data, a line flux ratio of f(3726/3729) $= 1.19 \pm 0.07$ translates to an electron density $n_e = 600 \pm 100$~cm$^{-3}$ for clump $e2$; for clump $u$ we find f(3726/3729) $= 0.84 \pm 0.03$ and $n_e = 180 \pm 35$~cm$^{-3}$. From the FIRE data, we find f(3726/3729) $= 0.84 \pm 0.08$ and $n_e = 180 \pm 90$~cm$^{-3}$ for clump $b2$ and f(3726/3729) $= 1.01 \pm 0.13$ and $n_e = 370 \pm 150$~cm$^{-3}$ for clump $u$. \footnotetext[5]{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} \subsubsection{Metallicity} We estimate the metallicity from the H$\alpha$ and [N~II] emission line fluxes measured in the NIRSPEC and FIRE spectra. Using the third-order polynomial fit of \citet{pp04}, we infer a metallicity of $12 + \log(O/H) = 8.16\pm0.02$ for clump $u$, and $12 + \log(O/H) = 8.34\pm0.02$ for clump $e2$ from the NIRSPEC data. From FIRE, we derive a metallicity of $12 + \log(O/H) = 8.16\pm0.02$ for clump $u$ and $12 + \log(O/H) = 8.12\pm0.02$ for clump $b2$. These estimates are consistent with the OSIRIS results presented in \S\ref{subsec:clumposiris} as can also be seen in the bottom right panel of Figure~\ref{fig:radial}. 
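The N2 conversion itself is a one-line polynomial; a sketch with the commonly quoted coefficients of the \cite{pp04} cubic fit (treat the exact numbers as an assumption to be checked against the original calibration paper):

```python
import math

def oh_n2_pp04(nii_ha):
    """12 + log(O/H) from the N2 index, N2 = log([N II] 6584 / Halpha),
    via the third-order polynomial calibration of Pettini & Pagel (2004).
    Coefficients as commonly quoted; valid for -2.5 < N2 < -0.3."""
    x = math.log10(nii_ha)
    if not -2.5 < x < -0.3:
        raise ValueError("N2 index outside the calibrated range")
    return 9.37 + 2.03 * x + 1.26 * x ** 2 + 0.32 * x ** 3

# Example: an [N II]/Halpha flux ratio of 0.1 (N2 = -1)
print(oh_n2_pp04(0.1))   # approximately 8.28
```

Within the calibrated range the relation is monotonic, so relative abundance differences between clumps survive the 0.2~dex systematic uncertainty of the calibration itself.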
The extended wavelength coverage of the NIRSPEC and FIRE spectra includes additional lines which can be used to estimate metallicity. One needs to keep in mind the significant offsets between different strong-line indicators, which need to be converted to the same base calibration before any comparison can be made \citep{Kewley2008}. Additionally, emission line fluxes need to be corrected for dust extinction when the strong-line indicator spans a large wavelength range. The $R_{23}$ index, $\log{R_{23}} = \log[($[O~II]~$\lambda$3727 + [O~III]~$\lambda$4959 + [O~III]~$\lambda$5007)/H$\beta]$, is commonly used in the literature. This indicator is double-valued with a low and high metallicity result for every value of $R_{23}$. Using the [N~II]/[O~II] flux ratio to distinguish between both metallicity branches \citep{Kewley2008}, we find that all clumps fall on the upper branch. We proceed to use the upper branch $R_{23}$ calibration from \cite{Zaritsky1994} as well as \cite{kk04} and convert both results to our base metallicity calibration of [N~II]/H$\alpha$ from \cite{pp04}. The results for all clumps are shown in Figure~\ref{fig:clumpmet}. Only statistical uncertainties are shown; the indicators each have a systematic uncertainty of at least 0.2~dex. The larger uncertainties for the $R_{23}$ indicators originate from the propagation of the uncertainty in the reddening. We see a general agreement between metallicity indicators, except for clump $e2$, where the metallicity derived from the [N~II]/H$\alpha$ ratio is larger by $\sim0.15$~dex. This offset falls within the significant systematic uncertainties involved in the comparison of metallicity indicators, but given the agreement between the N2 and $R_{23}$ index for the other clumps, it does suggest that the [N~II]/H$\alpha$ ratio of clump $e2$ is elevated. This agrees with the higher electron density measured for this clump, which results in an increased rate of collisional excitation.
Additionally, the galactic wind detected for clump $e$ could drive shock excitation. At high redshift ($z>1.5$), the presence of slow shocks mimics a higher metallicity starburst \citep{Kewley2013}. \subsubsection{Outflows} The FIRE spectrum of clump $b2$ and the NIRSPEC spectra of clump $e2$ show an outflow consistent with the OSIRIS results. No outflow is detected in the FIRE or NIRSPEC spectra of clump $u$. \begin{figure} \centering \includegraphics[width=9cm]{clump_metallicity.eps} \caption{Comparison of metallicity indicators for the NIRSPEC and FIRE spectra. We show metallicities derived from the N2 index, $\log([N~II]~\lambda6584/H\alpha)$ as calibrated by \cite{pp04} \textit{(red circles)}, as well as metallicities derived from the $R_{23}$ index, $\log{R_{23}} = \log[($[O~II]~$\lambda$3727 + [O~III]~$\lambda$4959 + [O~III]~$\lambda$5007)/H$\beta]$ as calibrated by \cite{Zaritsky1994} \textit{(blue squares)} and \cite{kk04} \textit{(blue diamonds)}. The latter two have been converted to the N2 calibration of \cite{pp04} with the conversions from \cite{Kewley2008}. Only statistical uncertainties are shown; the indicators each have a systematic uncertainty of at least 0.2~dex. \label{fig:clumpmet}} \end{figure} \subsection{Radial Variation of Clump Properties} \label{subsec:radial} \begin{figure*} \centering \includegraphics[width=\textwidth]{clumpradial_resubmission.eps} \caption{Radial variation of clump properties. The x-axis corresponds to the projected source-plane distance from each clump to clump $g$. Red and blue symbols denote results from the SED fit with the Calzetti and SMC extinction laws respectively. Orange and green symbols correspond to OSIRIS results; blue symbols show long-slit spectroscopy with NIRSPEC and FIRE. Errorbars only reflect statistical uncertainties.
\textit{(Top left)} F390W-F814W rest-frame UV color; \textit{(Top right)} stellar age; \textit{(Bottom left)} reddening from the SED fit and from the Balmer ratio; \textit{(Bottom right)} metallicity. \label{fig:radial}} \end{figure*} Any trends in clump properties with respect to their location within the galaxy can provide additional constraints on the clump origin. No correlation is expected between the properties of separate interacting components. Since there is no clear definition of the galaxy center for RCSGA0327, we adopt the position of clump $g$, which is the brightest clump and corresponds to a strong peak in stellar mass surface density (see \S\ref{sec:spatialsed}). For each clump, the projected distance to clump $g$ is measured in the reconstructed source-plane images. The top left panel of Figure~\ref{fig:radial} shows the rest-frame $U-V$ color F606W-F160W as well as a measure of the UV-slope from the F390W-F814W color. We see a slight radial trend, where clumps become redder by 0.5-1~mag when moving East from clump $a$ towards clump $g$. Such color trends have been interpreted as evidence for a radial age trend, confirming a picture of radial migration of clumps formed through gravitational collapse of a turbulent disk \citep{Guo2012}. However, rest-frame UV color is governed by a degeneracy between age, metallicity and dust extinction, and a color trend can be caused by a trend in any of these parameters. The top right panel shows the stellar age as derived from the SED using both the Calzetti and SMC extinction laws (filled and open red squares respectively). The stellar age remains roughly constant across the clumps, within the significant uncertainties. There is some evidence for an increase in reddening towards clump $g$, as shown in the bottom left panel. 
Some correlation between color and reddening is expected from the SED modelling, but the trend is confirmed by the reddening of the ionized gas derived from the NIRSPEC and FIRE spectra (blue filled circles). The bottom right panel of Figure~\ref{fig:radial} shows the [N~II]/H$\alpha$ ratio, as measured in OSIRIS as well as the NIRSPEC and FIRE spectra. The [N~II]/H$\alpha$ ratio shows a mostly flat gradient across the clumps, with the exception of clump $e$. We have discussed in \S\ref{subsec:clumpfirenirspec} how the elevated [N~II]/H$\alpha$ ratio for this clump is likely due to an increased electron density and/or shock excitation in the outflow. Based on the $R_{23}$ indicator, the metallicity of clump $e$ agrees with the other clumps. A flat metallicity gradient supports the scenario of an ongoing interaction within the system. In merger simulations, galaxy metallicity gradients are found to flatten as the merger progresses and low-metallicity gas is transported from the outskirts of the interacting galaxies to the central region \citep{Rupke2010a}. This has been seen in local close-pair spiral galaxies \citep{Kewley2010, Rupke2010b} and local LIRGs \citep{Rich2012}. Integral-field spectroscopy studies of both lensed and non-lensed $z\sim2$ SFGs are also finding flatter or even inverted metallicity gradients in interacting systems, though samples are still small \citep{Jones2013}. \subsection{Clump Scaling Relations} \label{subsec:scaling} \begin{figure*} \centering \includegraphics[width=\textwidth]{scaling.eps} \caption{Scaling relations between H$\alpha$ luminosity and size \textit{(left)} or velocity dispersion \textit{(right)} for local and high redshift star-forming clumps. The dashed black lines display the best-fit scaling relations from \cite{Wisnioski2012}. 
The local sample consists of giant H~II regions in local spirals (rotating disks; \citealt{Gallagher1983, Arsenault1988,Rozas2006}) and giant H~II regions in local ULIRGs (interacting systems; \citealt{Bastian2006,Monreal2007,Rodriguez2011}). We use open and closed black diamonds to differentiate between these kinematic classifications. The high-z clumps come from six different studies mentioned in the text and legend. We have color-coded them by redshift to look for redshift evolution of the scaling relations: $0.8<z<2$ (orange) and $2<z<3$ (blue). Open and closed symbols are again used to differentiate between kinematically classified rotating disks and interacting systems. The lensed galaxies from \cite{Livermore2012} do not have kinematic information and are shown with plus-symbols. Neither \cite{Livermore2012} nor \cite{Jones2010} report velocity dispersion measurements for their clumps, which are therefore not included in the right panel. Results from the narrow-component H$\alpha$ emission line profile for the clumps in RCSGA0327 are shown with filled red circles and a filled red square for clump $g$. The open red symbols in the right panel show the overestimate of both H$\alpha$ luminosity and linewidth when fitting the line profiles with a single Gaussian, not taking into account the broad underlying wind component.
\label{fig:scaling}} \end{figure*} The five well-measured clumps in RCSGA0327 present a sizeable contribution to the current sample of 40 star-forming regions in $z\sim2$ SFGs with reliable H$\alpha$ measurements: a) five clumps from three $z\sim2$ SINS galaxies \citep{Genzel2011}; b) eight clumps from three massive SFGs at $z\sim1.3$ from the WiggleZ survey \citep{Wisnioski2012}; c) eight clumps from four lensed galaxies at $z=1.6-2.6$ \citep{Jones2010}; d) nine clumps from four H$\alpha$-selected galaxies at $z=1.4$ and $z=2.2$ from HiZELS \citep{Swinbank2012}; and e) ten clumps in three submillimeter-selected galaxies (SMG) at $z=1.4-2.4$ \citep{Menendez2013}. Additionally, \cite{Livermore2012} use HST/WFC3 narrowband imaging centered on H$\alpha$ to study clump sizes and luminosities in an additional eight lensed galaxies at $z=1-1.5$; they have no kinematic information. Figure~\ref{fig:scaling} compares H$\alpha$ luminosity (uncorrected for dust extinction), size and velocity dispersion measurements for the sample of high-z clumps to local scaling relations between these parameters taken from \cite{Wisnioski2012}. These authors have remeasured all clump sizes consistently with 2D elliptical Gaussian fits. The size-luminosity relation in the left panel clearly shows how the three lensing studies (\citealt{Jones2010, Livermore2012} and this work) probe clump sizes up to an order of magnitude smaller than what can be resolved in non-lensed studies. The clumps in RCSGA0327 are broadly consistent with the other high-z clumps and lie roughly two orders of magnitude above the local luminosity-size scaling relation. As was first pointed out by \cite{Livermore2012}, the offset seems to increase with redshift. These authors found correlations between clump SFR surface density and the SFR surface densities and gas surface densities of the host galaxies. 
As such, high redshift clumps appear to be scaled-up analogues of local H~II regions, simply bigger and brighter because of the increasing gas fractions in high-z SFGs. It is worth asking whether the dynamical state of the host galaxy plays a role in determining the clump SFR surface densities. In the local Universe, giant H~II regions found in interacting systems (shown with black filled diamonds) show systematically higher SFR surface densities compared to giant H~II regions in local spirals (open diamonds). At high redshift, we do not see elevated SFR surface densities for clumps within kinematically classified interacting systems (RCSGA0327, two lensed galaxies from \cite{Jones2010}, one HiZELS source from \cite{Swinbank2012} and three SMGs studied by \cite{Menendez2013}). It would be very valuable to obtain direct gas measurements of these systems to clarify the connection between gas surface density, SF surface density and galaxy kinematics. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} \begin{figure*} \centering \includegraphics[width=\textwidth]{resolvedsed.eps} \caption{Source-plane images of RCSGA0327 based on the counter-image: surface brightness distributions in the F606W and F160W bands (corresponding to rest-frame 2800\AA\ and 5500\AA); rest-frame U-V color map based on F814W-F160W; stellar age; dust extinction; and stellar mass surface density. Clumps $g$, $e$ and $b$ are marked by black crosses, from left to right. The contour on the stellar mass surface density map corresponds to $\log(\Sigma_{M*})=8.0$. \label{fig:resolvedsed}} \end{figure*} A few caveats should be kept in mind in the analysis of the scaling relations presented above. First, the high redshift clumps form by no means a uniformly selected sample, but span a large range of host galaxy selection and integrated properties. Secondly, the H$\alpha$ luminosities in Figure~\ref{fig:scaling} have not been corrected for dust extinction. 
This is mostly driven by the lack of reliable extinction estimates, especially on the scale of the individual clumps. Variations in dust extinction with redshift or host galaxy properties could have a significant effect on the scatter in these scaling relations. Finally, not taking into account a broad underlying wind component when fitting the H$\alpha$ emission line profile will lead to overestimates of both the H$\alpha$ luminosity and velocity dispersion of clumps. This is illustrated for RCSGA0327 with the open and closed red symbols in the right panel of Figure~\ref{fig:scaling}. \section{Spatially Resolved SED Modelling} \label{sec:spatialsed} The stellar mass surface density of a galaxy holds crucial information regarding the physical origin of its individual star-forming regions. When clumps correspond to separate interacting components, one can expect a well-established older stellar population underlying their strong rest-frame UV presence caused by the recent star formation triggered in the interaction. In contrast, clumps formed through gravitational collapse of a gas-rich, turbulent disk are too short-lived to build up a significant population of old stars. Dynamical friction against the underlying galaxy disk and clump-clump interactions cause the clumps to spiral inwards and coalesce into the galaxy center on timescales $\sim300-500$~Myr (e.g. \citealt{Dekel2013}). The detection of strong outflows originating from clumps in RCSGA0327 and other studies \citep{Genzel2011,Newman2012a,Wisnioski2012} could disrupt the clumps on even shorter timescales \citep{Genel2012}. Thus, while clumps dominate the galaxy morphology in rest-frame UV light which mostly traces newly-formed O and B stars, they become much less prominent at rest-frame optical wavelengths. 
Along these lines, \cite{Wuyts2012} have recently quantified the reduced contribution of clumps to stellar mass maps of clumpy galaxies at $0.5<z<2.5$ in the CANDELS fields based on spatially resolved SED modelling. Here we perform similar modelling for RCSGA0327. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} We model the counter-image to obtain a full image of the source-plane galaxy without the contamination of the cluster galaxies that fall on top of image 3. Following the procedure outlined in \cite{Wuyts2012}, we PSF-match the different WFC3 images to the broadest F160W PSF and group pixels using 2D Voronoi binning \citep{Cappellari2003} to achieve S/N$\ge$10 in this band. For the SED fit, we use the default assumptions described in \S\ref{subsec:clumpsed} for the modelling of the clumps: BC03 models, Calzetti dust extinction, Chabrier IMF, 0.2-0.4~Z$_\odot$ metallicity, exponentially decreasing SFHs with $\log(\tau)\ge8.5$, no age restriction. Different modelling assumptions will generally affect only the absolute value of the derived stellar population parameters and here we are mostly interested in the relative variation of these parameters across the galaxy. As discussed in \S\ref{subsec:clumpsed}, the F098M and F125W filters can be heavily contaminated by line emission and are excluded from the SED fit. As a consistency check, adding the stellar mass of all the Voronoi bins returns a total galaxy mass within 0.1~dex of the stellar mass derived from the galaxy-integrated photometry; \cite{Wuyts2012} found a scatter of 0.08~dex in their comparison of integrated and resolved stellar masses for $1.5<z<2.5$ CANDELS galaxies. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} The top row of Figure~\ref{fig:resolvedsed} shows source-plane maps of the F606W and F160W WFC3 images, which roughly correspond to rest-frame 2800\AA\ and 5500\AA. Clumps $g$, $e$ and $b$ are shown with black crosses, from left to right. 
The clumps become less pronounced at redder wavelengths, but clump $g$ still shows up strongly in the F160W image. This agrees with its significant presence in the stellar mass surface density map in the bottom right panel of Figure~\ref{fig:resolvedsed}. Based on this map, the other large mass concentration in RCSGA0327 lies on its western edge, to the right of all the clumps. This region has not been discussed so far, since it was not detected in H$\alpha$ emission\footnotemark[6]. It is very red in the false-color image presented in Figure~\ref{fig:implane} and the rest-frame $U-V$ color map in Figure~\ref{fig:resolvedsed}. Based on the stellar age and dust extinction maps, the red color is attributed to an old stellar population, and not to dust-obscured star formation. This agrees with the non-detection of this region in the OSIRIS data and recent Herschel PACS/SPIRE observations (Figure~\ref{fig:herschel}; J.~R.~Rigby et al.~2014, in preparation). Out to an isophote of $\log(\Sigma_{M*}) \ge 8.0$ (the black contour on top of the stellar mass surface density map in Figure~\ref{fig:resolvedsed}), clump $g$ and the red western region each contain $\sim1.2 \times 10^9$~M$_\odot$, or $\sim20$\% of the galaxy-integrated stellar mass in RCSGA0327. Within this contour, the western mass component has an intrinsic SFR of 0.5~M$_\odot$/yr, which puts it $\sim2\sigma$ below the main sequence at $z=2$. \footnotetext[6]{Due to the non-detection in the OSIRIS data, we have no spectroscopic confirmation of the redshift of this red component. However, given that it is multiply imaged together with the rest of the system, the details of the lens modelling restrict its redshift to within $\sim1000$~km/s of systemic velocity.} \begin{figure*} \centering \includegraphics[width=\textwidth]{clumps_eva_pacs.eps} \caption{Same as Figure~\ref{fig:implane}. 
The green lines correspond to the 1.5-2.0-2.5-3.0$\sigma$ contours of the PACS 100~$\micron$ image, which represents the best combination of signal-to-noise, spatial resolution, and sensitivity to re-processed dust emission out of the available far-infrared imaging. The yellow arrows indicate the second, red mass component in each of the four images of the source. The Herschel data make clear that the bulk of the long wavelength emission is associated with clump $u$ and that there is no emission associated with this second mass concentration. \label{fig:herschel}} \end{figure*} \section{Summary of observational results and discussion} \label{sec:disc} We have presented a detailed analysis of the kinematics, spatially resolved stellar population parameters and clump properties of RCSGA0327 based on multi-wavelength HST/WFC3 imaging and AO-assisted OSIRIS IFS data. The main results are summarized below. \begin{itemize} \item The kinematical analysis of the OSIRIS data strongly suggests an ongoing interaction, which has caused a large tidal tail extending from clump $g$ towards the North-East. Velocity dispersion peaks between the clumps could arise from high turbulence due to the interaction or overlapping H$\alpha$ emission along the line of sight. \item We have identified seven individual star-forming regions in the WFC3 imaging with diameters ranging from 300 to 600~pc. SED modelling of the clump photometry predicts stellar masses $10^7$ - $5 \times 10^8$~M$_\odot$, young ages of $\sim5-100$~Myr and low reddening $E(B-V)<0.4$. The steeper SMC extinction law is preferred over the default Calzetti law to avoid unphysical high clump SFRs compared to the H$\alpha$-derived values as well as the galaxy-integrated SFR. \item RCSGA0327 is the lowest mass high-z SFG to date with resolved outflows. We find broad underlying wind components in the H$\alpha$ emission line profile of four clumps, contributing on average $\sim40$\% to their total H$\alpha$ flux. 
The SFR surface densities of these clumps all fall above the high-z threshold of $\Sigma_{SFR} > 1$~M$_\odot$~yr$^{-1}$~kpc$^{-2}$ to power strong winds. \item We find a radial gradient in rest-frame UV color of the clumps across the galaxy, which we infer to be caused by a gradient in reddening. The stellar age of the clumps, though uncertain in absolute value, remains roughly constant. We find a flat metallicity gradient, as expected for an interacting system. \item The clumps in RCSGA0327 agree with the size-luminosity and dispersion-luminosity correlations inferred from earlier lensed and non-lensed studies of high-z clumps. Kinematical classification of the host galaxy as an interacting system does not result in higher clump SFR surface densities, unlike what has been seen for local giant H~II regions. \item Stellar mass surface density maps based on spatially resolved SED modelling suggest an established stellar population at the location of clump $g$ and a second mass component at the western edge of the galaxy. Both contain $\sim20$\% of the total stellar mass of the system. The western mass component is not detected in H$\alpha$ or far-IR emission. \end{itemize} \par \vspace{2.3ex plus 0.3ex minus 0.3ex} In contrast to most clumpy $z\sim2$ SFGs that have been studied in detail so far, RCSGA0327 does not agree with a single turbulent rotating disk where clumps have formed through gravitational collapse. The system is undergoing an interaction, which has boosted the specific SFR to a factor 3 above the main-sequence at $z\sim2$. The enhanced star formation is localised in multiple compact star-forming regions with high star formation surface densities and signatures of outflows. Such off-center areas of enhanced SF activity have also been seen in local mergers, such as the Antennae galaxies \citep{Whitmore1995, Karl2010}. The red mass component at the western edge of the system does not show significant dust-obscured or unobscured star formation. 
As such, this component must have been gas-poor before the start of the interaction, such that no new burst of SF could be triggered. This would also explain why we do not see a tidal tail originating from this component. We note that for clump $g$, the young stellar age derived from the SED fit is only relevant for the current SF episode triggered by the interaction, and does not contradict the interpretation of this clump as an established stellar population. As a roughly equal-mass, mixed merger of one gas-rich and one gas-poor component, RCSGA0327 is not a common occurrence. Both theoretical and observational estimates of merger rates find that only $\sim10$\% of $z\sim2$ galaxies at $10^9$~M$_\odot$ are currently undergoing an interaction \citep{Conselice2003,Guo2008}. However, the number of mergers in which one or both components are gas-poor is significantly smaller than that, given the overall increase in galaxy gas fraction with redshift. \cite{Lin2008} find that 24\% of mergers are mixed at $z\sim1.1$ based on galaxy pair counts in the DEEP2 Redshift Survey. On top of that, the stellar mass estimate of $\sim 1.2 \times 10^9$~M$_\odot$ for the gas-poor component in RCSGA0327 is unusually low, as can be seen for example from the mass distribution of passive galaxies at $1.4 < z < 2.5$ in GOODS-South (Figure 3, \citealt{Lee2013}). Merger simulations mostly focus on the low-redshift Universe and have so far failed to take into account the different nature of $z\sim2$ SFGs as evidenced by their clumpy morphology, higher gas fractions and stronger turbulence. Recent studies have shown that turbulence and clumpiness have a substantial effect in mergers of present-day spirals with just a few percent of gas, causing significant differences in the star formation history during the interaction \citep{Teyssier2010, Saitoh2009}. The effect could presumably be more dramatic in high-redshift mergers involving high gas fractions. 
\cite{Bournaud2011} present the first wet merger simulations involving two realistic massive, gas-rich, clumpy disks. Such work needs to be extended to lower mass galaxies, as well as mixed and dry mergers involving early type, gas-poor components. \par \vspace{2.3ex plus 0.3ex minus 0.3ex} To finish, we would like to stress two main points which were instrumental in obtaining a full understanding of the physical nature of RCSGA0327. The first point concerns the combination of IFS data with high-resolution rest-frame UV to optical imaging. The kinematics of the ionized gas and the morphology of current star formation derived from IFS data need to be complemented with an understanding of the underlying stellar population derived from spatially resolved SED modelling. In the case of RCSGA0327, the gas-poor component in the interaction only became apparent in the stellar mass surface density maps. Secondly, the analysis presented here would not have been possible without the lensing magnification, which allowed a high-resolution velocity profile and detailed measurements of multiple $<1$~kpc size clumps. The resulting unprecedented view of a rare ongoing interaction at $z\sim2$ shows the promise of detailed study of individual systems to aid and constrain theoretical efforts towards understanding galaxy formation and evolution. \vspace{0.5cm} \begin{acknowledgments} We thank our anonymous referee for a thorough reading of the paper and insightful comments. E.~W. thanks John Hibbard, Tucker Jones, Rachael Livermore, Chris Mihos, Thorsten Naab, Sarah Newman, Emily Wisnioski and Tian-Tian Yuan for sharing data and/or stimulating discussions. Support for HST program 12267 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. 
Travel support for the Keck observations was provided by the Grants-in-Aid of Research Program of the Sigma Xi Scientific Research Society and the NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. K.S. acknowledges support from the University of Michigan's Presidential Fellowship. Data presented in this paper were partly obtained at the W.M. Keck Observatory from telescope time allocated to the National Aeronautics and Space Administration through the scientific partnership with the California Institute of Technology and the University of California. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. We acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \end{acknowledgments} \input{biblio} \end{document}
\section{Introduction} Deep generative models fall into the following categories: (i) flow-based models, such as Glow \cite{kingma2018glow}; (ii) autoregressive models, e.g. the Transformer for language modeling \cite{vaswani2017attention}; (iii) GAN \cite{goodfellow2014generative} based models, such as WaveGAN \cite{donahue2018adversarial} for speech and StyleGAN \cite{karras2020analyzing} for vision applications; (iv) VAE \cite{kingma2013auto} based models, e.g. VQ-VAE \cite{razavi2019generating} and NVAE \cite{vahdat2020nvae}; and (v) diffusion probabilistic models \cite{sohl2015deep} such as ADM \cite{dhariwal2021diffusion}. Diffusion probabilistic models achieve results comparable or superior to the other classes of deep generative models, as exemplified by WaveGrad for speech synthesis \cite{chen2020wavegrad} and ADM for image generation \cite{dhariwal2021diffusion}. The underlying architecture of diffusion probabilistic models is a chain of Markov latent variables. The data flows in two directions: (i) the diffusion process, and (ii) the denoising process. The denoising process is the inference process, which generates the data starting from Gaussian noise. The diffusion process is the training process, which learns to transform data samples into Gaussian noise. In the seminal work of Hoogeboom et al. \cite{hoogeboom2021argmax}, a diffusion model for categorical variables was introduced. The paper shows that the original diffusion process, which is suitable for continuous data such as speech and ordinal data such as images, can also model discrete categorical data. They trained a diffusion network on the language modeling task. In this work we propose a diffusion model for neural machine translation. Furthermore, we show that the proposed model has some capabilities of zero-shot translation. To our knowledge, we are the first to perform conditional text generation using a diffusion model. \section{Related Work} In \cite{sohl2015deep} Sohl-Dickstein et al. 
introduce the diffusion process. The diffusion process takes the variational distribution $q(x_t|x_{t-1})$ and adds Gaussian noise at each time step, where $t \in \{1,\ldots,T\}$, $x_0$ is the original data point and $x_T$ is pure noise. In this section we recap the multinomial diffusion process as defined by Hoogeboom et al. \cite{hoogeboom2021argmax} for categorical data. We denote by $x_t$ a 1-hot vector with $K$ categories. $x_0$ is the data point, and $q(x_t|x_{t-1})$ is the diffusion model that gradually adds a small amount of noise at each step. At $t=T$, $x_T$ is almost pure noise. The opposite direction $p(x_{t-1}|x_t)$ is a learnable distribution that denoises the data. The diffusion model is optimized with the variational bound on the negative log likelihood: \begin{multline} \log P(x_0) \geq E_{x_1, \ldots x_T \sim q} \Big{[} \log p(x_T) \\ + \sum_{t=1}^T \log \frac{p(x_{t-1} | x_t)}{q(x_{t} | x_{t-1})} \Big{]}. \label{eq:diff_categorical_forward_origin} \end{multline} Sohl-Dickstein et al. \cite{sohl2015deep} use $x_0$ as a condition and show that Eq.\ref{eq:diff_categorical_forward_origin} becomes: \begin{multline} \log P(x_0) \geq E_{q}\Big{[}\log p(x_0 | x_1) \\ - \mathrm{KL} \big{(} q(x_T|x_0) | p(x_T) \big{)} \\ - \sum_{t=2}^T \mathrm{KL} \big{(}q(x_{t-1} | x_t, x_0) |p(x_{t-1} | x_t)\big{)} \Big{]} \label{eq:diffusion_final_objective_kl} \end{multline} where $\mathrm{KL} \big{(} q(x_T | x_0) | p(x_T)\big{)} \approx 0$ if the diffusion trajectory $q$ is well defined. The variational distribution $q(x_t|x_{t-1})$ is defined as follows: \begin{equation} q(x_t|x_{t-1}) = \mathcal{C}(x_t | (1 - \beta_t) x_{t-1} + \beta_t / K ) \label{eq:diff_categorical_forward} \end{equation} where $\beta_t$ is the probability of sampling from the uniform distribution. 
Using the Markov chain property, one can obtain a closed form to sample $x_t$ directly from $x_0$: \begin{equation} q(x_t | x_{0}) = \mathcal{C}(x_t | \bar{\alpha}_t x_{0} + (1 - \bar{\alpha}_t) / K ) \label{eq:diff_categorical_forward_x0} \end{equation} where $\bar{\alpha}_t$ and $\alpha_t$ are defined in the same manner as in the original DDPM \cite{ho2020denoising}, i.e. $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{\tau=1}^t \alpha_\tau$. The posterior can then be written in closed form: \begin{equation} q(x_{t-1} | x_{t}, x_0) = \mathcal{C}(x_{t-1} | \tilde{\theta} / \sum_{k=1}^K \tilde{\theta}_k) \label{eq:q_posterior_1} \end{equation} where \begin{equation} \tilde{\theta} = [\alpha_t x_t + (1 - \alpha_t) / K] \odot [\bar{\alpha}_{t-1} x_0 + (1 - \bar{\alpha}_{t-1}) / K ] \label{eq:q_posterior_2} \end{equation} Hoogeboom et al. \cite{hoogeboom2021argmax} predict a probability vector for $\hat{x}_0$ from $x_t$. They parametrize $p(x_{t-1} | x_{t})$ from $q(x_{t-1} | x_t, \hat{x}_0)$, where $x_0$ is approximated with a neural network $\hat{x}_0 = \mu(x_t, t)$. Denote \begin{equation} \theta_{\mathrm{post}}(x_t, x_0) = \tilde{\theta} / \sum_{k=1}^K \tilde{\theta}_k \end{equation} Then, the variational lower bound Eq.\ref{eq:diffusion_final_objective_kl} becomes: \begin{dmath} \log P(x_0) \geq E_{q} \Big{[} \sum_k x_{0,k} \log \hat{x}_{0,k} - \sum_{t=2}^T \mathrm{KL} \big{(} \mathcal{C}(\boldsymbol{\theta}_{\mathrm{post}}(x_t, x_0)) | \mathcal{C}(\boldsymbol{\theta}_{\mathrm{post}}(x_t, \hat{x}_0)) \big{)} \Big{]} \label{eq:loss} \end{dmath} It is worth mentioning the work by Austin et al. \cite{austin2021structured}, which improves on Hoogeboom et al. \cite{hoogeboom2021argmax} by introducing corruption processes with non-uniform transition probabilities. They use transition matrices that mimic Gaussian kernels in continuous space and show that using different transition matrices leads to improved results in text generation. 
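For concreteness, the forward sampling of the closed form above and the normalized posterior parameters can be sketched in a few lines of NumPy. This is a minimal illustration with our own variable names, not code from the cited papers:

```python
import numpy as np

def q_xt_given_x0(x0, alpha_bar_t, K, rng):
    """Sample x_t ~ C(x_t | alpha_bar_t * x0 + (1 - alpha_bar_t) / K)."""
    probs = alpha_bar_t * x0 + (1.0 - alpha_bar_t) / K
    xt = np.zeros(K)
    xt[rng.choice(K, p=probs)] = 1.0  # draw a category, return it as 1-hot
    return xt

def theta_post(xt, x0, alpha_t, alpha_bar_tm1, K):
    """Normalized parameters of the posterior q(x_{t-1} | x_t, x_0)."""
    theta = (alpha_t * xt + (1.0 - alpha_t) / K) \
          * (alpha_bar_tm1 * x0 + (1.0 - alpha_bar_tm1) / K)
    return theta / theta.sum()
```

Training then amounts to sampling $t$, drawing $x_t$ with the first function, predicting $\hat{x}_0=\mu(x_t,t)$, and penalizing the KL divergence between the posterior parameters computed from $x_0$ and from $\hat{x}_0$.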
\begin{figure}[h] \hspace{-0.9cm} \includegraphics[width=.55\textwidth,keepaspectratio]{arch-high-level.png} \caption{High level description of the proposed model. The encoder receives the source-language sentence $x$ and computes outputs that are fed into the decoder using a cross attention mechanism, as customary. The decoder receives a noisy target-language sentence, $\hat{y}_t$, and outputs a slightly less noisy $\hat{y}_{t-1}$. $\phi$ is the time positional encoding module, consisting of a sinusoidal positional embedding and a linear layer. Its output is added to each layer of the encoder and the decoder. This is in addition to the axial positional encoding which is built into the transformer.} \label{fig:arch} \end{figure} \section{Method} We use a neural network $\mu$ which predicts a probability vector at each diffusion step, similar to the method used by Hoogeboom et al. \cite{hoogeboom2021argmax}. The architecture is transformer-based with an additional time-based positional encoding. Unlike Hoogeboom et al., who perform unconditional text generation, we are interested in sentence translation; thus we add the sentence in the source language as a condition and predict probability vectors to create the sentence in the target language. Our architecture follows an encoder-decoder approach, where the source-language sentence is given as input to a transformer encoder and the noisy target sentence is given as input to a transformer decoder, such that the encoder's outputs are used in a cross attention mechanism in each layer of the decoder. Unlike standard encoder-decoder systems, our method does not predict output tokens one at a time (autoregressively), but rather predicts all token probabilities at each denoising step. 
During training, a time step $t$ is randomly sampled, and using the noise schedule $\alpha_t, \bar{\alpha}_t$, the posteriors are calculated and a noisy target sentence $y_t$ is created, using the closed-form formula for the uniform noise \ref{eq:q_posterior_1}. Then, a forward pass is performed to predict $\hat{y}_{t-1}$ with our neural network: $\hat{y}_{t-1} = \mu(y_t, x, t)$, which in turn is used to calculate the loss function. In this notation, $x$ denotes source sentences and $y$ denotes target sentences. During inference, we start from random uniform noise $y_T$ and iteratively run $\mu$ on it $T$ times to get $\hat{y}_0$. \noindent{\bf Data Processing\quad} The diffusion model requires inputs of fixed length. Thus, we pad or truncate all sentences to a fixed length $L$. Sentences are padded with a special token $[PAD]$. Two special language tokens are added to each input sentence; the first indicates the source language and the second indicates the target language. This is used to accelerate convergence and to allow for zero-shot learning on pairs of languages unseen during training. \section{Experiment Setup} \subsection{Datasets} We used three datasets in total, all of which are from WMT. We trained our network on WMT14 \cite{bojar2014findings} DE-EN and WMT14 FR-EN jointly, in both directions, i.e. German to English, English to German, French to English and English to French. We downsampled the larger French dataset in each epoch in order to have the same number of German-English and French-English samples. Lastly, we used the WMT19 \cite{barrault2019findings} DE-FR dataset for evaluation only, in order to test the method's zero-shot performance. \subsection{Evaluation Metrics} We used common machine translation evaluation metrics: corpus-level BLEU, SacreBLEU, TER and chrF. We used the official SacreBLEU implementation \cite{post-2018-call} with default parameters. 
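The fixed-length input preparation described under Data Processing can be sketched as follows. The helper and its token identifiers are illustrative, and prepending the two language tokens is our assumption about their placement:

```python
def prepare_input(tokens, src_lang_tok, tgt_lang_tok, L, pad_tok):
    """Prepend the two language tokens, then truncate/pad to length L."""
    seq = [src_lang_tok, tgt_lang_tok] + list(tokens)
    seq = seq[:L]                       # truncate long sentences
    seq += [pad_tok] * (L - len(seq))   # pad short ones with [PAD]
    return seq
```

For example, with $L=6$, `prepare_input([7, 8, 9], 100, 101, 6, 0)` yields `[100, 101, 7, 8, 9, 0]`.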
\subsection{Baselines} Current state-of-the-art results for the WMT14 translation tasks reach $\sim35$ BLEU and SacreBLEU for the German-English translation and $\sim45$ for the French-English translation. All of these methods use some type of transformer with autoregressive decoding and very large models, and some use extra data. \subsection{Hyperparameter Settings} We used the ADAM optimizer \cite{kingma2014adam}, and tuned the learning rate, batch size and gamma parameters. Eventually, we used a learning rate of $5e-4$, a gamma value of $0.9$ and a batch size of $512$. We also tried $1000$ versus $4000$ diffusion steps, and did not see a major difference. We also experimented with the number of transformer layers. Our best model has $12$ transformer layers. Other parameters are $16$ attention heads and a hidden dimension of $512$. \noindent{\bf Tokenization\quad} A tokenizer is trained on data from all three languages, using the WordPiece method with normalization similar to BERT \cite{devlin2018bert} (NFD Unicode, followed by Lowercase and StripAccents). Whitespace pre-tokenization is also used. The vocabulary size is a hyperparameter $K=|V|$, which we tuned. \noindent{\bf Vocabulary Size\quad} The vocabulary size is an important hyperparameter, since it determines the dimension of the space the diffusion model needs to predict, and it changes how the noise probability is distributed (since the constant probability of change is spread over a different number of tokens). Hoogeboom et al. \cite{hoogeboom2021argmax} worked with 27 and 256 categories for the $text8$ and $enwik8$ datasets, respectively. The fact that no tokenization was used hinted that perhaps a large vocabulary size does not work well with this method. However, Austin et al. \cite{austin2021structured} were able to train a network with 8192 categories, using a slightly different method. Furthermore, we know the importance of tokenization in complicated tasks such as translation. 
Therefore, we decided to try different vocabulary sizes and see how sensitive the method is to it. \begin{table}[] \centering \begin{tabular}{l|cccc} \toprule \textbf{Tasks} & BLEU & SacreBLEU & TER & chrF \\ \midrule DE$\shortrightarrow$EN & 7.17 & 8.13 & 93.1 & 34.7 \\ EN$\shortrightarrow$DE & 3.54 & 4.54 & 102.3 & 33.5 \\ FR$\shortrightarrow$EN & 8.62 & 9.93 & 88.4 & 37.8 \\ EN$\shortrightarrow$FR & 7.56 & 9.02 & 90.7 & 37.5 \\ \midrule DE$\shortrightarrow$FR & 4.17 & 5.06 & 94.7 & 31.4 \\ FR$\shortrightarrow$DE & 2.96 & 4.04 & 98.1 & 31.4 \\ \bottomrule \end{tabular} \caption{Results for the different translation tasks. The first four rows are for supervised tasks, and the last two rows are for zero-shot tasks, i.e. pairs of languages unseen during training.} \label{tab:results} \end{table} \begin{table}[] \centering \setlength\tabcolsep{4pt} \begin{tabular}{l|cccc} \toprule \textbf{V. Size} & DE$\shortrightarrow$EN & EN$\shortrightarrow$DE & FR$\shortrightarrow$EN & EN$\shortrightarrow$FR \\ \midrule 1024 & 5.60 & 3.18 & 7.23 & 6.73 \\ 2048 & 7.92 & 4.42 & 9.83 & 8.99 \\ \textbf{4096} & \textbf{8.13} & \textbf{4.54} & \textbf{9.93} & \textbf{9.02} \\ 8192 & 6.76 & 4.00 & 8.89 & 7.75 \\ \bottomrule \end{tabular} \caption{SacreBLEU results with different vocabulary sizes. We see that $K=4096$ gives the best results, which is not on the edge of the values chosen. This indicates it is a sweet spot in the vocabulary-size tradeoff.} \label{tab:vocab-size} \end{table} \begin{table*}[h] \centering \begin{tabular}{l|lcc} \toprule \textbf{Sample} & \textbf{Sentence} & \textbf{Lang.} & \textbf{SacreBLEU} \\ \midrule 1st Input & je sais qu'il voudrait une garantie de quatre ans. & FR & - \\ 1st Reference & i know he would like a four - year guarantee. & EN & - \\ 1st Prediction & i know he need a guarantee for four years. 
& EN & 17.47 \\ \midrule 2nd Input & \vtop{\hbox{\strut the ecb's sole mandate has always revolved around} \hbox{\strut inflation, therefore mario draghi and his team have all} \hbox{\strut the more reason to take action at their meeting next week.}} & EN & - \\ 2nd Reference & \vtop{\hbox{\strut le mandat unique de la bce a toujours porte sur l'inflation,} \hbox{\strut donc mario draghi et son equipe ont davantage de raisons} \hbox{\strut d'agir lors de la reunion de la semaine prochaine.}} & FR & - \\ 2nd Prediction & \vtop{\hbox{\strut lle systeme unique unique de la bce est derriere le trend} \hbox{\strut en phoque, afin ou mario draghi et son mont sont plus} \hbox{\strut justifies de prendre en contact a la session prochaine.}} & FR & 17.10 \\ \midrule 3rd Input & \vtop{\hbox{\strut zwei kinder haben in uruguay den mord eines } \hbox{\strut 11 - jahrigen eingestanden.}} & DE & - \\ 3rd Reference & \vtop{\hbox{\strut two children have confessed to the murder of an } \hbox{\strut 11 - year - old in uruguay.}} & EN & - \\ 3rd Prediction & \vtop{\hbox{\strut they had spent a 118'old increased blood in uruguay } \hbox{\strut about alleged murders abandone treating two children.}} & EN & 6.84 \\ \midrule 4th Input & town council delighted with solid budget & EN & - \\ 4th Reference & gemeinderat freut sich uber soliden haushalt & DE & - \\ 4th Prediction & der stadtrat erfullt einen beliebten haushalt & DE & 8.12 \\ \bottomrule \end{tabular} \caption{Randomly selected samples from our model. The 2nd sample shows a relatively good translation for a long sentence, and the 3rd sample shows a failed translation for a seemingly easier sentence.} \label{tab:samples} \end{table*} \section{Results} Results for the different translation tasks are depicted in Table \ref{tab:results}. The results are unsatisfactory, implying the method is currently not suitable for the translation task. 
Results for the zero-shot translation tasks (WMT19) show that some generalization to unseen pairs of languages was possible, but because the overall performance of the system is low, it is hard to assess whether the method transfers well to zero-shot learning. Results for the vocabulary size tuning are shown in Table \ref{tab:vocab-size}, suggesting a vocabulary of size $K=4096$ is closest to the optimal value in this case. Qualitatively speaking, the quality of the results varies, and overall we see an expected correlation between the difficulty of the inputs and the quality of the translation. Nonetheless, some observations are hard to explain, such as relatively good translations for seemingly hard sentences and relatively bad translations for seemingly easy sentences. Table \ref{tab:samples} shows four randomly selected samples from the test set, one from each task (ordered pair of languages). \section{Discussion} \subsection{Learning the Transition Matrix} One idea we had was to learn the transition matrices that determine the probabilities of noise changing one token to another. In the described ``vanilla'' implementation, all probabilities of change are uniform. Diffusion models for continuous or ordinal data use Gaussian noise, which gives higher probabilities to small changes, resulting in a much easier learning ground for the denoising optimization procedure. This advantage is lost when using the uniform distribution for categorical data. Austin et al. \cite{austin2021structured} were able to improve on that by using non-uniform noise distributions. Following this idea, we aimed to learn the noise distribution jointly with the diffusion model. We later found this to be infeasible, since the learning procedure uses pre-computed powers of the transition matrix to enable fast learning. Specifically, for each training iteration at some $t$, this would require the computation of the $t^{th}$ power of a $K\times K$ matrix, where $K\sim 2^{11}$ and $t\sim 1000$. 
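A back-of-the-envelope estimate (our illustration, not from the cited work) makes the obstacle concrete: even with exponentiation by squaring, a fresh $t^{th}$ matrix power requires on the order of $\log_2 t$ dense $K\times K$ multiplications, and this would have to be repeated whenever the learned matrix changes.

```python
import math

K, t = 2**11, 1000          # vocabulary size and diffusion horizon
matmul_flops = 2 * K**3     # one dense K x K matrix multiply (~2K^3 flops)
n_matmuls = math.ceil(math.log2(t))  # exponentiation by squaring
flops_per_update = matmul_flops * n_matmuls
print(f"~{flops_per_update:.1e} flops per transition-matrix update")
```
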
This makes the technique infeasible. \subsection{Conclusions} In this work, we tried to solve a thoroughly researched NLP task, NMT, using a recent and very promising method, DDPMs, for the first time (to our knowledge). This method has the potential to generate text with high performance in a non-autoregressive way. Although DDPMs achieve state-of-the-art results in generating both continuous and ordinal data, they have yet to show competitive results for categorical data such as text. We hoped to show that they can give reasonable results for non-autoregressive translation. \bibliographystyle{IEEEtran}
\section{Introduction} Most of the modern deep Reinforcement Learning (RL) algorithms are built on the foundations of the approximate dynamic programming (ADP) formulation~\citep{bertsekas1995neuro}. This formulation led to a family of temporal difference algorithms and to some of the most performant RL algorithms. Model-free deep RL algorithms such as SAC~\citep{haarnoja2018soft}, TD3~\citep{fujimoto2018addressing}, DDPG~\citep{lillicrap2015continuous} and model-based deep RL algorithms such as Dreamer~\citep{hafner2023mastering}, TD-MPC~\citep{hansen2022temporal}, LOOP~\citep{sikchi2022learning}, MuZero~\citep{schrittwieser2020mastering} and MBPO~\citep{janner2019trust} exemplify this class of successful algorithms. These methods have accomplished various feats, including solving challenging games such as Starcraft, Chess, Go, and Shogi, while also being very successful in robotics for learning manipulation and locomotion policies. In this work, we consider an alternate formulation for RL, which views RL as a convex objective with linear constraints. This framework has been formalized before~\citep{nachum2020reinforcement} and has been known in its linear programming form since the work of~\cite{manne1960linear,denardo1970linear}. The convex program for RL can be converted into a dual unconstrained form that admits the tools already present in the convex optimization literature, making it more suitable for stochastic optimization. The methods arising from the convex program formulation of RL also have the benefit of being truly off-policy: they can use arbitrary off-policy data to better estimate the on-policy policy gradient, i.e., they can implicitly perform distribution correction. For this reason, the methods in this space have often been referred to as DICE (DIstribution Correction Estimation) methods in previous literature~\citep{nachum2019algaedice,kostrikov2019imitation,lee2021optidice,ma2022smodice, zhang2020gendice}. 
Our first contribution is a demonstration of how several recent algorithms in deep RL and imitation learning can be viewed as dual RL methods in a unified framework. These algorithms represent state-of-the-art methods~\citep{garg2023extreme,zhu2020off,kumar2020conservative, garg2021iq} in reinforcement and imitation learning, for both online and offline settings. A number of these recent works have utilized differing tools to derive their method (e.g., Gumbel regression~\citep{garg2023extreme}, change of variables~\citep{garg2021iq,zhu2020off}, a lower-bounded Q-function~\citep{kumar2020conservative}), which makes it difficult to study them on common ground. We hope that our presented unification provides a framework for future methods to perform evaluation and to analyze which factors actually make an algorithm better or worse. Second, building upon the dual framework, we propose a new algorithm for off-policy imitation learning that is able to leverage arbitrary off-policy data to learn near-expert policies, relaxing the coverage assumption of previous works~\citep{ma2022smodice,zhu2020off,kim2022demodice}. Our resulting algorithm, $\texttt{ReCOIL}$, is simple, non-adversarial, and admits a single-player optimization, in contrast to previous works in imitation~\citep{ghasemipour2020divergence,ho2016generative,fu2017learning, sikchi2022ranking}. We empirically demonstrate the failure of previous imitation learning methods based on the coverage assumption in the dual setting. We also evaluate our methods for learning to imitate in offline continuous control settings on a set of MuJoCo environments and show competitive performance. Finally, we point out scope for future exploration in designing better algorithms for deep RL and imitation learning utilizing the dual framework.
\begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{figures/figure1/dual_landscape.pdf} \\ \vspace{-2.0mm} \end{center} \caption{We show that a number of prior methods can be understood as special cases of the dual RL framework. Based on this framework, we also propose new methods addressing the shortcomings of previous works (boxed in green).} \label{fig:dualRL_main} \end{figure*} \section{Preliminaries} A Markov Decision Process (MDP) is defined by the tuple $(\mathcal{S}, \mathcal{A}, p, r, d_0)$ with state space $\mathcal{S}$, action space $\mathcal{A}$, transition probability $p(s_{t+1}\mid s_t, a_t)$, reward function $r(s,a)$, and initial state distribution $d_0(s)$. In the infinite-horizon discounted MDP, the goal of reinforcement learning algorithms is to maximize the return of policy $\pi$, given by $J^\pi=\mathbb{E}_{s_0 \sim d_0,\, a_t\sim\pi(\cdot\mid s_t)}\left[\sum_{t=0}^\infty\gamma^t r(s_t,a_t)\right]$. \textbf{Value functions:} $V^\pi : \mathcal{S}\rightarrow \mathbb{R}$ denotes the state-value function, which estimates the return from the current state when following policy $\pi$, defined as $V^\pi(s)=\mathbb{E}_{a_t\sim\pi(\cdot\mid s_t)}\left[\sum_{t=0}^\infty\gamma^t r(s_t,a_t)\,\middle|\, s_0=s\right]$. Similarly, $Q^\pi : \mathcal{S}\times\mathcal{A}\rightarrow \mathbb{R}$ denotes the action-value function, usually referred to as the Q-function, defined as $Q^\pi(s,a)=\mathbb{E}_{a_t\sim\pi(\cdot\mid s_t)}\left[\sum_{t=0}^\infty\gamma^t r(s_t,a_t)\,\middle|\, s_0=s,a_0=a\right]$. Value functions corresponding to the optimal policy $\pi^*$ are denoted $V^*$ and $Q^*$. In the ADP setting, the value function is updated by minimizing the mean-squared error to the target given by the Bellman operator $\mathcal{T}^{\pi_Q}$: \begin{equation} \mathcal{T}^{\pi_Q}{Q}(s_t,a_t) = r(s_t,a_t)+\gamma\,\mathbb{E}_{s_{t+1}\sim p,\,a_{t+1}\sim\pi_Q}\left[Q(s_{t+1},a_{t+1})\right], \end{equation} where $\pi_{Q}$ is updated to be greedy with respect to the current Q-function $Q$.
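To make the backup concrete, here is a minimal tabular sketch (the two-state MDP and all of its numbers are invented for illustration). Repeatedly applying the backup with the greedy policy $\pi_Q$ is exactly value iteration and converges to $Q^*$:

```python
import numpy as np

# Hypothetical tabular MDP: 2 states, 2 actions (illustrative values only).
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],      # P[s, a, s']
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])

def bellman_backup(Q, pi, P, r, gamma):
    """One application of T^{pi_Q}: r(s,a) + gamma * E_{s'~p, a'~pi}[Q(s',a')]."""
    v_next = (pi * Q).sum(axis=1)            # E_{a'~pi(s')}[Q(s', a')]
    return r + gamma * P @ v_next            # shape (nS, nA)

def greedy(Q):
    """pi_Q: greedy (one-hot) policy w.r.t. the current Q-function."""
    pi = np.zeros_like(Q)
    pi[np.arange(Q.shape[0]), Q.argmax(axis=1)] = 1.0
    return pi

# Repeated greedy backups (value iteration) converge to the fixed point Q*.
Q = np.zeros((2, 2))
for _ in range(500):
    Q = bellman_backup(Q, greedy(Q), P, r, gamma)
```

Since the backup is a $\gamma$-contraction, the iterate above is (numerically) a fixed point, bounded by $r_{\max}/(1-\gamma)$.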
We will use $\mathcal{T}V(s,a)$ to denote $r(s,a)+\mathbb{E}_{s_{t+1}\sim p(\cdot|s_t,a_t)}\left[V(s_{t+1})\right]$. Note the absence of an explicit policy in this operator. \textbf{State-action/state visitation distribution: } The state-action visitation distribution (stationary occupancy) $d^\pi(s,a): \mathcal{S}\times\mathcal{A}\to[0,\infty)$ of $\pi$ is: \begin{equation} d^\pi(s,a) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t=s,a_t=a\mid s_0\sim d_0,a_t\sim\pi(\cdot\mid s_t),s_{t+1}\sim p(\cdot\mid s_t,a_t)). \end{equation} The state visitation distribution marginalizes $d^\pi(s,a)$ over actions; we overload notation and write $d^\pi(s)=\sum_{a\in\mathcal{A}}d^\pi(s,a)$. In this work, we use $d^O$, $d^R$, and $d^E$ to denote the offline, replay-buffer, and expert state-action visitation distributions respectively. \textbf{$f$-divergence and its dual form: } Let $f:(0,\infty)\to \mathbb{R}$ be a convex, lower semi-continuous function with $f(1)=0$. Let $P$ and $Q$ be two probability distributions; the $f$-divergence is then defined as: \begin{equation} \f{P}{Q}=\mathbb{E}_{z\sim Q}\left[f\left(\frac{P(z)}{Q(z)}\right)\right]. \end{equation} The convex conjugate $f^*$ of $f$ is defined by: \begin{equation} \label{eq:cx_conjugate_def} f^*(y)= \sup_{x\in\mathbb{R}}[\langle x,y \rangle-f(x)], \end{equation} where $\langle \cdot,\cdot \rangle$ denotes the dot product. Table~\ref{tbl:div} provides a list of common functions $f$ and their conjugates. \section{Dual Connections in Reinforcement Learning} \label{sec:background} A number of recently proposed algorithms for deep reinforcement learning and imitation learning use different mathematical tools to derive their method (e.g., Gumbel regression~\citep{garg2023extreme}, change of variables~\citep{garg2021iq,zhu2020off}, pessimistic Q-function~\citep{kumar2020conservative}), which makes it difficult to study them on common ground.
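As a quick sanity check of the conjugate definition in Eq.~\ref{eq:cx_conjugate_def} (a sketch; the grid bounds are arbitrary), the conjugate of the Pearson $\chi^2$ generator $f(x)=(x-1)^2$ can be verified numerically against its closed form $f^*(y)=y+y^2/4$:

```python
import numpy as np

# Pearson chi^2 generator and a grid approximation of the sup in the conjugate.
f = lambda x: (x - 1) ** 2
xs = np.linspace(-50.0, 50.0, 200001)   # grid over x approximating the supremum

def f_star_numeric(y):
    """f*(y) = sup_x [x*y - f(x)], approximated on the grid."""
    return np.max(xs * y - f(xs))

# Closed form for this generator: f*(y) = y + y^2/4 (maximizer x = 1 + y/2).
for y in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(f_star_numeric(y) - (y + y ** 2 / 4)) < 1e-3
```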
This paper aims to connect a number of algorithmic developments in deep reinforcement learning as special cases of a simple unified framework of dual reinforcement learning. Truly off-policy algorithms, which can leverage arbitrary off-policy data for policy improvement, are a simple by-product of this previously known but, in practice, largely ignored framework~\citep{nachum2020reinforcement}. In Section~\ref{sec:dual_rl}, we first describe the general framework for dual reinforcement learning, and then detail its connections to reinforcement learning and imitation learning in Section~\ref{sec:connections_to_rl} and Section~\ref{sec:connections_to_il}, respectively. In Section~\ref{sec:new_il_method}, we propose a new method, \texttt{ReCOIL}, for imitation learning from arbitrary experience. \subsection{Dual Reinforcement Learning} \label{sec:dual_rl} In this work, we consider the regularized policy optimization setting: \begin{equation} \label{eq:reg_rl} \max_\pi \mathbb{E}_{d^\pi(s,a)}[r(s,a)] -\alpha \f{d^\pi(s,a)}{d^O(s,a)}, \end{equation} where $d^O$ is a known state-action visitation distribution, $\alpha$ is a temperature parameter that allows us to weigh policy improvement against conservatism by staying close to the state-action distribution $d^O$, and $f$ denotes a particular $f$-divergence. Optimizing over $\pi$, at first sight, gives us a non-convex problem.
We can rewrite the problem as a convex optimization problem (CoP) by considering optimization over \textit{valid} state-action visitations satisfying the Bellman-flow constraints: \begin{align} \label{eq:primal_rl_q} \textbf{Q-CoP:}~~\max_{\pi,d\ge0} \mathbb{E}_{d(s,a)}[r(s,a)]-{\alpha}\f{d(s,a)}{d^O(s,a)}\\ \text{s.t}~~d(s,a)=(1-\gamma)d_0(s)\pi(a|s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a')\pi(a|s). \nonumber \end{align} The above problem is overconstrained -- the inner maximization w.r.t $d$ is unnecessary, as the $\mathcal{S}\times \mathcal{A}$ constraints uniquely determine the distribution. We can relax the constraints to get an equivalent but different form of the problem: \begin{align} \label{eq:primal_rl_v} \textbf{V-CoP:}~~\max_{d\ge0} \mathbb{E}_{d(s,a)}[r(s,a)]-\alpha\f{d(s,a)}{d^O(s,a)}\\ \text{s.t}~~\sum_{a\in\mathcal{A}} d(s,a)=(1-\gamma)d_0(s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a'). \nonumber \end{align} Since the objective is convex and the constraints are linear, we can leverage Lagrangian duality to convert the convex program above into its dual unconstrained optimization form (see Appendix~\ref{ap:dual_rl_review} for a derivation and review). In summary, we have two methods for policy optimization, given by: \begin{align} \label{eq:dual-Q} \texttt{dual-Q}: &\max_{\pi}\min_{Q} (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] +{\alpha}\mathbb{E}_{s,a\sim d^O}\left[f^*\left(\left[\mathcal{T}^\pi Q(s,a)-Q(s,a)\right]/ \alpha\right)\right], \end{align} \begin{align} \label{eq:dual-V} &\texttt{dual-V}: \min_{V(s)} {(1-\gamma)}\mathbb{E}_{d_0(s)}\left[V(s)\right] +{\alpha}\mathbb{E}_{s,a\sim d^O}\left[f^*\left(\left[\mathcal{T}V(s,a)-V(s)\right]/ \alpha\right)\right], \end{align} where $Q$ and $V$ are the Lagrange variables in this framework. For notational simplicity, we overload $y(s,a)$ to denote $r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)$ in the \texttt{dual-Q} setting and $r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)$ in the \texttt{dual-V} setting.
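To make the duality concrete, here is a minimal numeric sketch (an invented two-state MDP, Pearson $\chi^2$ divergence, and the positivity constraint on $d$ dropped so that the plain conjugate $f^*$ applies): solving the V-CoP primal and the unconstrained \texttt{dual-V} objective gives matching optimal values by strong duality.

```python
import numpy as np
from scipy.optimize import minimize

# Invented two-state, two-action MDP (illustrative values only).
nS, nA, gamma, alpha = 2, 2, 0.9, 1.0
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # P[s, a, s']
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0], [0.5, 2.0]])
d0 = np.array([0.6, 0.4])                 # initial state distribution
dO = np.full((nS, nA), 1.0 / (nS * nA))   # regularization distribution

f      = lambda x: (x - 1) ** 2           # Pearson chi^2 generator
f_star = lambda y: y + y ** 2 / 4         # its convex conjugate

# Primal V-CoP (positivity of d dropped, matching the plain f* dual):
#   max_d  E_d[r] - alpha * E_{dO}[f(d / dO)]   s.t.  Bellman-flow constraints.
def primal_neg(d_flat):
    d = d_flat.reshape(nS, nA)
    return -(np.sum(d * r) - alpha * np.sum(dO * f(d / dO)))

def flow(d_flat):
    d = d_flat.reshape(nS, nA)
    return d.sum(axis=1) - ((1 - gamma) * d0 + gamma * np.einsum('sat,sa->t', P, d))

res_p = minimize(primal_neg, dO.ravel(), method='SLSQP',
                 constraints={'type': 'eq', 'fun': flow})

# Unconstrained dual-V:  min_V (1-gamma) E_{d0}[V] + alpha * E_{dO}[f*((TV - V)/alpha)].
def dual(V):
    y = r + gamma * np.einsum('sat,t->sa', P, V) - V[:, None]
    return (1 - gamma) * d0 @ V + alpha * np.sum(dO * f_star(y / alpha))

res_d = minimize(dual, np.zeros(nS), method='BFGS')

# At the optimum, the visitation ratio d*/dO equals f*'(y/alpha) = 1 + y/(2*alpha).
d_star = res_p.x.reshape(nS, nA)
y_star = r + gamma * np.einsum('sat,t->sa', P, res_d.x) - res_d.x[:, None]
print(-res_p.fun, res_d.fun)   # strong duality: the two values coincide
```

The last lines also check the distribution-ratio property discussed next, with $\chi^2$ giving ${f^*}'(u)=1+u/2$.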
We note an important property of the dual RL formulation: at convergence, the ratio between the visitation distribution of the optimal policy and the distribution used for regularization is captured by the first derivative of the conjugate $f$-divergence. Formally, \begin{equation} \frac{d^*(s,a)}{d^O(s,a)} = {f^*}'\left(\frac{y(s,a)}{\alpha}\right). \end{equation} The dual formulation has two appealing properties: (a) the gradient of the objective (Eq.~\ref{eq:dual-Q} and~\ref{eq:dual-V}) w.r.t $\pi$, as $Q$ is optimized, is the on-policy policy gradient computed using off-policy data\footnote{Note that the on-policy policy gradient is with respect to the regularized Q-values.}; (b) it allows us to incorporate regularization of different forms, such as w.r.t behavior data (pessimism~\citep{wu2019behavior}), data generated by a previous policy (trust region~\citep{kakade2001natural,schulman2015trust,schulman2017proximal}), a uniform distribution (max-entropy~\citep{haarnoja2018soft}), or an online planner~\citep{sikchi2022learning}. In the dual formulation above, we have ignored the constraint that the distribution $d(s,a)$ must be non-negative for all $s,a$. The closed-form solutions for the dual formulation under the positivity constraint can be found in Appendix~\ref{ap:positivity_constraints}. The positivity constraint has the effect of changing the function $f^*$ in Equations~\ref{eq:dual-Q} and~\ref{eq:dual-V} to $f_p^*$ (refer Appendix~\ref{ap:positivity_constraints}). \subsection{Connections to duality in reinforcement learning} \label{sec:connections_to_rl} Equations~\ref{eq:primal_rl_q} and~\ref{eq:primal_rl_v} provide a natural starting point for both offline and online reinforcement learning algorithms. Indeed, solving the $f$-divergence-regularized objective of Q-CoP using approximate dynamic programming has led to one of the largest classes of offline reinforcement learning methods, which avoid overestimation using pessimism.
Using the dual RL framework, we obtain a set of approaches that leverage \texttt{dual-Q} and \texttt{dual-V} for policy improvement. Our first result shows that Conservative Q-Learning~\citep{kumar2020conservative}, an offline RL method primarily understood as preventing overestimation by learning a lower-bounded Q-function, is actually a \texttt{dual-Q} method. The lemma below formalizes this statement: \begin{restatable}[]{lemma}{cql} \label{thm:CQL} Conservative Q-Learning (CQL) is the dual of Q-CoP with the generator function $f=(t-1)^2$ (Pearson $\chi^2$) and when the regularization distribution is the replay buffer ($d^O=d^R$). \end{restatable} In other words, CQL eventually solves the regularized RL problem (Q-CoP) in its dual form, where the regularization is a particular $f$-divergence. This unification indicates that its better performance compared to the family of behavior-regularized offline RL methods~\citep{nair2020awac,fujimoto2018addressing, wu2019behavior}, which solve the Q-CoP using approximate dynamic programming, is likely due to the choice of $f$-divergence and the more amenable optimization afforded by the dual formulation. The \texttt{dual-Q} formulation has been previously studied for online RL under the name AlgaeDICE~\citep{nachum2019algaedice}, but not evaluated in the context of offline RL. Lemma~\ref{thm:CQL} thus also suggests that CQL is a special case of AlgaeDICE. Leveraging the dual form of \textbf{V-CoP} converts the policy improvement problem from a two-player min-max game to a single-player optimization, thus potentially making the optimization easier to solve~\citep{nachum2020reinforcement}. We also note that an additional step needs to be performed to recover the policy in \texttt{dual-V}, which requires solving a supervised learning problem (see Appendix~\ref{ap:recovering_policy}).
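For intuition, specialize \texttt{dual-Q} (Eq.~\ref{eq:dual-Q}) to the Pearson $\chi^2$ generator $f(t)=(t-1)^2$, whose conjugate is $f^*(y)=y+y^2/4$. Writing $y(s,a)=\mathcal{T}^\pi Q(s,a)-Q(s,a)$ and using $\alpha f^*(y/\alpha)=y+y^2/(4\alpha)$, the objective becomes (a direct substitution, sketching the reduction behind Lemma~\ref{thm:CQL}):
\begin{equation*}
\max_{\pi}\min_{Q}\ (1-\gamma)\,\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] + \mathbb{E}_{s,a\sim d^R}\left[y(s,a)\right] + \frac{1}{4\alpha}\,\mathbb{E}_{s,a\sim d^R}\left[y(s,a)^2\right],
\end{equation*}
i.e., a linear term contrasting Q-values under the policy with those under the data, plus a squared Bellman-error term -- the CQL-style structure referenced in Lemma~\ref{thm:CQL}.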
Here, we first show that Extreme Q-Learning (X-QL)~\citep{garg2023extreme}, a method for both online and offline RL based on the principle of \textit{implicit maximization} in value function space using Gumbel regression, can be reduced to a \texttt{dual-V} problem with a semi-gradient update rule (i.e., $\texttt{stop-gradient}\left(r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')\right)$) when $f$ is set to be the reverse KL divergence. Here, \textit{implicit maximization} refers to finding the extreme values of a distribution using only samples from the distribution. This insight, obtained through duality, allows us to propose a class of algorithms extending X-QL by choosing different functions $f$, which we show below results in a family of implicit maximizers. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{figures/implicit_maximizers/f_implicit_maximizers.pdf} \\ \end{center} \vspace{-1.0mm} \caption{A family of implicit maximizers arising from semi-gradient dual reinforcement learning corresponding to different $f$-divergences. 10000 datapoints are sampled from a 1-D bounded Gaussian distribution $D$ and $v$ is inferred using Equation~\ref{eq:implicit_maximization_general}. As $\tau\to 1$ (see legend) we obtain more accurate estimates for the supremum of the support.} \label{fig:implicit_maximizers} \vspace{5pt} \end{figure} \begin{restatable}[]{lemma}{xql} \label{thm:XQL} Extreme Q-Learning (X-QL) is the dual of V-CoP with the $f$-divergence set to be the reverse Kullback-Leibler divergence with a semi-gradient update rule.
\end{restatable} \textbf{A family of implicit maximizers}: Consider the $\lambda$-parameterized semi-gradient \texttt{dual-V} objective below: \begin{equation} \label{eq:implicit_maximization} \min_{V(s)} (1-\lambda)\,\mathbb{E}_{s\sim d^O}\left[V(s)\right] +\lambda\,\mathbb{E}_{s,a\sim d^O}\left[f^*_p\left(\hat{Q}(s,a)-V(s)\right)\right], \end{equation} where $\hat{Q}(s,a)=r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')$, with the hat denoting stop-gradient. More generally, for any random variable $X$ with distribution $D$, \begin{equation} \label{eq:implicit_maximization_general} \min_{v} (1-\lambda)\,v+ \lambda\,\mathbb{E}_{x\sim D}\left[f^*_p\left( x-v \right)\right]. \end{equation} We show through Lemma~\ref{thm:implicit_maximizer} below and a simple example (Figure~\ref{fig:implicit_maximizers}) that the semi-gradient form of \texttt{dual-V} optimization naturally gives rise to a family of implicit maximizers. Intuitively, this is because the second term in Eq.~\ref{eq:implicit_maximization_general} decreases as $v$ increases and saturates once $v$ reaches the supremum of the support of $D$, while the first term is minimized for smaller $v$. This stands in contrast to the specially curated implicit maximizers found in offline RL methods~\citep{kostrikov2021offline}. Gumbel regression becomes a special case of this family. We list some of the loss functions for value function updates with different $f$-divergences in Appendix~\ref{ap:implicit_maximizers_family}. We also highlight that the full-gradient variant of the \texttt{dual-V} framework for offline RL has been studied extensively in OptiDICE~\citep{lee2021optidice}. \begin{restatable}[]{lemma}{implicit} \label{thm:implicit_maximizer} Let $X$ be a real-valued random variable with bounded support whose supremum is $x^*$.
Then the solution $v_\lambda$ obtained by optimizing Equation~\ref{eq:implicit_maximization_general} satisfies the following properties: \begin{equation} \lim_{\lambda\to 1} v_\lambda = x^* ~\text{and}~ \forall ~\lambda_1<\lambda_2 \in (0,1),~~v_{\lambda_1}\le v_{\lambda_2}. \end{equation} \end{restatable} \textbf{A generalized policy iteration view of semi-gradient \texttt{dual-V}: } The \texttt{dual-V} framework presents optimization difficulties when using full gradients~\citep{dai2017boosting}. X-QL shows that stable learning can be achieved using the semi-gradient form. Our insight into implicit maximizers suggests that using semi-gradients brings the \texttt{dual-V} framework closer to the generalized policy iteration framework: the semi-gradient update of $V$ acts as an implicit policy optimizer, and the estimation of $\hat{Q}(s,a)$ by regressing to $r(s,a)+\gamma V(s')$ is akin to a policy evaluation step, completing the connection to generalized policy iteration. \subsection{Connections to duality in imitation learning} \label{sec:connections_to_il} In this section, we first discuss the setting of offline imitation learning with expert data only -- where the agent has no access to the environment and is limited to a fixed amount of offline expert transitions. In this direction, we consolidate prior work and also propose a new algorithm arising from the \texttt{dual-V} formulation. We then discuss the off-policy imitation learning setting, where the agent has access to limited expert data that it is trying to imitate along with some suboptimal data. This setting covers both the online case (where the agent actively collects data) and the offline case (where the agent must leverage a mixed-quality dataset for imitation). We discuss prior works from the perspective of duality and show how they are limited by their assumption of coverage (off-policy replay data covering expert data) or their reliance on a particular $f$-divergence.
Finally, we propose a new method for off-policy imitation learning that relaxes the coverage assumption and works for arbitrary $f$-divergences. \subsubsection{Offline imitation learning with expert data only} We start with the \texttt{dual-Q} and \texttt{dual-V} equations and repurpose them for imitation by simply setting the reward function to be uniformly 0 across the state-action space and setting the regularization distribution to be the expert distribution, $d^O(s,a)=d^E(s,a)$. We use $\mathcal{T}^\pi_0$ and $\mathcal{T}_0$ to denote the backup operators with zero reward. \texttt{dual-Q} (offline imitation) takes the following form: \texttt{dual-Q}~\text{(offline imitation with expert-only data)}: \begin{align} \label{eq:dual-Q-imit} &\max_{\pi}\min_{Q} (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] +\mathbb{E}_{s,a\sim d^E}\left[f^*\left(\left[\mathcal{T}^\pi_0 Q(s,a)-Q(s,a)\right]/ \alpha\right)\right]. \end{align} Interestingly, this reduction directly leads us to an equivalence with the imitation learning method IQ-Learn~\citep{garg2021iq}, which was derived using a change of variables in the form of an inverse backup operator. \cite{garg2021iq} uses this method in the online imitation learning setting with an additional regularization which we suggest is unprincipled (as only expert data samples can be leveraged in the above optimization), a point also raised by others~\citep{anonymous2023lsiq}; we provide a fix in Section~\ref{sec:new_il_method}. \begin{restatable}[]{lemma}{iqlearn} \label{lemma:iqlearn} $\texttt{dual-Q}$ is equivalent to IQ-Learn when $r(s,a)=0~~\forall ~(\mathcal{S},\mathcal{A})$ and $d^O(s,a)=d^E(s,a)$. \end{restatable} Utilizing Lemma~\ref{lemma:iqlearn}, we also provide a new observation in the form of Corollary~\ref{corollary:ibc} below: we find that Implicit Behavior Cloning~\citep{florence2022implicit}, a method that performs behavior cloning using a contrastive objective, is a special case of \texttt{dual-Q} for offline imitation learning.
\begin{restatable}[]{corollary}{ibc} \label{corollary:ibc} \texttt{dual-Q} is equivalent to Implicit Behavior Cloning~\citep{florence2022implicit} when $r(s,a)=0~~\forall ~(\mathcal{S},\mathcal{A})$, $d^O(s,a)=d^E(s,a)$, and $f$ is set to be the total variation divergence. \end{restatable} \textbf{A new method for offline imitation learning: } Analogous to \texttt{dual-Q} (offline imitation), we can leverage the \texttt{dual-V} (offline imitation) setting, which avoids the min-max optimization: \texttt{dual-V}~\text{(offline imitation from expert-only data)}: \begin{equation} \label{eq:dual-V-imit} \min_{V(s)} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right] +\mathbb{E}_{s,a\sim d^E}\left[f^*\left(\left[\mathcal{T}_0 V(s,a)-V(s)\right]/ \alpha\right)\right]. \end{equation} Leveraging the \texttt{dual-V} variant, we obtain a new method for offline imitation learning, which we refer to as \texttt{IVLearn}; we leave its exploration for future work. \subsubsection{Off-policy imitation learning (under coverage assumption)} The \texttt{dual-Q} and \texttt{dual-V} frameworks do not naturally extend to off-policy imitation learning. To remedy this, prior methods have relied on careful selection of an $f$-divergence that allows them to arrive at an off-policy objective for imitation learning~\citep{zhu2020off,hoshino2022opirl,ma2022smodice}. Concretely, prior methods consider the reverse KL divergence as the $f$-divergence. First, we show that it is straightforward to see why choosing the $f$-divergence to be the reverse KL makes it possible to obtain an off-policy objective for imitation learning in the dual framework. We start with the Q-CoP for imitation learning using the reverse KL divergence ($r(s,a)=0~~\text{and}~~d^O=d^E$): \begin{align} \label{eq:primal_imitation_kl} &\max_{d(s,a)\ge0,\pi(a|s)} -\kl{d(s,a)}{d^E(s,a)} \nonumber\\ &\text{s.t}~~d(s,a)=(1-\gamma)\rho_0(s)\pi(a|s)+\gamma \pi(a|s)\sum_{s',a'} d(s',a')p(s|s',a').
\end{align} \textit{Under the assumption that the replay buffer visitation (denoted by $d^R$) covers the expert visitation ($d^R>0$ wherever $d^E>0$)}~\citep{ma2022smodice}, which we refer to as the \textbf{coverage assumption}, the reverse KL divergence can be expanded as follows: \begin{align} \kl{d(s,a)}{d^E(s,a)} &= \mathbb{E}_{s,a\sim d(s,a)}\left[\log \frac{d(s,a)}{d^E(s,a)}\right]=\mathbb{E}_{s,a\sim d(s,a)}\left[\log \frac{d(s,a)}{d^E(s,a)}\frac{d^R(s,a)}{d^R(s,a)}\right]\\ &= \mathbb{E}_{s,a\sim d(s,a)}\left[\log \frac{d(s,a)}{d^R(s,a)}+\log \frac{d^R(s,a)}{d^E(s,a)}\right]\\ &= \mathbb{E}_{s,a\sim d(s,a)}\left[\log \frac{d^R(s,a)}{d^E(s,a)}\right]+\kl{d(s,a)}{d^R(s,a)}. \end{align} Hence the Q-CoP can now be written as: \begin{align} \max_{d(s,a)\ge0,\pi(a|s)} \mathbb{E}_{s,a\sim d(s,a)}\left[-\log \frac{d^R(s,a)}{d^E(s,a)}\right]-\kl{d(s,a)}{d^R(s,a)}\\ \text{s.t}~~d(s,a)=(1-\gamma)\rho_0(s)\pi(a|s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a')\pi(a|s). \end{align} In the optimization above, the first term plays the role of the reward function and the second term plays the role of the divergence regularization with a new distribution $d^R(s,a)$ in the original regularized RL objective (Eq.~\ref{eq:reg_rl}). Hence we can obtain the respective $\texttt{dual-Q}$ and $\texttt{dual-V}$ objectives for off-policy imitation learning using the reward function $r^{imit}(s,a)=-\log \frac{d^R(s,a)}{d^E(s,a)}$ and the new regularization distribution $d^R(s,a)$. Using $\mathcal{T}^\pi_{r^{imit}}$ and $\mathcal{T}_{r^{imit}}$ to denote backup operators under the new reward function $r^{imit}$, we have $\texttt{dual-Q}$ for off-policy imitation (coverage assumption): \begin{align} &\max_{\pi(a|s)}\min_{Q(s,a)} (1-\gamma)\mathbb{E}_{\rho_0(s),\pi(a|s)}\left[Q(s,a)\right]+\mathbb{E}_{s,a\sim d^R}\left[f^*\left(\mathcal{T}^\pi_{r^{imit}}Q(s,a)-Q(s,a)\right)\right].
\end{align} This choice of KL divergence leads to a reduction of \texttt{dual-Q} to another off-policy imitation learning method, OPOLO~\citep{zhu2020off}, which we formalize in the lemma below: \begin{restatable}[]{lemma}{opolo} $\texttt{dual-Q}$ for off-policy imitation learning reduces to OPOLO~\citep{zhu2020off}, with the $f$-divergence set to the reverse KL divergence, when $r(s,a)=0~\forall \mathcal{S},\mathcal{A}$, $d^O=d^E$, and under the assumption that the replay data distribution covers the expert data distribution. \end{restatable} Analogously, we have $\texttt{dual-V}$ for off-policy imitation (coverage assumption): \begin{equation} \min_{V(s)} (1-\gamma)\mathbb{E}_{\rho_0(s)}\left[V(s)\right]+\mathbb{E}_{s,a\sim d^R}\left[f^*\left(\mathcal{T}_{r^{imit}}V(s,a)-V(s)\right)\right]. \end{equation} We note that the \texttt{dual-V} framework for off-policy imitation learning under the coverage assumption was studied in the imitation learning work SMODICE~\citep{ma2022smodice}. \section{A Method for Imitation Learning from Arbitrary Experience} \label{sec:new_il_method} In this section, we propose a new approach that relaxes the coverage assumption discussed above for imitation and allows for arbitrary $f$-divergence matching in the dual framework. We refer to our method as \texttt{ReCOIL} (RElaxed Coverage for Off-policy Imitation Learning). We consider an alternate optimization objective that fits well in the dual RL framework by changing the regularization to be between mixture distributions. Concretely, we are interested in the following $f$-divergence regularization: \begin{equation} \f{\beta d(s,a)+(1-\beta)d^R(s,a)}{\beta d^E(s,a)+(1-\beta)d^R(s,a)}. \end{equation} Minimizing this divergence alone is a valid imitation learning objective~\citep{ghasemipour2020divergence,ke2021imitation,ni2021f,sikchi2022ranking}, since the global minimum of this objective is achieved at $d=d^E$. Let $d_{mix}^R :=\beta d(s,a)+(1-\beta)d^R(s,a)$ and $d_{mix}^{E,R} := \beta d^E(s,a)+(1-\beta)d^R(s,a)$.
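Using KL as the concrete $f$-divergence, the claim above can be sanity-checked numerically. A minimal sketch with invented discrete visitation distributions, which also shows why the coverage-assumption reward of the previous subsection is ill-defined without coverage:

```python
import numpy as np

beta = 0.5
# Invented discrete visitation distributions over four state-action pairs.
dE = np.array([0.7, 0.3, 0.0, 0.0])   # expert visitation
dR = np.array([0.0, 0.2, 0.5, 0.3])   # replay buffer: does NOT cover dE's first atom

def kl(p, q):
    """KL(p || q) over the support of p (f-divergence with f(x) = x log x)."""
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

# The coverage-assumption reward -log(dR/dE) is infinite wherever dE > 0 but dR = 0.
assert dE[0] > 0 and dR[0] == 0

# The mixture divergence is finite for any candidate d, and zero exactly at d = dE.
mix = lambda d: beta * d + (1 - beta) * dR
d_candidate = np.full(4, 0.25)
assert np.isfinite(kl(mix(d_candidate), mix(dE)))
assert kl(mix(dE), mix(dE)) == 0.0
```

Here `mix(dE)` is exactly $d_{mix}^{E,R}$: mixing in the replay distribution guarantees the denominator has support wherever either $d^R$ or $d^E$ does.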
The modified Q-CoP imitation learning objective is given by: \begin{align} \label{eq:primal_imitation_f_mixture} &\max_{d(s,a)\ge0,\pi(a|s)} -\f{d^R_{mix}(s,a)}{d^{E,R}_{mix}(s,a)} \nonumber\\ &\text{s.t}~~d(s,a)=(1-\gamma)\rho_0(s)\pi(a|s)+\gamma \pi(a|s)\sum_{s',a'} d(s',a')p(s|s',a'). \end{align} The V-CoP for this setting can be specified similarly, and taking duals results in two variants of \texttt{ReCOIL} that can be leveraged for off-policy imitation learning with arbitrary data, as formalized by Lemma~\ref{thm:recoilq} and Lemma~\ref{thm:recoilv} below. \begin{restatable}[]{lemma}{closerq} \label{thm:recoilq} (\textbf{$\texttt{dual-Q}~$ for off-policy imitation (relaxed coverage assumption)}) Imitation learning using off-policy data can be solved by optimizing the following modified dual objective for Q-CoP, with $r(s,a)=0~\forall \mathcal{S},\mathcal{A}$ and the $f$-divergence considered between the distributions $d_{mix}^R :=\beta d(s,a)+(1-\beta)d^R(s,a)$ and $d_{mix}^{E,R} := \beta d^E(s,a)+(1-\beta)d^R(s,a)$: \begin{align} &\max_{\pi(a|s)}\min_{Q(s,a)} \beta (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] +\mathbb{E}_{s,a\sim d_{mix}^{E,R}}\left[f^*_p\left(\mathcal{T}^\pi_0 Q(s,a)-Q(s,a)\right)\right]\nonumber \\ &~~~~~~~~~~~~~~~~~~ -(1-\beta) \mathbb{E}_{s,a\sim d^R}\left[\mathcal{T}^\pi_0 Q(s,a)-Q(s,a)\right] \end{align} \end{restatable} Analogously, we have the following result for off-policy imitation learning in the V-space.
\begin{restatable}[]{lemma}{closerv} \label{thm:recoilv} (\textbf{$\texttt{dual-V}~$ for off-policy imitation (relaxed coverage assumption)}) Imitation learning using off-policy data can be solved by optimizing the following modified dual objective for V-CoP, with $r(s,a)=0~\forall \mathcal{S},\mathcal{A}$ and the $f$-divergence considered between the distributions $d_{mix}^R :=\beta d(s,a)+(1-\beta)d^R(s,a)$ and $d_{mix}^{E,R} := \beta d^E(s,a)+(1-\beta)d^R(s,a)$: \begin{align} &\min_{V(s)} \beta (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right] +\mathbb{E}_{s,a\sim d_{mix}^{E,R}}\left[f^*_p\left(\mathcal{T}_0V(s,a)-V(s)\right)\right]\nonumber\\ &~~~~~~~~~~~~~~~~~~ - (1-\beta) \mathbb{E}_{s,a\sim d^R}\left[\mathcal{T}_0V(s,a)-V(s)\right] \end{align} \end{restatable} The methods proposed above avoid a pitfall of previous coverage-based methods, which require learning a reward function $\left(\log(\frac{d^{R}(s,a)}{d^E(s,a)})\right)$ that is ill-defined on state-action pairs with zero expert support. Our method also implicitly learns the distribution ratio $\frac{\beta d(s,a)+(1-\beta)d^R(s,a)}{\beta d^E(s,a)+(1-\beta)d^R(s,a)}$ as ${f^*}'(y(s,a))$, which is well-defined for all replay-buffer and expert transitions ($d^R>0$ or $d^E>0$) that the policy is trained on. \section{Related Work} \textbf{Developments in deep model-free reinforcement learning methods: } Off-policy reinforcement learning methods promise a way to utilize data collected by an arbitrary behavior policy to aid in learning the optimal policy for solving the task, and are thus advantageous over on-policy methods. This promise falls short because learning off-policy with function approximators presents a number of issues such as overestimation, training instability, and various biases~\citep{thrun1993issues,fu2019diagnosing,fujimoto2018addressing,kumar2019stabilizing}.
Previous works have approached fixing these issues plaguing deep reinforcement learning using methods like double-Q learning~\citep{hasselt2010double}, target networks~\citep{mnih2013playing}, and emphatic weightings~\citep{jiang2021emphatic,imani2018off}, among other approaches. These approaches still do not carry over well to the offline RL setting, where overestimation bias is more likely since, unlike in online RL, the agent cannot correct its overestimation by deploying the current policy. A number of solutions for controlling overestimation exist in prior work -- $f$-divergence regularization to the training distribution~\citep{wu2019behavior,nair2020awac,fujimoto2019off,fujimoto2021minimalist}, support regularization~\citep{singh2022offline}, implicit maximization~\citep{kostrikov2021offline}, learning a Q-function that penalizes OOD actions~\citep{kumar2020conservative}, and learning in a pessimistic MDP~\citep{kidambi2020morel,yu2020mopo}. A recent method, X-QL~\citep{garg2023extreme}, uses implicit maximization to obtain significant gains in learning performance across online and offline RL settings. The issue of overestimation also affects the most practical use of off-policy algorithms in domains like robotics and healthcare -- aiding online learning with previously collected off-policy data. A number of prior works have investigated this issue and proposed fixes~\citep{xie2021policy,uchendu2022jump,kostrikov2021offline,nair2020awac}. In contrast, our work points out the benefits of an already existing framework for reinforcement learning that is overlooked in practical applications, which can naturally address the problem of overestimation by using off-policy data to infer the current policy's visitation and update the current policy using a consistent on-policy gradient estimator, and which can easily incorporate learning a pessimistic policy with $f$-divergence regularization.
\cite{nachum2020reinforcement} proposed a framework for RL, which we refer to as dual RL, that treats RL as a convex optimization with linear constraints and optimizes its unconstrained dual form. This framework fixes the issue of distribution mismatch in dynamic-programming-based off-policy reinforcement learning and provides a principled solution. Prior works have proposed fixing the distribution shift by using importance weights~\citep{precup2000eligibility}, which can lead to high-variance policy gradients, or by ignoring the distribution mismatch completely~\citep{haarnoja2018soft,fujimoto2018addressing}. It should come as no surprise that some of the most performant RL algorithms in this space are dual methods (online RL~\citep{nachum2019algaedice}, offline RL~\citep{lee2021optidice}). \textbf{Developments in deep model-free imitation learning methods: } Imitation learning has benefited greatly from using off-policy data to improve learning performance~\citep{kostrikov2018discriminator,ni2021f,sikchi2022ranking,zhu2020off}. Often, replacing the on-policy expectation common in most Inverse RL formulations~\citep{ziebart2008maximum,swamy2021moments} by an expectation under off-policy samples, while unprincipled, has led to large gains in sample efficiency~\citep{kostrikov2018discriminator}. Principled offline imitation learning using expert data only is a simple by-product of the dual RL framework. Previous works have proposed a solution in the dual RL space to this problem based on an unrealistic coverage assumption~\citep{ma2022smodice,zhu2020off, kim2022demodice}, restricting themselves to matching a particular $f$-divergence. In this work, we relax this assumption and allow for generalizing to all $f$-divergences, presenting a principled off-policy approach to imitation. Our work also presents an approach that allows for single-player, non-adversarial optimization for imitation learning, in contrast to previous work~\citep{kostrikov2019imitation}.
\section{Experimental Analysis} \label{sec:result} Our experimental evaluation aims to illustrate the benefits of the dual RL framework and to analyze our proposed method for off-policy imitation learning. In the RL setting, we first present a case study on the failure of ADP-based methods like SAC~\citep{haarnoja2018soft} when bootstrapped with additional (helpful) data. This setting is what motivates the use of off-policy algorithms in the first place and is invaluable in domains like robotics~\citep{uchendu2022jump,nair2020awac}. Our results validate the benefit of utilizing the dual RL framework for off-policy learning. \begin{wrapfigure}{r}{0.4\textwidth} \begin{center} \includegraphics[width=1.0\linewidth]{figures/off-policy/off-policy-data-value.pdf} \\ \includegraphics[width=1.0\linewidth]{figures/off-policy/off-policy-data-value-legend.pdf} \end{center} \caption{SAC and SACfD suffer from overestimation when off-policy data is added to the replay buffer. We hypothesize that this causes instabilities during training, while dual-Q exhibits no overestimation.} \label{fig:off-policy-reason} \end{wrapfigure} Then, we investigate the ability of our proposed imitation learning algorithm \texttt{ReCOIL} to estimate the agent's visitation distribution from off-policy data, an important step for successful imitation. We show that methods that learn from expert data alone, or that rely on a coverage assumption, are unable to accurately infer the visitation distribution of the agent's evaluation policy. Finally, we benchmark \texttt{ReCOIL} against recent methods for learning from mixed-quality data for offline imitation learning. Our results show improved performance over baselines, and our proposed algorithm enjoys the advantages of being non-adversarial and not requiring a discriminator. Experimental and implementation details can be found in Appendix~\ref{ap:experiment_details}.
\subsection{The failure of ADP-based traditional off-policy algorithms} Our experiments with the popular off-policy method SAC~\citep{haarnoja2018soft} reveal its brittleness to off-policy data. At the beginning of training, each learning agent is provided with expert or human-demonstrated trajectories for completing the task. We add 1000 transitions from this dataset to the replay buffer for the off-policy algorithm to bootstrap from. SAC is able to leverage this helpful data and shows improved performance in Hopper-v2, where the action dimension is small. As the action dimension increases, the brittleness of SAC becomes more apparent (see the SAC+off-policy-data and SACfD plots in Figure~\ref{fig:off-policy-main}). We hypothesize that this failure in the online RL setting is primarily due to training instabilities caused by TD-backups, which result in overestimation in regions that the agent's current policy does not visit. In Figure~\ref{fig:off-policy-reason}, we observe that overestimation indeed occurs in environments with larger action dimensions; these overestimates take longer to be corrected and in the process destabilize training. Figure~\ref{fig:off-policy-main} shows that the dual-RL method (AlgaeDICE) is able to leverage off-policy data to improve learning performance without any signs of destabilization. This can be attributed to the distribution correction estimation property of dual RL methods, which updates the current policy using the corrected on-policy visitation~\citep{nachum2019algaedice}. Note that we set the temperature $\alpha$ to a low value (0.001) to disentangle the effect of pessimism, which is an alternative way to avoid overestimation.
\begin{figure*}[h] \begin{center} \includegraphics[width=0.9\linewidth]{figures/off-policy/off-policy-data.pdf} \\ \vspace{-2.0mm} \includegraphics[width=1.0\linewidth]{figures/off-policy/off-policy-data-legend.pdf} \end{center} \caption{Despite the promise of off-policy methods, current methods based on ADP such as SAC fail when the dimension of the action space, denoted by A, increases, even when helpful data is added to their replay buffer. On the other hand, dual-Q methods are able to leverage off-policy data to increase their learning performance.} \label{fig:off-policy-main} \end{figure*} \subsection{Does \texttt{ReCOIL} allow for better estimation of agent visitation distribution?} We consider the proposed \texttt{ReCOIL} method and investigate its ability to estimate distribution ratios correctly. We consider the inner optimization for \texttt{ReCOIL-Q}: \begin{align} &\min_{Q(s,a)} \beta (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] +\mathbb{E}_{s,a\sim d_{mix}^{E,R}}\left[f^*_p( \mathcal{T}_0^\pi Q(s,a)-Q(s,a))\right]\nonumber \\ &~~~~~~~~~~~~~~~~~~ - (1-\beta) \mathbb{E}_{s,a\sim d^R}\left[ \mathcal{T}_0^\pi Q(s,a)-Q(s,a)\right] \end{align} The following holds for the inner optimization for \texttt{ReCOIL-Q} when $Q$ is optimized (see Appendix~\ref{ap:closer} for the proof): \begin{equation} {f^*_p}'(\mathcal{T}_0^\pi Q(s,a)-Q(s,a))=\frac{\beta d^\pi(s,a) + (1-\beta)d^R(s,a)}{\beta d^E(s,a)+(1-\beta)d^R(s,a)}. \label{eq:mixture_density_ratio} \end{equation} Thus, given the visitation distribution of the replay buffer $d^R$, the expert $d^E$, and the policy $\pi$, the inner optimization implicitly learns the distribution ratio in Eq~\ref{eq:mixture_density_ratio}, allowing us to infer the agent visitation $d^\pi$.
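To make the mixture ratio concrete, here is a tabular sketch (with made-up visitation numbers, not the paper's experiments) showing that once this ratio is known, $d^\pi$ can be inverted from it using only $d^E$, $d^R$, and $\beta$, even at states where the expert has no support:

```python
import numpy as np

# Hypothetical tabular visitation distributions over 5 states (illustrative only).
d_pi = np.array([0.1, 0.4, 0.2, 0.2, 0.1])  # agent visitation (unknown in practice)
d_E  = np.array([0.0, 0.1, 0.1, 0.3, 0.5])  # expert visitation (no support at state 0)
d_R  = np.array([0.3, 0.3, 0.2, 0.1, 0.1])  # replay-buffer visitation
beta = 0.5

# The ratio the inner ReCOIL-Q optimization implicitly learns:
# (beta*d_pi + (1-beta)*d_R) / (beta*d_E + (1-beta)*d_R)
ratio = (beta * d_pi + (1 - beta) * d_R) / (beta * d_E + (1 - beta) * d_R)

# Given d_E, d_R, and beta, the ratio can be inverted for the agent visitation.
d_pi_recovered = (ratio * (beta * d_E + (1 - beta) * d_R) - (1 - beta) * d_R) / beta
assert np.allclose(d_pi_recovered, d_pi)
```

Note that the mixture ratio stays finite at state 0, where a pure $d^\pi/d^E$ ratio would be ill-defined; replay-buffer coverage is enough.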
Estimating the agent visitation distribution is at the heart of all dual-RL methods and allows for (a) off-policy evaluation, which requires inferring $d^\pi$; (b) off-policy optimization, where we are interested in estimating the current policy visitation $d^\pi$ in order to improve it; and (c) imitation learning, where we are interested in the ratio $d^\pi/d^E$ in order to bring the agent closer to the expert. We consider two settings in our experiments: (1) a 2-timestep MDP (Fig~\ref{fig:distribution_ratio_estimation1}) where the agent starts from state $s_0$ and transitions to one of the states $\{s_1,s_2,s_3,s_4,s_5\}$, which are absorbing. In this setting, the replay buffer perfectly covers the unknown ground-truth agent visitation. (2) a 2-D gridworld (Fig~\ref{fig:distribution_ratio_estimation2}) where the agent can move cardinally and the replay buffer distribution does not cover the unknown ground-truth agent visitation. \begin{figure}[h] \begin{center} \includegraphics[width=0.9\linewidth]{figures/density_ratio_estimation/perfect_coverage_1d.pdf} \\ \vspace{-2.0mm} \end{center} \caption{The replay buffer distribution covers the agent policy visitation distribution. Using \texttt{ReCOIL}, we are able to infer the agent policy visitation perfectly, whereas methods that rely only on expert data, or on the replay data with a coverage assumption, fail. Results are averaged over 100 seeds.} \label{fig:distribution_ratio_estimation1} \end{figure} Our results (Figures~\ref{fig:distribution_ratio_estimation1} and \ref{fig:distribution_ratio_estimation2}) demonstrate that in the perfect-coverage setting, \texttt{ReCOIL} is able to infer the agent policy visitation perfectly, and in the case of imperfect coverage it significantly outperforms the other methods (IQLearn and SMODICE). Note that we modify SMODICE to incorporate state-action expert data rather than relying on state-only expert data.
IQLearn does not leverage replay data, relying only on expert data to infer the agent visitation. SMODICE's reward function $-\log \frac{d^R(s,a)}{d^E(s,a)}$, arising from its coverage assumption, is ill-defined in parts of the state space where the expert has no support, leading to poor downstream density ratio estimation. \begin{figure}[h] \begin{center} \includegraphics[width=0.9\linewidth]{figures/density_ratio_estimation/imperfect_coverage_2d.pdf} \\ \vspace{-2.0mm} \end{center} \caption{The replay buffer consists of data that visits states near the initial state (0,0), a setting commonly observed when training RL agents. We estimate the agent's policy visitation and observe that \texttt{ReCOIL} outperforms both methods that rely on expert data only and methods that use the replay data with a coverage assumption.} \label{fig:distribution_ratio_estimation2} \end{figure} \subsection{Benchmarking performance of ReCOIL on MuJoCo tasks} For these experiments, we use an offline dataset of environment interactions from D4RL~\citep{fu2020d4rl}. We consider the following MuJoCo environments: Hopper, Walker2d, HalfCheetah, and Ant. We consider three different dataset compositions for the offline dataset: `random+expert', `medium+expert', and `random+few-expert'. The first two datasets consist of a few expert trajectories ($\le$200) and a large number of suboptimal transitions (1 million); random+few-expert has only $\le 30$ expert trajectories. The learning agent also has access to 1000 expert transitions. We compare against recent methods for offline imitation learning with suboptimal data: RCE~\citep{eysenbach2021replacing}, SMODICE~\citep{ma2022smodice}, and ORIL~\citep{zolna2020offline}. We do not compare to DEMODICE~\citep{kim2022demodice}, as SMODICE was shown to be competitive with it in~\cite{ma2022smodice}. We focus on \texttt{ReCOIL-V} due to its favorable property of being a single-player, non-adversarial optimization.
Note that, contrary to SMODICE~\citep{ma2022smodice}, which is based on the coverage assumption and requires learning a discriminator, \texttt{ReCOIL} learns just a single parametric $V$ function. Table~\ref{table:offline_rl_results} empirically validates that \texttt{ReCOIL-V} outperforms or is competitive with baselines in all environments. SMODICE shows poor performance when the dataset has low expert coverage (random+few-expert) or when the discriminator is unable to reliably distinguish expert from non-expert data (medium+expert). \begin{table*}[h] \centering \footnotesize \begin{tabular}{c|c|c|c|c|c} \toprule Dataset & Env & RCE & ORIL & SMODICE & ReCOIL \\ \midrule \multirow{4}{*}{random+expert}& hopper&51.41$\pm$38.63&73.93$\pm$11.06&\textbf{101.61$\pm$7.69}&95.04$\pm$4.48\\ & halfcheetah&64.19$\pm$11.06&60.49$\pm$3.53& 80.16$\pm$7.30&\textbf{84.10$\pm$4.72}\\ & walker2d&20.90$\pm$26.80&2.86$\pm$3.39&\textbf{105.86$\pm$3.47}&100.84$\pm$6.34\\ & ant&105.38$\pm$14.15&73.67$\pm$12.69&\textbf{126.78$\pm$5.12}&\textbf{126.74$\pm$4.63}\\ \midrule \multirow{4}{*}{random+few-expert}& hopper&25.31$\pm$18.97&42.04$\pm$13.76&60.11$\pm$18.28&\textbf{79.44$\pm$13.53}\\ & halfcheetah& 2.99$\pm$1.07&2.84$\pm$5.52& 2.28$\pm$0.62& \textbf{3.90$\pm$0.66}\\ & walker2d&40.49$\pm$26.52& 3.22$\pm$3.29&\textbf{107.18$\pm$1.87}&83.23$\pm$19.00\\ & ant&67.62$\pm$15.81&25.41$\pm$8.58&-6.10$\pm$7.85&\textbf{94.25$\pm$8.30}\\ \midrule \multirow{4}{*}{medium+expert}& hopper&29.37$\pm$3.39&61.35$\pm$7.91&54.77$\pm$6.2&\textbf{63.26$\pm$4.99}\\ & halfcheetah&61.14$\pm$18.31& 57.15$\pm$3.49&58.01$\pm$3.12&\textbf{84.01$\pm$3.40}\\ & walker2d&19.84$\pm$2.71&4.14$\pm$2.69&1.2$\pm$1.57&\textbf{91.51$\pm$16.50}\\ & ant&81.18$\pm$29.73&103.42$\pm$8.43&102.82$\pm$4.63&\textbf{119.94$\pm$4.52} \\ \bottomrule \end{tabular} \caption{Normalized scores for \texttt{ReCOIL} on the different D4RL datasets along with 1000 given expert transitions compared to different offline IL
baselines. \texttt{ReCOIL} improves over baselines when the offline dataset has low coverage of the expert or when it is more challenging to distinguish expert from suboptimal transitions.} \label{table:offline_rl_results} \end{table*} \section{Conclusion} \label{sec:conclusion} Dual reinforcement learning algorithms have great potential for developing performant deep RL methods. Indeed, we show that a number of recently developed methods for RL (Extreme Q-learning, Conservative Q-learning) and imitation learning (OPOLO, IQLearn, and Implicit Behavior Cloning) can be unified through the lens of the existing framework of dual RL. Our insight calls for these methods to be studied under this unified lens in order to identify the components that contribute to their success. As a by-product of this unification, we identify novel insights that allow us to propose: a family of implicit maximizers that avoids the overestimation problem of actor-critic learning by relying on in-distribution maximization; a single-player, non-adversarial offline imitation learning method from expert data only; and a general off-policy imitation learning method from arbitrary data that relaxes the restrictive coverage assumption made by prior work. We analyze the off-policy imitation learning method through experiments in continuous control and demonstrate superior performance. \section{Appendix} \label{ap:theory} \subsection{A Review of Dual-RL} \label{ap:dual_rl_review} In this section, we aim to give a self-contained review of Dual Reinforcement Learning. For a more thorough treatment, refer to~\citep{nachum2020reinforcement}. \subsubsection{Convex conjugates and $f$-divergence} We first review the basics of duality in reinforcement learning. Let $f:(0,\infty)\to \mathbb{R}$ be a convex function.
The convex conjugate $f^*$ of $f$ is defined by: \begin{equation} \label{eq:cx_conjugate_def} f^*(y)= \sup_{x\in\mathbb{R}}[\langle x,y \rangle-f(x)] \end{equation} where $\langle \cdot,\cdot \rangle$ denotes the dot product. The convex conjugate has the important properties that $f^*$ is also convex and that the convex conjugate of $f^*$ recovers the original function $f$. Going forward, we will be dealing extensively with $f$-divergences. Informally, $f$-divergences~\cite{} are a measure of distance between two probability distributions. Here is a more formal definition: let $f:(0,\infty)\to \mathbb{R}$ be a convex lower semi-continuous function with $f(1)=0$, and let $P$ and $Q$ be two probability distributions; then the $f$-divergence is defined as: \begin{equation} \f{P}{Q}=\mathbb{E}_{z\sim Q}\left[f\left(\frac{P(z)}{Q(z)}\right)\right] \end{equation} Now, we carry out a simple exercise in finding the convex conjugate for this $f$-divergence (where $f$ is a convex function), which also yields the well-known variational representation of $f$-divergences; we will use it frequently in the subsequent sections. Using the definition of the convex conjugate and the fact that the convex conjugate of $f^*$ gives back $f$, we have: \begin{align} \f{P}{Q} &= \mathbb{E}_{z\sim Q}\left[f\left(\frac{P(z)}{Q(z)}\right)\right]\\ &= \sup_y \mathbb{E}_{z\sim Q} \left[\frac{P(z)}{Q(z)} y(z)\right]-\mathbb{E}_{z\sim Q}[f^*(y(z))]\\ \label{eq:variational_f_div} &= \sup_{y:\mathcal{Z}\to\mathbb{R}}\mathbb{E}_{z\sim P}[y(z)]-\mathbb{E}_{z\sim Q}[f^*(y(z))] \end{align} Thus Eq~\ref{eq:variational_f_div} gives the variational form of the $f$-divergence. Although deriving the analytical form of $f^*$ is not complicated for most common $f$-divergences (set the derivative in Eq~\ref{eq:cx_conjugate_def} to zero and solve for the stationary point), it is useful to list some common $f$-divergences and their conjugates $f^*$.
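As a quick numerical sanity check of Eq~\ref{eq:cx_conjugate_def} (a sketch of ours, not tied to any implementation in the paper), the conjugate of $f(t)=t\log t$ can be approximated by brute-force maximization over a grid and compared against its closed form $f^*(y)=e^{y-1}$:

```python
import numpy as np

# f(t) = t*log(t): the generator labeled "Reverse KL" in the divergence table.
def f(t):
    return t * np.log(t)

# Brute-force conjugate: f*(y) = sup_x [x*y - f(x)], approximated on a grid.
xs = np.linspace(1e-6, 50.0, 500_000)

def f_star_numeric(y):
    return np.max(xs * y - f(xs))

# Closed form: setting the derivative y - log(x) - 1 to zero gives x* = e^{y-1},
# and plugging back in yields f*(y) = e^{y-1}.
for y in [-1.0, 0.0, 0.5, 1.0]:
    assert abs(f_star_numeric(y) - np.exp(y - 1.0)) < 1e-3
```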
We also note an important relation between $f$ and $f^*$: $(f^*)'=(f')^{-1}$, where $'$ denotes the first derivative. \begin{table}[h] \small \centering \caption{\small {List of $f$-divergences and their convex conjugates}} % \label{tbl:div} \vskip5pt \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|c|c} Divergence & $f(t)$ & $f^*(u)$ \\ \hline Forward KL & $-\log{t}$ & $-1 -\log(-u)$ \\ Reverse KL & $t\log{t}$ & $e^{(u-1)}$ \\ Squared Hellinger & $(\sqrt{t} - 1)^2$ & $\frac{u}{1-u}$ \\ Pearson $\chi ^{2}$ & $(t-1)^2$ & $u + \frac{u^2}{4}$ \\ Total variation & $\frac{1}{2} | t-1| $ & $u$ \\ Jensen-Shannon & $-(t+1)\log(\frac{t+1}{2}) + t \log{t}$ & $-\log{(2- e^u)}$ \\ \hline \end{tabular} \end{table} \subsubsection{Duality in Reinforcement Learning} \label{ap:dual_rl_intro} Duality in reinforcement learning allows a different perspective for solving RL problems, often giving off-policy alternatives to typical on-policy approaches. We consider the regularized policy optimization objective below: \begin{equation} \label{eq:reg_rl} \max_\pi \mathbb{E}_{d^\pi(s,a)}[r(s,a)] - \alpha \f{d^\pi(s,a)}{d^O(s,a)} \end{equation} where $d^O$ is a known state-action visitation distribution. Optimizing over $\pi$, at first sight, gives us a non-convex problem, further complicating the analysis. We can rewrite the problem as a linear program (LP) by optimizing over \textit{valid} state-action visitations, adding a constraint to the optimization: \begin{align} \label{eq:primal_rl} \max_{\pi,d\ge0} \mathbb{E}_{d(s,a)}[r(s,a)]-\alpha\f{d(s,a)}{d^O(s,a)}\\ \text{s.t.}~~d(s,a)=(1-\gamma)d_0(s)\pi(a|s)+\gamma \pi(a|s)\sum_{s',a'} d(s',a')p(s|s',a') \end{align} where $\alpha$ allows us to weigh policy improvement against the conservatism of staying close to the state-action distribution $d^O$. A careful reader may notice that the above problem is overconstrained.
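The relation $(f^*)'=(f')^{-1}$ can be checked numerically against the table, here for the Pearson $\chi^2$ row (an illustrative sketch of ours):

```python
import numpy as np

# Pearson chi^2 row of the table: f(t) = (t-1)^2, f*(u) = u + u^2/4.
f_prime     = lambda t: 2.0 * (t - 1.0)   # f'
f_prime_inv = lambda u: u / 2.0 + 1.0     # (f')^{-1}
fstar       = lambda u: u + u**2 / 4.0    # f* from the table

# Numeric derivative of f* via central differences.
def fstar_prime(u, h=1e-6):
    return (fstar(u + h) - fstar(u - h)) / (2 * h)

us = np.linspace(-2.0, 2.0, 9)
# (f*)' agrees with (f')^{-1} on the grid.
assert np.allclose([fstar_prime(u) for u in us], f_prime_inv(us), atol=1e-5)
# Sanity: (f')^{-1} really inverts f'.
assert np.allclose(f_prime_inv(f_prime(np.array([0.5, 1.0, 3.0]))), [0.5, 1.0, 3.0])
```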
The solution to the inner maximization with respect to $d$ is uniquely determined by the $|\mathcal{S}|\times |\mathcal{A}|$ constraints of the formulation. That is, using only the constraints, the inner optimization uniquely determines the visitation $d^\pi$, the state-action visitation of policy $\pi$, independently of the objective being optimized in Eq~\ref{eq:primal_rl}. The gradient with respect to the policy $\pi$ when $d$ is optimized can be shown to be equivalent to the on-policy policy gradient (see Section 5.1 of \cite{nachum2020reinforcement}). The constraints above are the probability flow equations that a stationary state-action distribution must satisfy. Now, how can we go about solving it? Here is where duality comes into play. First, we form the Lagrangian dual of our original optimization problem, transforming our constrained optimization into an unconstrained form. This introduces additional optimization variables, the Lagrange multipliers. \begin{align*} &\max_{\pi, d\ge0} \min_{Q(s,a)} \mathbb{E}_{s,a\sim d}\left[r(s,a)\right]-\alpha \f{d(s,a)}{d^O(s,a)}\\ &+\sum_{s,a} Q(s,a)\left((1-\gamma)d_0(s)\pi(a|s)+\gamma \pi(a|s)\sum_{s',a'} d(s',a')p(s|s',a')-d(s,a)\right) \end{align*} where the $Q(s,a)$ are the Lagrange multipliers enforcing the equality constraints.
We can now perform some algebraic manipulation on the above equation to simplify it further: \begin{align} &\max_{\pi,d\ge0} \min_{Q(s,a)} \mathbb{E}_{s,a\sim d}\left[r(s,a)\right]-\alpha \f{d(s,a)}{d^O(s,a)} \nonumber\\ &+\sum_{s,a} Q(s,a)\left((1-\gamma)d_0(s)\pi(a|s)+\gamma \pi(a|s)\sum_{s',a'} d(s',a')p(s|s',a')-d(s,a)\right) \end{align} \begin{align} &= \max_{\pi,d\ge0} \min_{Q(s,a)} (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] \nonumber \\ &+\mathbb{E}_{s,a\sim d}\left[r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right]-\alpha\f{d(s,a)}{d^O(s,a)}\\ &=\max_{\pi(a|s)}\min_{Q(s,a)}\max_{d(s,a)\ge0} (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] \nonumber \\ &+\mathbb{E}_{s,a\sim d}\left[r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right]-\alpha\f{d(s,a)}{d^O(s,a)} \end{align} \begin{align} &=\max_{\pi(a|s)}\min_{Q(s,a)}\max_{d(s,a)\ge0} \frac{(1-\gamma)}{\alpha}\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] \nonumber \\ &+\mathbb{E}_{s,a\sim d}\left[\left(r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right)/\alpha\right]-\f{d(s,a)}{d^O(s,a)}\\ &=\max_{\pi(a|s)}\min_{Q(s,a)} \frac{(1-\gamma)}{\alpha}\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] \nonumber\\ &+\mathbb{E}_{s,a\sim d^O}\left[f^*\left(\left(r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right)/\alpha\right)\right] \end{align} The last step follows from the application of Eq~\ref{eq:cx_conjugate_def} (the definition of the convex conjugate). To see this more clearly, let $y(s,a)=r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)$.
Then, \begin{align} \label{eq:f_cvx_conjugate} &\max_{d\ge0} \mathbb{E}_{s,a\sim d}\left[y(s,a)\right]-\f{d(s,a)}{d^O(s,a)}\\ &= \max_{d\ge0}\mathbb{E}_{s,a\sim d^O}\left[\frac{d(s,a)}{d^O(s,a)}y(s,a)-f\left(\frac{d(s,a)}{d^O(s,a)}\right)\right]\\ &=\mathbb{E}_{s,a\sim d^O}\left[f^*(y(s,a))\right] \end{align} Finally, the policy optimization problem is reduced to solving the following min-max optimization, which we will refer to as $\texttt{dual-Q}$: \begin{equation} \max_{\pi(a|s)}\min_{Q(s,a)} \frac{(1-\gamma)}{\alpha}\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] +\mathbb{E}_{s,a\sim d^O}\left[f^*\left(\left(r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right)/\alpha\right)\right] \end{equation} For common $f$-divergences, Table~\ref{tbl:div} lists the corresponding convex conjugates $f^*$. Also, note that the primal RL problem is convex, and by Slater's condition~\citep{boyd2004convex} we can interchange the min-max between the Lagrange variable $Q$ and the visitation distribution $d$ to a max-min. In the case of a deterministic policy and deterministic dynamics, the above optimization takes a simpler form: \begin{equation} \min_{Q(s,a)}\max_{\pi(a|s)} \frac{(1-\gamma)}{\alpha}\mathbb{E}_{d_0(s)}\left[Q(s,\pi(s))\right]+\mathbb{E}_{s,a\sim d^O}\left[f^*\left(\left(r(s,a)+\gamma Q(s',\pi(s'))-Q(s,a)\right)/\alpha\right)\right] \end{equation} We have now seen how to transform a regularized RL problem into its $\texttt{dual-Q}$ form, which uses Lagrange variables in the form of state-action functions. Interestingly, we can go further and transform the regularized RL problem using Lagrange variables ($V$) that depend only on the state, and in doing so we also get rid of the min-max optimization in $\texttt{dual-Q}$. Consider the regularized RL problem again (Eq~\ref{eq:reg_rl}). This time, we formulate the visitation constraints to depend solely on states rather than state-action pairs. We take $\alpha=1$ for the sake of exposition; interested readers can derive the result for $\alpha \neq 1$ as in the \texttt{dual-Q} case above.
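As a minimal illustration of the $\texttt{dual-Q}$ objective (our own sketch; the tabular setup, uniform policy, and variable names are assumptions, not the paper's implementation), the loss with the Pearson $\chi^2$ conjugate $f^*(u)=u+u^2/4$ can be evaluated on a batch of transitions sampled from $d^O$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular problem: Q-table, uniform policy, chi^2 conjugate.
n_states, n_actions, gamma, alpha = 4, 2, 0.9, 1.0
Q = rng.normal(size=(n_states, n_actions))
pi = np.full((n_states, n_actions), 1.0 / n_actions)   # uniform policy pi(a|s)
f_star = lambda u: u + u**2 / 4.0

def dual_q_loss(batch_s, batch_a, batch_r, batch_s2, d0_states):
    # Residual y(s,a) = r + gamma * E_{a'~pi}[Q(s',a')] - Q(s,a)
    v_next = (pi[batch_s2] * Q[batch_s2]).sum(axis=1)
    y = batch_r + gamma * v_next - Q[batch_s, batch_a]
    # (1-gamma)/alpha * E_{d_0,pi}[Q]  +  E_{d^O}[f*(y/alpha)]
    initial_term = (1 - gamma) / alpha * (pi[d0_states] * Q[d0_states]).sum(axis=1).mean()
    return initial_term + f_star(y / alpha).mean()

# A batch of (s, a, r, s') transitions standing in for samples from d^O.
s  = rng.integers(0, n_states, size=32)
a  = rng.integers(0, n_actions, size=32)
r  = rng.normal(size=32)
s2 = rng.integers(0, n_states, size=32)
loss = dual_q_loss(s, a, r, s2, d0_states=np.zeros(8, dtype=int))
assert np.isfinite(loss)
```

In practice $Q$ would be a neural network minimized over this loss while the policy maximizes it.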
Hence, we are solving the following constrained optimization problem: \begin{align} \max_{d\ge0} \mathbb{E}_{d(s,a)}[r(s,a)]-\f{d(s,a)}{d^O(s,a)}\\ \text{s.t.}~~\sum_{a\in\mathcal{A}} d(s,a)=(1-\gamma)d_0(s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a') \end{align} As before, we construct the Lagrangian dual of this problem. Note that our constraints now depend solely on $s$. \begin{align} \max_{d\ge 0} \min_{V(s)} \mathbb{E}_{s,a\sim d}\left[r(s,a)\right]-\f{d(s,a)}{d^O(s,a)}\\+\sum_s V(s)\left((1-\gamma)d_0(s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a')-\sum_{a} d(s,a)\right) \end{align} Using algebraic manipulations similar to those used to obtain $\texttt{dual-Q}$, we get the $\texttt{dual-V}$ formulation for policy optimization: \begin{align} &\max_{d(s,a)\ge0} \min_{V(s)} \mathbb{E}_{s,a\sim d}\left[r(s,a)\right]-\f{d(s,a)}{d^O(s,a)} \nonumber \\ & ~~~~ +\sum_s V(s)\left((1-\gamma)d_0(s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a')-\sum_{a} d(s,a)\right)\\ &= \min_{V(s)}\max_{d(s,a)\ge0} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right]\nonumber \label{eq:dual_v_to_postivity} \\ &~~~~+\mathbb{E}_{s,a\sim d}\left[r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)\right]-\f{d(s,a)}{d^O(s,a)}\\ &=\min_{V(s)} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right]+\mathbb{E}_{s,a\sim d^O}\left[f^*\left(r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)\right)\right] \end{align} In summary, we have two methods for policy optimization, given by: \begin{summarybox} $\texttt{dual-Q}$: $ \max_{\pi}\min_{Q}\; (1-\gamma)\mathbb{E}_{d_0(s),\pi(a|s)}\left[Q(s,a)\right] +\mathbb{E}_{s,a\sim d^O}\left[f^*\left(r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right)\right]$ \end{summarybox} and, \begin{summarybox} $\texttt{dual-V}$: $ \min_{V(s)}\; (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right] +\mathbb{E}_{s,a\sim d^O}\left[f^*\left(r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)\right)\right]$ \end{summarybox} The above derivations for the duals of the RL constrained optimization problem (CoP) --
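For contrast with the min-max structure of $\texttt{dual-Q}$, here is a small sketch of the $\texttt{dual-V}$ loss (again with assumed toy shapes and names, using the $\chi^2$ conjugate): it is a plain scalar function of $V$ alone, so a single minimization suffices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tabular V-table and the chi^2 conjugate f*(u) = u + u^2/4.
n_states, gamma = 5, 0.9
V = rng.normal(size=n_states)
f_star = lambda u: u + u**2 / 4.0

def dual_v_loss(V, s, r, s2, d0_states):
    # Residual y(s,a) = r + gamma * V(s') - V(s), sampled from d^O.
    y = r + gamma * V[s2] - V[s]
    # (1-gamma) * E_{d_0}[V]  +  E_{d^O}[f*(y)]
    return (1 - gamma) * V[d0_states].mean() + f_star(y).mean()

s  = rng.integers(0, n_states, size=64)
r  = rng.normal(size=64)
s2 = rng.integers(0, n_states, size=64)
loss = dual_v_loss(V, s, r, s2, d0_states=np.zeros(16, dtype=int))
assert np.isfinite(loss)
```

Because the objective is an unconstrained scalar function of $V$, any standard gradient-based optimizer can minimize it directly, with no inner maximization over actions or policies.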
$\texttt{dual-Q}$ and $\texttt{dual-V}$ -- bring out several important observations: \begin{itemize} \item $\texttt{dual-Q}$ and $\texttt{dual-V}$ present off-policy policy optimization solutions for regularized RL problems, requiring sampled transitions only from the distribution against which the policy's state-action visitation is regularized. \item This property allows us to solve not only RL problems but also imitation problems, by setting the reward function to zero everywhere and $d^O$ to the expert dataset, as well as offline RL problems, where we want to maximize reward under the constraint that our state-action visitation does not deviate too much from the replay buffer ($d^O=\text{replay buffer}$). \item The $\texttt{dual-V}$ formulation presents a way to solve the RL problem using a single optimization rather than the min-max optimization of the Q-CoP or the standard RL formulation; the V-CoP implicitly subsumes greedy policy maximization. \end{itemize} \subsubsection{Recovering the optimal policy in V-CoP} \label{ap:recovering_policy} In the above derivations for \texttt{dual-Q} and \texttt{dual-V}, we leveraged the fact that the closed-form solution for optimizing $d$ is known and can be written in the form of Eq~\ref{eq:f_cvx_conjugate}.
The value of $d^*$ can be found using the KKT conditions on Eq~\ref{eq:f_cvx_conjugate}: \begin{equation} \frac{d^*(s,a)}{d^o(s,a)}=\max\left(0, (f')^{-1} \left(\frac{y(s,a)}{\alpha}\right)\right) \end{equation} Using this ratio, there are two ways to recover the optimal policy: \textbf{Method 1: Maximum likelihood on the optimal visitation distribution.} Policy learning can be treated as maximizing the likelihood of optimal actions: \begin{equation} \max_\theta \mathbb{E}_{s,a\sim d^*}\left[\log \pi_\theta(a|s)\right] \end{equation} Using importance sampling, we can rewrite the optimization above in a more tractable form: \begin{equation} \max_\theta \mathbb{E}_{s,a\sim d^o}\left[w^*(s,a)\log \pi_\theta(a|s)\right] \end{equation} where $w^*(s,a)=\frac{d^*(s,a)}{d^o(s,a)}$. This way of policy learning is similar to weighted behavior cloning, but it suffers from the issue that the policy is not optimized at state-actions the expert does not visit, i.e., where $w^*(s,a)=0$. \textbf{Method 2: Reverse KL matching on the offline data distribution (information projection).} To allow the policy to be optimized at all states in the offline dataset, we consider an alternate objective: \begin{equation} \min_\theta \kl{d^o(s)\pi_\theta(a|s)}{d^o(s)\pi^*(a|s)} \end{equation} The objective can be written in a form suitable for optimization as follows: \begin{align} \min_\theta \kl{d^o(s)\pi_\theta(a|s)}{d^o(s)\pi^*(a|s)} &= \min_\theta\mathbb{E}_{s\sim d^o(s),a\sim\pi_\theta}\left[\log \frac{\pi_\theta(a|s)}{\pi^*(a|s)}\right]\\ &= \min_\theta \mathbb{E}_{s\sim d^o(s),a\sim\pi_\theta}\left[\log \frac{\pi_\theta(a|s)\,d^*(s)\,d^o(s)\,\pi^o(a|s)}{\pi^*(a|s)\,d^*(s)\,d^o(s)\,\pi^o(a|s)}\right]\\ &= \min_\theta \mathbb{E}_{s\sim d^o(s),a\sim\pi_\theta}\left[\log\frac{\pi_\theta(a|s)}{\pi^o(a|s)}-\log(w^*(s,a))+\log \frac{d^*(s)}{d^o(s)}\right]\\ &= \min_\theta \mathbb{E}_{s\sim d^o(s),a\sim\pi_\theta}\left[\log(\pi_\theta(a|s))-\log({\pi^o(a|s)})-\log(w^*(s,a))\right] \end{align} where the last step drops $\log\frac{d^*(s)}{d^o(s)}$, which does not depend on $\theta$. This method recovers the optimal policy at the states present in the dataset but requires learning another policy $\pi^o(a|s)$, which can be obtained by behavior cloning on the
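A tiny sketch of Method 1's weighting (our own illustration; the $\chi^2$ choice, $\alpha$, and the sample values of $y$ and $\log\pi_\theta$ are all assumptions): the clipped weights $w^*(s,a)=\max(0,(f')^{-1}(y(s,a)/\alpha))$ turn the objective into weighted behavior cloning, and samples with sufficiently negative $y$ receive zero weight:

```python
import numpy as np

# Pearson chi^2 divergence: f(t) = (t-1)^2, so (f')^{-1}(u) = u/2 + 1.
alpha = 1.0
f_prime_inv = lambda u: u / 2.0 + 1.0

# Hypothetical residuals y(s,a) for four dataset samples.
y = np.array([-4.0, -1.0, 0.0, 2.0])
w_star = np.maximum(0.0, f_prime_inv(y / alpha))
assert np.all(w_star >= 0.0) and w_star[0] == 0.0  # very negative y is clipped out

# log_pi stands in for log pi_theta(a|s) on the same samples (illustrative values).
log_pi = np.array([-1.2, -0.7, -0.5, -0.3])
weighted_bc_loss = -(w_star * log_pi).mean()  # minimize = maximize weighted log-likelihood
assert np.isfinite(weighted_bc_loss)
```

The first sample contributes nothing to the gradient, which is exactly the shortcoming Method 2 addresses.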
replay buffer. \subsection{Positivity constraints in Dual RL} \label{ap:positivity_constraints} We have so far ignored an important consideration in the derivation of dual-RL methods in Section~\ref{ap:dual_rl_intro}: the constraint that the distribution $d$ we are optimizing for in the Q-CoP and V-CoP must be nonnegative. Although this does not affect the derivation for \texttt{dual-Q}, as that problem is overconstrained and the distribution is guaranteed to be unique, it is imperative that we consider this constraint in the \texttt{dual-V} setting. We now modify the derivation for \texttt{dual-V} to incorporate these constraints. \begin{align} \max_{d\ge0} \mathbb{E}_{d(s,a)}[r(s,a)]-\f{d(s,a)}{d^O(s,a)}\\ \text{s.t.}~~\sum_{a\in\mathcal{A}} d(s,a)=(1-\gamma)d_0(s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a') \end{align} We arrive at the following equation using the steps in Section~\ref{ap:dual_rl_intro} (see Equation~\ref{eq:dual_v_to_postivity}): \begin{align} &= \min_{V(s)}\max_{d(s,a)\ge0} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right]\nonumber \\ &~~~~+\mathbb{E}_{s,a\sim d}\left[r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)\right]-\f{d(s,a)}{d^O(s,a)}\\ &= \min_{V(s)}\max_{d(s,a)\ge0} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right]\nonumber \\ &~~~~+\mathbb{E}_{s,a\sim d^O}\left[\frac{d(s,a)}{d^O(s,a)}\left(r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)\right)\right]- \mathbb{E}_{s,a\sim d^O}\left[f\left(\frac{d(s,a)}{d^O(s,a)}\right)\right] \end{align} Let $w(s,a)=\frac{d(s,a)}{d^O(s,a)}$ and denote $r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s)$ by $y(s,a)$. We have \begin{align} \min_{V(s)}\max_{w(s,a)\ge0} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right] +\mathbb{E}_{s,a\sim d^O}\left[w(s,a)\,y(s,a)\right]- \mathbb{E}_{s,a\sim d^O}\left[f(w(s,a))\right] \end{align} We now direct our attention to the inner maximization and find a closed-form solution under the constraint that $w(s,a)\ge0$.
\begin{align} \max_{w(s,a)} \max_{\lambda(s,a)\ge 0} \mathbb{E}_{s,a\sim d^O}\left[w(s,a)\,y(s,a)\right]- \mathbb{E}_{s,a\sim d^O}\left[f(w(s,a))\right] +\sum_{s,a} \lambda(s,a)w(s,a) \end{align} where $\lambda$ is the Lagrange dual variable that ensures the positivity constraint. Since strong duality holds, we can use the KKT conditions to find the solutions $w^*(s,a)$ and $\lambda^*(s,a)$. \textbf{Primal feasibility}: $w^*\ge0~~\forall~s,a$\\ \textbf{Dual feasibility}: $\lambda^*\ge0~~\forall~s,a$\\ \textbf{Stationarity}: $d^O(s,a)\left(y(s,a)-f'(w^*(s,a))+\lambda^*(s,a)\right)=0~~\forall~s,a$\\ \textbf{Complementary Slackness}: $w^*(s,a)\lambda^*(s,a)=0~~\forall~s,a$ Using stationarity, we have the following: \begin{equation} f'(w^*(s,a)) = y(s,a)+\lambda^*(s,a)~~\forall~s,a \end{equation} Now, by complementary slackness, at most one of $w^*(s,a)$ and $\lambda^*(s,a)$ can be strictly positive. Combining both cases, we arrive at the following solution for this constrained optimization: \begin{equation} w^*(s,a) = \max\left(0,{f'}^{-1}(y(s,a)) \right) \end{equation} We refer to the function obtained by plugging the solution $w^*$ back in as $f^*_p$: \begin{equation} f^*_p(y(s,a)) = w^*(s,a)\,y(s,a)-{f(w^*(s,a))} \end{equation} Note that we recover the original conjugate $f^*$ if we do not consider the positivity constraint, i.e. \begin{equation} f^*(y(s,a)) ={f'}^{-1}(y(s,a))\,y(s,a)- {f({f'}^{-1}(y(s,a)))} \end{equation} Finally, we have the following optimization to solve for \texttt{dual-V} when considering the positivity constraints: \begin{summarybox} $\texttt{dual-V}~\text{(with positivity constraints)}$: $ \min_{V(s)} (1-\gamma)\mathbb{E}_{d_0(s)}\left[V(s)\right] + \mathbb{E}_{s,a\sim d^O}\left[f^*_p(y(s,a))\right] $ \end{summarybox} \subsection{Dual Connections to Reinforcement Learning} \cql* \begin{proof} We show that CQL~\cite{kumar2020conservative}, a popular offline RL method, is also a special case of $\texttt{dual-Q}$ for offline RL. Consider an $f$-divergence with the generator function $f(t)=(t-1)^2$.
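A concrete sketch of $f^*_p$ for the Pearson $\chi^2$ generator (our illustration, not code from the paper): below the clipping threshold the unconstrained maximizer $(f')^{-1}(y)=y/2+1$ would go negative, so $w^*$ is clipped to zero and $f^*_p$ saturates at $-f(0)$, while above it $f^*_p$ coincides with $f^*$:

```python
import numpy as np

# Pearson chi^2: f(t) = (t-1)^2 with conjugate f*(y) = y + y^2/4.
f     = lambda t: (t - 1.0) ** 2
fstar = lambda y: y + y**2 / 4.0

def fstar_p(y):
    # w*(y) = max(0, (f')^{-1}(y)) = max(0, y/2 + 1)
    w = np.maximum(0.0, y / 2.0 + 1.0)
    # f*_p(y) = w*(y) * y - f(w*(y))
    return w * y - f(w)

ys = np.linspace(-6.0, 4.0, 101)
# Where the unconstrained maximizer is already nonnegative (y >= -2),
# the clipped and unclipped conjugates agree; below that, f*_p = -f(0).
assert np.allclose(fstar_p(ys[ys >= -2.0]), fstar(ys[ys >= -2.0]))
assert np.allclose(fstar_p(ys[ys < -2.0]), -f(0.0))
```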
The conjugate is given by $f^*(u)=u+\frac{u^2}{4}$. With this $f$-divergence, our Q-CoP can be written as: \begin{align} &\frac{(1-\gamma)}{\alpha}\mathbb{E}_{d_0,\pi(a|s)}\left[Q(s,a)\right]+\mathbb{E}_{s,a\sim d^O}\left[\frac{y(s,a,r,s')^2}{4\alpha^2}+\frac{y(s,a,r,s')}{\alpha}\right]\\ &=\frac{(1-\gamma)}{\alpha}\mathbb{E}_{d_0,\pi(a|s)}\left[Q(s,a)\right]+\mathbb{E}_{s,a\sim d^O}\left[\frac{y(s,a,r,s')}{\alpha}\right]+\mathbb{E}_{s,a\sim d^O}\left[\frac{y(s,a,r,s')^2}{4\alpha^2}\right] \end{align} Let us simplify the first two terms: \begin{align} &\frac{1}{\alpha}\left[(1-\gamma)\mathbb{E}_{d_0,\pi(a|s)}\left[Q(s,a)\right]+ \mathbb{E}_{s,a\sim d^O}\left[r(s,a)+\gamma \sum_{s',a'}p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)\right]\right]\\ &= \frac{1}{\alpha}\left[(1-\gamma)\mathbb{E}_{d_0,\pi(a|s)}\left[Q(s,a)\right]+\mathbb{E}_{s,a\sim d^O}\left[\gamma \sum_{s',a'}p(s'|s,a)\pi(a'|s')Q(s',a')\right]-\mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]+\cancel{\mathbb{E}_{s,a\sim d^O}\left[r(s,a)\right]}\right]\label{eq:remove_constant_term_cql}\\ &= \frac{1}{\alpha}\left[(1-\gamma)\sum_{s,a}d_0(s)\pi(a|s)Q(s,a) + \gamma \sum_{s,a} d^O(s,a) \sum_{s',a'}p(s'|s,a)\pi(a'|s')Q(s',a') - \mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]\right] \end{align} \begin{align} &= \frac{1}{\alpha}\left[(1-\gamma)\sum_{s,a}d_0(s)\pi(a|s)Q(s,a) + \gamma \langle d^O,P^\pi Q\rangle - \mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]\right]\\ &= \frac{1}{\alpha}\left[(1-\gamma)\sum_{s,a}d_0(s)\pi(a|s)Q(s,a) + \gamma \langle P^\pi_* d^O, Q\rangle - \mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]\right]\\ &= \frac{1}{\alpha}\left[(1-\gamma)\sum_{s,a}d_0(s)\pi(a|s)Q(s,a) +\gamma \sum_{s,a} \pi(a|s) Q(s,a)\sum_{s',a'}p(s|s',a')d^O(s',a') - \mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]\right] \end{align} \begin{align} &= \frac{1}{\alpha}\left[\sum_{s,a}\left((1-\gamma)d_0(s)+\gamma \sum_{s',a'}p(s|s',a')d^O(s',a') \right)\pi(a|s)Q(s,a) - \mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]\right]\\ &= \frac{1}{\alpha}\left[\sum_{s,a} d^O(s)\pi(a|s) Q(s,a) - \mathbb{E}_{s,a\sim d^O}\left[Q(s,a)\right]\right]\\ &= \frac{1}{\alpha}\left[\mathbb{E}_{s\sim d^O,a\sim \pi}\left[Q(s,a)\right]-
\mathbb{E}{s,a\sim d^o}{Q(s,a)}\right] \end{align} where $P^\pi$ denotes the policy transition operator, $P^\pi_{*}$ denotes the adjoint policy transition operator. Removing constant terms (Equation~\ref{eq:remove_constant_term_cql}) with respect to optimization variables we end up with the following form for \texttt{dual-Q}: \begin{equation} \frac{1}{\alpha}\left[\mathbb{E}{s\sim d^o,a\sim \pi}{Q(s,a)}- \mathbb{E}{s,a\sim d^o}{Q(s,a)}\right]+\mathbb{E}{s,a\sim d^o}{\frac{y(s,a,r,s')^2}{4\alpha^2}} \end{equation} Hence the \texttt{dual-Q} optimization reduces to: \begin{equation} \max_\pi \min_Q \alpha\left[\mathbb{E}{s\sim d^o,a\sim \pi}{Q(s,a)}- \mathbb{E}{s,a\sim d^o}{Q(s,a)}\right]+\mathbb{E}{s,a\sim d^o}{\frac{y(s,a,r,s')^2}{4}} \end{equation} This equation matches the unregularized CQL objective (Equation 3 in~\cite{kumar2020conservative}). \end{proof} \xql* \begin{proof} We show that the Extreme Q-Learning~\cite{} framework is a special case of the dual framework, specifically the $\texttt{dual-V}$ using the semi-gradient update rule. Consider setting the $f$-divergence to be the KL divergence in the dual V framework, the regularization distribution and the initial state distribution to be the replay buffer distribution ($d^O=d^R$ and $d_0=d^R$). The conjugate of the generating function for KL divergence is given by $f^*(t)=e^{t-1}$. \begin{equation} \label{eq:dual-V-kl} \min_{V(s)} (1-\gamma)\mathbb{E}{d_0(s)}{V(s)} +\mathbb{E}{s,a\sim d^R}{f^*\left(\left[r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s))\right]/ \alpha\right)} \end{equation} \begin{align} \min_{V(s)} (1-\gamma)\mathbb{E}{d_0(s)}{V(s)} +\mathbb{E}{s,a\sim d^R}{\text{exp}(\left(\left[r(s,a)+\gamma \sum_{s'} p(s'|s,a)V(s')-V(s))\right]/ \alpha-1\right)} \end{align} A popular approach for stable optimization in temporal difference learning is the semi-gradient update rule which has been studied in previous works~\cite{}. In this update strategy, we fix the targets for the temporal difference backup. 
Our target in the above optimization is given by: \begin{equation} \hat{Q}(s,a) = r(s,a) + \gamma \sum_{s'} p(s'|s,a)V(s') \end{equation} The update equation for $V$ is now given by: \begin{align} \min_{V(s)} (1-\gamma)\mathbb{E}{d_0(s)}{V(s)} +\mathbb{E}{s,a\sim d^R}{\exp\left(\left[\hat{Q}(s,a)-V(s)\right]/ \alpha-1\right)} \end{align} where the hat denotes the $\texttt{stop-gradient}$ operation. We approximate this target by mean-squared regression with the single-sample unbiased estimate as follows: \begin{equation} \min_Q \mathbb{E}{s,a,s'\sim d^R}{(Q(s,a) - (r(s,a) + \gamma V(s')))^2} \end{equation} The procedure is now equivalent to Extreme Q-Learning and is a special case of the \texttt{dual-V} framework. \end{proof} \subsubsection{A family of implicit maximizers} \label{ap:implicit_maximizers_family} \implicit* \begin{proof} We analyze the behavior of the following optimization: \begin{equation} \label{eq:ap_implicit_maximization_general} \min_{v} (1-\lambda)\mathbb{E}{x\sim D}{v}+ \lambda\mathbb{E}{x\sim D}{f^*_p\left( x-v \right)} \end{equation} $f^*_p(t)$ is given by (using the result derived in ~\ref{ap:positivity_constraints}): \begin{equation} f^*_p(t) = -f\left(\max({f'}^{-1}(t),0)\right) + t \max\left({f'}^{-1}(t),0 \right) \end{equation} The function $f^*_p$ admits two different behaviors: \[ f^*_p(t)= \begin{cases} -f({f'}^{-1}(t)) + t\, {f'}^{-1}(t) = f^*(t) ,& \text{if } {f'}^{-1}(t)>0\\ -f(0), & \text{otherwise} \end{cases} \] where $f^*$ is the convex conjugate of the $f$-divergence generator and is strictly increasing in $t$. We note other properties of the generator $f$ of an $f$-divergence: $f^*$, $f'$ and $(f')^{-1}$ are strictly increasing, $f(0_+)>0$, and $(f')^{-1}(t)>0$ when $t>0$ and $0$ otherwise. We analyze the second term in Eq~\ref{eq:ap_implicit_maximization_general}.
It can be expanded as follows: \begin{align} \lambda \int_{x:(f')^{-1}(x-v)>0} p(x) f^*(x-v) dx - \lambda\int_{x:(f')^{-1}(x-v)<0} f(0)p(x)dx \end{align} From the properties of $f$, we use the fact that $(f')^{-1}(x-v)>0$ when $x-v>0$, or equivalently $x>v$: \begin{align} \lambda \int_{x>v} p(x) f^*(x-v) dx - \lambda\int_{x\le v} f(0)p(x)dx \end{align} As $v$ increases, the first term decreases monotonically and the second (negative) term grows in magnitude, so the combined expression decreases until $v=x^*$ (the supremum of the support of the distribution), after which it assumes the constant value $-\lambda f(0)$. Going back to our original optimization in Equation~\ref{eq:ap_implicit_maximization_general}, the first term increases monotonically with $v$ while the second term decreases. As $\lambda\to 1$, the minimization of the second term takes precedence, increasing $v$ until saturation ($v=x^*$). We can go further and characterize the effect of $\lambda$ on the solution $v_\lambda$. The first-order optimality condition can be written in closed form as: \begin{equation} \frac{(1-\lambda)}{\lambda} = \mathbb{E}{x\sim D}{{f_p^*}'(x-v)} \end{equation} Using the fact that ${f_p^*}'$ is non-decreasing, the right-hand side increases as $v$ decreases. This in turn implies that for all $\lambda_1,\lambda_2$ such that $\lambda_1\le\lambda_2$ we have $v_{\lambda_1}\le v_{\lambda_2}$. \end{proof} \subsection{Dual Connections to Imitation Learning} \ibc* Equation~\ref{eq:dual-Q-imit} suggests that, intuitively, IQ-Learn trains an energy-based model in the form of $Q$: it pushes down the Q-values at actions predicted by the current policy and pushes up the Q-values at the expert state-action pairs.
This becomes clearer when the divergence $f$ is chosen to be Total Variation ($f^*=\mathbb{I}$); IQ-Learn reduces to: \begin{align} &(1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)}+\mathbb{E}{s,a\sim d^E}{\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}\\ &= \left[(1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)}+\mathbb{E}{s,a\sim d^E}{\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')}\right]-\mathbb{E}{s,a\sim d^E}{Q(s,a)} \label{eq:iq_tv} \end{align} Let us simplify the first two terms: \begin{align} &(1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)}+\mathbb{E}{s,a\sim d^E}{\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')}\\ &=(1-\gamma) \sum_{s,a} d_0(s)\pi(a|s)Q(s,a) + \gamma\sum_{s,a} d^E(s,a) \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a') \end{align} \begin{align} & = (1-\gamma) \sum_{s,a} d_0(s)\pi(a|s)Q(s,a) + \gamma\sum_{s',a'} \pi(a'|s')Q(s',a')\Big(\sum_{s,a} d^E(s,a) p(s'|s,a)\Big)\\ & = (1-\gamma) \sum_{s,a} d_0(s)\pi(a|s)Q(s,a) + \gamma\sum_{s,a} \pi(a|s)Q(s,a)\Big(\sum_{s',a'} d^E(s',a') p(s|s',a')\Big) \end{align} \begin{align} & = \sum_{s,a}\pi(a|s)Q(s,a) \left[(1-\gamma) d_0(s) + \gamma\sum_{s',a'} d^E(s',a') p(s|s',a')\right]\\ & = \sum_{s,a}\pi(a|s)Q(s,a)\, d^E(s) \end{align} where the last step is due to the steady-state property of the MDP (Bellman flow constraint).
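The Bellman-flow step used above can be verified numerically on a small random MDP; the sketch below (ours, with assumed tabular shapes) checks that replacing $(1-\gamma)d_0(s)+\gamma\sum_{s',a'}d^E(s',a')p(s|s',a')$ by $d^E(s)$ leaves the weighted sum unchanged for arbitrary $\pi$ and $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)     # p(s'|s,a)
pi_E = rng.random((S, A)); pi_E /= pi_E.sum(-1, keepdims=True)
d0 = rng.random(S); d0 /= d0.sum()

# Expert state occupancy: d^E(s) solves d = (1-gamma) d0 + gamma P_piE^T d
P_piE = np.einsum('sap,sa->sp', P, pi_E)                     # p(s'|s) under pi_E
dE_s = np.linalg.solve(np.eye(S) - gamma * P_piE.T, (1 - gamma) * d0)
dE = dE_s[:, None] * pi_E                                    # d^E(s,a)

# Arbitrary learner policy and Q-function
pi = rng.random((S, A)); pi /= pi.sum(-1, keepdims=True)
Q = rng.normal(size=(S, A))

# (1-gamma) d0(s) + gamma * sum_{s',a'} d^E(s',a') p(s|s',a')  ==  d^E(s)
flow = (1 - gamma) * d0 + gamma * np.einsum('xa,xas->s', dE, P)
lhs = np.sum(pi * Q * flow[:, None])
rhs = np.sum(pi * Q * dE_s[:, None])
assert np.isclose(lhs, rhs)
```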
Therefore IQ-Learn/$\texttt{dual-Q}$ for offline imitation (in the special case of the TV divergence) simplifies to (from Equation~\ref{eq:iq_tv}): \begin{align} &\left[(1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)}+\mathbb{E}{s,a\sim d^E}{\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')}\right]-\mathbb{E}{s,a\sim d^E}{Q(s,a)}\\ & ~~~~~~~~~~~= \min_Q \mathbb{E}{d^E(s),\pi(a|s)}{Q(s,a)}-\mathbb{E}{s,a\sim d^E}{Q(s,a)} \label{eq:iqlearn_tv} \end{align} The gradient of the above optimization with respect to $Q$ matches the gradient update of Implicit Behavior Cloning~\cite{florence2022implicit} with $Q$ as the energy-based model. \section{Off-policy imitation learning with relaxed coverage} \label{ap:closer} We now derive our proposed method for imitation learning with arbitrary data. The derivation for the $\texttt{dual-Q}$ setting is shown below; the \texttt{dual-V} derivation can be done similarly. \closerq* \begin{proof} \begin{align*} & \max_{\pi, d\ge0} \min_{Q(s,a)} \alpha\mathbb{E}{s,a\sim d}{r(s,a)}-\f{d_{mix}^R}{d_{mix}^{E,R}}\\ &+\alpha\sum_{s,a} Q(s,a)\left((1-\gamma)d_0(s)\pi(a|s)+\gamma \sum_{s',a'} d(s',a')p(s|s',a')\pi(a|s)-d(s,a)\right) \end{align*} We can use the same algebraic machinery as before (Section~\ref{ap:dual_rl_intro}) to get an unconstrained tractable optimization problem: \begin{align} &\max_{\pi,d\ge0} \min_{Q(s,a)} \alpha\mathbb{E}{s,a\sim d(s,a)}{r(s,a)}-\f{d_{mix}^R}{d_{mix}^{E,R}}\nonumber\\ &+\alpha \sum_{s,a} Q(s,a)\left((1-\gamma)d_0(s)\pi(a|s)+\gamma \sum_{s',a'} d(s',a') p(s|s',a')\pi(a|s)-d(s,a)\right) \end{align} \begin{align} &= \max_{\pi,d\ge0} \min_{Q(s,a)} \alpha (1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)} \nonumber \\ &+ \alpha \mathbb{E}{s,a\sim d}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}-\f{d_{mix}^R}{d_{mix}^{E,R}} \end{align} \begin{align} &= \max_{\pi,d\ge0} \min_{Q(s,a)} \alpha (1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)} \nonumber \\ &+ \alpha \mathbb{E}{s,a\sim d}{r(s,a)+\gamma \sum_{s',a'}
p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}\nonumber\\ &+(1-\alpha) \mathbb{E}{s,a\sim d^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}\nonumber\\ &-(1-\alpha) \mathbb{E}{s,a\sim d^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}-\f{d_{mix}^R}{d_{mix}^{E,R}} \end{align} \begin{summarybox} \text{Imitation from Arbitrary data (dualQ, no positivity constraints)}\\ \begin{align} &=\max_{\pi(a|s)}\min_{Q(s,a)}\max_{d\ge0} \alpha (1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)} \nonumber \\ &+\mathbb{E}{s,a\sim d_{mix}^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}-\f{d_{mix}^R}{d_{mix}^{E,R}}\nonumber\\ & - (1-\alpha) \mathbb{E}{s,a\sim d^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)} \label{eq:imitation_mixture_approach2_no_constraint} \end{align} \end{summarybox} Note that the inner maximization with respect to $d$ carries the constraint $d\ge0$. This constraint was not necessary in the previous \texttt{dual-Q} settings we have discussed. In this setting, to obtain a tractable closed form we change the optimization variable from $d$ to $d^R_{mix}$ while keeping the constraint $d\ge0$, which prevents the optimization from yielding values of $d^R_{mix}$ for which $d<0$. Ignoring this constraint ($d\ge0$) results in the following dual optimization for imitation from arbitrary data: \begin{align} &\max_{\pi(a|s)}\min_{Q(s,a)} \alpha (1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)} \nonumber\\ &+\mathbb{E}{s,a\sim d_{mix}^{E,R}}{f^*(r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a))}\nonumber\\ & - (1-\alpha) \mathbb{E}{s,a\sim d^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)} \end{align} To incorporate the positivity constraints we begin with the inner maximization w.r.t.\ $d^R_{mix}$ and consider the terms dependent on $d^R_{mix}$ below.
\begin{align} \max_{d^R_{mix},d\ge0} \mathbb{E}{s,a\sim d_{mix}^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)}-\f{d_{mix}^R}{d_{mix}^{E,R}} \end{align} Let $p(s,a) = \frac{(1-\alpha)\rho^R(s,a)}{\alpha \rho^E(s,a) + (1-\alpha)\rho^R(s,a)}$, $y(s,a)=r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)$ and $w(s,a)=\frac{d^R_{mix}(s,a)}{d^{E,R}_{mix}(s,a)}$. We construct the Lagrangian dual to incorporate the constraint $d\ge0$ in its equivalent form $w(s,a)\ge p(s,a)$ and obtain the following: \begin{align} \max_{w(s,a)}\max_{\lambda\ge 0 } \mathbb{E}{s,a\sim d_{mix}^{E,R}}{w(s,a)y(s,a)}-\mathbb{E}{s,a\sim d_{mix}^{E,R}}{f(w(s,a))} + \sum_{s,a} \lambda(s,a) (w(s,a)-p(s,a)) \end{align} Since strong duality holds, we can use the KKT conditions to find the solutions $w^*(s,a)$ and $\lambda^*(s,a)$. \textbf{Primal feasibility}: $w^*(s,a)\ge p(s,a)~~\forall~s,a$\\ \textbf{Dual feasibility}: $\lambda^*\ge0~~\forall~s,a$\\ \textbf{Stationarity}: $d^{E,R}_{mix}(s,a)(y(s,a)-f'(w^*(s,a))+\lambda^*(s,a))=0~~\forall~s,a$\\ \textbf{Complementary Slackness}: $(w^*(s,a)-p(s,a))\lambda^*(s,a)=0~~\forall~s,a$ Using stationarity we have the following: \begin{equation} f'(w^*(s,a)) = y(s,a)+\lambda^*(s,a)~~\forall~s,a \end{equation} Now using complementary slackness only two cases are possible: $w^*(s,a)>p(s,a)$ or $\lambda^*(s,a)>0$. Combining both cases we arrive at the following solution for this constrained optimization: \begin{equation} w^*(s,a) = \max\left(p(s,a),{f'}^{-1}(y(s,a)) \right) \end{equation} Plugging this closed-form solution for the inner optimization back in, the overall optimization becomes:
\begin{align} &\max_{\pi(a|s)}\min_{Q(s,a)} \alpha (1-\gamma)\mathbb{E}{d_0(s),\pi(a|s)}{Q(s,a)} \nonumber\\ &+\mathbb{E}{s,a\sim d^{E,R}_{mix}}{\max\left(p(s,a), (f')^{-1} \left(y(s,a)\right)\right)y(s,a)-\alpha f\left(\max\left(p(s,a), (f')^{-1} \left(y(s,a)\right)\right)\right)}\\ & - (1-\alpha) \mathbb{E}{s,a\sim d^R}{r(s,a)+\gamma \sum_{s',a'} p(s'|s,a)\pi(a'|s')Q(s',a')-Q(s,a)} \end{align} Thus, the closed-form solution with the positivity constraints requires us to estimate the ratio $p(s,a)$, which is possible by learning a discriminator. We observed in our experiments that ignoring the positivity constraints still resulted in a performant method while having the benefit of simplicity. A similar derivation can be done in V-space to obtain an analogous result. \end{proof} \section{Implementation and Experiment Details} \label{ap:experiment_details} \textbf{Environments:} In this work we benchmark on four MuJoCo (\href{https://github.com/deepmind/mujoco/blob/main/LICENSE}{licensed under CC BY 4.0}) locomotion environments: Hopper, Walker2d, HalfCheetah, and Ant. \textbf{Offline datasets}: We use offline datasets of environment interactions from D4RL~\citep{fu2020d4rl}. Our dataset composition for `random+expert' is similar to SMODICE~\citep{ma2022smodice}, where we use a mixture of a small number of expert trajectories ($\le$ 200 trajectories) and a large number of low-quality trajectories from the "random-v2" dataset (1 million transitions). We similarly create another offline dataset `medium+expert' consisting of 200 expert trajectories and 1 million medium-quality transitions from the "medium-v2" dataset. The `random+few-expert' dataset is similar to `random+expert' except that only 30 expert trajectories are present in the offline dataset.
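A minimal sketch (a hypothetical helper, not the paper's actual loading code) of assembling such a mixed offline buffer, assuming the D4RL dictionary layout with `observations', `actions', `rewards', `terminals' and `next\_observations' keys:

```python
import numpy as np

def mix_datasets(expert, lowq, n_expert_traj=200, seed=0):
    """Concatenate a few expert trajectories with many low-quality transitions.

    Both arguments are dicts of aligned per-transition arrays in the D4RL
    layout; this helper is illustrative only."""
    rng = np.random.default_rng(seed)
    ends = np.flatnonzero(expert['terminals'])           # trajectory boundaries
    cut = ends[n_expert_traj - 1] + 1 if len(ends) >= n_expert_traj \
        else len(expert['terminals'])
    mixed = {k: np.concatenate([expert[k][:cut], lowq[k]]) for k in expert}
    perm = rng.permutation(len(mixed['rewards']))        # shuffle transitions
    return {k: v[perm] for k, v in mixed.items()}
```

Because each entry is a self-contained $(s,a,r,s',\text{done})$ transition, shuffling the concatenated arrays is safe.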
\textbf{Expert dataset}: The offline dataset for imitation consists of 1000 transitions obtained from the "expert-v2" dataset for the respective environment. \textbf{Baselines:} We compare our proposed methods against four representative methods for offline imitation learning with suboptimal data -- SMODICE~\citep{ma2022smodice}, RCE~\citep{eysenbach2021replacing}, ORIL~\citep{zolna2020offline} and IQ-Learn~\citep{garg2021iq}. We do not compare to DEMODICE~\citep{kim2022demodice} as SMODICE was shown to be competitive with it in~\citep{ma2022smodice}. SMODICE is an imitation method emerging from the dual framework but under a restrictive coverage assumption. ORIL adapts GAIL~\citep{ho2016generative} to the offline setting by using an offline RL algorithm for policy optimization. The RCE baseline combines RCE~\citep{eysenbach2021replacing}, the state-of-the-art online example-based RL method, with TD3-BC. ORIL and RCE share the same state-action based discriminator as SMODICE, and use TD3-BC~\citep{fujimoto2021minimalist} as the offline RL algorithm. All the approaches only have access to expert state-action trajectories. We use the authors' open-source implementations of the baselines SMODICE, RCE, and ORIL available at \url{https://github.com/JasonMa2016/SMODICE}, with the author-provided hyperparameters (similar to those used in~\citep{ma2022smodice}) for all MuJoCo locomotion environments. IQ-Learn was tested on our expert dataset using the authors' implementation found at \url{https://github.com/Div99/IQ-Learn}. We tested two IQ-Learn loss variants, `v0' and `value', as found in their hyperparameter configurations, and took the better of the two runs. \textbf{Policy Optimization:} We use Method 1 in Section~\ref{ap:recovering_policy} for the policy update. \subsection{Hyperparameters} Hyperparameters for our proposed off-policy imitation learning method \texttt{ReCOIL} are shown in Table~\ref{tab:recoil-hp}. \begin{table}[h!]
\begin{center} \begin{tabular}{l|c} \toprule \textbf{Hyperparameter} & \textbf{Value}\\ \midrule Policy updates $n_{pol}$ & 1\\ Policy learning rate & 3e-5\\ Value learning rate & 3e-4\\ Temperature $\alpha$ & 0.1\\ $f$-divergence & $\chi^2$\\ \bottomrule \end{tabular} \end{center} \caption{Hyperparameters for \texttt{ReCOIL}.} \label{tab:recoil-hp} \end{table} \section{Additional Experimental Results} \subsection{Benchmarking performance of ReCOIL on MuJoCo tasks} We show learning curves for ReCOIL in Figure~\ref{fig:offline_il} below. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{figures/offline_il/offline-il-coverage.pdf} \\ \includegraphics[width=1.0\linewidth]{figures/offline_il/offline-il.pdf} \\ \includegraphics[width=0.9\linewidth]{figures/offline_il/offline-il-legend.pdf} \vspace{-2.0mm} \end{center} \caption{ReCOIL performs competitively in the setting of learning to imitate from diverse offline data. The results are averaged over 5 seeds.} \label{fig:offline_il} \end{figure} \section*{Acknowledgements} This work has taken place in the Personal Autonomous Robotics Lab (PeARL) at The University of Texas at Austin. PeARL research is supported in part by the NSF (IIS-1724157, IIS-1749204, IIS-1925082), AFOSR (FA9550-20-1-0077), and ARO (78372-CS). This research was also sponsored by the Army Research Office under Cooperative Agreement Number W911NF-19-2-0333. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. \end{document}
\section{Introduction} \label{sec:intro} The analysis of thermodynamic properties of a quantum field around a black hole is an important step towards a deeper understanding of the quantum mechanical properties of black holes \cite{Hawking1, Hawking2, Beckenstein}. It is well known \cite{tHooft, Davies, Vega, Susskind, Kabat, Solodukhin, Barbon, Emperan, Alwis0, Alwis, Fursaev, Belgiorno} that the free energy and the entropy of a quantum field around a black hole are plagued by divergences caused by the presence of the horizon. Therefore one needs to devise regularization methods to tackle these so-called horizon divergences. One such regularization procedure is 't Hooft's brick wall method \cite{tHooft}, which consists of confining the quantum field inside a spherical shell around a static black hole bounded by two Dirichlet walls, \textit{i.e.} two hypersurfaces where the quantum fields are subject to Dirichlet boundary conditions. The inner Dirichlet wall (the brick wall), situated just outside the black hole horizon, regulates the horizon divergence, and the outer wall provides an infrared cutoff. Our aim in this paper is to investigate the effects of the brick wall on the thermodynamics of both bosonic and fermionic fields in Schwarzschild, Reissner-Nordstr\"{o}m (RN), extreme RN and dilatonic backgrounds, and in their near horizon geometries. Our main result is that at the Hawking temperature $T_{H}$ the contribution of the brick wall to the entropy is comparable to the bulk contribution. Here the distinction between bulk and boundary contributions stems from the heat kernel expansion for the Dirichlet problem; the boundary terms are those terms in the heat kernel expansion which have their origin in the Dirichlet boundary condition on the Laplacian. It is well known that the leading bulk contribution to the entropy is proportional to the area $A_{H}$ of the horizon \cite{tHooft, Vega, Susskind, Kabat, Solodukhin, Barbon, Emperan, Alwis0, Alwis, Fursaev, Belgiorno}.
We will show that at the Hawking temperature the leading boundary effect is also proportional to $A_{H}$ and of the same order of magnitude as the leading bulk term. More precisely we will show that the entropy of the quantum field at $T_{H}$ is \begin{equation} S=B\, \frac{A_{H}}{\delta^{2}}+C\log \delta^{2}, \end{equation} where \begin{equation} B=\frac{1}{(4\pi^{3})}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)\right]+\ldots, \end{equation} and \begin{equation} C=C(\kappa, A_{H})=\frac{\zeta(4)}{\pi^{4}}\left[\frac{1}{2}-\frac{3}{2\sqrt{\pi}}\kappa A^{1/2}_{H}\right]-\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}+\ldots. \end{equation} Here $\delta$ is a cutoff parameter regulating the horizon divergences (the brick wall cutoff \cite{tHooft}), $\kappa$ is the surface gravity, $A_{H}$ is the area of the event horizon, and $m$ is the mass parameter of the quantum field. Both coefficients $B$ and $C$ will be calculated in the ultrarelativistic regime using the first four terms of the high temperature expansion in the small parameter $\beta m$. It will be shown that they both consist of a bulk plus a boundary contribution which are of the same order of magnitude at $T_{H}$. As is clear from the above formulae the coefficient $B$ will be seen to be the same for all geometries considered. This is in accordance with \cite{Ordonez1} where the universality of the $B$ coefficient was argued for by a scaling/dimensional analysis of the heat kernel coefficients in the near horizon geometry. In fact we will also examine the near horizon geometry and show that the common $B$ coefficient has its origin in the near horizon geometry. Moreover, in the near horizon geometry we will show that the logarithmically divergent term produces a finite term in $\log A_{H}$. On the other hand, in all cases considered the $C$ coefficient will be seen to be a specific function of $\kappa$ and $A_{H}$.
We will also consider a regularization procedure involving Neumann walls and obtain analogous results for that case as well. Instead of the original metric we will work with the optical metric \cite{gibbons} which is related to the former by a conformal transformation. The role of the optical metric in the computation of thermodynamic quantities in a static background was first noticed in \cite{Alwis} as a direct consequence of the path integral measure used in the calculation of the partition function. Here we will give an alternative derivation of the optical metric based directly on the field equations. The main technical advantage of the optical metric is that it allows us to express the single particle energies as eigenvalues of a Hamiltonian constructed therefrom. For example, for the original static metric the single particle energies arise not as eigenvalues but only as separation constants in the Klein-Gordon equation; whereas after introducing the optical metric, which is an ultra-static metric, via a conformal transformation we get the single particle energies as eigenvalues of the aforementioned Hamiltonian. This then allows us to use the heat kernel expansion to calculate the thermodynamic quantities. Here we also see the importance of the brick wall; it constrains the system to a region where the conformal transformation is free of singularities; this is the brick wall regularization procedure by which the horizon divergences are avoided. In the following we will work in the ultrarelativistic regime where the mass parameter $m$ of the field is much smaller than the temperature $T$, $m\ll T$, and employ an expansion in terms of the small parameter $\beta m$ ($\beta=T^{-1}$) (sometimes called the high temperature expansion) to study the free energy and the entropy of the quantum field in the presence of Dirichlet walls.
We will also present an alternative derivation \cite{Akant} of the high temperature expansion based on Mellin transform methods which does not make use of the zeta function regularization. The resulting expansion is in complete agreement with the one obtained by the latter method \cite{Dowker1,Actor,Dowker2,Dowker3,Kirsten1,Kirsten2,T1,T2}. The brick wall method should not be confused with the often used volume cut-off method, in which one works within the same spherical shell used in the brick wall method but without imposing any boundary conditions (see \cite{FrolovFursaev} for a comparison of various techniques employed in the calculation of the thermodynamic entropy of quantum fields near a black hole). In particular the heat kernel expansion used in the volume cut-off method does not involve the boundary terms. Thus in the light of our results we see that the volume cut-off method cannot be equivalent to the brick wall method, which contains extra boundary terms that are of the same order of magnitude (at $T=T_{H}$) as the bulk terms. The leading order contribution to the entropy in the high temperature expansion was first calculated in \cite{Solodukhin} in the volume cut-off approach by a rather involved analysis based on a Liouville type quantum field theory. Later this result was reproduced in \cite{Alwis, Barvinsky, Barbon} by a simpler analysis based on the use of the optical metric. Our leading order contribution will be in complete agreement with the one given in the above papers. Higher order corrections in the high temperature expansion were obtained in the volume cut-off method by \cite{Vanzo1, Vanzo2}. The boundary effects of the brick wall on thermodynamics in the approximate near horizon geometry were first considered in \cite{Ordonez1} where the general form of the leading divergence was studied via the heat kernel coefficients. For a WKB analysis of the problem see \cite{Ghosh1, Ghosh2, Ghosh3, Ghosh4, Ordonez2, Ordonez3}.
The effect of boundaries on particle detection in AdS space was discussed recently in \cite{Emelyanov}. For an extensive review of the black hole entropy problem see \cite{Solodukhin2}. We would like to remark that there are two different viewpoints towards the brick wall hypothesis. In \cite{tHooft} (and later in \cite{FrolovNovikov, Barvinsky}) the brick wall is a real physical barrier whose origin lies in the full theory of quantum gravity, whereas in \cite{Susskind} (see also \cite{Barbon}) the introduction of the brick wall is a regularization procedure in which the regulated horizon divergences are used to cancel the divergences in the bare gravitational constant (this is accomplished by requiring the total entropy, the entropy of the matter fields plus the entropy of the gravitons, to be finite). Here is a brief outline of this paper. In Sec. 2 we will briefly review the optical metric construction and derive the high temperature expansion of the free energy for a neutral Bose field with non-vanishing chemical potential. We will first express the free energy as a harmonic sum and then apply the Mellin transform method together with the heat kernel expansion to get the high temperature expansion. We will also show how the resulting expansion is modified when one considers charged bosons and fermions. The main results of this section are the high temperature expansions (\ref{free0}), (\ref{cbosef}), (\ref{fermif}) of the free energies of neutral bosons, charged bosons and charged fermions, respectively. In Sec. 3 we will apply the results of Sec. 2 to static black hole backgrounds such as the Schwarzschild, the RN (including the extreme limit) and the dilaton backgrounds and derive the boundary effects on the horizon divergences of the entropy; we will evaluate the latter at the Hawking temperature and compare the bulk and boundary terms with each other.
We will also discuss certain issues concerning the near horizon approximation to the entropy calculation. In particular, we will use the transformation properties of the heat kernel coefficients under a conformal scaling to derive the general form of the leading divergent term in the entropy. In Sec. 4 we will summarize our results for the horizon divergences in the entropy, further analyze the logarithmic divergence, and infer the $\kappa$ and $A_{H}$ dependence of the $C$ coefficient. In Sec. 5 we will try to understand the universality of the $B$ coefficient by considering the near horizon geometry (this is the strategy employed in \cite{Ordonez1}). In Sec. 6 we will examine the possibility of using Neumann walls as thermodynamic regulators instead of Dirichlet walls. \section{Thermodynamics Around the Black Hole} Consider the $d+1$ dimensional static metric \begin{equation} ds^{2}=-F dt^{2}+g_{ij}dx^{i}dx^{j}. \end{equation} The free energy of the Bose gas is given by \begin{eqnarray} \mathcal{F} = \frac{1}{\beta}\sum_{\sigma}\log(1-e^{-\beta(\epsilon_{\sigma}-\mu)}). \end{eqnarray} Here the $\epsilon_{\sigma}$'s are the single particle energies determined by solving the Dirichlet problem for the Klein-Gordon equation coupled to the space-time scalar curvature $\mathcal{R}$ \begin{equation} \left[-\Box+\xi \mathcal{R}+m^{2}\right]f_{\sigma}=0 \end{equation} by the separation of variables $f_{\sigma}(t,x)=e^{-i\epsilon_{\sigma}t}\phi_{\sigma}(x)$ \cite{Thorne}. Under the scaling transformation \begin{equation} \overline{g}_{\mu\nu}=\Omega^{2}g_{\mu\nu},\;\;\;\overline{f}_{\sigma}=\Omega^{\frac{1-d}{2}}f_{\sigma}, \end{equation} the KG equation becomes \begin{equation} \left\{-\Box_{conf}+\frac{d-1}{4d}R+\Omega^{-2}\left[m^{2}+\left(\xi-\frac{d-1}{4d}\right)\mathcal{R}\right]\right\}\overline{f}_{\sigma}=0. \end{equation} Here $\Box_{conf}$ and $R$ are respectively the D'Alembertian and the scalar curvature of the transformed metric $\overline{g}_{\mu\nu}$.
In particular if we choose $\Omega=F^{-1/2}$ we get the optical metric \begin{equation} d\overline{s}^{2}=- dt^{2}+\overline{g}_{ij}dx^{i}dx^{j}, \end{equation} where \begin{equation} \overline{g}_{ij}=F^{-1}g_{ij}. \end{equation} Thus the KG equation in the new variables is \begin{equation} \left\{\partial_{0}^{2}-\Delta+\frac{d-1}{4d} R+F\left[m^{2}+\left(\xi-\frac{d-1}{4d}\right)\mathcal{R}\right]\right\}\overline{f}_{\sigma}=0. \end{equation} Here $\Delta$ is the Laplacian of the optical metric. Since $F$ does not depend on $t$ we can still write the solution in the separated form $\overline{f}_{\sigma}(t,x)=e^{-i\epsilon_{\sigma}t}\overline{\phi}_{\sigma}(x)$. Thus we get the eigenvalue problem for the $\epsilon_{\sigma}$'s \begin{equation} \left\{-\Delta+\frac{d-1}{4d}R+F\left[m^{2}+\left(\xi-\frac{d-1}{4d}\right)\mathcal{R}\right]\right\}\overline{\phi}_{\sigma}= \epsilon_{\sigma}^{2}\overline{\phi}_{\sigma}. \end{equation} Defining \begin{equation} U_{1}=\frac{d-1}{4d}R+F\left[m^{2}+\left(\xi-\frac{d-1}{4d}\right)\mathcal{R}\right], \end{equation} we get \begin{equation} H_{1}\overline{\phi}_{\sigma}=\epsilon_{\sigma}^{2}\overline{\phi}_{\sigma}, \end{equation} where \begin{equation} H_{1}=-\Delta+U_{1}. \end{equation} Now coming back to the free energy and proceeding as in \cite{Akant} we write it as \begin{eqnarray} \mathcal{F} &=& -\frac{1}{\beta}\sum_{\sigma}\sum_{k=1}^{\infty}\frac{1}{k}e^{k\beta\mu}e^{-k\beta\epsilon_{\sigma}} \\ &=& -\sum_{k=1}^{\infty}\frac{1}{k\beta}e^{k\beta\mu}\sum_{\sigma}e^{-k\beta\epsilon_{\sigma}}.
\end{eqnarray} Using the subordination identity \begin{eqnarray} e^{-b\sqrt{x}} = \frac{b}{2\sqrt{\pi}}\int_{0}^{\infty}\frac{du}{u^{3/2}}\,e^{-\frac{b^{2}}{4u}}\,e^{-ux}, \end{eqnarray} we get \begin{equation}\label{mp} \sum_{\sigma}e^{-k\beta\epsilon_{\sigma}}=\sum_{\sigma}e^{-k\beta m'(m'^{-1}\epsilon_{\sigma})}=\frac{k\beta m'}{2\sqrt{\pi}}\int_{0}^{\infty}\frac{du}{u^{3/2}}e^{-\frac{(k\beta m')^{2}}{4u}-u}Tr\,e^{-um'^{-2}H}, \end{equation} and \begin{eqnarray}\label{F} \mathcal{F}= -\sum_{k=1}^{\infty}e^{k\beta\mu}\frac{m'}{2\sqrt{\pi}}\int_{0}^{\infty}\frac{du}{u^{3/2}}e^{-\frac{(k\beta m')^{2}}{4u}-u}Tr\,e^{-um'^{-2}H}. \end{eqnarray} Here $m'$ is a mass parameter which will be used to form the dimensionless expansion parameter $\beta m'$ for the high temperature expansion. Here we also defined \begin{equation} H=H_{1}-m'^{2}=-\Delta+U, \end{equation} with \begin{equation} U=U_{1}-m'^{2}=\frac{d-1}{4d}R+F\left[m^{2}+\left(\xi-\frac{d-1}{4d}\right)\mathcal{R}\right]-m'^{2}. \end{equation} Defining $y=\beta m'$ let us write \begin{equation}\label{harm} \mathcal{F}(y)=-m'\sum_{k=1}^{\infty}e^{ky\overline{\mu}_{r}}G(ky), \end{equation} with \begin{equation}\label{Gdef} G(y)=\frac{e^{y\overline{\epsilon}_{0}}}{2\sqrt{\pi}}\int_{0}^{\infty}\frac{du}{u^{3/2}}e^{-\frac{y^{2}}{4u}-u}Tr\,e^{-um'^{-2}H}, \end{equation} and \begin{equation} \mu_{r}=\mu-\epsilon_{0},\;\;\;\overline{\mu}=\frac{\mu}{m'},\;\;\;\overline{\mu}_{r}=\frac{\mu_{r}}{m'},\;\;\;\overline{\epsilon}_{0}=\frac{\epsilon_{0}}{m'}. \end{equation} Now the sum in (\ref{harm}) is a harmonic sum, which is by definition a series of the form \begin{equation}\label{harmsum} f(y)=\sum_{k=1}^{\infty}F(ky). \end{equation} The small $y=\beta m'$ asymptotic of this harmonic sum is completely determined by the poles and the residues of the meromorphic extension of the Mellin transform of $\mathcal{F}$ \cite{Flajolet}.
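The subordination identity above can be verified numerically; the following check (ours, purely illustrative) compares both sides with \texttt{scipy}:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the subordination identity
# e^{-b sqrt(x)} = b/(2 sqrt(pi)) * int_0^inf du u^{-3/2} exp(-b^2/(4u) - u x)
def subordination_rhs(b, x):
    integrand = lambda u: u**-1.5 * np.exp(-b**2 / (4 * u) - u * x)
    val, _ = quad(integrand, 0.0, np.inf)
    return b / (2 * np.sqrt(np.pi)) * val

for b, x in [(0.5, 1.0), (1.0, 2.0), (2.0, 0.3)]:
    assert abs(subordination_rhs(b, x) - np.exp(-b * np.sqrt(x))) < 1e-7
```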
Recall that the Mellin transform of a function $f(y)$ is given by \begin{equation} (\mathcal{M}f(y))(s)= \widetilde{f}(s)=\int_{0}^{\infty}dy\, y^{s-1}f(y). \end{equation} Let us also recall the definition of the Laplace-Mellin transform \cite{JorLang} of $f$ \begin{equation} (\mathcal{LM}f(y))(s,z)=\widetilde{f}(s,z)=\int_{0}^{\infty}dy\, y^{s-1}e^{-zy}f(y). \end{equation} The asymptotic expansion of $f(y)$ is then given according to the following rule \cite{Flajolet}. If the Mellin transform has the singular expansion ($\asymp$ denotes the singular part of the expansion) \begin{equation} \widetilde{f}(s)\asymp\sum_{w,k}\frac{A(w,k)}{(s-w)^{k+1}}, \end{equation} then \begin{equation} f(y)\sim \sum_{w,k}A(w,k)\frac{(-1)^{k}}{k!}y^{-w}(\log y)^{k}. \end{equation} If $f(y)$ is given by a harmonic sum (\ref{harmsum}) then it is easy to see that \begin{equation} \widetilde{f}(s)=\zeta(s)\widetilde{F}(s). \end{equation} Now we can apply this method to $\mathcal{F}(y)$. The Mellin transform is given by \begin{equation} \mathcal{M}(-\mathcal{F}(y)/m')(s)=\zeta(s)\mathcal{M}(e^{y\overline{\mu}_{r}}G(y))(s). \end{equation} But using the definition of the Laplace-Mellin transform we can also write this as \begin{equation} \mathcal{M}(-\mathcal{F}(y)/m')(s)=\zeta(s)\widetilde{G}(s,-\mu). \end{equation} Here \begin{equation} \widetilde{G}(s,-\mu)=\int_{0}^{\infty}dy\, y^{s-1}e^{\overline{\mu}_{r} y}G(y). \end{equation} Now the singularities of this integral arise from the $y\rightarrow 0$ limit. First let us use the heat kernel expansion \begin{equation} Tr\, e^{-u H}=\frac{1}{u^{d/2}}\left(a_{0}+a_{1/2}u^{1/2}+a_{1}u+\ldots\right), \end{equation} where the factors of $(4\pi)^{-d/2}$ are included in the coefficients $a_{n/2}$, cf.\ (\ref{a}) below, to get \begin{eqnarray}\label{int} \int_{0}^{\infty}\frac{du}{u^{3/2}}e^{-\frac{y^{2}}{4u}-u}Tr\,e^{-um'^{-2}H}=\sum_{n=0}^{\infty}2 m'^{d-n}a_{n/2}\left(\frac{y}{2}\right)^{ \frac{n-d-1}{2}}K_{\frac{-n+d+1}{2}}(y). \end{eqnarray} Here the $K$'s are the modified Bessel functions of the second kind.
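Each term of (\ref{int}) rests on the Bessel integral $\int_{0}^{\infty}du\,u^{\nu-1}e^{-y^{2}/4u-u}=2(y/2)^{\nu}K_{\nu}(y)$ together with $K_{-\nu}=K_{\nu}$; a minimal numerical sketch (the values of $y$ and $\nu$ are arbitrary samples, with $\nu$ playing the role of $(n-d-1)/2$):

```python
from mpmath import mp, mpf, exp, quad, besselk, inf

mp.dps = 25
y, nu = mpf('1.3'), mpf(-2)   # e.g. n = 0, d = 3 gives nu = -2
lhs = quad(lambda u: u**(nu - 1) * exp(-y**2/(4*u) - u), [0, inf])
rhs = 2 * (y/2)**nu * besselk(nu, y)
print(lhs - rhs)                         # ~ 0
print(besselk(nu, y) - besselk(-nu, y))  # K_{-nu} = K_{nu}
```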
Using their series expansions we arrive at \begin{equation} \frac{1}{2\sqrt{\pi}}\int_{0}^{\infty}\frac{du}{u^{3/2}}e^{-\frac{y^{2}}{4u}-u}Tr\,e^{-um'^{-2}H}\asymp b_{-d-1}y^{-d-1}+b_{-d}y^{-d}+\ldots, \end{equation} where the $b$ coefficients are \begin{eqnarray}\label{b} b_{-d-1} &=& \frac{(2m')^{d}}{\sqrt{\pi}}\, \Gamma\left( \frac{d+1}{2}\right)\, a_{0},\nonumber \\ b_{-d} &=& \frac{(2m')^{d-1}}{\sqrt{\pi}} \,\, \Gamma\left( \frac{d}{2}\right)\, a_{1/2},\nonumber \\ b_{-d+1} &=& \frac{(2m')^{d-2}}{\sqrt{\pi}}\,\, \Gamma\left( \frac{d-1}{2}\right) \, \left(-m'^{2}a_{0}+ a_{1} \right). \end{eqnarray} Here we listed only the first few coefficients that we will need in the following. Also note that for the Dirichlet problem the heat kernel coefficients for the operator $-\Delta+U$ are \cite{Gilkey, Vas} \begin{eqnarray}\label{a} a_{0} &=& \frac{1}{(4\pi)^{d/2}}\int_{B} dV, \nonumber\\ a_{1/2} &=& -\frac{1}{4(4\pi)^{\frac{d-1}{2}}}\int_{\partial B}dS, \nonumber \\ a_{1} &=& \frac{1}{6(4\pi)^{d/2}}\left[\int_{B} dV(-6U+R)+2\int_{\partial B}dS \,K_{aa}\right],\nonumber\\ a_{3/2}&=&-\frac{1}{4}\frac{1}{(4\pi)^{(d-1)/2}}\frac{1}{96}\int_{\partial B}dS\,[16(-6U+R)+8R_{aNaN}+\nonumber\\ &&+7K_{aa}K_{bb}-10K_{ab}K_{ab}]. \end{eqnarray} Here one uses an orthonormal basis $E_{a},N$ adapted to $\partial B$, where the $E_{a}$'s are tangent to $\partial B$ and $N$ is the inward normal to it. Moreover, $R_{ijkl}$ is the Riemann tensor, $K_{ab}$ is the extrinsic curvature of $\partial B$, and summation over repeated orthonormal basis indices $a,b$ is assumed.
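As a quick sanity check of these conventions (the $(4\pi)$ factors sitting inside the $a_{n/2}$), one can compare with the exactly known Dirichlet spectrum on an interval: for $d=1$, $U=0$ and $B=[0,L]$ the coefficients above give $a_{0}=L/\sqrt{4\pi}$ and $a_{1/2}=-1/2$. A numerical sketch (the sample values of $L$ and $u$ are arbitrary):

```python
import math

# d = 1, U = 0, B = [0, L]: Dirichlet eigenvalues (n pi / L)^2, and the
# coefficients above give a0 = L/sqrt(4 pi), a_{1/2} = -1/2
L_len, u = 1.0, 0.01
trace = sum(math.exp(-u*(n*math.pi/L_len)**2) for n in range(1, 200))
approx = (L_len/math.sqrt(4*math.pi))/math.sqrt(u) - 0.5
print(trace - approx)  # exponentially small remainder
```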
Now plugging (\ref{int}) into (\ref{Gdef}), expanding the exponential pre-factor, and multiplying the series, we get for small $y$ \begin{equation} G(y)\asymp c_{-d-1}y^{-d-1}+c_{-d}y^{-d}+c_{-d+1}y^{-d+1}+\ldots, \end{equation} where the coefficients of interest to us are \begin{eqnarray}\label{cb} c_{-d-1}&=& b_{-d-1},\;\;\;\;c_{-d}=b_{-d}+\overline{\epsilon}_{0}b_{-d-1},\nonumber \\ c_{-d+1} &=& b_{-d+1}+\overline{\epsilon}_{0}b_{-d}+\frac{1}{2}\overline{\epsilon}_{0}^{2}b_{-d-1}. \end{eqnarray} So \begin{eqnarray} \widetilde{G}(s,-\mu)&=&\int_{0}^{\infty}dy\, y^{s-1}e^{\overline{\mu}_{r} y}G(y) \nonumber \\ &\sim & c_{-d-1}\frac{\Gamma(s-d-1)}{(-\overline{\mu}_{r})^{s-d-1}}+c_{-d}\frac{\Gamma(s-d)}{(-\overline{\mu}_{r})^{s-d}}+\nonumber\\&& +c_{-d+1}\frac{\Gamma(s-d+1)}{(-\overline{\mu}_{r})^{s-d+1}}+\ldots. \end{eqnarray} Near its poles $s=n-m$ ($m=0,1,2,\ldots$) the Gamma function behaves as \begin{equation} \Gamma(s-n)\sim \frac{(-1)^{m}}{m!}\frac{1}{s-n+m}, \end{equation} and we also have the simple pole of $\zeta$ at $s=1$, \begin{equation}\label{zeta} \zeta(s)\sim \frac{1}{s-1}+\gamma. \end{equation} Here $\gamma$ is the Euler-Mascheroni constant. Thus the poles of $\zeta(s)\widetilde{G}(s,-\mu)$ are seen to be the integers $d+1,d,d-1,\ldots$. All these poles are simple, except when one of them coincides with the pole of $\zeta$ at $s=1$, where a double pole forms. Thus we have the following residues. Around $s=d+1$: \begin{equation} \zeta(s)\widetilde{G}(s,-\mu)\asymp \zeta(d+1)c_{-d-1}\frac{1}{s-d-1}. \end{equation} Around $s=d$: \begin{equation} \zeta(s)\widetilde{G}(s,-\mu)\asymp \zeta(d)(c_{-d-1}\overline{\mu}_{r}+c_{-d})\frac{1}{s-d}. \end{equation} Around $s=d-1$: \begin{equation} \zeta(s)\widetilde{G}(s,-\mu)\asymp \zeta(d-1)(\frac{1}{2}c_{-d-1}\overline{\mu}_{r}^{2}+c_{-d}\overline{\mu}_{r}+c_{-d+1})\frac{1}{s-d+1}. \end{equation} For $d=3$ we have $d-2=1$, and hence a double pole at $s=1$.
Thus for $d=3$ around $s=1$ we have \begin{eqnarray} \zeta(s)\widetilde{G}(s,-\mu)\asymp \left(\frac{1}{6}c_{-d-1}\overline{\mu}_{r}^{3}+\frac{1}{2}c_{-d}\overline{\mu}_{r}^{2}+c_{-d+1}\overline{\mu}_{r}+c_{-d+2}\right)\frac{1}{(s-d+2)^{2}}\nonumber\\ +\left[FP\widetilde{G}(s=1,-\mu)+\gamma\left(\frac{1}{6}c_{-d-1}\overline{\mu}_{r}^{3}+\frac{1}{2}c_{-d}\overline{\mu}_{r}^{2}+c_{-d+1}\overline{\mu}_{r}+c_{-d+2}\right)\right]\frac{1}{s-d+2}. \end{eqnarray} Here $FP$ means the finite part of the integral. So the small $y$ asymptotic of $\mathcal{F}$ is \begin{eqnarray}\label{f} \mathcal{F} &\sim &- m'\zeta(d+1)c_{-d-1}\left(\frac{T}{m'}\right)^{d+1}-m'\zeta(d)(c_{-d-1}\overline{\mu}_{r}+c_{-d})\left(\frac{T}{m'}\right)^{d}\nonumber\\ &&-m'\zeta(d-1)\left(\frac{1}{2}c_{-d-1}\overline{\mu}_{r}^{2}+c_{-d}\overline{\mu}_{r}+c_{-d+1}\right) \left(\frac{T}{m'}\right)^{d-1}+\ldots. \end{eqnarray} Here using (\ref{cb}) we get \begin{eqnarray}\label{c} &&c_{-d-1} = b_{-d-1}=\frac{(2m')^{d}}{\sqrt{\pi}}\, \Gamma\left( \frac{d+1}{2}\right)\, a_{0}, \nonumber\\ &&c_{-d-1}\overline{\mu}_{r}+c_{-d} = \overline{\mu} b_{-d-1}+b_{-d}=\overline{\mu}\, \frac{(2m')^{d}}{\sqrt{\pi}}\, \Gamma\left( \frac{d+1}{2}\right)\, a_{0}+\frac{(2m')^{d-1}}{\sqrt{\pi}} \,\, \Gamma\left( \frac{d}{2}\right)\, a_{1/2},\nonumber\\ &&\frac{1}{2}c_{-d-1}\overline{\mu}_{r}^{2}+c_{-d}\overline{\mu}_{r}+c_{-d+1}=\frac{1}{2}\overline{\mu}^{2}b_{-d-1}+\overline{\mu} b_{-d}+b_{-d+1}\nonumber\\ &&= \frac{1}{2}\overline{\mu}^{2}\,\frac{(2m')^{d}}{\sqrt{\pi}}\, \Gamma\left( \frac{d+1}{2}\right)\, a_{0}+\overline{\mu}\,\frac{(2m')^{d-1}}{\sqrt{\pi}} \,\, \Gamma\left( \frac{d}{2}\right)\, a_{1/2}\nonumber\\ &&+\frac{(2m')^{d-2}}{\sqrt{\pi}}\,\, \Gamma\left( \frac{d-1}{2}\right) \, \left(-m'^{2}a_{0}+ a_{1} \right).
\end{eqnarray} Specializing to $d=3$ we get \begin{eqnarray}\label{free0} \mathcal{F} &=& -\frac{\zeta(4)}{\sqrt{\pi}} \, 8 \, a_{0}\, T^{4} -\frac{\zeta(3)}{\sqrt{\pi}} \left[ 8\mu \,a_{0}+2\,\sqrt{\pi}\,a_{1/2}\right] \, T^{3} \nonumber\\ &-& \frac{\zeta(2)}{\sqrt{\pi}}\left[ (4\mu^{2}-2m'^{2})a_{0}+2\sqrt{\pi}\mu a_{1/2}+2 a_{1}\right] \, T^{2}\nonumber\\ &+&\frac{1}{2\sqrt{\pi}} \left[ \left(\frac{8}{3} \mu^{3}-4 m'^{2}\mu \right) a_{0}+ 2\sqrt{\pi} (\mu^{2}-m'^{2})a_{1/2} +4\mu a_{1} +2 \sqrt{\pi}\, a_{3/2} \right]\nonumber\\ & \times & T \log (T/m') \nonumber\\ &-& \frac{1}{2\sqrt{\pi}} \left( \gamma \left[ \left(\frac{8}{3} \mu^{3}-4 m'^{2}\mu \right) a_{0}+ 2\sqrt{\pi} (\mu^{2}-m'^{2})a_{1/2} +4\mu a_{1} +2\sqrt{\pi}\, a_{3/2} \right] \right. \nonumber\\ &+& \left. FP(\mathcal{LM}G)(1,|\mu|) \right) T+\ldots \end{eqnarray} In the case of charged bosons we have \begin{eqnarray}\label{cbosef} \mathcal{F}_{charged}&=&\mathcal{F}(\mu)+\mathcal{F}(-\mu) = -\frac{\zeta(4)}{\sqrt{\pi}} \, 16 \, a_{0}\, T^{4} - 4\,\zeta(3)\, a_{1/2} \, T^{3} \nonumber\\ &&- \frac{\zeta(2)}{\sqrt{\pi}}4\left[ (2\mu^{2}-m'^{2})a_{0}+ a_{1}\right] \, T^{2}\nonumber\\ &&+\left[ 2 (\mu^{2} -m'^{2}) a_{1/2}+2\, a_{3/2}\right] \times T \log (T/ m')\nonumber\\ &&- \left( \gamma \left[ 2(\mu^{2}-m'^{2}) a_{1/2}+2 \, a_{3/2} \right] +\frac{1}{\sqrt{\pi}}FP(\mathcal{LM}G)(1,|\mu|) \right) T\nonumber\\ &&+\ldots \end{eqnarray} Finally, for a Fermi gas we have \begin{equation} \mathcal{F}_{fermion}= -\frac{2}{\beta}\sum_{\sigma}\left[\log(1+e^{-\beta(\epsilon_{\sigma}-\mu)})+\log(1+e^{-\beta(\epsilon_{\sigma}+\mu)})\right]. \end{equation} Again expanding the logarithms we get \begin{eqnarray} \mathcal{F}_{fermion} &=& -4\sum_{\sigma}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k\beta}\cosh (k\beta\mu) e^{-k\beta\epsilon_{\sigma}}. \end{eqnarray} The rest of the analysis parallels the bosonic case.
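The alternating expansion of the fermionic free energy is elementary; for a single mode it can be checked numerically (a sketch with arbitrary sample values of $\beta$, $\mu$ and $\epsilon_{\sigma}$):

```python
import math

beta, mu, eps = 0.7, 0.3, 1.1   # sample values for one mode epsilon_sigma
# closed form: -(2/beta) [ log(1+e^{-beta(eps-mu)}) + log(1+e^{-beta(eps+mu)}) ]
lhs = -(2/beta)*(math.log(1 + math.exp(-beta*(eps - mu)))
                 + math.log(1 + math.exp(-beta*(eps + mu))))
# alternating expansion: -4 sum_k (-1)^{k+1}/(k beta) cosh(k beta mu) e^{-k beta eps}
rhs = -4*sum((-1)**(k + 1)/(k*beta) * math.cosh(k*beta*mu)
             * math.exp(-k*beta*eps) for k in range(1, 200))
print(abs(lhs - rhs))  # negligibly small
```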
Because of the alternating nature of the series expansion the Mellin transform now generates the Dirichlet $\eta$ function instead of the Riemann $\zeta$: \begin{eqnarray}\label{fermif} \mathcal{F}_{fermion}&=& -2\frac{\eta(4)}{\sqrt{\pi}} \, 16 \, a_{0}\, T^{4} -2\cdot 4\,\eta(3)\, a_{1/2} \, T^{3} \nonumber\\ &&-2 \frac{\eta(2)}{\sqrt{\pi}}4\left[ (2\mu^{2}-m'^{2})a_{0}+ a_{1}\right] \, T^{2} \nonumber\\ &&-2\eta(1)\left[ 2 (\mu^{2} -m'^{2}) a_{1/2}+2\, a_{3/2}\right] \, T +\ldots \end{eqnarray} Here \begin{equation} \eta(s)=\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^{s}}. \end{equation} Unlike the bosonic case we do not have a logarithmic term in this expansion, since the $\eta$ function is entire (in particular $\eta(1)=\log 2$) and consequently the poles of the Mellin transform are all simple. If we now go back to (\ref{mp}), the very first place we introduced the parameter $m'$, we see that the resulting expansion should be independent of $m'$. In the next section we will show, by direct calculation, that this is indeed the case; for example we will see that in the term $ (2\mu^{2}-m'^{2})a_{0}+ a_{1}$, the $m'^2$ terms (one explicit in $-m'^{2}a_{0}$ and one hidden in $a_{1}$) do cancel each other. The only place where the cancelation is not obvious is the logarithmic term $\log (T/m')$ in the bosonic case. However, on general grounds we expect that this logarithmic term will be canceled by a similar term coming from $FP(\mathcal{LM}G)(1,|\mu|)T$ appearing in the next order of the expansion. A more detailed analysis of this regularised finite part integral will be given elsewhere. \section{Boundary Effects in Black Hole Backgrounds} We will consider Schwarzschild, Reissner-Nordstr\"{o}m (RN) and dilaton black holes \cite{GibMa, Garfinkle} in $1+3$ dimensions and calculate the free energy and the entropy for a neutral Bose field. For charged bosons and charged fermions the results can be written down at once with the minor modifications explained at the end of Sec.\ 2.
We will focus on the case of vanishing chemical potential. So setting $\mu=0$ and using the explicit forms of the heat kernel coefficients (\ref{a}) in (\ref{free0}) we get \begin{eqnarray}\label{free} && \mathcal{F} = - \zeta(4)\frac{1}{\pi^{2}}\,V T^{4}+\zeta(3)\left(\frac{1}{8\pi}\,A\right)T^{3}\nonumber\\ &&-\frac{\zeta(2)}{4\pi^{2}}\left[\int_{B}dV\,(-F)\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]+\int_{\partial B}dS\, \frac{K_{aa}}{3} \right] T^{2}\nonumber\\ && -\frac{1}{16\pi}\int_{\partial B}dS\,\left\{-F\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]+\right.\nonumber\\ &&\left.+\frac{1}{96}[8R_{NaNa}+7K_{aa}K_{bb}-10K_{ab}K_{ab}]\right\}T\log\frac{T}{m'}\nonumber\\ &&+\ldots, \end{eqnarray} and the entropy is given as \begin{eqnarray}\label{sexp} && S =- \frac{\partial \mathcal{F} }{\partial T}= 4\zeta(4) \frac{1}{\pi^{2}}\,V T^{3}-3\zeta(3) \left(\frac{1}{8\pi}\,A\right)T^{2}\nonumber\\ &&+ \frac{\zeta(2)}{2\pi^{2}}\left[\int_{B}dV\,(-F)\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]+\int_{\partial B}dS\, \frac{K_{aa}}{3} \right]T\nonumber\\ && +\frac{1}{16\pi}\int_{\partial B}dS\,\left\{-F\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]+\right.\nonumber\\ &&\left.+\frac{1}{96}[8R_{NaNa}+7K_{aa}K_{bb}-10K_{ab}K_{ab}]\right\}\log\frac{T}{m'}\nonumber\\ &&+\ldots. \end{eqnarray} As promised, we see that the $m'$ terms disappear, except in the logarithmic term. In all geometries we consider here the metric has the general form \begin{equation} ds^{2}=-F(r)dt^{2}+F^{-1}(r)dr^{2}+r(r-a)[d\theta^{2}+\sin^2\theta\, d\phi^{2}], \end{equation} where $a=0$ for Schwarzschild and RN space-times. We will take $B$ as the ``spherical'' shell defined by $r_{1}\leq r\leq r_{2}$ with $r_{1}=r_{H}+\epsilon$, where $r_{H}$ is the radial coordinate of the horizon and $\epsilon$ is the brick wall cutoff. The optical metric is \begin{equation} d\overline{s}^{2}=F^{-2}(r)dr^{2}+r(r-a)F^{-1}(r)[d\theta^{2}+\sin^2\theta\, d\phi^{2}].
\end{equation} The quantities needed for the calculation of the heat kernel coefficients (\ref{a}) are then as follows. The volume element is given as \begin{equation}\label{volume} dV=F^{-2}r(r-a)\sin\theta dr\wedge d\theta\wedge d\phi. \end{equation} The area element of a constant $r$ surface is given as \begin{equation}\label{area} dS_{r}= F^{-1}r(r-a)\sin\theta d\theta\wedge d\phi. \end{equation} The inward normal to the inner wall is given by the value at $r=r_{1}$ of the vector field \begin{equation} N = F\partial_{r}, \end{equation} and the trace of the corresponding extrinsic curvature can be calculated as \begin{equation}\label{extrinsic} K_{aa} =-\nabla\cdot N=F'(r)-\frac{(2r-a)F(r)}{r(r-a)}. \end{equation} Consider the orthonormal basis \begin{eqnarray}\label{normal} N_{(r)} = F\partial_{r},\;\;E_{1} = \frac{\sqrt{F}}{\sqrt{r(r-a)}}\partial_{\theta},\;\;E_{2} = \frac{\sqrt{F}}{\sqrt{r(r-a)}\sin\theta}\partial_{\phi}. \end{eqnarray} This basis is adapted to the $r=const.$ surfaces, that is, $N$ is normal to such a surface and the $E_{a}$'s are tangent to it. Moreover, on the inner wall $r=r_{1}$ of $B$, $N$ is the inward normal of $\partial B$. The extrinsic curvature of the inner wall at $r=r_{1}$ is given by \begin{eqnarray}\label{extrinsicmat} K_{ab}&=& N\cdot(\nabla_{E_{a}}E_{b})\nonumber\\ &=& N_{\nu}E_{a}^{\mu}\partial_{\mu}E_{b}^{\nu}+\Gamma_{\mu\sigma}^{\nu}E_{a}^{\mu}E_{b}^{\sigma}N_{\nu}\nonumber\\ &=&\Gamma_{\mu\sigma}^{\nu}E_{a}^{\mu}E_{b}^{\sigma}N_{\nu}. \end{eqnarray} More explicitly, using $N_{\nu}=F^{-1}\delta_{\nu}^{r}$, \begin{eqnarray} K_{11}=\Gamma^{r}_{\theta\theta}\frac{1}{r(r-a)},\;\;\; K_{22}=\Gamma^{r}_{\phi\phi}\frac{1}{r(r-a)\sin^{2}\theta},\;\;\; K_{12}=K_{21}= \Gamma^{r}_{\theta\phi}\frac{1}{r(r-a)\sin\theta}. \end{eqnarray} Similarly, \begin{eqnarray} R_{aNaN} &=& \frac{F^{3}}{r(r-a)} R_{\theta r \theta r}+ \frac{F^{3}}{r(r-a)\sin^{2}\theta} R_{\phi r \phi r}.
\end{eqnarray} \subsection{Schwarzschild Black Hole} We will now specialize to $1+3$ Schwarzschild geometry for which \begin{equation} F=1-\frac{2M}{r},\;\;\;\;\mathcal{R}=0, \;\;\;\; \end{equation} and the extrinsic curvature is \begin{equation} K_{11}=K_{22}=\frac{3M-r}{r^{2}},\;\;\;K_{12}=0, \end{equation} which implies \begin{equation} K_{aa}=2\frac{3M-r}{r^{2}},\;\;\;\;K_{ab}K_{ab}=2\frac{(3M-r)^{2}}{r^{4}}. \end{equation} And finally, \begin{equation} R_{aNaN}=\frac{2M(3M-2r)}{r^{4}}. \end{equation} Now using (\ref{volume}) we get the volume of $B$ as \begin{equation} V=4\pi\left[\frac{r^{3}}{3}+2Mr^{2}+12M^{2}r+32M^{3}\log(r-2M)-\frac{16M^{4}}{r-2M}\right]_{r_{1}}^{r_{2}}. \end{equation} Therefore \begin{equation} V \asymp 64\pi M^{3}\left(\frac{M}{\epsilon}-2\log (\epsilon)\right). \end{equation} Similarly using (\ref{area}) the area of $\partial B$ is given as \begin{eqnarray} A &=& 4\pi\left(\frac{r_{1}^{3}}{r_{1}-2M}+\frac{r_{2}^{3}}{r_{2}-2M}\right). \end{eqnarray} So $A$ diverges as \begin{equation} A\asymp \frac{32\pi M^{3}}{\epsilon}. \end{equation} At $O(T)$ the bulk contribution is \begin{eqnarray} &&\int_{B}dV\,(-F) m^{2} =-4\pi m^{2}\int_{r_{1}}^{r_{2}}dr\frac{r^{2}}{F}\nonumber \\ &&= -4\pi m^{2}\left[(4M^{2}r+Mr^{2}+\frac{r^{3}}{3}+8M^{3}\log(r-2M))\right]_{r_{1}}^{r_{2}}\nonumber\\ & &\asymp 32\pi M^{3}m^{2}\log \epsilon. 
\end{eqnarray} At the same order the boundary contribution to the horizon divergence comes only from the integral over $r=r_{1}$ and is seen to be \begin{eqnarray} \int_{r=r_{1}} dS_{r_{1}}\,\frac{1}{3}\,K_{aa} &=& \frac{8\pi}{3} \left[\frac{r_{1}(3M-r_{1})}{r_{1}-2M}\right]\nonumber\\ &\asymp &\frac{16\pi M^{2}}{3}\frac{1}{\epsilon}, \end{eqnarray} and \begin{eqnarray} && \frac{1}{16\pi}\int_{\partial B}dS\,\left\{-F\,m^{2}+\frac{1}{96}[8R_{NaNa}+7K_{aa}K_{bb}-10K_{ab}K_{ab}]\right\} \nonumber\\ && = \frac{1}{48}\left[\frac{2M\,(3M-2r)+(3M-r)^{2}}{r\,(r-2M)} \right] \nonumber\\ && \asymp -\frac{M}{96}\, \frac{1}{\epsilon}. \end{eqnarray} If we compare various bulk and boundary terms appearing in the coefficients of the expansion (\ref{sexp}) we get \begin{equation} \frac{A}{V}=O(M^{-1}), \end{equation} \begin{equation} \frac{\int_{\partial B} dS\ \frac{1}{3}\ K_{aa}}{V}=O(M^{-2}), \end{equation} \begin{equation} \frac{\int_{B} dV\, F\, m^{2}}{V}\rightarrow 0. \end{equation} Thus for generic values of $T$ we see that the boundary contributions are suppressed against the bulk contributions by inverse powers of $M$. However, we must evaluate the entropy at the Hawking temperature $T_{H}$, which is $O(M^{-1})$. In this case we have, for example, \begin{equation} \frac{AT^{2}_{H}}{VT^{3}_{H}}=O(M^{0}), \end{equation} and the boundary contributions become as important as the bulk terms. At this point we evaluate the entropy (\ref{sexp}) at the Hawking temperature \begin{equation} T_{H}=\frac{1}{8\pi M}=\frac{\kappa}{2\pi}, \end{equation} where $\kappa$ is the surface gravity. We also trade the cutoff $\epsilon$ for the proper length cutoff $\delta$ via \begin{equation} \epsilon=T_{H}\pi\delta^2=\frac{\delta^2}{8 M}.
\end{equation} The result is \begin{eqnarray} S &=& \frac{A_{H}}{4\pi^{3}}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)-\frac{\pi^{2}}{48}\, \log \frac{\kappa}{2\pi m'}\right]\frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[ -\frac{\zeta(4)}{\pi^{2}}+\frac{\zeta(2)}{8\pi}m^{2}\,A_{H}\right] \log\frac{\delta^{2}}{8ML}. \end{eqnarray} Here $L$ is an infrared cutoff which may be taken as the radial coordinate of the outer wall of $B$. At this point we can use the $m'$ independence of our expansion to set $m'=T_{H}$ and thus get rid of the logarithmic term. However, we prefer to leave the above expression as it is in order to point out the universal features of the $O(\delta^{-2})$ divergence. The terms proportional to $\zeta(4)$ originate from the leading $O(T^{3})$ term of the high temperature expansion, which is proportional to the optical volume, and are in complete agreement with the results of \cite{Solodukhin} and \cite{Alwis}. Moreover, the dependence of $S$ on the cutoff $\delta$ is of the right form for the cancelation against the divergences in the reciprocal of the bare gravitational constant \cite{Davies, Susskind}. \subsection{Reissner-Nordstr\"{o}m Black Hole} For the RN background we have \begin{equation} F(r)=\left(1-\frac{r_{-}}{r} \right) \left( 1- \frac{r_{+}}{r}\right),\;\;\;\;\mathcal{R}=0, \end{equation} \begin{equation} K_{11}=K_{22}=\frac{3r(r_{+}+r_{-})-2r^{2}-4r_{+}r_{-}}{2r^{3}},\;\;\;K_{12}=0, \end{equation} which implies \begin{equation} K_{aa}=\frac{-2r^{2}+3(r_{+}+r_{-})r-4r_{+}r_{-}}{r^{3}},\;\;\;\;K_{ab}K_{ab}=2\left[ \frac{3r(r_{+}+r_{-})-2r^{2}-4r_{+}r_{-}}{2r^{3}}\right] ^{2}, \end{equation} and \begin{equation} R_{aNaN}=\frac{8r_{+}^{2}r_{-}^{2}-12r_{+}r_{-}(r_{+}+r_{-})r+3(r_{+}^{2}+6r_{+}r_{-}+r_{-}^{2})r^{2}-4(r_{+}+r_{-})r^{3}}{2r^{6}}.
\end{equation} Thus using (\ref{volume}) we get the volume of the spherical shell as \begin{eqnarray} V &=& 4\pi \left[ \frac{r_{-}^{6}}{(r_{-}-r_{+})^{2} (r_{-}-r)} + \frac{r_{+}^{6}}{(r_{-}-r_{+})^{2} (r_{+}-r)} + \left( 3r_{-}^{2}+4r_{-} r_{+} +3r_{+}^{2}\right) r \right. \nonumber\\ &+& \left( r_{-}+r_{+}\right) r^{2} + \frac{r^{3}}{3} + \frac{2r_{-}^{5}(2r_{-}-3r_{+}) \log(r-r_{-})}{(r_{-}-r_{+})^{3}}\nonumber\\ &+& \left. \frac{2(3r_{-}-2r_{+}) r_{+}^{5} \log(r-r_{+})}{(r_{-}-r_{+})^{3}}\right]_{r_{1}}^{r_{2}}. \end{eqnarray} Thus \begin{equation} V \asymp 4\pi\frac{r_{+}^{5}}{(r_{-}-r_{+})^{2}} \left[ \frac{r_{+}}{\epsilon}+ \frac{3r_{-}-2r_{+}}{r_{+}-r_{-}} 2\log\epsilon \right]. \end{equation} Similarly using (\ref{area}) we get \begin{equation} A \asymp \frac{4\pi r_{+}^{4}}{r_{+}-r_{-}}\frac{1}{\epsilon}. \end{equation} On the other hand \begin{eqnarray} \int_{B}dV\,(-F)\,m^{2} &=& -4\pi m^{2} \int_{r_{1}}^{r_{2}} dr \frac{r^{2}}{F} \nonumber\\ &=& -4\pi m^{2}\left [ (r_{+}^{2}+r_{-}^{2} +r_{+} r_{-}) r+ \frac{1}{2} (r_{+}+r_{-})r^{2} +\frac{r^{3}}{3} \right.\nonumber\\ &+& \left. \frac{r_{-}^{4}}{r_{+}-r_{-}} \log (r-r_{-})+\frac{r_{+}^{4}}{r_{+}-r_{-}} \log(r-r_{+}) \right] _{r_{1}}^{r_{2}} \nonumber\\ &\asymp & \frac{4\pi\,m^{2} r_{+}^{4}}{r_{+}-r_{-}} \log\epsilon. \end{eqnarray} Finally, using (\ref{extrinsic}), the surface integral of the extrinsic curvature over the inner wall of $B$ is \begin{equation} \int_{r=r_{1}}dS_{r}\, K_{aa} = -4\pi \frac{2r_{1}^{3}-3r_{1}^{2}(r_{+}+r_{-})+4r_{1}r_{+}r_{-}}{(r_{1}-r_{-})(r_{1}-r_{+})}. \end{equation} Thus \begin{equation} \int_{\partial B} dS\ \frac{1}{3}\ K_{aa} \asymp \frac{4\pi}{3} \frac{r_{+}^{2}}{\epsilon}.
\end{equation} and \begin{eqnarray} && \frac{1}{16\pi}\int_{\partial B}dS\,\left\{-F\,m^{2}+ \frac{1}{96}[8R_{NaNa}+7K_{aa}K_{bb}-10K_{ab}K_{ab}]\right\} \nonumber\\ && \asymp -\frac{(r_{+}-r_{-})}{192} \, \frac{1}{\epsilon}. \end{eqnarray} Now we evaluate $S$ at the Hawking temperature \begin{equation} T_{H}=\frac{r_{+}-r_{-}}{4\pi r_{+}^{2}}=\frac{\kappa}{2\pi}, \end{equation} where $\kappa$ is the surface gravity. With the horizon area \begin{equation} A_{H}=4\pi \, r_{+}^{2} \end{equation} and the cutoff \begin{equation} \epsilon=T_{H}\pi \delta^{2}=\frac{(r_{+}-r_{-})}{4r_{+}^{2}}\delta^{2}, \end{equation} we get \begin{eqnarray} S &=& \frac{A_{H}}{4\pi^{3}}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)-\frac{\pi^{2}}{48}\, \log \frac{\kappa}{2\pi m'}\right] \frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[-\frac{\zeta(4)}{\pi^{2}} \, \frac{2r_{+}-3r_{-}}{2r_{+}}+\frac{\zeta(2)}{8\pi}m^{2}\, A_{H} \right] \log\left(\frac{\delta^{2}}{L^{2}} \right). \end{eqnarray} For the extreme Reissner-Nordstr\"{o}m black hole, i.e.\ in the limit $r_{-} \rightarrow r_{+}$, we have \begin{eqnarray} S &=& \frac{A_{H}}{4\pi^{3}}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)-\frac{\pi^{2}}{48}\, \log \frac{\kappa}{2\pi m'}\right] \frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[\frac{\zeta(4)}{\pi^{2}} \, \frac{1}{2}+\frac{\zeta(2)}{8\pi}m^{2}\, A_{H}\right] \log\left(\frac{\delta^{2}}{L^{2}} \right)+\ldots. \end{eqnarray} Note that the coefficient of the $\delta^{-2}$ term is the same as in the Schwarzschild case.
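The rather long antiderivative behind the shell volume above is easy to get wrong; it can be verified symbolically, e.g.\ with sympy (a sketch; the names \texttt{r\_m}, \texttt{r\_p} stand for $r_{-}$, $r_{+}$):

```python
import sympy as sp

r, rm, rp = sp.symbols('r r_m r_p', positive=True)
# bracket of the volume formula: V = 4*pi*[ B(r) ] evaluated between r_1 and r_2
B = (rm**6/((rm - rp)**2*(rm - r)) + rp**6/((rm - rp)**2*(rp - r))
     + (3*rm**2 + 4*rm*rp + 3*rp**2)*r + (rm + rp)*r**2 + r**3/sp.Integer(3)
     + 2*rm**5*(2*rm - 3*rp)*sp.log(r - rm)/(rm - rp)**3
     + 2*(3*rm - 2*rp)*rp**5*sp.log(r - rp)/(rm - rp)**3)
# integrand of V/(4 pi): r^2/F^2 with F = (1 - r_m/r)(1 - r_p/r)
integrand = r**6/((r - rm)**2*(r - rp)**2)
print(sp.cancel(sp.diff(B, r) - integrand))  # 0
```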
\subsection{Dilaton Black Hole} For the dilaton metric \begin{equation} F(r)=\left( 1-\frac{2M}{r}\right),\;\;\;\;\mathcal{R}=\frac{a^{2}(r-2M)}{2(r-a)^{2} r^{3}}, \end{equation} \begin{equation} K_{11}=K_{22}=\frac{(-4 a M + a r + 6 M r - 2 r^2)}{2 r^2 (r - a)},\;\;\;K_{12}=0, \end{equation} which implies \begin{equation} K_{aa}=\frac{-2r^{2}+(6M+a)r-4aM}{r^{2}(r-a)},\;\;\;\;K_{ab}K_{ab}=2\left[ \frac{(-4 a M + a r + 6 M r - 2 r^2)}{2 r^2 (r - a)}\right] ^{2}. \end{equation} \begin{equation} R_{aNaN}=\frac{4 M (3 M - 2 r) r^2 + 8 a M r (-3 M + 2 r) + a^2 (16 M^2 - 12 M r + r^2)}{2 (a - r)^2 r^4}. \end{equation} and consequently we have \begin{equation} V \asymp 4\pi \left( 8M^{3}(2M-a)\frac{1}{\epsilon} + 4M^{2}(3a-8M) \log \epsilon \right) \end{equation} \begin{equation} A \asymp 4\pi \left( 4 M^{2}(2M-a)\frac{1}{\epsilon} \right) \end{equation} \begin{equation} \int_B (-F)\left[ m^{2}+\left( \xi-\frac{1}{6}\right) R \right] \asymp \frac{8\pi}{3} m^{2} \,M^{2} (2M-a)\log\epsilon \end{equation} Integrating over the inner wall of $B$ gives \begin{equation} \int_{\partial B} dS \frac{1}{3}K_{aa} \asymp \frac{4\pi}{3} \left( 2M (2M-a) \frac{1}{\epsilon} \right) \end{equation} and \begin{eqnarray} && \frac{1}{16\pi}\int_{\partial B}dS\,\left\{-F\left[m^{2}+\left(\xi-\frac{1}{6}\right)R\right]+\right.\nonumber\\ &&\left.+\frac{1}{96}[8R_{NaNa}+7K_{aa}K_{bb}-10K_{ab}K_{ab}]\right\} \nonumber\\ && \asymp -\frac{(a-2M)}{192} \,\frac{1}{\epsilon} \end{eqnarray} After setting $A_{H}=4\pi\, 2M(2M-a)$, $T_{H}=1/(8\pi M)=\kappa / 2\pi$ for the entropy we have; \begin{eqnarray} S &=& \frac{A_{H}}{(4\pi^{3})}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)-\frac{\pi^{2}}{48}\, \log \frac{\kappa}{2\pi m}\right]\frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[ -\frac{\zeta(4)}{\pi^{2}}\frac{(8M-3a)}{8M}+\frac{\zeta(2)}{8\pi}m^{2}\,A_{H}\right] \log\frac{\delta^{2}}{8ML}. 
\end{eqnarray} Note that the coefficient of the $\delta^{-2}$ term is the same as in the Schwarzschild and RN cases. \section{Horizon Divergences} Now summarizing our results we have \begin{equation} S=BA_{H}\frac{1}{\delta^{2}}+C\log\delta^{2}. \end{equation} In all cases considered the $B$ coefficient has the same value \begin{equation} B=\frac{1}{4\pi^{3}}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)\right]. \end{equation} Here, as explained at the end of Sec.\ 2, we dropped the logarithmic term by choosing $m'=\kappa/2\pi$. On the other hand the $C$ coefficients are all different from each other. However, we will now show that if the $C$ coefficients are expressed in terms of the surface gravity $\kappa$ and the horizon area $A_{H}$ they all represent the same function $C(\kappa,A_{H})$. Let us start with the RN space-time, for which \begin{equation} C_{RN}=\frac{1}{\pi^{2}}\left[-\frac{\zeta(4)}{\pi^{2}} \, \frac{2r_{+}-3r_{-}}{2r_{+}}+\frac{\zeta(2)}{8\pi}m^{2}\, A_{H} \right]. \end{equation} Now for RN we have $\kappa=(r_{+}-r_{-})/2r_{+}^{2}$, $A_{H}=4\pi r_{+}^{2}$. So we have $r_{+}-r_{-}=2\kappa r_{+}^{2}=\kappa A_{H}/2\pi$ and $r_{+}=\sqrt{A_{H}}/2\sqrt{\pi}$. Using these we can express $C_{RN}$ in terms of $\kappa$ and $A_{H}$ as \begin{equation} C_{RN}=\frac{\zeta(4)}{\pi^{4}}\left[\frac{1}{2}-\frac{3}{2\sqrt{\pi}}\kappa \sqrt{A_{H}}\right]+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}. \end{equation} Now let us define the function $C(\kappa,A_{H})$ by the expression on the right hand side of this equation. For the Schwarzschild metric we have $\kappa=1/4M$ and $A_{H}=16\pi M^{2}$. So we see that \begin{equation} C\left(\frac{1}{4M},16\pi M^{2}\right)=-\frac{\zeta(4)}{\pi^{4}}+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}=C_{Schwarzschild}. \end{equation} For extreme RN we have $\kappa=0$ and $A_{H}=4\pi r_{+}^{2}$. Thus we get \begin{equation} C\left(0,4\pi r_{+}^{2}\right)=\frac{\zeta(4)}{2\pi^{4}}+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}=C_{extreme RN}.
\end{equation} For the dilaton background we have a slight complication due to the dilaton charge $a$. Now we have $\kappa=1/4M$, $A_{H}=4\pi r_{+}(r_{+}-a)$ and using these we can write \begin{eqnarray} C_{dilaton}&=&\frac{\zeta(4)}{\pi^{4}}\left[-\frac{8M-3a}{8M}\right]+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}\nonumber\\ &=&\frac{\zeta(4)}{\pi^{4}}\left[\frac{1}{2}-\frac{3}{2\sqrt{\pi}}\kappa \sqrt{A_{H}}\,\frac{1-a\kappa}{\sqrt{1-2a\kappa}}\right]+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}. \end{eqnarray} Thus we arrive at the generalization \begin{equation} C(\kappa,A_{H},a)=\frac{\zeta(4)}{\pi^{4}}\left[\frac{1}{2}-\frac{3}{2\sqrt{\pi}}\kappa \sqrt{A_{H}}\,\frac{1-a\kappa}{\sqrt{1-2a\kappa}}\right]+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}, \end{equation} which reduces to $C(\kappa,A_{H})$ when $a=0$. \section{Near Horizon Approximation} The near horizon approximation to the metric \begin{equation} ds^{2}=-F(r)dt^{2}+F^{-1}(r)dr^{2}+r(r-a)(d\theta^{2}+\sin^{2}\theta d\phi^{2}) \end{equation} is given as \begin{equation} ds_{NH}^{2}=-2\kappa\rho' dt^{2}+\left(2\kappa \rho'\right)^{-1} d\rho'^{2}+r_{+}(r_{+}-a)(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \end{equation} Here $r_{+}$ is the radial coordinate of the event horizon and $\kappa=F'(r_{+})/2$ is the surface gravity thereof. The brick wall is situated at $\rho'=\epsilon$. The proper radial distance of the brick wall to the horizon is given by \begin{equation} \delta=\int_{0}^{\epsilon}\frac{d\rho'}{\sqrt{{2\kappa \rho'}}}=\sqrt{\frac{2\epsilon}{\kappa}}. \end{equation} This is clearly the proper length cut-off we have been using throughout the paper. The scalar curvature of the near horizon metric is \begin{equation} \mathcal{R}=\frac{2}{r_{+}(r_{+}-a)}=\frac{8\pi}{A_{H}}. \end{equation} Here $A_{H}=4\pi r_{+}(r_{+}-a)$ is the area of the event horizon.
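The proper-distance integral above, and its consistency with the cutoff trade-off $\epsilon=T_{H}\pi\delta^{2}=\kappa\delta^{2}/2$ used in the earlier sections, can be reproduced symbolically (a sketch; \texttt{rhop} stands for $\rho'$):

```python
import sympy as sp

kappa, eps = sp.symbols('kappa epsilon', positive=True)
rhop = sp.symbols('rhop', positive=True)
# delta = int_0^eps d rho' / sqrt(2 kappa rho')
delta = sp.integrate(1/sp.sqrt(2*kappa*rhop), (rhop, 0, eps))
print(sp.simplify(delta - sp.sqrt(2*eps/kappa)))  # 0
# inverting: eps = kappa*delta**2/2 = (kappa/(2*pi)) * pi * delta**2 = T_H*pi*delta**2
```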
The corresponding optical metric is \begin{equation} d\overline{s}_{NH}^{2}=\frac{1}{4\kappa^{2}}\left[\frac{d\rho'^{2}}{\rho'^{2}}+\frac{2\kappa r_{+}(r_{+}-a)}{\rho'}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\right]. \end{equation} Upon the change of variable \begin{equation} \rho=\sqrt{\frac{2\pi\rho'}{\kappa A_{H}}} \end{equation} we get \begin{equation} d\overline{s}_{NH}^{2}=\frac{1}{\kappa^{2}}d\tilde{s}^{2}, \end{equation} where \begin{equation}\label{metilde} d\tilde{s}^{2}=\frac{d\rho^{2}}{\rho^{2}}+\frac{1}{4\rho^{2}}(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \end{equation} Note that in these coordinates the brick-wall is situated at \begin{equation}\label{pos} \rho(\epsilon)=\sqrt{\frac{2\pi\epsilon}{\kappa A_{H}}}=\sqrt{\frac{\pi}{A_{H}}}\delta. \end{equation} Also note that the scalar curvature of the optical metric $d\overline{s}_{NH}^{2}$ is \begin{equation} R=\kappa^{2}\widetilde{R}, \end{equation} where $\widetilde{R}$ is the scalar curvature of $d\tilde{s}^{2}$. From now on we will denote geometric quantities derived from the metric $d\widetilde{s}^{2}$ with a tilde. Similarly, the Laplacian of the optical metric is related to the Laplacian of $d\tilde{s}^{2}$ as \begin{equation} \Delta=\kappa^{2}\widetilde{\Delta}. \end{equation} Likewise \begin{eqnarray} U_{1}&=&\frac{1}{6}R+F\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]\nonumber\\ &=& \kappa^{2}\widetilde{U}_{1}, \end{eqnarray} where \begin{equation}\label{pot1} \widetilde{U}_{1}=\frac{1}{6}\widetilde{R}+\frac{F}{\kappa^{2}}\left[m^{2}+\left(\xi-\frac{1}{6}\right)\frac{8\pi}{A_{H}}\right]. \end{equation} But since \begin{equation} \frac{ F}{\kappa^{2}}=\frac{2\kappa\rho'}{\kappa^{2}}=\frac{A_{H}}{\pi}\rho^{2} \end{equation} is independent of $\kappa$, we see that $H_{1}$ scales like $\kappa^{2}$, \begin{equation} H_{1}=\kappa^{2}\widetilde{H}_{1}, \end{equation} where \begin{equation} \widetilde{H}_{1}=-\widetilde{\Delta}+\widetilde{U}_{1}.
\end{equation} Finally, for some dimensionless parameter $r$ (which will play the role of the dimensionful $m'$ of Sec.\ 2) we define \begin{equation}\label{pot2} \widetilde{U}=\widetilde{U}_{1}-r^{2}, \end{equation} and \begin{equation} \widetilde{H}=-\widetilde{\Delta}+\widetilde{U}. \end{equation} Thus \begin{equation} \widetilde{H}_{1}=\widetilde{H}+r^{2}, \end{equation} and \begin{equation} H_{1}=\kappa^{2}\widetilde{H}+r^{2} \kappa^{2}. \end{equation} So we get \begin{eqnarray} \sum_{\sigma}e^{-k\beta\epsilon_{\sigma}} &=&\sum_{\sigma}e^{-(k\beta r)(r^{-1}\epsilon_{\sigma})}\nonumber\\ &=& \frac{k\beta r}{2\sqrt{\pi}}\int_{0}^{\infty}\frac{dv}{v^{3/2}}\,e^{-\frac{(k\beta r)^{2}}{4v}}e^{-v\kappa^{2} }Tr\, e^{-v\kappa^{2}r^{-2}\widetilde{H}}. \end{eqnarray} Upon the change of variable $u=v\kappa^{2}$ we get \begin{equation} \sum_{\sigma}e^{-k\beta \epsilon_{\sigma}}=\frac{k\beta r\kappa }{2\sqrt{\pi}}\int_{0}^{\infty}\frac{du}{u^{3/2}}\,e^{-\frac{(k\beta r\kappa )^{2}}{4u}}e^{-u}Tr\, e^{-ur^{-2}\widetilde{H}}. \end{equation} So the free energy is \begin{eqnarray} \mathcal{F} &=& -\sum_{k=1}^{\infty}\frac{1}{k\beta }\sum_{\sigma}e^{-k\beta \epsilon_{\sigma}}\nonumber\\ &=& -\frac{\kappa r}{2\sqrt{\pi}}\sum_{k=1}^{\infty}\int_{0}^{\infty}\frac{du}{u^{3/2}}\,e^{-\frac{(k\beta r\kappa)^{2}}{4u}}e^{-u}Tr\, e^{-ur^{-2}\widetilde{H}}. \end{eqnarray} Comparing this with (\ref{F}) we see that it can be obtained from the latter by performing the substitutions \begin{equation} \beta\rightarrow \beta \kappa,\;\;\; m'\rightarrow r \end{equation} and by multiplying the result by $\kappa$. The high temperature expansion is obtained by performing the same substitutions and multiplication in (\ref{free0}). Note that in the resulting expression $T$ appears only through the combination $T/\kappa$.
Finally for the high temperature expansion of the entropy we get \begin{eqnarray} && S=-\frac{\partial (\mathcal{F}/\kappa)}{\partial(T/\kappa)}=4\zeta(4) \frac{1}{\pi^{2}}\,\widetilde{V} (T\kappa^{-1})^{3}-3\zeta(3) \left(\frac{1}{8\pi}\,\widetilde{A}\right)(T\kappa^{-1})^{2}\nonumber\\ &&+ \frac{\zeta(2)}{2\pi^{2}}\left[\int_{B}d\widetilde{V}\,\frac{(-A_{H}\rho^{2})}{\pi}\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]+\int_{\partial B}dS\, \frac{\widetilde{K}_{aa}}{3} \right](T\kappa^{-1})\nonumber\\ && +\frac{1}{16\pi}\int_{\partial B}dS\,\left\{\frac{-A_{H}\rho^{2}}{\pi}\left[m^{2}+\left(\xi-\frac{1}{6}\right)\mathcal{R}\right]+\right.\nonumber\\ &&\left.+\frac{1}{96}[8\widetilde{R}_{NaNa}+7\widetilde{K}_{aa}\widetilde{K}_{bb}-10\widetilde{K}_{ab}\widetilde{K}_{ab}]\right\}\log (T\kappa^{-1}r^{-1})\nonumber\\ &&+\ldots. \end{eqnarray} First note that at $T=T_{H}$, $T\kappa^{-1}=2/\pi$ and therefore all the terms in the high temperature expansion become of the same order of magnitude. Second, since the metric (\ref{metilde}) does not involve any dimensionful parameters, the curvature terms in the heat kernel expansion will not involve such parameters either. The only exception to this is the potential term (\ref{pot1}). Now we have \begin{equation} \widetilde{V}=\pi\int_{\rho(\epsilon)}^{L}\frac{d\rho}{\rho^{3}}\asymp \frac{\pi}{2\rho^{2}(\epsilon)}=\frac{A_{H}}{2\delta^{2}}, \end{equation} and similarly, \begin{equation} \widetilde{A}\asymp \frac{1}{\rho^{2}(\epsilon)}=\frac{A_{H}}{2\delta^{2}}. \end{equation} If we choose the following orthonormal basis adapted to $\rho=const.$ surfaces, \begin{equation} N=\rho\partial_{\rho},\;\;\;E_{1}=\rho\partial_{\theta},\;\;\;E_{2}=\frac{\rho}{\sin\theta}\partial_{\phi}, \end{equation} then from (\ref{extrinsicmat}) the extrinsic curvature of a $\rho=const.$ surface is calculated to be \begin{equation} \widetilde{K}_{ab}=\delta_{ab}.
\end{equation} We also have \begin{equation} \widetilde{R}=-6+2\rho^{2}, \end{equation} and \begin{equation} \widetilde{R}_{aNaN}=-2. \end{equation} After performing similar algebra as in the previous cases, we arrive at \begin{eqnarray} S &=& \frac{A_{H}}{(4\pi^{3})}\left[\frac{\zeta(4)}{\pi^{2}}-\frac{3}{8}\zeta(3)+\frac{2}{3}\zeta(2)-\frac{\pi^{2}}{48}\, \log \frac{\kappa}{2\pi m'}\right]\frac{1}{\delta^{2}}\nonumber\\ && -\frac{\zeta(2)}{8\pi^{3}}\left[m^{2}A_{H}+\left(\xi-\frac{1}{6}\right)(A_{H}\mathcal{R})\right] \log\frac{A_{H}}{\pi\delta^{2}}. \end{eqnarray} Now note that by choosing $r=1/2\pi$ we can eliminate the logarithmic term in the $\delta^{-2}$ divergence. The coefficient of the $\delta^{-2}$ term is clearly the same $B$ coefficient we derived in earlier sections. Thus, in accordance with \cite{Ordonez1}, we conclude that the common near horizon geometry of the backgrounds we considered is the basic reason for the equality of their $B$ coefficients. On the other hand, the coefficient of the logarithmic divergence has a missing piece, namely the $\zeta(4)$ term coming from the volume of the spherical shell. Most probably this situation can be remedied by going to sub-leading orders in the near horizon approximation. A more important observation is that the logarithmic term generates a finite piece proportional to the logarithm of the horizon area. Using $\mathcal{R}=8\pi/A_{H}$ and $\zeta(2)=\pi^{2}/6$, we see that this finite piece is \begin{equation} \frac{1}{6}\left[\left(\frac{1}{6}-\xi\right)-\frac{m^{2}A_{H}}{8\pi}\right]\log A_{H}. \end{equation} A detailed discussion of this term and its comparison with the existing results (\textit{e.g.} \cite{Fursaev1,Sen}) about logarithmic corrections to black hole entropy will be given in a future work. \section{Thermodynamics in the Presence of Neumann Walls} In this section we will consider the effects of Neumann boundary conditions on the entropy.
Since we impose Neumann boundary conditions on the original mode functions $f_{\sigma}$, the rescaled mode functions $\overline{f}_{\sigma}$ will satisfy generalized Neumann boundary conditions \begin{equation} B\phi= (\phi_{;N}+J\phi)\mid_{\partial M}=0. \end{equation} We calculate $J$ with a little algebra: \begin{equation} \overline{f}_{\sigma}=F^{\frac{1}{2}}f_{\sigma} ,\;\;\; f_{\sigma ;N}\mid_{\partial M}=0, \end{equation} \begin{equation} \overline{f}_{\sigma ;N}=(\frac{1}{2} F^{-1/2} F_{;N})F^{-1/2}\overline{f}_{\sigma} ,\;\;\; \overline{f}_{\sigma ;N}-\frac{1}{2}\frac{F_{;N}}{F}\overline{f}_{\sigma}=0, \end{equation} \begin{equation} J=-\frac{1}{2}\frac{F_{;N}}{F}. \end{equation} For the Neumann problem the first few heat kernel coefficients are given as \begin{eqnarray} a_{0} &=& \frac{1}{(4\pi)^{d/2}}\int_{B} dV, \nonumber\\ a_{1/2} &=& \frac{1}{4(4\pi)^{\frac{d-1}{2}}}\int_{\partial B}dS, \nonumber \\ a_{1} &=& \frac{1}{6(4\pi)^{d/2}}\left[\int_{B} dV(-6U+R)+2\int_{\partial B}dS \,(K_{aa}+6J)\right],\nonumber\\ a_{3/2}&=&\frac{1}{4}\frac{1}{(4\pi)^{(d-1)/2}}\frac{1}{96}\int_{\partial B}dS\,[16(-6U+R)+8 R_{aNaN}+\nonumber\\ &&+13K_{aa}K_{bb}+2K_{ab}K_{ab}+96JK_{aa}+192J^{2}]. \end{eqnarray} Comparing these coefficients with the ones for the Dirichlet problem (\ref{a}), we see some extra $J$ terms and some sign changes. With these minor changes the calculations proceed exactly as in the Dirichlet case. Here we shall report only the final results for the entropy. For the Schwarzschild Black Hole we obtain \begin{eqnarray} S &=& \frac{A_{H}}{(4\pi^{3})}\left[\frac{\zeta(4)}{\pi^{2}}+\frac{3}{8}\zeta(3)-\frac{4}{3}\zeta(2)-\frac{115\, \pi^{2}}{192}\, \log \frac{\kappa}{2\pi m'}\right]\frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[ -\frac{\zeta(4)}{\pi^{2}}+\frac{\zeta(2)}{8\pi}m^{2}\,A_{H}\right] \log\frac{\delta^{2}}{8ML}.
\end{eqnarray} For the Reissner-Nordstr\"{o}m Black Hole we get \begin{eqnarray} S &=& \frac{A_{H}}{4\pi^{3}}\left[\frac{\zeta(4)}{\pi^{2}}+\frac{3}{8}\zeta(3)-\frac{4}{3}\zeta(2)-\frac{115\, \pi^{2}}{192}\, \log \frac{\kappa}{2\pi m'}\right] \frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[-\frac{\zeta(4)}{\pi^{2}} \, \frac{2r_{+}-3r_{-}}{2r_{+}}+\frac{\zeta(2)}{8\pi}m^{2}\, A_{H} \right] \log\left(\frac{\delta^{2}}{L^{2}} \right),\nonumber\\ && \end{eqnarray} and for the extreme case: \begin{eqnarray} S &=& \frac{A_{H}}{4\pi^{3}}\left[\frac{\zeta(4)}{\pi^{2}}+\frac{3}{8}\zeta(3)-\frac{4}{3}\zeta(2)-\frac{115\, \pi^{2}}{192}\, \log \frac{\kappa}{2\pi m'}\right] \frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[\frac{\zeta(4)}{\pi^{2}} \, \frac{1}{2}+\frac{\zeta(2)}{8\pi}m^{2}\, A_{H}\right] \log\left(\frac{\delta^{2}}{L^{2}} \right)\nonumber\\ &&+\ldots. \end{eqnarray} Finally, for the Dilaton Black Hole: \begin{eqnarray} S &=& \frac{A_{H}}{(4\pi^{3})}\left[\frac{\zeta(4)}{\pi^{2}}+\frac{3}{8}\zeta(3)-\frac{4}{3}\zeta(2)-\frac{115\, \pi^{2}}{192}\, \log \frac{\kappa}{2\pi m'}\right]\frac{1}{\delta^{2}}\nonumber\\ &+&\frac{1}{\pi^{2}}\left[ -\frac{\zeta(4)}{\pi^{2}}\frac{(8M-3a)}{8M}+\frac{\zeta(2)}{8\pi}m^{2}\,A_{H}\right] \log\frac{\delta^{2}}{8ML}. \end{eqnarray} Thus for the Neumann boundary conditions we get \begin{eqnarray} B &=& \frac{1}{(4\pi^{3})}\left[\frac{\zeta(4)}{\pi^{2}}+\frac{3}{8}\zeta(3)-\frac{4}{3}\zeta(2)\right], \\ C(\kappa,A_{H},a) &=& \frac{\zeta(4)}{2\pi^{4}}\left[\frac{1}{2}-\frac{3}{2\sqrt{\pi}}\kappa A_{H}\frac{1-a\kappa}{\sqrt{1-2a\kappa}}\right]+\frac{\zeta(2)}{8\pi^{3}}m^{2}A_{H}. \end{eqnarray} Unfortunately, for Neumann walls $B$ turns out to be negative; however, this most probably means that one has to include more terms in the high temperature expansion to get a positive $B$. \section*{Acknowledgments} This work is supported by Bo\u{g}azi\c{c}i University BAP Project No. 6942. We would like to thank Prof.
Teoman Turgut for useful conversations, and Prof. Carlos Ordonez for bringing the references \cite{Ordonez1,Ordonez2,Ordonez3} to our attention.
\section{Introduction} \subsection{Overview} Humanoid legged robots require bipedal balancing as a base requisite to perform tasks. Since, at the state of the art, humans are still considered superior to robots at controlling posture \cite{NoriPetersPadoisEtAl2014}, there has been much interest in implementing human-inspired humanoid control systems. Nowadays, human-likeness of bipedal control is an ongoing research topic, see, e.g., \cite{torricelli2014benchmarking,lippibenchmarking,10.3389/fnbot.2018.00021}. The model proposed in this paper is the so-called DEC (disturbance estimation and compensation) \cite{Mergner2010}, which is based on a neurological model of human posture. This model uses sensor-fusion-derived internal reconstructions of the external disturbances affecting body posture. A known issue of modular control systems is the emergence of conflicts among joints, as extensively shown in \cite{ott2016good}. In this work, we show how distributed control theory can be applied to such a framework. In fact, a discrete-time max-consensus algorithm is used to define priorities between DEC modules in a distributed way, so that conflicts can be prevented. Furthermore, this solution is compatible with plug-and-play frameworks, which may be relevant for future system reconfiguration. \subsection{Notation} In the remainder of this paper, $\mathbb{N}$ and $\mathbb{R}$ denote, respectively, the set of positive integers and the set of real numbers. Nonnegative integers and nonnegative real numbers are denoted by $\mathbb{N}_{\geq0}$ and $\mathbb{R}_{\geq0}$. The indicator function is defined as follows \begin{equation*} \mathbb{I}(x)= \begin{cases} 1 \quad\text{if $x$ is true}\\ 0 \quad\text{otherwise} \end{cases}.
\end{equation*} \section{Problem Description} \subsection{The DEC concept} \begin{figure}[t] \centering \begin{tabular}{c} \IfFileExists{decloops.pdf}{\includegraphics[width=\columnwidth]{decloops.pdf}}{\includegraphics[width=\columnwidth]{Figures/decloops.pdf}} \vspace{10px}\\ \IfFileExists{Fig1anglescrop.pdf}{\includegraphics[width=\columnwidth]{Fig1anglescrop.pdf}}{\includegraphics[width=\columnwidth]{Figures/Fig1anglescrop.pdf}} \end{tabular} \caption{The DEC controller with a schematic model of the DEC concept (above) and the modular 3DoF control architecture used in the experiments (beneath). The angles used in the text are also displayed: {TS} is the orientation of the trunk with respect to the gravitational vertical; KNEE is the angle of the knee joint; BS is the angular sway of the CoM around the ankle joint with respect to the gravitational vertical. The \textit{lumped delay} accounts for all the delay effects that are distributed in general. } \label{DEC} \end{figure} The DEC concept provides a descriptive and predictive model of how human postural control mechanisms interact with movement execution control in producing a desired movement \cite{Mergner2010,10.1007/978-3-319-46669-9_42}. A schema of the DEC control is shown in Fig. \ref{DEC} and can be summarized as follows: \begin{itemize} \item A servo control loop for each degree of freedom (DoF). The servo is implemented as a PD controller; it is labeled \textit{neural controller} in Fig. \ref{DEC} (above). The controlled variable consists either of the joint angle, the orientation in space of the above joint, or the orientation in space of the center of mass of the whole body above the controlled joint. {Variables are reconstructed locally using the exchanged sensory input}; \item Multisensory estimation of the physical factors affecting the servo.
These disturbances are \textit{rotation} and \textit{translation} of the supporting link or support, contact forces (e.g., push) and field forces (e.g., gravity impacting the supported link). Sensory channels are shown in Fig. \ref{DEC} as \textit{Vest}, \textit{Prop}, and \textit{Force}, representing, respectively, the vestibular (IMU), proprioceptive (encoders), and force (joint torque sensors) inputs; \item The disturbance estimates are fed into the servo so that the joint torque compensates on-line for the disturbances while executing the desired movements. \end{itemize} The \textit{lumped delay} in Fig. \ref{DEC} (above) accounts for all the delay effects that, in general, are distributed. In particular, in robots, the main sources of delay are sampling rates in both sensors and the computer-controlled system, see \cite{lippi2016human}. In humans, different control loops (e.g., proprioceptive feedback and disturbance compensation) are associated with different delays (see \cite{antritter2014stability}) and the transport time within {the} nervous system creates differences in delay due to the {different distances that the neural signal has to cover in order to reach different joint positions}, e.g., the lumped delay is estimated to be $180 ms$ for ankle joint control and $70 ms$ for hip joint control, see \cite{G.Hettich2014}. The disturbance compensation mechanism allows the system to maintain a low loop gain and thus stable control in the face of neural time delays. A further, indirect limitation to the gain is represented by the maximum torque that the foot can produce on the ground without losing contact. The DEC concept can be generalized to a modular control architecture, where estimators in each module treat disturbances acting on all supported links as if affecting a single inverted pendulum, see \cite{V.Lippi2013}. This approach has been applied to multiple DoF robots \cite{10.3389/fnbot.2017.00049,lippi2016human, ZebenayLippiMergener2015,lippi2018prediction}.
The reference input to each module determines its postural function, e.g. maintaining a given orientation of the supported link (either in space or with respect to the supporting link), or maintaining the center of mass (CoM) above its supporting joint. Modules exchange information with neighboring modules, i.e. those mechanically interconnected. \subsection{Modularity, Coupling Forces and Delays} Since, in DEC control, each DoF of the humanoid is controlled by one DEC module, this results in \textit{control modularity}. Each module commands the torque to be applied to the controlled DoF. In previous work on the topic, the desired trajectory is specified as an input to each module, specifically in the form of a reference for a desired variable, i.e. a joint angle, an orientation in space of the link supported by the controlled joint, or the orientation in space of the center of mass of the whole set of links above the supported joint (e.g. for the ankle joint it would be BS in Fig. \ref{DEC}), see \cite{lippi2016human}. The multitasking capability of the DEC control consists of using each DoF to perform a different task, see \cite{Lippi2015}. All modules have the same structure and there is no centralized model of the whole system. Modules do not operate completely independently of each other, since they exchange sensory information, i.e. coordinate transformations across the joints that interconnect the body segments. The DEC model can be defined as a \textit{low-level control system} taking care of the fundamental task of posture control and acting at the level of joint kinematics. Coordination between joints emerges from the interaction between different modules and between the modules and the body mechanics. At the level of modules, no kinematic synergy is explicitly specified.
In \cite{ott2016good}, it is shown how the modular structure of the DEC can lead to conflicts between modules: an example is the circular overshoot exhibited in body posture's transient behavior, as shown in Fig. \ref{fig:Overshoot}. Specifically, the knee module commands an extension of the leg. As the upper body is also perturbed forward, an extension of the knees produces a disturbance for the ankle joints, which try to move the CoM back to the vertical equilibrium position. In general, coupling forces between body segments are a challenge for distributed modular controllers, especially considering the presence of delays, see e.g. \cite{LippiMergner2015}. In Fig. \ref{fig:Overshoot}, the transient behavior of body posture is shown. Similarly to \cite{ott2016good}, the control parameters and the joint passive stiffness were chosen to produce a compliant behavior to emphasize the effect of competing controllers. Nevertheless, the used parameters are realistic in the sense that they are able to stabilize the body. The absence of passive stiffness is common in humanoid robots actuated with DC motors. The parameters used in the simulation are reported in Table \ref{paramTAB}. It should be noticed that the original formulation of the DEC model was designed to describe steady state behavior in human subjects. The distributed control proposed in this work is intended as a possible solution for humanoid control. The comparison with human experiments is beyond the scope of this work. \begin{figure}[htbp] \centering \IfFileExists{swingplotNoBid1.pdf}{\includegraphics[width=\columnwidth]{swingplotNoBid1.pdf}}{\includegraphics[width=\columnwidth]{Figures/swingplotNoBid1.pdf}} \caption{Transient behavior of body posture in response to a movement commanded from a displaced position. In (A) the evolution of the orientation of the three body segments in space is shown (TS = trunk in space, THS = thigh in space, SS = shank in space).
In (B) the trajectory of the center of mass in the sagittal plane is shown (each dot represents a sample taken at 100 Hz). Notice the typical circular movement, qualitatively resembling the behavior described in \cite{ott2016good} for the modular control system, with modules that can produce conflicting commands. In (C) the body trajectory is shown as a succession of body poses.} \label{fig:Overshoot} \end{figure} \begin{table}[htb] \vspace{10px} \centering \begin{tabular}{|l|l|rl|} \hline Segment/joint & Parameter & value & \\ \hline Trunk/hip & $K_p$ & 73.57 & N*m/rad \\ \cline{2-4} & $K_d$ & 18.394 & N*m*s/rad \\ \cline{2-4} & mass & 30 & Kg \\ \cline{2-4} & length & 0.5 & m \\ \cline{2-4} & center of mass & 0.25 & m \\ \cline{2-4} & $k_p$ passive & 0 & N*m/rad \\ \cline{2-4} & $k_d$ passive & 0 & N*m*s/rad \\ \cline{2-4} & $G_{servo}$ & 1 & \\ \cline{2-4} & Lumped Delay & 10 & ms \\\hline Thigh/knee & $K_p$ & 220.72 & N*m/rad \\ \cline{2-4} & $K_d$ & 16.55 & N*m*s/rad \\ \cline{2-4} & mass & 10 & Kg \\ \cline{2-4} & length & 0.5 & m \\ \cline{2-4} & center of mass & 0.25 & m \\ \cline{2-4} & $k_p$ passive & 0 & N*m/rad \\ \cline{2-4} & $k_d$ passive & 0 & N*m*s/rad \\ \cline{2-4} & $G_{servo}$ & 1 & \\ \cline{2-4} & Lumped Delay & 10 & ms \\\hline Shank/ankle & $K_p$ & 465.98 & N*m/rad \\ \cline{2-4} & $K_d$ & 116.49 & N*m*s/rad \\ \cline{2-4} & mass & 10 & Kg \\ \cline{2-4} & length & 0.5 & m \\ \cline{2-4} & center of mass & 0.25 & m \\ \cline{2-4} & $k_p$ passive & 0 & N*m/rad \\ \cline{2-4} & $k_d$ passive & 0 & N*m*s/rad \\ \cline{2-4} & $G_{servo}$ & 1 & \\ \cline{2-4} & Lumped Delay & 10 & ms \\\hline \end{tabular} \caption{Control and simulation parameters} \label{paramTAB} \end{table} \subsection{Distributed control problem} The main contribution of this work is the design of a distributed control approach, which harnesses the modular nature of the DEC control while trying to reduce conflicts among joints.
This approach is innovative in the field of bio-inspired humanoid posture control. The underlying idea is to enable only one module at a time to change its positional reference. This strategy aims to prevent the circular overshoot of {the} CoM trajectory of Fig. \ref{fig:Overshoot}. Starting at time $t=0$, every $T_e\in\mathbb{R}_{>0}$ seconds, a new control module is enabled and all the others are disabled. Precisely, disabled modules implement only the gravity compensation control on torque and are controlled to be at the current fixed position, as will be explained in Section \ref{sec:posContrWGrav}. Switching between controlled modules would traditionally require a central decision. However, the absence of any centralized control structure results in the need for an agreement among modules. Traditionally, in those cases where a multi-agent system is seeking an agreement on a variable of common interest, consensus-based strategies are employed, see \cite{ren2005survey}. In Section \ref{sec:maxCons}, the mentioned consensus protocol is analyzed. Consensus has been used in robotics for inter-robot and inter-vehicular coordination, see \cite{notarstefano2006distributed} (rendezvous), \cite{brunet2008consensus} (task assignment), or \cite{automation2018Molinari} (conflict resolution); in this paper, the paradigm changes, since consensus is here exploited for intra-robot coordination. This approach is suitable for plug-and-play modules where initialization is not required, since the parameters of each module are defined independently of the ones of the other modules, only on the basis of body anthropometrics. \section{Control Design} \subsection{Posture Control Scenario} Humanoid balance in the sagittal plane can be modeled as the control of a multiple inverted pendulum by means of joint torques, on the basis of sensor inputs, i.e. encoders and {inertial measurement units}.
The body is modeled as a triple inverted pendulum, following the robot configuration used in the robotic experiment presented in \cite{ott2016good}, where ankle, knee and hip were actuated. The model used in the simulation is implemented in Matlab/Simulink, and it is the same as the one used in \cite{alexandrov2005feedback} and \cite{V.Lippi2013}. \subsection{Control implementation} Let $\mathcal{M}$ be the set of all modules. They can exchange information via a wired communication network. From a graph-theoretical point of view, let the directed graph $(\mathcal{M},\mathcal{A})$ model the wired network topology, with $\mathcal{M}$ being the set of nodes and $\mathcal{A}\subset\mathcal{M}\times\mathcal{M}$ the respective set of arcs. For any pair $i,j\in\mathcal{M}$, $(i,j)\in\mathcal{A}$ if node $i$ receives information from node $j$. \begin{assumption} \label{ass:retrieveInf} Each module can retrieve information from all the modules connected to it by a body segment (see Figure~\ref{DEC}). \end{assumption} By Assumption \ref{ass:retrieveInf}, it can be easily shown that $(\mathcal{M},\mathcal{A})$ presents a connected topology (further details in \cite[A Tutorial on Graph Theory]{ren2007information}). In the following, $\forall i\in\mathcal{M}$, let $N_i\subseteq\mathcal{M}$ be the set of modules sending information to node $i$, i.e. \begin{equation} N_i= \{ j\in\mathcal{M} \mid (i,j)\in\mathcal{A} \}. \end{equation} We consider {the problem of} controlling body posture and equilibrium in the body sagittal plane. The body is represented as a triple inverted pendulum standing on a fixed support surface. The state of the system is described by the joint angle of ankles, hips and knees, or, equivalently, by the orientation in space of body segments or CoM.
In particular, each module, say $i\in\mathcal{M}$, is associated to a different task, meaning that it is controlling a specific variable $\alpha_i$: \begin{itemize} \item the \textit{ankle module} controls the body-in-space variable, $\alpha_i= BS$, i.e. the CoM sway with respect to the foot; \item the \textit{knee module} controls the knee joint angle $\alpha_i= KNEE$; \item the \textit{hip module} controls the trunk orientation in space $\alpha_i=TS$. \end{itemize} In Figure~\ref{DEC}, the general structure of each controller is shown above and the relationship between modules is shown beneath. The controlled variable $BS$ is constructed in the \textit{ankle module} using the down-channeled signal. \begin{figure*}[htbp] \centering \IfFileExists{SwithchC.pdf}{\includegraphics[width=0.8\textwidth]{SwithchC.pdf}}{\includegraphics[width=0.8\textwidth]{Figures/SwithchC.pdf}} \caption{Switching between \textit{enabled} and \textit{disabled} mode for a control module. When the module is set to \textit{enabled}, the error on the controlled variable is fed as input to the neural controller (PID). A second variable is kept in memory by a register/delay (block $\Delta$). When the state $y_i$ of the module switches to \textit{disabled}, the PID controller is commanded to keep such variable constant. The value used as reference for the \textit{disabled} mode is a variable describing the state of the controlled link before switching from \textit{enabled} mode. In the presented example, the orientation in space of the links will be used.} \label{fig:SwithchC} \end{figure*} Let, \begin{equation} \label{eq:authVar} \forall i\in\mathcal{M},\ \forall k\in\mathbb{N}_{\geq0},\ y_i(k)\in\{0,1\} \end{equation} be a binary variable, referred to as \textit{enabling variable}, defined as follows: module $i$ is enabled in the continuous time interval $$I_k:=(kT_e,\ (k+1)T_e],$$ $k\in\mathbb{N}_{\geq0}$ and $T_e\in\mathbb{R}_{>0}$ a real-valued time, if and only if $y_i(k)=1$.
Moreover, \begin{equation} \label{eq:exclusiveY} \forall k\in\mathbb{N},\ \exists!i\in\mathcal{M}:\ y_i(k)=1. \end{equation} The enabling variables are initialized as, $\forall i\in\mathcal{M},\ y_i(0)=1$. {Disabled} modules are controlled to the current position, see Figure~\ref{fig:SwithchC}. Modules run a consensus protocol which lets them agree on which one is the enabled module. At each time $t=kT_e$, $k\in\mathbb{N}$, each module has a state $w_{i_0}^k\in\mathbb{R}_{\geq0}$, which quantifies the need for that module to be enabled. The value of $w_{i_0}^k$ is defined as the error on the controlled variable, i.e. \begin{itemize} \item the CoM sway for the ankle joint; \item the knee joint angle for the knee joint; \item the trunk sway for the hip joint. \end{itemize} Intuitively, the enabled module during interval $I_k$ will be the one retaining the highest $w_{i_0}^k$. That is to say, $\forall k\in\mathbb{N}$, \begin{equation} \label{eq:chooseY} y_i(k)=1 \iff i = \argmax_{j\in\mathcal{M}} \left( w_{j_0}^k \right). \end{equation} \begin{assumption} \label{ass:diffBet} $\forall i,j\in\mathcal{M},\ i\not=j$, $w_{i_0}^k\not=w_{j_0}^k$. \end{assumption} Clearly, by Assumption \ref{ass:diffBet}, (\ref{eq:chooseY}) implies (\ref{eq:exclusiveY}). One way nodes can achieve the solution in (\ref{eq:chooseY}) -- without the presence of any central element -- is by running a distributed max-consensus protocol. \subsection{Max Consensus} \label{sec:maxCons} In the following, a discrete-time max-consensus protocol is presented, which will iterate at every instant $kT_e$, $k\in\mathbb{N}$, thus allowing for distributively retrieving $y_i(k)$, $\forall i\in\mathcal{M}$. Initially, all modules have their respective $w_{i_0}^k$. Let an iteration variable be defined for each module $i\in\mathcal{M}$ as $w_i^{(k)}:\mathbb{N}_{\geq0}\mapsto\mathbb{R}_{\geq0}$, such that $w_i^{(k)}(0)=w_{i_0}^k$.
All modules iterate the following protocol: \begin{equation} \label{eq:consProt} \forall\kappa\in\mathbb{N}_{\geq0},\ w_i^{(k)}(\kappa+1)=\max_{j\in N_i\cup\{i\}} w_j^{(k)}(\kappa), \end{equation} where $\kappa$ denotes the iteration index. \begin{proposition} Given a connected network topology $(\mathcal{M},\mathcal{A})$, if all modules in $\mathcal{M}$ iterate (\ref{eq:consProt}), then consensus is achieved at $\bar{\kappa}\in\mathbb{N}$, such that \begin{equation} \label{eq:achievedMaxCons} \forall\kappa>\bar{\kappa},\ \forall i\in\mathcal{M},\ w_i^{(k)}({\kappa})=\max_{j\in\mathcal{M}}w_j^{(k)}(0):=w^*. \end{equation} \begin{proof} Protocol (\ref{eq:consProt}) is a traditional max-consensus protocol. By \cite{nejad2009max}, consensus in connected network topologies is reached on the max-value in a number of steps depending only on the network topology. By Assumption \ref{ass:retrieveInf}, the communication network topology is connected, therefore max-consensus is achieved in the sense of (\ref{eq:achievedMaxCons}). Moreover, by \cite{nejad2009max}, with the given topology, $\bar{\kappa}=2$. \end{proof} \end{proposition} As soon as consensus is achieved, modules can compute their respective $y_i(k)$ as follows: \begin{equation} \label{eq:computeY} \forall i\in\mathcal{M},\ \forall k\in\mathbb{N},\ y_i(k)=\mathbb{I}\left( w^k_{i_0}=w^* \right). \end{equation} \begin{proposition} Under Assumption \ref{ass:diffBet}, (\ref{eq:computeY}) implies (\ref{eq:exclusiveY}). \begin{proof} By Assumption \ref{ass:diffBet}, there is only one module, say $i^*\in\mathcal{M}$, such that $w_{i^*_0}^k=w^*$. By (\ref{eq:computeY}), $$ \forall j\in\mathcal{M}\setminus\{i^*\},\ y_j(k)=\mathbb{I}\left( w^k_{j_0}=w_{i^*_0}^k \right) =0, $$ from which (\ref{eq:exclusiveY}) immediately follows. \end{proof} \end{proposition} As mentioned above, this solution is compatible with a plug-and-play framework. 
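As an illustration, the iteration (\ref{eq:consProt}) together with the selection rule (\ref{eq:computeY}) can be sketched in a few lines of Python for the three-module ankle--knee--hip chain considered here (a sketch of ours: module names and the initial error values are illustrative, not taken from the experiments):

```python
# Bidirectional chain ankle <-> knee <-> hip: N_i lists the neighbors of i.
neighbors = {
    'ankle': ['knee'],
    'knee':  ['ankle', 'hip'],
    'hip':   ['knee'],
}

def max_consensus(w0, neighbors, iterations):
    """Iterate w_i(kappa+1) = max over N_i union {i} of w_j(kappa)."""
    w = dict(w0)
    for _ in range(iterations):
        w = {i: max([w[i]] + [w[j] for j in neighbors[i]]) for i in w}
    return w

# initial states w_{i0}^k: error magnitude on each module's controlled variable
w0 = {'ankle': 0.12, 'knee': 0.35, 'hip': 0.07}

w = max_consensus(w0, neighbors, iterations=2)  # bar-kappa = 2 for this chain
w_star = max(w0.values())
# y_i(k) = indicator(w_{i0}^k == w*): only one module is enabled
enabled = {i: int(w0[i] == w_star) for i in w0}
```

After two iterations every module holds $w^*=0.35$, and only the knee module (the one with the largest initial error) sets its enabling variable to $1$.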
In this context, letting modules communicate over a wireless network can speed up the set-up of the system. However, wired communication is traditionally considerably faster than wireless communication. Convergence speed of the consensus protocol (\ref{eq:consProt}) can be improved in the wireless framework by using the strategy presented in \cite{MOLINARI2018176}. \section{Validation Experiment} \label{sec:posContrWGrav} The system is tested with the task shown in Fig. \ref{fig:Overshoot}. The parameters are shown in Table \ref{paramTAB}. They are the same for both the presented cases. In particular, passive stiffness and damping have been set to $0$ and the delay to $10 ms$, i.e. a small delay (compared, for example, to the $180 ms$ presented in \cite{G.Hettich2014}) that nonetheless poses realistic limitations on the servo controller gain. This choice was made because in this work we do not want to examine the relationship between delay and passive stiffness studied in \cite{ott2016good}, but we want to emphasize the relationship between competing modules. In Figure~\ref{DEC}, both the disturbance estimators and the proprioceptive signals are fed as inputs to the neural controller. Gravity compensation is performed by setting the proportional gain $K_p$ to $mgh$, where $m$ is the mass of the body above the controlled joint, $g=9.81 m/s^2$ is the gravity acceleration and $h$ the height of the CoM. The derivative component is set to a fraction of the proportional one. This way, the gravity error is expressed as the CoM sway angle and the other estimators are expressed as an ``angle equivalent'', in the sense that the desired corrective torque is divided by $mgh$. This implies that, for the controller and all the compensated disturbances, the ratio between $K_p$ and $K_d$ is fixed, while each signal can be associated with a specific gain. In this work, the DEC has been implemented as shown in \cite{ott2016good} with a separate neural controller for each signal.
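The gain choice $K_p=mgh$ can be checked against Table \ref{paramTAB} for the ankle and hip modules (a reconstruction of ours from the listed masses and segment geometry; the knee gain does not follow this simple rule, consistently with the knee module controlling a joint angle rather than a sway):

```python
# Reconstruction (our assumption) of K_p = m*g*h from the link data in
# Table paramTAB: each link is (mass [kg], CoM offset above its joint [m],
# length [m]).
g = 9.81
shank = (10, 0.25, 0.5)
thigh = (10, 0.25, 0.5)
trunk = (30, 0.25, 0.5)

def kp(links):
    """K_p = m*g*h for the chain of links above the controlled joint.

    Links are ordered bottom-up starting at the controlled joint;
    h is the height of the combined CoM above that joint."""
    m_tot, moment, base = 0.0, 0.0, 0.0
    for mass, com, length in links:
        m_tot += mass
        moment += mass * (base + com)  # mass times its CoM height
        base += length                 # next link stacks on top
    return g * moment                  # = m_tot * g * (moment / m_tot)

kp_ankle = kp([shank, thigh, trunk])   # ~465.98 N*m/rad in the table
kp_hip = kp([trunk])                   # ~73.57 N*m/rad in the table
```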
A PID controller is designed for the servo and a PD controller for disturbance compensation (with gravity compensation, an integrative action is not desired). Body segment positions and velocities are assumed to be known exactly. Since we consider only gravity as external disturbance, the control torque is expressed by: \begin{equation} \forall i \in \mathcal{M}, \ \tau_i=G_g\alpha_{CoM_i}+G_{servo}\epsilon_i (K_p+ sK_d) e^{-s \Delta t}, \end{equation} where $\epsilon_i$ is the error on the controlled variable and $\alpha_{CoM_i}$ the angle of the CoM with respect to the controlled joint. The error $\epsilon_i$ depends on $y_i$ as follows: \begin{equation} \epsilon_i=\left\{ \begin{array}{lr} \alpha_i-\alpha^{ref}_i & \mathrm{if }\ y_i=1 \\ \alpha_i-\bar{\alpha}^{ref}_i & \mathrm{if }\ y_i=0 \end{array} \right. , \end{equation} where $\alpha_i$ is the respective controlled variable. The value $\bar{\alpha}^{ref}_i$ is set to the value of $\alpha_i$ at the instant of deactivation, when $y_i$ makes a transition from $1$ to $0$, as modeled by the block $\Delta$ in Fig. \ref{fig:SwithchC}.
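The switched error can be sketched as follows (a discrete-time illustration of ours of the latch modeled by block $\Delta$ in Fig. \ref{fig:SwithchC}; class and variable names are not from the paper):

```python
# Sketch of the switched error eps_i: an enabled module tracks its reference
# alpha_ref; a disabled one holds the value of alpha latched at the instant
# of deactivation (the register/delay Delta of the switching scheme).
class SwitchedError:
    def __init__(self, alpha_ref):
        self.alpha_ref = alpha_ref   # reference for enabled mode
        self.alpha_hold = None       # latched reference for disabled mode
        self.was_enabled = True

    def error(self, alpha, enabled):
        if enabled:
            self.was_enabled = True
            return alpha - self.alpha_ref
        if self.was_enabled:         # 1 -> 0 transition: latch current angle
            self.alpha_hold = alpha
            self.was_enabled = False
        return alpha - self.alpha_hold

m = SwitchedError(alpha_ref=0.0)
e1 = m.error(0.20, enabled=True)    # tracking: 0.20 - 0.0
e2 = m.error(0.15, enabled=False)   # deactivation: latches 0.15, error 0.0
e3 = m.error(0.18, enabled=False)   # held position: 0.18 - 0.15
```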
\section{Results} \begin{table}[t] \vspace{10px} \centering \begin{center} \begin{tabular}{|l|l|l|l|} \hline\rule{0pt}{3ex} Variable & Index & Original & Distributed \\ \hline \hline\rule{0pt}{3ex} TS & overshoot & 2.5118\textdegree & 2.1166\textdegree \\ & rise time & 0.80 s & 0.84 s\\ & settling time & 9.99 s & 9.99 s \\ \hline\rule{0pt}{3ex} KNEE & overshoot & 3.4765\textdegree & 0\textdegree\\ & rise time & 0.07 s & 0.31 s \\ & settling time & 9.99 s & 9.99 s \\ \hline\rule{0pt}{3ex} BS & overshoot & 0.0961\textdegree & 0.3075\textdegree\\ & rise time & 0.81 s & 0.86 s\\ & settling time & 9.99 s & 9.99 s\\ \hline\rule{0pt}{3ex} & energy & 196.72 J& 68.25 J \\ \hline \end{tabular} \end{center} \caption{Dynamic performance} \label{performance} \end{table} In order to evaluate the impact of the designed distributed control strategy, the transient behavior of the DEC control is compared, with and without the distributed control policy, in response to a sudden change of reference. The results are shown, respectively, in Figure~\ref{fig:Overshoot} and Figure~\ref{fig:Results}. Performance figures are summarized in Table \ref{performance}. The dynamic performances of the two controllers are comparable. The CoM trajectory does not produce any circular (or overshooting) movement but is rather described by straight lines, clearly due to the switching behavior. While for TS and KNEE there is a substantial drop of the overshoot measure, a slight increase of this measure affects BS. As clearly emerges from the table, the cost to pay for a decreased overshoot is an increase of the rise time. Energy is computed as the integral of the mechanical power provided at the joints; on a real robot, the power consumption can be heavily influenced by the actuation (e.g. DC motors require power in order to hold static positions). In this scenario, the max-consensus algorithm makes the DEC control more energy efficient.
\begin{figure*}[htbp] \centering \IfFileExists{swingplot1.pdf}{\includegraphics[width=0.6\textwidth]{swingplot1.pdf}}{\includegraphics[width=0.6\textwidth]{Figures/swingplot1.pdf}} \caption{Transient behavior of body posture using the distributed control strategy. The starting position and the reference are the same as in the example shown in Fig. \ref{fig:Overshoot}. The trajectories shown in (A) are comparable to the ones shown in Fig. \ref{fig:Overshoot}. In (B) it is possible to notice that the system is not producing a circular movement of the CoM, but rather straight lines produced by the activities of the single modules. In (C) the body trajectory is shown as a succession of body poses.} \label{fig:Results} \end{figure*} \section{Discussion} This work has discussed a distributed control approach for the modular bio-inspired DEC controller, in which modules negotiate their authorization to move. The original DEC formulation (Figure~\ref{fig:Overshoot}) is compared to the designed distributed control strategy (Figure~\ref{fig:Results}). The simulated humanoid was initialized from an initial position, from which it had to reach the upright pose; trajectories of the body segments and of the CoM position in the sagittal plane were recorded. Traditionally, the DEC system exhibits a mutual obstruction between modules, resulting in a circular (overshooting) CoM trajectory. The novelty of this work lies in the fact that the modules distributively agree on a common strategy. By doing so, within the DEC framework, conflicts between modules are avoided by having only one of the modules \textit{enabled} at a time. With the addition of this distributed agreement strategy, the transient response no longer shows circular CoM trajectories (see Figure~\ref{fig:Results}). Moreover, our proposed distributed control strategy appears to be more energy efficient. Such improvements come at the expense of a small delay in the settling time, due to the modules' inactivity when disabled.
\section*{ACKNOWLEDGMENT} We gratefully acknowledge financial support for the project MTI-engAge (16SV7109) by BMBF. This work was also funded by the German Research Foundation (DFG) within their priority programme SPP 1914 ``Cyber-Physical Networking (CPN)'', RA516/12-1. \bibliographystyle{IEEEtran} {\small
\section{Introduction} \label{Intro} In recent years, ongoing extremely sensitive experiments searching for physics beyond the current Standard Model (SM) expect to see new physics or to set severe limits on various physical observables and particle model parameters \cite{Kuno,Molzon,Bernstein-Cooper}. In particular, current experiments searching for flavour changing neutral current (FCNC) processes in the leptonic sector \cite{Bernstein-Cooper,COMET,Mu2e-proposal,Mu2e,Bernstein,Kosm-talk,Valle} may provide new insights into the physics of charged lepton flavour violation (cLFV) \cite{Bernstein,Kosm-talk}, neutrino oscillation in propagation \cite{Valle} and other phenomena. The cLFV experiments, although they have not yet discovered any event, represent a very important probe in the search for charged lepton mixing, with significant implications for understanding various open issues in particle physics, nuclear physics and astrophysics \cite{Davidson,Barranco,Amanik_2005,Tomas-Valle-10}. To this end, exotic $\mu^-\to e^-$ conversion studies are being pursued worldwide, theoretically \cite{Kosm_PhysRep,Kos_Dep_Wal} as well as experimentally, with two experiments: (i) COMET at J-PARC, Japan \cite{COMET}, and (ii) Mu2e at Fermilab, USA \cite{Mu2e-proposal,Mu2e,Bernstein}. Both ambitious experiments expect to reach a single event sensitivity down to $10^{-16}$--$10^{-18}$.
The best previous limit for $\mu^{-} \rightarrow e^{-}$ conversion was obtained by the SINDRUM-II collaboration at PSI on the reaction \begin{equation} \mu^{-} + {}^{48}\mathrm{Ti} \rightarrow e^{-} + {}^{48}\mathrm{Ti} \, , \label{mue-conversion} \end{equation} as $R^{\mathrm{Ti}}_{\mu e} < 6.1 \times 10^{-13}$ \cite{Wintz} (many authors use the published upper limit $R^{\mathrm{Ti}}_{\mu e} < 4.3 \times 10^{-12}$ \cite{Dohmen}), where $R^{\mathrm{Ti}}_{\mu e}$ denotes the ratio of the $\mu^{-} \rightarrow e^{-}$ conversion rate to the total $\mu^{-}$-capture rate in the $^{48}$Ti nucleus. The COMET experiment is expected to reach a high sensitivity, $R^{\mathrm{Al}}_{\mu e} < 10^{-16}$ \cite{COMET}, using $^{27}$Al as the muon-stopping target, while the Mu2e experiment aims to improve $R^{\mathrm{Al}}_{\mu e}$ even further, i.e. to a single event sensitivity of $2\times 10^{-17}$, which with a background of 0.5 events corresponds to a target sensitivity of $R^{\mathrm{Al}}_{\mu e} < 6\times 10^{-17}$ \cite{Mu2e-proposal,Mu2e,Bernstein}. The next decade of cLFV experiments will need muon beams of very high intensity and quality, like those planned to be built at Fermilab for Mu2e at Project-X and at J-PARC for the PRIME/PRISM experiments. The use of Project-X beams by the Mu2e experiment is expected to further decrease the upper bound to $R^{\mathrm{Al}}_{\mu e} < 2 \times 10^{-18}$ \cite{mu2e-px}, while the PRIME experiment, based on the superior properties of the muon beam at J-PARC that can be delivered to a $^{48}\mathrm{Ti}$ target, may reach a sensitivity of $R^{\mathrm{Ti}}_{\mu e} < 10^{-18}$ \cite{PRIME,Kuno-PRIME}.
We should also mention the most stringent upper bounds presently available on purely leptonic cLFV processes for $\mu-e$ transitions, namely, the limit on the branching ratio of the $\mu^+ \rightarrow e^+ \gamma$ process, $Br(\mu^+ \rightarrow e^+ \gamma) < 5.7 \times 10^{-13}$, set very recently by the MEG experiment at PSI using one of the most intense continuous $\mu^+$ beams in the world \cite{MEG}, and that of the $\mu \rightarrow e e e$ process, set previously by the SINDRUM collaboration at the value $Br(\mu^+ \rightarrow e^+ e^+ e^-) < 1.0 \times 10^{-12}$ \cite{mu-eee}. In recent works, neutral current (NC) neutrino scattering processes on leptons, nucleons and nuclei involving interactions that go beyond the SM (non-standard interactions, NSI, for short) have been examined \cite{Davidson,Barranco,Amanik_2005}. Such processes may be predicted by several extensions of the SM, such as various realizations of the seesaw mechanism \cite{Kos_Dep_Wal,Schechter-Valle,Forero} and left-right symmetric models \cite{Dep-Valle}. The reactions of this type that take place in nuclei are represented by \begin{equation} \nu_{\alpha} (\tilde{\nu}_{\alpha}) + (A,Z) \rightarrow \nu_{\beta} (\tilde{\nu}_{\beta}) + (A,Z) \, , \label{neutrin-NSI} \end{equation} ($\alpha, \beta = e,\mu,\tau$), and theoretically they can be studied with the same nuclear methods as the exotic cLFV process of $\mu^-\to e^-$ conversion in nuclei. Among the interesting applications of the reactions (\ref{neutrin-NSI}), those connected with supernova physics may allow $\nu_e$ neutrinos to change flavour during core collapse, creating $\nu_e$ holes in the electron-neutrino sea \cite{Amanik_2007}, which may allow $e^{-}$-capture on nucleons and nuclei to occur and subsequently decrease the value of the electron fraction $Y_{e}$.
Such non-standard interactions \cite{Friedland-solar,Friedland-atm2,Scholberg} may suggest alterations in the mechanisms of neutrino propagation through the supernova (SN) envelope and affect constraints put on the physics beyond the SM, as well as on some scenarios of supernova explosion \cite{Barranco-Walle,Mir-Tort-Walle,Bar-Mir-Rashba}. This motivated the investigation of the NSI in both LFV and cLFV processes in solar and supernova environments \cite{Kosm-A570,Kos_Kov_Schm}, and motivates our present work as well. Furthermore, the impact of non-standard neutrino interactions on SN physics was the main motivation of works examining their effect on supernovae when the neutrino self-interaction is taken into account \cite{Tomas-Valle-10}. The extreme conditions under which neutrinos propagate after they are created in the SN core may lead to strong matter effects. It is known, in particular, that the effect of small values of the NSI parameters can be dramatically enhanced in the inner, strongly deleptonized regions \cite{Tomas-Valle-10}. In general, low-energy astrophysical and laboratory neutrino searches provide crucial information towards understanding the fundamental electroweak interactions, within and beyond the SM. Well-known astrophysical neutrino sources, such as solar neutrinos, supernova neutrinos and geoneutrinos, constitute excellent probes in the search for a plethora of neutrino physics applications and new-physics open issues \cite{Kosm-Oset}. Since neutrinos interact extremely weakly with matter, they may travel astronomical distances and reach the Earth \cite{SK,SNO,KamLAND}. The $\nu$-signals recorded in sensitive terrestrial detectors of low-energy neutrinos \cite{Hirata-Bionta,Keil} can be simulated, providing useful information relevant to the evolution of distant stars, core-collapse supernovae, explosive nucleosynthesis \cite{Haxton}, neutrino oscillation effects and others.
Recently it became feasible to detect neutrinos by exploiting the NC interactions and measuring the nuclear recoil signal, by employing detectors with very low threshold energies \cite{pion-DAR-nu,Louis}. The NC interactions, through their vector components, can lead to an additive contribution (coherence) of all nucleons in the target nucleus \cite{Horowitz,Biassoni,Freedman,Giom-Vergados,Monroe_Fischer,Don-Wal}. The main purpose of the present Letter is to explore the nuclear physics aspects of the $\nu$-nucleus reactions of Eq. (\ref{neutrin-NSI}), focusing on the role of the NSI, which have not been studied in detail up to now. We should stress that our strategy in studying the nuclear aspects of FCNC in nuclei is to carry out realistic cross-section calculations for the exotic processes (\ref{mue-conversion}) and (\ref{neutrin-NSI}), including NSI terms in the relevant effective Lagrangian. The required nuclear matrix elements are evaluated within the context of the quasi-particle RPA, considering both coherent and incoherent processes by applying the advantageous state-by-state method developed in Refs. \cite{Kosm-A570,vtsak-tsk-1,vtsak-tsk-2}. As a first step, we perform calculations for $gs\to gs$ transitions of the reactions (\ref{neutrin-NSI}) by solving the BCS equations for even-even nuclear systems, and by employing the experimental nuclear charge densities \cite{deVries} for odd-$A$ nuclei. For comparison of our results with those of other methods \cite{Barranco,Amanik_2005,Amanik_2007,Horowitz,Biassoni}, SM cross-section calculations are also carried out. More specifically, our present results refer to the even-even $^{48}\mathrm{Ti}$ isotope, the stopping target of the SINDRUM II and PRIME/PRISM $\mu^-\to e^-$ experiments. We perform similar calculations for the processes (\ref{neutrin-NSI}) in the $^{27}\mathrm{Al}$ nucleus, proposed as the detector material in the Mu2e and COMET experiments.
Finally, we will use the experimental upper limits of the cLFV processes to put robust bounds on model parameters of the relevant Lagrangians and the ratios of the NSI contributions with respect to the SM ones. \section{Description of the formalism} \label{chapt2} The non-standard $\nu$-nucleus processes (\ref{neutrin-NSI}) and the exotic cLFV $\mu^-\to e^-$ conversion in nuclei \cite{Kuno,Kosm_PhysRep,Kos_Dep_Wal,Kos_Kov_Schm}, can be predicted within the aforementioned new-physics models \cite{Kos_Dep_Wal}. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{Figure-1.ps} \end{center} \caption{ Nuclear level Feynman diagrams for: \textit{(a)} SM Z-exchange neutral current $\nu$-nucleus reactions, \textit{(b)} non-standard Z-exchange $\nu$-nucleus reactions, and \textit{(c)} Z-exchange and photon-exchange $\mu^{-}\rightarrow e^{-}$ in the presence of a nucleus (muon-to-electron conversion). The non-standard (cLFV or LFV) physics enters in the complicated vertex denoted by the bullet $\bullet$.} \label{fig.1} \end{figure} In Fig. \ref{fig.1} we show some nuclear-level Feynman diagrams representing the exchange of a $Z$-boson between a lepton and a nucleon for the cases of $\nu$-nucleus scattering in the SM (Fig. \ref{fig.1}(a)) and in the non-standard interactions of neutrinos with nuclei (Fig. \ref{fig.1}(b)). We also show the exchange of a $Z$-boson or a $\gamma$-photon in the $\mu^-\to e^-$ conversion, Fig. \ref{fig.1}(c) \cite{Kosm_PhysRep,Kos_Dep_Wal}. The leptonic vertex in the cases of Fig. \ref{fig.1}(b),(c) is a complicated one. A general effective Lagrangian that involves SM interactions ($\mathcal{L}_{\mathrm{SM}}$) and NSI ($\mathcal{L}_{\mathrm{NSI}}$) with a non-universal (NU) term and a flavour changing (FC) term can be written as \begin{equation} \mathcal{L}_{\mathrm{tot}} = \mathcal{L}_{\mathrm{SM}} + \mathcal{L}_{\mathrm{NSI}} = \mathcal{L}_{\mathrm{SM}} + \mathcal{L}_{\mathrm{NU}} + \mathcal{L}_{\mathrm{FC}} \, . 
\label{tot_Lagr} \end{equation} The individual components $\mathcal{L}_{\mathrm{SM}}$ and $\mathcal{L}_{\mathrm{NSI}}$ of this Lagrangian are explained in the next subsections. As a concrete example, it has been proposed \cite{Schechter-Valle} that even a small deviation from a unitary lepton mixing matrix may cause sizeable NSI effects and potentially large LFV \cite{Forero}. The non-trivial structure of the electroweak currents in low-scale seesaw Lagrangians leads to a non-unitary lepton mixing matrix $N_{\alpha \beta}$, which can be parametrized as $N\equiv(1- n)U$, where $U_{\alpha \beta}$ is a unitary matrix and $n_{\alpha \beta}$ a model-dependent non-standard matrix ($\alpha, \beta = e,\mu,\tau$) which takes a specific form within seesaw mechanisms \cite{Forero}. \subsection{Non-standard $\nu$-nucleus reaction cross sections} The neutral current non-standard neutrino interactions addressed here are described by a quark-level Lagrangian, $\mathcal{L}_{\mathrm{NSI}}$, parametrized (for energies $\ll M_{Z}$) as \cite{Barranco,Scholberg,Barranco-Walle} \begin{equation} \mathcal{L}_{\mathrm{NSI}} = - 2\sqrt{2} G_F \sum_{\begin{subarray}{c} f= \, u,d\\ \alpha,\beta = \, e,\mu,\tau\end{subarray}} \epsilon_{\alpha \beta}^{f P}\left[\bar{\nu}_{\alpha}\gamma_\rho L \nu_\beta\right]\left[\bar{f}\gamma^\rho P f\right], \label{NSI_Lagr} \end{equation} where three light neutrinos $\nu_{\alpha}$ with Majorana masses are considered, $f$ denotes a first-generation SM quark and $P=\lbrace L, R\rbrace$ are the chiral projectors. The Lagrangian (\ref{NSI_Lagr}) contains flavour-preserving non-SM terms, known as non-universal (NU) interactions, that are proportional to $\epsilon_{\alpha \alpha}^{f P}$, as well as flavour-changing (FC) terms proportional to $\epsilon_{\alpha \beta}^{f P}$, $\alpha\neq\beta$. These couplings are normalized to the strength of the Fermi coupling constant $G_F$ \cite{Barranco,Barranco-Walle}.
For the polar-vector couplings, in which we are mainly interested in the present work, it holds that $\epsilon_{\alpha\beta}^{f V}=\epsilon_{\alpha\beta}^{f L} + \epsilon_{\alpha\beta}^{f R}$, while for the axial-vector couplings $\epsilon_{\alpha\beta}^{f A}=\epsilon_{\alpha \beta}^{f L} - \epsilon_{\alpha\beta}^{f R}$. The nuclear physics aspects of the non-standard $\nu$-matter reactions can be studied by transforming the Lagrangian (\ref{NSI_Lagr}) to the nuclear level, where the hadronic current is written in terms of NC nucleon form factors (functions of the four-momentum transfer) \cite{Kos_Kov_Schm}. In the general case of the inelastic scattering of neutrinos on nuclei, the magnitude of the three-momentum transfer, $q = \vert{\bf q}\vert$, obtained from the kinematics of the reaction, is a function of the scattering angle $\theta$ of the outgoing neutrino (laboratory frame), the initial, $E_{i}$, and final, $E_{f}$, neutrino energies, as well as the excitation energy $\omega$ of the target nucleus, as $q^2=\omega^2+ 2 E_{i} E_{f} \left(1 - \cos \theta \right)$ \cite{Don-Wal,vtsak-tsk-1}. In the special case of the coherent (elastic) channel on which we focus in this work ($\omega=0$ and $E_i = E_f \equiv E_\nu$), only $gs \rightarrow gs$ transitions occur (for spin-zero nuclei) and we have $q^2 = 2 E_\nu^2 (1-\cos \theta)$, or $q = 2 E_\nu \sin (\theta/2)$.
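As a quick numerical check of these kinematic relations, the two expressions for $q$ can be evaluated side by side (a small Python sketch; the function names are ours):

```python
import math

def momentum_transfer(E_i, E_f, theta, omega=0.0):
    """General case: q^2 = omega^2 + 2 E_i E_f (1 - cos theta)."""
    return math.sqrt(omega ** 2 + 2.0 * E_i * E_f * (1.0 - math.cos(theta)))

def q_elastic(E_nu, theta):
    """Coherent (elastic) limit, omega = 0 and E_i = E_f = E_nu:
    q = 2 E_nu sin(theta / 2)."""
    return 2.0 * E_nu * math.sin(theta / 2.0)
```

For $\omega=0$ and $E_i=E_f$ the two expressions agree identically, since $2E_\nu^2(1-\cos\theta)=4E_\nu^2\sin^2(\theta/2)$.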
The coherent differential cross section with respect to the scattering angle $\theta$ for NSI $\nu$-nucleus processes is written as \begin{equation} \frac{d \sigma_{\mathrm{NSI},\nu_{\alpha}}}{d \cos \theta} = \frac{G_{F}^{2}}{2 \pi} E_{\nu}^{2} \left(1 + \cos \theta \right)\left\vert \langle gs \vert \vert G_{V,\nu_{\alpha}}^{\mathrm{NSI}}(q) \vert \vert gs \rangle \right \vert ^{2}, \label{NSI_dcostheta} \end{equation} ($\alpha = e,\mu,\tau$ denotes the flavour of the incident neutrinos), where $\vert gs \rangle$ represents the nuclear ground state (for even-even nuclei, like $^{48}\mathrm{Ti}$, $\vert gs \rangle=\vert J^\pi \rangle\equiv\vert 0^+ \rangle$). The nuclear matrix element, which arises from the Lagrangian (\ref{NSI_Lagr}), takes the form \begin{equation} \begin{aligned} & \left\vert {\cal M}^{\mathrm{NSI}}_{V,\nu_{\alpha}} \right \vert ^{2} \equiv \left\vert \langle gs \vert \vert G_{V,\nu_{\alpha}}^{\mathrm{NSI}}(q) \vert \vert gs \rangle \right \vert ^{2} = \\ & \left[ \left( 2 \epsilon_{\alpha \alpha}^{uV} + \epsilon_{\alpha \alpha}^{dV} \right) Z F_Z (q^2) + \left( \epsilon_{\alpha\alpha}^{uV} + 2\epsilon_{\alpha\alpha}^{dV} \right) N F_N (q^2) \right]^2 \\ & + \sum_{\beta \neq \alpha} \left[\left( 2 \epsilon_{\alpha \beta}^{uV}+ \epsilon_{\alpha \beta}^{dV} \right) Z F_Z (q^2)+ \left(\epsilon_{\alpha \beta}^{uV}+ 2 \epsilon_{\alpha\beta}^{dV} \right)N F_N (q^2)\right]^2, \end{aligned} \label{GV} \end{equation} ($\beta = e,\mu,\tau$), where $F_{Z(N)}$ denote the (electromagnetic) nuclear form factors for protons (neutrons), which enter through the CVC theory. We note that in the adopted NSI model the coherent NC $\nu$-nucleus reaction is not a flavour-blind process. By considering the nuclear structure details, the cross sections provided by Eq. (\ref{NSI_dcostheta}) become more realistic and accurate \cite{Scholberg} (in Ref.
\cite{Barranco} the variation of the nuclear form factor with the momentum transfer is neglected, which for supernova neutrino studies is a rather crude approximation \cite{Pap-Kosm-NPA}). From an experimental physics point of view, many neutrino detectors are more sensitive to the recoil energy of the nuclear target, $T_N$, than to the scattering angle, $\theta$. Therefore, it is also important to compute the differential cross sections $d\sigma/dT_N$. For coherent scattering the nucleus recoils (intrinsically it remains unchanged) with an energy which, in the approximation $T_{N} \ll E_{\nu}$ (low-energy limit), is maximized at $T_N^{\text{max}}=2 E_\nu^2/(M+ 2 E_\nu)$, with $M$ being the nuclear mass \cite{Giom-Vergados,Monroe_Fischer}. Then, to a good approximation, the square of the three-momentum transfer is equal to $q^2 = 2 M T_N$, and the coherent NSI differential cross section with respect to $T_{N}$ is written as \begin{equation} \frac{d\sigma_{\mathrm{NSI},\nu_{\alpha}}}{dT_N} = \frac{G_F^2 \,M}{\pi} \left(1- \frac{M\,T_N}{2 E_\nu^2}\right)\left\vert\langle gs\vert\vert G_{V,\nu_{\alpha}}^{\mathrm{NSI}} (q) \vert\vert gs \rangle \right \vert ^{2}\, . \label{NSI_dT} \end{equation} Both Eqs. (\ref{NSI_dcostheta}) and (\ref{NSI_dT}) are useful for studying the nuclear physics of NSI of neutrinos with matter. Furthermore, by performing numerical integrations in Eq. (\ref{NSI_dcostheta}) over the scattering angle $\theta$, or in Eq. (\ref{NSI_dT}) over the recoil energy $T_N$, one can obtain integrated (total) coherent NSI cross sections, $\sigma_{\mathrm{NSI},\nu_\alpha}$. The individual cross sections $\sigma_{\mathrm{NU,\nu_{\alpha}}}$ and $\sigma_{\mathrm{FC,\nu_{\alpha}}}$ may be evaluated accordingly \cite{Pap-Kosm-NPA}.
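The structure of Eqs. (\ref{GV}) and (\ref{NSI_dT}), and the numerical angle integration just mentioned, can be sketched as follows (natural units with $G_F$ in $\mathrm{MeV}^{-2}$ and $(\hbar c)^2$ converting $\mathrm{MeV}^{-2}$ to $\mathrm{cm}^2$; all function names and sample inputs are ours, for illustration only):

```python
import math

G_F = 1.16637e-11      # Fermi constant, MeV^-2 (natural units)
HBARC2 = 3.894e-22     # (hbar c)^2 in cm^2 MeV^2: converts MeV^-2 to cm^2

def nsi_msq(eps_u, eps_d, eps_fc, Z, N, FZ, FN):
    """Squared NSI matrix element of Eq. (GV): the diagonal (NU) term plus
    a sum over flavour-changing pairs eps_fc = [(eps_u_ab, eps_d_ab), ...].
    FZ, FN are the proton/neutron form factor values at the chosen q."""
    nu = ((2 * eps_u + eps_d) * Z * FZ + (eps_u + 2 * eps_d) * N * FN) ** 2
    fc = sum(((2 * eu + ed) * Z * FZ + (eu + 2 * ed) * N * FN) ** 2
             for eu, ed in eps_fc)
    return nu + fc

def dsigma_dT(E_nu, T_N, M, msq):
    """Eq. (NSI_dT): (G_F^2 M / pi)(1 - M T_N / (2 E_nu^2)) |M|^2, in
    cm^2/MeV; zero beyond the endpoint T_N^max = 2 E_nu^2 / (M + 2 E_nu)."""
    if T_N > 2.0 * E_nu ** 2 / (M + 2.0 * E_nu):
        return 0.0
    return (G_F ** 2 * M / math.pi) * (1.0 - M * T_N / (2.0 * E_nu ** 2)) \
        * msq * HBARC2

def total_xsec(E_nu, msq_of_q, n=2000):
    """Trapezoidal integration of Eq. (NSI_dcostheta) over cos(theta),
    with q = sqrt(2 E_nu^2 (1 - cos theta)); msq_of_q maps q to |M(q)|^2."""
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        c = -1.0 + i * h
        q = math.sqrt(max(0.0, 2.0 * E_nu ** 2 * (1.0 - c)))
        w = 0.5 if i in (0, n) else 1.0
        total += w * (G_F ** 2 / (2.0 * math.pi)) * E_nu ** 2 \
            * (1.0 + c) * msq_of_q(q)
    return total * h * HBARC2
```

For a momentum-independent matrix element the angle integral has the closed form $\sigma=(G_F^2/\pi)E_\nu^2\vert\mathcal{M}\vert^2$, which provides a convenient check of the quadrature.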
\subsection{SM coherent $\nu$-nucleus cross sections} At the low and intermediate neutrino energies considered in this Letter, the effective (quark-level) SM $\nu$-nucleus interaction Lagrangian, $\mathcal{L}_{\mathrm{SM}}$, reads \begin{equation} \mathcal{L}_{\mathrm{SM}} = - 2 \sqrt{2} G_{F} \sum_{ \begin{subarray}{c} f= \, u,d \\ \alpha= e, \mu,\tau \end{subarray}} g_P^f \left[ \bar{\nu}_{\alpha} \gamma_{\rho} L \nu_{\alpha} \right] \left[ \bar{f} \gamma^{\rho} P f \right], \label{SM_Lagr} \end{equation} where $g_P^f$ are the $P$-handed SM couplings of the $f$-quarks ($f=u,d$) to the $Z$-boson. We mention that, compared to previous studies \cite{Amanik_2005,Amanik_2007}, we have taken into consideration the $\nu-u$ quark interaction [see Eq. (\ref{GV})], in addition to the momentum dependence of the nuclear form factors. For coherent $\nu$-nucleus scattering, the SM angle-differential cross section is given by an expression similar to Eq. (\ref{NSI_dcostheta}), with the nuclear matrix element being that of the Coulomb operator $\hat{\mathcal{M}}_0(q)$ (the product of the zero-order spherical Bessel function times the zero-order spherical harmonic \cite{Don-Wal}). The corresponding matrix element can be cast in the form \cite{Kosm-A570} \begin{equation} \left\vert {\cal M}^{\mathrm{SM}}_{V,\nu_{\alpha}} \right \vert ^{2} \, \equiv \, \left\vert\langle gs \vert\vert \hat{\mathcal{M}}_0 \vert\vert gs \rangle\right \vert^2 = \left[g^p_V Z F_Z (q^2) + g^n_V N F_N (q^2) \right]^2 , \label{SM-ME} \end{equation} where $g^{p(n)}_V$ is the known polar-vector coupling of the proton (neutron) to the $Z$ boson (see Fig. \ref{fig.1}(a)). In the low-energy limit, one can also write in a straightforward manner the corresponding differential cross section with respect to the nuclear recoil energy, $T_N$ \cite{Giom-Vergados,Monroe_Fischer}.
In this work, starting from the original differential cross sections $d\sigma_{\lambda,\nu_\alpha}/d\cos\theta$ and $d\sigma_{\lambda,\nu_{\alpha}}/dT_N$, we evaluated individual angle-integrated cross sections of the form $\sigma_{\lambda,\nu_\alpha} (E_\nu)$, with $\alpha = e,\mu,\tau$ and $\lambda= \mathrm{tot, SM, NU, FP, FC}$, where under FC the six processes $\nu_e\leftrightarrow\nu_\mu, \, \nu_e\leftrightarrow\nu_\tau, \, \nu_\mu\leftrightarrow\nu_\tau$ are included (obviously, $\sigma_{\nu_\alpha\rightarrow\nu_\beta}=\sigma_{\nu_\beta\rightarrow \nu_\alpha}$), for both nuclei, $^{48}\mathrm{Ti}$ and $^{27}\mathrm{Al}$. A large part of these results is presented below and used to compute folded cross sections (for more results see Ref. \cite{Pap-Kosm-NPA}). \section{Results and discussion} \subsection{Nuclear Structure calculations} First, we studied the nuclear structure details of the matrix elements entering Eqs. (\ref{NSI_dcostheta})-(\ref{NSI_dT}) and Eq. (\ref{SM-ME}), which reflect the dependence of the coherent cross section on the incident $\nu$-energy $E_{\nu}$ and the scattering angle $\theta$ (or the recoil energy $T_{N}$). For the even-even $^{48}\mathrm{Ti}$ nucleus, the stopping target of the PSI \cite{Wintz,Dohmen} and PRIME \cite{PRIME,Kuno-PRIME} experiments, this study involves realistic nuclear structure calculations for the cross sections $d\sigma_{\lambda,\nu_{\alpha}}/d\cos\theta$ and $d\sigma_{\lambda,\nu_{\alpha}}/dT_N$, performed after constructing the nuclear ground state $\vert gs\rangle$ by solving the BCS equations iteratively \cite{vtsak-tsk-1}. Then, the nuclear form factors for protons (neutrons) are obtained as \cite{Kosm-A570} \begin{equation} F_{N_n}(q^2) = \frac{1}{N_n}\sum_j [j]\, \langle j\vert j_0(qr)\vert j\rangle\left(\upsilon^j_{N_n}\right)^2 \end{equation} with $[j]=\sqrt{2 j +1}$ and $N_{n}=Z \,\,(\mathrm{or}\,\, N)$, where $\upsilon^j_{N_{n}}$ denotes the occupation probability amplitude of the $j$-th single-nucleon level.
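Evaluating the BCS matrix elements $\langle j\vert j_0(qr)\vert j\rangle$ requires the radial single-particle wavefunctions; as a self-contained illustration of the momentum dependence that the form factors introduce, the sketch below uses instead the textbook uniform-density (hard-sphere) form factor $F(q)=3j_1(qR)/(qR)$, with $R=1.2\,A^{1/3}$ fm (our simplification, not the BCS form factor employed in the paper):

```python
import math

def hard_sphere_F(q, A):
    """Illustrative form factor for a uniform nuclear density of radius
    R = 1.2 A^(1/3) fm: F(q) = 3 j1(qR)/(qR), normalized so F(0) = 1.
    q is in MeV; hbar*c = 197.327 MeV fm makes qR dimensionless."""
    R = 1.2 * A ** (1.0 / 3.0)                    # nuclear radius, fm
    x = q * R / 197.327
    if x < 1e-3:                                  # small-x series, avoids 0/0
        return 1.0 - x * x / 10.0
    j1 = math.sin(x) / x ** 2 - math.cos(x) / x   # spherical Bessel j_1
    return 3.0 * j1 / x
```

The qualitative behaviour, $F(0)=1$ with a steady fall-off at the momentum transfers relevant for supernova neutrinos, is what makes neglecting the $q$-dependence a crude approximation at these energies.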
The chosen active model space consists of the lowest 15 single-particle $j$-orbits, $j \equiv (n , \ell, 1/2)j$, without a core, up to $N=4 \hbar \omega$ major harmonic oscillator quanta. The required monopole (pairing) residual interaction, obtained from a Bonn C-D two-body potential, was slightly renormalized with the two parameters $g^{p,n}_{\mathrm{pair}}$ ($g^{p}_{\mathrm{pair}}=1.056$ for proton pairs, and $g^{n}_{\mathrm{pair}}=0.999$ for neutron pairs). We note that we have devoted special effort to the accurate construction of the nuclear ground state, (i) because the coherent channel is the dominant one for the neutral current SM $\nu$-nucleus processes and we assumed that this holds also for NSI processes, and (ii) because in a next step we intend to perform extensive incoherent cross-section calculations where all accessible final nuclear states will be built on the present ground state. For the odd-$A$ $^{27}\mathrm{Al}$ nucleus (its ground state spin is $\vert gs\rangle =\vert J^\pi\rangle =\vert (5/2)^+\rangle$), the stopping target of the Mu2e and COMET experiments, we obtained the form factor $F_{Z}(q^{2})$ through a model-independent analysis (using a Fourier-Bessel expansion model) of the electron scattering data for the charge density distribution of this isotope \cite{deVries}. Since similar data for $F_{N}(q^{2})$ of $^{27}\mathrm{Al}$ are not available, we considered (to a rather satisfactory approximation) that $F_{N} \simeq F_{Z}$ (a difference of up to about $10 \%$ usually appears for medium and heavy nuclear systems \cite{deVries}). The momentum dependence of the nuclear form factors was ignored by some authors \cite{Barranco}, which is practically a good approximation at the low $\nu$-energies relevant for solar neutrinos, but for energies relevant to the supernova neutrinos addressed in this work it may lead to differences of even an order of magnitude \cite{Pap-Kosm-NPA}.
\subsection{Integrated coherent $\nu$-nucleus cross sections} \begin{table*}[ht] \centering \begin{tabular}{c|cccccccc} \hline \hline \\ $\nu_{\alpha}$ & $(A,Z)$ & $R_{\mathrm{tot}}$ & $R_{\mathrm{NU}}$ & $R_{\mathrm{FP}}$ & $R_{\nu_{\alpha} \leftrightarrow \nu_{e}}$ & $R_{\nu_{\alpha} \leftrightarrow \nu_{\mu}}$ & $R_{\nu_{\alpha} \leftrightarrow \nu_{\tau}}$ & \\ \hline & $^{48}\mathrm{Ti}$ & 1.037 & 0.002 & 0.905 & - & $0.121 \times 10^{-4}$ & 0.130 & \\[-1ex] \raisebox{1.5ex}{ $\nu_{e}$ } & $^{27}\mathrm{Al}$ & 1.044 & 0.003 & 0.902 & - & $0.130 \times 10^{-4}$ & 0.139 & \\ \hline & $^{48}\mathrm{Ti}$ & 1.293 & 0.001 & 0.929 & $0.121 \times 10^{-4}$ & - & 0.361 & \\[-1ex] \raisebox{1.5ex}{ $\nu_{\mu}$ } & $ ^{27}\mathrm{Al}$ & 1.318 & 0.001 & 0.927 & $0.130 \times 10^{-4}$ & - & 0.387 & \\ \hline \hline \end{tabular} \caption{ The ratios $R_{\lambda,\nu_{\alpha}}$ (for the definition see Eq. (\ref{R-lamb-alp}) in the text) of all possible $\nu_{\alpha} + (A,Z) \rightarrow\nu_{\beta} + (A,Z) $ processes. They have been evaluated at their asymptotic values, reached at $E_{\nu} \approx 120 \, \mathrm{MeV}$.} \label{table1} \end{table*} In the next step of our calculational procedure we obtained angle-integrated coherent $\nu$-nucleus cross sections by integrating Eq. (\ref{NSI_dcostheta}) numerically over angles [or Eq. (\ref{NSI_dT}) over $T_N$] for the various interaction components as \begin{equation} \sigma_{\lambda,\nu_{\alpha}}(E_\nu) = \int\frac{d\sigma_{\lambda,\nu_{\alpha}}}{d\cos \theta}(\theta , E_\nu) \,\, d\cos \theta \, , \end{equation} ($\lambda= \mathrm{tot, SM, NU, FP, FC }$). We found that the exotic FCNC processes $\nu_\alpha\rightarrow \nu_\beta$ in $^{48}\mathrm{Ti}$ have significantly lower cross sections than the SM one. Among the obtained FCNC $\nu$-nucleus cross sections, the most challenging result corresponds to the $\nu_{\mu} \rightarrow \nu_{e}$ transition (and to its lepton conjugate process, $\nu_e\to\nu_\mu$).
This is mainly due to the severe constraint $\epsilon_{\mu e}^{f P} = 2.9 \times 10^{-4}$ inserted in the Lagrangian (\ref{NSI_Lagr}), which has been derived from the experimental limits on the cLFV branching ratio of nuclear $\mu^-\to e^-$ conversion \cite{COMET,Mu2e-proposal,Mu2e,Bernstein}. We recall that in this work we have employed the NSI parameters $\epsilon_{\alpha \beta}^{f V}$ (except $\epsilon_{\mu e}^{f V}$) derived from various experimental bounds in Ref. \cite{Davidson}. By exploiting our cross sections $\sigma_{\lambda,\nu_{\alpha}}(E_\nu)$, we find it interesting to estimate the ratio of each of the individual cross sections, $\sigma_{\lambda,\nu_\alpha}$, to the SM cross section, defined as \begin{equation} R_{\lambda,\nu_{\alpha}}(E_{\nu}) = \frac{\sigma_{\lambda,\nu_{\alpha}}(E_{\nu})}{\sigma_{\mathrm{SM}}(E_{\nu})} , \qquad \lambda= \mathrm{tot, NU, FP, FC} \, . \label{R-lamb-alp} \end{equation} For $^{48}\mathrm{Ti}$, the latter ratios are initially slowly increasing functions of $E_\nu$, but eventually (for energies higher than about $80-120$ MeV) they tend asymptotically to the values listed in Table \ref{table1}. For $^{27}\mathrm{Al}$, however, the ratios $R_{\lambda,\nu_{\alpha}}$ are energy independent, which is a consequence of the different treatment applied in studying the nuclear structure details compared to that followed for $^{48}\mathrm{Ti}$. From the comparison of the results of Table \ref{table1} with those of the method of Ref. \cite{Barranco}, we conclude that our realistic calculations are important in the case of the $^{48}\mathrm{Ti}$ nucleus, where the BCS method gave us $F_N \neq F_Z$ and, hence, the results obtained for $R_{\lambda,\nu_{\alpha}}$ differ from those given by Ref. \cite{Barranco}. For $^{27}\mathrm{Al}$, however, for which we considered $F_{N} \simeq F_{Z}$, the dependence on the nuclear structure parameters in the numerator and denominator of Eq.
(\ref{R-lamb-alp}) cancels out and our predictions for $R_{\lambda,\nu_{\alpha}}$ are then equal to those of Ref. \cite{Barranco}. It is worth noting that some constraints coming from solar \cite{Friedland-solar} and atmospheric \cite{Friedland-atm2} neutrino data indicate that the NSI might be large, while the bound on $\epsilon_{\tau \tau}^{f V}$ allowed by the present experimental data is so loose that it leads to unrealistic results (the corresponding FP and NU cross sections, not included here, are larger than the SM ones) \cite{Davidson,Scholberg}. \subsection{Supernova neutrino fluxes and expected event rates} One of the most interesting connections of our present calculations with ongoing and future neutrino experiments is related to supernova $\nu$-detection. As is well known, in SN explosions most of the energy is released by $\nu$-emission. The total neutrino flux, $\Phi(E_\nu)$, arriving at a terrestrial detector then reads \cite{Horowitz,Biassoni} \begin{equation} \Phi(E_\nu) = \sum_{\alpha} \Phi_{\nu_{\alpha}} (E_\nu)=\sum_{\alpha} \frac{N_{\nu_{\alpha}}}{4\pi \, d^ 2}\, \eta_{\nu_{\alpha}}^{\mathrm{SN}} (E_\nu), \end{equation} ($\alpha = e, \mu, \tau$), where $N_{\nu_{\alpha}}$ is the number of (anti)neutrinos emitted from a supernova source at a typical distance (here we used $d = 8.5 \, \mathrm{kpc}$) and $\eta_{\nu_{\alpha}}^{\mathrm{SN}}$ denotes the energy distribution of the (anti)neutrino flavour $\alpha$ \cite{Giom-Vergados}. We assume that the emitted SN-neutrino energy spectra $\eta_{\nu_{\alpha}}^{\mathrm{SN}} (E_\nu)$ resemble Maxwell-Boltzmann distributions that depend on the temperature $T_{\nu_{\alpha}}$ of the (anti)neutrino flavour $\nu_\alpha$ ($\tilde{\nu}_\alpha$).
By convoluting the integrated cross section $\sigma_{\lambda,\nu_\alpha}(E_\nu)$ with the neutrino distributions, the signal produced on a terrestrial detector may be simulated as \begin{equation} \sigma^{sign}_{\lambda,\nu_\alpha}(E_\nu)=\sigma_{\lambda,\nu_\alpha}(E_\nu)\,\eta_{\nu_\alpha}^{\mathrm{SN}}(E_\nu). \label{eq.signal} \end{equation} \begin{table*}[ht] \centering \begin{tabular}{c|ccccccccc} \hline \hline \\ $\nu_{\alpha}$ & $(A,Z)$ & $\langle\sigma_{\mathrm{tot}}\rangle$ & $\langle\sigma_{\mathrm{SM}}\rangle$ & $\langle\sigma_{\mathrm{NU}}\rangle$ & $\langle\sigma_{\mathrm{FP}}\rangle$ & $\langle\sigma_{\nu_{\alpha} \rightarrow \nu_{e}}\rangle$ & $\langle\sigma_{\nu_{\alpha} \rightarrow \nu_{\mu}}\rangle$ & $\langle\sigma_{\nu_{\alpha} \rightarrow \nu_{\tau}}\rangle$ &\\ \hline & $ ^{48}\mathrm{Ti}$ & $5.32$ & $5.15 $ & $1.20 \times 10^{-2}$ & $4.66$ & - & $6.07 \times 10^{-5}$ & $6.50 \times 10^{-1}$ & \\[-1ex] \raisebox{1.5ex}{ $\nu_{e}$ } & $ ^{27}\mathrm{Al}$ & $1.57$ & $1.50 $ & $3.83 \times 10^{-3}$ & $1.35$ & - & $1.95 \times 10^{-5}$ & $2.09 \times 10^{-1}$ & \\ \hline & $ ^{48}\mathrm{Ti}$ & $19.6$ & $15.2 $ & $1.93 \times 10^{-2}$ & $14.2 $ & $1.80\times 10^{-4}$ & - & $5.36 $ & \\[-1ex] \raisebox{1.5ex}{ $\nu_{\mu}$ } & $ ^{27}\mathrm{Al}$ & $6.07$ & $ 4.61 $ & $6.42 \times 10^{-3}$ & $4.27 $ & $6.00 \times 10^{-5}$ & - & $1.78 $ & \\ \hline \hline \end{tabular} \caption{ Flux-averaged cross sections $\langle\sigma_{\lambda,\nu_{\alpha}}\rangle$ (in $10^{-40} \, \mathrm{cm}^2$) for various supernova neutrino spectra parametrized by Maxwell-Boltzmann distributions.} \label{table2} \end{table*} \begin{figure}[H] \begin{center} \includegraphics[width=0.40\textwidth]{Figure-2.ps} \end{center} \caption{ The convoluted cross sections, evaluated with Maxwell-Boltzmann distributions, that represent the expected signal to be recorded on a $^{48}\mathrm{Ti}$ $\nu$-detector, $\sigma_{\lambda,\nu_\alpha}^{sign}(E_\nu)$.
Due to the flavour dependence of the SN neutrino distribution, the energy window of the $\nu_{e}$ signal is narrower than those of $\nu_{\mu}$ and $\nu_{\tau}$ neutrinos.}\label{fig.3} \end{figure} The resulting signals, $\sigma^{sign}_{\lambda,\nu_\alpha}(E_\nu)$, obtained by inserting in Eq. (\ref{eq.signal}) the cross sections $\sigma_{\lambda,\nu_{\alpha}}$, are plotted in Fig. \ref{fig.3}. Note that, in contrast to the original cross sections, now $\sigma^{sign}_{\nu_\alpha\to \nu_\beta} \neq \sigma^{sign}_{\nu_\beta \rightarrow \nu_\alpha}$. Figure \ref{fig.3} shows that for incoming $\nu_\mu$ neutrinos the signal $\sigma^{sign}_{\lambda,\nu_\mu}$ covers an appreciably wider energy range than that of $\nu_e$, and that the maximum peak is shifted towards higher energies, following the features of the distributions $\eta_{\nu_\alpha}^{\mathrm{SN}}(E_\nu)$. The simulated cross sections of Fig. \ref{fig.3} reflect the characteristics of the incident neutrino spectrum of a specific flavour $\alpha$, each having its own peak position and width of the distribution $\eta_{\nu_\alpha}^{\mathrm{SN}}$. We recall that, as usual, for incoming $\nu_e$ neutrinos the distribution $(\eta_{\nu_e}^{\mathrm{SN}} + \eta_{\tilde{\nu}_e}^{\mathrm{SN}})/2$ is used. In SN neutrino simulations, another useful quantity is the flux averaged cross section \cite{Kosm-Oset}, which in our notation is written as \begin{equation} \langle\sigma_{\lambda,\nu_\alpha}\rangle = \int\sigma_{\lambda,\nu_\alpha}(E_\nu)\, \eta_{\nu_\alpha}^{\mathrm{SN}}(E_\nu) \, dE_\nu \, . \end{equation} The results for $\langle\sigma_{\lambda,\nu_\alpha}\rangle$, obtained by using our angle-integrated cross sections, are listed in Table \ref{table2}. We note that our flux averaged cross sections differ by about $30 \%$ from those of Ref. \cite{Barranco}.
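The flux-averaged cross section is a single quadrature once $\sigma(E_\nu)$ and $\eta(E_\nu)$ are tabulated; a sketch with a toy $E_\nu^2$-scaling cross section and a Maxwell-Boltzmann spectrum (both illustrative assumptions, not the actual nuclear-structure cross sections of this work):

```python
import numpy as np

def flux_averaged_xsec(sigma, eta, E):
    # <sigma> = integral of sigma(E) * eta(E) dE (trapezoidal quadrature).
    return np.trapz(sigma(E) * eta(E), E)

T = 4.0                                   # hypothetical spectrum temperature, MeV
E = np.linspace(0.0, 600.0, 60001)
eta = lambda E: E**2 * np.exp(-E / T) / (2.0 * T**3)  # normalized MB spectrum
sigma = lambda E: 1e-44 * E**2            # toy E^2 scaling of a coherent xsec, cm^2

avg = flux_averaged_xsec(sigma, eta, E)   # analytically 12 * 1e-44 * T^2
```

For this toy pair the integral is known in closed form ($\langle E^2\rangle = 12T^2$ for the MB spectrum), which makes a convenient convergence check of the grid.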
From an experimental physics perspective, it is also interesting to make predictions for the differential event rate of a $\nu$-detector \cite{Horowitz,Biassoni,vtsak-tsk-2}. The usual expression for computing the yield in events is based on the neutrino flux, $\Phi_{\nu_\alpha}$. Including the NSI of neutrinos with nuclei, the yield in events, $Y_{\lambda,\nu_{\alpha}}(T_N)$, is \cite{Horowitz,Biassoni} \begin{equation} Y_{\lambda, \nu_\alpha}(T_N) = N_t \int \Phi_{\nu_\alpha} \, dE_{\nu} \int \frac{d\sigma_{\lambda,\nu_\alpha}}{d\cos\theta} \, \delta\left(T_N - \frac{q^2}{2 M}\right) \, d\cos \theta \, , \label{diff.rate} \end{equation} where $N_t$ is the total number of nuclei in the detector material. Assuming a detector filled with one ton of $^{48}$Ti, we evaluated the differential event rates $Y_{\lambda,\nu_\alpha}(T_N)$ for several supernova scenarios. These results are plotted in Fig. \ref{fig.4}, where for each particular interaction the corresponding neutrino flux has been considered. We see that the respective results for the NU and FC processes, especially for the $\nu_\mu \rightarrow \nu_e$ transition, give appreciably small contributions, and that the lower the recoil energy, the larger the potentially detected number of events. Hence, for the observation of non-standard $\nu$-nucleus events, a detector medium with a very low recoil-energy threshold is required. With the above results for $Y_{\lambda, \nu_{\alpha}}(T_N)$, one can obtain the total number of counts by integrating Eq. (\ref{diff.rate}) above the energy threshold, $T_N^{thres.}$, of the detector in question. For the $^{48}\mathrm{Ti}$ nucleus, assuming $T_N^{thres.} \approx 1\, \mathrm{keV}$, we find about $13.5$ events/ton for the SM process but only $10^{-3}$ events/ton for the flavour changing $\nu_\mu\leftrightarrow \nu_e$ reaction, i.e.\ about four orders of magnitude fewer events \cite{Pap-Kosm-NPA}.
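The last step, integrating the differential rate above the detector threshold, can be sketched numerically; the rate profile below is a toy exponential fall-off, not the computed $Y_{\lambda,\nu_\alpha}(T_N)$ of this work:

```python
import numpy as np

def counts_above_threshold(T_N, Y, T_thres):
    # Total events/ton above the recoil threshold: integrate the
    # differential rate Y(T_N) over T_N >= T_thres (trapezoid rule).
    mask = T_N >= T_thres
    return np.trapz(Y[mask], T_N[mask])

T_N = np.linspace(0.0, 50.0, 5001)   # recoil-energy grid, keV
Y = 5.0 * np.exp(-T_N / 10.0)        # hypothetical rate, events/ton/keV
total = counts_above_threshold(T_N, Y, T_thres=1.0)
```

Lowering \texttt{T\_thres} in such a falling spectrum rapidly increases the recovered counts, which is the quantitative content of the low-threshold requirement stated above.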
We also conclude that, for making accurate predictions of the total number of counts, the nuclear structure parameters play a significant role. Thus, for the $\nu_\mu\rightarrow \nu_e$ transition we end up with about $29 \%$ fewer events compared to those given by the approximation of Ref. \cite{Barranco}. On the other hand, adding up the total number of events for the three SM processes of the form $\nu_\alpha \to \nu_\alpha$, we end up with only $2 \%$ fewer events than those provided by the formalism of Refs. \cite{Horowitz,Biassoni}. \begin{figure}[H] \begin{center} \includegraphics[width=0.40\textwidth]{Figure-3.ps} \end{center} \caption{ Differential event rate, $Y_{\lambda, \nu_{\alpha}}(T_N)$, as a function of the nuclear recoil energy, $T_N$, for a $^{48} \mathrm{Ti}$ $\nu$-detector. The line labeling is the same as in Fig. \ref{fig.3}.} \label{fig.4} \end{figure} It is worth noting that the choice of the target nucleus also plays a key role, since a light nuclear target may yield high-energy recoil tails but fewer counts. On the contrary, a heavy nuclear target provides more counts but yields low-energy recoils, making the detection more difficult. This leads to the conclusion that the best choice for a nuclear detector must consist of a combination of light and heavy nuclear isotopes \cite{Biassoni}. \subsection{New stringent limits on $\epsilon_{\mu e}^{f V}$ from $\mu^-\rightarrow e^-$ conversion} \label{section_NSI_limits} In the last part of this analysis, we exploit our channel-by-channel cross-section calculations in order to provide new limits for the NSI parameters $\epsilon_{\mu e}^{f P}$, coming out of the present and future experimental constraints of cLFV $\mu^-\rightarrow e^-$ conversion as follows. The authors of Ref. \cite{Davidson} (assuming that cLFV arises from loop diagrams involving virtual W's) found that the couplings of charged leptons with quarks are given by $C \epsilon_{\alpha\beta}^{f P}$, where $C \approx 0.0027$.
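Inverting the conversion-rate bound through the relation $\epsilon_{\mu e} = C^{-1}\sqrt{R_{\mu e}}$ used below reproduces the quoted coupling limits; a quick numerical check (the $R_{\mu e}$ limits are the PSI and earlier SINDRUM-II values cited in the text):

```python
import math

C = 0.0027  # loop factor relating eps to sqrt(R), from Davidson et al.

def eps_from_R(R_mu_e):
    # Invert the mu -> e conversion bound: eps_{mu e} = sqrt(R_{mu e}) / C.
    return math.sqrt(R_mu_e) / C

eps_PSI = eps_from_R(6.1e-13)   # PSI limit     -> ~2.9e-4
eps_old = eps_from_R(4.3e-12)   # earlier limit -> ~7.7e-4
```

Both outputs match the values quoted in the following paragraph to the precision given there.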
Consequently, for the $\nu_\mu\leftrightarrow\nu_e$ transition the NSI parameters are related to the experimental upper limits of $\mu^{-} \rightarrow e^{-}$ conversion as \cite{Davidson} \begin{equation} \epsilon_{\mu e}^{f P}= C^{-1} \sqrt{R_{\mu e}^{(A,Z)}} \, . \label{eps_mue} \end{equation} In our calculations up to this point, we used the value $\epsilon_{\mu e}^{f V} = 2.9 \times 10^{-4}$ resulting from the PSI upper limit, $R_{\mu e}^{Ti} < 6.1 \times 10^{-13}$ \cite{Wintz} (incidentally, this value is a more severe constraint than the value $\epsilon_{\mu e}^{f V} = 7.7 \times 10^{-4}$ used in \cite{Davidson}, which came out of the upper limit $R_{\mu e}^{Ti} < 4.3 \times 10^{-12}$ \cite{Dohmen}). Significantly lower upper limits on the NSI $\epsilon_{\mu e}^{f P}$ parameters of Eq. (\ref{R-lamb-alp}) are expected to be derived from the COMET, Mu2e, Mu2e at Project-X and PRIME/PRISM $\mu^-\to e^-$ conversion experiments. Then, one may compute new ratios $R_{\nu_\mu\leftrightarrow\nu_e}$ for the FC $\nu_e\leftrightarrow\nu_\mu$ reaction channel. The results for the NSI parameters $\epsilon_{\mu e}^{f V}$ and the respective ratios $R_{\nu_\mu \leftrightarrow \nu_e}$ are listed in Table \ref{table3}. \begin{table}[h] \centering \setlength{\tabcolsep}{0.5 em} \begin{tabularx}{0.48\textwidth}{{c|cccc}} \hline \hline \\ Parameter & COMET & Mu2e & Project-X & PRIME \\ \hline $\epsilon_{\mu e}^{f V} \times 10^{-6} $ & $3.70$ & $2.87$ & $0.52$ & $ 0.37$ \\ $R_{\nu_{\mu} \leftrightarrow \nu_{e}} \times 10^{-10}$ & $21.2$ & $13.0$ & $0.42$ & $0.19$ \\ \hline \hline \end{tabularx} \caption{Upper limits on the NSI parameters $\epsilon_{\mu e}^{f V}$ and the ratios $R_{\nu_\mu \leftrightarrow \nu_e}$ for the FC $\nu_\mu \leftrightarrow\nu_e$ reaction channel resulting from the sensitivity of the $\mu^-\rightarrow e^-$ conversion experiments.
} \label{table3} \end{table} Before closing, we find it interesting to plot the expected neutrino signals $\sigma^{sign}_{\nu_\mu\to\nu_e}(E_\nu)$ resulting from using the limits of Table \ref{table3} in two cases of $\nu$-spectra: (i) supernova neutrinos, and (ii) laboratory neutrinos originating e.g.\ from the BNB (Booster Neutrino Beamline) at Fermilab, known as pion decay-at-rest (DAR) neutrinos \cite{pion-DAR-nu,Louis}. In the first case, the simulated cross sections are obtained by employing the supernova $\nu$-spectra, $\eta_{\nu_{\alpha}}^{\mathrm{SN}}$, discussed before \cite{Horowitz,Biassoni}, and the results are illustrated in Fig. \ref{fig.5}(a). In the second case, the simulated cross sections are obtained by considering the laboratory neutrino distribution of the stopped pion-muon neutrinos produced according to the reactions $\pi^+ \to\mu^+ + \nu_\mu$, $\mu^+\to e^+ +\nu_e+\tilde{\nu}_\mu$ \cite{pion-DAR-nu,Louis}. In these experiments, the emitted $\nu_{e}$ neutrino spectrum is described by the normalized distribution $\eta_{\nu_{\alpha}}^{lab.}$, $\alpha=e,\mu$ \cite{Kosm-Oset,vtsak-tsk-2}. The simulated laboratory neutrino signal $\sigma^{sign}_{\nu_{e} \rightarrow \nu_{\mu}}$ is shown in Fig. \ref{fig.5}(b). As can be seen, in both cases the exceedingly high sensitivity of the designed experiments drastically reduces (compare Figs. \ref{fig.3} and \ref{fig.5}) the region of observation of the $\nu$-signals $\sigma^{sign}_{\nu_e \to \nu_\mu}(E_\nu)$. \begin{figure}[H] \begin{center} \includegraphics[width=0.40\textwidth]{Figure-4.ps} \end{center} \caption{Simulated $\nu$-signal, $\sigma^{sign}_{\nu_e \to\nu_\mu}$, of the FCNC process $\nu_e + (A,Z)\to \nu_\mu + (A,Z)$ in $^{48}$Ti, for the PSI and PRIME/PRISM experiments, and in $^{27}$Al, for the COMET, Mu2e and Mu2e at Project-X experiments: \textit{(a)} for supernova neutrinos and \textit{(b)} for pion-muon stopped neutrinos.
The shaded area represents the region of observation excluded by the increased sensitivity of the designed experiments. For each plot the relevant NSI parameter $\epsilon_{\mu e}^{f P}$ of Table \ref{table3} has been employed.} \label{fig.5} \end{figure} We should note that for models based on a non-unitary lepton mixing matrix (including seesaw), constraints on $n_{\alpha \beta}$ (related to $\epsilon_{\alpha \beta}^{f P}$ within normalisation factors \cite{Forero}) may be similarly derived. Obviously, for NSI involving both $d$ and $u$ quarks, $n_{\alpha \beta}$ enters the nuclear matrix elements of Eq. (\ref{GV}). \section{Conclusions} In conclusion, we explored NC non-standard $\nu$-nucleus processes with realistic nuclear structure calculations. As a first step, we evaluated cross sections for the dominant coherent channel (incoming neutrino energies $0 \le E_\nu \le 150$ MeV, a range that includes stopped pion-muon neutrinos, supernova neutrinos, etc.). We examined partial, integrated and total coherent cross sections and determined constraints for the ratios $R_{\nu_\alpha \to \nu_\beta}$ of all relevant reaction channels with respect to the SM cross section. Furthermore, we provided results for the differential event rates and the total number of events assuming one ton of $^{48}$Ti as $\nu$-detector material. In view of the operation of the muon-to-electron conversion experiments searching for the exotic $\mu^-\rightarrow e^-$ conversion, we concentrated on the $^{48}$Ti nucleus, previously used as a stopping target by the PSI experiment and recently proposed for use by the PRIME experiment at J-PARC. Similarly, we studied $^{27}$Al as a $\nu$-detector material, proposed to be used as a muon stopping target in the sensitive Mu2e and COMET experiments.
New stringent upper limits (up to even three orders of magnitude lower than those previously set) on the NSI (FC) parameters $\epsilon_{\mu e}^{f V}$ are extracted by using the experimental sensitivity of the $\mu^-\rightarrow e^-$ conversion experiments and our present results. By comparing our results with those of other methods, we concluded that the nuclear physics aspects (reflecting the reliability and accuracy of the cross sections) largely affect the coherent $gs \rightarrow gs$ transition rate, a result especially useful for supernova $\nu$-detection probes and low-energy laboratory neutrinos. Finally, we would like to remark that $\mu^{-} \rightarrow e^{-}$ transition experiments at sensitivities down to $10^{-16}-10^{-18}$ have excellent capabilities to search for evidence of new physics and to study its flavour structure. These well-designed experiments at Fermilab and at J-PARC could be the starting point of such a new effort, which would complement the neutrino programs. They have significant potential for constraining the NSI parameters and shedding light on FCNC processes in the leptonic sector, and specifically on the existence of charged-lepton mixing. \section{Acknowledgements} TSK wishes to thank Robert Bernstein and Graham Kribs for the financial support to attend the Project-X Physics Study 2012 and the warm hospitality he enjoyed at Fermilab. \section{References} \bibliographystyle{elsarticle-num}
\section{Introduction} Body dynamics and kinematics are essential for correctly understanding human activity contexts. The pressure exerted by the body on its surroundings, resulting from the combination of body mass distribution and skeletal movement, is one of the essential components of human body dynamics. It has various clinical applications, including detecting disorders such as hemiplegia, diagnosing injuries \cite{brund2017medial}, rehabilitation after surgery \cite{hwang2018estimation}, analyzing human locomotion \cite{taha2016finite}, personal fitness \cite{galarza2020effect,wkasik2019changes}, and bio-mechanics \cite{lorenzini2018synergistic}. Using physical tactile sensing devices such as wearables and external pressure sensors to measure the ground pressure exerted by the body can be time-consuming, expensive, and prone to malfunctions. One solution is to leverage neural networks to estimate ground pressure from other, comparatively easier to collect, sources of information, such as videos. This method has a specific advantage over physical pressure sensors, especially for prolonged monitoring and for tasks requiring a large area of operation. However, it is challenging to use body pose in combination with shapes extracted from videos to estimate ground pressure data. Some past works used deep neural networks to estimate the ground pressure exerted by the human body from data extracted from RGB videos \cite{scott2020image} \cite{clever2021bodypressure} but are limited to specific poses and body parts. \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{pic/5.png} \end{center} \caption{Illustration of the proposed end-to-end pipeline that consists of 1. Data Acquisition, 2. Pose Extraction, 3. Deformation Profile Simulation (inverse kinematics with mesh re-targeting, or volumetric pose and shape estimation), and 4.
Pressure Map Generation.} \label{fig:4} \end{figure*} We propose a novel solution to this complex problem by combining body shape and pose with physics-based 3D simulation to create an intermediate data type that encodes both body shape and position. In our first approach, we adopt a kinematics-based pose estimation model to extract 3D kinematics and then use inverse kinematics to create an animated sequence. It is re-targeted to a humanoid mesh of equivalent mass and identical bone structure to simulate deformation profiles using physics-based 3D simulation. Afterward, the simulated data and the extracted 3D kinematics are used to train our temporal neural network regressor to generate the synthetic pressure maps. Our second approach uses a volumetric model for pose estimation based on the SMPL+H \cite{pavlakos2019expressive} model for more accurate deformation profiles. The contributions of our work include the following: \begin{itemize} \item A novel approach to infer ground pressure maps exerted by the human body from monocular videos in the wild, using a combination of physics-based simulations and a 3D convolutional neural network, as illustrated in \cref{fig:4}. \item A pipeline to simulate intermediate deformation profiles from videos using either inverse kinematics with mesh re-targeting or volumetric pose and shape estimation, along with 3D physics-based simulation. \end{itemize} \section{Related Work} \label{rl} \subsection{Pose estimation} In computer vision, pose estimation predicts a person's joint positions from an image or a video. Predicting 3D pose using multi-camera motion capture systems \cite{nakano2020evaluation, freemocap} is problematic, since all cameras must be calibrated beforehand and synchronized with each other to predict the pose data accurately.
Another alternative is to use a monocular pose estimation model that uses the output of 2D pose estimation models like Mediapipe \cite{singh2021real}, Detectron \cite{wu2019detectron2}, and Openpose \cite{osokin2018real} to reconstruct the joint positions in 3D space, as done in VideoPose3D \cite{pavllo:videopose3d:2019} and Blazepose \cite{grishchenko2022blazepose}. We used Detectron2 and VideoPose3D for depth estimation to reconstruct the kinematic body joints in 3D space, identical to the approach of Scott et al. \cite{scott2020image}. For volumetric pose estimation, we used EasyMocap \cite{easymocap}, which estimates SMPL+H body shape and pose from monocular videos. \subsection{Physics simulation} Physics-based simulation recreates a physical phenomenon and predicts how real-world objects would react to it, given a defined initial state of the world \cite{sangaraju20213d}; the observed effects are then used to create more robust designs. It is used in the aviation and automobile industries for testing prototypes, visualizing cloth simulations \cite{li2021deep} and object deformations \cite{fang2021guaranteed}, and studying human bio-mechanics \cite{michaud2021biorbd}. We used Blender \cite{blender} and its Eevee render engine to render the contact points between a human model and the ground using rigid-body simulation and dynamic-paint deformation, similar to the approach of Clever et al. \cite{clever2021bodypressure}.
\section{Data Collection and Deformation Simulation} \label{dc} \subsection{Data Acquisition} To train our model, we collected an extensive bi-modal dataset of synchronized whole-body monocular videos and pressure map sequences using a Fitness-mat from Sensing.Tex with an \(80 \times 28\) sensor grid, a sensing area of \(560\times 1680\) \(mm \) with individual sensels of \(12\times 16 \) \(mm \), and a measuring range of 0-5000 millimeters of mercury (mmHg). The hardware consists of a non-elastic, non-slippery yoga mat made of Thermoplastic Polyolefin (TPE) polymers with an operating range of 5\% – 70\% humidity and 15$ ^\circ C $ – 45 $ ^\circ C $. The videos are collected using an 8-megapixel (\(3840\times2160\) pixels) iPhone 6 camera at Full HD \((1920\times1080p)\) resolution and 30 frames per second. We considered nine persons of diverse ages, sexes, and body characteristics while creating our dataset. We created a continuous yoga pose sequence containing 27 poses that involve complex body movements that anyone can perform, regardless of gender or age. Each sequence lasts about 20 minutes, and the subjects try to follow the routine through visual and audio inputs, making the total duration of the dataset around 9 hours. Information about each data collection session is detailed in \cref{tab2}.
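Aligning the two streams (30 fps video, roughly 10 fps pressure frames) amounts to a nearest-neighbour lookup on timestamps; a sketch with hypothetical timestamp arrays (the dataset itself was synchronized manually):

```python
import numpy as np

def sync_nearest(video_ts, pressure_ts):
    # For each pressure frame (~10 fps), the index of the nearest
    # video frame (30 fps) by timestamp.
    idx = np.searchsorted(video_ts, pressure_ts)
    idx = np.clip(idx, 1, len(video_ts) - 1)
    left, right = video_ts[idx - 1], video_ts[idx]
    return np.where(pressure_ts - left < right - pressure_ts, idx - 1, idx)

video_ts = np.arange(0.0, 2.0, 1 / 30)     # hypothetical 30 fps timestamps (s)
pressure_ts = np.arange(0.0, 2.0, 1 / 10)  # hypothetical ~10 fps timestamps (s)
pairs = sync_nearest(video_ts, pressure_ts)
```

With exact 3:1 frame-rate ratios every third video frame is selected; with drifting clocks the same lookup still returns the closest match.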
\begin{table} \begin{center} {\small{ \begin{tabular}{lcccc} \hline Subject & Mass(\(kg\)) & Height(\(cm\)) & Gender & Frames \\ \hline 1 & 74.3 & 178 & Male & 40788 \\ 2 & 76.3 & 183 & Male & 40428 \\ 3 & 72.2 & 178 & Male & 43895 \\ 4 & 94.7 & 172 & Male & 43630 \\ 5 & 60.2 & 157.5 & Female & 27932 \\ 6 & 57.7 & 162 & Female & 30359 \\ 7 & 57.2 & 160 & Female & 31600 \\ 8 & 52.3 & 162 & Male & 31608 \\ 9 & 57.3 & 159 & Female & 31023 \\ \hline Total & - & - & - & 321263 \\ \hline \end{tabular} }} \end{center} \caption{ Data Statistics Including Subject Mass (\(kilogram\)), Height (\(centimeter\)) and Gender.} \vspace{-20pt} \label{tab2} \end{table} The pressure data is converted to arrays of size \(f \times 80 \times 28\), where \(f\) is the total number of frames and \(80 \times 28 \in N^2 \) depicts the sensor grid size. The output sensor data has a range from 0 to 5000 mmHg. The collected video data has a frame rate of 30 fps, while the pressure map sequences have a frame rate of around 10 fps; both streams are manually synchronized using timestamps. For our first approach, we used Keypoint R-CNN from Detectron2, a 2D pose estimation model trained on the COCO dataset \cite{lin2014microsoft}, which predicts 17 key points at each frame. To reconstruct the 3D kinematics from the sequence of key point positions, we used VideoPose3D. It considers the temporal data while predicting the depth of the 2D key points and is superior to other models at predicting poses from videos of activities \cite{wang2021deep}. VideoPose3D is trained on the Human3.6M dataset \cite{ionescu2013human3}. The output body kinematics is an array of size \(f\times17\times3 \in \mathbb{R}^3\), where \(f\) is the total number of frames (seconds \(\times\) 10 fps). Each frame contains 17 joints with 3D (X, Y, Z) coordinates. For our second approach, we use EasyMocap, a volumetric pose estimation model that estimates both the 3D pose and body shape \cite{dong2020motion}.
Unlike our previous approach, this model tracks and estimates hands and legs by considering \(f\times25\times3 \in \mathbb{R}^3\) key points instead of \(f\times17\times3 \in \mathbb{R}^3\). The simulated intermediate data is more similar to the collected pressure maps compared to our first approach. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{pic/p.png} \end{center} \vspace{-10pt} \caption{Deformation profile simulation based on the inverse kinematic approach and the volumetric approach: extracted 3D pose, re-targeted 3D mesh, corresponding deformed ground plane inside the 3D simulation, and the difference between the deformations simulated by the two approaches. } \vspace{-10pt} \label{fig:2} \end{figure} \subsection{Deformation Profile Simulation} We present a deformation profile simulation pipeline using physics simulations, capable of the automated creation of large datasets of deformation profiles from the collected 3D pose data and the estimated 3D humanoid mesh. The correlation between collected pressure maps $P_{80 \times 28} \in [0, 5000]$ and simulated deformation profiles $\bar{P}_{80 \times 28} \in [0, 255]$ can be expressed by the following mathematical expression: \begin{equation} \resizebox{.9\hsize}{!}{$\left[ {\begin{array}{cccc} p_{1,1} & p_{1,2} & \cdots & p_{1,28}\\ p_{2,1} & p_{2,2} & \cdots & p_{2,28}\\ \vdots & \vdots & \ddots & \vdots\\ p_{80,1} & p_{80,2} & \cdots & p_{80,28}\\ \end{array} } \right] = \alpha\left[ {\begin{array}{cccc} \bar{p}_{1,1} & \bar{p}_{1,2} & \cdots & \bar{p}_{1,28}\\ \bar{p}_{2,1} & \bar{p}_{2,2} & \cdots & \bar{p}_{2,28}\\ \vdots & \vdots & \ddots & \vdots\\ \bar{p}_{80,1} & \bar{p}_{80,2} & \cdots & \bar{p}_{80,28}\\ \end{array} } \right]$} \label{eq:df} \end{equation} Where $\alpha$ is a multiplication factor, constant for all pressure points within a particular frame but varying from frame to frame, which needs to be estimated using neural networks.
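Although the paper estimates the per-frame factor $\alpha$ with neural networks, under the linear model above a closed-form least-squares fit exists per frame, which is useful as an illustrative sanity check on simulated data:

```python
import numpy as np

def alpha_lstsq(P, P_bar):
    # Least-squares scale relating a measured pressure frame P (mmHg)
    # to the simulated deformation profile P_bar (grey values): P ~ alpha * P_bar.
    den = float(np.sum(P_bar * P_bar))
    return float(np.sum(P * P_bar)) / den if den > 0 else 0.0

# Synthetic check: a profile scaled by a known factor is recovered exactly.
rng = np.random.default_rng(0)
P_bar = rng.integers(0, 256, size=(80, 28)).astype(float)
P = 17.5 * P_bar
alpha = alpha_lstsq(P, P_bar)
```

On real data the residual of this fit indicates how far a frame deviates from a single global scale, i.e. how much the network has to learn beyond $\alpha$.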
For our first approach, we convert the extracted 3D kinematics arrays into the Bio Vision Hierarchy (BVH) \cite{chung2004mcml} motion capture format using inverse kinematics. We used Blender \cite{blender}, an open-source 3D software, for most of our work on 3D mesh generation and physics simulation. We create a 3D humanoid mesh similar to the subject's body, rig it with a 17-joint CMU skeleton identical to that of our motion capture file, and re-target the motion capture file to our humanoid mesh; skeletal information, such as the relative distance between joints, is preserved and carried over to the rigged mesh during re-targeting. Then, we converted the humanoid mesh into a rigid body and assigned it a mass equivalent to the weight of the participant used for data collection. We created a deformable plane below the humanoid mesh and converted it to a dynamic paint canvas, while the humanoid mesh acts as a dynamic paint brush. At the points of contact between the humanoid mesh and the plane, the plane's vertices are distorted up to a certain threshold but return to their original shape once the contact is removed. To ensure the simulated deformation profiles do not suffer from any unintentional movement that may result in noise generation, we constrained the movement of the rigid body in the X and Y directions and its rotation in all directions, such that the mass of the body is the only factor that affects how the plane reacts to the human body. We created a custom render viewpoint of size \((80\times28)\) pixels from the inbuilt Blender camera using compositing nodes, identical to the sensor arrays of the Fitness-mat. The viewpoint captures the rendered output in terms of deformation profiles. Based on the proximity of each vertex of the plane from the rendering viewpoint, it is represented by a number from 255 to 0, which is analogous to varying pressure points.
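The proximity-to-grey encoding can be sketched as a linear depth mapping; the exact transfer function of the Blender compositing setup is an assumption here, with nearest vertices (deepest deformation) mapped to 255 and the undeformed plane to 0:

```python
import numpy as np

def depth_to_grey(z, z_near, z_far):
    # Encode per-vertex distance from the render viewpoint as an 8-bit
    # grey value: z_near (closest, deepest deformation) -> 255,
    # z_far (undeformed plane) -> 0.
    g = 255.0 * (z_far - np.clip(z, z_near, z_far)) / (z_far - z_near)
    return np.round(g).astype(np.uint8)

plane = np.full((80, 28), 2.0)  # hypothetical vertex distances from camera, m
plane[40, 14] = 1.0             # one fully deformed vertex
grey = depth_to_grey(plane, z_near=1.0, z_far=2.0)
```

Stacking one such \(80\times28\) frame per time step yields the \(f\times80\times28\) deformation array used downstream.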
The output is then rendered in grey-scale PNG format for each frame using the Eevee render engine. Afterward, OpenCV \cite{opencv_library} is used to convert the image sequences into an array of size \(f\times80\times28\). For our second approach, we imported the extracted volumetric SMPL+H poses into Blender using the SMPL Blender add-on. Then we followed the same process as before to simulate the deformation profiles. Each intermediate step, from generating the motion capture file from the 3D pose output to simulating the deformation profiles, is illustrated in \cref{fig:2} for both approaches. The main advantage of the deformation profiles simulated by the volumetric model-based approach over the inverse kinematics approach is that the shape and position of the limb terminals, as well as the overall shape of the human body, are considered in the animated mesh. \section{PresSim: Pressure Map Synthesis} \label{ps} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{pic/4.png} \end{center} \caption{Overall PresSim Architecture. PresSim consists of three networks: Temporal Pose Network (TPN), Temporal Deformation Network (TDN), and Pressure map Synthesis Network (PSN)} \vspace{-10pt} \label{fig: 3} \end{figure} \subsection{Problem Formulation} Our model aims to synthesize pressure sensor data directly from videos by utilizing an intermediate data type that can be simulated from the poses extracted from the videos using physics-based simulation. It is challenging to synthesize pressure maps from 3D kinematics alone using deep learning techniques, since the pressure maps depend heavily on the body's shape. 3D kinematics carry no information about the body shape, which is essential, in combination with the 3D pose, to generate accurate ground pressure maps. We used a physics-based simulation system to create the intermediate data type derived from the 3D activity information extracted from the videos.
We trained our network to learn the correlation between the kinematics and simulated deformation profiles and the corresponding body pressure map distribution. We used a temporal 3D CNN regressor that considers ten frames (approx.\ 1 s) of temporal pose sequence and the associated deformation profiles to generate a single frame of pressure profile with the help of a sliding window. The input for the deep learning model is ten sequential frames from the \(f\times17\times3 \in \mathbb{R}^3\) array of pose data and ten frames of simulated deformation profiles from the \(f\times80\times28 \in \mathbb{N}^2\) array, selected by two synchronized sliding windows. The ground truth is one frame from a \((f-10)\times80\times28 \in \mathbb{N}^2\) array of collected pressure maps. \subsection{PresSim Architecture} PresSim consists of three neural networks. Two 3D temporal convolutional neural networks, TPN and TDN, estimate 2D feature maps from the 3D kinematics and the 2D simulated deformation profiles, respectively. A 2D CNN called PSN concatenates the outputs of the first two networks to synthesize 2D pressure maps. All three networks, along with their inputs and outputs, are illustrated in \cref{fig: 3}. \paragraph{Temporal Pose Network (TPN)} The Temporal Pose Network (TPN) is a 3D temporal CNN that takes ten frames of the extracted pose of size \(10\times17\times3\) as input and produces a 2D feature map of size \(80\times28\) as its output. The primary purpose of TPN is to predict the average values of pressure across the horizontal axis, which are then used to generate the synthesized pressure maps. We used the Adam optimizer and L2 loss for training our network. The network estimates the pressure values at labeled 2D sensor positions \{$\bar{p}_1,\bar{p}_2...,\bar{p}_N$\}, where each $\bar{p}_j\in \mathbb{N}^2$ represents a 2D pressure estimate for sensor j.
We compute the loss from the squared Euclidean error on each sensor: \begin{equation} L_{TPN} = \sum_{n=1}^{N} (p_n - \bar{p}_n)^2 \label{eq:TPN} \end{equation} We used up-sampling layers to increase the input dimension to fit the collected pressure maps' size. We implemented a sliding window that takes ten sequential frames from the 3D array as a unit data point input to train our model. TPN has twelve layers and 15,361 parameters, of which 15,265 are trainable while 96 are non-trainable. \paragraph{Temporal Deformation Network (TDN)} The Temporal Deformation Network (TDN) is another 3D temporal neural network regressor that takes ten frames of the simulated deformation profiles of size \(10\times80\times28\) as input and produces a 2D array of size \(80\times28\) as its output. The primary purpose of TDN is to accurately predict the shape and the relative pressure values across the predicted shapes in the 2D feature map. Like TPN, we used the Adam optimizer and L2 loss to train our network. The network outputs an estimate of the pressure values at labeled 2D sensor positions \{$\bar{q}_1,\bar{q}_2...,\bar{q}_N$\}, where each $\bar{q}_j\in \mathbb{N}^2$ represents a 2D pressure estimate for sensor j. We compute the loss from the squared Euclidean error on each sensor: \begin{equation} L_{TDN} = \sum_{n=1}^{N} (q_n - \bar{q}_n)^2 \label{eq:TDN} \end{equation} A sliding window with the same clock as the one used in TPN takes ten sequential frames from the 3D array as a unit data point to train our model. TDN has eleven layers consisting of 40,563 parameters, of which 40,371 are trainable while 192 are non-trainable. \paragraph{Pressure map Synthesis Network (PSN)} The output of the TPN, a 2D feature map of size \(80\times28\), and that of the TDN, a 2D feature map of identical size, are concatenated using the Pressure Map Synthesis Network (PSN), a 2D convolutional neural network.
Similar to the previous networks, we used the Adam optimizer and L2 loss for training. The network outputs an estimate of the pressure values at labeled 2D sensor positions \{$({\alpha \bar{p}_1 + \beta \bar{q}_1}),({\alpha \bar{p}_2 + \beta \bar{q}_2})...,({\alpha \bar{p}_N + \beta \bar{q}_N})$\}, where each $\bar{p}_j, \bar{q}_j\in \mathbb{N}^2$ represents a 2D pressure estimate for sensor j derived from \cref{eq:TPN} and \cref{eq:TDN}, respectively. $\alpha$ and $\beta$ are variable weights optimized for the minimum loss. We compute the loss from the squared Euclidean error on each sensor: \begin{equation} L_{PSN} = \sum_{n=1}^{N} \left(\alpha |p_n - \bar{p}_n| + \beta |q_n - \bar{q}_n|\right)^2 \label{eq:PSN} \end{equation} Additionally, three 2D convolutional layers are used to fit the concatenated array to the dimension of the collected pressure maps. PSN consists of 23,873 parameters (23,777 trainable). The combined result of both models gives us more accurate synthetic pressure maps than using only one data type. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{pic/old1.png} \end{center} \caption{Reference video frame, ground truth collected using the Fitness-mat, pressure maps synthesized using PresSim based on the inverse kinematic and volumetric model approaches, the pressure map synthesized using the baseline model, and all points with differing pressure values, ignoring \(\pm\)10 mmHg.} \vspace{-15pt} \label{fig:5} \end{figure} \section{Experimental Results } \label{ex} \subsection{Implementation Details} Our network is implemented using TensorFlow Keras 2.9.1. We trained our model by minimizing the Mean Square Error loss with a learning rate of \(1e^{-4}\) and a batch size of 128, using Mean Absolute Error as the evaluation metric. Dropout layers of 30\% are used throughout TPN and TDN. We use our neural network to regress the confidence value through the \(relu\) activation function.
For PSN, a Concat layer combines the outputs of the first two models, and three subsequent Conv2D layers fit the dimension to the pressure map array. Our model is trained on a cluster with 128 GB of memory, four processors, and two Nvidia RTX A6000 GPUs. \subsection{Evaluation Metrics} \label{em} We used mean absolute error (MAE) as the metric to evaluate PresSim. For the dataset used to train PresSim, the data range from 0 to 5000, and the MAE depicts the deviation from the actual values. To assess the correctness of the synthesized pressure maps with respect to the collected sensor data, we used the R squared ($R^2$) metric, whose value ranges between 0 and 1. For comparing the pressure profile shapes synthesized by PresSim, $\bar{P}_{80 \times 28}$, with the ground truth, $P_{80 \times 28}$ (both with values in $[0,5000]$), we binarized both arrays to $\bar{P}_{80 \times 28}, P_{80 \times 28} \in \{0,1\}^{80\times28}$, where all values greater than zero are set to one. To create a truthful comparison of the precision of each synthesized pressure node against the ground truth, we calculated $R^2$ only at the points of contact. We indexed all points in the ground-truth array with non-zero pressure values and used this index list as a mask to extract the corresponding pressure data from the synthesized pressure maps and the ground truth. We then calculated the root mean square difference (RMSD) between them and used that RMSD value to calculate the corrected $R^2$ score. \cref{fig:5} illustrates the difference between the ground truth and synthesized pressure maps using mask-RMSD for our baseline model and for PresSim trained on data simulated using both approaches.
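The masking step described above (index the non-zero ground-truth nodes, then compare only at those contact points) can be sketched as follows; the helper name and toy maps are illustrative, and the RMSD-to-corrected-$R^2$ step is omitted since the paper does not spell out its exact formula:

```python
def contact_rmsd(gt, pred):
    """RMSD between prediction and ground truth, restricted to the
    ground-truth contact mask (non-zero pressure nodes)."""
    idx = [(i, j) for i, row in enumerate(gt)
           for j, v in enumerate(row) if v > 0]
    if not idx:
        return 0.0
    se = sum((gt[i][j] - pred[i][j]) ** 2 for i, j in idx)
    return (se / len(idx)) ** 0.5

# two contact nodes, each off by 10 pressure units
gt = [[0, 100], [50, 0]]
pred = [[0, 90], [60, 0]]
r = contact_rmsd(gt, pred)
```

Restricting the comparison to the mask keeps the large empty regions of the mat from inflating the apparent accuracy.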
For comparison, we trained a baseline model, identical to the Temporal Pose Network with some added fully connected layers, only on the 3D kinematics extracted from videos. \subsection{Qualitative Evaluation} Regarding mean absolute error, our baseline model has an average score of 19.7, while PresSim trained on inverse kinematics-based deformation profiles has an average MAE of 10.6 and the volumetric model-based PresSim an average of 10.4. The $R^2$ value calculated on the binarized pressure maps averages 0.659 for the baseline, while the averages for the inverse kinematics-based and volumetric model-based PresSim approaches are 0.909 and 0.948, respectively. The corrected $R^2$ calculated from the mask-RMSD has an average value of 0.407 over the entire dataset for our baseline model. Our inverse kinematics-based PresSim approach has an average value of 0.800, while the average is 0.811 for our volumetric-based PresSim approach. Both models improve significantly on the baseline model. The slight further improvement of volumetric-based PresSim stems from improved deformation profile generation and the absence of inverse kinematics errors in the simulation pipeline. All estimated average scores for each model are listed in \cref{table3}. \begin{table} \begin{center} {\small{ \begin{tabular}{p{2cm}|p{1cm}p{1.5cm}p{1.5cm}} \hline & \multicolumn{3}{c}{Average Score}\\ \hline Model & Baseline model & I.K.
PresSim & Volumetric PresSim \\ \hline MAE & 19.7 & 10.6 & 10.4 \\ Mask RMSD & 7.44 & 3.54 & 3.52 \\ Corrected $R^2$ & 0.407 & 0.800 & 0.811\\ Binarized $R^2$ & 0.659 & 0.909 & 0.948\\ \hline \end{tabular} }} \end{center} \caption{Comparison of average estimated scores for the baseline model and for PresSim trained on deformation profiles simulated with the inverse kinematics and volumetric models, respectively.} \vspace{-20pt} \label{table3} \end{table} \subsection{Faulty Cases} One faulty case we encountered was where the point of contact between the limbs and the ground was entirely wrong. This error is not sporadic but continuous across the prediction sequence and can be observed while the subjects perform a defined set of yoga positions across different data collection sessions. Any yoga sequence that involves extreme twisting or bending of part of the body leads to an inaccurate humanoid rigged sequence and a failure to correctly predict the ground contact while simulating the deformations. We assume the most probable cause lies in the use of inverse kinematics for translating pose to an animated sequence, as the error is absent from the PresSim model trained using our volumetric approach. Another failure case is where PresSim correctly predicts most of the shape of the point of contact between the body and the ground, but the values of most pressure points differ and some limb impressions are missing from the synthesized pressure maps. Because of its sporadic nature across the prediction sequence, we assume that incorrect pose estimation of certain joints in a frame leads to this kind of error, which can be observed in the PresSim models trained with either of our approaches. It could be mitigated by a more robust pose estimation setup, such as multi-camera pose estimation or monocular pose estimation from two angles, to synthesize the deformation.
\section{Conclusion} \label{co} In conclusion, we presented a framework combining video-to-3D activity reconstruction, physics simulation, and deep-learning data transformation to synthesize sensor data of dynamic pressure profiles from monocular videos of human activities. We evaluated our framework by comparing the floor pressure data generated by PresSim and by a baseline model with ground-truth sensor data collected using a smart fitness mat. Comparing the shapes of the synthesized data with the ground truth, the \(R^2\) score calculated over binarized pressure maps across the dataset is 0.948, demonstrating high similarity. We further show that at the level of individual sensor nodes, using a mask that considers only the points of contact between body and mat, the corrected \(R^2\) score between ground truth and synthesized pressure maps is 0.811 on average. This work introduces a sensing modality different from traditional pressure sensing devices, opening up new opportunities for estimating human body dynamics and for activity recognition unaffected by the limitations of contemporary pressure sensing devices, with potential applications in smart homes, healthcare, and rehabilitation. \section*{Acknowledgment} The research reported in this paper was supported by the BMBF (German Federal Ministry of Education and Research) in the project VidGenSense (01IW21003). \bibliographystyle{IEEEtran}
\section{Introduction} For sparse compressed signals, Donoho $et~al.$ \cite{2005Stable} proposed the compressive sensing theory, which enables efficient data sampling at a much lower rate than classical requirements; it can be modeled as follows in its standard formulation. $\bf{Notations}$: In this paper, matrices are represented by capital letters. For a matrix $A$, $A_{*i}$, $A_{i*}$ and $A_{ij}$ denote the $i$-th column, the $i$-th row and the $(i,j)$-th element of $A$, respectively; $\|\cdot\|_i$ represents the $\ell_i$-norm of a vector. All vectors are column vectors unless transposed to a row vector by the superscript $T$. Compressive sensing can be formulated as: \begin{eqnarray} b = Ax+\epsilon \end{eqnarray} where $x\in R^n$ is an unknown vector, $b\in R^m$ is an observed vector, $A\in R^{m\times n}$ is called the compressive sensing matrix (usually $m\ll n$), and $\epsilon$ is an unknown disturbance term or noise. Obviously, this is an underdetermined system of equations that does not have a unique solution. The least-squares method is usually used to solve the problem: \begin{equation} \min \limits_x \frac{1}{2}\|Ax - b\|_2^2 \end{equation} To suppress overfitting, some scholars \cite{2010On,2010Sparse,1999Sparse,2017A} added the $L_0$-norm regularizer to introduce sparse prior information: \begin{equation}\label{eq:L0} \min\limits_x \frac{1}{2}\| Ax - b\|_2^2 + \beta\|x\|_0 \end{equation} where $\|x\|_0$ denotes the number of nonzero components of $x$ and $\beta>0$ is a hyperparameter to control the tradeoff between accuracy and sparsity. Many methods have been developed to solve this problem, such as the penalty decomposition method \cite{2012Sp}, the iterative hard thresholding method \cite{2008Iterative}, the fixed-point continuation method (FPC) \cite{2008Fixed}, the proximal gradient homotopy method (PGH) \cite{2012A} and reweighted $L_0$ minimization methods \cite{6743943,2015Zhao}. However, Eq.
(\ref{eq:L0}) is an NP-hard optimization problem \cite{1995Sparse}, which is highly discrete and therefore difficult to solve with an exact algorithm. Thus, we need to seek an effective approximate solution to this problem. The $L_1$-norm regularizer is introduced as a substitute for the $L_0$-norm. Such an approximation can be traced back to a wide range of fields, such as seismic traces \cite{1979Deconvolution}, sparse signal recovery \cite{2001Atomic}, sparse model selection in statistics (LASSO) \cite{1996Regression}, and image processing \cite{1970Total}. Many scholars have attempted to find the optimal solution to the following problem: \begin{equation} \min\limits_x \frac{1}{2}\|Ax - b\|_2^2 + \beta \| x \|_1 \end{equation} It is a convex continuous optimization problem with a single nondifferentiable point ($x=0$), which can usually be transformed into a second-order cone programming problem and then solved by methods such as interior-point methods. However, in large-scale problems, due to its high algorithmic complexity, the interior-point method is very time-consuming. Based on this, many researchers have solved the problem through simple gradient-based methods. Among them, the iterative shrinkage-thresholding algorithm (ISTA) proposed by Chambolle $et~al.$ \cite{Chambolle1998,2003An} has attracted much attention. ISTA updates $x$ through a shrinkage/soft threshold operation in each iteration: \begin{equation} x^{k + 1} = soft_{\beta t}[x^k - 2tA^T( {Ax^k - b} )] \end{equation} where $k$ represents the $k$-th iteration, $t$ is an appropriate stepsize and $soft$ is the soft threshold operator \begin{equation} soft_\theta (x_i) = sign(x_i)\max( |x_i| - \theta,\, 0 ) \end{equation} Recently, iteratively weighted shrinkage-thresholding algorithms (IWSTA), which outperform their unweighted counterparts in most cases, have attracted much interest compared with ISTA.
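For concreteness, the ISTA iteration above can be sketched in NumPy; the toy problem, random seed, and the particular step size $t$ are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def soft(x, theta):
    # soft threshold: sign(x) * max(|x| - theta, 0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, b, beta=0.1, n_iter=200):
    # step size chosen from the spectral norm so the iteration is stable
    t = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # x^{k+1} = soft_{beta t}[x^k - 2t A^T (A x^k - b)]
        x = soft(x - 2.0 * t * A.T @ (A @ x - b), beta * t)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]
b = A @ x_true
x_hat = ista(A, b, beta=0.01, n_iter=500)
```

For a small $\beta$ and a consistent system, the iterates drive the residual $\|Ax-b\|_2$ toward zero while keeping $x$ sparse.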
In these methods, the decision variables and weights are optimized alternately, or the decision variables are optimized under heuristically chosen weights. IWSTA can be written as: \begin{equation} \min \limits_{x,~w\geq0} \frac{1}{2}\| Ax - b\|_2^2 + \beta \sum\limits_{i = 1}^n w_i |x_i| \end{equation} The method assigns different weights to each component of $x$ in the iterative process and then updates $x$. In this way, each subproblem is convex and easy to solve. Many algorithms have been developed to solve it. For example, the iterative support detection (ISD) method \cite{2009Wang} assigns a weight of 0 to components in the support set and a weight of 1 to the other components during iteration, where the support set at each iteration consists of all components whose absolute values are greater than a threshold. Zhao $et~al.$ \cite{2015Zhao} proposed a new method to calculate the optimal $w$ through the duality of linear programming, based on a property of the weighted range space. It alternately solves the weighted original problem with fixed weights to obtain a new solution $x$, and then solves the dual problem to obtain a new weight $w$. More variants are available in \cite{6743943,David2006For} and the references therein. The details of some examples are listed in Tab. \ref{tab:existmethods}. \begin{table}[width=.9\linewidth,cols=4,pos=h] \caption{Variants of weighted methods.} \label{tab:existmethods} \begin{tabular*}{\tblwidth}{@{} LCCCCC@{} } \toprule Author & Termed & Weights &Min.&Max.
& Regularizer\\ \midrule Chambolle $et~al.$ \cite{Chambolle1998} & ISTA & 1 & 1&1 &$\sum\limits_{i = 1}^n | x_i| $ \\ Candes $et~al.$ \cite{EJ2008} & IRL1 & $\frac{1}{|x_i^{k - 1}| + \delta }$& 0&$\frac{1}{\delta}$& $\sum\limits_{i = 1}^n \log (| x_i| + \delta) $ \\ Foucart $et~al.$ \cite{2009Sparsest} & WLP &$\frac{1}{( {|x_i^{k - 1}| + \delta } )^{1 - p}}$& 0&$\frac{1}{\delta^{1-p}}$& $\frac{1}{p}\sum\limits_{i = 1}^n ( |x_i| + \delta ) ^p$ \\ Wipf $et~al.$ \cite{2010Iterative} & NW4 & $\frac{1+(|x_i^{k - 1}| + \delta)^{p + 1}}{( | x_i^{k - 1}| + \delta )^{p + 1}}$ & 0&1&$\sum\limits_{i = 1}^n ( |x_i | - \frac{1}{(x_i+ \delta)^p})$ \\ \bottomrule \end{tabular*} \end{table} The above methods share a drawback: the weights do not meet the usual definition of weights, whose sum should be one, so they can be distributed over a very large range (see Tab. \ref{tab:existmethods}). Such weights are difficult to interpret and can lead to inaccurate results. This paper proposes a new IWSTA variant, called entropy regularized IWSTA (ERIWSTA), with easily computable and interpretable weights. The weights automatically fall in the range $[0, 1]$ and sum to one, so they can be regarded as the probability of the contribution of each attribute to the model. This is achieved by adding an entropy regularizer to the cost function and then using the Lagrange multiplier method to solve the problem. Experiments on CT image restoration show that the proposed algorithm performs better in terms of both convergence speed and restoration accuracy than some state-of-the-art methods. \section{Methodology} The main idea of IWSTA-type algorithms is to define a weight for each attribute based on the current iterate ${x^k}$ and then use the weights to obtain a new $x$.
In this section, we introduce an entropy regularizer into the cost function and obtain the following optimization model: \begin{eqnarray}\label{eq:mainmodel} \min &&\Phi _{\beta ,\gamma }(x,w) = F(x)+ \beta G_{\gamma}(x,w)\nonumber\\ s.~t. && w_i\geq 0, ~\sum_{i=1}^n w_i =1\nonumber\\ where&&F(x) = \frac{1}{2}\|Ax-b\|_2^2\nonumber\\ &&G_{\gamma}(x,w)=\sum_{i=1}^n w_i |x_i|+\gamma\sum_{i=1}^n {w_i}\ln {w_i} \end{eqnarray} where $\gamma\geq0$ is a given hyperparameter. If we do not use the entropy regularizer, $w$ is trivially solved as $w_i=1$ if $x_i=\arg\min\{|x_1|, ..., |x_n|\}$, and 0 otherwise$\footnote{The update rule can be easily explained by an example as \begin{equation} \begin{aligned} \min~\{4, -1, 5\}= \min&~4w_1-1w_2+5w_3\\ s.~t.&~ w_1, w_2, w_3\ge0\\ &~w_1+w_2+w_3=1\nonumber \label{eq9} \end{aligned} \end{equation} The solution is $w_1=0$, $w_2=1$ and $w_3=0$, in which $w_2$ corresponds to the minimum value of \{4, -1, 5\}. It is very similar to the computation of the weights in the k-means algorithm. }$. This shows the simple fact that only one element of $w$ is 1 and the others are 0, which is grossly incompatible with the actual problem. Therefore, we add the negative entropy of the weights to measure the uncertainty of the weights and to stimulate more attributes to help signal reconstruction, because it is well known in information theory that $\sum_{i=1}^n {w_i}\ln {w_i}$ is minimized when \begin{equation} w_1=w_2=...=w_n \end{equation} In the following, we alternately solve for $w$ and $x$ in Eq. (\ref{eq:mainmodel}). \subsection{Update rule for $w$} To solve for $w$, we introduce a Lagrange multiplier $\lambda$ and obtain the following Lagrange function. Note that $F(x)$ is a constant with respect to $w$, so we only construct the Lagrange function on $G_{\gamma}(x,w)$.
\begin{equation} L(w,\lambda) = G_{\gamma}(x,w) + \lambda(\sum_{i = 1}^n w_i - 1) \end{equation} Setting the partial derivatives of $L(w,\lambda)$ with respect to $w_i$ and $\lambda$ to zero yields the following two equations: \begin{eqnarray} \frac{\partial L(w,\lambda)}{\partial w_i}&=& |x_i| + \gamma (1 + \ln {w_i})+\lambda = 0\label{eq:Lag1} \\ \frac{\partial L(w,\lambda)}{\partial \lambda}&=& \sum_{i = 1}^n w_i - 1=0\label{eq:Lag2} \end{eqnarray} From Eq. (\ref{eq:Lag1}), we know that \begin{equation}\label{eq:wi} w_i = \exp(- \frac{\lambda}{\gamma})\exp(-\frac{|x_i|}{\gamma}) \end{equation} Substituting Eq. (\ref{eq:wi}) into Eq. (\ref{eq:Lag2}), we have \begin{equation} \sum_{i = 1}^n {w_i} = \exp(- \frac{\lambda}{\gamma})\sum_{i = 1}^n \exp(-\frac{|x_i|}{\gamma}) = 1 \end{equation} It follows that \begin{equation} \exp(- \frac{\lambda}{\gamma})=\frac{1}{\sum_{i = 1}^n \exp(-\frac{|x_i|}{\gamma})} \end{equation} Substituting this expression into Eq. (\ref{eq:wi}), we obtain \begin{equation} w_i = \frac{\exp(-\frac{|x_i|}{\gamma})}{\sum_{l = 1}^n \exp(-\frac{|x_l|}{\gamma})} \end{equation} Such weights certainly satisfy the constraints $w_i\geq0$ and $\sum_{i = 1}^n w_i=1$. \subsection{Update rule for $x$} Inspired by the work on ISTA \cite{2009A}, a similar approach is adopted for the iterative update of $x$. The construction of a majorization is an important step in obtaining the updating rule. \begin{definition}\label{definition:surrogate}(Majorization) Denote $\psi(x|x^k)$ as a majorization for ${\Psi}(x)$ at $x^k$ (fixed) if $\psi(x^k|x^k)={\Psi}(x^k)$ and ${\psi}(x|x^k)\geq {\Psi}(x)$. \end{definition} Clearly, ${\Psi}(x)$ is nonincreasing under the updating rule $x^{k+1}=\arg\min_x \psi(x|x^k)$ because \begin{eqnarray} {\Psi}(x^{k+1})\leq \psi(x^{k+1}|x^k)\leq \psi(x^k|x^k)={\Psi}(x^k) \end{eqnarray} Then, we can construct the majorization for $F(x)$.
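Before moving on, note that the closed-form weight update just derived is exactly a softmax of $-|x_i|/\gamma$. A small sketch with toy values (the inputs are illustrative):

```python
import math

def entropy_weights(x, gamma):
    """w_i proportional to exp(-|x_i|/gamma), normalized to sum to one."""
    e = [math.exp(-abs(xi) / gamma) for xi in x]
    s = sum(e)
    return [ei / s for ei in e]

# same toy values as the footnote example {4, -1, 5}
w = entropy_weights([4.0, -1.0, 5.0], gamma=1.0)
```

As $\gamma \to 0$ the weights collapse onto the smallest $|x_i|$ (the unregularized solution), while a large $\gamma$ pushes them toward the uniform distribution.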
\begin{proposition} $F(x)$ is a differentiable convex function with Lipschitz continuous gradient, which has a majorization function at the fixed current iterate $x^k$: \begin{eqnarray} f(x,x^k) = F(x^k)+[\nabla F(x^k)]^T(x-x^k)+\frac{L}{2}\|x-x^k\|_2^2 \end{eqnarray} where $L$ is larger than or equal to the maximum eigenvalue of $A^TA$. \end{proposition} \begin{proof} It is well known that \begin{equation} F(x) = \frac{1}{2}\|Ax-b\|_2^2=F(x^k)+[\nabla F(x^k)]^T(x-x^k)+\frac{1}{2}(x-x^k)^TA^TA(x-x^k) \end{equation} Comparing $F(x)$ and $f(x,x^k)$, only the last terms differ. By the eigendecomposition of the symmetric positive semidefinite matrix $A^TA=Q^T\Sigma Q$, where $Q$ is an orthogonal matrix consisting of all eigenvectors and $\Sigma$ is a diagonal matrix of the eigenvalues, and letting $z=x-x^k$, we have \begin{equation} z^T(A^TA)z=z^TQ^T\Sigma Qz\leq L\|Qz\|_2^2=L\|z\|_2^2 \end{equation} Moreover, $z^TA^TAz = L\|z\|_2^2=0$ when $x=x^k$, so $f(x^k,x^k)=F(x^k)$. Thus, the proof is established. \end{proof} Now, we obtain the majorization for the cost function $\Phi_{\beta,\gamma}(x,w)$ on $x$: \begin{equation} \phi(x,x^k)=f(x,x^k)+\beta G_{\gamma}(x,w) \end{equation} which can be reorganized as \begin{eqnarray} \phi(x,x^k)&=& \frac{L}{2}\| {x - [x^k - \frac{1}{L}\nabla F ( x^k )]} \|_2^2 + \beta G_{\gamma}(x,w)\nonumber\\ &=&\sum_{i=1}^n\{\frac{L}{2}\| x_i - [x^k - \frac{1}{L}\nabla F ( {{x^k}}) ]_i \|_2^2 + \beta w_i|x_i|\}+constant \end{eqnarray} The variables in the majorization are separable, so the minimization can be carried out on each $x_i$ individually: \begin{equation} x_i^{k + 1} = soft_{\frac{\beta w_i}{L}}\big( [x^k - \frac{1}{L}A^T( {A{x^k} - b} )]_i \big) \end{equation} \begin{figure*} \centering \includegraphics[scale=.4]{1.pdf} \caption{The original and noisy head phantom images.
$(a)$ head phantom with $256\times256$ pixels; $(b)$ and $(c)$ blurred images with a 5$\times$5 uniform kernel and additive Gaussian noise with $\sigma=10^{ - 2}$ and $\sigma=10^{ - 3}$, respectively.} \label{FIG:1} \end{figure*} \begin{figure*} \subfigure[]{ \begin{minipage}[b]{0.5\textwidth} \centering \includegraphics[scale=.60]{2-1.pdf} \end{minipage} \subfigure[]{ \begin{minipage}[b]{0.5\textwidth} \centering \includegraphics[scale=.60]{2-2.pdf} \end{minipage}} \caption{3D profile of $\beta$ and $\gamma$ on MAE with different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.} \label{FIG:hyperparameter} \end{figure*} \begin{table}[width=.9\linewidth,cols=4,pos=h] \caption{The optimal MAE value and corresponding hyperparameters (Gaussian noise with $\sigma=10^{-2}$).} \label{tbl2} \begin{tabular*}{\tblwidth}{@{} LLLLL@{} } \toprule Termed & $\beta$ & $\gamma$ & $\delta$ & MAE\\ \midrule ISTA & ${10^{-3}}$ & $-$ & $-$ & ${5.312077\times10^{-7}}$ \\ WLP & ${10^{-5}}$ & ${10^{ - 10}}$& ${10^{ - 3}}$ & ${5.228672\times10^{-7}}$ \\ NW4 & ${10^{-5}}$ & ${10^{ - 2}}$ & ${10^{ - 3}}$ & ${5.410231\times10^{-7}}$ \\ IRL1 & ${10^{-5}}$ & $-$ & ${10^{ - 3}}$ & ${5.228672\times10^{-7}}$ \\ ERIWSTA & ${10^{2}}$& ${10^{-2}}$ & $-$ & ${5.218246\times10^{-7}}$ \\ \bottomrule \end{tabular*} \end{table} \begin{table}[width=.9\linewidth,cols=4,pos=h] \caption{The optimal MAE value and corresponding hyperparameters (Gaussian noise with $\sigma=10^{-3}$).} \label{tbl3} \begin{tabular*}{\tblwidth}{@{} LLLLL@{} } \toprule Termed & $\beta$ & $\gamma$ & $\delta$ & MAE\\ \midrule ISTA & ${10^{-3}}$ & $-$ & $-$ & ${5.122013\times10^{-7}}$ \\ WLP & ${10^{-5}}$ & ${10^{ - 5}}$ & ${10^{ - 3}}$ & ${5.018339\times10^{-7}}$ \\ NW4 & ${10^{-5}}$ & ${10^{ - 2}}$ & ${10^{ - 3}}$ & ${5.410231\times10^{-7}}$ \\ IRL1 & ${10^{-5}}$ & $-$ & ${10^{ - 3}}$ & ${5.018340\times10^{-7}}$ \\ ERIWSTA & ${10^{2}}$& ${10^{-2}}$ & $-$ & ${5.005524\times10^{-7}}$ \\ \bottomrule \end{tabular*} \end{table} \begin{figure*} \centering \subfigure[]{
\includegraphics[scale=.5]{3-1.pdf} } \subfigure[]{ \includegraphics[scale=.5]{3-2.pdf} } \caption{Cost function versus iteration number for different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.} \label{FIG:CostCurve} \end{figure*} \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=.5]{4-1.pdf} } \subfigure[]{ \includegraphics[scale=.5]{4-2.pdf} } \caption{MAE versus iteration number for different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.} \label{FIG:MAECurve} \end{figure*} \begin{figure} \centering \includegraphics[scale=.75]{5-1.pdf} \caption{After 30 iterations, the denoising results of ISTA, WLP, NW4, IRL1 and ERIWSTA under Gaussian noise with $\sigma=10^{-2}$.} \label{FIG:ImageHighNoise} \end{figure} \begin{figure} \centering \includegraphics[scale=.75]{5-2.pdf} \caption{After 30 iterations, the denoising results of ISTA, WLP, NW4, IRL1 and ERIWSTA under Gaussian noise with $\sigma=10^{-3}$.} \label{FIG:ImageLowNoise} \end{figure} \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=.33]{6-1.pdf} } \subfigure[]{ \includegraphics[scale=.33]{6-2.pdf} } \caption{Horizontal central profiles of the restored images with different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.} \label{fig:centralline1} \end{figure*} \begin{figure*} \centering \subfigure[]{ \includegraphics[scale=.33]{7-1.pdf} } \subfigure[]{ \includegraphics[scale=.33]{7-2.pdf} } \caption{Vertical central profiles of the restored images with different Gaussian noise levels: (a) $\sigma=10^{-2}$ and (b) $\sigma=10^{-3}$.} \label{fig:centralline2} \end{figure*} \section{Numerical experiments} Numerical experiments are provided to evaluate the performance of the proposed ERIWSTA compared with ISTA, WLP, NW4 and IRL1 on the denoising problem of computed tomography (CT) images.
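Before turning to the experiments, the two update rules derived above can be combined into a single ERIWSTA iteration. A NumPy sketch, with illustrative problem sizes and hyperparameters and the step size $1/L$ from the majorization:

```python
import numpy as np

def eriwsta(A, b, beta, gamma, n_iter=100):
    L = np.linalg.norm(A, 2) ** 2          # >= max eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # entropy-regularized weights: softmax of -|x_i|/gamma
        e = np.exp(-np.abs(x) / gamma)
        w = e / e.sum()
        # weighted soft threshold applied to the gradient step
        g = x - A.T @ (A @ x - b) / L
        x = np.sign(g) * np.maximum(np.abs(g) - beta * w / L, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 40]] = [2.0, -1.0]
b = A @ x_true
x_hat = eriwsta(A, b, beta=0.01, gamma=1.0, n_iter=300)
```

Components with small magnitude receive larger weights and are thresholded more aggressively, which is what encourages sparsity in the iterates.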
All experiments are performed on an HP computer with a 2.5 GHz Intel(R) Core(TM) i7-4710MQ CPU and 12 GB of memory, using MATLAB R2019a for coding. A simulated Shepp-Logan phantom with $256\times256$ pixels, commonly used in CT image analysis, was used to evaluate algorithm performance. There are many advantages to using simulated phantoms, including prior knowledge of the pixel values and the ability to control noise. We blurred the image using a uniform $5\times5$ kernel (applied by the MATLAB function "\emph{fspecial}") and then added Gaussian noise by the following formula: \begin{equation} x^{noise}=x^{true}+N(0,\sigma) \end{equation} We select $\sigma=10^{ - 2}$ and ${10^{ - 3}}$ as examples of high and low noise levels for the following experiments. Fig. \ref{FIG:1} shows the original and blurred-and-noisy images. Owing to the good time-frequency localization characteristics of the wavelet transform, it can effectively distinguish high-frequency noise from low-frequency information; therefore, the wavelet transform is used to reduce noise. The introduction of the wavelet matrix also ensures the sparsity of the whole optimization problem. Without loss of generality, let $A = PW$, where $P$ is the predetermined system matrix encoding the blurring and $W$ represents the second-order Haar wavelet matrix. The mean absolute error (MAE) was used to measure the similarity to the true image. It is calculated by taking the average of the absolute differences between the restored pixel values and the true pixel values: \begin{equation} MAE = \frac{1}{N}{||x^{restoration} - x^{true} ||_1} \end{equation} \subsection{Hyperparameter selection} To select the penalty hyperparameter $\beta$ and the entropy hyperparameter $\gamma$, we compare the MAE values after 100 iterations as they range from $10^{ - 10}$ to $10^{ 10}$. The results are shown in Fig.
\ref{FIG:hyperparameter}, demonstrating that ERIWSTA achieves a consistently low MAE over a wide range of $\beta$ and $\gamma$, which displays its robustness. We also quantitatively report the optimal MAE and corresponding hyperparameters of the algorithms in Tabs. \ref{tbl2} and \ref{tbl3}. An interesting observation is that, regardless of the noise level, the restoration accuracy of our algorithm is always better than that of the others. These optimal hyperparameters are also used in the following experiments. \subsection{Algorithmic performance} Fig. \ref{FIG:CostCurve} displays the cost function of the algorithms. As can be seen, the proposed algorithm always has the fastest convergence speed, arriving at a stable state early. Fig. \ref{FIG:MAECurve} shows the MAE curves of the algorithms with respect to the number of iterations. The proposed ERIWSTA always performs better than the other algorithms, rapidly attaining the minimum MAE value. Figs. \ref{FIG:ImageHighNoise} and \ref{FIG:ImageLowNoise} show the denoising results at the given noise levels. As can be seen, all of the algorithms achieve a similar image. However, Figs. \ref{fig:centralline1} and \ref{fig:centralline2} quantitatively compare the horizontal and vertical profiles of the restored images with those of the true phantom in the central row and column. We can see that ERIWSTA follows the outline of the phantom more accurately than the other algorithms. \section{Conclusions} In this paper, a new IWSTA variant, called ERIWSTA, is proposed to solve the linear inverse problem. An entropy weighted term is introduced to measure the certainty of the weights, and the Lagrange multiplier method is then used to obtain a simple solution. Experimental results on the restoration of a synthetic CT head phantom show that ERIWSTA achieves faster convergence with fewer iterations and better restoration accuracy than the other algorithms.
However, as with many existing algorithms, our algorithm also involves two main hyperparameters ($\beta$ and $\gamma$). In the future, we will focus on designing an automatic method to adjust these hyperparameters. \section*{Acknowledgments} This work was supported by the Fundamental Research Funds for the Central Universities (N2019006 and N180719020). \printcredits
\marginparwidth 0pt \oddsidemargin 0pt \evensidemargin 0pt \topmargin 30pt \textheight 21.0 truecm \textwidth 16.5 truecm \title{{\LARGE\bf Population Monotonicity in Matching Games} \footnote{Supported in part by National Natural Science Foundation of China (Nos.\,12001507, 11971447 and 11871442) and Natural Science Foundation of Shandong (No.\,ZR2020QA024).}} \author{Han Xiao and Qizhi Fang } \affil{School of Mathematical Sciences\\Ocean University of China\\Qingdao, China\\ \{hxiao,~qfang\}$@$ouc.edu.cn} \date{} \begin{document} \maketitle \openup 1.2\jot \begin{abstract} A matching game is a cooperative profit game defined on an edge-weighted graph, where the players are the vertices and the profit of a coalition is the maximum weight of matchings in the subgraph induced by the coalition. A population monotonic allocation scheme is a collection of rules defining how to share the profit among players in each coalition such that every player is better off when the coalition expands. In this paper, we study matching games and provide a necessary and sufficient characterization for the existence of population monotonic allocation schemes. Our characterization also implies that whether a matching game admits population monotonic allocation schemes can be determined efficiently. \hfill \noindent\textbf{Keywords:} cooperative game theory, matching game, population monotonic allocation scheme. \noindent\textbf{Mathematics Subject Classification:} 05C57, 91A12, 91A43, 91A46. \noindent\textbf{JEL Classification:} C71, C78.
\hfill \hfill \end{abstract} \section{Introduction} Matching games, which capture matching markets with transferable utilities, form one of the cornerstones of cooperative game theory. Roughly speaking, a matching game is a cooperative profit game defined on an edge-weighted graph, where the players are the vertices and the profit of a coalition is the maximum weight of matchings in the subgraph induced by the coalition. The following setting, taken from \cite{BKP12, EK01, Vazi21}, vividly illustrates the underlying scenario of matching games. Consider a group of tennis players that will participate in a doubles tournament. To represent the underlying structure, we introduce a weighted graph $G = (V, E;w)$. The vertices are the players, an edge $ij$ represents that players $i$ and $j$ are compatible doubles partners, and $w: E\rightarrow \mathbb{R}_{+}$ is a function where $w_{ij}$ represents the expected prize money if $i$ and $j$ partner up in the tournament. The total prize money for any subgroup $S\subseteq V$ of players in the doubles tournament is the maximum weight of matchings in the edge-weighted subgraph induced by $S$. In particular, a matching game is called a \emph{simple matching game} when $w\equiv1$ and an \emph{assignment game} when $G$ is bipartite. An essential issue in a cooperative profit game is how to distribute the total profit among the players in a coalition. There are many criteria for evaluating how ``good'' an allocation is, such as stability, fairness, and satisfaction. Emphases on different criteria lead to different allocation concepts, e.g., the core, the Shapley value, and the nucleolus. Various allocation concepts have been studied extensively and intensively for matching games. The core, which addresses the issue of stability for the grand coalition, is one of the most attractive allocation concepts. Shapley and Shubik \cite{SS71} show that the core of assignment games is always non-empty. Deng et al.
\cite{DIN99} provide a complete characterization of the core non-emptiness of matching games. Eriksson and Karlander \cite{EK01} initiate the study of extreme core allocations for matching games. N{\'u}{\~n}ez and Rafels \cite{NR03} achieve a complete characterization of extreme core allocations for assignment games. Toda \cite{Toda05} proposes an axiomatic characterization of the core of assignment games. Klaus and Nichifor \cite{KN10} investigate the relation of the core to other allocation concepts for matching games. Recently, Vazirani \cite{Vazi21} studies the approximate core and achieves the best possible approximation factor. The nucleolus, which maximizes the minimum satisfaction among all players, is another well-studied allocation. Solymosi and Raghavan \cite{SR94} propose an efficient algorithm for computing the nucleolus of assignment games. Kern and Paulusma \cite{KP03} introduce an efficient algorithm for computing the nucleolus of simple matching games. Llerena et al. \cite{LNR15} characterize the nucleolus by properties of the core and the kernel for assignment games. Recently, K\"{o}nemann et al. \cite{KPT20} show that the nucleolus of matching games can be computed efficiently, which resolves an outstanding open problem proposed by Faigle et al. \cite{FKFH98}. In addition to allocations, convexity is also a desirable property to study in cooperative profit games, as convex games possess nice properties both economically and computationally. However, Solymosi and Raghavan \cite{SR01} show that even assignment games are hardly convex. Recently, Kumabe and Maehara turned to generalizations of matching games and succeeded in characterizing the convexity of $b$-matching games \cite{KM20-IJCAI} and hypergraph matching games \cite{KM20-AAMAS}, respectively. In this paper, we study population monotonic allocation schemes in matching games.
An allocation scheme is a collection of rules defining how to share the profit among players in every coalition. An allocation scheme is population monotonic if every player is better off when the coalition expands. Population monotonic allocation schemes possess an appealing snowball effect, i.e., the incentive to join a coalition increases as the coalition grows larger \cite{Spru90}. Moreover, population monotonic allocation schemes yield group strategyproof mechanisms which resist collusion among players \cite{Moul99, MS01}. However, very few results are known even for allocation schemes. Deng et al. \cite{DINZ00} study allocation schemes consisting of core allocations for every coalition and achieve a sufficient characterization for simple matching games. Immorlica et al. \cite{IMM08} study the limitation of approximate population monotonic allocation schemes and show that no constant approximation factor exists for matching games. In this paper, we complete this line of research by providing a necessary and sufficient characterization for the existence of population monotonic allocation schemes in matching games. The remainder of this paper is organized as follows. In Section \ref{sec:preliminaries}, some basic notions in cooperative game theory are reviewed. Section \ref{sec:PM} is devoted to a complete characterization for the existence of population monotonic allocation schemes in matching games. Section \ref{sec:disscussion} summarizes the results of this paper and discusses directions for future work. \section{Preliminaries} \label{sec:preliminaries} A \emph{cooperative game} $\Gamma=(N,\gamma)$ consists of a \emph{player set} $N$ and a \emph{characteristic function} $\gamma:2^N\rightarrow \mathbb{R}$ with the convention $\gamma(\emptyset)=0$. We call $N$ the \emph{grand coalition} and call $S$ a \emph{coalition} for any $S\subseteq N$.
An \emph{allocation} of $\Gamma$ is a vector $\boldsymbol{x}=(x_i)_{i\in N}$ specifying how to distribute the profit among players in the grand coalition $N$. A \emph{core allocation} is an allocation $\boldsymbol{x}=(x_i)_{i\in N}$ satisfying \emph{efficiency} and \emph{coalitional rationality} conditions, \begin{enumerate} \item[\textendash] \emph{efficiency}: $\sum_{i\in N} x_{i}=\gamma(N)$; \item[\textendash] \emph{coalitional rationality}: $\sum_{i\in S}x_i \geq \gamma(S)$ for any $S\subseteq N$. \end{enumerate} The \emph{core} is the set of all core allocations. An \emph{allocation scheme} of $\Gamma$ is a collection of allocations $\mathcal{X}=(\boldsymbol{x}_S)_{S\in 2^N \backslash \{\emptyset\}}$ specifying how to distribute the profit among players in every nonempty coalition $S\subseteq N$. A \emph{population monotonic allocation scheme} (\emph{PMAS} for short) is an allocation scheme $\mathcal{X}=(\boldsymbol{x}_S)_{S\in 2^N \backslash \{\emptyset\}}$ satisfying \emph{efficiency} and \emph{monotonicity} conditions, \begin{enumerate} \item[\textendash] \emph{efficiency}: $\sum_{i\in S} x_{S,i}=\gamma(S)$ for any $S\in 2^N\backslash \{\emptyset\}$; \item[\textendash] \emph{monotonicity}: $x_{S,i}\leq x_{T,i}$ for any $S, T\in 2^N\backslash \{\emptyset\}$ with $S\subseteq T$ and any $i\in S$. \end{enumerate} We call $\Gamma$ \emph{population monotonic} if it admits a PMAS. Now we define matching games. Let $G=(V,E;w)$ be a graph with an edge weight function $w:E\rightarrow \mathbb{R}_+$. Throughout this paper, we always assume that $w_e>0$ for any $e\in E$, since every edge $e$ with $w_e=0$ can be removed without affecting the maximum weight of matchings. The \emph{matching game} on $G=(V,E;w)$ is the cooperative profit game $\Gamma_{G}=(N,\gamma)$, where $N=V$ and $\gamma (S)$ equals the maximum weight of matchings in the induced subgraph $G[S]$ for any $S\subseteq N$.
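The definitions above lend themselves to a direct computational check. The following Python sketch (our illustration, not part of the paper; all function and variable names are our own) computes the characteristic function $\gamma$ of a matching game by brute-force enumeration of matchings, and tests the efficiency and monotonicity conditions of a candidate PMAS. It is feasible only for small graphs, which suffices for exploring the structures studied below.

```python
from itertools import combinations

def gamma(edges, S):
    """gamma(S): maximum weight of a matching in the induced subgraph G[S].
    edges maps frozenset({u, v}) -> positive weight; brute-force enumeration."""
    S = frozenset(S)
    induced = [(e, w) for e, w in edges.items() if e <= S]
    best = 0.0
    for k in range(len(induced) + 1):
        for sub in combinations(induced, k):
            covered = set()
            is_matching = True
            for e, _ in sub:
                if covered & e:          # a vertex used twice -> not a matching
                    is_matching = False
                    break
                covered |= e
            if is_matching:
                best = max(best, sum(w for _, w in sub))
    return best

def is_pmas(edges, players, scheme, tol=1e-9):
    """Check the efficiency and monotonicity conditions of a PMAS.
    scheme maps each nonempty frozenset S -> dict {i: x_{S,i} for i in S}."""
    coalitions = [frozenset(c) for r in range(1, len(players) + 1)
                  for c in combinations(players, r)]
    for S in coalitions:
        if abs(sum(scheme[S].values()) - gamma(edges, S)) > tol:
            return False                 # efficiency fails on S
    for S in coalitions:
        for T in coalitions:
            if S < T and any(scheme[S][i] > scheme[T][i] + tol for i in S):
                return False             # monotonicity fails on S subset of T
    return True
```

For example, on the path graph with edges $12$ and $23$ and weights $w_{12}=2$, $w_{23}=3$, the scheme that gives the entire profit of each coalition to vertex $2$ (and nothing to the leaves) satisfies both conditions.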
\section{Characterizing population monotonicity} \label{sec:PM} To characterize population monotonic matching games, we first study properties of fundamental structures in Subsection \ref{subsec:structures}, then identify some forbidden structures in Subsection \ref{subsec:obstructions}, and finally achieve a necessary and sufficient characterization in Subsection \ref{subsec:characterization}. In the remainder of this section, $G=(V,E;w)$ denotes a simple graph with weight function $w:E\rightarrow \mathbb{R}_+$, and $\Gamma_G=(N,\gamma)$ denotes the matching game on $G$, unless stated otherwise. \subsection{Fundamental structures} \label{subsec:structures} This subsection is devoted to properties of fundamental structures in population monotonic matching games. A graph is a \emph{complete graph} if every pair of vertices is connected by an edge. A complete graph with $n$ vertices is denoted by $K_n$. A graph is a \emph{path graph} if it is a tree with maximum degree no more than $2$. A path graph with $n$ vertices is denoted by $P_n$. A graph is a \emph{cycle graph} if it is a connected graph where every vertex has degree $2$. A cycle graph with $n$ vertices is denoted by $C_n$. A graph is a \emph{paw graph} if it is isomorphic to the third graph in Figure \ref{fig:structures}. A graph is a \emph{diamond graph} if it is isomorphic to the last graph in Figure \ref{fig:structures}. As we shall see, graphs in Figure \ref{fig:structures} are fundamental structures (induced subgraphs) in graphs inducing population monotonic matching games. Thus we study properties of these structures first. \begin{figure}[t] \vspace{-1em} \centerline{\includegraphics[width=1.5\textwidth]{fig_structures.pdf}} \vspace{-2em} \caption{Fundamental structures} \label{fig:structures} \vspace{-1em} \end{figure} \begin{lemma}[$K_3$-property] \label{thm:K3} Let $H\subseteq G$ with $E(H)=\{12,13,23\}$ be an induced subgraph with $w_{12}\leq w_{13}\leq w_{23}$.
If $\Gamma_G$ is population monotonic, then $w_{23} \geq w_{12}+w_{13}$. \end{lemma} \begin{proof} Let $\mathcal{X}=(\boldsymbol{x}_{S})_{S\in 2^N \backslash \{\emptyset\}}$ be a PMAS in $\Gamma_{G}$. By efficiency and monotonicity of PMASes, we have \begin{equation}\label{eq:C3} \begin{split} 2 w_{23} = & ~ 2 \gamma (\{1,2,3\})\\ = & ~ ( x_{\{1,2,3\},1}+x_{\{1,2,3\},2}+x_{\{1,2,3\},3} ) + ( x_{\{1,2,3\},1}+x_{\{1,2,3\},2}+x_{\{1,2,3\},3} )\\ \geq & ~ (x_{\{1,2\},1}+x_{\{1,2\},2}) + (x_{\{1,3\},3}+x_{\{1,3\},1}) + (x_{\{2,3\},2}+x_{\{2,3\},3})\\ = & ~ \gamma(\{1,2\})+ \gamma(\{1,3\})+ \gamma(\{2,3\})\\ = & ~ w_{12}+w_{13}+w_{23}. \end{split} \end{equation} It follows that $w_{23}\geq w_{12}+w_{13}$. \end{proof} The following lemma provides a unified framework to develop more results on graphs inducing population monotonic matching games. We remark that the structure in Lemma \ref{thm:cover} \emph{\ref{itm:2K3}} never occurs in population monotonic matching games, but the statement serves as an intermediate result in proving Lemma \ref{thm:K4}. \begin{lemma} \label{thm:cover} Let $S=\{1,2,3,4\}$ be a vertex subset of $G$. Let $H_1=G[\{1,2,3\}]$ and $H_2=G[\{2,3,4\}]$ be two induced subgraphs of $G$. \begin{enumerate}[label={\emph{($\roman*$)}}] \item \label{itm:2P3} Assume that $E(H_1)=\{12,23\}$ and $E(H_2)=\{23,34\}$. If $\Gamma_G$ is population monotonic, then $w_{23} \geq w_{12}+w_{34}$. \item \label{itm:P3K3} Assume that $E(H_1)=\{12,23\}$, $E(H_2)=\{23,24,34\}$ and $w_{23}\geq w_{24}$. If $\Gamma_G$ is population monotonic, then $w_{23}\geq w_{12}+w_{34}$ and $w_{23}\geq w_{24}+w_{34}$. \item \label{itm:2K3} Assume that $E(H_1)=\{12,13,23\}$, $E(H_2)=\{23,24,34\}$ and $w_{12}\geq \max\{w_{13}, w_{23}\}$. If $\Gamma_G$ is population monotonic, then $w_{24}\geq w_{23}+w_{34}$.
\end{enumerate} \end{lemma} For the vertex subset $S$ and induced subgraphs $H_1$, $H_2$ in Lemma \ref{thm:cover}, we say that $H_1$ and $H_2$ make a \emph{$2 P_3$-cover} of $S$ in \emph{\ref{itm:2P3}}, a \emph{$(P_3, K_3)$-cover} of $S$ in \emph{\ref{itm:P3K3}}, and a \emph{$2 K_3$-cover} of $S$ in \emph{\ref{itm:2K3}}, respectively. \begin{proof} Let $\mathcal{X}=(\boldsymbol{x}_{S})_{S\in 2^N \backslash \{\emptyset\}}$ be a PMAS in $\Gamma_{G}$. By efficiency and monotonicity of PMASes, we have \begin{equation} \label{eq:4cover} \begin{split} &~ \gamma (\{1,2,3\})+\gamma (\{2,3,4\})\\ = &~ (x_{\{1,2,3\},1}+x_{\{1,2,3\},2}+x_{\{1,2,3\},3})+(x_{\{2,3,4\},2}+x_{\{2,3,4\},3}+x_{\{2,3,4\},4})\\ \geq &~ (x_{\{1,2\},1}+x_{\{1,2\},2}) + (x_{\{2,3\},3}+x_{\{2,3\},2}) + (x_{\{3,4\},3}+x_{\{3,4\},4}) \\ = &~ \gamma(\{1,2\})+ \gamma(\{2,3\})+ \gamma(\{3,4\})\\ = &~ w_{12}+w_{23}+w_{34}. \end{split} \end{equation} \emph{\ref{itm:2P3}} Assume that $E(H_1)=\{12,23\}$ and $E(H_2)=\{23,34\}$. Thus \begin{equation} \label{eq:P4} \gamma (\{1,2,3\})+\gamma (\{2,3,4\})= \max \{w_{12},w_{23}\}+\max \{w_{23},w_{34}\}. \end{equation} We claim that $w_{23}> \max\{w_{12},w_{34}\}$. Assume to the contrary that $w_{23}\leq \max\{w_{12},w_{34}\}$. We distinguish two cases and show that neither is possible. If $w_{23}\leq \min\{w_{12},w_{34}\}$, then $w_{12}+w_{34} \geq w_{12}+w_{23}+w_{34}$ follows from \eqref{eq:4cover}, implying $w_{23}\leq 0$. If $\min \{w_{12}, w_{34}\} < w_{23}\leq \max \{w_{12}, w_{34}\}$, then $w_{23}+\max \{w_{12},w_{34}\} \geq w_{12}+w_{23}+w_{34}$ follows from \eqref{eq:4cover}, implying $\min \{w_{12}, w_{34}\}\leq 0$. Neither case is possible. Hence $w_{23}> \max\{w_{12},w_{34}\}$. Then $2 w_{23} \geq w_{12}+w_{23}+w_{34}$ follows from \eqref{eq:4cover}, implying $w_{23}\geq w_{12}+w_{34}$. \emph{\ref{itm:P3K3}} Assume that $E(H_1)=\{12,23\}$, $E(H_2)=\{23,24,34\}$ and $w_{23}\geq w_{24}$. Hence $\gamma (\{2,3,4\})=\max\{w_{23},w_{34}\}$. 
Then \eqref{eq:P4} still applies, implying $w_{23}\geq w_{12}+w_{34}$. And $w_{23}\geq w_{24}+w_{34}$ follows from Lemma \ref{thm:K3}. \emph{\ref{itm:2K3}} Assume that $E(H_1)=\{12,13,23\}$, $E(H_2)=\{23,24,34\}$ and $w_{12}\geq \max \{w_{13}, w_{23}\}$. Thus \begin{equation} \gamma (\{1,2,3\})+\gamma (\{2,3,4\})=w_{12} + \max \{w_{23},w_{24}, w_{34}\}. \end{equation} Then $\max \{w_{23},w_{24}, w_{34}\}\geq w_{23}+w_{34}$ follows from \eqref{eq:4cover}, implying that $w_{24}\geq w_{23}+w_{34}$. \end{proof} \begin{lemma}[$P_4$-property] \label{thm:P4} Let $H\subseteq G$ with $E(H)=\{12,23,34\}$ be an induced subgraph. If $\Gamma_G$ is population monotonic, then $w_{23} \geq w_{12}+w_{34}$. \end{lemma} \begin{proof} Notice that $G[\{1,2,3\}]$ and $G[\{2,3,4\}]$ make a $2 P_3$-cover of $\{1,2,3,4\}$. Then $w_{23} \geq w_{12}+w_{34}$ follows from Lemma \ref{thm:cover} \emph{\ref{itm:2P3}}. \end{proof} \begin{lemma}[Paw-property] \label{thm:Paw} Let $H\subseteq G$ with $E(H)=\{12,23,24,34\}$ be an induced subgraph with $w_{23}\geq w_{24}$. If $\Gamma_G$ is population monotonic, then $w_{23}\geq w_{12}+w_{34}$ and $w_{23}\geq w_{24}+w_{34}$. \end{lemma} \begin{proof} Notice that $G[\{1,2,3\}]$ and $G[\{2,3,4\}]$ make a $(P_3,K_3)$-cover of $\{1,2,3,4\}$. Then $w_{23}\geq w_{12}+w_{34}$ and $w_{23}\geq w_{24}+w_{34}$ follow from Lemma \ref{thm:cover} \emph{\ref{itm:P3K3}}. \end{proof} Lemma \ref{thm:Paw} implies that for any $K_3$-subgraph $H$ in a graph inducing population monotonic matching games, endpoints of the maximum weight edge in $H$ are the only possible vertices in $H$ incident to other edges in the graph. \begin{lemma}[Diamond-property] \label{thm:Diamond} Let $H\subseteq G$ with $E(H)=\{12,13,23,24,34\}$ be an induced subgraph. If $\Gamma_G$ is population monotonic, then $w_{23}\geq w_{12}+w_{13}$ and $w_{23}\geq w_{24}+ w_{34}$. \end{lemma} \begin{proof} On one hand, $G[\{1,2,4\}]$ and $G[\{2,3,4\}]$ make a $(P_3,K_3)$-cover of $\{1,2,3,4\}$. 
Lemma \ref{thm:Paw} implies $w_{34} < \max\{w_{23}, w_{24}\}$. On the other hand, $G[\{1,3,4\}]$ and $G[\{2,3,4\}]$ make another $(P_3,K_3)$-cover of $\{1,2,3,4\}$. Lemma \ref{thm:Paw} implies $w_{24} < \max\{w_{23}, w_{34}\}$. It follows that $w_{23}>\max\{w_{24},w_{34}\}$. By symmetry, $w_{23}>\max\{w_{12},w_{13}\}$. Then $w_{23}\geq w_{12}+w_{13}$ and $w_{23}\geq w_{24}+w_{34}$ follow from Lemma \ref{thm:K3}. \end{proof} As we shall see in Lemma \ref{thm:K4}, the underlying graph of any population monotonic matching game has no $K_4$-subgraph. Thus Lemma \ref{thm:Diamond} implies that for any two $K_3$-subgraphs $H_1$ and $H_2$ in a graph inducing population monotonic matching games, if $H_1$ and $H_2$ share a common edge $e$, then $e$ must be the maximum weight edge in both $H_1$ and $H_2$. \subsection{Forbidden structures} \label{subsec:obstructions} This subsection develops some forbidden structures in population monotonic matching games. Even though this list is not exhaustive, it is sufficient to achieve a complete characterization for population monotonic matching games. For any two graphs $H_1$ and $H_2$, $H_1$ is said to be \emph{$H_2$-free} if $H_1$ has no induced subgraph isomorphic to $H_2$. \begin{figure}[t] \centering \vspace{-1em} \centerline{\includegraphics[width=1.45\textwidth]{fig_obstructions.pdf}} \vspace{-2em} \caption{Some forbidden structures} \label{fig:obstructions} \vspace{-1em} \end{figure} Lemmas \ref{thm:C4} and \ref{thm:K4} concern two forbidden structures with $4$ vertices. \begin{lemma} \label{thm:C4} If $\Gamma_{G}$ is population monotonic, then $G$ is $C_{4}$-free. \end{lemma} \begin{proof} Assume to the contrary that $H\subseteq G$ is an induced subgraph isomorphic to $C_4$. Without loss of generality, assume that $V(H)=\{1,2,3,4\}$ and $E(H)=\{12,23,34,14\}$. Notice that $G[\{1,2,3\}]$ and $G[\{2,3,4\}]$ make a $2 P_3$-cover of $V(H)$, and that $G[\{2,3,4\}]$ and $G[\{1,3,4\}]$ make another $2 P_3$-cover of $V(H)$.
Lemma \ref{thm:cover} \emph{\ref{itm:2P3}} implies $w_{23}\geq w_{12}+w_{34}$ and $w_{34}\geq w_{23}+w_{14}$. A contradiction occurs. \end{proof} \begin{lemma} \label{thm:K4} If $\Gamma_{G}$ is population monotonic, then $G$ is $K_{4}$-free. \end{lemma} \begin{proof} Assume to the contrary that $H\subseteq G$ is an induced subgraph isomorphic to $K_4$. Without loss of generality, assume that $V(H)=\{1,2,3,4\}$. By symmetry, further assume that $w_{13}=\max_{e\in E(H)}\{w_{e}\}$. Lemma \ref{thm:K3} implies that $w_{13}\geq w_{12}+w_{23}$ and $w_{13}\geq w_{14}+w_{34}$. Notice that $G[\{1,2,3\}]$ and $G[\{1,2,4\}]$ make a $2 K_3$-cover of $V(H)$, and that $G[\{1,2,4\}]$ and $G[\{1,3,4\}]$ make another $2 K_3$-cover of $V(H)$. Lemma \ref{thm:cover} \emph{\ref{itm:2K3}} implies that $w_{14}\geq w_{12}+w_{24}$ and $w_{12}\geq w_{14}+w_{24}$, which is absurd. \end{proof} Lemmas \ref{thm:P5C5} - \ref{thm:Butterfly} provide four forbidden structures with $5$ vertices. \begin{lemma} \label{thm:P5C5} If $\Gamma_{G}$ is population monotonic, then $G$ is $(P_5, C_5)$-free. \end{lemma} \begin{proof} Assume to the contrary that $H\subseteq G$ is an induced subgraph isomorphic to $P_5$ or $C_5$. Without loss of generality, assume that $V(H)=\{1,2,3,4,5\}$ and $\{12,23,34,45\}\subseteq E(H)$. Notice that both $G[\{1,2,3,4\}]$ and $G[\{2,3,4,5\}]$ are isomorphic to $P_4$. Lemma \ref{thm:P4} implies that $w_{23}\geq w_{12}+w_{34}$ and $w_{34}\geq w_{23}+w_{45}$, which is absurd. \end{proof} A graph is a \emph{co-banner graph} if it is isomorphic to the fifth graph in Figure \ref{fig:obstructions}. \begin{lemma} \label{thm:CoBanner} If $\Gamma_{G}$ is population monotonic, then $G$ is co-banner-free. \end{lemma} \begin{proof} Assume to the contrary that $H\subseteq G$ is an induced subgraph isomorphic to co-banner. Without loss of generality, assume that $V(H)=\{1,2,3,4,5\}$ and $E(H)=\{12,23,34,35,45\}$. By symmetry, further assume that $w_{34}\geq w_{35}$. 
Since $G[\{2,3,4,5\}]$ is isomorphic to paw, Lemma \ref{thm:Paw} implies that $w_{34}\geq w_{23}+w_{45}$. Besides, $G[\{1,2,3,4\}]$ is isomorphic to $P_4$. Lemma \ref{thm:P4} implies that $w_{23}\geq w_{12}+w_{34}$. A contradiction occurs. \end{proof} A graph is a \emph{butterfly graph} if it is isomorphic to the sixth graph in Figure \ref{fig:obstructions}. \begin{lemma} \label{thm:Butterfly} If $\Gamma_{G}$ is population monotonic, then $G$ is butterfly-free. \end{lemma} \begin{proof} Assume to the contrary that $H\subseteq G$ is an induced subgraph isomorphic to butterfly. Without loss of generality, assume that $V(H)=\{1,2,3,4,5\}$ and $E(H)=\{12,13,23,34,35,45\}$. By symmetry, further assume that $w_{23}\geq w_{13}$ and $w_{34}\geq w_{35}$. Notice that both $G[\{1,2,3,4\}]$ and $G[\{2,3,4,5\}]$ are isomorphic to paw. Lemma \ref{thm:Paw} implies $w_{23}\geq w_{12}+w_{34}$ and $w_{34}\geq w_{23}+w_{45}$. A contradiction occurs. \end{proof} Now we are ready to characterize population monotonic matching games. \subsection{A complete characterization for population monotonicity} \label{subsec:characterization} A \emph{$k$-star} is a graph comprised of a clique $K$ of size $k$ and an independent set $I$, where every vertex in $I$ is adjacent to at least one vertex in $K$. We call vertices in $K$ \emph{centers} of the $k$-star. In the following, we concentrate on $k$-stars with $k\leq 2$. We refer to a $1$-star as a \emph{star} and to a $2$-star as a \emph{double-star}. Notice that a double-star degenerates to a star if all non-center vertices are adjacent to one and the same center. \begin{figure}[t] \vspace{-1em} \centerline{\includegraphics[width=1.45\textwidth]{fig_stars.pdf}} \vspace{-2em} \caption{Examples for $k$-stars} \label{fig:stars} \vspace{-1em} \end{figure} We call adjacent vertices $u$ and $v$ in $G$ a \emph{dominant pair} if $w_{uv}\geq w_{uu'}+w_{vv'}$ for any pair of edges $uu'$ and $vv'$ incident to $uv$, where vertices $u'$ and $v'$ might coincide.
In particular, we still call $u$ and $v$ a \emph{dominant pair} if $uv$ is the only edge incident to $v$ and $w_{uv}\geq w_{uu'}$ for any other edge $uu'$ incident to $u$. Now we are ready to present our main result. \begin{theorem} \label{thm:main} A matching game is population monotonic if and only if every component of the underlying graph is a double-star where the two centers make a dominant pair. \end{theorem} \begin{proof} Let $\Gamma_G=(N,\gamma)$ be the matching game on $G=(V,E;w)$, where $G$ is a simple graph with weight function $w:E\rightarrow \mathbb{R}_+$. Without loss of generality, assume that $G$ is connected. $(\Longrightarrow)$. Assume that $\Gamma_G$ is population monotonic. We distinguish two cases to show that $G$ is a double-star where the two centers make a dominant pair. Case $1$: $G$ is a tree. By Lemma \ref{thm:P5C5}, $G$ is a double-star where every non-center vertex is adjacent to precisely one center. Let $u$ and $v$ be the two centers of $G$. In particular, let $u$ and $v$ be the endpoints of a maximum weight edge in $G$ if $G$ is a star. Lemma \ref{thm:P4} implies that $u$ and $v$ make a dominant pair in $G$. Case $2$: $G$ is not a tree. Lemma \ref{thm:P5C5} suggests that $G$ contains $K_3$-subgraphs. We call two $K_3$-subgraphs of $G$ \emph{disjoint} if they share no common vertex and \emph{non-disjoint} otherwise. We first show that any two non-disjoint $K_3$-subgraphs of $G$ share a common edge. Moreover, the common edge is the maximum weight edge in either $K_3$-subgraph. Assume to the contrary that $H_1$ and $H_2$ are two non-disjoint $K_3$-subgraphs without common edges. Without loss of generality, assume that $V(H_1)=\{1,2,3\}$ and $V(H_2)=\{3,4,5\}$. Clearly, $H_1 \cup H_2$ is a butterfly graph. Lemma \ref{thm:Butterfly} implies that there exist crossing edges in $G$ between vertex sets $\{1,2\}$ and $\{4,5\}$.
Lemmas \ref{thm:C4} and \ref{thm:K4} suggest that there exists at most one crossing edge in $G$ between $\{1,2\}$ and $\{4,5\}$. We may assume that $14$ is the only crossing edge between $\{1,2\}$ and $\{4,5\}$. Notice that both $G[\{1,2,3,4\}]$ and $G[\{1,3,4,5\}]$ are isomorphic to diamond. Lemma \ref{thm:Diamond} implies that $w_{13}\geq w_{14}+w_{34}$ and $w_{34}\geq w_{13}+w_{14}$, which is absurd. Hence any two non-disjoint $K_3$-subgraphs of $G$ share a common edge. Moreover, Lemma \ref{thm:Diamond} implies that the common edge is the maximum weight edge in either $K_3$-subgraph. Next we show that all $K_3$-subgraphs of $G$ are pairwise non-disjoint. Moreover, all $K_3$-subgraphs of $G$ share a common edge which is the maximum weight edge in every $K_3$-subgraph. Assume to the contrary that there exist disjoint $K_3$-subgraphs in $G$. Then Lemma \ref{thm:P5C5} guarantees that there are disjoint $K_3$-subgraphs $H_1$ and $H_2$ with crossing edges between $V(H_1)$ and $V(H_2)$. Moreover, Lemmas \ref{thm:C4} and \ref{thm:Butterfly} suggest that there exists at most one crossing edge between $V(H_1)$ and $V(H_2)$. Without loss of generality, assume that $V(H_1)=\{1,2,3\}$, $V(H_2)=\{4,5,6\}$ and $14$ is the only crossing edge between $V(H_1)$ and $V(H_2)$. However, $G[\{1,2,3,4,5\}]$ is isomorphic to co-banner, which contradicts Lemma \ref{thm:CoBanner}. Hence all $K_3$-subgraphs of $G$ are pairwise non-disjoint. Recall that any two non-disjoint $K_3$-subgraphs share a common edge which is the maximum weight edge in both of them. Moreover, Lemma \ref{thm:K3} implies that the maximum weight edge is unique in every $K_3$-subgraph. Hence all $K_3$-subgraphs of $G$ share a common edge which is the maximum weight edge in every $K_3$-subgraph. Finally, we show that $G$ is a double-star where the two centers are the endpoints of the common edge for all $K_3$-subgraphs in $G$. Moreover, the two centers of $G$ make a dominant pair.
Let $uv$ be the common edge of all $K_3$-subgraphs in $G$. In particular, if there is only one $K_3$-subgraph in $G$, let $uv$ be the edge with maximum weight in the $K_3$-subgraph. Thus $uv$ is the maximum weight edge in every $K_3$-subgraph of $G$. Lemmas \ref{thm:Paw} and \ref{thm:CoBanner} suggest that vertices outside $K_3$-subgraphs of $G$ are adjacent to either $u$ or $v$. It follows that $G$ is a double-star with centers $u$ and $v$. Moreover, Lemmas \ref{thm:P4} - \ref{thm:Diamond} imply that $u$ and $v$ make a dominant pair. $(\Longleftarrow)$. Assume that $G$ is a double-star where the two centers make a dominant pair. Let $u$ and $v$ be the two centers of $G$. In particular, let $u$ and $v$ be the endpoints of a maximum weight edge if $G$ is a star. We prove the matching game $\Gamma_G$ on $G$ is population monotonic by constructing a PMAS $\mathcal{X}=(\boldsymbol{x}_S)_{S\in 2^N \backslash \{\emptyset\}}$. Let $S$ be a nonempty subset of $N$. Define $\boldsymbol{x}_{S}=(x_{S,i})_{i\in S}$ as follows. For any $i\in S\backslash \{u,v\}$, let $x_{S,i}=0$. For values of $x_{S,u}$ and $x_{S,v}$, we distinguish two cases. $(1)$ If $S\cap \{u,v\}=\{i\}$, then $x_{S,i}=\sigma_{i} (S)$, where $\sigma_{i}(S)$ denotes the maximum weight of edges incident to $i$ in $G[S]-uv$. $(2)$ If $S\cap \{u,v\} = \{u,v\}$, then let $x_{S,u}=\frac{\sigma_u}{\sigma_u + \sigma_v} w_{uv}$ and $x_{S,v}=\frac{\sigma_v}{\sigma_u + \sigma_v} w_{uv}$, where $\sigma_{u}$ and $\sigma_{v}$ denote the maximum weight of edges incident to $u$ and $v$ in $G-uv$, respectively. It remains to show that $\mathcal{X}=(\boldsymbol{x}_S)_{S\in 2^N\backslash \{\emptyset\}}$ defined above is indeed a PMAS for $\Gamma_G$. We first consider the efficiency condition. Let $S$ be a nonempty subset of $N$. 
Since $G$ is a double-star with $u$ and $v$ being a dominant pair, we have \begin{equation*} \gamma (S)= \begin{cases} ~w_{uv} & \text{if $u,v\in S$,} \vspace{1.5 mm}\\ \sigma_u (S) & \text{if $u\in S$, $v\not\in S$,} \vspace{1.5 mm}\\ \sigma_v (S) & \text{if $u\not\in S$, $v\in S$,} \vspace{1.5 mm}\\ \quad 0 & \text{if $u,v\not\in S$.} \end{cases} \end{equation*} Hence $\sum_{i\in S} x_{S,i}=\gamma (S)$ holds in any case. Now we check the monotonicity condition. Let $S$ and $T$ be two subsets of $N$ with $S\subseteq T$. It suffices to show that $x_{S,i}\leq x_{T,i}$ for any $i\in S\cap \{u,v\}$. Without loss of generality, assume that $u\in S$. We proceed by distinguishing three cases. $(1)$ If $v\in S$, then $x_{S,u}=\frac{\sigma_{u}}{\sigma_{u} + \sigma_{v}} w_{uv} =x_{T,u}$. $(2)$ If $v\in T\backslash S$, then $x_{S,u}=\sigma_u (S)\leq \sigma_u \leq \frac{\sigma_u}{\sigma_u +\sigma_v} w_{uv}=x_{T,u}$, since $w_{uv}\geq \sigma_u+\sigma_v$. $(3)$ If $v\not\in T$, then $x_{S,u}=\sigma_u (S)\leq \sigma_u (T)=x_{T,u}$. Hence $x_{S,u}\leq x_{T,u}$ holds in any case. By symmetry, we have $x_{S,i}\leq x_{T,i}$ for any $i\in S\cap \{u,v\}$. Therefore, the matching game $\Gamma_G$ on $G$ is population monotonic. \end{proof} Whether a graph is a double-star can be determined efficiently. Indeed, the centers in a double-star are the only possible vertices with degree larger than $2$, and all non-center vertices form an independent set. Besides, dominant pairs in double-stars can be verified efficiently. Hence we have the following corollary to our main result. \begin{corollary} The population monotonicity of a matching game can be determined efficiently. \end{corollary} \section{Discussion} \label{sec:disscussion} In this paper, we study matching games and provide a necessary and sufficient characterization of population monotonicity. Prior to our work, Immorlica et al.
\cite{IMM08} studied the limitation of approximate PMASes and proved that no constant approximation factor exists even for simple matching games. Hence our result completes the line of research on PMASes for matching games. One promising direction for future work on matching games is to study the existence of allocation schemes consisting of core allocations for every coalition, where Deng et al. \cite{DINZ00} have achieved a sufficient characterization for simple matching games. Our result might be helpful since a PMAS provides a core allocation for every coalition in a population monotonic way. Besides, variants of matching games introduced by Kumabe and Maehara \cite{KM20-IJCAI, KM20-AAMAS} are also worth studying. \bibliographystyle{habbrv}
\section{Introduction} Critical slowing down is the phenomenon in which the relaxation time of a dynamical system diverges at a bifurcation point \cite{strogatz2018nonlinear}. Biological systems are inherently dynamic, and therefore one generally expects critical slowing down to accompany transitions between their dynamic regimes. Indeed, signatures of critical slowing down, including increased autocorrelation time and increased fluctuations, have been shown to precede an extinction transition in many biological populations \cite{scheffer2009early, scheffer2012anticipating}, including bacteria \cite{veraart2012recovery}, yeast \cite{dai2012generic}, and entire ecosystems \cite{wang2012flickering}. Similar signatures are also found in other biological time series, including dynamics of protein activity \cite{sha2003hysteresis} and neural spike dynamics \cite{meisel2015critical}. Canonically, critical slowing down depends on scaling exponents that define divergences along particular parameter directions in the vicinity of a critical point \cite{hohenberg1977theory}. Therefore, connecting the theory of critical slowing down to biological data requires identification of thermodynamic state variables, their scaling exponents, and a principled definition of distance from the critical point. However, in most biological systems it is not obvious how to define the thermodynamic state variables, let alone scaling exponents and distance from criticality. In a previous study \cite{erez2018universality}, we showed how, near its bifurcation point, a class of biochemical systems can be mapped to the mean-field Ising model, thus defining the state variables and their associated scaling exponents. This provides a starting point for the investigation of critical slowing down in such systems, as well as how to apply such a theory to experimental data.
Additionally, most studies of critical slowing down in biological systems investigate the response to a sudden experimental perturbation (a ``quench''), such as a dilution or the addition of a nutrient or drug. This leaves unexplored the response to gradual environmental changes, a common natural scenario. When a gradual change drives a system near its critical point, critical slowing down delays the system's response such that no matter how gradual the change, the response lags behind the driving. In physical systems this effect is known as the Kibble-Zurek mechanism \cite{kibble1976topology, zurek1985cosmological}, which predicts these nonequilibrium lagging dynamics in terms of the exponents of the critical point. It remains unclear whether and how the Kibble-Zurek mechanism applies to biological systems. Here we investigate critical slowing down for well-mixed biochemical networks with positive feedback, and we use our theory to interpret the response of immune cells to an inhibitory drug. Using our previously derived mapping \cite{erez2018universality}, we show theoretically that critical slowing down in our class of models proceeds according to the static and dynamic exponents of the mean-field Ising universality class. The mapping identifies an effective temperature and magnetic field in terms of the biochemical parameters, which defines a distance from the critical point that can be extracted from experimental fluorescence data. We find that drug-induced quenches that take an immune cell closer to its critical point result in longer response times, in qualitative agreement with our theory. We then show theoretically that our system, when driven across its bifurcation point, falls out of steady state in the manner predicted by the Kibble-Zurek mechanism, thereby extending Kibble-Zurek theory to a biologically relevant nonequilibrium setting. 
Our work elucidates the effects of critical slowing down in biological systems with feedback, and provides insights for interpreting cell responses near a dynamical transition point. \section{Results} We consider a well-mixed reaction network in a cell where $X$ is the molecular species of interest, and the other species $A$, $B$, $C$, etc.\ form a chemical bath for $X$ [Fig.\ \ref{fig:setup}(a)]. Whereas previously we considered only the steady state distribution of $X$ \cite{erez2018universality}, here we focus on dynamics in and out of steady state. Specifically, as shown in Fig.\ \ref{fig:setup}(b), we consider (i) steady state, where the bath is constant in time; (ii) a quench, where the bath changes its parameters suddenly; and (iii) driving, where the bath changes its parameters slowly and continuously. In each case we are interested in a corresponding timescale: (i) the autocorrelation time $\tau_c$ of $X$, (ii) the response time $\tau_r$ of $X$, and (iii) the driving time $\tau_d$ of the bath. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig1} \caption{(a) Inside a cell, a chemical species $X$ with molecule number $n$ exists in a bath of other species. (b) We consider steady-state, quench, and driven dynamics for the bath, and focus on the autocorrelation time $\tau_c$, response time $\tau_r$, and driving time $\tau_d$, respectively.} \label{fig:setup} \end{figure} First we review the key features of our stochastic framework for biochemical feedback and its mapping to the mean-field Ising model \cite{erez2018universality}. We consider an arbitrary number of reactions $r$ in which $X$ is produced from bath species $Y_r^\pm$ and/or $X$ itself (feedback), \begin{equation} \label{eq:rxns} j_rX + Y_r^+ \rightleftharpoons (j_r+1)X + Y_r^-, \end{equation} where $j_r$ are stoichiometric integers. 
The probability of observing $n$ molecules of species $X$ in steady state according to Eq.\ \ref{eq:rxns} is \begin{equation} \label{eq:pn} p_n = \frac{p_0}{n!} \prod_{j=1}^n f_j, \end{equation} where $p_0^{-1} = \sum_{n=0}^\infty(1/n!)\prod_{j=1}^n f_j$ is set by normalization, and $f_n$ is a nonlinear feedback function governed by the reaction network. The inverse of Eq.\ \ref{eq:pn}, \begin{equation} \label{eq:fn} f_n = \frac{np_n}{p_{n-1}}, \end{equation} allows calculation of the feedback function from the distribution. The function $f_n$ determines an effective order parameter, reduced temperature, and magnetic field, \begin{equation} \label{eq:cparam} m \equiv \frac{n_*-n_c}{n_c}, \quad h \equiv \frac{2(f_{n_c} - n_c)}{-f'''_{n_c}n_c^3}, \quad \theta \equiv \frac{2(1-f'_{n_c})}{-f'''_{n_c}n_c^2}, \end{equation} respectively, where $n_c$ is defined by $f''_{n_c} = 0$, and $n_*$ are the maxima of $p_n$. Qualitatively, $n_c$ sets the typical molecule number, $\theta$ drives the system to a unimodal ($\theta > 0$) or bimodal ($\theta < 0$) state, and $h$ biases the system to high ($h > 0$) or low ($h < 0$) molecule numbers. The critical point occurs at $\theta = h = 0$. The state variables $m$, $\theta$, and $h$ scale according to the exponents $\alpha=0$, $\beta=1/2$, $\gamma=1$, and $\delta=3$ of the mean-field Ising universality class. Detailed analysis of this mapping in steady state is found in our previous work \cite{erez2018universality}. Near the critical point, all specific realizations of a class of systems scale in the same way, and therefore it suffices to consider a particular realization of Eq.\ \ref{eq:rxns} from here on. 
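Eqs.\ \ref{eq:pn} and \ref{eq:fn} are simple to evaluate numerically. The Python sketch below (our illustration; the truncation cutoff and function names are our own choices) builds the steady-state distribution from a given feedback function, working in log space to avoid overflow of the factorial, and then inverts the distribution to recover $f_n$, mirroring how the feedback function would be extracted from measured copy-number histograms.

```python
import math

def steady_state_dist(f, nmax):
    """Steady-state p_n = (p_0 / n!) * prod_{j=1}^n f_j, truncated at nmax.
    Accumulate log p_n to avoid overflow, then normalize."""
    logp = [0.0]
    for n in range(1, nmax + 1):
        logp.append(logp[-1] + math.log(f(n)) - math.log(n))
    mx = max(logp)
    weights = [math.exp(lp - mx) for lp in logp]
    Z = sum(weights)
    return [w / Z for w in weights]

def feedback_from_dist(p):
    """Invert the distribution: f_n = n * p_n / p_{n-1}."""
    return [n * p[n] / p[n - 1] for n in range(1, len(p))]
```

A constant feedback $f_n=\lambda$ (no feedback at all) is a useful sanity check: Eq.\ \ref{eq:pn} then reduces to a Poisson distribution with mean $\lambda$, and the inversion returns $\lambda$ for every $n$.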
We choose Schl\"ogl's second model \cite{erez2018universality}, a simple and well-studied case \cite{schlogl1972chemical, dewel1977renormalization, nicolis1980systematic, brachet1981critical, grassberger1982phase, prakash1997dynamics, liu2007quadratic, vellela2009stochastic} in which $X$ is either produced spontaneously from bath species $A$, or in a trimolecular reaction from two existing $X$ molecules and bath species $B$, \begin{equation} \label{eq:schlogl_rxns} A \xrightleftharpoons[k_1^-]{k_1^+} X, \quad 2X+B \xrightleftharpoons[k_2^-]{k_2^+} 3X. \end{equation} In this case the feedback function is \begin{equation} \label{eq:schlogl_fn} f_n = \frac{aK^2 + s(n-1)(n-2)}{(n-1)(n-2)+K^2}, \end{equation} where we have introduced the dimensionless quantities $a \equiv k_1^+n_A/k_1^-$, $s \equiv k_2^+ n_B/k_2^-$, and $K^2 \equiv k_1^-/k_2^-$ in terms of the reaction rates and the numbers of $A$ and $B$ molecules. Given Eqs.\ \ref{eq:cparam} and \ref{eq:schlogl_fn}, the effective thermodynamic variables $n_c$, $\theta$, and $h$ can be written in terms of $a$, $s$, and $K$ or vice versa \cite{erez2018universality}, with $1/k_1^-$ setting the units of time. \subsection{Critical slowing down in steady state} In steady state, critical slowing down causes correlations to become long-lived near a dynamical transition point. Qualitatively, the fixed point is transitioning from stable to unstable, and therefore the basin of attraction is becoming increasingly wide. As a result, a dynamic trajectory takes increasingly long excursions from the mean, making it heavily autocorrelated. The autocorrelation time $\tau_c$ diverges at the critical point according to \cite{pathria2011statistical} \begin{align} \label{eq:tauc1} \tau_c|_{h=0} &\sim |\theta|^{-\nu z}, \\ \label{eq:tauc2} \tau_c|_{\theta=0} &\sim |h|^{-\nu z/\beta\delta}, \end{align} where we expect $\nu z = 1$ for mean-field dynamics \cite{hohenberg1977theory, kopietz2010introduction}. 
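A minimal Gillespie-style simulation of Eq.\ \ref{eq:schlogl_rxns} can be sketched as follows. The birth and death propensities are reconstructed from Eq.\ \ref{eq:schlogl_fn} under the one-step birth-death assumption, $b(n) = a + s\,n(n-1)/K^2$ and $d(n) = n + n(n-1)(n-2)/K^2$ with time in units of $1/k_1^-$, which reproduces the stationary recursion $p_n/p_{n-1} = f_n/n$ of Eq.\ \ref{eq:pn}; the parameter values below are illustrative (weak feedback, unimodal regime), not those used in the figures.

```python
import random

def gillespie_schlogl(a, s, K, n0, T, seed=1):
    """Simulate the Schlogl birth-death chain.
    Propensities reconstructed from Eq. (schlogl_fn), one-step assumption:
      birth(n) = a + s*n*(n-1)/K^2,  death(n) = n + n*(n-1)*(n-2)/K^2,
    time in units of 1/k1^-."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, ns = [0.0], [n0]
    while t < T:
        b = a + s * n * (n - 1) / K**2
        d = n + n * (n - 1) * (n - 2) / K**2
        tot = b + d                      # total propensity (> 0 since a > 0)
        t += rng.expovariate(tot)        # exponential waiting time
        n += 1 if rng.random() < b / tot else -1
        times.append(t)
        ns.append(n)
    return times, ns

times, ns = gillespie_schlogl(a=20.0, s=0.5, K=50.0, n0=20, T=200.0)
```

At $n=0$ the death propensity vanishes, so the molecule number can never go negative, and for these weak-feedback parameters the trajectory fluctuates around a single mean set mainly by $a$.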
Here the autocorrelation time $\tau_c$ is defined as \begin{equation} \label{eq:tauc_def} \tau_c = \frac{1}{\kappa(0)}\int_0^\infty dt\ \kappa(t), \end{equation} where $\kappa(t) = \langle n(0)n(t)\rangle - \bar{n}^2$ is the steady-state autocorrelation function, $\kappa(0) = \sigma^2$ is the variance, and we have taken the start time to be $t=0$ without loss of generality because the system is in steady state. To confirm the value of $\nu z$, we plot $\tau_c$ vs.\ $h$ at $\theta = 0$ (Eq.\ \ref{eq:tauc2}). We calculate $\tau_c$ either directly from the master equation or from stochastic simulations \cite{gillespie1977exact} using the method of batch means \cite{thompson2010comparison} (see Appendix \ref{app:time}). The results are shown in Fig.\ \ref{fig:ss}. We see in Fig.\ \ref{fig:ss}(a) that $\tau_c$ indeed diverges with $h$, and that the location of the divergence approaches the expected value $h = 0$ as the molecule number $n_c$ increases. We also see that the height of the peak increases with $n_c$ due to the rounding of the divergence \cite{stephens2013statistical}. The inset of Fig.\ \ref{fig:ss}(b) plots this dependence: we see that $\tau_c$ at the critical point $\theta = h = 0$ scales like $n_c^{1/2}$ for large $n_c$ (the application of this dependence to dynamic driving will be discussed in Section \ref{sec:KZ}). Finally, we see in the main panel of Fig.\ \ref{fig:ss}(b) that when $n_c$ is sufficiently large, $\tau_c$ falls off with $|h|$ with the expected scaling exponent of $\nu z/\beta\delta = 2/3$. Taken together, these results confirm that the divergence of the autocorrelation time in the Schl\"ogl model obeys the static exponents of the mean-field Ising universality class ($\beta\delta = 3/2$) and the dynamic expectation for mean-field systems ($\nu z = 1$). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig2} \caption{Critical slowing down in steady state. 
(a) Autocorrelation time $\tau_c$ in Schl\"ogl model (Eq.\ \ref{eq:tauc_def}) peaks with field $h$ when reduced temperature $\theta = 0$. Height increases and location moves to $h=0$ as molecule number $n_c$ increases. Time is in units of $1/k_1^-$. (b) At large $n_c$, $\tau_c$ scales with $|h|$ with expected exponent of $\nu z/\beta\delta = 2/3$. Inset: $\tau_c$ at $\theta=h=0$ scales as $n_c^{1/2}$. In a and inset of b, $\tau_c$ is calculated using eigenfunctions with cutoff $N = \max(100,3n_c)$; in main panel of b, $\tau_c$ is calculated using batch means with 250 trajectories, duration $T = 10^5$, and batch time $\tau_b =$ 2,222 (see Appendix \ref{app:time}).} \label{fig:ss} \end{figure} \subsection{Quench response and application to immune cells} When subjected to a sudden environmental change (a quench), the system will take some finite amount of time to respond [Fig.\ \ref{fig:setup}(b), middle]. We expect that if a quench takes the system closer to its critical point, the response time should be longer due to critical slowing down. To make this expectation quantitative, we define the response time $\tau_r$ in terms of the dynamics of the mean molecule number $\bar{n}$ as \begin{equation} \label{eq:taur1} \tau_r = \frac{1}{\Delta\bar{n}(0)}\int_0^{t_{\max}} dt\ \Delta\bar{n}(t), \end{equation} where the quench occurs at $t=0$, we define $\Delta \bar{n}(t) = \bar{n}(t) - \bar{n}(t_{\max})$, and we ensure that $t_{\max} \gg \tau_r$. We compute $\bar{n}(t)$ from the time-dependent distribution $p_n(t)$ using the stochastic simulations. Examples of $p_n(t)$ for a small and a large quench are shown in Fig.\ \ref{fig:quench}(a). We define the distance from the critical point in terms of the state variables $\theta$ and $h$. 

Specifically, $\tau_c$ scales identically with $\theta^{\beta\delta}$ as it does with $h$ (Eqs.\ \ref{eq:tauc1} and \ref{eq:tauc2}), which defines the Euclidean distance $d_c$ from the critical point as \begin{equation} \label{eq:dc} d_c = \left[(\theta^{\beta\delta})^2 + h^2\right]^{1/2}. \end{equation} This measure will be important when comparing with the experiments because, unlike in most condensed matter experiments, it is difficult in the biological experiments we describe to manipulate only one parameter ($\theta$ or $h$) independently of the other. \begin{figure} \centering \includegraphics[width=\linewidth]{fig3} \caption{Quench response in theory (left) and in immune cell experimental data (right). (a) Stochastic simulations of the Schl\"ogl model show the effect of small and large parameter quenches on the distribution. Time is in units of $1/k_1^-$. (b) Initial (black square) and quenched (colored circles) parameter values in $\theta$ and $h$ space in model; $n_c = 500$. Dotted lines show contours of equal $d_c$ (Eq.\ \ref{eq:dc}), distance from critical point ($\theta = h = 0$). Response time $\tau_r$ in model (c) decreases with $d_c$ and (g) increases with entropy $S$. (d) Experimental distributions of T cell ppERK fluorescence intensity measured at times after addition of SRC inhibitor (see Fig.\ \ref{fig:doses} for all doses). (e) $\theta$ and $h$ extracted from initial distribution (black square) and final distributions (colored circles) for all [SRCi] doses (color bar). Experimental response time $\tau_r$ (f) decreases with $d_c$ and (h) increases with $S$. Error bars: for $\theta$ and $h$, standard error from Savitzky-Golay \cite{savitzky1964smoothing} filter windows $25 \le W \le 35$ \cite{erez2018universality}; for $d_c$, propagated in quadrature from e; for $\tau_r$, standard deviation of Riemann sums spanning left- to right-endpoint methods to approximate integral in Eq.\ \ref{eq:taur2}.
In h, fluorescence of one molecule set to $I_1 = 10$.} \label{fig:quench} \end{figure} To test whether the response time increases with proximity to the critical point, we must define initial values $\theta_0$ and $h_0$ for the environment before the quench, and a series of values $\theta$ and $h$ for the environment after the quench that lie at varying distances from the critical point $\theta = h = 0$. There are many such choices for these values, but anticipating the experimental results that we will describe shortly, we choose the initial point (black square) and final points (colored circles) shown in Fig.\ \ref{fig:quench}(b). Dotted curves of equal $d_c$ are also shown, which make clear that larger quenches (yellow circles) take the system farther from the critical point than smaller quenches (blue circles). The dependence of $\tau_r$ on $d_c$ is shown in Fig.\ \ref{fig:quench}(c), and we see that indeed $\tau_r$ decreases as $d_c$ increases, or equivalently the response time increases with proximity to the critical point. We now compare our theory with data from immune cells. We focus on the abundance in T cells of doubly phosphorylated ERK (ppERK), a protein that initiates cell proliferation and is implicated in the self/non-self decision between mounting an immune response or not \cite{vogel2016dichotomy, altan2005modeling}. Specifically, we use flow cytometry to measure the ppERK distribution at various times after the addition of a drug that inhibits SRC, a key enzyme in the cascade that leads to ERK phosphorylation (see Appendix \ref{app:expt} for experimental methods). When the dose of the drug is small, the distribution hardly changes [Fig.\ \ref{fig:quench}(d), top]; whereas when the dose is large, the distribution changes significantly [Fig.\ \ref{fig:quench}(d), bottom]. The responses to all doses are shown in Appendix \ref{app:expt}.
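To make the definitions above concrete, here is a sketch of how Eq.\ \ref{eq:taur1} and Eq.\ \ref{eq:dc} might be evaluated numerically. The mean trajectory below is a synthetic exponential relaxation, chosen only so the expected answer is known in advance ($\tau_r \approx \tau$ when $t_{\max} \gg \tau$); real data would replace it with the measured $\bar{n}(t)$ or $\bar{I}(t)$.

```python
import math

def response_time(nbar, dt):
    """Eq. (taur1): tau_r = (1/dn(0)) * integral of dn(t),
    with dn(t) = nbar(t) - nbar(t_max); trapezoidal rule on a
    uniformly sampled mean trajectory."""
    dn = [x - nbar[-1] for x in nbar]
    integral = dt * (sum(dn) - 0.5 * (dn[0] + dn[-1]))
    return integral / dn[0]

def d_c(theta, h):
    """Eq. (dc) with beta*delta = 3/2; |theta| since theta may be negative."""
    return math.hypot(abs(theta) ** 1.5, abs(h))

# Synthetic quench response: relaxation with known timescale tau.
tau, dt, tmax = 5.0, 0.01, 100.0
nsteps = int(tmax / dt)
nbar = [300.0 + 200.0 * math.exp(-i * dt / tau) for i in range(nsteps + 1)]
tau_r = response_time(nbar, dt)   # recovers tau since tmax >> tau
```

The same `response_time` routine applies unchanged to Eq.\ \ref{eq:taur2}, with intensity samples in place of molecule numbers.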
After the addition of the drug, the cells reach a new steady-state ppERK distribution [green curves in Fig.\ \ref{fig:quench}(d)]. The distribution corresponds to an effective feedback function via Eq.\ \ref{eq:fn}, from which the effective temperature $\theta$ and field $h$ can be calculated via Eq.\ \ref{eq:cparam} \cite{erez2018universality}. The values of $\theta$ and $h$ calculated from the experimental distributions are shown in Fig.\ \ref{fig:quench}(e). We see that larger doses take the cells farther from their initial distribution (black square), as expected. We also see that larger doses take the system farther from the critical point $\theta = h = 0$. The general shape of the $\theta$ and $h$ values motivated our choice of theoretical values in Fig.\ \ref{fig:quench}(b). We define the response time to the drug as in Eq.\ \ref{eq:taur1}, here in terms of the mean fluorescence intensity of ppERK, \begin{equation} \label{eq:taur2} \tau_r = \frac{1}{\Delta\bar{I}(0)}\int_0^{t_{\max}} dt\ \Delta\bar{I}(t), \end{equation} where $\Delta \bar{I}(t) = \bar{I}(t) - \bar{I}(t_{\max})$ and $t_{\max} = 30$ min. We calculate the distance from criticality using Eq.\ \ref{eq:dc} as before, here using the experimental values of $\theta$ and $h$. We see in Fig.\ \ref{fig:quench}(f) that the response time $\tau_r$ decreases with the distance from criticality $d_c$, consistent with the prediction from the theory [Fig.\ \ref{fig:quench}(c)]. This suggests that critical slowing down occurs in the response of the T cells to the drug. Although in Fig.\ \ref{fig:quench}(f) the response time $\tau_r$ comes directly from the experimental data, the distance from criticality $d_c$ is calculated from the experimental data using expressions from the theory (Eqs.\ \ref{eq:fn} and \ref{eq:cparam}). This makes the results in Figs.\ \ref{fig:quench}(c) and \ref{fig:quench}(f) not entirely independent. 
To confirm that the agreement between Figs.\ \ref{fig:quench}(c) and \ref{fig:quench}(f) is not a result of an implicit co-dependence, we seek a measure that is related to distance from criticality but that is not dependent on the theory. We choose the entropy of the distribution $S = -\sum_n p_n\log p_n$ because near criticality, the distribution is broad and flat, and therefore we expect the entropy to be large; whereas far from criticality, the distribution has either one or two narrow peaks, and therefore we expect the entropy to be small \cite{erez2018universality}. Indeed, we see in Fig.\ \ref{fig:quench}(g) that in the theory, the response time $\tau_r$ increases with the entropy $S$, consistent with the fact that it decreases with the distance from criticality [Fig.\ \ref{fig:quench}(c)]. The same is evident in the experiments: we see in Fig.\ \ref{fig:quench}(h) that low drug doses correspond to long response times and high entropies, whereas high drug doses correspond to short response times and low entropies, resulting in an increase of response time $\tau_r$ with entropy $S$. Calculating the entropy in Fig.\ \ref{fig:quench}(h) requires a conversion between intensity $I$ and molecule number $n$, and we have checked that the results in Fig.\ \ref{fig:quench}(h) are qualitatively unchanged for different choices of this conversion factor over several orders of magnitude. The agreement between Figs.\ \ref{fig:quench}(g) and \ref{fig:quench}(h) offers further evidence that the T cells experience critical slowing down, with the data analysis completely independent from our theory. \subsection{Dynamic driving and Kibble-Zurek collapse} \label{sec:KZ} While some environmental changes are sudden, many changes in a biological context are gradual [Fig.\ \ref{fig:setup}(b), right]. 
When a gradual change drives a system through its critical point, critical slowing down delays the system's response such that no matter how gradual the change, the response lags behind the driving. Although in a biological setting the driving protocol could take many forms, terms beyond the leading-order linear term do not change the critical dynamics \cite{chandran2012kibble}. This is a major theoretical advantage because it allows us to specialize to linear driving without loss of biological realism. Specifically, we focus on linear driving across the critical point with driving time $\tau_d$, setting either $\theta(t) = \theta_i+(\theta_f - \theta_i)t/\tau_d$ and $h=0$, or $h(t) = h_i+(h_f-h_i)t/\tau_d$ and $\theta = 0$, where $i$ and $f$ denote the initial and final parameter values, respectively. In a traditional equilibrium setting, the dynamics of lagging trajectories are described in terms of the critical exponents by the Kibble-Zurek mechanism \cite{kibble1976topology, zurek1985cosmological}. The idea of the Kibble-Zurek mechanism is that far from the critical point, the change in the system's correlation time due to the driving, over a correlation time, is small compared to the correlation time itself, $(d\tau_c/dt)\tau_c \ll \tau_c$, and therefore the system responds adiabatically. However, as the system is driven closer to the critical point, these two quantities are on the same order, or $d\tau_c/dt \sim 1$, and the system begins to lag. Applying this condition to Eqs.\ \ref{eq:tauc1} and \ref{eq:tauc2}, and using the above expressions for $\theta(t)$ and $h(t)$, one obtains \begin{align} \label{eq:kz1} \theta &\sim \tau_d^{-1/(\nu z+1)}, \\ \label{eq:kz2} h &\sim \tau_d^{-\beta\delta/(\nu z + \beta\delta)}, \end{align} respectively.
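For the $\theta$ ramp, the freeze-out estimate behind Eq.\ \ref{eq:kz1} can be written out in one line (prefactors dropped, and $t$ measured from the crossing of the critical point, so that $\theta \sim t/\tau_d$):

```latex
\tau_c \sim |\theta|^{-\nu z} \sim \left(\frac{t}{\tau_d}\right)^{-\nu z}
\;\Rightarrow\;
\frac{d\tau_c}{dt}
\sim \frac{1}{\tau_d}\left(\frac{t}{\tau_d}\right)^{-\nu z-1} \sim 1
\;\Rightarrow\;
\hat{t} \sim \tau_d^{\nu z/(\nu z+1)},
\qquad
\hat{\theta} \sim \frac{\hat{t}}{\tau_d} \sim \tau_d^{-1/(\nu z+1)}.
```

Replacing $\nu z$ by $\nu z/\beta\delta$, as appropriate for the $h$ ramp via Eq.\ \ref{eq:tauc2}, gives Eq.\ \ref{eq:kz2}.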
Because $m \sim (-\theta)^\beta$ or $m \sim h^{1/\delta}$ near criticality in the mean-field Ising class, we have \begin{align} \label{eq:kzm1} m &\sim \tau_d^{-\beta/(\nu z+1)}, \\ \label{eq:kzm2} m &\sim \tau_d^{-\beta/(\nu z + \beta\delta)}, \end{align} respectively. Therefore, if the system is driven at different timescales $\tau_d$, the Kibble-Zurek mechanism predicts that plots of the rescaled variables $m\tau_d^{\beta/(\nu z+1)}$ vs.\ $\theta\tau_d^{1/(\nu z+1)}$ or $m\tau_d^{\beta/(\nu z + \beta\delta)}$ vs.\ $h\tau_d^{\beta\delta/(\nu z + \beta\delta)}$ will collapse onto universal curves. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{fig4} \caption{Dynamic driving and Kibble-Zurek collapse. (a) As reduced temperature $\theta$ is driven over time $\tau_d$ in Schl\"ogl model, order parameter $m$ lags behind due to critical slowing down. Decreasing $\theta$ causes supercooling (left curves), while increasing $\theta$ causes superheating (right curves), resulting in hysteresis. (b) Same, for driving $h$. (c, d) Rescaled curves collapse as predicted. Each point is computed via Eq.\ \ref{eq:cparam} from the mode $n_*$ in b, or the modes $n_*^{(1)}<n_c$ and $n_*^{(2)}>n_c$ in a, of $10^5$ simulation trajectories. For finite-size correction we use $n_c = 10\tau_d$ in a and $n_c = 22\tau_d^{4/5}$ in b. Time is in units of $1/k_1^-$.} \label{fig:kz} \end{figure} When testing these predictions using simulations of a spatially extended physical system, the finite size of the system causes a truncation of the autocorrelation time. This truncation is usually accounted for using a finite-size correction \cite{chandran2012kibble}. In our system, a similar truncation of the autocorrelation time is caused by the finite number of molecules. Specifically, the inset of Fig.\ \ref{fig:ss}(b) shows that at criticality we have $\tau_c \sim n_c^{1/2}$ for large $n_c$, where $n_c$ sets the typical number of molecules in the system. 
Therefore, we interpret $n_c$ as a ``system size,'' and we correct for finite-size effects in the following way. Combining the relation $\tau_c \sim n_c^{1/2}$ with Eqs.\ \ref{eq:tauc1} and \ref{eq:tauc2}, and Eqs.\ \ref{eq:kz1} and \ref{eq:kz2}, we obtain \begin{align} \label{eq:finite1} n_c &\sim \tau_d^{2\nu z/(\nu z+1)},\\ \label{eq:finite2} n_c &\sim \tau_d^{2\nu z/(\nu z + \beta\delta)}, \end{align} for the driving of $\theta$ or $h$, respectively. We choose $n_c$ arbitrarily for a particular driving time $\tau_d$, and when we choose a new $\tau_d$, we scale $n_c$ appropriately according to Eqs.\ \ref{eq:finite1} and \ref{eq:finite2}. This procedure allows us to test the predictions of the Kibble-Zurek mechanism using simulations of the Schl\"ogl model. The results are shown in Fig.\ \ref{fig:kz}. We see in Fig.\ \ref{fig:kz}(a) that as $\theta$ is driven from a positive to a negative value, the bifurcation response is lagging, occurring at a value less than the critical value $\theta = 0$ (supercooling). Conversely, when $\theta$ is driven from a negative to a positive value, the convergence occurs at a value greater than $\theta = 0$ (superheating). In both directions, the lag is larger when the driving is faster, corresponding to smaller values of $\tau_d$ (from yellow to dark brown). We see in Fig.\ \ref{fig:kz}(b) that similar effects occur for the driving of $h$. Yet, we see in Figs.\ \ref{fig:kz}(c) and (d) that the rescaled variables collapse onto single, direction-dependent curves within large regions near criticality. Note that the direction dependence (i.e., hysteresis) is preserved as part of these universal curves, but the lags vanish in the collapse. This result demonstrates that our nonequilibrium birth-death model exhibits the Kibble-Zurek collapse predicted for critical systems. 
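As a consistency check on the exponents used in Fig.\ \ref{fig:kz}, the mean-field values $\nu z = 1$ and $\beta\delta = 3/2$ can be propagated through Eqs.\ \ref{eq:kz1}--\ref{eq:kzm2} and \ref{eq:finite1}--\ref{eq:finite2} with exact rational arithmetic; the finite-size exponents come out as $1$ and $4/5$, matching the caption's choices $n_c \propto \tau_d$ and $n_c \propto \tau_d^{4/5}$.

```python
from fractions import Fraction as F

nu_z = F(1)           # mean-field dynamic scaling, nu*z = 1
beta_delta = F(3, 2)  # mean-field: beta = 1/2, delta = 3

theta_exp = -1 / (nu_z + 1)                 # Eq. (kz1): theta ~ tau_d^(-1/2)
h_exp = -beta_delta / (nu_z + beta_delta)   # Eq. (kz2): h ~ tau_d^(-3/5)
m_theta_exp = -F(1, 2) / (nu_z + 1)         # Eq. (kzm1): m ~ tau_d^(-1/4)
m_h_exp = -F(1, 2) / (nu_z + beta_delta)    # Eq. (kzm2): m ~ tau_d^(-1/5)
nc_theta_exp = 2 * nu_z / (nu_z + 1)        # Eq. (finite1): n_c ~ tau_d^1
nc_h_exp = 2 * nu_z / (nu_z + beta_delta)   # Eq. (finite2): n_c ~ tau_d^(4/5)

assert (nc_theta_exp, nc_h_exp) == (F(1), F(4, 5))
```

These are the exponents used to rescale the axes in Figs.\ \ref{fig:kz}(c) and (d).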
Together with our previous findings, this result suggests that such a collapse should emerge in biological experiments where environmental parameters (e.g., drug dose) are dynamically controlled in a gradual manner. More broadly, by phenomenologically collapsing such experimental curves, it should be possible to deduce the critical exponents of such biological systems without fine-tuning them to criticality, but instead by gradual parameter sweeps. \section{Discussion} We have investigated critical slowing down in a minimal stochastic model of biochemical feedback. By exploiting a mapping to Ising-like thermodynamic variables, we have made quantitative predictions for the response of a system with feedback to both sudden and gradual environmental changes. In response to a sudden change (a quench), we have shown that the system will respond more slowly if the quench takes it closer to its critical point, in qualitative agreement with multiple-time-point flow cytometry experiments in immune cells. In response to more gradual driving, we have shown that the lagging dynamics of the system proceed according to the Kibble-Zurek mechanism for driven critical phenomena. Together, our results elucidate the consequences of critical slowing down for biochemical systems with feedback, and demonstrate those consequences on an example system from immunology. For the immune cells, critical slowing down may present a tradeoff in terms of the speed vs.\ the precision of an immune response. Specifically, ppERK is implicated in the decision of whether or not to mount the immune response \cite{vogel2016dichotomy, altan2005modeling}, suggesting that ppERK dynamics near the bifurcation point are of key biological importance. Yet, the bifurcation point is the point where critical slowing down is most pronounced. In fact, the inset of Fig.\ \ref{fig:ss}(b) demonstrates that the system slows down as the number of molecules in the system increases. 
On the other hand, large molecule number is known to decrease intrinsic noise and thereby increase the precision of a response \cite{elowitz2002stochastic}. This suggests that cells may face a tradeoff in terms of speed vs.\ precision when responding to changes that occur near criticality, as suggested for other biological systems \cite{skoge2011dynamics, mora2011biological}. Our work extends the Kibble-Zurek mechanism to a nonequilibrium biological context. Traditionally, the mechanism has been applied to physical systems from cosmology \cite{kibble1976topology} and from hard \cite{zurek1985cosmological, del2014universality} or soft \cite{deutschlander2015kibble} condensed matter. Here, we extend the mechanism to the context of biochemical networks with feedback, where the system already exists in a nonequilibrium steady state, and the external protocol takes the system further out of equilibrium into a driven state. It will be interesting to see to what other nonequilibrium contexts the Kibble-Zurek mechanism can be successfully applied \cite{deffner2017kibble}. The theory we present here assumes only intrinsic birth-death reactions and neglects more complex mechanisms such as bursting \cite{friedman2006linking, mugler2009spectral}, parameter fluctuations \cite{shahrezaei2008colored, horsthemke1984noise}, or cell-to-cell variability \cite{cotari2013cell, erez2018modeling} that may play an important role in the immune cells. Nonetheless, similar models that also focus only on intrinsic noise have successfully described ppERK in T cells in the past \cite{das2009digital, prill2015noise}. Moreover, we expect that intrinsic fluctuations should play their largest role near the bifurcation point. Finally, we expect that near the bifurcation point, the essential behavior of the system should be captured by any model that falls within the appropriate universality class. 
In this and previous work \cite{erez2018universality} we have explored the dynamic and static scaling properties of single cells subject to biochemical feedback. Natural extensions include generalizing the theory to cell populations or other systems that are not well-mixed, such as intracellular compartments. This would allow one to investigate the spatial consequences of proximity to a bifurcation point, such as long-range correlations in molecule numbers and the associated implications for sensing, information transmission, patterning, or other biological functions. \section*{Acknowledgments} We thank Anushya Chandran for helpful communications. This work was supported by Simons Foundation grant 376198 (T.A.B.\ and A.M.), Human Frontier Science Program grant LT000123/2014 (Amir Erez), National Science Foundation Research Experiences for Undergraduates grant PHY-1460899 (C.P.), National Institutes of Health (NIH) grants R01 GM082938 (A.E.) and R01 AI083408 (A.E., R.V., and G.A.-B.), and the NIH National Cancer Institute Intramural Research programs of the Center for Cancer Research (A.E.\ and G.A.-B.).
\section{Introduction} We consider a light wave of wavelength $\lambda$ that propagates through interstellar space. A thin interstellar cloud of atomic hydrogen is interposed in the wave's path. How will the cloud modify the irradiance of the light an observer measures in the direction of propagation? In standard astrophysical conditions and at UV wavelengths, theory suggests (Sects.~\ref{vdh} to \ref{an}) the counter-intuitive result that the irradiance would be much larger than what it would be without the cloud. Rather than reducing the irradiance at the position of the observer, the gas enhances it. It is this result, together with the reasons why it does not violate energy conservation and why distances do not appear in the analytical expression of the scattered starlight, that we wish to address. The paper is organized as follows. In Sect.~\ref{vdh} we present an original formula H.C. van de Hulst derived for the scattering of a plane wave by identical, spherically symmetric particles in his now classic book on scattering \cite{vdh}. Before we realized the formulae were identical we had independently reached a similar expression in the more general case of a star at large but finite distance. We used an alternative method, based on the Huygens-Fresnel theory, which is presented in Sect.~\ref{hf}. The quantitative comparison of the direct and scattered irradiances is made in Sect.~\ref{an}. The discussion that follows is based on the relationship between the irradiances from a Huygens-Fresnel sphere and from its first Fresnel zone (Sects.~\ref{comp}-\ref{sc1}). This correspondence is in our opinion fundamental. It helps to explain why scattering in the forward direction may be enhanced and provides a different and simpler view of the problem. Any attempt to derive a more exact expression of the irradiance due to a thin slab of gas should start with an estimate of the light scattered from the first Fresnel zone alone.
Lastly, we investigate conditions for enhanced scattering in the forward direction and comment on energy conservation (Sects.~\ref{dis}-\ref{ener}). Our interest in forward coherent scattering by a gas was motivated by questions pertaining to interstellar extinction. Our analysis therefore focuses on the case study of an interstellar cloud illuminated by a star, although the absence of distances in the expression of the scattered light irradiance may suggest relevant laboratory experiments. \section{Scattering by a cloud of hydrogen from a star at infinity} \label{vdh} A light wave of wavelength $\lambda$ ($k=2\pi/\lambda$) falls on a cloud of hydrogen atoms with average column density $N_H$ (atom/cm$^2$) represented as a slab $\Sigma$ in Fig.~\ref{fig:fig}. Let $u_0$ be the amplitude of the disturbance due to the plane wave an observer P would measure in the absence of the cloud, and $u$ the amplitude he would measure in the direction of the wave with the cloud on the line of sight. From sect.~4.3 of van de Hulst's book 'Light scattering by small particles' \cite{vdh} \begin{equation} u=u_0\left( 1-\frac{2\pi}{k^2}N_HS(0)\right), \label{eq:vdh0} \end{equation} where $S(0)$ is defined in sect.~6.12 of the same book\footnote{In sect.~6.13 of the same book van de Hulst considers an additional term, $(2/3)k^6\alpha^2$, in the expression of $S(0)$, to account for the diminution of the plane wave because of the scattering. This term only diminishes the direct light from the source and is moreover totally negligible, of order $10^{-8}u_0$ (with the orders of magnitude given in Sect.~\ref{an}). It is therefore neglected.}: $S(0)=ik^3\alpha_H$ ($\alpha_H=6.7\,10^{-25}\rm cm^3$ is the polarizability of hydrogen).
Therefore \begin{equation} u=u_0\left( 1-{2\pi}ik \alpha_H N_H\right) \label{eq:vdh1} \end{equation} The amplitude $u$ consists of two terms, the direct, unattenuated (provided that $N_H\sigma_\lambda\ll 1$, with $\sigma_\lambda=(8/3)\pi k^4 \alpha_H^2$ the Rayleigh cross-section of hydrogen at wavelength $\lambda$) light from the source-wave, and a scattered light term, $u_s=-{2\pi}ik \alpha_H N_Hu_0$, which is a quarter of a period out of phase with the primary wave. Van de Hulst specifies that Eq.~\ref{eq:vdh1} holds only in the direction of the source-wave, and for a large cloud-observer distance. He emphasizes that the scattered field at P is primarily influenced by the light scattered by the first Fresnel zones\footnote{The n$\rm^{th}$ Fresnel zone is defined as the set of points M on the Huygens-Fresnel sphere of Fig.~\ref{fig:fig} for which $(n-1)\lambda/2\leq OMP-OP<n\lambda/2$. The first zone is a disc, the following are rings \cite{born}.} as viewed from $P$ (Fig.~\ref{fig:fig}). \begin{figure}[t] \resizebox{\columnwidth }{!}{\includegraphics{fig.eps}} \caption{Schematic representation of the interstellar cloud, the Huygens-Fresnel sphere, and the Fresnel zones (from fig.~10.52 in \cite{hecht}). The radius of the Huygens-Fresnel sphere is $d_0$, $d$ is the distance from the source $O$ to a point $M$ in the slab, on or close to the sphere. Distances $l$ and $l_0$ are from the observer $P$ to $M$, and to the sphere. The distance from $P$ to $O$ is $D=l_0+d_0$. When the star is at infinity (Sect.~\ref{vdh}) the sphere and the plane coincide.} \label{fig:fig} \end{figure} \section{Cloud illuminated by a star at large but finite distance} \label{hf} In this section, provided that distances are large enough, we show that the cloud can be assimilated to a Huygens-Fresnel sphere and use Fresnel's theory to re-derive Eq.~\ref{eq:vdh1} in a more general case.
For a negligible thickness $e$ of the cloud ($e\ll l_0$ and $e\ll d_0$) the cloud is a plane which can be approximated by a portion of the Huygens-Fresnel sphere centered on the star (Fig.~\ref{fig:fig}). A small area $\mathrm{d}S$ of $\Sigma$, centered on $M$, contains a large number of scatterers\footnote{Each scatterer is equivalent to a small area $\sigma_\lambda\sim 10^{-24}$~cm$^2$ for hydrogen atoms and at 1500~\AA. In a typical interstellar cloud $\Sigma$ will have a surface density of $\sim 3.\,10^{20}$ scatterers per cm$^2$ (Sect.~\ref{an}). The disturbance at the observer position due to $\mathrm{d}S$ is $N_H a_0\mathrm{d}S$, with $a_0$ the disturbance due to one atom.} and approximates very well the secondary sources used in the derivation of Fresnel's theory. Huygens considered light as a spherical wave-front perturbation which propagates through space (if the source is at infinity the wave-front is a plane). Fresnel found that the contribution of an elementary surface $ \mathrm{d}S$ centered on $M$ of the wavefront, at distance $d_0$ from the source of light $O$ and $l$ from the observer $P$ (Fig.~\ref{fig:fig}), to the perturbation at $P$ is\footnote{Demonstrations are given in textbooks: \cite[chap.3]{vdh}; \cite[chap.10.3]{hecht}; \cite[chap.8]{born}. The sign of the $e^{i\pi /2}$ factor may change from one textbook to another.} \begin{equation} \mathrm{d}u_f= \frac{e^{-ikl}}{l\lambda} \mathrm{d}S \frac{u_0 e^{-ikd_0}}{d_0}e^{i\pi /2} \label{eq:fresnel} \end{equation} Only the area close to axis $OP$ contributes significantly and the obliquity factor is here, as in Fresnel's theory, neglected. 
On the other hand, if $O$ is a star (Fig.~\ref{fig:fig}), assumed to be point-like, a small area $ \mathrm{d}S$ close to axis $OP$ of an HI (atomic hydrogen) cloud scatters the star's light and contributes to the disturbance at $P$ by the amount (deduced from \cite[sects.~4.1 and 6.12]{vdh}) \begin{eqnarray} \mathrm{d}u_s&=& k^2 \alpha_H \frac{e^{-ikl}}{l} N_H \mathrm{d}S \frac{u_0 e^{-ikd}}{d} \nonumber \\ &=& \frac{e^{-ikl}}{l\lambda} \mathrm{d}S \frac{[4\pi^2u_0 \alpha_H N_He^{-i\pi/2}/\lambda ]e^{-ikd}}{d}e^{i\pi /2} \label{eq:scafr} \end{eqnarray} Amplitude $u_0$ is the perturbation due to the star with no cloud on the line of sight. If distances are large the sphere and the cloud coincide over a large number of Fresnel zones (Sect.~\ref{cond}). Identification of the terms in Eq.~\ref{eq:fresnel} with those of Eq.~\ref{eq:scafr} shows that the cloud is virtually equivalent to a second star, $O_1$, superposed to $O$ and a quarter of a period out of phase with it, with amplitude \begin{equation} u_1=(4\pi^2\alpha_H N_He^{-i\pi/2}/\lambda) u_0 \label{eq:e2} \end{equation} The disturbance measured at $P$ is the sum of the disturbances due to $O$ and to $O_1$ \begin{equation} u=u_0\left( 1-{2\pi}ik \alpha_H N_H\right) \label{eq:sca} \end{equation} This is for a star at large but finite distance the same expression as found by van de Hulst (Eq.~\ref{eq:vdh1}). Van de Hulst's situation is the particular case where the distance to the star is infinite ($D=l_0+d_0\sim d_0$) and the Huygens-Fresnel sphere of Fig.~\ref{fig:fig} becomes the slab $\Sigma$. A real star, if it cannot be considered as a point source, will be split into small independent sources. The same result will be reached by adding the contributions of all the sources, provided that $N_H$ remains constant over a sufficiently large area of the cloud. 
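As a quick numerical sanity check of Eq.~\ref{eq:sca}, one can verify with complex arithmetic that the scattered term is a quarter of a period out of phase with the direct term and that the two amplitudes add in quadrature. The values of $\lambda$, $\alpha_H$, and $N_H$ below are arbitrary placeholders, not the interstellar values used in the next section; only the structure of the equation matters here.

```python
import cmath
import math

# Arbitrary illustrative values (cgs units); placeholders only.
lam = 1.0e-4      # wavelength [cm]
alpha = 2.0e-3    # polarizability [cm^3] (not hydrogen's)
N_H = 10.0        # column density [cm^-2]
k = 2.0 * math.pi / lam

u0 = 1.0 + 0.0j                              # direct amplitude
us = -2j * math.pi * k * alpha * N_H * u0    # scattered term in Eq. (sca)
u = u0 + us

# Quarter of a period out of phase: phase of us/u0 is -pi/2.
assert abs(cmath.phase(us / u0) + math.pi / 2) < 1e-12
# |us/u0| = 4*pi^2*alpha*N_H/lam, as in Eq. (e2).
assert math.isclose(abs(us / u0), 4 * math.pi**2 * alpha * N_H / lam)
```

Because the two terms are in quadrature, $|u|^2 = |u_0|^2\,[1 + (2\pi k\alpha_H N_H)^2]$, so the scattered contribution always adds to the direct irradiance.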
\section{Scattered to direct light intensities ratio } \label{an} The ratio of the scattered to the direct irradiances received at $P$ from the direction of the star, if Eq.~\ref{eq:sca} applies, is \begin{equation} \frac{I_{s}}{I_0}= \left( \frac{|u_s|}{|u_0|}\right)^2= \left( \frac{ 4\pi^2\alpha_H N_H}{\lambda}\right)^2 \label{eq:r2} \end{equation} For an interstellar cloud with column density\footnote{The reddening is $\sim0.5$~mag. The density of these clouds is generally low, a few tens to a few hundred atoms/cm$^3$.} $N_H=3\,10^{20}\rm cm^{-2}$, at UV wavelength $\lambda$=1500~\AA\ \begin{eqnarray} \left|\frac{u_s}{u_0}\right| & \approx &100 \label{usu0}\\ \frac{I_s}{I_0}&\approx & 10^4 \label{eq:an} \end{eqnarray} With these values $N_H\sigma_\lambda\approx 3.5\,10^{-5}\lll1$. \section{Discussion} \label{dis} \subsection{Role and importance of the first Fresnel zone} \label{comp} Eq.~\ref{eq:sca} does not depend on distances $l_0$ and $d_0$ as long as they are large. This remarkable fact results from the Huygens-Fresnel theory and from the correspondence introduced in Sect.~\ref{hf} between a Huygens-Fresnel sphere and the cloud: the disturbance generated by a source of light does not depend on the specific Huygens-Fresnel sphere chosen between the source and the observer. Two additional important properties of light propagation are highlighted by Fresnel's theory \cite{hecht, born}. First, only the lower order Fresnel zones contribute efficiently to the disturbance. Second, the contribution of the first zone alone to the disturbance at $P$ is in absolute value twice the overall disturbance itself; the irradiance at $P$ due to the first zone, if it could be separated from the contribution of the other zones, would be four times the total irradiance\footnote{Poisson thought this effect would refute Fresnel's memoir \cite{fresnel}. Observations verified the theory and led to the recognition of Fresnel's work.}.
These properties imply that for Eq.~\ref{eq:sca} to hold it is enough for the cloud to match a Huygens sphere over a few Fresnel zones only. They also mean that the scattered irradiance may be estimated from the scattering by the first Fresnel zone alone, and that adding the disturbances from a larger number of zones diminishes but does not destroy the contribution of the first zone (unless the cloud matches exactly an even number of Fresnel zones). \subsection{Scattered irradiance from the first Fresnel zone alone} \label{sc1} If $d_0$ and $l_0$ are large the irradiance due to scattering by the whole cloud should be one fourth of the irradiance $I_{1}$ in the idealized situation of a cloud confined to the first Fresnel zone. The area of a Fresnel zone is \cite{hecht} \begin{equation} S_{\lambda}=\pi\lambda \frac{d_0l_0}{D} \label{eq:s} \end{equation} For a given $l_0$, $S_{\lambda}$ is maximum for a source at infinity ($d_0\approx D$). Assuming scattering is isotropic, the irradiance $i_{0}$ due to the light scattered by a single atom is \begin{equation} i_0=\frac{1}{4\pi l_0^2}I_0\left(\frac{D}{d_0}\right)^2\sigma_\lambda = \frac{1}{4\pi}I_0\left(\frac{D}{d_0l_0}\right)^2\sigma_\lambda \label{eq:i0} \end{equation} The number of atoms in the first Fresnel zone is $n_0=N_HS_{\lambda}$. The order of magnitude $I_1$ of the irradiance due to atoms in the first Fresnel zone alone can be estimated by $I_1=n_0^2i_0$. From Eqs.~\ref{eq:s} and \ref{eq:i0} and the expression of $\sigma_\lambda$ (Sect.~\ref{vdh}) \begin{equation} \frac{I_{1}}{I_0}=N_H^2S_{\lambda}^2\frac{i_0}{I_0} =\frac{\pi}{4}\lambda^2\sigma_\lambda N_H^2 =\frac{2}{3}\pi^2\left( \frac{ 4\pi^2\alpha_H N_H}{\lambda}\right)^2 \label{eq:i1} \end{equation} Distances have cancelled as expected. Eq.~\ref{eq:i1} and Eq.~\ref{eq:r2} differ by a factor 6.6, close to the factor of 4 anticipated previously.
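The chain of equalities in Eq.~\ref{eq:i1} can be checked symbolically. Below we assume the standard Rayleigh cross-section $\sigma_\lambda=(8\pi/3)k^4\alpha_H^2$ for the expression referred to in Sect.~\ref{vdh} (our assumption); with it, distances cancel and the ratio of Eq.~\ref{eq:i1} to Eq.~\ref{eq:r2} is exactly $2\pi^2/3\approx6.58$.

```python
import sympy as sp

lam, alpha, N, l0, d0, I0 = sp.symbols('lambda alpha_H N_H l_0 d_0 I_0', positive=True)
D = l0 + d0
k = 2*sp.pi/lam
sigma = sp.Rational(8, 3)*sp.pi*k**4*alpha**2   # assumed Rayleigh cross-section

S  = sp.pi*lam*d0*l0/D                          # Eq. (s)
i0 = I0*(D/(d0*l0))**2*sigma/(4*sp.pi)          # Eq. (i0)
I1 = (N*S)**2*i0                                # I_1 = n_0^2 i_0, with n_0 = N_H S

# middle equality of Eq. (i1): all distances cancel
assert sp.simplify(I1/I0 - sp.pi/4*lam**2*sigma*N**2) == 0
# ratio to Eq. (r2) is exactly 2*pi^2/3, the "factor 6.6" of the text
factor = sp.simplify((I1/I0)/(4*sp.pi**2*alpha*N/lam)**2)
assert factor == sp.Rational(2, 3)*sp.pi**2
```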
The difference should be attributed to the fact that the phase lags between atoms in the first Fresnel zone have been neglected. \subsection{The effect of distances} \label{distances} For a cloud 100~pc~$=3.\,10^{20}$~cm away and a star at infinity (for instance in another galaxy), the first Fresnel zone for $\lambda=1500$~\AA\ is $\sim1500$~km across. The amplification of the scattered light in the forward direction, with respect to what it would be if the coherence of the scattered waves was ignored, is then a factor $n_0=5.\,10^{34}$ for $N_H=3.\,10^{20}\rm cm^{-2}$. If the star and the observer are equidistant from the cloud ($l_0=d_0=D/2$), $S_\lambda$ is divided by two, and the irradiance $I_1$ due to the scattered starlight is reduced by a factor of 4 only. It nevertheless remains very large. In the expression $I_1\approx n_0^2i_0$ the $n_0^2$ term contributes most to the scattered light when $S_\lambda$ is large (the cloud is far from both the star and the observer) because of the large number of scatterers and their cooperative effect; $i_{0}$ is then minimum. Conversely $i_{0}$ will be enhanced when the scatterers are close to the star or to the observer (because of the $1/(l_0^2d_0^2)$ dependence of $i_0$, Eq.~\ref{eq:i0}), while the size of the Fresnel zone, and thereby coherent scattering, is reduced. Coherent scattering and scattering by one particle have opposite effects which compensate for each other. But for stars close to a cloud the curvature of the Huygens-Fresnel sphere is increased and the identification of the sphere with the cloud will be more difficult; in addition the coherence of the scattered waves within the cloud thickness will tend to disappear. The effect of coherent scattering will be lost and Eq.~\ref{eq:sca} can no longer be applied. In this case the scattered light irradiance should be calculated using classical incoherent scattering. It is negligible compared to the star irradiance $I_0$.
Only coherent scattering from large Fresnel zones can lead to an appreciable amount of scattered starlight in the sense discussed in the previous sections. For a given observer-cloud distance this will happen for stars sufficiently far away from the cloud. \subsection{The cloud thickness parameter} \label{cond} The difference $\Delta=OMP-D$ between the path light traverses from $O$ to $P$ via a point $M$ of the cloud at distance $h$ from axis $OP$ and the direct path $D$ is \begin{equation} \Delta= \frac{h^2D}{2l_0d_0} \label{eq:delta} \end{equation} Setting $\Delta=\Delta_n=n\lambda/2$ gives the radius $h_n$ of the $n^{\rm th}$ Fresnel zone \begin{equation} \frac{h_n^2 D}{l_0d_0}= n\lambda \label{eq:fn} \end{equation} Eq.~\ref{eq:delta} can also be used to find the path-length difference between two points $M_1$ and $M_2$ of the cloud, both at distance $h$ from axis $OP$. Let $(l_0,d_0)$ and $(l_0+\epsilon,d_0-\epsilon)$ be the distances from the projections of $M_1$ and $M_2$ on the axis to $P$ and $O$. With $l_0\ll d_0$ and $\epsilon\ll l_0$, \begin{eqnarray} \Delta_{1,2}&=&|\Delta_{M_1}-\Delta_{M_2}| \nonumber\\ &=&\frac{h^2D}{2}\left|\frac{1}{l_0d_0}-\frac{1}{\left( l_0+\epsilon \right) \left( d_0-\epsilon \right)} \right| \\ &\approx&\frac{h^2D}{2d_0l_0^2}\epsilon \label{eq:deltad} \end{eqnarray} If $h\approx h_n$ \begin{equation} \Delta_{1,2}\approx \frac{\lambda}{2} \frac{n\epsilon}{l_0} \label{eq:deltadn} \end{equation} The scattered waves remain coherent over the thickness $e$ of the cloud as long as $\Delta_{1,2}$ is less than half the wavelength, that is over $n\approx l_0/e$ Fresnel zones. \subsection{Energy conservation} \label{ener} The irradiance at $P$ of light scattered by the whole cloud has the same order of magnitude as if the cloud was localized in the first Fresnel zone.
The order of magnitude is that of Eq.~\ref{eq:i1} and must not violate energy conservation: the power extinguished (by Rayleigh scattering) within the first Fresnel zone must remain much larger than the power that is measured at the focus. Consider a small circular surface of area $s_d$ (for instance the surface of a detector) and radius $r_d$ centered at the focus, small enough for the scattered irradiance to vary little across the area. The power crossing $s_d$ is \begin{equation} P_{s_d}= s_dI_1=\frac{\pi}{4}\lambda^2\sigma_\lambda N_H^2 s_d I_0 \label{eq:p0} \end{equation} The power extinguished by $n_0=N_HS_\lambda$ atoms within the cloud (limited to the first Fresnel zone) is \begin{equation} P_{ext}= N_HS_\lambda\sigma_\lambda I_0 \label{eq:pabs} \end{equation} Requiring $P_{ext}\ggg P_{s_d}$ gives \begin{equation} \frac{4l_0}{N_H\lambda} \ggg s_d \label{eq:cd} \end{equation} The area at the focus over which the irradiance of the scattered light may be important (compared to the incoming plane wave irradiance) will necessarily be negligibly small in a laboratory experiment. In astronomical conditions the inequality of Eq.~\ref{eq:cd} is less restrictive: with the values of Sect.~\ref{an} and a distance $l_0\sim100$~pc, energy conservation imposes $ r_d\lll 30$~m ($s_d\lll 1$~km$^2$). For an infinite slab, as considered by van de Hulst and best illustrated by the idealized\footnote{In addition to the gas, real interstellar clouds contain a small amount of dust particles which are extremely efficient at extinguishing starlight and introduce a $1/\lambda$ extinction. The attenuation of both the scattered and the direct light from the star due to these dust particles is neglected here.} interstellar cloud of atomic hydrogen illuminated by a star at large distance, the same calculation holds for any observer at the same distance from the slab. This is physically justified by the fact that the power extinguished within the slab tends to infinity.
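The algebra of the two preceding subsections can be verified symbolically: the first-order expansion in $\epsilon$ behind Eqs.~\ref{eq:deltad} and \ref{eq:deltadn}, and the bound of Eq.~\ref{eq:cd} obtained for a star at infinity ($d_0/D\to1$). A sympy sketch (symbol names are ours):

```python
import sympy as sp

h, D, l0, d0, eps, n = sp.symbols('h D l_0 d_0 epsilon n', positive=True)
lam, sigma, N, I0, sd = sp.symbols('lambda sigma N_H I_0 s_d', positive=True)

# --- Eqs. (deltad)/(deltadn): first order expansion of Eq. (delta) in epsilon ---
Delta = lambda l, d: h**2*D/(2*l*d)
D12 = Delta(l0, d0) - Delta(l0 + eps, d0 - eps)
lead = sp.limit(D12/eps, eps, 0)*eps
assert sp.simplify(lead - h**2*D*(d0 - l0)*eps/(2*l0**2*d0**2)) == 0
# for l0 << d0 this is Eq. (deltad); with h^2 = n*lam*l0*d0/D (Eq. (fn)):
approx = h**2*D*eps/(2*d0*l0**2)
assert sp.simplify(approx.subs(h**2, n*lam*l0*d0/D) - lam*n*eps/(2*l0)) == 0

# --- Eq. (cd): require P_ext >> P_sd, star at infinity ---
S = sp.pi*lam*d0*l0/(l0 + d0)              # Eq. (s), with D = l0 + d0
P_sd = sd*sp.pi/4*lam**2*sigma*N**2*I0     # Eq. (p0)
P_ext = N*S*sigma*I0                       # Eq. (pabs)
bound_on_sd = sp.simplify(sd*P_ext/P_sd)   # P_ext >> P_sd  <=>  s_d << this
assert sp.limit(bound_on_sd, d0, sp.oo) == 4*l0/(N*lam)
```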
We also tried to investigate possible limitations of van de Hulst's formula (Eq.~\ref{eq:vdh1}). In the introduction of his Chapter 4 van de Hulst indicates, with no further justification, that the derivations made along the chapter hold as long as the average distance between the particles remains large compared to the wavelength. We did not find the reason why this should be so: why classical scattering theory would not apply in standard laboratory conditions, or why, if $n_0$ atoms were localized in the first Fresnel zone, the irradiance of the scattered light would not be $n_0^2$ times the irradiance due to one atom. If Eqs.~\ref{eq:vdh1} and \ref{eq:i1} are not applicable to this problem then it remains an open question how to calculate the irradiance of the scattered light. In interstellar space, however, average densities are extremely low (Sect.~\ref{an}) and van de Hulst's formula is fully justified. \section{Conclusion } \label{conc} This paper is based on a formula (Eq.~\ref{eq:vdh1}) which provides the irradiance of a plane wave scattered by a slab of identical, spherically symmetric, particles, in the forward direction and at the position of an observer far away from the slab. The formula in itself presents no special difficulty. It may be obtained by different methods, through direct integration over all the particles in the slab (as it was first derived by van de Hulst in 1957), or using the convenient and more visual framework of the Huygens-Fresnel theory. These methods use no more than straightforward principles of optics and general scattering theory. The Huygens-Fresnel construction highlights specific aspects of Eq.~\ref{eq:vdh1}: its independence of distances, and the role of the first Fresnel zone. Fresnel's theory also relates the irradiance at the position of the observer due to the whole slab (Eq.~\ref{eq:r2}) to the irradiance due to solely the first Fresnel zone (approximated by Eq.~\ref{eq:i1}).
Both lead to the same order of magnitude for the irradiance of the scattered light. Numerical application of Eq.~\ref{eq:r2} or Eq.~\ref{eq:i1} in the case of Rayleigh scattering by hydrogen atoms has not been carried out before. It leads to a surprisingly large and counter-intuitive ratio of scattered to direct light irradiances. The symmetry of the atoms, the size of the first Fresnel zone, and the coherence of the scattering from atoms in the first Fresnel zone are the determinant factors which contribute to this large ratio. In order to be compared with real measurements, the theory may need to be refined and its conditions of application evaluated more precisely than we have done here. But it does suggest that the image of a star may appear brighter when observed behind a cloud of hydrogen than it would without the cloud on the line of sight. The cloud, rather than acting as a screen, behaves as a lens and enhances the irradiance of the light coming from the direction of the star. The scattering of starlight by an interstellar cloud was the major focus of this paper. We have shown that this problem is equivalent to the coherent scattering from the first Fresnel zone. If $n_0$ atoms are enclosed in the first Fresnel zone, the irradiance at the focus should roughly be the product of $n_{0}^2$ and the irradiance $i_0$ one atom alone would give: two atoms will give $4i_0$ at the focus (four times the irradiance of one atom), three $9i_0$, and so forth. A discussion of Eq.~\ref{eq:vdh1} (van de Hulst's formula) can therefore be simplified by considering first the idealized situation of a cloud localized in the first Fresnel zone. Conversely, this particular case can be used to understand how the theory needs to be improved before it can be compared with observation. How will distances between the source, the slab, and the observer, or the thickness of the cloud, intervene?
Do distances between atoms need to be considered, as suggested by van de Hulst, and if so, how will Eq.~\ref{eq:i1} and Eq.~\ref{eq:vdh1} be modified? Our goal was to call attention to a specific case of scattering which has received little consideration. The underlying physics is extremely simple but the consequences seem to have passed unnoticed. We have pointed out that Fresnel zones can be unusually large on astronomical scales, but the absence of distances in Eq.~\ref{eq:vdh1} may allow laboratory experiments which would provide insight into the questions raised by forward coherent scattering by a gas. \section*{Acknowledgments} Fr\'ed\'eric Zagury was supported by an Arthur Sachs Fellowship, and is grateful for the hospitality and resources provided by Harvard University. \bibliographystyle{model3-num-names}
\section{Introduction} Holography has opened a new window in the study of strongly correlated states of matter. This approach is particularly useful when dealing with many interacting gapless degrees of freedom, which is precisely where traditional field-theoretical methods fail yet the dual gravitational description is the most tractable. One phase of holographic matter that has been extensively studied is the holographic superfluid or superconductor \cite{Gubser:2008px,Hartnoll:2008vx,Hartnoll:2008kx}. The ground state of these holographic phases is very different from those of a conventional field-theoretical Bose superfluid or superconductor of the sort found in textbooks. A conventional superfluid contains very few low-energy excitations: at zero temperature, there need only be a single gapless Goldstone mode associated with the breaking of a spontaneous symmetry. A conventional superconductor will not possess even this mode: it will be eaten by the dynamical photon. However, a typical {\it holographic} superfluid or superconductor will often possess many gapless degrees of freedom, as can be seen geometrically in its gravity dual from the existence of a black hole horizon at low temperature. This horizon has an entropy scaling like a power of $T$ with a large coefficient \cite{Horowitz:2009ij,Gubser:2009cg}. These degrees of freedom have nothing to do with any Goldstone mode, and in many cases there is an emergent scaling or even conformal symmetry controlling this low-energy physics. The robust coexistence of these gapless modes together with symmetry-breaking order is somewhat novel from a field-theoretical point of view. In this paper we will study a consequence of this cohabitation by probing the IR structure with an excitation that all such phases possess: a vortex. Previous studies of holographic superfluid and superconducting vortices include \cite{Albash:2009iq,Montull:2009fe,Keranen:2009re,Maeda:2009vf,Domenech:2010nf,Bao:2013fda,Bao:2013ixa}. 
The basic idea is well-understood: the vortex becomes a cosmic string in the bulk, carrying magnetic flux down to the bulk horizon. Many of the previous studies are in a probe limit, or take backreaction into account perturbatively, which is completely well-defined only at finite temperatures. Our treatment will improve on their work by going to zero temperature and thus truly studying infrared physics. This will require us to include backreaction and thus numerically construct a new class of black hole solutions, resulting in conceptually new ingredients, which we summarize below. \subsection{Motivation and summary of results} We study the (3+1)-dimensional gravitational dual to a (2+1)-dimensional superfluid or superconductor. In the UV, our system is a conformal field theory. We consider the case where the zero temperature bulk solution is a domain wall in the holographic direction: in the IR, the system exhibits an emergent AdS$_4$ region. This means that the infrared degrees of freedom have rearranged themselves into a {\it different} conformal field theory, which we refer to from now on as the {\it IR CFT}. Our goal is to study the interaction of the vortex with these new degrees of freedom. We will actually study the theory with both superfluid and superconducting boundary conditions (in the latter case the boundary symmetry is gauged), finding very different vortex physics, as expected. The study of this vortex is interesting from several different points of view. From a gravitational point of view, the vortex in the bulk is a cosmic string that carries a single unit of magnetic flux. The vortex line extends all the way from the conformal boundary to a new IR Poincar\'e horizon.
The physics where this flux meets the degenerate horizon is nontrivial.\footnote{For a discussion of cosmic strings piercing a finite temperature horizon, see \cite{Achucarro:1995nu}.} The condensate vanishes at the core of the vortex, where the magnetic flux is focused: thus we expect to find a new kind of black hole horizon, containing a small bubble of extremal magnetic Reissner-Nordstr\"{o}m horizon, surrounded by a sea of superconducting Poincar\'e horizon. Associated with this piece of Reissner-Nordstr\"{o}m horizon is a finite $T = 0$ ``impurity'' entropy which may be associated with the presence of the vortex. One of the main results of this paper is an explicit construction of this new kind of horizon. This horizon structure has an elegant understanding from the field theory. The fact that the vortex extends into the horizon means that it interacts nontrivially with the IR degrees of freedom. There is a well-developed formalism to deal with such a situation, that of {\it defect CFT} \cite{Cardy:2004hm}, which deals with the interaction of heavy objects -- such as a single vortex -- with a gapless conformal field theory. Previous study of similar defects at critical points separating antiferromagnetic order from a paramagnetic phase includes \cite{MaxSubir,Sachdev24121999,PhysRevB.61.15152}. One concrete consequence is that there is a reduced conformal symmetry, corresponding to what remains if we remove the translation generators from the full conformal group. This residual conformal symmetry turns out to be enough to reduce the PDEs determining the full gravitational solution to a (relatively) simple set of ODEs that determines the physics of the deep infrared at zero temperature. Furthermore, various observables characterizing the vortex can be calculated in terms of operators living on the conformal defect. In conventional superfluids, the precise form of the forces acting on a superfluid vortex in motion can be a matter of considerable controversy. 
However, in a conformal superfluid of the sort described here, there are precise Kubo formulas for these forces, written in terms of correlators of operators localized on the defect. This feature is independent of our gravitational description, and we anticipate further applications. While the infrared physics of the system is elegant, we do not restrict ourselves to this limit. We also explicitly solve the partial differential equations corresponding to the vortex at a finite temperature, demonstrating that the IR features discussed above emerge from the full gravitational solution in the $T \to 0$ limit. We also discuss some less universal physics: a novel result concerns the stability of superconducting vortices, i.e. whether a vortex with 2 units of flux is unstable to dissociating into two vortices. This feature is correlated with whether the superconductor is Type I or Type II: interestingly, we find that depending on the charge of the scalar the superconductor may be Type I. (See \cite{Umeh:2009ea} for an earlier indication that holographic superconductors can be Type I.) Finally, there is one further reason to study the physics of holographic vortices \cite{Bao:2013fda,Bao:2013ixa}. It has recently been shown that magnetic monopole operators are likely to play an important role in the characterization of finite-density holographic matter \cite{Faulkner:2012gt,Sachdev:2012tj}. For example, let us imagine taking the bulk S-dual of a holographic superconductor. This is now a phase in which a {\it magnetically} charged scalar field has condensed in the bulk. The quanta of this field are magnetic monopoles. In this S-duality frame the bulk gauge field is {\it confined} (and not Higgsed), and it is now electric (and not magnetic) flux that is forced into tight flux tubes. 
Where these electric flux tubes intersect the boundary, they appear in the dual field theory as localized point charges: thus the dual field theory is one with a charge {\it gap}, and we are studying the holographic dual of an insulator.\footnote{This is qualitatively different from the holographic insulator one gets using the AdS soliton. In that case all excitations are gapped due to a global property of the solution. Here, electric field is locally confined in the bulk.} Thus the vortex solutions that we study may be viewed through the lens of S-duality as also determining the internal structure of the gapped charges that exist in a novel insulating phase. To avoid notational confusion we will not perform any further S-dualities in the bulk of this paper, but it may be helpful to keep this S-dual interpretation in mind, and we will return to it in the conclusion. This S-dual interpretation was one motivation for how we break the $U(1)$ symmetry. If we start, as usual, with nonzero chemical potential, then the S-dual description will have localized electric charges in a background magnetic field. Instead we work with zero chemical potential, but deform the theory by a relevant double-trace operator which triggers a nonzero scalar condensate. Another motivation for this form of symmetry breaking is purely technical: there is one less bulk function to solve for. We conclude this section with a brief outline of the paper, explaining how the results mentioned above are organized. In the remainder of this introduction we discuss further the infrared physics of vortices. In Section \ref{sec:setup} we outline the gravitational setup and explain the homogeneous symmetry broken phase that we study. In Section \ref{sec:confdef} we elaborate on the interpretation of the vortex as a conformal defect and numerically construct the geometry that captures the infrared physics. 
In Section \ref{sec:pdenum} we turn to the solution of the full problem at all energy scales and explain the relevant numerical methods and boundary conditions required to solve the PDEs. In Section \ref{sec:results} we present the results from this analysis, including a detailed discussion of vortex stability and thermodynamics. Section \ref{sec:confforce} is somewhat different and does not require a gravitational description: here we point out that the forces on a moving vortex can be expressed in terms of Kubo formulas of defect-localized operators. We present a summary and outline some directions for future work in Section \ref{sec:concl}. The Appendix contains some technical details. \subsection{Infrared physics of vortices} We devote the rest of this introduction to an explanation of the low-energy structure of vortices in holographic superfluids and superconductors and how it differs from that in conventional superfluids and superconductors. A conventional superfluid has only a Goldstone mode at low energies, whose action is given by \begin{equation} S = \rho_s\int d^3 x\;(\nabla {\theta})^2 \ . \end{equation} A vortex configuration is simply one where the phase ${\theta}$ winds around a point, i.e. if we denote the azimuthal coordinate by $\varphi$ we have ${\theta}(\vec{x}) \sim n\varphi$ with the vortex charge $n \in \mathbb{Z}$. This description breaks down at the origin, where the condensate is forced to vanish. As we discuss in detail later, the winding in ${\theta}$ results in an extended current flow and a logarithmic IR divergence of the energy of a single superfluid vortex. Now this action resembles that of a massless scalar, and so one might imagine that there is a conformal structure associated with even ordinary superfluids at low energies. This is not quite correct. ${\theta}$ is the phase of the scalar condensate, and is periodic, ${\theta} \sim {\theta} + 2\pi$. 
Thus it cannot have a scaling dimension, and in any spacetime dimension higher than $2$, Goldstone modes are not conformal. Indeed, in the action written above $\rho_s$ has mass dimension $1$, and thus provides a scale. Thus one expects that for problems where the compact nature of ${\theta}$ is important (such as those involving vortices), the theory is empty below the scale $\rho_s$, as the Goldstone mode effectively decouples. For a conventional superconductor the effective action is different: here we have a dynamical gauge field $a$, and the coupling above is modified to: \begin{equation} S_{SC} = \rho_s \int d^3x\;(\nabla {\theta} - q a)^2\,. \end{equation} The vortex configuration here is slightly different: we still have ${\theta} \sim n\varphi$, but now the dynamical gauge field tracks this phase, so that far from the core we have $a_{\varphi} = \frac{n}{q}$. This cuts off the logarithmic divergence of the energy, making the vortex a localized excitation. This is related to the fact that the long-range Goldstone mode has been eaten by this gauge field by the familiar Higgs mechanism. Thus all excitations are gapped, and the situation is in some ways even simpler. This simple low-energy behavior is not the case for holographic vortices; due to the presence of other gapless modes, here we have a nontrivial conformal structure at arbitrarily low energies. However one could still ask whether the vortex necessarily {\it needs} to interact with this conformal structure. After all, the definition of the vortex is in terms of its interactions with the Goldstone mode, and so perhaps like the Goldstone mode above the vortex too could decouple from the low-energy dynamics. In a holographic system there is an interesting topological obstruction to such a decoupling. From the bulk point of view, for the vortex to decouple at low energies the vortex line must actually somehow end at some radial coordinate above the horizon. 
In the bulk we {\it always} have a dynamical gauge field, and thus the bulk vortex line carries magnetic flux. For it to end we thus must terminate it on a magnetic monopole in the bulk. This is not always possible. As the bulk $U(1)$ gauge group is compact, part of the definition of the theory is the specification of the smallest possible unit of electric charge $q_e$. The set of possible bulk magnetic monopole charges $q_m$ is determined by the Dirac quantization condition: \begin{equation} q_e q_m = 2\pi \mathbb{Z} \ . \label{Dirac} \end{equation} Now the bulk magnetic flux carried by the vortex is $\frac{2\pi}{q}$, where $q$ is the charge of the condensed scalar field. $q$ is a multiple of the basic unit $q_e$. Now we see that if $q = q_e$ -- i.e. if we have condensed a scalar field with the smallest possible charge -- then we can terminate the vortex with a monopole, as shown in Fig.~\ref{fig:monopole}. Whether or not this actually happens depends on the dynamics (i.e. the balance between the bulk monopole mass and the tension in the string), but it is at least topologically possible. \begin{figure}[ht] \centering \includegraphics[width=0.7\textwidth]{monopoles.pdf} \caption{Two different possibilities for infrared behavior of a holographic vortex. {\it Left}, vortex extends into horizon (dotted line) and never decouples. {\it Right}, vortex line terminated in bulk by magnetic monopole.} \label{fig:monopole} \end{figure} On the other hand, if instead $q$ is some higher multiple $n q_e$, with $n > 1$, then the flux carried by the vortex is $1/n$ times the basic unit of magnetic flux, and the Dirac condition does not permit the existence of the fractionally charged magnetic monopole that would be required to terminate the line. Thus the line must extend to the horizon and cannot decouple from the conformal dynamics. From the field theory side, $q_e$ is the minimum charge quantum associated with the field theory Hilbert space. 
A vortex carrying flux $\frac{2\pi}{n q_e}$ will always have a nontrivial Aharonov-Bohm phase with elementary field theory quanta of charge $q_e$. Thus the field theory always knows of its existence and it cannot decouple. This line of reasoning appears to be an example of a general theme in applied holography: the value of the field theory charge quantum $q_e$ can manifest itself in bulk dynamics through the existence of magnetic monopoles \cite{Faulkner:2012gt,Sachdev:2012tj}. We now turn away from these general considerations to explicit computations in the bulk. \section{\label{sec:setup}Setup of gravitational problem} To describe a superconducting (or superfluid) vortex, we must first start with a homogeneous holographic superconductor (or superfluid). The simplest such theory consists of gravity coupled to a Maxwell field and charged scalar, so we will work with the following action: \begin{equation} S= \frac{1}{16 \pi G_N}\int \mathrm{d}^4 x\,\sqrt{-g}\left[R+\frac{6}{L^2}-\frac{1}{2}F_{ab}F^{ab}-2(D_a\Phi) (D^a\Phi)^{\dagger} -2V(|\Phi|^2)\right], \label{eq:action} \end{equation} where $L$ is the AdS length scale, $F = dA$, and $D_a \Phi=\nabla_a \Phi -i\,q\,A_a \Phi$. The equations of motion read \begin{subequations} \begin{equation} G_{ab}\equiv R_{ab}+\frac{3}{L^2}g_{ab}-\left[(D_a \Phi) (D_b \Phi)^\dagger+(D_b \Phi) (D_a \Phi)^\dagger+g_{ab}V(|\Phi|^2)+F_a^{\phantom{a}c}F_{bc}-\frac{g_{ab}}{4}F^{cd}F_{cd}\right]=0\,, \label{eq:einsteinEOM} \end{equation} \begin{equation} \nabla_a F^{ab}=i\,q\,\left[(D^b\Phi)\Phi^\dagger-(D^b\Phi)^\dagger \Phi \right], \label{eq:maxwellEOM} \end{equation} \begin{equation} g^{ab}D_aD_b \Phi-V'(|\Phi|^2)\Phi=0\,. 
\label{eq:scalarEOM} \end{equation} \label{eqs:EOM} \end{subequations} It will often be convenient to use $U(1)$-gauge-invariant variables, which are defined in terms of the gauge field $A$ and complex scalar field $\Phi$ as \begin{equation} M = A-\frac{1}{q}\mathrm{d} \tilde{\varphi},\quad\text{and}\quad \Psi = |\Phi|\,, \end{equation} where $\tilde{\varphi}$ is the phase of the complex scalar field $\Phi$. We will choose our potential $V(|\Phi|^2)$ to be a standard Mexican hat potential, parametrized in the following way: \begin{equation} V(\eta)=\eta\,\mu^2\left(1-\frac{\eta\,\mu^2}{4\,V_0}\right)\,. \label{eq:potential} \end{equation} This potential has two local extrema: one at $\eta=0$, where $V=0$, and another at $\eta = 2V_0/\mu^2$, where $V=V_0$. Furthermore, the mass of the complex scalar field at $\eta=0$ is given by $\mu^2$, whereas at $\eta = 2V_0/\mu^2$ we find an effective mass of $-\mu^4/(4\,V_0)$. Throughout the paper we will use $\mu^2 L^2 = -2$ and $V_0=-L^{-2}$, see Fig.~\ref{fig:potential}. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{potential.pdf} \caption{Choice for the potential (\ref{eq:potential}), with $\mu^2 L^2 = -2$ and $V_0=-L^{-2}$.} \label{fig:potential} \end{figure} To describe vortices at finite temperature, we first need a homogeneous phase with a nonzero scalar field outside a black hole. One way to arrange this is to start with a charged black hole \cite{Hartnoll:2008kx}. However, the essential vortex physics that we would like to study does not require the complication of a nonzero background charge density. A different way to get the scalar field to condense is to add a double trace deformation in the boundary field theory \cite{Faulkner:2010gj}. This can produce a nonzero scalar field outside a neutral black hole, as we now review. We require that solutions asymptotically approach AdS in Poincar\'e coordinates, i.e.
\begin{equation} \mathrm{d} s^2 = \frac{L^2}{z^2}\left(-\mathrm{d} t^2+\mathrm{d} R^2+R^2\mathrm{d} \varphi^2+\mathrm{d} z^2\right)\,. \end{equation} The asymptotic behavior of $\Phi$ is then \begin{equation}\label{eq:asympscalar} \Phi = \alpha z + \beta z^2 + \cdots \end{equation} One has a choice of boundary conditions. For standard boundary conditions, $\alpha = 0$, $\Phi$ is dual to a dimension two operator. For alternative boundary conditions, $\beta =0$, $\Phi$ is dual to a dimension one operator ${\cal O}$. In this case, the double trace operator ${\cal O}^\dagger {\cal O}$ is relevant, so it is natural to add a coupling $-{\kappa} \int d^3 x\;{\cal O}^{\dagger} {\cal O}$ to the dual field theory action. As explained in \cite{Witten:2001ua,Sever:2002fk}, the effect of adding such a term is to modify the boundary conditions in the bulk to become \begin{equation}\label{eq:double} \beta = \kappa \alpha\,. \end{equation} Positive $\kappa$ corresponds to adding ${\cal O}^\dagger {\cal O}$ to the dual field theory potential with a positive coefficient. This makes it harder for ${\cal O}$ to condense. One might have thought that setting $\kappa < 0$ would destabilize the theory and there would be no ground state. However, this is not the case. The full effective potential contains higher powers of ${\cal O}$ which stabilize the theory. This has been shown by proving a bulk ``positive energy theorem'' under the boundary condition $\beta = \kappa \alpha$ for $\Phi$ with $\kappa < 0$ \cite{Faulkner:2010fh}. For a given $\kappa < 0$, the planar Schwarzschild solution (with $\Phi = 0$) is stable at high temperature, but becomes unstable to developing scalar hair at low temperature. The critical temperature is set by the only scale in the problem, ${\kappa}$, and can be explicitly computed \cite{Faulkner:2010gj}: \begin{equation} T_c = {3\over 4\pi} {\Gamma(1/3)^3\over \Gamma(-1/3) \Gamma(2/3)^2} \,\kappa \approx -0.62\, \kappa\,.
\end{equation} As $T \rightarrow 0$, we are deep in the condensed phase, and the value of the scalar field on the horizon approaches $|\Phi | = 1$. The horizon reduces to the Poincar\'e horizon of a new IR $AdS_4$ geometry. The $T=0$ solution thus interpolates between the UV $AdS_4$ with $\Phi=0$ and this new IR $AdS_4$ with $|\Phi| = 1$. Since we have chosen the minimum of $V$ to be $-1/L^2$, the effective cosmological constant in the deep infrared has increased from its UV value. This corresponds to a smaller effective AdS length, related to $L$ as $\tilde{L}^2= 3\,L^2/4$. From a field-theoretical point of view, this $AdS_4$ means that the symmetry-broken phase is described at low energies by a new IR CFT$_3$. \section{Vortices as conformal defects} \label{sec:confdef} We turn now to a discussion of the vortex. In a $2+1$ dimensional superfluid a vortex is a pointlike excitation around which the phase of the condensate winds. This means that the condensate $\langle \sO \rangle$ must vanish at the location of the vortex; this costs energy and will typically happen over a finite size, defining a core radius for the vortex. In our system the IR conformal invariance provides an extra ingredient. Note that there are two different CFTs, one in the UV and one in the IR. The UV conformal invariance is broken by the relevant double-trace coupling. The UV theory is well-defined to arbitrarily high energy scales, and thus within this theory the vortex should be a normalizable and regular excitation. In particular we expect it to have a finite energy and core radius set by the scale ${\kappa}$ provided by the double-trace coupling. We will demonstrate this explicitly in later sections by constructing a gravitational description of the full vortex in this UV-complete theory. However in this section we will solve a simpler problem. Consider the infrared, i.e. energies much smaller than ${\kappa}$. 
From this point of view the vortex is an infinitely heavy and pointlike excitation, and thus corresponds to a {\it defect}, a non-normalizable modification of the IR CFT at a single point. At low energies the deformation should flow to a conformally invariant boundary condition at that point; thus we expect that the IR physics of these vortices can be understood from the theory of defect or boundary CFT \cite{Cardy:2004hm}. We first recall some basic concepts. Consider the IR CFT defined on $\mathbb{R}^{2,1}$ with a pointlike defect localized at the origin $\vec{x} = 0$ (and extending for all time). The CFT without the defect is invariant under the full conformal group $SO(3,2)$; this is broken down to $SO(2,1) \times SO(2)$ by the presence of the defect. $SO(2,1)$ is the symmetry group of a CFT$_1$ extending along the vortex worldline; thus one may say that there is a nontrivial CFT$_1$ living on the defect. The symmetry structure described above can be made more transparent if we perform a conformal rescaling to $AdS_2 \times S^1$: \begin{equation} \mathrm{d} s^2 = -\mathrm{d} t^2 + \mathrm{d} \rho^2 + \rho^2 \mathrm{d} \varphi^2 = \rho^2\left(\frac{-\mathrm{d} t^2 + \mathrm{d} \rho^2}{\rho^2} + \mathrm{d} \varphi^2\right). \label{ads2confR} \end{equation} The unbroken $SO(2,1) \times SO(2)$ now acts geometrically in an obvious fashion on $AdS_2 \times S^1$. The defect has been mapped to the boundary of $AdS_2$. The existence of the defect manifests itself in the need to specify boundary conditions at the $AdS_2$ boundary, and possibly around the non-contractible $S^1$. For a vortex it is clear that we should demand that the phase of the scalar condensate wind around this $S^1$. \subsection{Gravity solution} We turn now to an explicit construction of the gravitational dual of this conformal defect. We first seek a suitable bulk coordinate system. 
Consider the line element of pure AdS$_4$ written in Fefferman-Graham coordinates: \begin{equation} \mathrm{d} s^2 = \frac{\tilde{L}^2}{z^2}\left[-\mathrm{d} t^2+\mathrm{d} R^2+R^2\mathrm{d}\varphi^2+\mathrm{d} z^2\right]. \label{eq:pureAdS} \end{equation} This $AdS_4$ is dual to the IR CFT, and so one should imagine it representing the IR portion of the geometry described in Section \ref{sec:setup}. There is no vortex here yet, but it is nevertheless helpful to imagine one sitting at the origin of field theory coordinates ($R = 0$) and hanging down into the bulk ($z$ arbitrary). From this point of view one might think that the vortex solution will always depend on two coordinates $(R,z)$; as we now show, this is not true. Consider the following set of coordinates: \begin{equation} R= \rho\sin\theta \,,\quad\text{and}\quad z= \rho \cos\theta\,, \label{PoincareC} \end{equation} in terms of which the line element (\ref{eq:pureAdS}) reduces to \begin{equation} \mathrm{d} s^2 = \frac{\tilde{L}^2}{\cos^2\theta}\left[\frac{-\mathrm{d} t^2+\mathrm{d} \rho^2}{\rho^2}+\mathrm{d} \theta^2+\sin^2\theta\mathrm{d}\varphi^2\right]\,. \label{eq:A2inA4} \end{equation} In these coordinates, pure AdS$_4$ is viewed as a warped fibration of AdS$_2$. Note that the conformal boundary of AdS$_4$ (located at ${\theta} \to \frac{\pi}{2}$) is precisely $AdS_2 \times S^1$. Thus the dual CFT is defined on $AdS_2 \times S^1$, as in \eqref{ads2confR}, but this metric preserves the full $SO(3,2)$ isometry group of AdS$_4$. We argued above that a vortex breaks $SO(3,2)$ down to $SO(2,1) \times SO(2)$. The most general line element compatible with such symmetries is now \begin{equation} \mathrm{d} s^2 = \frac{L^2}{\cos^2\theta}\left[F(\theta)\left(\frac{-\mathrm{d} t^2+\mathrm{d} \rho^2}{\rho^2}\right)+H(\theta)\mathrm{d} \theta^2+G(\theta)\sin^2\theta\mathrm{d}\varphi^2\right].
\label{eq:scaling} \end{equation} Both the metric functions $F({\theta}), G({\theta}), H({\theta})$ and the matter fields $A_\varphi(\theta)$ and $\Phi(\theta)$ depend on $\theta$ only. As ${\theta} \to 0$ we have the core of the vortex, where the $\varphi$ circle shrinks and the scalar and gauge field will vanish. As ${\theta} \to \frac{\pi}{2}$ we approach the conformal boundary $AdS_2 \times S^1$; the metric functions approach those of $AdS_4$, and the matter fields satisfy the boundary conditions \begin{equation} \arg \Phi = n \varphi \qquad A_{\varphi}\left({\theta} \to \frac{\pi}{2}\right) = \frac{n}{q} \,.\label{ads2bc} \end{equation} It is interesting that this solution depends only on a single coordinate ${\theta}$ rather than $R$ and $z$ independently. We will see that the full solution (out to the UV $AdS_4$) does depend on two variables, but there is enhanced symmetry in the IR. This is a consequence of the conformal symmetry preserved by the vortex, essentially stating that moving away from the vortex is the same as moving deeper into the infrared. We will refer to this as the ``scaling solution''. There is an interesting property of the bulk metric \eqref{eq:scaling}: independent of the details of the metric functions, the existence of the $AdS_2$ endows the bulk solution with a Poincar\'e horizon at $\rho \to \infty$. There is an entropy associated with this horizon, which extends from ${\theta} = 0$ to ${\theta} = \pi/2$: \begin{equation} S_H = \frac{ \pi L^2}{2 G_N} \int_0^{{\theta}_\Lam} d{\theta} \frac{\sin {\theta}}{\cos^2{\theta}} \sqrt{H({\theta}) G({\theta})} \ . \label{Shor} \end{equation} In this expression we have cut off the ${\theta}$ integral at a value ${\theta}_{\Lam}$ close to $\frac{\pi}{2}$. What is the precise interpretation of this entropy in the field theory?
In the coordinates given by \eqref{eq:scaling}, this horizon intersects the conformal boundary $AdS_2 \times S^1$ at ${\theta} = \frac{\pi}{2}$: thus in this conformal frame it can be viewed as a bulk minimal surface that hangs down from the boundary, and is computing a field-theoretical {\it entanglement entropy} via the Ryu-Takayanagi prescription \cite{Ryu:2006bv}. In fact any constant $\rho$ surface is a minimal surface, not just the surface as $\rho \to \infty$; furthermore they all have the same area, due to the $AdS_2$ isometry that shifts the value of $\rho$. The surface wraps the $S^1$: on the boundary this $S^1$ surrounds the defect, and thus we are computing the entanglement entropy of the defect with its surroundings. The analogous quantity in 2d CFT is called a {\it boundary entropy}, as the defect there cuts the line in two; it is well studied \cite{PhysRevLett.67.161}. We are not aware of much study in higher dimensions; however, see the recent work \cite{Jensen:2013lxa}. There is a divergence in the expression \eqref{Shor} as we approach the boundary; this has nothing to do with the vortex and in the $AdS_2 \times S^1$ conformal frame may be interpreted as the usual UV divergence of the entanglement entropy. We may obtain a finite impurity entropy by subtracting the same entanglement entropy without the defect present, i.e. evaluating \eqref{Shor} on \eqref{eq:A2inA4}: \begin{equation} S_{imp} = \lim_{{\theta}_{\Lam} \to \frac{\pi}{2}}\left(S_H - \frac{ \pi \tilde L^2}{2G_N} \int_0^{{\theta}_{\Lam}} d{\theta} \frac{\sin {\theta}}{\cos^2{\theta}}\right)\,. \label{impSdef} \end{equation} $S_{imp}$ is a finite and universal number characterizing the defect.\footnote{Note that the entanglement entropies involved in this subtraction are defined in the $AdS_2 \times S^1$ conformal frame.
One must impose the cutoff differently with the help of \eqref{PoincareC} to obtain entanglement entropies in the $\mathbb{R}^{2,1}$ conformal frame: in fact the value of ${\theta}_{\Lam}$ then depends on $\rho$, introducing $\rho$-dependence in the value of the entanglement entropy.} \subsection{\label{sec:scaling}Numerical construction} We now discuss the explicit numerical construction of this geometry. It turns out to be convenient to work with a different angular coordinate: \begin{equation} \cos\theta = \tilde{y}\sqrt{2-\tilde{y}^2}\,, \end{equation} which brings the line element (\ref{eq:scaling}) to the following form \begin{equation} \mathrm{d} s^2 = \frac{L^2}{\tilde{y}^2(2-\tilde{y}^2)}\left[F(\tilde{y})\left(\frac{-\mathrm{d} t^2+\mathrm{d} \rho^2}{\rho^2}\right)+\frac{4\,H(\tilde{y})\,\mathrm{d} \tilde{y}^2}{2-\tilde{y}^2}+G(\tilde{y})(1-\tilde{y}^2)^2\mathrm{d}\varphi^2\right], \label{eq:ansatznear} \end{equation} where the vortex core is now located at $\tilde{y}=1$, and $\tilde{y}=0$ is the region infinitely far away from the vortex core. The line element (\ref{eq:ansatznear}) still exhibits gauge freedom under arbitrary reparametrizations of $\tilde{y}$. In order to circumvent this problem (and its higher dimensional analog in the solution of partial differential equations in the next sections), we will use the DeTurck method, first introduced in \cite{Headrick:2009pv} and studied in great detail in \cite{Figueras:2011va}. This is based on the so-called Einstein-DeTurck equations, which can be obtained from the standard Einstein equations (\ref{eq:einsteinEOM}) by adding the following new term \begin{equation} G^{H}_{ab} \equiv G_{ab}-\nabla_{(a}\xi_{b)}=0, \label{eq:einsteindeturck} \end{equation} where $\xi^a = g^{cd}[\Gamma^a_{cd}(g)-\bar{\Gamma}^a_{cd}(\bar{g})]$ and $\bar{\Gamma}(\bar{g})$ is the Levi-Civita connection associated with a reference metric $\bar{g}$.
The reference metric is chosen to be such that it has the same asymptotics and horizon structure as $g$. This produces non-degenerate kinetic terms for all of the metric components and automatically fixes a gauge. Furthermore, the Einstein-DeTurck equation can be shown to be elliptic for static line elements \cite{Headrick:2009pv}\footnote{In fact, in \cite{Headrick:2009pv} it was shown that the Einstein-DeTurck equations are elliptic under more general assumptions, but in this paper we only need the results regarding static line elements.}, such as the ones we consider in this manuscript. It is easy to show that any solution to $G_{ab}=0$ with $\xi=0$ is a solution to $G^{H}_{ab}=0$. However, the converse is not necessarily true. In certain circumstances one can show that solutions with $\xi\neq 0$, coined Ricci solitons, cannot exist \cite{Figueras:2011va}. For the case at hand, we did not manage to prove such a theorem. Basically, the presence of the matter fields does not allow for a straightforward extension of the proof given in \cite{Figueras:2011va}. However, since the equations we want to solve are elliptic, they can be solved as a boundary value problem for well-posed boundary conditions. The solutions to such equations can be shown to be locally unique. This means that a solution of the Einstein equations cannot be arbitrarily close to a DeTurck soliton, and that we should be able to distinguish between the two by monitoring $\xi_a \xi^a$. Note that for static line elements it can easily be shown that $\xi_a\xi^a\geq0$. If we input the ansatz (\ref{eq:ansatznear}) into the Einstein-DeTurck equations (\ref{eq:einsteindeturck}) and the matter field Eqs.~(\ref{eq:maxwellEOM}-\ref{eq:scalarEOM}), we find that the $\rho$ dependence cancels out and we are left with five second-order nonlinear ODEs in $\tilde{y}$. This is not a surprise, since we have maintained $SO(2,1)$ symmetry.
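The two coordinate changes used in this construction, \eqref{PoincareC} and $\cos\theta = \tilde{y}\sqrt{2-\tilde{y}^2}$, can be checked numerically at a random point. The following is a minimal pure-Python sketch (not part of the numerics proper; the sample values of $\rho$, $\theta$ and $\tilde{y}$ are arbitrary, and $\mathrm{d}\theta/\mathrm{d}\tilde{y}$ is taken by central finite difference):

```python
import math

# Check 1: R = rho sin(th), z = rho cos(th) maps (1/z^2)(dR^2 + dz^2 + R^2 dphi^2)
# to (1/cos^2 th)(drho^2/rho^2 + dth^2 + sin^2 th dphi^2), in units Ltilde = 1.
rho, th = 1.7, 0.6
z = rho * math.cos(th)
R = rho * math.sin(th)
dR = (math.sin(th), rho * math.cos(th))    # (dR/drho, dR/dth)
dz = (math.cos(th), -rho * math.sin(th))   # (dz/drho, dz/dth)
g_rr = (dR[0]**2 + dz[0]**2) / z**2
g_rt = (dR[0]*dR[1] + dz[0]*dz[1]) / z**2
g_tt = (dR[1]**2 + dz[1]**2) / z**2
g_pp = R**2 / z**2
c2 = math.cos(th)**2
assert abs(g_rr - 1/(rho**2 * c2)) < 1e-12
assert abs(g_rt) < 1e-12                   # no cross term
assert abs(g_tt - 1/c2) < 1e-12
assert abs(g_pp - math.sin(th)**2 / c2) < 1e-12

# Check 2: cos th = y*sqrt(2 - y^2) gives cos^2 th = y^2 (2 - y^2),
# sin^2 th = (1 - y^2)^2 and dth^2 = 4 dy^2 / (2 - y^2), as in eq:ansatznear.
y = 0.4
theta = math.acos(y * math.sqrt(2 - y**2))
assert abs(math.sin(theta)**2 - (1 - y**2)**2) < 1e-12
eps = 1e-6
dth_dy = (math.acos((y+eps)*math.sqrt(2-(y+eps)**2))
          - math.acos((y-eps)*math.sqrt(2-(y-eps)**2))) / (2*eps)
assert abs(dth_dy**2 - 4/(2 - y**2)) < 1e-6
```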
In our numerical code, we have decided to solve for the following set of variables $\{F(\tilde{y}),H(\tilde{y}),G(\tilde{y}),\widehat{A}_\varphi(\tilde{y}),\widehat{\Phi}(\tilde{y})\}$, where we defined \begin{equation} A_{\varphi}(\tilde{y})\equiv L (1-\tilde{y}^2)^2\widehat{A}_\varphi(\tilde{y})\quad\text{and}\quad \Phi(\tilde{y})\equiv(1-\tilde{y}^2)^n\,e^{i\,n\,\varphi}\widehat{\Phi}(\tilde{y})\,. \end{equation} Note that the factors of $(1-\tilde{y}^2)^2$ and $(1-\tilde{y}^2)^n$ in the definitions of $\widehat{A}_\varphi$ and $\widehat{\Phi}$, respectively, ensure that regularity at $\tilde{y}=1$ only requires pure Neumann boundary conditions on both $\widehat{A}_\varphi$ and $\widehat{\Phi}$. Furthermore, regularity of the line element (\ref{eq:ansatznear}) also demands $H(1)=G(1)$. The remaining boundary conditions at $\tilde{y}=1$ are of the pure Neumann type. These conditions can be obtained via an analysis similar to the one presented in detail later in Section \ref{sec:pdenum}. At $\tilde{y}=0$, we demand \begin{equation} \widehat{\Phi}(0)=1\,,\quad\widehat{A}_\varphi(0)=\frac{n}{q\,L}\quad\text{and}\quad F(0)=H(0)=G(0)=\frac{3}{4}\,. \end{equation} Note that the factors of $\frac{3}{4}$ here are due to the fact that the IR geometry without the vortex has an effective AdS radius of $\tilde{L}^2 \equiv \frac{3}{4} L^2$, as discussed in Section \ref{sec:setup}. Finally, for the DeTurck reference metric we chose $F(\tilde{y})=H(\tilde{y})=G(\tilde{y})=3/4$. We now present the results from this analysis for a vortex with $n = 1$. The resulting gauge field and scalar profiles are shown in Fig.~\ref{fig:nearhorizonprof} for $q L = 2$. We see that they interpolate smoothly from the core of the vortex at $\tilde{y} = 1$ to the IR CFT vacuum at $\tilde{y} = 0$. We stress that this is only an infrared limit of the vortex; in this approach the temperature is strictly zero, and we cannot include the irrelevant deformations that would take us eventually to the UV. 
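Once numerical profiles for $H$ and $G$ are in hand, the impurity entropy \eqref{impSdef} reduces to a one-dimensional quadrature. The following sketch illustrates the subtraction in units where $\pi L^2/(2 G_N)=1$; the profile function passed in is a stand-in for the actual ODE solution (here only the trivial vortex-free profiles $H=G=3/4$, for which the result must vanish):

```python
import math

def impurity_entropy(sqrtHG, th_cut=math.pi/2 - 1e-4, n=200000):
    """Regulated entropy difference of eq. (impSdef), in units pi L^2/(2 G_N) = 1.
    sqrtHG(th) should return sqrt(H(th) G(th)) from a numerical solution."""
    h = th_cut / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h                    # midpoint rule
        w = math.sin(th) / math.cos(th)**2
        total += (sqrtHG(th) - 0.75) * w * h  # subtract Ltilde^2/L^2 = 3/4
    return total

# Sanity check: without a vortex, H = G = 3/4, so the impurity entropy vanishes.
print(impurity_entropy(lambda th: 0.75))  # -> 0.0
```

The integrand of each term diverges as ${\theta}_{\Lam} \to \frac{\pi}{2}$, but the difference stays finite, which is what makes the quadrature of the subtracted integrand well behaved.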
\begin{figure}[ht] \centering \includegraphics[width=0.95\textwidth]{near_horizon.pdf} \caption{Scalar field $|\Phi|$ ({\it left panel}) and gauge field $A_{\varphi}$ ({\it right panel}) as a function of $\tilde{y}$ for $q L = 2, n = 1$. The core of the vortex is at $\tilde{y} = 1$, where both functions must vanish by regularity. At $\tilde{y} \to 0$ we approach the homogeneous ground state, and the scalar approaches the minimum of its potential. } \label{fig:nearhorizonprof} \end{figure} Since the solution is independent of $\rho$, the geometry of the horizon at $\rho =\infty$ is the same as the geometry on any constant $\rho$ (and constant $t$) surface. Fig.~\ref{fig:nearhorizoncurv} shows the scalar curvature ${\cal R}$ of the horizon as a function of proper distance from the vortex core. Note the large positive peak near the origin. This reflects a ``bubble of Reissner-Nordstr\"{o}m horizon'' sticking out of the usual Poincar\'e horizon as we anticipated in the introduction. The fact that the curvature approaches a negative constant at large distance may seem puzzling, since one often thinks of the Poincar\'e horizon in AdS as being flat. But that impression is incorrect, and results from extrapolating the Poincar\'e coordinates to the horizon where they are no longer valid. To see that a cross-section of the Poincar\'e horizon really has constant negative curvature, we can use \eqref{eq:A2inA4}: since the coordinate transformation \eqref{PoincareC} does not involve $t$, the horizon at $\rho = \infty$ is identical to the usual Poincar\'e horizon. The coordinates $(\theta,\varphi)$ are well defined there and parameterize a hyperbolic plane. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{ricci_scalar_nh.pdf} \caption{The scalar curvature of the $T=0$ horizon as a function of proper distance from the vortex core.
The large positive peak near the core denotes a ``bubble of Reissner-Nordstr\"{o}m horizon'' sticking out of the usual Poincar\'e horizon. } \label{fig:nearhorizoncurv} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{entropydiff.pdf} \caption{Full entropy difference (defined later in \eqref{eq:difentropy}) as a function of $T/(-\kappa)$ for $q\,L =2$. Squares correspond to $n=1$ and diamonds to $n=2$. The red triangle represents the impurity entropy (defined in \eqref{impSdef}) extracted from the scaling solution \eqref{eq:ansatznear}.} \label{fig:entropydiff} \end{figure} In the next sections we will solve the full partial differential equations describing the vortex at a nonzero $T$ in the UV complete theory. When solving the PDEs it is technically difficult to work at precisely zero temperature. Instead we will demonstrate that as we cool the vortex down, various infrared observables computed from the full geometry appear to tend towards those arising from the scaling solution described in this section. We will discuss most of those results in Section \ref{sec:results} after describing their calculation, but to set the stage we present just one result in Fig.~\ref{fig:entropydiff}, where we compare the impurity entropy \eqref{impSdef} with the thermodynamic entropy difference $\Delta S$ of the full black hole with and without the vortex present, i.e. \begin{equation} \Delta S(T) \equiv S(T) - S_{0}(T) \ . \label{eq:difentropy} \end{equation} We see that as we lower the temperature from small finite temperatures towards $T = 0$, the thermodynamic entropy difference $\Delta S$ appears to converge to the impurity entropy $S_{imp}$ that we find from the scaling solution. We see this as a very good indication that we have found the correct near horizon geometry. We turn now to a subtle point.
In the field theory there are two natural definitions for a defect entropy: we can define a defect entropy at strictly zero temperature via the entanglement entropy of a symmetric region surrounding the defect, or consider instead the zero temperature limit of the thermodynamic entropy $\Delta S(T)$ defined above. In 2d CFT one can show on general grounds that these two definitions are equivalent \cite{Calabrese:2004eu,2009JPhA...42X4005C,2009JPhA...42X4009A}. This has also been directly verified in holographic calculations \cite{Azeyanagi:2007qj,Bak:2011ga}. In higher dimensions this need not be the case, and indeed examples are known where these definitions disagree\footnote{An example is given by a probe string in $AdS_{d+1}$ with $d > 2$ \cite{Jensen:2013lxa}; we thank K. Jensen for drawing this to our attention.}. In our calculation we have taken the definition of the defect entropy to be the regulated entanglement entropy evaluated in the $AdS_2 \times S^1$ conformal frame \eqref{impSdef}, and have shown that this matches very well with the $T \to 0$ limit of the thermodynamic entropy. While from the bulk point of view the subtraction involved in \eqref{impSdef} appears natural (in that we are subtracting the areas of two bulk horizons), the precise reason for this agreement from the field theory perspective deserves further study. Another comparison one can make is between the impurity entropy and the entropy of an extreme Reissner-Nordstr\"{o}m solution with one unit of total flux. To define this latter quantity, one can start by compactifying the horizon into a finite volume torus. One finds that the entropy of the extremal solution is proportional to the magnetic flux. One can thus take the infinite volume limit and obtain a finite entropy. We have made this comparison and find that our impurity entropy is roughly double the Reissner-Nordstr\"{o}m entropy with the same total flux. Confining the flux into finite volume apparently increases its entropy.
We have not yet been able to construct the near horizon scaling solution for vortices with more than the minimum flux, i.e., $n>1$. So in Section \ref{sec:results} we will only compare the finite temperature $n=1$ solutions to their $T=0$ limit. \section{Full solution: boundary conditions and numerical methods} \label{sec:pdenum} In this section we venture away from the infrared and describe the solution to the full problem of constructing a vortex in the UV complete theory. We demand that in the UV we approach the original $AdS_4$, with the scalar approaching the local maximum of its potential at $|\Phi| = 0$ and satisfying the double-trace boundary conditions \eqref{eq:double}. We will first explain the general ansatz used for determining both the metric and matter fields, and then discuss the appropriate boundary conditions and numerical methods used to determine the solution. For convenience of notation we refer to the homogeneous superconducting black hole solution (to which our solutions asymptote in various limits) by the abbreviation HHH. \subsection{Metric and matter fields ansatz} We want a configuration that, from the metric perspective, is symmetric under rotations about the origin of the vortex, so it is clear that we will have a rotation Killing vector $\partial_\varphi$. In addition, we are interested in static black hole solutions, which also means we will have a timelike Killing vector $\partial_t$. Finally, we expect the physics to depend both on the radial variable that measures the distance to the vortex core (we will call it $x$ or $R$) and on the holographic direction (which we denote as $y$ or $z$). So, we anticipate that our problem will be of cohomogeneity two, and that cylindrical coordinates will be best adapted to it.
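As a side check on the numbers entering this construction, the extrema of the potential (\ref{eq:potential}) and the IR AdS radius $\tilde{L}^2 = 3L^2/4$ quoted in Section \ref{sec:setup} can be verified in a few lines. This is a sketch in units $L=1$ (with $\mu^2 L^2 = -2$ and $V_0 = -L^{-2}$), not part of the numerics proper:

```python
# Scalar potential V(eta) = eta mu^2 (1 - eta mu^2/(4 V0)), units L = 1.
mu2, V0 = -2.0, -1.0
V = lambda eta: eta * mu2 * (1 - eta * mu2 / (4 * V0))

# Local maximum at eta = 0 (V decreases for small eta > 0).
assert V(1e-3) < V(0.0)

# Second extremum at eta = 2 V0/mu^2, i.e. |Phi| = 1, with depth V0.
eta_min = 2 * V0 / mu2
assert eta_min == 1.0
assert abs(V(eta_min) - V0) < 1e-12

# Effective cosmological term in the IR: 6/L^2 - 2 V0 = 6/Ltilde^2,
# giving Ltilde^2 = 3 L^2 / 4.
Lt2 = 6 / (6 - 2 * V0)
assert abs(Lt2 - 0.75) < 1e-12
```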
The most general metric and matter ansatz compatible with the symmetries outlined above is: \begin{subequations} \begin{multline} \mathrm{d} s^2 = \frac{L^2}{y^2}\Bigg\{-Q_1\,y_+^2 (1-y^3)\mathrm{d} t^2+\frac{Q_2\,\mathrm{d} y^2}{1-y^3}+\\ \frac{y_+^2Q_4}{(1-x)^4}\left[\mathrm{d} x+x\,y^2(1-x)^3\,Q_3\,\mathrm{d} y\right]^2+\frac{y_+^2Q_5\,x^2}{(1-x)^2}\mathrm{d} \varphi^2\Bigg\}\,, \label{eq:ansatzmetric} \end{multline} \begin{equation} \Phi =y\, e^{i\,n\,\varphi}\,x^n\,Q_6\qquad\text{and}\qquad A = L\,x^2 Q_7\,\mathrm{d} \varphi\,, \label{eq:ansatzgauge} \end{equation} \end{subequations} where each of the $Q_i$'s is a function of $x$ and $y$ to be determined in what follows. For later numerical convenience, we have introduced several factors of $x$ and $y$ multiplying the functions $Q_i$. Note that we write the phase of the complex scalar field as $\tilde{\varphi}= n \,\varphi$, with $n$ being the winding number of the vortex along the Killing direction $\varphi$. Eqs.~(\ref{eq:ansatzgauge}) are equivalent to the following gauge independent definitions \begin{equation} \Psi = y\, x^n\,Q_6\qquad\text{and}\qquad M_{\varphi} = L\,x^2 Q_7-\frac{n}{q}\,. \end{equation} In writing the solution in the above form, we have compactified both the radial distance from the vortex and the holographic direction. As a result, the coordinates $(x,y)$ take values in the unit square, with $y=1$ being the horizon location, $y=0$ the boundary at conformal infinity, $x=0$ the core of the vortex, and $x=1$ asymptotic spatial infinity, i.e. infinitely far away from the vortex core. Regularity at the future and past horizons requires all $Q_i$ to have a power series expansion in $(1-y)$, with $Q_1(x,1)=Q_2(x,1)$. It follows that the constant $y_+$ in (\ref{eq:ansatzmetric}) is proportional to the black hole Hawking temperature: \begin{equation} T = \frac{3\,y_+}{4\pi}\,.
\end{equation} As the dual theory is conformally invariant, the physics of each solution to the theory will then depend only on the dimensionless quantities $T/(-\kappa)$ (where $\kappa$ is given in (\ref{eq:double})), and on the vortex winding $n$. We will fix $\kappa = -1$ and use $y_+$ to probe different values of $T/(-\kappa)$. The boundary conditions at $x=0$ are determined by smoothness along the axis. The detailed conditions on the $Q_i$ are spelled out in Appendix \ref{app:bc}. The boundary conditions in the two asymptotic regions, $y=0$ and $x=1$, are a little subtle and will be discussed below. The line element (\ref{eq:ansatzmetric}) still has gauge freedom associated with reparametrizations of $x$ and $y$. As before, we use the DeTurck method as introduced in \eqref{eq:einsteindeturck}, with the reference metric $\bar{g}$ given by the line element (\ref{eq:ansatzmetric}) with \begin{subequations} \begin{align} &Q_1=Q_4=Q_5=1,\quad\text{and}\quad Q_3=0\,, \\ &Q_2=1-\tilde{\alpha}\,y(1-y)\,,\label{eq:tildealpha} \end{align} \end{subequations} where $\tilde{\alpha}$ is a constant that we will fix later. \subsection{The holographic stress energy tensor and boundary conditions at the conformal boundary} At the conformal boundary, located at $y=0$, we want our solution to approach AdS in Poincar\'e coordinates, i.e. \begin{equation} \mathrm{d} s^2 = \frac{L^2}{z^2}\left(-\mathrm{d} t^2+\mathrm{d} R^2+R^2\mathrm{d} \varphi^2+\mathrm{d} z^2\right)\,. \end{equation} This implies Dirichlet boundary conditions for all metric functions, of the form \begin{equation} Q_1(x,0)=Q_2(x,0)=Q_4(x,0)=Q_5(x,0)=1\quad\text{and}\quad Q_3(x,0)=0\,, \end{equation} and the identification $R=x/(1-x)$, $y=y_+\,z$. Note that our reference metric $\bar{g}$ automatically satisfies these conditions. The boundary conditions for the matter fields are better explained if we first introduce Fefferman-Graham coordinates $(z,\tilde{x})$ (FGC). 
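The proportionality between $y_+$ and the Hawking temperature can be checked numerically from the surface gravity of the horizon at $y=1$ in \eqref{eq:ansatzmetric}; the same sketch also confirms the critical-temperature prefactor quoted in Section \ref{sec:setup}. The homogeneous choice $Q_1=Q_2=1$, units $L=1$, and the sample value of $y_+$ are illustrative assumptions:

```python
import math

# (i) Hawking temperature from the surface gravity kappa = (1/2)|N'|/sqrt(N B)
# evaluated at the horizon y = 1, with Q1 = Q2 = 1 and L = 1.
y_p = 0.9                                     # arbitrary sample value of y_+
N = lambda y: y_p**2 * (1 - y**3) / y**2      # -g_tt
B = lambda y: 1 / (y**2 * (1 - y**3))         #  g_yy
y, eps = 1 - 1e-7, 1e-9
Np = (N(y + eps) - N(y - eps)) / (2 * eps)    # N'(y) by central difference
kappa_sg = 0.5 * abs(Np) / math.sqrt(N(y) * B(y))
T = kappa_sg / (2 * math.pi)
assert abs(T - 3 * y_p / (4 * math.pi)) < 1e-5   # T = 3 y_+/(4 pi)

# (ii) Critical-temperature prefactor: T_c = (3/4pi) G(1/3)^3/(G(-1/3) G(2/3)^2) kappa
pref = (3 / (4 * math.pi)) * math.gamma(1/3)**3 \
       / (math.gamma(-1/3) * math.gamma(2/3)**2)
assert abs(pref + 0.616) < 2e-3               # i.e. T_c ~ -0.62 kappa
```

Analytically, $N'(1) = -3 y_+^2$ and $\sqrt{N B}\big|_{y=1} = y_+$, so $\kappa_{\rm sg} = \tfrac{3}{2} y_+$ and $T = 3y_+/(4\pi)$, which is what the finite-difference evaluation reproduces.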
Because we will determine all the $\{Q_i\}$ numerically, we can only hope to perform this change of coordinates analytically close to the boundary. First we determine all the functions in an expansion in powers of $y$, by solving the equations off the boundary: \begin{equation} Q_i = \sum_{j=0}^{+\infty} Q^{(j)}_i(x)\,y^j\,, \label{eq:Qs} \end{equation} where all the $Q^{(j)}_i(x)$ are determined as a function of $\{Q^{(3)}_4(x),Q^{(0)}_6(x),Q^{(1)}_6(x),Q^{(0)}_7(x),Q^{(1)}_7(x)\}$ and their derivatives along $x$. A few comments are in order regarding this expansion. First, we have chosen the mass of our scalar field $\Phi$, namely $\mu^2L^2=-2$, to be such that near the conformal boundary \begin{equation} |\Phi| = |\tilde{\Phi}^{(1)}|\,y+|\tilde{\Phi}^{(2)}|\,y^2+\ldots\,\Rightarrow x^n\,Q_6 = |\tilde{\Phi}^{(1)}|+|\tilde{\Phi}^{(2)}|\,y+\ldots. \label{eq:scalardecay} \end{equation} The boundary conditions presented in Section \ref{sec:setup} demand that $\tilde{\Phi}^{(2)}/\tilde{\Phi}^{(1)}$ is a constant, which translates into a Robin boundary condition for $Q_6$: \begin{equation} \left.\frac{\partial Q_6}{\partial y}\right|_{y=0}=\frac{\kappa_1}{y_+}Q_6(x,0)\,. \label{eq:scalarBC} \end{equation} The precise relation between the double trace parameter $\kappa$ and $\kappa_1$ will be presented when discussing how to extract the holographic stress energy tensor. Note that the boundary condition for $Q_6$ is only of this simple form due to the extra factor of $y$ in the definition of $Q_6$ (see the first equation in Eq.~(\ref{eq:ansatzgauge})). The second comment we want to make regarding the expansion (\ref{eq:Qs}) is that in general it will contain logarithms. However, that is not the case if one takes $\tilde{\alpha}=4\kappa_1/y_+$ in Eq.~(\ref{eq:tildealpha}), which we shall do from here on. We have confirmed that this is the case, at least up to tenth order off the boundary. Third, we need to discuss which boundary condition we impose on $A_\varphi$.
This will depend on the physics we want to describe. The dual theory has a $2+1$ dimensional gauge coupling. If the gauge coupling is zero (so the $U(1)$ symmetry is not gauged) we have a superfluid. As we do not want any external electromagnetic fields imposed, $F_{\mu\nu}=0$ at the conformal boundary. This implies the following choice of boundary conditions \begin{equation} A_\varphi(x,0)=0\Rightarrow Q_7(x,0)=0\,. \label{eq:BCsuperfluids} \end{equation} The other extreme is an infinite gauge coupling. This is a superconducting regime with zero current. This means we should impose at the conformal boundary \begin{equation} \left.\frac{\partial M_\varphi}{\partial y}\right|_{y=0}=0\Rightarrow \left.\frac{\partial Q_7}{\partial y}\right|_{y=0}=0\,. \label{eq:BCsupercondutors} \end{equation} From the three comments above, we conclude that once the boundary conditions at the conformal boundary for $\Phi$ and $A_\varphi$ are suitably imposed, the $Q^{(j)}_i(x)$ are determined as functions of $\{Q^{(3)}_4(x),Q^{(0)}_6(x),\eta Q^{(0)}_7(x)+(1-\eta)Q^{(1)}_7(x)\}$ and their derivatives, where $\eta = 1$ for superconductors, and $\eta = 0$ for superfluids, i.e. a total of three functions in each phase. At this stage we would like to understand the physical meaning of these functions. This is best understood if we first change to FGC. We do this in an expansion off the boundary, by setting: \begin{subequations} \label{eqs:FGcoordiantes} \begin{equation} \left\{ \begin{array}{l} \displaystyle y=y_+\,z+\sum_{j=2}^{+\infty} a_j(\tilde{x})z^j\,, \\ \\ \displaystyle x=\tilde{x}+\sum_{j=1}^{+\infty} b_j(\tilde{x})z^j\,, \end{array} \right. \end{equation} and demanding that in the $(z,\tilde{x})$ coordinates $g_{zz}=L^2/z^2$ and $g_{z\tilde{x}}=0$. Note that at each order in $z$, we have two conditions to be solved for the two functions $\{a_j(\tilde{x}),b_j(\tilde{x})\}$.
For completeness we provide here the first few terms in the above expansion: \begin{align} &a_2(\tilde{x})=\frac{5 \kappa _1 y_+}{4}\,,\\ &a_3(\tilde{x})=\frac{ \kappa_1 y_+}{64} \left(265\,\kappa_1-64\,y_+\right)\,,\\ &a_4(\tilde{x})=-\frac{ y_+ }{768}\left[336\,\kappa_1\,y_+^2\,\tilde{x}^{2 n}\,Q^{(0)}_6(\tilde{x})^2-14625\,\kappa_1^3+6720\,\kappa_1^2\,y_++128 y_+^3\right]\,,\\ &b_1(\tilde{x})=b_2(\tilde{x})=b_3(\tilde{x})=0\,,\\ &b_4(\tilde{x})=-\frac{y_+^2}{8}(1-\tilde{x})^4 \tilde{x}^{2\,n-1} Q^{(0)}_6(\tilde{x}) \left[\tilde{x}\,Q^{(0)}_6{}^\prime(\tilde{x})+n\,Q^{(0)}_6(\tilde{x})\right]\,. \end{align} \end{subequations} We are now ready to explain the physical meaning of $Q^{(0)}_6(\tilde{x})$ and the relation between $\kappa_1$ and the usual double trace parameter $\kappa$. $\kappa$ is usually defined with respect to the FGC in the following way $-$ see \eqref{eq:asympscalar} and \eqref{eq:double} $-$ \begin{equation} \Phi = \Phi^{(1)} z+\kappa\,\Phi^{(1)} z^2+\ldots\,, \label{eq:kappajorge} \end{equation} with $\Phi^{(1)}$ being identified as the expectation value of the operator dual to $\Phi$, i.e. \begin{equation} \Phi^{(1)} =y_+\,e^{i\,n\,\varphi}\,Q^{(0)}_6(\tilde{x})\tilde{x}^n \equiv \langle \mathcal{O}\rangle\Rightarrow \left|\langle \mathcal{O}\rangle\right| = y_+\,\left|Q^{(0)}_6(\tilde{x})\right|\tilde{x}^n\,. \end{equation} Using both Eq.~(\ref{eq:scalardecay}) and Eq.~(\ref{eq:scalarBC}), together with the relation between $y$ and $z$ described in Eqs.~(\ref{eqs:FGcoordiantes}), we find \begin{equation} \kappa_1 = \frac{4\,\kappa}{9}\,. 
\end{equation} A similar expansion holds close to the conformal boundary for the gauge field $A_\varphi$, namely \begin{equation}\label{eq:bdryA} A_\varphi = L\,A^{(0)}_\varphi+L\,A^{(1)}_\varphi\,z+\ldots\,, \end{equation} where, according to the usual AdS/CFT dictionary, $A^{(0)}_\varphi$ is the boundary Maxwell field, $A^{\mathrm{FT}}_\varphi$, and $A^{(1)}_\varphi$ the current flowing along $\varphi$, $J_\varphi$. This then implies the identification \begin{equation}\label{eq:bdryAJ} A^{\mathrm{FT}}_\varphi = \tilde{x}^2\,Q^{(0)}_7,\quad \text{and}\quad J_\varphi=y_+\,\tilde{x}^2\,Q^{(1)}_7\,. \end{equation} We are thus left to find an interpretation for $Q^{(3)}_4(\tilde{x})$. Not surprisingly, this will be related to the holographic stress energy tensor, whose extraction from the numerical data we detail below. Here we have decided to use the approach described in \cite{Balasubramanian:1999re}, and reconstruct the holographic stress energy tensor as \begin{equation} T_{\mu\nu} = \frac{1}{8\pi G_N\,L^2}\lim_{z\to 0}\left(\frac{L}{z}\right)\left(K_{\mu\nu}-\gamma_{\mu\nu}K-\frac{2}{L}\gamma_{\mu\nu}+L\,G^{(3)}_{\mu\nu}-\frac{\Phi^2}{L}\gamma_{\mu\nu}\right)\,, \label{eq:stressenergy} \end{equation} where Greek indices run over boundary coordinates, $K_{\mu\nu}$ is the extrinsic curvature associated with an inward unit normal vector to the boundary (located at $z=0$), $K\equiv \gamma^{\mu\nu}K_{\mu\nu}$, $\gamma_{\mu\nu}$ is the induced metric on the constant $z$ surface, and $G^{(3)}_{\mu\nu}$ is the Einstein tensor of $\gamma_{\mu\nu}$. Since we are interested in field theories living on Minkowski space, the fourth term in Eq.~(\ref{eq:stressenergy}) vanishes as $z\to0$. The last term, on the other hand, gives the necessary contribution to cancel the divergences arising due to the presence of the scalar field. Note also that we used FGC to define the holographic stress energy tensor.
This stress energy tensor can be shown to obey the following relations \begin{equation} h^{\mu\nu}T_{\mu\nu}=\frac{\kappa}{4\pi G_N}|\langle\mathcal{O}\rangle|^2\quad\text{and}\quad \tilde{\nabla}^\mu T_{\mu\nu}=\frac{\kappa}{8\pi G_N}\tilde{\nabla}_\nu |\langle\mathcal{O}\rangle|^2+\frac{1}{8\pi G_N}F^{\mathrm{FT}}_{\nu\rho}J^\rho\,, \label{eq:conservation} \end{equation} where $h_{\mu\nu}$ is the metric on the conformal boundary, and $\tilde{\nabla}$ its associated Levi-Civita connection. For the solutions we are considering, the last term does not contribute, because superfluid boundary conditions require $F^{\mathrm{FT}}=0$, whereas our superconducting boundary conditions require $J=0$. The stress energy tensor would a priori depend on three unknown functions, $\{Q^{(3)}_1(\tilde{x}),Q^{(3)}_4(\tilde{x}),Q^{(3)}_5(\tilde{x})\}$; however, by using both conditions above, we can solve algebraically for $Q^{(3)}_1(\tilde{x})$ and $Q^{(3)}_5(\tilde{x})$, in terms of $Q^{(3)}_4(\tilde{x})$ and its derivatives along $\tilde{x}$. A useful test of the numerics is given by the first law, expressed in the canonical ensemble variables: \begin{equation} \mathrm{d} F = -S\,\mathrm{d} T\,, \label{eq:firstlaw} \end{equation} where $F=E-TS$ is the Helmholtz free energy. In order to use this relation, we first need to define the energy for these systems. This seems rather hopeless, because the holographic stress energy tensor is not covariantly conserved, see the second equation in (\ref{eq:conservation}). However, as we mentioned before, the last term in Eq.~(\ref{eq:conservation}) does not contribute on our solutions, and because the first term on the right hand side of the conservation equation is a total derivative, we can readily reabsorb it into an effective stress energy tensor, $\tilde{T}_{\mu\nu}$, that is conserved on our solutions: \begin{equation} \tilde{T}_{\mu\nu}=T_{\mu\nu}-\frac{\kappa\,h_{\mu\nu}}{8\pi G_N}|\langle\mathcal{O}\rangle|^2\,.
\end{equation} We now define the energy in the usual way: \begin{equation} E = -\int_{\Sigma_t} \mathrm{d}^2 x\sqrt{\eta}\,\tilde{T}_{\mu\nu}(\partial_t)^\mu t^\nu\,, \label{eq:energystress} \end{equation} where $\eta_{\mu\nu}$ is the induced metric on the constant $t$ surface with unit normal $t^{\nu}$. Next we discuss the boundary conditions far from the vortex core. \subsection{\label{sec:x1}Boundary conditions infinitely far away from vortex core - $x=1$:} Because the flux is conserved, the boundary conditions at $x=1$ are very different for the superconductor and superfluid phases. Let us see why. The flux through a surface $\Sigma$ at constant $t$ and $y$ is given by \begin{equation} \tilde{\Phi} = \int_{\Sigma} F\,. \label{eq:flux1} \end{equation} For an isolated vortex, this flux is quantized and given by \begin{equation} \tilde{\Phi} =2\,\pi\, \frac{n}{q}\,. \label{eq:flux2} \end{equation} Let us start with the superconducting phase. This phase is characterized by a ``no current'' boundary condition, see Eq.~\eqref{eq:BCsupercondutors}. This means that the field lines of $A_{\varphi}$ are allowed to penetrate the boundary, and we expect the magnetic field to fall off quickly far away from the vortex core. In particular, there is no tension between Eq.~(\ref{eq:flux1}) and Eq.~(\ref{eq:flux2}). In this case, the solution approaches the HHH solution \cite{Faulkner:2010gj} as we approach $x=1$.\footnote{Note that in the solution \cite{Faulkner:2010gj} both the scalar field phase and the Maxwell field vanish while our solution has a complex scalar and $A_\varphi\neq 0$.
However, the gauge transformation $\tilde{\varphi}\to \tilde{\varphi} +q \chi\,,\: A_\varphi \to A_{\varphi} +\nabla_\varphi \chi$, with gauge parameter $\chi=n\varphi/q$ rewrites the solution of \cite{Faulkner:2010gj} in the form \eqref{eq:HHHbc}.} The boundary conditions are simply \begin{align}\label{eq:HHHbc} &Q_1(1,y)=\tilde{Q}_1(y)\,\quad Q_2(1,y)=\tilde{Q}_2(y),\quad Q_3(1,y)=0\,,\nonumber \\ \\ &Q_4(1,y)=Q_5(1,y)=\tilde{Q}_3(y)\,,\quad Q_6(1,y) = \tilde{Q}_4(y),\quad\text{and}\quad Q_7(1,y) = \frac{n}{q\,L}\,,\nonumber \end{align} where $\tilde{Q}_i(y)$ corresponds to the HHH solution expressed in DeTurck-like coordinates. Things are different if we instead consider the superfluid phase. Here, because $A_\varphi = 0$ at the boundary, there seems to be a tension between Eq.~(\ref{eq:flux1}) and Eq.~(\ref{eq:flux2}). This conundrum is resolved in a simple but very dramatic way: the field lines of $A_{\varphi}$ spread as we reach the boundary, and accumulate at $(x,y)\sim(1,0)$. This accumulation ends up destroying the asymptotics of the would-be HHH black hole and creating a new solution that is similar to the usual holographic superconductor, except that $A_\varphi$ now depends on $y$, being $0$ at $y=0$, and approaching a smooth nonzero value at $y=1$. Close to $y=1$ we expect this new solution to be similar to the usual HHH black hole. Unlike the HHH solution, this black hole is not expected to exist as a solution of the Abelian-Higgs model in AdS \emph{per se}. Instead, it only makes sense as an asymptotic solution valid close to $x=1$. One easy way of seeing this is that this solution is not regular everywhere in our manifold, being singular if continued all the way to $x=0$, i.e. it violates the boundary conditions described in Appendix \ref{app:bc}.
To sum up, the boundary conditions close to $x=1$ take the following form \begin{align} &Q_1(1,y)=\tilde{Q}_1(y)\,\quad Q_2(1,y)=\tilde{Q}_2(y),\quad Q_3(1,y)=0\,,\nonumber \\ \\ &Q_4(1,y)=Q_5(1,y)=\tilde{Q}_3(y)\,,\quad Q_6(1,y) = \tilde{Q}_4(y),\quad\text{and}\quad Q_7(1,y) = \tilde{Q}_5(y)\,,\nonumber \end{align} where the $\tilde{Q}_i$ are now determined by solving five coupled ODEs that are obtained as limiting equations of our general PDE system as one approaches $x=1$. Finally, the factor of $(1-x)^3$ in the cross term of the line element (\ref{eq:ansatzmetric}) was chosen such that $Q_3$ vanishes \emph{linearly} at $x=1$, i.e. $Q_3\propto (1-x)$. The fact that $Q_3$ is linear in $(1-x)$, rather than some other higher power of $(1-x)$, is important to achieve the desired numerical accuracy. \subsection{Numerical method and convergence} Before proceeding to the discussion of the results we will first give some details on the numerical methods we employed. We have used a pseudospectral collocation procedure to discretize our PDE system. For both the $x$ and $y$ directions we used a collocation grid on Gauss-Chebyshev-Lobatto points. We solved the resulting system of nonlinear algebraic equations using a standard Newton-Raphson method. We have performed several tests of the convergence of our numerical method. First, we monitored the maximum of the norm of the DeTurck vector, as a function of the number of collocation points $N$, i.e. $\chi_N = \max_{(x,y)\in(0,1)^2} \xi_a \xi^a$. Note that we expect the norm to be zero on solutions of the Abelian-Higgs model in AdS. Furthermore, since we are using a pseudospectral method on a Chebyshev grid, we expect the convergence of our numerical method to be exponential. We have confirmed that this is the case, as can be seen in Fig.~\ref{fig:convergence}. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{error.pdf} \caption{$\chi_N$ as a function of the number of grid points $N$.
The vertical scale is logarithmic, and the data is well fit by an exponential decay: $\log\chi_N = -7.5-0.60\,N$. In this particular simulation we have used $y_+ = 1/2$, $\kappa = -1$, $q=2$ and $n=1$.} \label{fig:convergence} \end{figure} We have also tested convergence by looking at how quantities such as the energy and entropy vary when we vary $N$, and we always find exponential convergence both for the superconducting and superfluid phases. A couple of remarks are in order about convergence across the parameter range we have probed. First, we note that as we lower the temperature, i.e. for small values of $y_+$, we need to increase the number of points in order to maintain the resolution. For instance, in order for $\chi_N$ to drop below $10^{-10}$ in the superconducting phase for $y_+ = 1/10$ we had to use $61$ grid points in both the $x$ and $y$ grids. Second, all computations in this paper were done using quadruple precision. Finally, we have also tested the first law, Eq.~(\ref{eq:firstlaw}). We find perfect agreement, i.e. deviations below the percent level, when $y_+\gtrsim0.2$. However, when $y_+\sim 0.1$ we find deviations from this expression of up to $5\%$. As we have mentioned before, this is not surprising since low temperature solutions are more difficult to determine accurately. Presumably, this difficulty is related to the fact that a throat is developing as we lower the temperature and that a denser grid is required in order to resolve it. We note, however, that as we increase the number of points, Eq.~(\ref{eq:firstlaw}) becomes more and more accurate, with the expected exponential convergence for \emph{all} values of $y_+$ we simulated, namely $y_+\in(0.1,2.6)$. This roughly corresponds to the interval $T/(-\kappa)\in (0.023,0.621)$. \section{Full solution: results} \label{sec:results} In this section we present the results for the holographic duals of isolated vortices.
We discuss superconducting and superfluid vortices separately: while these are similar in some ways (e.g. the physics of the horizon is very similar for both), certain aspects of the physics are very different. \subsection{Superconducting vortices} \subsubsection{Horizon properties} \label{sec:sc_hor} As we discussed in Section III, a novel feature of holographic vortices is that they carry magnetic flux out of the black hole horizon, distorting it. At low temperature, the horizon approaches the Poincar\'e horizon of the IR $AdS_4$ away from the vortex, but at the core of the vortex, the scalar field vanishes and there is a single unit of nonzero magnetic flux. As a result, there is a piece of local Reissner-Nordstr\"{o}m AdS horizon inside the vortex, carrying a finite amount of entropy. To illustrate this ``horizon bubble'' sticking out, we start with a diagram showing an isometric embedding of the horizon into 3D hyperbolic space. Using the line element (\ref{eq:ansatzmetric}), one finds that the induced line element on the vortex horizon is given by \begin{equation} \mathrm{d} s^2 = L^2\left[\frac{y_+^2Q_4(x,1)\,\mathrm{d} x^2}{(1-x)^4}+\frac{y_+^2\,Q_5(x,1)\,x^2}{(1-x)^2}\mathrm{d} \varphi^2\right]\,. \label{eq:inducedhorizon} \end{equation} To construct an embedding diagram, one starts with the line element of hyperbolic space: \begin{equation} \mathrm{d} s^2_{\mathbb{H}}=\frac{L^2}{\hat{z}^2}\left[\mathrm{d} \hat{R}^2+\hat{R}^2\mathrm{d}\varphi^2+\mathrm{d}\hat{z}^2\right]\,. \end{equation} One then searches for an embedding of the form $(\hat{R}(x),\hat{z}(x))$, which gives the following metric \begin{equation} \mathrm{d} s^2_{\mathbb{H}}=\frac{L^2}{\hat{z}(x)^2}\left\{\left[\hat{R}'(x)^2+\hat{z}'(x)^2\right]\mathrm{d} x^2+\hat{R}(x)^2\mathrm{d}\varphi^2\right\}\,.
\end{equation} By equating the above line element with Eq.~(\ref{eq:inducedhorizon}), one finds that \begin{equation} \frac{\hat{R}'(x)^2+\hat{z}'(x)^2}{\hat{z}(x)^2}=\frac{y_+^2Q_4(x,1)}{(1-x)^4}\quad \text{and}\quad \frac{\hat{R}(x)^2}{\hat{z}(x)^2}=\frac{y_+^2\,Q_5(x,1)\,x^2}{(1-x)^2}\,. \end{equation} These are first-order nonlinear equations in $\hat{R}(x)$ and $\hat{z}(x)$ that can be readily solved using pseudospectral collocation methods. We fix the integration constants by demanding $\hat{z}(1)=1/y_+$. The curve traced by $(\hat{R}(x),\hat{z}(x))$, as we vary $x$ in the interval $(0,1)$, is the embedding diagram. The results for several different temperatures are shown in Fig.~\ref{fig:embeddingsc}, where the temperature decreases from bottom to top. We see that as the temperature is decreased, the backreaction on the metric is such that a bulge is created on the horizon -- recall that smaller $\hat{z}$ is closer to the conformal boundary. Asymptotically, i.e. as $\hat R(x)\to+\infty$, the horizon becomes flat. We have plotted this diagram for several values of $n$, and they all look qualitatively similar. Since the horizon is bulging out, one might worry about a possible Gregory-Laflamme type instability on the horizon \cite{Gregory:1993vy}. We have not yet performed an exhaustive study of the stability of this background, but our preliminary results indicate stability for the $n=1$ mode (see later discussion). \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{embeddingsc.pdf} \caption{Embedding diagram, plotted for several temperatures, for superconductor boundary conditions with $q\, L=2$ and $n=1$.
Disks, squares, diamonds, triangles and inverted triangles have $T/(-\kappa) = 0.029,\, 0.048,\,0.072,\,0.119,\,0.571$, respectively.} \label{fig:embeddingsc} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{ricci_n1_n2.pdf} \caption{Ricci scalar of the induced metric on the horizon, ${\cal R}$, for superconductor boundary conditions with $q\, L=2$, as a function of the proper distance to the vortex origin $\ell_\mathcal{H}$. The {\it left panel} has $n=1$, and the {\it right panel} $n=2$. In both panels, disks, squares, diamonds, triangles and inverted triangles have $T/(-\kappa) = 0.029,\, 0.048,\,0.072,\,0.119,\,0.571$, respectively.} \label{fig:scalarHn1n2sc} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{riccisc_t_0.pdf} \caption{Ricci scalar of the induced horizon geometry, evaluated at the origin, for superconductor boundary conditions with $q\, L=2$ and $n=1$, as a function of the temperature. The red triangle indicates the $T = 0$ result from the scaling solution constructed in section \ref{sec:confdef}.} \label{fig:scalarHsc} \end{figure} We have also computed the Ricci scalar, ${\cal R}$, along the horizon, i.e. the Ricci scalar of the line element (\ref{eq:inducedhorizon}), as a function of the proper distance, $\ell_\mathcal{H}$, from the rotation axis. The results for vortices with $n=1$ or $n=2$ are shown in Fig.~\ref{fig:scalarHn1n2sc}.\footnote{In this, and all subsequent bulk plots, we have set $L = 1$ so everything is measured in units of the AdS radius.} For $n=1$ we find that the maximum always sits at the origin, whereas for $n=2$ it is attained around $\ell_\mathcal{H}\sim1/2$. This shift is simply a consequence of the fact that the energy density caused by the complex scalar has two main contributions: $g^{xx}|\nabla_x\Phi|^2+g^{\varphi\varphi}|\nabla_\varphi\Phi|^2$.
Finally, in all cases we see that the Ricci scalar approaches $0$ as $\ell_\mathcal{H} \to+\infty$, since we recover the HHH black hole, which has a horizon that preserves translational invariance. In Fig.~\ref{fig:scalarHsc} we plot the Ricci scalar for $n=1$ evaluated at the origin. Note that it increases monotonically as the temperature is decreased; we also indicate the precise value at $T = 0$, obtained from Fig. \ref{fig:nearhorizoncurv}. While this value does fit the trend, we see that we are still some distance away from zero temperature. Note the utility of the scaling solution; without it, the steep upwards slope might have made us nervous that our $T > 0$ solutions would have a singular $T = 0$ limit. Another quantity of interest is the magnetic flux density on the horizon, as a function of the proper distance from the rotation axis. Instead of $F_{x\varphi}$, we plot \begin{equation}\label{eq:fluxdensity} \Phi_D \equiv F_{\ell_{\mathcal{H}}\varphi} = F_{x\varphi}\frac{\mathrm{d} x}{\mathrm{d} \ell_{\mathcal{H}}}\,, \end{equation} so that the area under the curve directly gives the total flux (up to a factor of $2\pi$ coming from the $\varphi$ integral). The results, for various temperatures, are shown in Fig. \ref{fig:magneticfieldhorizon}. As expected, the total flux is independent of $T$. Note that the width of the vortex, defined as the region where most of the flux is concentrated, is approximately constant as we lower the temperature. This is also expected, since it is essentially the width of the cosmic string when it hits the horizon. Furthermore, the maximum of $\Phi_D$ is a monotonic function of the temperature, increasing as we decrease $T$. We will see that this last property does not hold for the magnetic field at the conformal boundary at infinity.
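This flux bookkeeping is easy to illustrate numerically: the area under any admissible $\Phi_D$ profile, times $2\pi$, must reproduce the quantized flux $2\pi n/q$ of Eq.~(\ref{eq:flux2}), independently of the profile's shape. The following minimal sketch uses a synthetic bump profile as an illustrative stand-in for the horizon data (the widths play the role of different temperatures and are made up):

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal rule (avoids version-dependent numpy helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

ell = np.linspace(0.0, 20.0, 4001)   # proper distance from the vortex axis
n, q = 1, 2.0

def flux_density(width):
    """Synthetic stand-in for Phi_D at one "temperature": a bump of the given
    proper width, normalized so that 2*pi * (area under the curve) = 2*pi*n/q."""
    p = ell * np.exp(-(ell / width) ** 2)   # vanishes on the axis, decays fast
    return p * (n / q) / trap(p, ell)

cold, hot = flux_density(0.5), flux_density(1.0)
# The peak height differs between the two profiles ...
assert cold.max() > hot.max()
# ... but the quantized total flux 2*pi*n/q does not.
for p in (cold, hot):
    assert abs(2.0 * np.pi * trap(p, ell) - 2.0 * np.pi * n / q) < 1e-9
```

For the actual data, the same quadrature applied to $\Phi_D(\ell_{\mathcal{H}})$ at different temperatures is a convenient check that only the peak height, and not the total flux, varies with $T$.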
\begin{figure}[h] \centering \includegraphics[width=\textwidth]{flux_density.pdf} \caption{Magnetic flux density \eqref{eq:fluxdensity} evaluated at the horizon, as a function of the proper distance along the horizon, plotted for several temperatures. The {\it left panel} has $n=1$, and the {\it right panel} $n=2$. In both panels, disks, squares, diamonds, triangles and inverted triangles have $T/(-\kappa) = 0.029,\, 0.048,\,0.072,\,0.119,\,0.571$, respectively. The {\it left panel} also includes the $T=0$ result from our scaling solution (small red disks).} \label{fig:magneticfieldhorizon} \end{figure} \subsubsection{Field-theoretic and thermodynamic observables} We turn now to field-theoretic observables extracted from the asymptotic behavior of our solution. It turns out that there are strong differences between the superconducting case that we are discussing now, and the superfluid case that will be discussed in the next section. We begin with the magnetic field in the boundary theory, $F_{x_1 x_2}\equiv B(x)$. Here $x_1$ and $x_2$ are boundary Cartesian coordinates, defined in the usual way: $x_1 = R \cos \varphi$ and $x_2 = R \sin \varphi$. $B(x)$ can be easily expressed as a function of $Q_7$ evaluated at the conformal boundary \begin{equation} B(x) = (1-x)^3\left[2 Q_7(x,0)+x \frac{\partial Q_7(x,0)}{\partial x}\right]\,. \label{eq:magneticfieldjorge} \end{equation} In Fig.~\ref{fig:magnetic_field_n1_n2} we plot this boundary magnetic field for several temperatures and for $n=1,\,2$. Interestingly, the location of its maximum correlates with that of the maximum of the Ricci scalar along the horizon. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{magnetic_field_n1_n2.pdf} \caption{Boundary magnetic field profile as a function of $R$, plotted for several values of $T/(-\kappa)$. The {\it left panel} has $n=1$, and the {\it right panel} $n=2$.
Here, disks, squares, diamonds, triangles and inverted triangles have $T/(-\kappa) = 0.029,\, 0.370,\,0.495,\,0.546,\,0.571$, respectively.} \label{fig:magnetic_field_n1_n2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{magneticfieldmax.pdf} \caption{Maximum of the boundary magnetic field as a function of $T/(-\kappa)$.} \label{fig:magneticmaximum} \end{figure} It turns out that the maximum magnetic field is not a monotonic function of the temperature. In particular, for $n=1$ and for $T$ smaller than $[T/(-\kappa)]_c\approx 0.185$, the maximum magnetic field starts decreasing with decreasing temperature, see Fig.~\ref{fig:magneticmaximum}. We note that the magnetic field falls off exponentially outside a core radius that is determined by ${\kappa}$. Even as the temperature is taken to zero this core radius remains finite. This should be contrasted with the falloff of the energy density ${\cal E}(R)$, as shown in Fig.~\ref{fig:energy_falloff}. This is well fit at low temperatures by ${\cal E}(R) \sim \frac{e^{-\alpha(T) R}}{R^3}$, where the inverse ``energy screening length'' $\alpha(T) \to 0$ as $T \to 0$. Thus at precisely zero temperature the vortex sources a long-range disturbance in the stress tensor, due to its interaction with the IR CFT. The exponent of the power law is simply the dimension of the stress tensor. This long-range tail demonstrates a difference between conformal vortices and conventional superconducting vortices, which source no long-range fields. The situation is different for superfluid vortices: while presumably the long-range tail discussed here is still present, it will be swamped by a more powerful (and more conventional) IR divergence. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{gapless_shield.pdf} \caption{Normalized logarithmic plots of energy density and magnetic field as a function of $R$ at two different temperatures.
Note that when the temperature is changed the asymptotic slope of the magnetic field changes very little whereas that of the energy density changes significantly. In fact this ``energy screening length'' diverges as $T \to 0$.} \label{fig:energy_falloff} \end{figure} We turn now to the thermodynamics, i.e. the entropy, energy and Helmholtz free energy of isolated gravitational vortices at nonzero temperature.\footnote{For each $n$ there is a unique $T=0$ vortex solution with $n$ units of flux. However at $T>0$, the vortex potential acquires temperature corrections which affect the solution. We wish to study this temperature dependence.} We will see that some of their global thermodynamic properties strongly depend on $q$. We start with the entropy. Since we are working at nonzero temperature, it is clear that the total entropy is infinite, as the black hole horizon extends infinitely far from the vortex. The quantity of most interest is the difference between the entropy with the vortex and the entropy without (at the same temperature). From the above metric, it is easy to see that this entropy difference is given by \begin{equation} \Delta S = \frac{\pi}{2}\int_0^1 \frac{y_+^2\,x}{(1-x)^3}\left[\sqrt{Q_4(x,1)Q_5(x,1)}-\tilde{Q}_4(1)\right]\mathrm{d} x\,. \label{eq:difentropy} \end{equation} The fact that this expression is finite is another test of the numerics, since the third-order pole at $x=1$ coming from the denominator has to cancel against the vanishing of the bracket. We have presented this entropy difference previously in Fig.~\ref{fig:entropydiff}, where we compared it to the impurity entropy of the $T=0$ scaling solution for the $n=1$ vortex. We see that this entropy difference grows as we decrease the temperature, approaching the $T = 0$ result computed previously. This is another illustration of the fact that the vortex causes the horizon to ``bubble out''. The $n=2$ vortex is wider and causes a larger bubble on the horizon with greater area.
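The cancellation required for $\Delta S$ to be finite is easy to see in a toy version of the integral in Eq.~(\ref{eq:difentropy}). In the sketch below the horizon profiles $Q_4$, $Q_5$ are synthetic stand-ins (not our numerical data), chosen to approach their common asymptotic value like $(1-x)^4$, so that the bracket beats the $(1-x)^{-3}$ pole and the integrand vanishes linearly at $x=1$:

```python
import numpy as np
from scipy.integrate import quad

y_plus = 0.5
Q4_inf = 1.0   # stand-in for the constant asymptotic horizon value

def Q4(x):
    # Synthetic horizon profile approaching the asymptotic value like (1-x)^4.
    return Q4_inf + (1.0 - x) ** 4

def Q5(x):
    return Q4_inf + 0.5 * (1.0 - x) ** 4

def integrand(x):
    # The bracket vanishes like (1-x)^4 here, beating the (1-x)^{-3} pole;
    # for real data this cancellation is a nontrivial test of the falloff.
    bracket = np.sqrt(Q4(x) * Q5(x)) - Q4_inf
    return y_plus ** 2 * x * bracket / (1.0 - x) ** 3

dS, err = quad(integrand, 0.0, 1.0)
dS *= np.pi / 2.0
assert np.isfinite(dS) and dS > 0.0 and err < 1e-6
```

With the actual interpolated horizon data, the same quadrature is finite only if the PDE solution has the correct falloff at $x=1$, which is why the finiteness of $\Delta S$ is a useful check.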
In fact, the $\Delta S$ computed for $n=2$ is about twice that computed for $n=1$. We will look at this more closely shortly, as it is an indication of whether the $n=2$ vortex can fragment into two $n=1$ vortices. The entropy difference remains nonzero at the critical value $[T/(-\kappa)]_c \approx 0.6$. This is the critical temperature for the scalar field to condense \cite{Faulkner:2010gj}, and beyond this value, the vortex no longer exists. It might look strange to see the entropy starting at some finite value precisely at $T=T_c$, i.e. the entropy difference appears to be a discontinuous function of the temperature, which might seem to be in tension with the fact that this is a second order phase transition. Importantly, the discontinuity is not in a thermodynamic entropy {\it density} (which would contribute a total entropy scaling with the system size), but rather in a finite impurity entropy that is independent of the system size. Said differently, this comes from the fact that we are looking at a single isolated vortex, together with the fact that we are integrating over an infinite domain. The tail extending from $x\sim 0.5$ to $x=1$ is enough to give a finite contribution if we approach $T_c$ from below. We have explicitly checked that this is the case, by truncating the integration to a finite domain, instead of going all the way down to $x=1$. If we compute the integral only up to \emph{any} $x=x^{\star}<1$, we find that $\Delta S$ is zero at $T=T_c$. Thus the limits of infinite system size and $T \to T_c$ do not commute.
For a $2+1$ dimensional superconductor, any applied magnetic field will necessarily penetrate the sample\footnote{In a 3+1 dimensional superconductor in an applied field, this is not the case: for a small field, currents running inside the sample can push the field lines {\it around} the boundaries of the sample until a critical field (conventionally called $H_{c1}$) is reached, after which the field will begin to penetrate it. This is clearly not geometrically possible in 2+1 dimensions: in other words, in 2+1 dimensions $H_{c1} = 0$.}. This flux must then locally disrupt the condensate and create some (possibly very small) regions of normal phase. Famously, the way in which this normal phase is distributed is different for Type I and Type II superconductors. Consider the domain wall separating a region of normal phase (carrying magnetic flux) and the superconducting phase. For Type I superconductors this domain wall costs positive energy; thus the system will attempt to minimize the length of this domain wall, which is accomplished by trying to create very large continuous chunks of normal phase that accommodate all the flux, i.e. phase separation. However for Type II superconductors the domain wall costs {\it negative} energy: the system will thus try to {\it maximize} the length of the domain wall by separating the normal phase into as many small pieces as possible. This subdivision will continue until the system hits the quantum limit, with each small piece of normal phase now a vortex carrying a single quantum of flux, arranged in an Abrikosov lattice and preserving superconductivity. Note that from this behavior we may conclude that for Type I superconductors an $n=2$ vortex should be energetically favored over two $n=1$ vortices, as essentially the vortices attract each other and want to merge together. 
The opposite is true for Type II: here an $n=2$ vortex wants to break into two $n=1$ vortices, which will repel each other and eventually form an Abrikosov lattice. Let us now discuss the microscopic mechanism behind this behavior. The Landau-Ginzburg effective theory of superconductivity has two length scales; the London penetration depth $\lam$ and the coherence length $\xi$. $\lam$ measures how quickly the magnetic field falls off in a superconductor: it is thus the inverse mass of the photon in the Higgsed phase. $\xi$ measures how quickly disturbances of the order parameter fall off: we can view it as the inverse mass of the Higgs boson itself. It is the ratio \begin{equation} \kappa_{LG} = \frac{\lam}{\xi} \end{equation} that controls whether the superconductor is Type I or Type II, with $\kappa_{LG} \to 0$ being the Type I limit, and $\kappa_{LG} \to \infty$ the Type II limit. This can be understood by studying the energetics in the vicinity of the domain wall; see e.g. \cite{tinkham} for details. In the framework of Landau-Ginzburg theory, the threshold between the two is at precisely $\kappa_{LG}^{\star} = \frac{1}{\sqrt{2}}$. Thus we conclude that this ratio of correlation lengths should be correlated with vortex stability. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{type_ii.pdf} \includegraphics[width=0.47\textwidth]{type_i_q_3.pdf} \caption{Profile of magnetic field $B(R)$ and order parameter $\langle \sO(R) \rangle$ for a single vortex with $q L = 1$ {\it (left panel)} and $q L = 3$ {\it (right panel)}. $\lam$ and $\xi$ are found from exponential fits and measure the rate of fall-off of magnetic field and order parameter, respectively.} \label{fig:typeII} \end{figure} We now return to our gravitational description and see if these expectations are borne out. We will do this for different values of the scalar charge $q$; interestingly we will find different results. 
First, we construct the correlation lengths $\lam$ and $\xi$ by fitting an exponential profile (with a subleading power-law correction) to the magnetic field $B(R)$ and the order parameter $\langle \sO(R) \rangle$ for a single vortex: \begin{equation} B(R) \sim b \left(\frac{\lambda}{R}\right)^{\alpha}\exp\left(-\frac{R}{\lambda}\right)\,, \qquad \langle \mathcal{O}(\infty) \rangle - \langle \mathcal{O}(R) \rangle \sim o\left(\frac{\xi}{R}\right)^{\beta}\exp\left(-\frac{\sqrt{2} R}{\xi}\right) \ . \end{equation} The results are shown in Fig.~\ref{fig:typeII}. It is clear that for $qL =1$ we have $\xi < \lambda$, while for $qL = 3$ we have $\xi > \lambda$. The ratio $\kappa_{LG}$ depends weakly on temperature, but for $qL =1$, ${\kappa}_{LG} > \frac{1}{\sqrt{2}}$ and we expect to be firmly in the Type II regime, while for $qL = 3$, we have ${\kappa}_{LG} < \frac{1}{\sqrt{2}}$ and we expect to be in the Type I regime. For $qL=2$, ${\kappa}_{LG}$ is close to the expected transition at $ \frac{1}{\sqrt{2}}$. To check these expectations, we compare the entropy at fixed energy and the free energy at fixed temperature of an $n=2$ vortex and two $n=1$ vortices. We will see that both the microcanonical and canonical analyses give the same answer for the stability of an $n=2$ vortex. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{comparison2.pdf} \caption{For $qL = 1$, the $n=2$ vortex prefers to break up into two $n=1$ vortices. \emph{Left Panel}: entropy difference (\ref{eq:difentropy}) as a function of the energy difference $\Delta E/(-\kappa)$. Disks correspond to $n=1$ and squares to $n=2$. \emph{Right Panel}: the difference in free energies, $\delta \Delta F = \Delta F_{n=2}/2-\Delta F_{n=1}$, as a function of the temperature $T/(-\kappa)$.} \label{fig:comparison2} \end{figure} We start with the $qL=1$ case.
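The exponential fits used to extract $\lambda$ and $\xi$ can be sketched as follows. The data here are synthetic and noise-free, generated from the fitting form itself with a known penetration depth (the real fits act on the numerically extracted tails of $B(R)$ and $\langle\mathcal{O}(R)\rangle$):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "magnetic field" tail with a known penetration depth; in practice
# B(R) comes from the numerical boundary data, restricted to large R where
# the asymptotic form is valid.
lam_true, alpha_true, b_true = 1.3, 0.5, 2.0
R = np.linspace(2.0, 15.0, 200)
B = b_true * (lam_true / R) ** alpha_true * np.exp(-R / lam_true)

def model(R, b, lam, alpha):
    # Exponential falloff with a subleading power-law correction.
    return b * (lam / R) ** alpha * np.exp(-R / lam)

# Positivity bounds keep the fractional power well defined during the fit.
(b_fit, lam_fit, alpha_fit), _ = curve_fit(
    model, R, B, p0=(1.0, 1.0, 1.0), bounds=(1e-6, np.inf))
assert abs(lam_fit - lam_true) / lam_true < 1e-3   # recovers the known depth
```

Restricting the fit window to the asymptotic region matters in practice, since the functional form above only holds at large $R$.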
Since the total energy, like the entropy, diverges due to the infinite volume, we work with the difference $\Delta E$, defined as the difference in energy between the vortex solution and the corresponding HHH black hole at the same temperature. Using Eq.~(\ref{eq:energystress}), one finds \begin{multline} \Delta E = -\frac{y_+^2}{48}\int_0^1 \frac{x}{(1-x)^3}\Bigg\{3 y_+\left(\left.\frac{\partial^3 Q_1}{\partial y^3}\right|_{y=0}-\left.\frac{\partial^3 \tilde{Q}_1}{\partial y^3}\right|_{y=0}\right)\\+19\kappa \left[x^n Q_6(x,0)^2-\tilde{Q}_4(0)^2\right]\Bigg\}\,\mathrm{d} x\,. \label{eq:energydiffQ} \end{multline} As with the entropy, the finiteness of this expression is in itself a test of the numerics. First, we plot the entropy difference (\ref{eq:difentropy}) as a function of the energy difference (\ref{eq:energydiffQ}) for both $n=1$ and $n=2$. This comparison is appropriate for a microcanonical ensemble; the solution with the higher entropy dominates. The results are illustrated in the {\it left panel} of Fig.~\ref{fig:comparison2} for $q\,L=1$. We have divided $\Delta S$ by the respective value of $n$, since we want to compare the entropy of two isolated $n=1$ vortices with the entropy of a single vortex with $n=2$. Note that the $n=2$ vortex appears to be always unstable to breaking into two $n=1$ vortices. The same result is obtained in a canonical ensemble when we compare the free energies $F = E - TS$. The {\it right panel} of Fig.~\ref{fig:comparison2} shows a plot of $\delta \Delta F = \Delta F_{n=2}/2-\Delta F_{n=1}$. The fact that this quantity is always positive confirms that the $n=2$ configuration is always unstable towards breaking into two $n=1$ vortices. Thus this holographic superconductor is Type II, in agreement with our study of the Landau-Ginzburg parameter ${\kappa}_{LG}$ above.
We now repeat the analysis for $qL = 2$, shown in Fig.~\ref{fig:comparison}: things have changed, and now the entropies and free energies of the two configurations are very similar. Although the points are very close, we have checked that in both ensembles the $n=2$ vortex is {\it favored} over two $n=1$ vortices. Thus for this value of the scalar charge the vortex is Type I, but is very close to the threshold for the crossover to Type II. This agrees with our expectations from studying ${\kappa}_{LG}$, which for this value of the charge was very close to the critical value $\frac{1}{\sqrt{2}}$. Finally, for $qL = 3$, we have verified that both the entropy and free energy differences are larger than for $qL = 2$, and continue to favor the $n=2$ vortex over the two $n=1$ vortices. This again agrees with our expectations from studying ${\kappa}_{LG}$. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{comparison.pdf} \caption{For $qL = 2$, the $n=2$ vortex is (slightly) favored over two $n=1$ vortices. \emph{Left Panel}: entropy difference (\ref{eq:difentropy}) as a function of $\Delta E/(-\kappa)$ for $q\,L =2$. Disks correspond to $n=1$ and squares to $n=2$. \emph{Right Panel}: the difference in free energies, $\delta \Delta F = \Delta F_{n=2}/2-\Delta F_{n=1}$, as a function of $T/(-\kappa)$.} \label{fig:comparison} \end{figure} We end our discussion of vortex stability with a final comment: we have seen that vortex stability is precisely the distinction between Type I and Type II superconductors. From our analysis it is clear that whether a particular holographic superconductor is Type I or Type II depends on the detailed dynamics, i.e. the non-universal ratio of two different correlation lengths, which appears to be sensitive to (for example) the precise value of the scalar charge.
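In practice, the canonical comparison used above reduces to a one-line sign test on $\delta \Delta F$. A minimal sketch, with made-up free-energy differences standing in for the values computed from our solutions:

```python
def delta_dF(dF_n1, dF_n2):
    """delta dF = dF_{n=2}/2 - dF_{n=1}; positive means two n=1 vortices
    have the lower free energy per unit winding, i.e. the n=2 vortex breaks up."""
    return dF_n2 / 2.0 - dF_n1

# Illustrative numbers only (the paper's values come from the PDE solutions):
dF1 = [-0.10, -0.12, -0.15]   # hypothetical Delta F_{n=1} at three temperatures
dF2 = [-0.16, -0.20, -0.26]   # hypothetical Delta F_{n=2} at the same temperatures
breakup = [delta_dF(a, b) > 0 for a, b in zip(dF1, dF2)]
# here delta dF > 0 at every temperature, i.e. Type II behavior
```

With these placeholder numbers the test returns `True` throughout, mimicking the $qL=1$ (Type II) case; the $qL=2,3$ data instead give a negative $\delta \Delta F$.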
While most of the literature on holographic superconductors states that they are Type II \cite{Hartnoll:2008kx,Domenech:2010nf}, this was originally based on the fact that the scalar condensate starts to condense at a nonzero value of the magnetic field. This was interpreted as $B_{c2}$, the value of the magnetic field in a Type II superconductor below which vortices penetrate the superconductor without destroying it completely. We have checked that in all our examples the scalar condensate indeed starts to condense at a nonzero value of the magnetic field. So it is now clear that this is not a sufficient condition to determine the type of superconductor (it could simply indicate phase separation in a Type I superconductor). Furthermore, studying only a single vortex does not provide enough information to settle this question. One must perform a more detailed comparison of free energies of the sort performed here, and indeed over a wide parameter range we have seen that it is possible for a holographic superconductor to be Type I. In the conclusion we discuss some directions to investigate this further. \subsection{Superfluid vortices} We now turn from superconducting vortices to superfluid ones. Recall that the superfluid vortex differs from the superconducting vortex in that the latter is sourced by a boundary magnetic field while the former has no applied field, but does possess a boundary current $J_{\varphi}$. Thus they differ at the conformal boundary but have the same boundary conditions at the horizon. It is then no surprise that quantities measured at the horizon behave similarly in the two phases. In particular, all of the observables studied in Section \ref{sec:sc_hor} -- involving properties of the superconducting horizon -- are largely the same, and we will not discuss them further. We now turn our attention to physical properties that are unique to the superfluid vortices.
The boundary current can be extracted from the bulk fields as: \begin{equation} J_\varphi(x) = y_+ \,x^2 \,\frac{\partial Q_7(x,0)}{\partial y}\,. \label{eqG:current} \end{equation} Fig.~\ref{fig:SF:magnetic_current} shows the profile of the boundary current $J_\varphi$ as a function of the boundary radius $R$, for several values of the temperature (the {\it left panel} is for $n=1$ while the {\it right panel} is for $n=2$). We see that the current $J_\varphi$ vanishes at the origin ($R=0$) of the superfluid vortex and then, as one moves away from the vortex core, increases monotonically, initially steeply and then flattening out to a constant as $R\to +\infty$. As expected, the largest values of the current are attained far away from the vortex core, and this maximum value decreases as the temperature of the system increases, as illustrated in Fig.~\ref{fig:SF:magJmax}. Increasing $n$ increases the net circulation and the maximum value of the current, as shown in Figs.~\ref{fig:SF:magnetic_current} and \ref{fig:SF:magJmax}. These plots are for $qL=2$. Not shown in these plots is the fact that, for a given temperature $T/(-\kappa)$ and winding number $n$, $J_\varphi{\bigl |}_{\rm max}$ increases as $qL$ grows. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{magnetic_current_e2_n1_n2.pdf} \caption{Boundary current profile of a superfluid vortex as a function of $R$, plotted for several values of $T/(-\kappa)$. The {\it left panel} has $n=1$, and the {\it right panel} $n=2$ (both are for $q L=2$). Here, disks, squares, diamonds, triangles and inverted triangles have $T/(-\kappa) = 0.029,\, 0.370,\,0.495,\,0.546,\,0.571$, respectively.} \label{fig:SF:magnetic_current} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{magnetic_current_max_e2.pdf} \caption{Maximum of the boundary current as a function of $T/(-\kappa)$.
Here, disks and squares describe the superfluid vortex phase with $n=1$ and $n=2$, respectively (both for $qL=2$).} \label{fig:SF:magJmax} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{densities_sf_e2.pdf} \caption{Entropy density ({\it left panel}) and energy density ({\it right panel}) as a function of $R$ for the vortex superfluid phase. Here, disks and squares describe, respectively, isolated vortices with $n=1$ and $n=2$ (both for $qL=2$). At large $R$, both densities decay polynomially as $1/R^2$ as described by the dashed curves that give the best fit of the asymptotic tails. For example, for $n=1$ one finds the fit $\Delta s/\left(-\kappa \right)^2=A_0/R^{\alpha }$ with $\{\alpha \sim 2.006\pm 0.001,\: A_0\sim 0.0040\pm 0.0001\}$ and $\Delta {\mathcal{E}}/\left(-\kappa \right)^3=B_0/R^\beta$ with $\{ \beta \sim 2.005\pm 0.001,\: B_0\sim 0.0602\pm 0.0002\}$.} \label{fig:SF:SEdensities} \end{figure} We now turn to the thermodynamics, i.e. the entropy, energy and Helmholtz free energy. Before discussing our gravitational results, we briefly recall the expectations from field theory. A vortex in a conventional superfluid has an energy that logarithmically diverges with the system size. Recall that the low-energy dynamics of a superfluid is given by the action for a Goldstone mode ${\theta}$: \begin{equation} S = \rho_s \int d^3 x\;(\nabla {\theta})^2 \ . \label{goldac} \end{equation} A vortex with charge $n$ has ${\theta}(R \to \infty) \sim n \varphi$ with $\varphi$ the azimuthal angle around the vortex. Evaluating the energy following from \eqref{goldac} on such a configuration, we find \begin{equation} E \sim \rho_s \int dR \frac{1}{R} n^2 \sim \rho_s n^2 \log\left(\frac{R_{\max}}{a_0}\right), \label{enIRdiv} \end{equation} where $a_0$ is the vortex core size and $R_{\max}$ an IR cutoff. This is a standard result.
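For completeness, the intermediate step is elementary: with ${\theta} = n\varphi$ the gradient is purely azimuthal, $|\nabla {\theta}| = n/R$, so that \begin{equation} E \sim \rho_s \int d^2x\, (\nabla {\theta})^2 = \rho_s \int_{a_0}^{R_{\max}} 2\pi R\, \mathrm{d} R\, \frac{n^2}{R^2} = 2\pi \rho_s n^2 \log\left(\frac{R_{\max}}{a_0}\right)\,, \end{equation} in agreement with \eqref{enIRdiv} up to the overall normalization absorbed into $\rho_s$.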
Perhaps slightly less obvious is the fact that the first law of thermodynamics $dE = T dS$ implies that at finite temperature this IR divergent energy is accompanied by an IR divergent {\it entropy}. One way to understand this is to note that at finite temperature the current $J_{\varphi}$ will contain a normal component, which falls off slowly in space and carries an associated thermal entropy. We now return to our gravitational description and compute the bulk energy density difference $\Delta {\cal E}$ \eqref{eq:energydiffQ} and entropy density difference $\Delta s$ \eqref{eq:difentropy} from our bulk gravitational solution. As expected from the discussion above, both of these quantities decay only as $R^{-2}\sim(1-x)^2$ at large boundary radius $R$, as shown in Fig.~\ref{fig:SF:SEdensities}. The volume integrals of both these densities diverge logarithmically at large $R$, as expected.\footnote{Recall that in the superconducting phase $\Delta S$ and $\Delta E$ are finite because the corresponding densities $\Delta s$ and $\Delta {\cal E}$ have an asymptotic exponential decay.} The entropy density difference has a temperature-dependent coefficient, $\Delta s \sim f(T) n^2 /R^2$, and we have verified that the coefficient $f(T)$ vanishes as $T \to 0$. This is expected: it is the thermally excited normal component of the current that is contributing to the IR divergence, and this entropy should indeed vanish as $T \to 0$.\footnote{Note that it is not obvious that the normal component itself should vanish -- defining this precisely in a holographic superfluid is tricky, but there are indications from \cite{Horowitz:2013jaa} that this normal component does not vanish at $T = 0$.
We are simply stating that the entropy that it carries vanishes.} At precisely $T = 0$ we expect the entropy of the superfluid vortex to be equal to that of the superconducting vortex, with both answers equal to the impurity entropy arising from the scaling solution, but the IR divergence makes it very difficult to check the approach to this limit. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{ratioenergydens_e2e1.pdf} \caption{Ratio of the local energy density of the $n=2$ vortex to the $n=1$ vortex at $T/(-{\kappa}) = 0.12$. Red triangles indicate $qL = 2$, black inverted triangles $qL = 1$. As expected, at long distances an IR divergence of the form \eqref{enIRdiv} dominates, scaling with the vortex charge as $n^2$.} \label{fig:SF:ratioenerg} \end{figure} We have checked that the coefficients of these IR divergences depend on the vortex winding charge as expected from \eqref{enIRdiv} (see e.g. Fig.~\ref{fig:SF:ratioenerg}). Note that these IR divergences make the question of vortex stability somewhat different in a superfluid as opposed to a superconductor: as both the energy and entropy are dominated by the IR divergence, which scales with the vortex charge as $n^2$, one concludes that any high-charge vortex should want to dissociate into vortices with the minimal charge $n = 1$, which will then feel a mutual long-range repulsive force, independent of the details of the dynamics. This stability result is in line with the time evolution study of \cite{Adams:2012pj}, where (in a different setup) it was found that holographic superfluid vortices with high winding charge, introduced in the system as initial data, rapidly decay into $n=1$ superfluid vortices.
\section{Forces on a moving vortex from conformal invariance} \label{sec:confforce} We have presented a detailed discussion of the properties of a vortex in a holographic superfluid/superconductor; as we emphasized at various points, many of the facts that we report can be usefully organized by realizing that at low energies the vortex can be viewed as a conformal defect, with a CFT$_1$ living on it. In this section we switch gears and use the defect conformal invariance to compute the forces on a moving vortex in terms of universal data. In particular, we show that there exist Kubo formulas for these forces in terms of defect-localized operators. This section does not use our gravitational description in any way, and should apply to any situation where a vortex coexists with conformal invariance. As described before, the vortex worldline hosts a CFT$_1$, which may be characterized by the spectrum of operators living on the defect. The full spectrum of operators depends on the theory in question, but every defect has at least a {\it displacement operator} $D^i$. Adding $D^i$ to the full CFT action corresponds to shifting the location of the defect. It is thus intimately related to the breaking of translational symmetry; on $\mathbb{R}^{2,1}$ the following Ward identity is satisfied: \begin{equation} \p_{\mu} T^{\mu i} = D^i \delta^{(2)}(x)\,. \label{wardident} \end{equation} Note that this relation fixes the dimension of $D^i$ to be $2$, and the correlation function of $D^i$ then takes the form \begin{equation} \langle D^i(t) D^j(0) \rangle = \frac{C_D \delta^{ij}}{t^4} \,.\label{ddcorr} \end{equation} As \eqref{wardident} fixes the normalization of $D^i$, $C_D$ is a meaningful and universal number characterizing the defect. 
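The dimension counting behind this statement is simple: in $d=3$ spacetime dimensions the stress tensor has dimension $3$, so the left-hand side of \eqref{wardident} has dimension $4$, while $\delta^{(2)}(x)$ carries dimension $2$; hence \begin{equation} [D^i] = [\partial_\mu T^{\mu i}] - [\delta^{(2)}(x)] = 4 - 2 = 2\,, \end{equation} which is what fixes the $1/t^4$ falloff in \eqref{ddcorr}.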
Now in general, if a vortex with circulation $ \hat{{\kappa}}$ in any superfluid is moved through the medium at finite temperature with velocity $v$, it will experience a force whose most general form is \begin{equation} F_i = \hat{{\kappa}} \left(-\ga v_i + \rho_M \ep_{ij} v^j\right) \equiv \sigma_{ij} v^j \,. \end{equation} There are two components: a diagonal frictional force parametrized by $\gamma $ and a transverse force -- called the Magnus and/or Iordanskii force -- parametrized by $\rho_M$. The precise nature of these forces in a conventional superfluid is a matter of some controversy: in particular the coefficient $\rho_M$ is thought to be related to a combination of the superfluid and normal fluid densities, but the precise combination remains somewhat uncertain, with different arguments giving different results \cite{PhysRevB.55.485,PhysRevLett.79.1321,PhysRevLett.76.3758}. In our case both of these densities are zero and the transverse force identically vanishes, so we will have little to say about this. These forces were computed for a holographic superfluid vortex in a probe limit at high temperature in \cite{Iqbal:2011bf}. In this work we will study the opposite low-temperature limit. In the defect CFT formalism there is an elegant expression for these forces. Consider a general defect moving with velocity $v^i$. This corresponds to deforming the CFT by the displacement operator $D^i$ with a time-dependent coefficient: \begin{equation} \delta S_{CFT} = \int dt D^i(t) (v_i t) \ .\label{defst} \end{equation} We would now like to calculate the force on the vortex. This force is simply the non-conservation of the stress-tensor in the presence of the moving vortex, and is easily found from the Ward identity \eqref{wardident} \begin{equation} F^i = \langle \p_t T^{ti} \rangle_v = \langle D^i \rangle_v \delta^{(2)}(x^i - v^i t) \ . 
\end{equation} Thus we simply need to compute the expectation value $\langle D^i \rangle_v$ in the deformed state given by \eqref{defst}. This is a problem in linear response; to lowest order in $v$ the answer is simply given by the retarded correlator of $D^i$, which we can express in frequency space as \begin{equation} \langle D^i(\om) \rangle_v = \frac{\langle D^i(\om) D^j(-\om) \rangle}{i\om} \, v_j \ . \end{equation} We have thus identified a Kubo formula for the force tensor $\sigma_{ij}$: \begin{equation}\label{eq:twoD} \sigma^{ij} = \lim_{\om \to 0} \frac{\langle D^i(\om) D^j(-\om) \rangle}{i\om}\,, \end{equation} where it is understood that we are evaluating a retarded correlator. This is one of the main results of this section. We now turn to the computation of this two-point function. The dimension of $D^i$ is fixed to be $2$, and at zero temperature we have \begin{equation} \langle D^i(\om) D^j(-\om) \rangle_{T=0} = \frac{C_D\pi}{3} \,\delta^{ij}(-i\om)^3\,. \end{equation} The overall prefactor is obtained from the Fourier transform of the position-space correlator \eqref{ddcorr}. The $\om \to 0$ limit of this vanishes, as expected: at zero temperature the CFT state is Lorentz invariant, and so a vortex moving at constant speed does not know it is moving. At finite temperature $T \neq 0$ the situation is different. Generically at finite temperature we expect nontrivial spectral weight as $\om \to 0$, i.e. if we expand the answer in powers of $\om$ we expect an answer of the form \begin{equation}\label{eq:twoDT} \lim_{\om \to 0}\langle D^i(\om) D^j(-\om) \rangle_{T} = \frac{\tilde{C}_D \pi}{3}(2\pi T)^2 (i\om) \delta^{ij}+ \sO(\om^2)\,, \end{equation} where $\tilde{C}_D$ is a coefficient that we expect to be related to $C_D$ and which depends on the theory in question. Finite temperature correlators of CFT$_1$ operators respecting $SL(2,\mathbb{R})$ invariance have been previously calculated in \cite{Faulkner:2011tm,Faulkner:2009wj}.
Those results are entirely fixed by conformal invariance: there is a transformation of the time coordinate that can be used to place the $T=0$ CFT$_1$ at finite temperature, and the full finite $T$ correlator can be obtained from the known conformal transformation of the vacuum CFT$_1$ operators. However, that transformation has a nontrivial action on the fields outside of the defect, placing them in a different state that is not obviously equivalent to the thermal state, and thus those results do not appear to immediately apply. It would be useful to understand if that formalism could be extended to this case; this would allow an explicit calculation of $\tilde{C}_D$ in terms of $C_D$. From \eqref{eq:twoD} and \eqref{eq:twoDT}, the force tensor $\sigma^{ij}$ is simply \begin{equation} \sigma^{ij} \equiv \sigma \delta^{ij} = \frac{4 \pi^3}{3}\tilde{C}_D T^2 \delta^{ij} \ . \label{fric} \end{equation} This force represents the frictional drag on the vortex as we drag it through the excited medium. It is interesting to compare this to the drag force on a moving vortex in an ordinary (non-holographic) superfluid, which as we argued earlier is essentially empty of excitations at low energies. At a temperature $T$ there is a gas of thermally excited Goldstone modes. By scale invariance the momentum density perceived by the moving vortex from these modes is $\langle T^{0i} \rangle \sim T^3 v$. Each of these modes has a cross section $\sim T a_0^2$ for interaction with the vortex, where $a_0$ is the radius of the core \cite{PhysRev.136.A1488,PhysRevB.55.485}. Thus the drag coefficient in a conventional superfluid scales as $\sigma \sim T^4 a_0^2$, a higher power of $T$ than that arising from \eqref{fric}. This extra suppression is due to the existence of the UV scale $a_0$ in the answer: unlike the pure conformal answer \eqref{fric}, which contains no other scales, this frictional force arises from a leading ``irrelevant'' deformation to an otherwise empty theory.
We note that knowledge of this force lets us immediately compute the Nernst effect arising from a dilute gas of these vortices. We briefly review the physics of the Nernst effect: consider taking a superconductor and applying a magnetic field $B$ into the sample together with a temperature gradient along the $x$ direction. In general this will set up an electric field $\vec{E}$ perpendicular to the temperature gradient; the Nernst signal is defined to be \begin{equation} e_N = \frac{E}{|\nabla T|}\,. \end{equation} We now compute this in our setup. The magnetic field can only be carried by vortices, each of which carries a flux $\frac{2\pi}{q}$; thus we find a vortex density $n = \frac{q B}{2\pi}$. Furthermore, in a temperature gradient each vortex will feel an entropic force, arising from the fact that it has an intrinsic entropy: \begin{equation} F^i_{thermal} = s_{tr} \p_i T\,. \end{equation} In general the coefficient $s_{tr}$ is called the ``transport entropy''. In the conformal setting it would be very interesting to understand the precise relation between this transport entropy and the defect entropy defined above; one is tempted to speculate that they are equal, but we do not know of a proof and for now we take it to be a free parameter. This force will cause the vortices to drift; in steady state it must be balanced against the frictional force calculated above, leading to a velocity of \begin{equation} v = \frac{s_{tr}|\nabla T|}{\sigma} \ . \end{equation} As each vortex moves across the sample in the $x$ direction it causes a phase slip of $\frac{2\pi}{q}$, generating a voltage difference in the $y$ direction via the Josephson effect. Expressing the vortex density in terms of the magnetic field, we find the Nernst signal to be \begin{equation} e_N = \frac{3 B s_{tr}}{4\pi^3 \tilde{C}_D T^2}\,. \end{equation} We anticipate further applications of this formalism.
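As a consistency check of the algebra above, one can chain the individual steps (drag coefficient, force balance, vortex density, Josephson relation) numerically and compare with the closed-form Nernst signal; the numerical inputs below are arbitrary placeholders:

```python
import math

def nernst_from_steps(B, s_tr, C_D_tilde, T, q):
    """Follow the steps in the text, working per unit |grad T|."""
    sigma = (4.0 * math.pi**3 / 3.0) * C_D_tilde * T**2  # drag coefficient, eq. (fric)
    v = s_tr / sigma                 # steady-state drift velocity per unit |grad T|
    n_v = q * B / (2.0 * math.pi)    # vortex density, flux 2*pi/q per vortex
    return n_v * (2.0 * math.pi / q) * v   # transverse field E per unit |grad T|

def nernst_closed_form(B, s_tr, C_D_tilde, T):
    """Closed-form expression quoted in the text."""
    return 3.0 * B * s_tr / (4.0 * math.pi**3 * C_D_tilde * T**2)

# arbitrary placeholder values for B, s_tr, C_D_tilde, T and the charge q
e1 = nernst_from_steps(0.7, 1.3, 0.9, 0.5, q=2.0)
e2 = nernst_closed_form(0.7, 1.3, 0.9, 0.5)
```

The two routes agree identically, since the charge $q$ cancels between the vortex density and the flux per vortex.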
\section{Discussion} \label{sec:concl} This has been a somewhat long journey, so we now summarize our results. We have presented an in-depth study of vortices in holographic superfluids and superconductors. We argued that the infrared physics can be usefully understood in the framework of defect CFT, which is elegantly geometrized by a $T=0$ near-horizon scaling solution that describes a new kind of extremal black hole horizon: a Poincar\'e horizon with a bubble of Reissner-Nordstr\"{o}m horizon that carries a single unit of flux. We further solved the partial differential equations that capture the physics in the full UV geometry at finite temperatures, demonstrating that the low-temperature limits of various observables tend to the values obtained from the $T = 0$ scaling solution. The embedding into the UV geometry allowed us to study the thermodynamics of vortices in detail. One novel result is that with superconducting boundary conditions, the thermodynamic stability of an $n = 2$ vortex as compared to two $n=1$ vortices can change, being unstable for small values of the bulk scalar charge $q$, but stable for larger values of $q$. This behavior is correlated with whether the superconductor is Type I or Type II, which should itself be reflected in the ratio of the London penetration depth $\lam$ and the coherence length $\xi$ of the dual superconductor, an expectation that we confirm. Thus we conclude that holographic superconductors may be Type I over a range of parameters. Finally, we turned away from the gravitational description and discussed the forces on a moving conformal vortex. We demonstrated that there are simple expressions for these forces in terms of Kubo formulas of defect-localized operators. We note that some of the lessons from this analysis may be useful beyond the study of vortices.
There has been a recent surge of activity in studying holographic systems with either explicit or spontaneous breaking of spacetime symmetries, by the addition of a lattice, vortices, stripes, etc. In many cases this symmetry breaking turns out to be irrelevant in the infrared. However sometimes -- for example in the current case, where the existence of a conserved magnetic flux guarantees that the inhomogeneity is transferred to the infrared -- this is not the case, and the low energy physics is strongly affected. We expect the notion of a conformal defect to be very useful in analyzing such situations and organizing the infrared behavior. Our study suggests several directions for future research, which we discuss below: \begin{enumerate} \item One result from our analysis is that holographic superconductors {\it can} -- depending on the parameters -- be Type I. A more precise characterization of this property is possible. The key distinction between Type I and Type II superconductors is that the sign of the energy of the domain wall between the normal phase (with applied magnetic field) and the superconducting phase (with field expelled) is positive for Type I and negative for Type II. This can be studied directly in a holographic context by studying a different set of boundary conditions at the conformal boundary: \begin{align} F_{x_1 x_2}(x_1 \to -\infty) & = F_0, \qquad \langle \sO(x_1 \to -\infty) \rangle = 0\,, \\ F_{x_1 x_2}(x_1 \to +\infty) & = 0, \qquad \langle \sO(x_1 \to + \infty) \rangle = \langle \sO_0 \rangle \ . \end{align} This will precisely create the domain wall in question, and its energy can now be studied explicitly. \item We have worked with zero chemical potential and induced our scalar field to condense at low temperature by adding a double trace deformation. 
It would be interesting to repeat our analysis of vortex stability for the more standard holographic superconductor, which starts with nonzero chemical potential and does not need a deformation. It is not obvious that the results will be similar, since in the standard approach, increasing $q$ makes it easier for the scalar to condense, whereas here, increasing $q$ (in the presence of a magnetic field) makes it harder to condense. \item In the regime where the superconductor is Type II, it is now natural to ask about a lattice of such vortices. A perturbative construction of such lattices has been initiated in \cite{Maeda:2009vf,Bao:2013fda,Bao:2013ixa}. While the explicit construction of such a lattice at zero temperature is difficult, armed with the results of this paper we may speculate about the ground state. From the scaling solution constructed in Section \ref{sec:confdef} we know that at zero temperature, even in the far infrared each vortex occupies a finite amount of proper cross-sectional area. Thus for reasonably large lattice spacings we expect to simply have a regular lattice of vortices in the infrared, where each vortex is separated from the rest by regions of superconducting phase. While a single vortex preserves a near-horizon $SO(2,1) \times SO(2)$, this lattice will preserve only an infrared $SO(2,1)$. The construction of this near-horizon geometry remains an open problem. \item In our holographic model the vortex could be interpreted as a conformal defect because superfluid (or superconducting) order coexisted with a conformal sector down to arbitrarily low energies. It is of interest to understand whether this can happen in models with a more conventional UV description, e.g. a lattice Hamiltonian with short-range interactions. One such model where we do expect such a structure is in the $\mathbb{Z}_2$ fractionalized superconductor of \cite{PhysRevB.62.7850}. 
In that work two phases in $2+1$ dimensions called $SC$ and $SC^{\star}$ are described: both of them exhibit superconducting order but $SC^{\star}$ also supports a deconfined $\mathbb{Z}_2$ gauge field. Both phases are themselves gapped, containing a variety of heavy vortex and quasiparticle excitations. However, they are separated by a continuous quantum phase transition in which the $\mathbb{Z}_2$ gauge field confines. This transition is in the 3d Ising universality class, and precisely at the critical point we expect that the vortex excitation in this model will flow to a conformal defect of the 3d Ising model. We expect our discussion in Section \ref{sec:confforce} to apply to this model, and in fact the properties of the Ising conformal defect in question have only recently been studied in \cite{Billo:2013jda}. It would be very interesting to understand other cases where this conformal vortex phenomenology could be applied. \item Finally, there is another way to interpret our results. Consider performing an S-duality in the bulk to re-interpret our calculation as the condensation of a {\it magnetically} charged scalar field. The bulk gauge field is now {\it confined} rather than Higgsed, and it is now electric flux that is confined to tight flux tubes, one of which we have constructed. It turns out that in the language of this paper the appropriate boundary conditions are those that we have labeled ``superconducting'': thus the bulk flux is allowed to penetrate the AdS boundary. Where each flux tube intersects the boundary it may now be interpreted as a heavy point charge. Thus the S-dual interpretation of our calculation is a state with a charge gap, or an {\it insulator} \cite{Faulkner:2012gt,Sachdev:2012tj}. Note that the quantization of electric charge is crucial to truly describe a phase as an insulator: in this S-dual construction this quantization arises from the fact that magnetic flux is quantized in each vortex.
This is a novel kind of insulator, as electric charges are gapped, but a neutral sector remains gapless. Our work in this paper amounts to the careful construction of a single gapped electric charge in this novel charge-gapped phase. This is a starting point towards an understanding of insulating phases, but much remains to be done. For example, it would be very interesting to try to construct a phase containing a finite {\it density} of flux, rather than a single unit of flux as studied here. In the Type I phase this would correspond after S-duality to a phase separated system containing macroscopic regions of ``insulator'' coexisting with phases of metal. In the Type II phase we would find a lattice of vortices, which would map to a Wigner crystal of charges. In this discussion we assume that we allow the charges to adjust their own spacing dynamically: if we instead impose a UV lattice periodicity by hand then we only expect to find an insulating phase when a commensurability condition is met, i.e. we require an integer number of quantized charges per lattice site. It would be quite interesting to understand such phases and the transitions to nearby metallic phases in more detail. \end{enumerate} Clearly, there remain many open questions. We can look forward to new insights as the holographic approach pursued here continues to illuminate the physics of strongly correlated states of matter. \begin{acknowledgments} It is a pleasure to thank M.~Fisher, T.~Grover, S.~Hartnoll, K.~Jensen, S.~Sachdev and C.~H.~Yee for helpful discussions. This work was supported in part by the National Science Foundation under Grant No. PHY12-05500 and Grant No. PHY11-25915. J.E.S.'s work is partially supported by the John Templeton Foundation. 
\end{acknowledgments} \begin{appendix} \section{Boundary conditions along the axis $x=0$} \label{app:bc} These boundary conditions are best understood if we first introduce the following radial variable: \begin{equation} R=\frac{x}{1-x}\,, \end{equation} in terms of which the line element (\ref{eq:ansatzmetric}) reduces to \begin{multline} \mathrm{d} s^2 = \frac{L^2}{y^2}\Bigg\{-Q_1\,y_+^2 (1-y^3)\mathrm{d} t^2+\frac{Q_2\,\mathrm{d} y^2}{1-y^3}+\\ y_+^2Q_4\left[\mathrm{d} R+\frac{y^2\,R\,Q_3}{(1+R)^2}\,\mathrm{d} y\right]^2+y_+^2Q_5\,R^2\,\mathrm{d} \varphi^2\Bigg\}. \label{eq:ansatzmetricR} \end{multline} In addition, we also introduce cartesian coordinates \begin{equation} R=\sqrt{\tilde{x}^2_1+\tilde{x}^2_2},\quad\text{and}\quad \varphi = \arctan\left(\frac{\tilde{x}_2}{\tilde{x}_1}\right)\,. \end{equation} Recall that we want to ensure regularity at the axis $R=0$, i.e. that the metric functions, the scalar field and the gauge field are all regular in cartesian coordinates $(\tilde{x}_1,\tilde{x}_2)$. Let us first expand the line element (\ref{eq:ansatzmetricR}): \begin{multline} \mathrm{d} s^2 = \frac{L^2}{y^2}\Bigg\{-Q_1\,y_+^2 (1-y^3)\mathrm{d} t^2+\frac{Q_2\,\mathrm{d} y^2}{1-y^3}+y_+^2(Q_4\,\mathrm{d} R^2+Q_5\,R^2\,\mathrm{d} \varphi^2) \\+\frac{2\,y^2\,y_+^2\,Q_3\,Q_4(R\,\mathrm{d} R)\,\mathrm{d} y}{(1+R)^2}+\frac{R^2}{(1+R)^4}\,y^4\,y_+^2\,Q_4\,Q_3^2\,\mathrm{d} y^2\Bigg\}\,. \end{multline} We can now read off the desired boundary conditions. First, the term $R\,\mathrm{d} R$ in the second line is a regular one-form in cartesian coordinates, being equal to $\tilde{x}_1\mathrm{d} \tilde{x}_1+\tilde{x}_2\mathrm{d} \tilde{x}_2$. Second, the third term in the first line is only regular if $Q_4=Q_5$ at $R=0$.
With the above considerations, the line element close to $R=0$ reduces to \begin{multline} \mathrm{d} s^2 \approx \frac{L^2}{y^2}\Bigg\{-Q_1\,y_+^2 (1-y^3)\mathrm{d} t^2+\frac{Q_2\,\mathrm{d} y^2}{1-y^3}+y_+^2Q_4(\mathrm{d} \tilde{x}^2_1+\mathrm{d} \tilde{x}^2_2) \\+2\,y^2\,y_+^2\,Q_3\,Q_4(\tilde{x}_1\,\mathrm{d} \tilde{x}_1+\tilde{x}_2\,\mathrm{d} \tilde{x}_2)\,\mathrm{d} y+(\tilde{x}^2_1+\tilde{x}^2_2)\,y^4\,y_+^2\,Q_4\,Q_3^2\,\mathrm{d} y^2\Bigg\}\,. \end{multline} The remaining boundary conditions follow from noting that smooth functions of $(\tilde{x}_1,\tilde{x}_2)$, close to the cartesian origin, can only be functions of $\tilde{x}^2_1+\tilde{x}^2_2=R^2$, which translates into Neumann boundary conditions in $R$ for all the remaining \emph{metric functions}, \emph{complex scalar} and \emph{gauge field}. Finally, we need to rewrite these in terms of the original variable $x$. To summarize, the boundary conditions at the axis read \begin{align} &\left.\frac{\partial Q_1}{\partial x}\right|_{x=0}=\left.\frac{\partial Q_2}{\partial x}\right|_{x=0}=\left.\frac{\partial Q_4}{\partial x}\right|_{x=0}=\left.\frac{\partial Q_5}{\partial x}\right|_{x=0}=0\,,\quad Q_4(0,y)=Q_5(0,y)\,,\nonumber \\ \\ &\left.\frac{\partial Q_3}{\partial x}\right|_{x=0}=2\,Q_3(0,y)\,,\quad\left.\frac{\partial Q_6}{\partial x}\right|_{x=0}=n\,Q_6(0,y)\quad \text{and}\quad\left.\frac{\partial Q_7}{\partial x}\right|_{x=0}=2\,Q_7(0,y)\,.\nonumber \end{align} \end{appendix}
\section{Introduction} The interest towards the study of non-Hermitian Hamiltonians (NHH) has grown exponentially in the last decades and is still growing. This is due not only to the applications they have in many different fields of Physics \cite{Moiseyev}, but also to the relevant role they play in better understanding and developing fundamental aspects of Quantum Mechanics. To appreciate this point it is enough to consider that particular closed systems may often be described by a non-Hermitian Hamiltonian invariant under a space-time inversion (PT-symmetry), implying in turn the idea of a possible extension of Quantum Mechanics \cite{Bender,Mostafazadeh}. Many decades ago Feshbach employed for the first time non-Hermitian Hamiltonians to represent effectively the coupling between a discrete level and a continuum of states of a given quantum system \cite{Feshbach}. Such an approach is still largely adopted nowadays to bring to light several noteworthy physical aspects of open quantum systems \cite{Peng}, such as phase transitions and exceptional points \cite{Rotter}. An effective non-Hermitian Hamiltonian is characterized by a secular equation with real coefficients, thus giving rise either to real eigenvalues or to pairs of complex-conjugate eigenvalues \cite{Sternheim}. Such a property guarantees that a non-Hermitian Hamiltonian belongs to the class of pseudo-Hermitian operators, provided it is diagonalizable and possesses a discrete spectrum \cite{Mostafazadeh}. This fact paved the way to significant research on this specific kind of non-Hermitian Hamiltonians \cite{Scolarici}, whose physical implementations may be found in different contexts, like optical microspiral cavities \cite{Wiersig}, microcavities perturbed by particles \cite{Wiersig1}, or the modelling of light propagation in a perturbed medium \cite{Yariv,Boyd}. 
In this paper we investigate the dynamical problem of a two-level system described by a time-dependent pseudo-Hermitian su(1,1) Hamiltonian. Such nonautonomous systems have rarely been studied in the context of pseudo-Hermitian dynamics. As we show, they may be of experimental interest, and one of our aims is to find special classes of new exactly solvable cases. The reason why we concentrate mainly on the dynamics of a two-level system stems from the fact that the dynamical problem of an $N$-level system characterized by an $su$(1,1) Hamiltonian may always be traced back to that of a two-level system \cite{Ellinas}. This implies that we may construct the solution of the $N$-level system from that of the related two-level system \cite{Ellinas}. Furthermore, we know that in conventional quantum mechanics a variety of complicated quantum-mechanical problems can be reduced to a two-level model \cite{Feynman}. In many contexts, for example nuclear magnetic resonance \cite{Abragham}, quantum information processing \cite{NC} and polarization optics \cite{Born}, essential changes in the system may be described in terms of a two-state dynamics. The interest in $su(1,1)$-symmetric dynamical problems stems from the fact that many physical scenarios exhibit such a symmetry in their Hamiltonian operators. For example, the dynamics of an $N=2j+1$-level atom in a cascade coupling with a laser beam of time-dependent intensity in the resonance condition (vanishing detuning) is characterized by a time-dependent Hamiltonian embedded in the $su$(1,1) algebra \cite{Ellinas}. Another important $su(1,1)$ physical scenario may be identified in the treatment of squeezed states of the electromagnetic field and the scattering of projectiles from simple diatomic molecules \cite{Gilmore}. These kinds of physical systems, indeed, possess a matrix group structure presenting subdynamics with an $su(1,1)$-symmetry form. 
Moreover, a connection between $PT$-symmetric and $su(1,1)$-symmetric Hamiltonians may be easily found. The most general $2\times 2$ null-trace matrix representing a Hamiltonian that meets all the conditions of $PT$ quantum mechanics has indeed the following form \cite{Jones-Smith} \begin{equation}\label{2x2 PT Matrix} \begin{pmatrix} \alpha & i \beta \\ i \beta & - \alpha \end{pmatrix}, \end{equation} ($\alpha$ and $\beta$ are real) and this class of non-Hermitian matrices is a particular sub-class of the wider class of $su(1,1)$-symmetry matrices. An important application of $PT$-symmetric Hamiltonians is found in the study and description of the dynamics of the so-called gain and loss systems \cite{Rotter,Croke}, which may be encountered and realized in different physical contexts. These physical systems exhibit several interesting properties. In particular, they can present a phase transition related to $PT$-symmetry breaking \cite{Ruter,Liertzer,Bittner,Schindler,Tripathi}. In these works, specifically, emphasis is placed on how the phase transition may be governed experimentally by manipulating the gain and loss parameters, and on how it can be traced back to the fact that the energy spectrum turns from real to complex. One may, therefore, wonder what happens if the parameter(s) governing the reality (and/or complexity) of the spectrum, and so the symmetry phase of the Hamiltonian, are time-dependent. From a theoretical point of view, much effort is still necessary for a complete comprehension and a unifying description of the dynamics related to time-dependent non-Hermitian Hamiltonians. Quite recently, proposals and investigations of fundamental issues have been put forward \cite{Sergi1,Sergi2,Sergi3,Graefe} and important physical aspects of time-dependent non-Hermitian Hamiltonians have been brought to light \cite{Simeonov, Torosov}. 
However, very few attempts are present in the literature concerning the identification of classes of exactly solvable scenarios for physical systems described by time-dependent non-Hermitian Hamiltonians. The paper is organized as follows. Section \ref{Exact Solutions} is dedicated to the presentation of the mathematical approach to solve the Schr\"odinger equation associated with an su(1,1) Hamiltonian model. The class of exactly solvable su(1,1) problems is reported in the same section, together with the analysis of the corresponding dynamical solutions. In Sec. \ref{Phys App} the usefulness of our results is illustrated by exactly treating a classical and a quantum problem. Conclusions and remarks are contained in the final section. \section{Identification of classes of solvable models and their exact solutions} \label{Exact Solutions} The group $SU(1,1)$ is not compact and as such it does not have finite-dimensional unitary representations. Its lowest-dimensional faithful matrix representation consists of the set of all $2\times 2$ unit-determinant complex matrices $U$ satisfying the relation \begin{equation} \hat{\sigma}^{z} U^{\dagger} \hat{\sigma}^{z} = U^{-1}, \end{equation} $\{\hat{\sigma}^x,\hat{\sigma}^y,\hat{\sigma}^z\}$ being the standard Pauli matrices. Generators of this non-unitary representation (i.e. a basis of the corresponding representation of the $su(1,1)$ algebra) can be chosen as \begin{equation} \hat{K}^{0}= {\hat{\sigma}^{z}\over 2}, \quad \hat{K}^{1}= -i{\hat{\sigma}^{y}\over 2}, \quad \hat{K}^{2}= i{\hat{\sigma}^{x}\over 2}. \label{GSU11} \end{equation} They satisfy the relations \cite{Klimov} \begin{equation} \lbrack \hat{K}^{1},\hat{K}^{2}]=-i \hat{K}^{0},\quad \lbrack \hat{K}^{1},\hat{K}^{0}]=-i \hat{K}^{2}, \quad \lbrack \hat{K}^{2},\hat{K}^{0}]=i \hat{K}^{1}. 
\end{equation} A $t$-dependent (in general, $t$ is a generic parameter) null-trace $2\times 2$ $su$(1,1) matrix is a linear combination, with real $t$-dependent coefficients $\omega_0(t), \omega_1(t)$ and $\omega_2(t)$, of the generators $\hat{K}^0$, $\hat{K}^1$ and $\hat{K}^2$, namely \begin{equation} H(t)=\omega_0(t)\hat{K}^{0}+\omega_1(t)\hat{K}^{1}+\omega_2(t)\hat{K}^{2}. \end{equation} In terms of Pauli matrices it can be cast as \begin{equation} \begin{aligned} H(t)&=\Omega(t)\hat{\sigma}^{z} + i\omega_x(t)\hat{\sigma}^{x} - i\omega_y(t)\hat{\sigma}^{y} \\ &=\Omega(t)\hat{\sigma}^{z}-\omega(t)\hat{\sigma}^{+} + \omega^{\ast}(t)\hat{\sigma}^{-}, \end{aligned} \end{equation} where, conventionally, $\hat{\sigma}^{\pm}=(\hat{\sigma}^{x}\pm i \hat{\sigma}^{y})/2$ and $\omega(t)$ is a complex parameter defined by $\omega(t)=\omega_y-i\omega_x\equiv|\omega(t)|e^{i\phi_\omega(t)}$, and $\Omega(t)=\omega_0(t)/2, \omega_x(t)=\omega_2(t)/2, \omega_y(t)=\omega_1(t)/2$. In this way, in the basis of $\hat{\sigma}^{z}$, the matrix representation of a general non-Hermitian operator $H(t)$ belonging to the $su(1,1)$ algebra results as \begin{equation}\label{2x2 SU(1,1) Matrix} H(t)= \begin{pmatrix} \Omega(t) & -\omega(t)\\ \omega^*(t) & -\Omega(t) \end{pmatrix}. \end{equation} From Eq. \eqref{2x2 PT Matrix} we see that the subclass of $PT$-symmetric $su$(1,1) Hamiltonians is identified by $\phi_\omega=\pi/2$ or equivalently by $\omega_y=0$. It is important to underline that $su(1,1)$-symmetric Hamiltonians are pseudo-Hermitian, that is, by definition \cite{Mostafazadeh}, there exists at least one non-singular Hermitian matrix $\eta$ such that \begin{equation}\label{PH rel} H^\dagger(t)=\eta H(t) \eta^{-1}. \end{equation} It is easy to see that the simplest matrix satisfying condition \eqref{PH rel} is \begin{equation} \eta=\hat{\sigma}^z= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. 
\end{equation} A noteworthy result is that a diagonalizable operator is pseudo-Hermitian if and only if its eigenvalues are either real or grouped in complex-conjugate pairs \cite{Mostafazadeh}. This fact is physically relevant since it turns out to be the feature possessed by the non-Hermitian Hamiltonians resulting from the procedure provided by Feshbach \cite{Feshbach} to describe effectively a quantum system with a discrete spectrum coupled to a continuum. Pseudo-Hermitian Hamiltonians thus turn out to be very important in the study of open quantum systems \cite{Rotter,Sergi1,Sergi2,Sergi3,Graefe}, succeeding in describing particular experimentally detectable physical aspects \cite{Rotter,Ruter,Liertzer,Bittner,Schindler,Tripathi}. The $t$-parameter-dependent spectrum of $H(t)$ reads $E_\pm(t)=\pm\sqrt{\Omega^2(t)-|\omega(t)|^2}$, hence it is real under the condition $|\omega(t)|^2 < \Omega^2(t)$. The reality of the spectrum is a necessary and sufficient condition for $H(t)$ to be quasi-Hermitian \cite{Mostafazadeh}; the condition of quasi-Hermiticity consists in the existence of a positive-definite matrix $\eta_+$ among the matrices $\eta$ accomplishing the equality in Eq. \eqref{PH rel} \cite{Mostafazadeh}. It can be verified that such a matrix reads \begin{equation} \eta_+= \begin{pmatrix} 1 & -\omega(t)/\Omega(t) \\ -\omega^*(t)/\Omega(t) & 1 \end{pmatrix}. \end{equation} It is positive-definite for $|\omega(t)|^2 < \Omega^2(t)$. In this case we may identify a new Hilbert space in which $H(t)$ is Hermitian or, in other words, we may define a new scalar product $\average{\cdot| \cdot}_{\eta_+}$ (defining the new Hilbert space), namely $\average{\cdot|\eta_+ \cdot}$ (where $\average{\cdot| \cdot}$ is the standard Euclidean scalar product), with respect to which $H(t)$ is Hermitian. However, if the parameter $t$ represents time, this condition is not sufficient for our Hamiltonian to describe a closed quantum physical system. 
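Both the pseudo-Hermiticity relation \eqref{PH rel} with $\eta=\hat{\sigma}^z$ and the spectrum $E_\pm(t)=\pm\sqrt{\Omega^2(t)-|\omega(t)|^2}$ follow from elementary $2\times 2$ algebra, so they are easy to check numerically. The following sketch (an illustrative aside, with arbitrary sample values of $\Omega$ and $\omega$ at a fixed instant) verifies both:

```python
import cmath

# Hypothetical sample values at a fixed instant t (not from the text)
W = 0.8                      # Omega(t)
w = 0.3 + 0.4j               # omega(t); |w|^2 = 0.25 < W^2, so the spectrum is real

H = [[W, -w], [w.conjugate(), -W]]       # matrix of Eq. (2x2 SU(1,1) Matrix)
eta = [[1, 0], [0, -1]]                  # eta = sigma_z, which is its own inverse

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(M):
    return [[complex(M[j][i]).conjugate() for j in range(2)] for i in range(2)]

H_dag = dagger(H)
eta_H_etainv = matmul(matmul(eta, H), eta)   # eta H eta^{-1}; should equal H^dagger

# eigenvalues of a traceless 2x2 matrix: E_pm = +/- sqrt(-det H)
detH = H[0][0] * H[1][1] - H[0][1] * H[1][0]
E_plus = cmath.sqrt(-detH)
```

Since $\eta=\hat{\sigma}^z$ squares to the identity, $\eta H\eta^{-1}$ is computed simply as $\eta H\eta$.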
It can be shown, indeed, that a quasi-Hermitian time-dependent Hamiltonian describes a closed quantum system characterized by a (pseudo-)unitary dynamics only if the positive-definite matrix $\eta_+$ is time-independent \cite{Mostafazadeh1}. This implies that an $su(1,1)$-symmetry Hamiltonian could describe a closed quantum system only if $\phi_\omega(t)=const.$ and $\Omega(t)$ and $|\omega(t)|$ have the same time-dependence, namely $\omega(t)=|\omega_0| f(t)$ and $\Omega(t)=\Omega_0 f(t)$, with $|\omega_0|^2 < \Omega_0^2$. For all these reasons, in view of possible dynamical applications of finite dimensional $su(1,1)$-symmetry Hamiltonians $H(t)$ in either classical or quantum contexts, we search solutions of the Cauchy problem ($\hbar=1$) \begin{equation}\label{Schroed Eq U} i\dot{U}(t)=H(t) U(t), \quad U(0)=\mathbb{1}, \end{equation} which for Hermitian Hamiltonians constitutes the time evolution Schr\"odinger equation. To this end we write the non-unitary operator $U(t)$ in the form \begin{equation}\label{U Kus} \begin{aligned} U(t)&\equiv e^{u_1(t)\hat{\sigma}^+}e^{-u_2(t)\hat{\sigma}^z}e^{u_3(t)\hat{\sigma}^-} \\ &=\begin{pmatrix} e^{-u_2(t)}+u_1(t)e^{u_2(t)}u_3(t) & u_1(t)e^{u_2(t)} \\ e^{u_2(t)}u_3(t) & e^{u_2(t)} \end{pmatrix}, \end{aligned} \end{equation} getting from Eq. \eqref{Schroed Eq U} the following system of differential equations \begin{equation}\label{RE} \left\{ \begin{aligned} \dot{u}_1(t) &= i\omega(t)-2i\Omega(t)u_1(t)+i\omega^*(t)u_1^2(t), \\ \dot{u}_2(t) &= i\Omega(t)-i\omega^*(t)u_1(t), \\ \dot{u}_3(t) &= -i\omega^*(t)e^{-u_2(t)}, \end{aligned} \right. \end{equation} to be associated with the initial conditions $u_j(0)=0$ ($j=1,2,3$). Once the first Riccati equation is solved, the remaining two can be simply integrated so that the whole $su(1,1)$-symmetry Hamiltonian problem may be exactly solved. 
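The factorized form of $U(t)$ in Eq. \eqref{U Kus} relies on $\hat{\sigma}^\pm$ being nilpotent, so their exponentials truncate after the linear term, while $e^{-u_2\hat{\sigma}^z}$ is diagonal. A minimal check (with arbitrary hypothetical values of $u_1,u_2,u_3$) that the product of the three exponentials reproduces the quoted closed-form matrix, and that its determinant is one:

```python
import cmath

# Arbitrary sample values of the three parameters
u1, u2, u3 = 0.2 + 0.1j, 0.3 - 0.2j, -0.1 + 0.4j

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

# exp(u1 sigma+) and exp(u3 sigma-) truncate exactly; exp(-u2 sigma_z) is diagonal
E_plus  = [[1, u1], [0, 1]]
E_z     = [[cmath.exp(-u2), 0], [0, cmath.exp(u2)]]
E_minus = [[1, 0], [u3, 1]]

U = matmul(matmul(E_plus, E_z), E_minus)

# closed form quoted in Eq. (U Kus)
U_closed = [[cmath.exp(-u2) + u1 * cmath.exp(u2) * u3, u1 * cmath.exp(u2)],
            [cmath.exp(u2) * u3, cmath.exp(u2)]]

detU = U[0][0] * U[1][1] - U[0][1] * U[1][0]
```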
A similar Riccati equation may be obtained when the analogous problem for the $su$(2) case is treated, and an interesting interplay between Physics and Mathematics has recently been reported \cite{MGMN}. Since no method is available to solve this Riccati equation for arbitrary $\Omega(t)$ and $\omega(t)$, we look for specific relations of physical interest between the Hamiltonian entries such that the Riccati equation under scrutiny can be solved analytically. To this end let us consider the following change of variable \begin{equation}\label{Change of Var} u_1(t)=i e^{i \phi_\omega(t)} Y(t). \end{equation} Plugging this expression into the Riccati equation in Eq. \eqref{RE}, we arrive at the following Riccati-Cauchy problem for the variable $Y(t)$ \begin{eqnarray}\label{RE for Y} \dot{Y}(t) &=& -|\omega(t)|Y^2(t)-i[2\Omega(t)+\dot{\phi}_\omega(t)]Y(t)+|\omega(t)|, \\ Y(0) &=& 0. \nonumber \end{eqnarray} It is quite clear, then, that under the analytical constraint \begin{equation} \label{Exact Scenarios} 2\Omega(t)+\dot{\phi}_{\omega}(t)=2\nu|\omega(t)|, \end{equation} with $\nu$ a time-independent real non-negative dimensionless parameter, Eq. \eqref{RE for Y} becomes exactly solvable. It is possible to adopt a different point of view to examine in depth the meaning of Eq. \eqref{Exact Scenarios}. To this end, inspired by the seminal paper \cite{Rabi 1954}, we perform the transformation \cite{GdCNM} \begin{equation}\label{Gen Rabi Transf} \ket{\psi(t)}=\exp\{i \phi_\omega(t) \hat{\sigma}^z/2\} \ket{\tilde{\psi}(t)}, \end{equation} getting the following new time-dependent Schr\"odinger equation \begin{equation} i\ket{\dot{\tilde{\psi}}(t)}=H_{eff}(t)\ket{\tilde{\psi}(t)}, \end{equation} with \begin{equation} H_{eff}(t)=\left[ \Omega(t)+{\dot{\phi}_\omega(t) \over 2} \right] \hat{\sigma}^z - i |\omega(t)| \hat{\sigma}^y. \end{equation} From this expression it is clear why the relation \eqref{Exact Scenarios} is a solvability condition for our problem. 
Indeed, the corresponding Schr\"odinger equation \begin{equation} i\ket{\dot{\tilde{\psi}}(t)}=|\omega(t)|\left[ 2\nu \hat{\sigma}^z - i \hat{\sigma}^y \right] \ket{\tilde{\psi}(t)}, \end{equation} may be easily solved, even if the effective Hamiltonian is time-dependent. The solution $Y_{\nu}(t)$ of the particular Riccati equation, related to a specific value of $\nu$, reads \begin{equation}\label{Y-g} \!\!\! Y_{\nu}(t) = \frac{\sqrt{\nu^{2}-1}\,\tan[\sqrt{\nu^{2}-1}\,\chi(t)]-i\nu\tan^{2}[\sqrt{\nu^{2}-1}\,\chi(t)]}{\nu^{2}\sec^{2}[\sqrt{\nu^{2}-1}\,\chi(t)]-1}, \end{equation} where the time dependent positive function $\chi(t)$ is defined as \begin{equation} \chi(t)=\int_{0}^{t}|\omega(\tau)|d\tau. \end{equation} We may identify different classes and related different solutions depending on the value of the parameter $\nu$. The case $\nu>1$ defines the trigonometric regime with solution $Y_{\nu}^t(t)$ in the form \eqref{Y-g}. For $0 < \nu < 1$ the solution $Y_{\nu}(t)$ is in the hyperbolic regime having the form \begin{equation}\label{Y-h} Y_{\nu}^h(t)= \frac{\sqrt{1-\nu^{2}}\tanh[\sqrt{1-\nu^{2}} \,\chi(t)]-i\nu\tanh^{2}[\sqrt{1-\nu^{2}}\,\chi(t)]}{1-\nu^{2}\sech^{2}[\sqrt{1-\nu^{2}}\,\chi(t)]}. \end{equation} The case $\nu=1$ defines the rational regime with \begin{equation}\label{Y-r} Y_{\nu}^r(t)= \frac{\chi(t)-i\chi^2(t)}{\chi^{2}(t)+1}. \end{equation} Finally, for $\nu=0$ we have the real solution $Y_{0}(t)$ reading \begin{equation} Y_{0}(t)=\tanh \left[ \chi(t) \right]. \end{equation} In this way, through Eq. \eqref{Change of Var}, we may construct the time evolution operator in Eq. \eqref{U Kus} for our exactly solvable scenario of interest. To this end, it is important to point out that the $SU$(1,1) group elements and then the time evolution operators generated by the Hamiltonians in Eq. \eqref{2x2 SU(1,1) Matrix} depend on only two complex parameters. 
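The closed-form solutions above can be cross-checked by integrating the Riccati equation \eqref{RE for Y} directly: under the constraint \eqref{Exact Scenarios} it reads $\dot{Y}=|\omega|(1-2i\nu Y-Y^2)$. The sketch below (an illustrative aside, taking for simplicity $|\omega(t)|=1$ so that $\chi(t)=t$) compares a Runge-Kutta integration with the rational ($\nu=1$) and hyperbolic ($0<\nu<1$) formulas:

```python
import math

def riccati_rk4(nu, T=1.0, n=4000):
    # integrate Y' = 1 - 2 i nu Y - Y^2 (i.e. |w| = 1, chi(t) = t), Y(0) = 0
    f = lambda Y: 1 - 2j * nu * Y - Y * Y
    h, Y = T / n, 0j
    for _ in range(n):
        k1 = f(Y)
        k2 = f(Y + 0.5 * h * k1)
        k3 = f(Y + 0.5 * h * k2)
        k4 = f(Y + h * k3)
        Y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

def Y_rational(chi):                 # nu = 1, Eq. (Y-r)
    return (chi - 1j * chi**2) / (chi**2 + 1)

def Y_hyperbolic(nu, chi):           # 0 < nu < 1, Eq. (Y-h)
    s = math.sqrt(1 - nu**2)
    th = math.tanh(s * chi)
    return (s * th - 1j * nu * th**2) / (1 - nu**2 / math.cosh(s * chi)**2)
```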
Indeed, the Cayley-Klein parametrization for the $SU$(1,1) group elements reads \begin{equation}\label{U Caley-Klein} U(t)= \begin{pmatrix} a(t) & b(t) \\ b^*(t) & a^*(t) \end{pmatrix}, \end{equation} with $|a(t)|^2-|b(t)|^2=1$. Comparing this form with the one given in Eq. \eqref{U Kus}, it is easy to derive the following relations \begin{equation} u_1={b \over a^*}, \quad u_2=\log(a^*), \quad u_3={b^* \over a^*}, \end{equation} allowing us to simplify the matrix representation of the time evolution operator in terms of the $u_j$ parameters as follows \begin{equation} U(t)=\left( \begin{array} [c]{cc}% \exp[u_{2}^{\ast}(t)] & u_{1}(t)\exp[u_{2}(t)]\\ u_{1}^{\ast}(t)\exp[u_{2}^{\ast}(t)] & \exp[u_{2}(t)] \end{array} \right), \end{equation} with $e^{u_{2}^{\ast}(t)} e^{u_{2}(t)}(1-|u_1(t)|^2)=1$. We see that in this case the expressions of the entries are easily readable and symmetric. Moreover, only two out of the three initial parameters appear. Then, the evolution operator for our general exactly solvable scenario may be written down as \begin{widetext} \begin{equation} U_{\nu}(t)=\left( \begin{array} [c]{cc} \exp[\mathfrak{r}_{\nu}(t)]\exp[-i\mathfrak{s}_{\nu}(t)] & |Y_{\nu}(t)|\exp[\mathfrak{r}_{\nu}(t)]\exp\left[ i(\mathfrak{s}_{\nu }(t)+\mathfrak{y}_{\nu}(t))\right] \\ |Y_{\nu}(t)|\exp[\mathfrak{r}_{\nu}(t)]\exp\left[ -i(\mathfrak{s}_{\nu }(t)+\mathfrak{y}_{\nu}(t))\right] & \exp[\mathfrak{r}_{\nu}(t)]\exp[i\mathfrak{s}_{\nu}(t)] \end{array} \right), \label{Usl11} \end{equation} \end{widetext} with \begin{subequations} \begin{align} \mathfrak{r}_{\nu}(t)&=\int_{0}^{t}|\omega(\tau)|\,\text{Re}[Y_{\nu}(\tau)]d\tau, \label{r ni}\\ \mathfrak{s}_{\nu}(t)&=\int_{0}^{t}\Omega(\tau)d\tau+\int_{0}^{t}|\omega(\tau)|\,\text{Im}[Y_{\nu}(\tau)]d\tau, \label{s ni}\\ \mathfrak{y}_{\nu}(t)&=\frac{\pi}{2}+2\nu\int_{0}^{t}|\omega(\tau)|d\tau-2\int_{0}^{t}\Omega(\tau) d\tau+\varphi_{\nu}(t) \label{y ni}, \\ \varphi_\nu(t)&=-\arctan\left[ {\nu\tan[\sqrt{\nu^2-1}\chi(t)] \over 
\sqrt{\nu^2-1}} \right]. \end{align} \end{subequations} Finally, it is easy to verify that the following identity \begin{equation} \det [U_{\nu}(t)]=\exp[2\mathfrak{r}_{\nu}(t)]\left( 1-|Y_{\nu}(t)|^2\right) = 1, \label{id1} \end{equation} is fulfilled at any time instant $t$ for arbitrary $\nu$. \section{Physical Applications} \label{Phys App} In this section we furnish physically interesting frameworks in which our results may be exploited and could play a relevant role in the solution of the dynamical problems. \subsection{Physical Implementation in Guided Wave Optics} We show now how the knowledge of the exact solution \eqref{Usl11} of the dynamical problem may be intriguingly applied to solve a propagation problem in a guided wave optics scenario. Let us consider two electromagnetic modes counter-propagating in, let us say, the $z$ direction and characterized by the two complex amplitudes $A$ and $B$. The amplitudes $A$ and $B$ depend on the coordinate $z$ if the two modes propagate in a perturbed medium (e.g. by an electric field, a sound wave, surface corrugations, etc.), otherwise they are constant. In the former case, the two amplitudes are mutually coupled in accordance with the following two equations \cite{Yariv} \begin{equation}\label{Problem electric modes} \begin{aligned} {dA(z) \over dz}&=k_{ab}(z)e^{-i\Delta z}B(z), \\ {dB(z) \over dz}&=k_{ba}(z)e^{i\Delta z}A(z), \end{aligned} \end{equation} where $\Delta$ is the phase-mismatch constant and $k_{ab}(z)$ and $k_{ba}(z)$ are complex coupling coefficients determined by the specific physical situation under scrutiny. 
Considering the case $k_{ab}(z)=k_{ba}^*(z)=k(z)$ \cite{Yariv}, after a few algebraic manipulations, it is possible to verify that the system \eqref{Problem electric modes} may be cast in the following form \cite{Simeonov} \begin{equation}\label{su(1,1) problem z dependent} i{dV(z) \over dz}=H(z)V(z), \end{equation} where $V(z)=[\tilde{A}(z),\tilde{B}(z)]^T$, with \begin{equation} \tilde{A}(z)=A(z)e^{i\Delta z/2}, \quad \tilde{B}(z)=B(z)e^{-i\Delta z/2}, \end{equation} and \begin{equation}\label{Hamiltonian z dependent} H(z)= \begin{pmatrix} -\Delta/2 & ik(z) \\ ik^*(z) & \Delta/2 \end{pmatrix}. \end{equation} It is worth noticing that $H(z)$ has the same structure as the general $2\times 2$ $su(1,1)$ matrix written in Eq. \eqref{2x2 SU(1,1) Matrix}. Accordingly, if we write $V(z)=\mathcal{U}(z)V(0)$, then the system turns into the following Schr\"odinger-Cauchy problem \begin{equation}\label{U(z) problem} i{d\mathcal{U}(z)\over dz}=H(z)\mathcal{U}(z), \quad \mathcal{U}(0)=\mathbb{1}, \end{equation} that is nothing but the problem we studied in the previous section [see Eq. \eqref{Schroed Eq U}] with $t$ replaced by $z$, for which we found sets of exact solutions related to specific relations between the Hamiltonian parameters making the system analytically solvable. Our class of solvability conditions \eqref{Exact Scenarios}, in this case, reads ($k(z)\equiv|k(z)|e^{i\phi_k(z)}$) \begin{equation}\label{Solvability Cond z problem} 2\nu|k(z)|+{d\phi_k(z) \over dz}=\Delta. \end{equation} It means that, if the space-dependence of $k(z)$ is such that Eq. \eqref{Solvability Cond z problem} is fulfilled for a specific phase-mismatch, then we are able to solve the original system in Eq. \eqref{Problem electric modes}. 
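As an aside, the change of variables mapping the coupled-mode system \eqref{Problem electric modes} into the Schr\"odinger-like problem \eqref{su(1,1) problem z dependent} can be verified numerically. The sketch below uses sample values of $\Delta$ and a constant coupling $k$ (chosen arbitrarily, and not required to satisfy the solvability condition): it integrates both systems and compares them after undoing the tilde transformation.

```python
import cmath

Delta = 1.0             # sample phase-mismatch
k = 0.3 + 0.2j          # sample constant coupling, k_ab = k_ba^* = k

def rk4(f, y, z, h):
    # one RK4 step for a 2-component complex ODE y' = f(z, y)
    k1 = f(z, y)
    k2 = f(z + h/2, [y[i] + h/2 * k1[i] for i in range(2)])
    k3 = f(z + h/2, [y[i] + h/2 * k2[i] for i in range(2)])
    k4 = f(z + h,   [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def coupled_modes(z, y):            # A' = k e^{-i Delta z} B,  B' = k* e^{i Delta z} A
    A, B = y
    return [k * cmath.exp(-1j * Delta * z) * B,
            k.conjugate() * cmath.exp(1j * Delta * z) * A]

def su11_schrodinger(z, y):         # V' = -i H V with H from the su(1,1) form
    At, Bt = y
    return [1j * Delta / 2 * At + k * Bt,
            k.conjugate() * At - 1j * Delta / 2 * Bt]

Z, n = 2.0, 2000
h = Z / n
y1 = [1 + 0j, 0j]                   # A(0) = 1, B(0) = 0
y2 = [1 + 0j, 0j]                   # tilde variables coincide at z = 0
for i in range(n):
    z = i * h
    y1 = rk4(coupled_modes, y1, z, h)
    y2 = rk4(su11_schrodinger, y2, z, h)

A_tilde = y1[0] * cmath.exp(1j * Delta * Z / 2)
B_tilde = y1[1] * cmath.exp(-1j * Delta * Z / 2)
```

The quantity $|A|^2-|B|^2$ is conserved by this system, mirroring the $SU(1,1)$ constraint $|a|^2-|b|^2=1$.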
More precisely, under the relation \eqref{Solvability Cond z problem}, we are able with our technique to find the operator $\mathcal{U}(z)$ through which we may construct the solutions $A(z)$ and $B(z)$ of the system \eqref{Problem electric modes}, for any initial condition $A(0)$ and $B(0)$. Thus, Eq. \eqref{Solvability Cond z problem} furnishes special links between $\Delta$ and $k(z)$ resulting in exactly solvable scenarios of two counter-propagating modes in a perturbed medium. This is just one of the several examples of the so-called quantum-optical analogy \cite{Longhi}, namely when space-dependent optical problems may be mapped into time-dependent quantum dynamical ones. \subsection{Trace and Positivity Preserving Non-Linear Equation of Motion} The example discussed in the previous subsection provides a problem in the classical optics context where the knowledge of the solutions of the Cauchy problem \eqref{Schroed Eq U}, based on the general $su$(1,1) non-Hermitian Hamiltonian \eqref{2x2 SU(1,1) Matrix}, may be fruitfully exploited to solve Eq. \eqref{U(z) problem}. In what follows we aim at exploring the applicability of our results in a quantum dynamical context. We underline that such an objective is not trivial since in the non-Hermitian Hamiltonian-based quantum dynamics conceptual difficulties in the physical interpretation of the mathematical results may occur. In the two-dimensional $SU$(1,1) case, differently from $SU$(2), the complex entries $a(t)$ and $b(t)$ appearing in the operator $U(t)$, solution of the Cauchy problem \eqref{Schroed Eq U}, lack a direct physical meaning. In the $SU$(2) case, in fact, we may interpret $|a(t)|^2$ and $|b(t)|^2$ as probabilities and then $U(t)$ as the time evolution operator of our quantum dynamical system, while for the $SU$(1,1) case, considered in this paper, $|a|^2 \geq 1$ since $|a(t)|^2-|b(t)|^2=1$ and consequently $U(t)$ cannot be identified as the time evolution generator. 
This is intrinsically related to the dynamics generated by an $su$(1,1) finite-dimensional Hamiltonian. Indeed, we know that only the infinite-dimensional representations of $SU$(1,1) are unitary. More generally, the crucial problem related to the physical interpretation of the mathematical results we get from the study of the Cauchy problem \eqref{Schroed Eq U} for a generic non-Hermitian Hamiltonian lies in the fact that the trace of any initial density matrix is not preserved in time. To recover the necessary normalization condition at any time instant, following the approach introduced in Ref. \cite{Sergi1}, we put \begin{equation}\label{rescaled} \rho(t)={\rho'(t) \over \text{Tr}\{\rho'(t)\}}, \end{equation} where $\rho'(t)=U(t)\rho'(0)U^\dagger(t)$ and $\dot{U}(t)=-iH(t) U(t)$. This choice leads to a ``new dynamics'', that is, to a new Liouville-von Neumann equation governing the dynamics of our system, obtained by differentiating Eq.~(\ref{rescaled}), namely \begin{equation}\label{Eq non lin NHH} \dot{\rho}(t)=-i[H_0(t),\rho(t)]-\{\Gamma(t),\rho(t)\}+2\rho(t) \text{Tr}\{\rho(t) \Gamma(t)\}, \end{equation} where we put $H(t)=H_0(t)-i\Gamma(t)$, with $H_0^\dagger(t)=H_0(t)$ and $\Gamma^\dagger(t)=\Gamma(t)$. From a physical point of view, this equation possesses interesting properties \cite{Graefe} which make it a valid candidate to describe the quantum dynamics of physical systems characterized by a non-Hermitian Hamiltonian like $PT$-symmetric systems \cite{Schindler, Tripathi}. 
The three most important properties to be pointed out are: 1) a pure state remains pure at any time, while the purity of a mixed state, in general, changes in time; 2) the trace and positivity are preserved at any time, since the new equation was constructed \textit{ad hoc} to satisfy this condition in order to recover the concept of probability and a statistical interpretation of the quantum dynamics related to non-Hermitian Hamiltonians; 3) the general solution of Eq. \eqref{Eq non lin NHH} reads, of course, \begin{equation} \rho(t)={U(t)\rho'(0)U^\dagger(t) \over \text{Tr}\left\{ U(t)\rho'(0)U^\dagger(t) \right\}}, \end{equation} where $U(t)$ is the (non-unitary) operator satisfying Eq. \eqref{Schroed Eq U}. Thus the solution of the non-linear problem \eqref{Eq non lin NHH} is traced back to solving our original problem \eqref{Schroed Eq U}. This circumstance means that, through the procedure exposed in Sec. \ref{Exact Solutions}, we are able to solve the generalized Liouville-von Neumann non-linear equation \eqref{Eq non lin NHH} for the class of time-dependent scenarios identified by the relation \eqref{Exact Scenarios}, whose time evolution operator $U_\nu(t)$ is reported in Eq. \eqref{Usl11}. Equation \eqref{Eq non lin NHH} was constructed, to some extent, \textit{ad hoc}, merely postulating a way of keeping the proper normalization of the density matrix during the whole evolution. However, one can argue that the form of an evolution equation is dictated by the fact that it should describe a one-parameter positivity preserving semigroup modeling time evolution of the density matrix of a quantum system. Such demands (semigroup property, positivity-preservation) are natural for a general, ``legitimate'' quantum evolution: the evolution over a given time can be seen as a composition of consecutive evolutions over consecutive, intermediate times and the probability is conserved. 
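Properties 1) and 2) are easy to exhibit numerically. The following sketch (an illustrative aside, with arbitrary sample values of $\Omega$ and $\omega$) integrates $i\dot{U}=HU$ for a constant $su(1,1)$ Hamiltonian, builds the normalized state $\rho(t)=U\rho_0U^\dagger/\mathrm{Tr}\{U\rho_0U^\dagger\}$ for the pure state $\rho_0=\ket{+}\bra{+}$, and checks that trace and purity stay equal to one even though $U(t)$ is non-unitary:

```python
# Sample constant parameters (hypothetical; |omega|^2 < Omega^2, real spectrum)
Omega, w = 0.6, 0.5 + 0.3j
H = [[Omega, -w], [w.conjugate(), -Omega]]    # Eq. (2x2 SU(1,1) Matrix)

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(M):
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def plus(A, c, B):                            # A + c*B, entrywise
    return [[A[i][j] + c * B[i][j] for j in range(2)] for i in range(2)]

def deriv(U):                                 # U' = -i H U
    return [[-1j * sum(H[i][m] * U[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

# RK4 integration of i U' = H U with U(0) = identity
U = [[1 + 0j, 0j], [0j, 1 + 0j]]
T, n = 3.0, 3000
h = T / n
for _ in range(n):
    k1 = deriv(U)
    k2 = deriv(plus(U, h / 2, k1))
    k3 = deriv(plus(U, h / 2, k2))
    k4 = deriv(plus(U, h, k3))
    U = [[U[i][j] + h / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
          for j in range(2)] for i in range(2)]

rho0 = [[1 + 0j, 0j], [0j, 0j]]               # pure state |+><+|
rho_p = matmul(matmul(U, rho0), dagger(U))    # rho'(t) = U rho'(0) U^dagger
tr = rho_p[0][0] + rho_p[1][1]
rho = [[rho_p[i][j] / tr for j in range(2)] for i in range(2)]

trace = rho[0][0] + rho[1][1]
rho2 = matmul(rho, rho)
purity = rho2[0][0] + rho2[1][1]              # Tr(rho^2), equals 1 for a pure state
```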
Let us consider a general, linear, positivity preserving map, \begin{equation}\label{GL} \phi_t(\rho)=\mathbb{U}(t)\rho \mathbb{U}^\dagger(t). \end{equation} When $\mathbb{U}(t)$ is a one-parameter semigroup $\mathbb{U}(s+t)=\mathbb{U}(s)\mathbb{U}(t)$, then, of course, $\phi_t(\rho)$ has the semigroup property, i.e. $\phi_s\circ\phi_t=\phi_{s+t}$, which means that the evolution from time $0$ to $s+t$ is composed from the evolution from $0$ to $t$ followed by the evolution from $t$ to $t+s$. Let us now consider the following nonlinear modification of the map $\phi_t(\rho)$, \begin{equation}\label{rescaled1} \hat{\phi}_t(\rho)=\frac{\phi_t(\rho)}{\mbox{$\mathrm{Tr}$}\left(\phi_t(\rho)\right)}. \end{equation} Using the linearity of the trace and of $\phi_t$, one easily proves that $\hat{\phi}_t$ also has the semigroup property $\hat{\phi}_s\circ\hat{\phi}_t=\hat{\phi}_{s+t}$ \cite{Grabowski}. Since it is clearly positivity- and trace-preserving, it describes a reasonable quantum evolution. Therefore, Eq.~(\ref{rescaled}) itself is less \textit{ad hoc} than it seems, and in addition it derives from the same dynamical map generating the Liouville-von Neumann equation when $H=H^\dagger$. \subsubsection{Quantum dynamics of a su(1,1) ``Rabi'' scenario} In order to better appreciate the physical aspects of such an equation of motion, we now want to study the ``Rabi'' scenario for the case of $su$(1,1) Hamiltonians and to point out differences and analogies with the $su$(2) case by bringing to light intriguing dynamical aspects. We know that the `standard' Rabi scenario describes a spin-1/2 subjected to a time-dependent magnetic field precessing around the $\hat{z}$-axis. 
The matrix representation of a general $su$(2) Hamiltonian may be written as \begin{equation} \tilde{H}(t)=\tilde{\Omega}(t)\hat{\sigma}^z+\tilde{\omega}_x(t)\hat{\sigma}^x+\tilde{\omega}_y(t)\hat{\sigma}^y= \left( \begin{array} [c]{cc}% \tilde{\Omega}(t) & \tilde{\omega}(t)\\ \tilde{\omega}^*(t) & -\tilde{\Omega}(t) \end{array} \right), \end{equation} with $\tilde{\omega}(t) \equiv \tilde{\omega}_x(t)-i\tilde{\omega}_y(t) \equiv |\tilde{\omega}(t)|e^{i\phi_{\tilde{\omega}}(t)}$ and where $\hat{\sigma}^k$ ($k=x,y,z$) are the Pauli matrices represented in the eigenbasis $\{\ket{\pm}\}$ of $\hat{\sigma}^z$: $\hat{\sigma}^z\ket{\pm}=\pm\ket{\pm}$. It is easy to see that considering a magnetic field precessing around the $\hat{z}$-axis amounts to taking the three parameters $\tilde{\Omega}$, $|\tilde{\omega}|$ and $\dot{\phi}_{\tilde{\omega}}$ time-independent. Further, the well-known Rabi resonance condition, ensuring a complete periodic population transfer between the two states $\ket{+}$ and $\ket{-}$, acquires the form $\tilde{\Omega}+\dot{\phi}_{\tilde{\omega}}/2=0$. It is worth pointing out that, even when the three parameters are time-dependent, the so-called generalized Rabi resonance condition $\tilde{\Omega}(t)+\dot{\phi}_{\tilde{\omega}}(t)/2=0$ \cite{GdCNM} is a necessary condition to obtain periodic oscillations with maximum amplitude \cite{GdCNM}. It is possible to convince oneself that the general $su$(1,1) Hamiltonian, whose matrix representation is reported in Eq. \eqref{2x2 SU(1,1) Matrix}, may be written as follows \begin{equation}\label{Gen NHH} H(t)=H_0(t)-i\Gamma(t), \end{equation} with \begin{equation}\label{Gen NHH parts} H_0(t)=\Omega(t) \hat{\sigma}^z, \qquad \Gamma(t)=-\omega_x(t) \hat{\sigma}^x+\omega_y(t) \hat{\sigma}^y, \end{equation} and this time we have $\omega(t) \equiv \omega_y(t)-i\omega_x(t) \equiv |\omega(t)|e^{i \phi_\omega(t)}$. 
We see, then, that we may interpret the $su$(1,1) Hamiltonian as a Rabi problem with a complex transverse magnetic field. Analogously to the $su$(2) case, we may define as the Rabi-like scenario for an $su$(1,1) dynamical problem the case in which the three parameters $\Omega$, $|\omega|$ and $\dot{\phi}_\omega$ are time independent. Thus, the related solution for the quantum dynamics is given by Eqs. \eqref{Usl11}, \eqref{r ni}, \eqref{s ni} and \eqref{y ni} with $\Omega(t)=\Omega_0$, $|\omega(t)|=|\omega_0|$ and $\dot{\phi}_\omega(t)=\dot{\phi}_\omega^0$. We now study the time behaviour of the Rabi transition probability $P_+^-(t)$, that is, the probability of finding the system in the state $\ket{-}$ at time $t$ when it is initialized at time $t=0$ in the state $\ket{+}$. In the framework of the nonlinear equation of motion discussed before for the quantum dynamics of a system governed by a non-Hermitian Hamiltonian, this means considering $\rho_0=\ket{+}\bra{+}$. Considering the non-unitary operator $U(t)$ both in the Cayley-Klein \eqref{U Caley-Klein} and in our \eqref{Usl11} form, it is easy to see that $P_+^-(t)=\rho_{22}(t)$ reads \begin{eqnarray} P_+^-(t) = {|b(t)|^2 \over |a(t)|^2+|b(t)|^2} = \frac{|Y_{\nu}(t)|^2}{ 1+|Y_{\nu}(t)|^2}. \label{PAST1} \end{eqnarray} In Figs. \ref{fig:P+-ni} and \ref{fig:P+-tol} we report the transition probability $P_+^-$ against the dimensionless time $\tau=|\omega_0|t$ for different values of the parameter $\nu$. This is done in the case of a Rabi-like scenario, which amounts, as explained before, to taking the two parameters $\Omega$ and $|\omega|$, defining the operator $U(t)$ through Eqs. \eqref{Usl11}, \eqref{r ni}, \eqref{s ni} and \eqref{y ni}, independent of time.
\begin{figure}[htp] \begin{center} \subfloat[][]{\includegraphics[width=0.22\textwidth]{Pni.pdf}\label{fig:P+-ni}} \qquad \subfloat[][]{\includegraphics[width=0.22\textwidth]{Ptoone.pdf}\label{fig:P+-tol}} \captionsetup{justification=raggedright,format=plain,skip=4pt}% \caption{(Color online) a) Time dependence of the transition probability $P_+^-$ for different values of $\nu$: $\nu=0;0.7;1;2;5$ correspond to the colors blue, green, red, magenta and brown, respectively; b) The plot illustrates ($\nu=2;1.2;1.01;1 \rightarrow$ blue, green, magenta, red) the passage of $P_+^-(t)$ from the oscillatory regime to the plateau regime.} \end{center} \end{figure} We note that for $\nu \geq 1$ we have oscillations whose amplitude and period decrease as $\nu$ increases; for $0 \leq \nu < 1$, instead, an asymptotic regime appears. This constitutes a deep difference between the Rabi scenario in the $su$(2) and in the $su$(1,1) case. In the former, the behaviour of the transition probability $P_+^-(t)$ is always oscillatory in time, and different values of $\nu$ are related to different amplitudes of the oscillations. In the latter, instead, two kinds of time behaviour appear depending on the value of the parameter $\nu$, with $\nu=1$ separating the two regimes. It is important to highlight at this point that the existence of the two regimes is, in general, not related to whether the Hamiltonian spectrum is real or complex. The spectrum, indeed, for the ``Rabi'' scenario we are analysing, is $t$-independent, namely $\pm\sqrt{\Omega_0^2-|\omega_0|^2}$, and within the solvability condition \eqref{Exact Scenarios} under scrutiny, it is real if ($\Omega_0 > 0$) \begin{equation} \nu > 1+{\dot{\phi}_\omega^0 \over \Omega_0}. \end{equation} We see, then, that only if $\dot{\phi}_\omega^0=0$ does the $\nu$-dependent transition between the two dynamical regimes coincide with the passage from a real to a complex spectrum.
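The dichotomy between the two regimes is easy to reproduce numerically. The sketch below (plain Python; it assumes the simplest case $\dot{\phi}_\omega^0=0$, i.e. a constant $H=\Omega_0\hat{\sigma}^z+i|\omega_0|\hat{\sigma}^x$, with illustrative parameter values) evaluates Eq. \eqref{PAST1} from the closed form of $U(t)=e^{-iHt}$ for a traceless $2\times2$ matrix, and checks the oscillatory regime (real spectrum), the plateau regime (complex spectrum), and the $1/2$ ceiling on $P_+^-$:

```python
import cmath

def transition_probability(Omega, w, t):
    # U(t) = exp(-iHt) for the constant (Rabi-like) su(1,1) Hamiltonian
    # H = Omega*sz + i*w*sx (traceless, so H^2 = (Omega^2 - w^2) * Id)
    lam = cmath.sqrt(Omega**2 - w**2)      # +/- lam: the t-independent spectrum
    if lam == 0:
        c, s = 1.0, t                      # limit of sin(lam*t)/lam as lam -> 0
    else:
        c, s = cmath.cos(lam * t), cmath.sin(lam * t) / lam
    a = c - 1j * Omega * s                 # Cayley-Klein entries of U(t)
    b = w * s
    return abs(b)**2 / (abs(a)**2 + abs(b)**2)     # Eq. (PAST1)

ts = [0.02 * k for k in range(600)]
osc  = [transition_probability(2.0, 1.0, t) for t in ts]   # real spectrum
plat = [transition_probability(0.5, 1.0, t) for t in ts]   # complex spectrum

assert max(osc) < 0.5 and max(plat) < 0.5   # P_+^- can never exceed 1/2
assert min(osc[100:]) < 0.01                # oscillatory: P returns near zero
assert abs(plat[-1] - plat[-50]) < 1e-3     # plateau: P settles to a constant
```

The $1/2$ bound follows from the $SU$(1,1) constraint $|a|^2-|b|^2=1$, which gives $P_+^-=|b|^2/(1+2|b|^2)<1/2$ for all $t$.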
This happens to be the case for the generic $su(1,1)$ $2\times2$ $PT$-symmetric matrix in Eq. \eqref{2x2 PT Matrix}, for which $\phi_\omega(t) = \pi/2$, or for a $t$-independent $su(1,1)$ matrix. Conversely, if $\dot{\phi}_\omega^0 \neq 0$, two interesting cases arise. Namely, if $\dot{\phi}_\omega^0 < 0$ the transition between the two dynamical regimes ($\nu > 1$ $\rightarrow$ $\nu<1$) occurs while the spectrum keeps its reality, since, in this case, $1+{\dot{\phi}_\omega^0 / \Omega_0} < 1$; on the other hand, if $\dot{\phi}_\omega^0 > 0$ there is a range of values of $\nu$, namely $ 1 < \nu < 1+{\dot{\phi}_\omega^0 / \Omega_0}$, for which the spectrum becomes complex without any appreciable evidence in the dynamical behaviour of the system. As a last remark, we want to highlight a common feature between the $su$(2) and the $su$(1,1) cases. It is interesting to note that the Rabi-like resonance condition $\Omega+\dot{\phi}_{\omega}/2=0$ amounts to putting $\nu=0$, and the related curve is the top limit (blue) curve in Fig. \ref{fig:P+-ni}. We know that in the $su$(2) case this condition ensures a complete periodic population transfer between the two levels of the system, that is, oscillations with maximum amplitude. Analogously, in the $su$(1,1) case, the scenario related to the Rabi resonance condition is the one with the maximum value of the transition probability at any time. However, it is important to note that in the $su$(1,1) case the transition probability, defined according to the framework delineated in Refs. \cite{Sergi1} and \cite{Graefe}, cannot exceed the value of 1/2, meaning that, in this instance, we cannot have complete population transfer. \section{Conclusions} The merit of this paper is twofold.
First of all, we identify a non-trivial class of $su(1,1)$ time-dependent Hamiltonian models for which exact solutions of the ``dynamical'' problem $i\dot{U}(t)=H(t)U(t)$ may be provided. The direct applicability of our approach to classical optical problems witnesses the usefulness of our method. Secondly, we construct step by step a reasonable framework within which the knowledge of the non-unitary solution of the above-mentioned equation may be legitimately exploited as a source for generating the time evolution of a generic initial state of the system represented by $H(t)$. Here ``legitimately'' means that the new dynamical equation for $\rho$ introduced in \cite{Sergi1} rests on the introduction of a simple, well-behaved dynamical map generating the standard von Neumann--Liouville equation when the system is described by a Hermitian Hamiltonian. Exploiting this new point of view, we treat the dynamics of an $su(1,1)$ ``Rabi'' system, generating results interpretable within the quantum context. We analytically evaluate the transition probability $P_+^-(t)$ under the three different regimes highlighted in general terms in Sec. \ref{Exact Solutions}, evidencing remarkable differences from the time behaviour exhibited by the same probability in the Rabi $su$(2) problem. We have in addition clarified that the passage from one $\nu$-regime to the other is governed by a condition on this parameter which does not coincide with the one ruling the transition from a real (time-independent!) energy spectrum to a complex one. This result makes evident that such a coincidence can at most be a particular case ($\dot{\phi}_\omega=0$) in a wider scenario where a direct link between the regime transition and the change in the spectrum of the Hamiltonian does not generally occur. As a conclusive remark, we emphasize that the mathematical analysis and the corresponding results developed in Sec. \ref{Exact Solutions} do not exhaust their potential in the quantum context alone.
We claim, in fact, that our method might be of help in all those situations wherein the behaviour of the system under scrutiny is ruled by a system of non-autonomous first-order differential equations exhibiting an intrinsic $su(1,1)$ symmetry. \section{Acknowledgements} RG and AM acknowledge stimulating conversations on the subject with A. Sergi. RG acknowledges financial support by research funds difc 3100050001d08+, University of Palermo, in memory of Francesca Palumbo. ASMC acknowledges financial support from the Brazilian agency CNPq, Grant No. 453835/2014-7.
\section{Introduction} Distributed document representation addresses basic problems such as retrieval, classification, and inference in order to support other downstream tasks. Due to the semantic complexity of documents, learning document representations is still a difficult task, not well solved at present. For a document composed of multiple bodies, there should be a logical relationship and smoothness between its adjacent bodies. There is thus a relation between the adjacent bodies of a document which shows whether the document is meaningful, both logically and syntactically \cite{li2014model}. This relation is very important for scientific documents: without it, a document becomes meaningless. It is therefore necessary to maintain this relation in distributed document representation. This relation is called coherence. Current distributed document representation methods are based on word embedding or sentence embedding. \citeauthor{yurochkin2019hierarchical} \shortcite{yurochkin2019hierarchical} treat document representation as a hierarchical optimal transport problem based on the word mover's distance \cite{kusner2015word}, word embeddings, and topics. \citeauthor{chen2019self} \shortcite{chen2019self} build the document embedding by averaging the sentence embeddings and training through a discriminator that determines whether a sentence belongs to a document. However, these methods, based on the context information of words or sentences, do not take into account the coherence of the document as a whole. Word and sentence embeddings can easily maintain coherence because the context (adjacent words or sentences) lies around the words and sentences. The context of a document, in contrast, exists inside the document, not around it. Maintaining the coherence of the document as a whole is a challenge.
In this paper, we propose a coupled text pair embedding (CTPE) model to learn distributed document representations by maintaining the coherence of the scientific document. Specifically, we first divide the scientific document into two parts (such as the title and abstract) to form a coupled text pair. We use common methods to obtain word or sentence embeddings for all coupled text pairs (such as word embeddings for short documents and sentence embeddings for long documents). Then, we adopt negative sampling to construct uncoupled text pairs whose two parts are randomly derived from different documents. Our model maintains the coherence of the document by training to determine whether a text pair is coupled or uncoupled. We input the word or sentence embeddings of the text pair into the model and finally obtain two vector representations of the coupled text pair as the document representation. This paper further proposes a method for calculating the similarity between documents represented by such paired embeddings. The main contributions of this paper are summarized as follows: \begin{itemize} \item We introduce coherence into the representation of scientific documents for the first time; based on two adjacent bodies in the document, coherence makes a document meaningful both logically and syntactically. \item We propose a novel form of embedding and a general model, called coupled text pair embedding (CTPE), to maintain document coherence and represent documents. \item We propose a document similarity calculation method for documents represented by coupled text pair embeddings. \item We perform experiments on information retrieval and recommendation to show that our model is competitive with state-of-the-art methods based on the context information of words or sentences.
\end{itemize} \section{Related Work} Current distributed document representation learning methods are not specifically designed for scientific documents. In the following we describe the main distributed document representation learning methods. \subsection{Word-level methods} Word-level methods use words as the units of computation to represent the document. Early distributed document representation methods are mainly algebraic or probabilistic models such as TF-IDF \cite{salton1988term} and topic models \cite{blei2003latent,yang2018topic}. These models treat the document as a bag of words, neglecting other useful information such as the context information of words. With the development of neural networks, it has become easier to obtain context information about words, e.g., word2vec \cite{mikolov2013distributed}, lifelong domain word embedding \cite{xu2018lifelong} and XLNet \cite{yang2019xlnet}. On this basis, \citeauthor{le2014distributed} \shortcite{le2014distributed} and \citeauthor{luo2019learning} \shortcite{luo2019learning} predict a word embedding through other word embeddings and the document embedding while learning the document embedding. \citeauthor{arora2016simple} \shortcite{arora2016simple}, \citeauthor{chen2017efficient} \shortcite{chen2017efficient} and \citeauthor{schmidt2019improving} \shortcite{schmidt2019improving} treat the average word embedding as the document embedding. \citeauthor{hansen2019contextually} \shortcite{hansen2019contextually} weight word embeddings to obtain the document embedding. \citeauthor{kusner2015word} \shortcite{kusner2015word} and \citeauthor{wu2018word} \shortcite{wu2018word} obtain the distance between documents by calculating the word mover's distance. There are also methods based on labels for specific tasks. \citeauthor{xiao2019label} \shortcite{xiao2019label} propose a document representation model based on label semantic information for multi-label text classification.
\citeauthor{RN1484} \shortcite{RN1484} use citations between documents to train a transformer model and obtain representations of scientific documents. \subsection{Sentence-level methods} Sentence-level methods use sentences as the units of computation to represent the document. In addition to word embeddings, sentence embeddings \cite{hill2016learning,logeswaran2018efficient} can also be obtained from the context information of sentences. The average of the sentence embeddings, or a sentence embedding that treats the whole document as one sentence, can be used as the document embedding. \citeauthor{li2014model} \shortcite{li2014model} maintain coherence in sentence embeddings based on sentence order, but do not form a document embedding. \citeauthor{chen2019self} \shortcite{chen2019self} build the document embedding by averaging the sentence embeddings and training through a discriminator that determines whether a sentence belongs to a document. These methods, based on the context information of words or sentences, do not take into account the coherence of the document as a whole. \section{Problem Statement and Model Architecture} In this section, we formulate the problem of distributed document representation and introduce the structure and technical details of the coupled text pair embedding (CTPE) model. \begin{figure*}[h] \centering \includegraphics[width=1\textwidth]{model.png} \caption{The overall architecture of CTPE. The positive sample takes $f_i$ and $b_i$ as an example, and the negative sample takes $f_x$ and $b_y$ as an example. $f_i$, $b_i$, $f_x$ and $b_y$ correspond to four CNN models (former part, latter part, former part and latter part), respectively, and each CNN model has four parallel convolutional layers.} \label{model} \end{figure*} \subsection{Problem Statement} First, we define our terms in a formal way.
For a document set $\mathbf{D}=\{d_1,d_2,...,d_{|\mathbf{D}|}\}$, its coupled text pairs set is defined as $\mathbf{S}=\{s(d_i)|s(d_i)=(f_i,b_i),d_i\in\mathbf{D},i\in[1,|\mathbf{D}|]\}$ and its uncoupled text pairs set is defined as $\mathbf{S}^*=\{s_{ij}|s_{ij}=(f_i,b_j),1\leqslant i\neq j\leqslant|\mathbf{D}|\}$, where $f_i$ and $b_i$ denote the former and latter part of the text of document $d_i$, respectively. Document $d_i$ is the text we want to embed. Depending on the data available, a document can be the full text, the title and abstract only, or the headline and bodies, etc. In order to obtain $(f_i,b_i)$ from $d_i$, we define a parameter $pos$ for the segmentation position, which determines the split point that divides the document into two bodies. The text to the left of $pos$ is $f_i$, and the text to the right is $b_i$. For example, $pos$ is generally defined as the position between the title and the abstract if the document only includes the title and the abstract. We now define our problem in a formal way. Given a document set $\mathbf{D}$ and its coupled text pairs set $\mathbf{S}$, our goal is to learn a distributed representation $g[s(d_i)]=v_i=(v_{f_i},v_{b_i})$ for each document $d_i$, where $v_{f_i}$ and $v_{b_i}$ denote the embeddings of $f_i$ and $b_i$, respectively. In the next subsection, we describe the proposed coupled text pair embedding (CTPE) model for distributed document representation. \subsection{CTPE architecture} Figure \ref{model} shows the overall architecture of CTPE. We introduce coherence into the document representation with the three parts \textbf{text pair construction}, \textbf{training model}, and \textbf{loss function}, and propose a novel embedding form with the fourth part, \textbf{distributed document representation}.
\textbf{Text pair construction.} We propose a text pair representation so that the model learns to determine which pair is more coherent, since coherence is the relation between the two bodies of a document. We first divide each document $d_i$ into a former part $f_i$ and a latter part $b_i$ to form a coupled text pair. The $f_i$ and $b_i$ of the same document are used as a positive sample $(f_i,b_i)\in\mathbf{S}$. Then we randomly select a document $d_j=random(\mathbf{D})$ from the document set $\mathbf{D}$ by negative sampling. We construct uncoupled text pairs $(f_i,b_j)$ and $(f_j,b_i)$ from $s(d_i)$ and $s(d_j)$, and randomly choose $(f_i,b_j)$ or $(f_j,b_i)$ as a negative sample $(f_x,b_y)=random[(f_i,b_j),(f_j,b_i)]\in\mathbf{S}^*$, whose $f_x$ and $b_y$ come from different documents. We adopt negative sampling to obtain the coupled and uncoupled text pairs sets $\mathbf{S}$ and $\mathbf{S}^*$ in each epoch of training. \textbf{Training model.} We introduce a training model that learns the relation between $f$ and $b$ in order to determine which pair is more coherent. The embedding model represents text pairs in a distributed form, e.g., through word, token, sentence, or paragraph embeddings. We can choose different embedding models for documents of different lengths; we compare different embedding models in the experiments. The embedding model converts a text consisting of $l$ words, tokens, sentences, or paragraphs into $l$ vectors of dimension $dim$. The $dim*l$ dimensional matrix produced by the embedding model is input to the CNN model to learn the relation between $f$ and $b$. The CNN model is composed of an input layer, convolutional layers, pooling layers, and an output layer. The number of channels of the CNN model is $n_f$, and the sizes of the convolution kernels are $n_s=\{n_{s1}, n_{s2},n_{s3},n_{s4}\}$. The right side of Figure \ref{model} shows the details of the CNN model.
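The text pair construction step can be sketched as follows (a minimal sketch in plain Python; documents are assumed to be token lists and $pos$ a token index, which is a simplification of the title/abstract split used above):

```python
import random

def coupled_pairs(docs, pos):
    # split each document at position `pos` into a (former, latter) pair
    return [(d[:pos], d[pos:]) for d in docs]

def negative_samples(pairs):
    # for each document, mix one of its parts with a part of another
    # randomly chosen document (requires at least two documents)
    out = []
    for i, (f_i, b_i) in enumerate(pairs):
        j = random.randrange(len(pairs))
        while j == i:
            j = random.randrange(len(pairs))
        f_j, b_j = pairs[j]
        out.append(random.choice([(f_i, b_j), (f_j, b_i)]))
    return out

docs = [["title1", "abs1a", "abs1b"],
        ["title2", "abs2a", "abs2b"],
        ["title3", "abs3a", "abs3b"]]
S = coupled_pairs(docs, 1)          # coupled pairs (positive samples)
S_star = negative_samples(S)        # uncoupled pairs (negative samples)

assert S[0] == (["title1"], ["abs1a", "abs1b"])
assert len(S_star) == len(S)
assert all(p not in S for p in S_star)   # parts come from different documents
```

Regenerating `S_star` once per epoch matches the per-epoch negative sampling described above.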
The training process is implemented as follows. Since the lengths of $f_i$ and $b_i$ differ, the maximum word length of $f_i$ and $b_i$ is limited to $l_{max}\geqslant l$. First, for any $f_i$ or $b_i$ with an input length $len\leqslant l_{max}$, a matrix composed of $l$ vectors of dimension $dim$ is obtained from the embedding model and fed to the input layer of the CNN model. Then, through convolution operations with rectified linear unit (ReLU) activations and convolution filters of stride 1, four convolutional layers are obtained, whose sizes are respectively $n_f*(l-n_{s1}+1)$, $n_f*(l-n_{s2}+1)$, $n_f*(l-n_{s3}+1)$ and $n_f*(l-n_{s4}+1)$. Next, each convolutional layer is reduced to a one-dimensional vector of length $n_f$ by a max-pooling operation. Finally, a text vector of length $4*n_f$ is obtained by concatenation. The pairs $(f_i, b_i)$ and $(f_x, b_y)$ thus yield vector pairs $(v_{f_i}, v_{b_i})$ and $(v_{f_x}, v_{b_y})$ through the embedding model and the CNN model. In order to reduce the training time and space complexity, positive and negative samples share the same model parameters. \textbf{Loss function.} In order to maintain coherence in the document representation, the loss function aims to enlarge the gap between the positive and negative samples; that is, the similarity between the $f_i$ and $b_i$ of the same document must be greater than the similarity between the $f_x$ and $b_y$ of different documents. We assume a minimum similarity margin $M$: the difference in similarity between the positive and negative samples must be at least $M$. If this difference is smaller than $M$, the shortfall is recorded as the loss value; otherwise it does not contribute to the loss.
For a positive sample $(f_i, b_i)$ and a negative sample $(f_x,b_y)$, the loss function $L$ is defined as: \begin{equation} \label{loss-function} \begin{aligned} L = &max\{0,-(\mathit{diff}_{xy}-M)\}\\ where&~\mathit{diff}_{xy}=\\ &\begin{cases} cos(v_{f_i},v_{b_i})-cos(v_{f_i},v_{b_j}),\; y=j\\ cos(v_{f_i},v_{b_i})-cos(v_{f_j},v_{b_i}),\; x=j \end{cases}\\ \end{aligned} \end{equation} where $cos$ denotes the cosine similarity of two vectors and $M$ represents the minimum similarity distance (called the margin). This means that if $\mathit{diff}_{xy}$ is greater than $M$, then it is no longer necessary to modify the weights, i.e., $L=0$. Mini-batch gradient descent is performed with the adaptive moment estimation (Adam) algorithm, with the learning rate set to $ln$. \textbf{Distributed document representation.} After training with the loss function, we obtain a distributed representation of the document with the trained model. As shown in Figure \ref{representation}, we compute the vector pair $(v_{f_i},v_{b_i})$ of a document $d_i$ through the trained model of the previous section. $v_i=(v_{f_i},v_{b_i})$ is defined as the distributed representation of document $d_i$. $v_i$ is composed of a pair of vectors, which differs from a single embedding vector. The overall procedure from document to distributed representation is summarized in Algorithm \ref{CTPE}.
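The margin loss of Eq. \eqref{loss-function} reduces to a hinge on the similarity gap between the coupled and uncoupled pair; a minimal sketch in plain Python (the vector values and the margin $M=0.2$ are illustrative):

```python
import math

def cosine(u, v):
    # cosine similarity of two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def margin_loss(pos_pair, neg_pair, M=0.2):
    # L = max{0, -(diff - M)} with diff = cos(positive) - cos(negative)
    diff = cosine(*pos_pair) - cosine(*neg_pair)
    return max(0.0, -(diff - M))

# coherent pair aligned, incoherent pair orthogonal: gap 1.0 > M, so no loss
assert margin_loss(([1.0, 0.0], [1.0, 0.0]), ([1.0, 0.0], [0.0, 1.0])) == 0.0
# gap -1.0 < M: the loss is the shortfall M - diff = 1.2
assert abs(margin_loss(([1.0, 0.0], [0.0, 1.0]), ([1.0, 0.0], [1.0, 0.0])) - 1.2) < 1e-9
```

In training, `pos_pair` and `neg_pair` would be the CNN outputs $(v_{f_i},v_{b_i})$ and $(v_{f_x},v_{b_y})$.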
\begin{figure} \centering \includegraphics[width=0.75\columnwidth]{representation.png} \caption{Distributed document representation of CTPE.} \label{representation} \end{figure} \begin{algorithm} \caption{\label{CTPE} Coupled text pair embedding (CTPE)} \LinesNumbered \KwIn{$d_k$, $\mathbf{D}$, $l_{max}$, $ln$, $L$, $pos$, embedding model $E$, batch size $b$, epoch $e$} \KwOut{document representation $v_k$} $\verb|//|$ Coupled text pair \\ Initialize the coupled text pairs set $\mathbf{S}$ \\ \For{each $d_i\in\mathbf D$} { Obtain $f_i$ and $b_i$ based on $pos$ and $l_{max}$\\ Coupled text pair $s(d_i)=(f_i,b_i)$\\ $\mathbf{S}\leftarrow s(d_i)$ } $\verb|//|$ Training\\ \For{$i$th epoch, $i<e$} { Initialize the uncoupled text pairs set $\mathbf{S^*}$ \\ \For{each $d_i\in\mathbf D$} { $d_j=random(\mathbf D)$\\ $(f_i,b_i),(f_j,b_j)=s(d_i),s(d_j)$\\ $\mathbf{S^*}\leftarrow random[(f_i,b_j),(f_j,b_i)]$ } \For{Select $b$ $data$ from $\mathbf{S},\mathbf{S^*}$} { $\mathbf M\leftarrow 4\times b$ matrices$\leftarrow$ input $data$ into $E$\\ $4\times b$ vectors $\leftarrow$ input $\mathbf M$ into CNN\\ Adam processing by $L$ and $ln$ } } $\verb|//|$ Distributed document representation \\ $(f_k,b_k)=s(d_k)$\\ $\mathbf M\leftarrow 2$ matrices$\leftarrow$ input $(f_k,b_k)$ into $E$\\ $(v_{f_k},v_{b_k})\leftarrow 2$ vectors$\leftarrow$ input $\mathbf M$ into CNN model\\ \textbf{Return}: $v_k=(v_{f_k},v_{b_k})$ \end{algorithm} \subsection{Calculation Method of Similarity} \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{similarity.png} \caption{Similarity calculation of documents in CTPE.} \label{similarity} \end{figure} We propose a novel similarity calculation method because of the special form of our document representation. General similarity calculation methods compute similarity without accounting for coherence.
The similarity is computed between the $f$ of one document and the $b$ of another, because coherence is the relation between adjacent bodies. As shown in Figure \ref{similarity}, for any two documents $d_i$ and $d_j$, the similarity $sim(d_i,d_j)$ between them is defined as: \begin{equation} \label{sim-function} \begin{aligned} &sim(d_i,d_j)=cos(v_{f_i},v_{b_j})+cos(v_{f_j},v_{b_i}) \end{aligned} \end{equation} $sim(\cdot,\cdot)$ measures the coherence between bodies coming from different documents, relative to that within the same document. $sim(\cdot,\cdot)$ differs from a general similarity measure and need not satisfy $sim(d_i,d_i)=sim(d_j,d_j)=2$, because it does not measure whether two documents are ``exactly the same''. Finally, we obtain a distributed representation of the document and a similarity calculation method for this representation. With these two operations, we implement the downstream tasks. \section{Experiments} In this section, we evaluate the performance and effectiveness of our model. We conduct experiments against 24 comparison methods on the arXiv (papers), DBLP (papers) and USPTO (patents) datasets. Our code will be released on GitHub\footnote{https://github.com/aitsc/text-representation}. \begin{table*} \centering \begin{tabular}{ccccccc} \hline \multirow{2}{*}{}&\multicolumn{2}{c}{arXiv}&\multicolumn{2}{c}{DBLP}&\multicolumn{2}{c}{USPTO}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5}\cmidrule(lr){6-7} & candidate & test & candidate & test & candidate & test \\ \hline documents & 20,000 & 1,000 & 20,000 & 1,000 & 20,000 & 1,000 \\ years & 1991-2015 & 2016 & 1967-2016 & 2017 & 2002-2016 & 2017-2018 \\ words per document & 205-425 & 205-418 & 155-421 & 156-412 & 250-680 & 255-629 \\ labels & - & 22-95 & - & 15-16 & - & 8-29 \\ \hline \end{tabular} \caption{\label{datasets} Datasets.
The labels in the table refer to the range of the number of groundtruth candidate documents for each test document.} \end{table*} \subsection{Downstream Tasks and Datasets} We use the arXiv dataset for the information retrieval task, which aims to retrieve similar candidate papers for a paper (consisting of title and abstract). We use the DBLP and USPTO datasets for recommendation tasks (citation recommendation and patent recommendation). The citation recommendation task aims to recommend appropriate candidate citations for a paper (consisting of title and abstract). The patent recommendation task aims to recommend similar candidate patents for a patent (consisting of title, abstract, and claim). The details of the datasets are described below. \textbf{arXiv.} This dataset comes from the public data source of arXiv\footnote{ftp://3lib.org//oai\_dc/arxiv}, which contains a total of 1,180,081 papers. All papers contain titles, abstracts, authors, publication time, and subject. For a test paper to be queried, candidate papers with the same subject are taken as groundtruth. Evaluation protocol: whether the groundtruth candidate papers are retrieved. \textbf{DBLP.} This dataset comes from the public data source of DBLP \cite{tang2008arnetminer}. We use the v10 version\footnote{https://aminer.org/citation}, which includes a total of 3,079,007 papers. We use the titles, abstracts, publication times, and citations (groundtruth) in this dataset. Evaluation protocol: whether the groundtruth citations can be recommended. \textbf{USPTO.} This dataset comes from the public data source of the PatentsView database\footnote{http://www.patentsview.org/download} (a total of 6,424,534 patents). We extract the title, abstract, year, first paragraph of the claim, and the citations (groundtruth) cited by examiners from the dataset. Evaluation protocol: whether the groundtruth citations can be recommended.
In order to effectively evaluate the tasks, we perform experiments using cleaned subsets of arXiv, DBLP, and USPTO that are released in detail on GitHub\footnote{https://github.com/opendata-ai/tr}. Each dataset contains 20,000 candidate documents and 1,000 randomly selected test documents (papers or patents). Table \ref{datasets} describes these datasets in detail. The number of candidates and the settings for DBLP and USPTO are similar to \cite{cai2018generative} and \cite{ji2019patent}. In order to conveniently evaluate the performance of the different comparison methods, we use the same text preprocessing for all data. The text preprocessing is divided into five steps: 1. convert all text to lowercase; 2. remove HTML labels; 3. restore HTML escape characters; 4. split text at punctuation; 5. remove tokens without letters. For sentence embedding, we use the punctuation at the end of sentences to segment the document into sentences before text preprocessing. \begin{table*}[!h] \centering \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Methods}&\multicolumn{3}{c}{arXiv}&\multicolumn{3}{c}{DBLP}&\multicolumn{3}{c}{USPTO}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & P & R & F$_1$ & P & R & F$_1$ & P & R & F$_1$ \\ \hline doc2vec(ran) & 0.0030 & 0.0009 & 0.0014 & 0.0007 & 0.0009 & 0.0008 & 0.0005 & 0.0010 & 0.0006 \\ avg-word2vec(ran) & 0.0134 & 0.0048 & 0.0071 & 0.0414 & 0.0542 & 0.0469 & 0.0457 & 0.0849 & 0.0594 \\ \hline TF-IDF & 0.1532 & 0.0571 & 0.0831 & 0.2564 & 0.3360 & 0.2908 & 0.1280 & 0.2419 & 0.1674 \\ LSA & 0.0289 & 0.0098 & 0.0146 & 0.0420 & 0.0549 & 0.0476 & 0.0350 & 0.0653 & 0.0456 \\ LDA & 0.0361 & 0.0132 & 0.0193 & 0.0220 & 0.0287 & 0.0249 & 0.0309 & 0.0589 & 0.0406 \\ \hline avg-GloVe & 0.1025 & 0.0369 & 0.0543 & 0.1564 & 0.2050 & 0.1774 & 0.1061 & 0.1996 & 0.1385 \\ avg-GloVe(full) & 0.1747 & 0.0634 & 0.0931 & 0.1821 & 0.2387 & 0.2066 & 0.1036 & 0.1959 & 0.1355 \\ avg-word2vec & 0.1609 & 0.0570 & 0.0841 & 0.2062 & 0.2702 &
0.2339 & 0.1376 & 0.2588 & 0.1796 \\ avg-word2vec(full) & 0.2551 & 0.0913 & 0.1345 & 0.2163 & 0.2834 & 0.2453& 0.1275 & 0.2419 & 0.1670 \\ WMD-GloVe & 0.1138 & 0.0413 & 0.0606 & 0.2081 & 0.2726 & 0.2360 & 0.1127 & 0.2120 & 0.1471 \\ WMD-GloVe(full) & 0.1393 & 0.0514 & 0.0751 & 0.2225 & 0.2915 & 0.2524 & 0.1185 & 0.2226 & 0.1547 \\ WMD-word2vec & 0.1637 & 0.0602 & 0.0880 & 0.2482 & 0.3253 & 0.2816 & 0.1356 & 0.2563 & 0.1774 \\ WMD-word2vec(full) & 0.2223 & 0.0827 & 0.1206 & 0.2527 & 0.3310 & 0.2866& 0.1376 & 0.2605 & 0.1801 \\ doc2vec & 0.1272 & 0.0462 & 0.0677 & 0.1554 & 0.2036 & 0.1763 & 0.1014 & 0.1914 & 0.1326 \\ Doc2VecC & 0.2825 & 0.1030 & 0.1510 & 0.2579 & 0.3380 & 0.2926 & 0.1635 & 0.3094 & 0.2140 \\ \hline avg-Skip-thoughts & 0.0578 & 0.0231 & 0.0330 & 0.0699 & 0.0918 & 0.0794 & 0.0723 & 0.1364 & 0.0945 \\ \hline CTPE-word2vec(full) & \bf{0.3066} & \bf{0.1124} & \bf{0.1645} & \bf{0.2808} & \bf{0.3680} & \bf{0.3185} & \bf{0.1889} & \bf{0.3609} & \bf{0.2480} \\ \hline \end{tabular} \caption{\label{P} Experimental results measured by P, R and F$_1$.} \end{table*} \begin{table*}[!h] \centering \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Methods}&\multicolumn{3}{c}{arXiv}&\multicolumn{3}{c}{DBLP}&\multicolumn{3}{c}{USPTO}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & MAP & NDCG & bpref & MAP & NDCG & bpref & MAP & NDCG & bpref \\ \hline doc2vec(ran) & 0.0007 & 0.0034 & 0.4767 & 0.0001 & 0.0007 & 0.4753 & 0.0001 & 0.0004 & 0.4753 \\ avg-word2vec(ran) & 0.0041 & 0.0162 & 0.4834 & 0.0221 & 0.0642 & 0.5051 & 0.0276 & 0.0671 & 0.5081 \\ \hline TF-IDF & 0.0773 & 0.1751 & 0.5666 & 0.1692 & 0.3257 & 0.6475 & 0.0745 & 0.1681 & 0.5624 \\ LSA & 0.0073 & 0.0312 & 0.4912 & 0.0188 & 0.0579 & 0.5039 & 0.0189 & 0.0487 & 0.4994 \\ LDA & 0.0093 & 0.0367 & 0.4944 & 0.0072 & 0.0267 & 0.4890 & 0.0141 & 0.0401 & 0.4954 \\ \hline avg-GloVe & 0.0425 & 0.1173 & 0.5370 & 0.0917 & 0.2083 & 0.5819 & 0.0590 & 0.1404 & 0.5480 \\ avg-GloVe(full) & 0.0875 & 0.1983 & 0.5786 &
0.1084 & 0.2368 & 0.5974 & 0.0596 & 0.1383 & 0.5464 \\ avg-word2vec & 0.0740 & 0.1779 & 0.5686 & 0.1280 & 0.2641 & 0.6140 & 0.0796 & 0.1764 & 0.5672 \\ avg-word2vec(full) & 0.1492 & 0.2810 & 0.6241 & 0.1367 & 0.2786 & 0.6211 & 0.0710 & 0.1629 & 0.5607 \\ WMD-GloVe & 0.0527 & 0.1393 & 0.5466 & 0.1406 & 0.2837 & 0.6216 & 0.0686 & 0.1527 & 0.5547 \\ WMD-GloVe(full) & 0.0660 & 0.1659 & 0.5603 & 0.1510 & 0.2993 & 0.6308 & 0.0719 & 0.1596 & 0.5577 \\ WMD-word2vec & 0.0797 & 0.1880 & 0.5736 & 0.1714 & 0.3280 & 0.6474 & 0.0826 & 0.1792 & 0.5693 \\ WMD-word2vec(full) & 0.1291 & 0.2556 & 0.6095 & 0.1733 & 0.3311 & 0.6490 & 0.0837 & 0.1822 & 0.5707 \\ doc2vec & 0.0514 & 0.1420 & 0.5491 & 0.0857 & 0.2027 & 0.5790 & 0.0566 & 0.1350 & 0.5444 \\ Doc2VecC & 0.1727 & 0.3056 & 0.6358 & 0.1719 & 0.3239 & 0.6475 & 0.0964 & 0.2062 & 0.5841 \\ \hline avg-Skip-thoughts & 0.0230 & 0.0676 & 0.5104 & 0.0387 & 0.1044 & 0.5254 & 0.0407 & 0.1005 & 0.5265 \\ \hline CTPE-word2vec(full) & \bf{0.1893} & \bf{0.3233} & \bf{0.6474} & \bf{0.1846} & \bf{0.3476} & \bf{0.6595} & \bf{0.1146} & \bf{0.2381} & \bf{0.6020} \\ \hline \end{tabular} \caption{\label{MAP} Experimental results measured as MAP, NDCG and bpref.} \end{table*} \subsection{Comparison Methods} We compare our model with the following three types of unsupervised distributed document representation methods on retrieval and recommendation tasks. \textbf{Random embedding model.} We generate random embeddings (drawn from a uniform distribution) of documents and words as baseline methods, called doc2vec(ran) and word2vec(ran), respectively. The avg-word2vec(ran) baseline uses the average of the word embeddings as the document embedding. We use randomly generated embeddings to show the difficulty of the task. \textbf{Word-level methods.} Algebraic or probabilistic models: TF-IDF \cite{salton1988term}, LSA \cite{deerwester1990indexing} and LDA \cite{blei2003latent}.
Word embedding based models: GloVe \cite{pennington2014glove}, word2vec \cite{mikolov2013distributed}, doc2vec \cite{le2014distributed}, WMD \cite{kusner2015word} and Doc2VecC \cite{chen2017efficient}. Since the corpus of each dataset is a subset of a public data source, we train word2vec and GloVe both on the experimental dataset and on the full public data source, yielding avg-word2vec and avg-word2vec(full), and avg-GloVe and avg-GloVe(full). Deep language models: ELMo \cite{peters2018deep}, GPT \cite{radford2018improving}, GPT-2 \cite{radford2019language}, BERT \cite{devlin2019bert}, TransXL \cite{dai2019transformer}, XLM \cite{lample2019cross}, XLNet \cite{yang2019xlnet} and RoBERTa \cite{liu2019roberta}. We adopt the pre-trained models without fine-tuning, since document representation learning is unsupervised here, and use them to verify the effectiveness of CTPE on different embedding models. \textbf{Sentence-level methods.} Sentence embedding based model: Skip-thoughts \cite{kiros2015skip}. The average of the sentence embeddings in a document is used as the document embedding, called avg-Skip-thoughts. \subsection{Metrics and Setup} We use each method to retrieve the $topN$=20 results for the three datasets and compare the result of each method with the labels in the datasets. We use precision (P), recall (R), and F1 score (F$_1$) as evaluation metrics. We also employ several popular information retrieval measures \citep{buttcher2016information}, including mean average precision (MAP), normalized discounted cumulative gain (NDCG), and bpref \citep{buckley2004retrieval}. These are popular measures for information retrieval \cite{zhang2020selective} and recommendation \cite{cai2018generative}. The parameters and setup of the comparison methods are described in detail below. \textbf{random embedding.} This is a baseline that assigns random embeddings to text. We generate random embeddings of documents and words, called doc2vec(ran) and word2vec(ran), respectively.
The embedding dimension of both methods is set to 100. \textbf{TF-IDF\footnote{https://scikit-learn.org/stable/modules/generated/ sklearn.feature\_extraction.\\text.TfidfVectorizer.html}.} The distributed representation of the document consists of the TF-IDF weight of each word in the document. The dimension of the representation is the number of unique words in the dataset, and the cosine distance is used to calculate the similarity. \textbf{LSA\footnote{https://radimrehurek.com/gensim/models/lsimodel.html}.} We set the number of topics and iterations to 100. Other parameters are consistent with gensim's default settings. \textbf{LDA\footnote{https://radimrehurek.com/gensim/models/ldamodel.html}.} We set the number of topics and iterations to 100. Other parameters are consistent with gensim's default settings. \textbf{word2vec\footnote{https://code.google.com/archive/p/word2vec}.} We use the average word embedding of word2vec as the document embedding, and the dimension is set to 100. The window size is set to 10, the maximum number of iterations is set to 100, and the strategy is set to skip-gram. \textbf{GloVe\footnote{https://github.com/stanfordnlp/GloVe}.} We use the average word embedding of GloVe as the document embedding, and the dimension is set to 100. Since the experimental dataset is a subset of all the data, we train the word embeddings on the experimental dataset and on all the data, called avg-GloVe and avg-GloVe(full). The window size is set to 10 and the maximum number of iterations is set to 100. \textbf{doc2vec\footnote{https://radimrehurek.com/gensim/models/doc2vec.html}.} The dimension is set to 100. The window size is set to 10. The number of iterations is set to 50. The strategy is set to PV-DM. \textbf{WMD\footnote{https://github.com/src-d/wmd-relax}.} WMD is a distance measure based on word embeddings. We evaluate WMD with GloVe and word2vec embeddings, yielding WMD-word2vec, WMD-word2vec(full), WMD-GloVe, and WMD-GloVe(full).
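As a minimal sketch of how the averaged-embedding baselines above score a candidate document (the two-dimensional toy vectors and the function names are illustrative, not the paper's implementation):

```python
import math

def avg_embedding(tokens, word_vectors):
    """avg-word2vec / avg-GloVe style: average the vectors of in-vocabulary tokens."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity, used by all comparison methods in the paper."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

# toy 2-d "word vectors" (illustrative only)
wv = {"graph": [1.0, 0.0], "neural": [0.0, 1.0], "network": [1.0, 1.0]}
doc_a = avg_embedding(["graph", "neural"], wv)    # -> [0.5, 0.5]
doc_b = avg_embedding(["neural", "network"], wv)  # -> [0.5, 1.0]
print(round(cosine(doc_a, doc_b), 4))             # -> 0.9487
```

Ranking the 20,000 candidates then amounts to sorting them by this cosine score against the test document.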
\textbf{Doc2VecC\footnote{https://github.com/mchen24/iclr2017}.} Doc2VecC is based on word2vec; the settings of the word2vec part are consistent with the word2vec model above. The sentence sample rate is set to 0.1. \textbf{Skip-thoughts\footnote{https://github.com/ryankiros/skip-thoughts\#getting-started}.} Skip-thoughts is a sentence embedding model. We use the generic model trained on a much bigger book corpus to encode the sentences. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each sentence. The EOS token is used. The average of the sentence embeddings in the document is used as the embedding of the document. \textbf{ELMo, GPT, GPT-2, BERT, Transformer XL, XLM, XLNet, RoBERTa.} The distributed document representation is obtained by average pooling of the token embeddings (for BERT, excluding [CLS] and [SEP]). The pre-trained models used in this paper are Original (5.5B)\footnote{https://allennlp.org/elmo}, openai-gpt\footnote{https://huggingface.co/pytorch-transformers/pretrained\_models.html\label{web}}, gpt2-medium\textsuperscript{\ref{web}}, bert-large-uncased-whole-word-masking\textsuperscript{\ref{web}}, transfo-xl-wt103\textsuperscript{\ref{web}}, xlm-mlm-en-2048\textsuperscript{\ref{web}}, xlnet-large-cased\textsuperscript{\ref{web}}, and roberta.large\footnote{https://github.com/pytorch/fairseq/tree/master/examples/\\roberta}, respectively. Because these deep language models produce context-dependent token embeddings, there are two strategies for training text pairs: the former part text and the latter part text are encoded separately or together. We take the maximum of the performance of these two strategies. We use TensorFlow to implement our model. We set the batch size to 200. The parameters of CTPE include $l$, $l_{max}$, $n_s$, $n_f$, $ln$, $M$ and $dim$, which are empirically set to 200, 200, 1024, \{1,2,3,5\}, 0.001, 0.1, and 100, respectively.
If the embedding model is a sentence embedding model, $l$ is set to 20. The word embedding dimension of all models is set to 100, and the other parameters keep their default recommended values. All comparison methods use cosine similarity. In order to apply to various unsupervised tasks, there is no validation set and the parameters are kept consistent across tasks. Our method and all comparison methods use all documents in the dataset for unsupervised training and obtain all document embeddings. Test documents with ground truth are used to measure performance. Our model is trained on an RTX 2080 Ti, and the training time for each result is limited to one day. If the loss does not decrease for more than 12 hours, training is stopped early. \subsection{Experimental Results} \label{sec:results} The CTPE model uses word2vec(full) as the embedding model. We discuss other embedding models in Section \ref{sec:ablation}. For the arXiv and DBLP datasets, $pos$ is set between the title and the abstract. For the USPTO dataset, $pos$ is set between the abstract and the claims. We discuss other values of $pos$ in Section \ref{sec:segmentation}. Table \ref{P} and Table \ref{MAP} show the results of 17 methods on the 3 datasets. From these results, we have the following observations and analysis. CTPE has the best performance. On the arXiv dataset, our model outperforms Doc2VecC by 2.41\% in precision and 1.66\% in MAP. On the DBLP dataset, our model outperforms Doc2VecC by 2.29\% in precision and 1.27\% in MAP. On the USPTO dataset, our model outperforms Doc2VecC by 2.54\% in precision and 1.82\% in MAP. The CTPE model maintains the coherence of the document as a whole, so it achieves the best performance; normal documents are coherent, otherwise they would not make sense.
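The top-$N$ evaluation described in the setup can be sketched per test document as follows (the document ids are made up for illustration; per-dataset scores are then averaged over all test documents):

```python
def precision_recall_f1(retrieved, relevant):
    """P, R and F1 for one test document's top-N retrieved list
    against its ground-truth labels."""
    hits = len(set(retrieved) & set(relevant))
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# toy example: 5 retrieved candidates, 4 ground-truth labels, 2 hits
p, r, f1 = precision_recall_f1(["d1", "d2", "d3", "d4", "d5"],
                               ["d2", "d5", "d9", "d10"])
print(p, r, round(f1, 4))  # 0.4 0.5 0.4444
```

In the paper's setting `retrieved` would be the $topN$=20 candidates ranked by cosine similarity.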
Some further results and facts deserve explanation: (1) avg-word2vec(ran) performs better than doc2vec(ran) because doc2vec(ran) is completely random, whereas avg-word2vec(ran) can at least uniquely identify a word. (2) The arXiv dataset has lower recall than the other datasets: since it has more labels (see Table \ref{datasets}), it is more difficult to find all of them. (3) The word2vec-based approaches are better than the GloVe-based approaches on all three datasets. (4) The methods that use all the data (full) are mostly better than those using only the experimental dataset, which confirms the impact of data quality. (5) WMD is better than average word embedding in most experiments, but not all (e.g., WMD-word2vec(full) and WMD-GloVe(full) on arXiv), because WMD, being based on word embeddings, does not learn new information from the document. \begin{table*}[!h] \centering \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Methods}&\multicolumn{3}{c}{arXiv}&\multicolumn{3}{c}{DBLP}&\multicolumn{3}{c}{USPTO}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & avg & CTPE$_0$ & CTPE & avg & CTPE$_0$ & CTPE & avg & CTPE$_0$ & CTPE \\ \hline Skip-thoughts & 0.0330 & 0.0007 & \bf{0.0623} & 0.0794 & 0.0015 & \bf{0.1302} & 0.0945 & 0.0067 & \bf{0.1142} \\ \hline word2vec(ran) & 0.0071 & 0.0034 & \bf{0.0637} & 0.0469 & 0.0022 & \bf{0.2277} & 0.0594 & 0.0549 & \bf{0.1378} \\ GloVe & 0.0543 & 0.0137 & \bf{0.0752} & 0.1774 & 0.0212 & \bf{0.2867} & 0.1385 & 0.1581 & \bf{0.2220} \\ GloVe(full) & 0.0931 & 0.0209 & \bf{0.1230} & 0.2066 & 0.0442 & \bf{0.2941} & 0.1355 & 0.1617 & \bf{0.2215} \\ word2vec & 0.0841 & 0.0555 & \bf{0.1098} & 0.2339 & 0.1390 & \bf{0.3002} & 0.1796 & 0.2182 & \bf{0.2381} \\ word2vec(full) & 0.1345 & 0.1093 & \bf{0.1645} & 0.2453 & 0.1607 & \bf{0.3185} & 0.1670 & 0.2095 & \bf{0.2480} \\ \hline ELMo & 0.0915 & 0.0356 & \bf{0.1099} & 0.1346 & 0.0230 & \bf{0.2428} & 0.1349 & 0.1198 & \bf{0.1965} \\ GPT & 0.0676 &
0.0080 & \bf{0.0730} & 0.1910 & 0.0288 & \bf{0.1962} & 0.1651 & 0.0987 & \bf{0.1780} \\ GPT-2 & 0.0423 & 0.0011 & \bf{0.0738} & 0.0638 & 0.0007 & \bf{0.2027} & 0.0531 & 0.0021 & \bf{0.1537} \\ BERT & 0.1139 & 0.0267 & \bf{0.1280} & 0.1824 & 0.0095 & \bf{0.2639} & 0.1628 & 0.0991 & \bf{0.2130} \\ TransXL & 0.0190 & 0.0105 & \bf{0.0717} & 0.1489 & 0.0137 & \bf{0.1771} & 0.1626 & 0.1284 & \bf{0.1641} \\ XLM & 0.1226 & 0.0315 & \bf{0.1292} & 0.2107 & 0.0258 & \bf{0.2816} & 0.1753 & 0.1163 & \bf{0.2200} \\ XLNet & 0.0792 & 0.0060 & \bf{0.1083} & 0.2107 & 0.0121 & \bf{0.2412} & 0.1753 & 0.0097 & \bf{0.2137} \\ RoBERTa & 0.0616 & 0.0020 & \bf{0.0684} & 0.1493 & 0.0038 & \bf{0.2255} & 0.1190 & 0.0454 & \bf{0.1624} \\ \hline \end{tabular} \caption{\label{avg-CTPE} Experimental results based on different embedding models on F$_1$.} \end{table*} \subsection{Ablation Analysis} \label{sec:ablation} We conduct an ablation analysis on CTPE to examine the effectiveness of each component. We conduct experiments on three components: (1) We use different embedding models in the \textbf{training model}. (2) A model called CTPE$_0$ that does not use the \textbf{loss function} to train. (3) A \textbf{distributed document representation} method (avg) that uses average embedding instead of the coupled text pair representation. Table \ref{avg-CTPE} shows the impact of 14 embedding models on the performance of avg, CTPE$_0$ and CTPE on the 3 datasets using the F$_1$ metric. We have the following observations and analysis: (1) CTPE effectively improves the performance of various types of embedding models (both word and sentence embedding models). CTPE outperforms the avg model by an average of 2.6\%, 7.9\%, and 5.4\% on arXiv, DBLP, and USPTO, because the avg model does not consider coherence. (2) CTPE performs significantly better than CTPE$_0$, which proves the effectiveness of the loss function: it guides the model to determine which documents are more coherent.
(3) Our distributed document representation cannot be used directly with general embedding models without training: more than 90\% of the avg variants perform better than the CTPE$_0$ variants, because our similarity calculation depends on coherence. (4) CTPE-Skip-thoughts performs better than avg-Skip-thoughts, which suggests that CTPE is suitable for very long documents. CTPE can build on embedding models of different granularity, such as sentence and paragraph embeddings, which is several orders of magnitude less complex than a CTPE that takes all words as input. The word embedding models outperform the deep language models because the latter are evaluated on unsupervised document representation without fine-tuning. \begin{table*}[!h] \centering \begin{tabular}{ccccccccccc} \hline \multirow{2}{*}{$pos$}&\multicolumn{3}{c}{arXiv}&\multicolumn{3}{c}{DBLP}&\multicolumn{3}{c}{USPTO}&\multirow{2}{*}{avg}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & P & R & F$_1$ & P & R & F$_1$ & P & R & F$_1$ \\ \hline 20\% & 0.3265 & 0.1211 & 0.1767 & \textbf{0.2925} & \textbf{0.3835} & \textbf{0.3319} & 0.1855 & 0.3545 & 0.2435 & \bf{0.2684} \\ 40\% & \textbf{0.3283} & \textbf{0.1219} & \textbf{0.1778} & 0.2786 & 0.3651 & 0.3160 & 0.1821 & 0.3485 & 0.2392 & 0.2620\\ 60\% & 0.3149 & 0.1155 & 0.1690 & 0.2652 & 0.3477 & 0.3009 & 0.1628 & 0.3103 & 0.2136 & 0.2444\\ 80\% & 0.2934 & 0.1054 & 0.1551 & 0.2319 & 0.3040 & 0.2631 & 0.1526 & 0.2893 & 0.1998 & 0.2216\\ m & 0.3066 & 0.1124 & 0.1645 & 0.2808 & 0.3680 & 0.3185 & \textbf{0.1889} & \textbf{0.3609} & \textbf{0.2480} & 0.2610\\ \hline \end{tabular} \caption{\label{split} Experimental results of CTPE on different segmentation positions. The embedding model is based on word2vec(full).
Here m denotes the meaningful segmentation and avg the average value of the row.} \end{table*} \begin{table*}[!h] \centering \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Methods}&\multicolumn{3}{c}{arXiv}&\multicolumn{3}{c}{DBLP}&\multicolumn{3}{c}{USPTO}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-7}\cmidrule(lr){8-10} & P & R & epochs & P & R & epochs & P & R & epochs \\ \hline CTPE$_T$ & 0.2944 & 0.1074 & \textbf{3} & 0.2716 & 0.3561 & \textbf{78} & \textbf{0.1903} & \textbf{0.3628} & \textbf{9} \\ CTPE & \textbf{0.3066} & \textbf{0.1124} & 48 & \textbf{0.2808} & \textbf{0.3680} & 206 & 0.1889 & 0.3609 & 133 \\ \hline \end{tabular} \caption{\label{sampling} Experimental results of CTPE with different sampling methods. The embedding model is based on word2vec(full). Metrics include precision, recall, and the number of training epochs.} \end{table*} \subsection{Segmentation Position Analysis} \label{sec:segmentation} The setting of the parameter $pos$ in Section \ref{sec:results} is called the meaningful segmentation: it divides the document into chapters or paragraphs, retaining complete paragraphs and sentences. We need to analyze how different segmentation positions affect the maintenance of coherence, since not all documents are suited to this split, for example documents without a meaningful segmentation. We therefore examine the impact of different $pos$ on performance. We divide the document into five equal parts and evaluate performance at four different $pos$ values (20\%, 40\%, 60\%, 80\%). Our segmentation is at the token level. The meaningful segmentation positions (averaged over all samples) of arXiv, DBLP, and USPTO are 4.7\%, 5.1\%, and 40.2\%, respectively. Table \ref{split} shows the experimental results of the five positions on all datasets. We have the following observations and analysis.
Experiments show that different segmentation positions do affect the maintenance of coherence, because not all adjacent parts of a document are strongly coherent. Based on these results, we can draw three suggestions: (1) If the document has a meaningful segmentation, using it achieves excellent performance: the meaningful segmentation outperforms all comparison methods, even though it is not always the best position. (2) If the document has no meaningful segmentation, a segmentation position of 20\%--40\% also achieves excellent performance, since these $pos$ values perform best. (3) Finding the best segmentation position automatically is a meaningful direction for future work. \subsection{Sampling with Imbalanced Data} In the process of text pair construction, negative samples are randomly combined from the former part text and the latter part text of different documents, which makes the number of negative samples far larger than the number of positive samples: if the number of positive samples is $n$, then the number of negative samples is $n(n-1)$. Plain negative sampling may ignore important negative samples and takes longer. We propose TF-IDF sampling to analyze the efficiency and performance of different sampling methods. In negative sampling, for document $d_i$, we choose a document $d_j=random(\mathbf{D})$. In TF-IDF sampling, for document $d_i$, we select $d_j=random(random(T_{100}(d_i,\mathbf{D})),random(\mathbf{D}))$, where $T_{100}(d_i,\mathbf{D})$ denotes the 100 documents in $\mathbf{D}$ that are most similar (by TF-IDF similarity) to $d_i$. This sampling method is based on three considerations: (1) We introduce $T_{100}(d_i,\mathbf{D})$ to reduce the probability of ignoring important negative samples. (2) Training with only 100 documents instead of all documents makes overfitting easier. (3) The documents retrieved with TF-IDF cannot completely represent the truly similar documents.
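A minimal sketch of this sampling rule, under the assumption that the outer $random(\cdot,\cdot)$ picks each of its two arguments with equal probability (the paper does not state the mixing probability, and the names below are illustrative):

```python
import random

def sample_negative(d_i, docs, top100, p_hard=0.5):
    """TF-IDF negative sampling: with probability p_hard draw from the
    100 most TF-IDF-similar documents to d_i, otherwise from the whole
    corpus. The 50/50 mix (p_hard) is an illustrative assumption."""
    pool = top100.get(d_i, docs) if random.random() < p_hard else docs
    d_j = random.choice(pool)
    while d_j == d_i:              # a negative pair must use a different document
        d_j = random.choice(docs)
    return d_j

docs = ["doc%d" % k for k in range(1000)]
top100 = {"doc0": docs[1:101]}     # stand-in for T_100(doc0, D)
random.seed(0)
neg = sample_negative("doc0", docs, top100)
print(neg in docs and neg != "doc0")  # True
```

The negative pair is then formed by coupling the former part of $d_i$ with the latter part of the sampled $d_j$.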
If $random(\mathbf{D})$ is not used, the model will overfit the results of TF-IDF. Table \ref{sampling} shows the experimental results of CTPE and CTPE$_T$ (TF-IDF sampling) on all datasets. We draw one conclusion from these results: CTPE performs better, but CTPE$_T$ is faster. TF-IDF sampling allows the model to better distinguish similar documents, so it needs fewer training epochs, but it overfits the samples drawn by TF-IDF more easily. If training speed matters more, CTPE$_T$ is more suitable; otherwise CTPE is. \section{Conclusions} In this paper, we propose a coupled text pair embedding (CTPE) model for distributed document representation, a novel architecture that maintains the coherence of a scientific document for better document representation. We divide the scientific document into two parts and obtain a distributed representation of the document through a CNN and an embedding model. We use the coupling relationship between the two parts of the document to train the model. In the future, we will focus on improving the model structure and on applications to supervised tasks (e.g., supervised information retrieval, text classification, or clickbait and fake news identification). We will try more elaborate document encodings (e.g., splitting documents into more than two parts). Two further interesting directions are a precise measure of coherence and the automatic selection of $pos$. For natural language processing tasks with more supervisory information, we will perform supervised fine-tuning on specific tasks based on the pre-trained CTPE model. In addition, the CTPE model is a good choice for tasks that lack supervisory information.
\begin{acks} This work was partially supported by the National High Technology Research and Development Program (Grant \# 2017YFB1401903), the National Natural Science Foundation of China (Grants \# 61876001, \# 61602003 and \# 61673020), the Natural Science Foundation of Anhui Province (Grant \# 1708085QF156), and the Recruitment Project of Anhui University for Academic and Technology Leader. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The theory of compensated compactness initiated and developed by L. Tartar \cite{Ta} and F. Murat \cite{M} has been widely studied and extended to various settings. The famous paper of Coifman, Lions, Meyer and Semmes (\cite{CLMS}) gives an overview of this theory in the context of Hardy spaces in the Euclidean space ${\mathbb R}^{n}$ $(n\geq 1)$. They prove, in particular, that for $\frac{n}{n+1}<p,q<\infty$ such that $\frac{1}{p}+\frac{1}{q}<1+\frac{1}{n}$, if $F$ is a vector field belonging to the Hardy space ${\mathcal H}^{p}({\mathbb R}^{n},{\mathbb R}^{n})$ with ${\mbox{\small\rm curl}}\, F=0$ and $G$ is a vector field belonging to ${\mathcal H}^{q}({\mathbb R}^{n},{\mathbb R}^{n})$ with ${\mbox{\small\rm div}}\, G=0$, then the scalar product $F\cdot G$ can be given a meaning as a distribution of ${\mathcal H}^{r}({\mathbb R}^{n})$ with \begin{equation} \left\|F\cdot G\right\|_{{\mathcal H}^{r}({\mathbb R}^{n})}\leq C\left\|F\right\|_{{\mathcal H}^{p}({\mathbb R}^{n},{\mathbb R}^{n})}\left\|G\right\|_{{\mathcal H}^{q}({\mathbb R}^{n},{\mathbb R}^{n})} \end{equation} where $\frac{1}{r}=\frac{1}{p}+\frac{1}{q}$. The endpoint $\frac{n}{n+1}$ is related to cancellation properties of Hardy spaces: bounded functions with compact support and zero mean do not belong to ${\mathcal H}^{\frac n {n+1}}$ unless their moments of order one are zero, a property that the scalar product $F\cdot G$ does not have in general. We shall consider here the endpoint $q=\infty$. Let us first describe what is known. Auscher, Russ and Tchamitchian remarked in \cite{ART} that, for $p=1$, one has, under the same assumptions of being respectively curl free and divergence free, \begin{equation} \left\|F\cdot G\right\|_{{\mathcal H}^{1}({\mathbb R}^{n})}\leq C\left\|F\right\|_{{\mathcal H}^{1}({\mathbb R}^{n},{\mathbb R}^{n})}\left\|G\right\|_{L^{\infty}({\mathbb R}^{n},{\mathbb R}^{n})}.
\end{equation} In fact, it is easy to see that the proof given in \cite{CLMS} is also valid for $q=\infty$. They give in \cite{ART} another proof, which has its own interest and has helped us in our generalization to $B\! M\! O$. Remark that the scalar product does not make sense in general when $F$ is in ${\mathcal H}^{p}({\mathbb R}^{n},{\mathbb R}^{n})$ for $p<1$, so that one can only write a priori estimates, such as the following one. \begin{thm}\label{main01} Let $\frac{n}{n+1}<p\leq1$. If $F\in {\mathcal H}^{p}({\mathbb R}^{n},{\mathbb R}^{n})$ is integrable and such that ${\mbox{\small\rm curl}}\,\,F=0$ and if $G\in L^{\infty}({\mathbb R}^{n},{\mathbb R}^{n})$ with ${\mbox{\small\rm div}}\, G=0$, then there exists a constant $C$, independent of $F$ and $G$, such that \begin{equation}\label{p-div-curl} \left\| F\cdot G\right\|_{{\mathcal H}^{p}({\mathbb R}^{n})}\leq C \left\|F\right\|_{{\mathcal H}^{p}({\mathbb R}^{n},{\mathbb R}^{n})}\left\|G\right\|_{L^{\infty}({\mathbb R}^{n},{\mathbb R}^{n})}. \end{equation} \end{thm} This a priori estimate allows us to give a meaning to $F\cdot G$ in the distribution sense. There is no hope of giving such a meaning to general products $fg$, with $f\in {\mathcal H}^p({\mathbb R}^n)$ and $g\in L^\infty$. It is proved in \cite{BF} that this is possible when $g$ is in the inhomogeneous Lipschitz space $\Lambda_\alpha({\mathbb R}^n)$, with $\alpha=n(\frac 1 p-1)$. Moreover, one has \begin{equation} fg\in L^{1}({\mathbb R}^{n})+{\mathcal H}^{p}({\mathbb R}^{n}) \end{equation} for $f$ in ${\mathcal H}^{p}$ (with $p<1$) and $g$ in $\Lambda_{n\left(\frac{1}{p}-1\right)}$. So, cancellation properties of the scalar product of curl free and divergence free vector fields allow us to get rid of the integrable part and to weaken the assumptions from Lipschitz to bounded. We will show in Section 4 that this generalizes to wedge products of closed forms.
Remark that end-point estimates would imply all the other ones by interpolation if we could interpolate between ${\mathcal H}^p$ spaces of closed forms. Indeed, for instance, the generalization to closed forms allows us to have \eqref{p-div-curl} when the assumptions on the two factors are exchanged: $F$ is bounded and $G$ is in $ {\mathcal H}^{p}({\mathbb R}^{n},{\mathbb R}^{n})$. Unfortunately, one does not know whether one can interpolate: while there is a bounded projection onto closed forms in ${\mathcal H}^p$ for $p<\infty$, this is not the case for $p=\infty$. \bigskip The core of this paper concerns ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemmas (and their extensions to the wedge product of closed forms) when the boundedness assumption is weakened into an assumption of type $B\! M\! O$. Products of functions in ${\mathcal H}^1$ and $B\! M\! O$ have been considered by Bonami, Iwaniec, Jones and Zinsmeister in \cite{BIJZ}. Such products make sense as distributions, and can be written as the sum of an integrable function and a function in a weighted Hardy-Orlicz space. In order to have a ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemma in this context, we make some restriction on one of the two factors. Recall that ${\mathfrak {bmo}}:={\mathfrak {bmo}}({\mathbb R}^{n})$ is the set of locally integrable functions $b$ satisfying \begin{equation}\label{bmo} \sup_{|B|\leq 1}\left( \frac 1 {|B|}\int_B |b-b_B|dx\right)<\infty\ \ \ \ \mbox{\rm and } \sup_{|B|\geq 1}\left( \frac 1 {|B|}\int_B |b|dx\right)<\infty \end{equation} with $B$ varying among all balls of ${\mathbb R}^n$ and $|B|$ denoting the measure of the ball $B$. The sum of the two finite quantities will be denoted by $\left\|b\right\|_{{\mathfrak {bmo}}}$. Then ${\mathfrak {bmo}}$ is well-known to be the dual space of the localized version of the Hardy space, which we denote $\mathfrak h^{1}({\mathbb R}^{n})$, see \cite{G}.
To be more precise, for $f\in{\mathcal H}^1({\mathbb R}^n)$ and $g\in {\mathfrak {bmo}}$, we define the product (in the distribution sense) $f g$ as the distribution whose action on the Schwartz function $\varphi\in\mathcal S({\mathbb R}^{n})$ is given by \begin{equation} \left\langle f g,\varphi\right\rangle:=\left\langle \varphi g,f\right\rangle, \end{equation} where the second bracket stands for the duality bracket between ${\mathcal H}^1$ and $B\! M\! O$. It is then proved in \cite{BIJZ} that \begin{equation}\label{nocancel} f g\in L^{1}({\mathbb R}^{n})+{\mathcal H}^{\Phi}_\omega({\mathbb R}^{n}). \end{equation} Here $\mathcal H^\Phi_\omega({\mathbb R}^n)$ is the weighted Hardy-Orlicz space related to the Orlicz function \begin{equation}\label{orl-log} \Phi(t):=\frac t{\log (e+t)} \end{equation} and with weight $\omega(x):=(\log (e+|x|))^{-1}$. This extends immediately to vector-valued functions. In the next theorem, we prove that there is no $L^1$ term in the context of the ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemma. \begin{thm}\label{bijz1} Let $F\in {\mathcal H}^{1}({\mathbb R}^{n},{\mathbb R}^{n})$ with ${{\mbox{\small\rm curl}}\,}F=0$ and $G\in {\mathfrak {bmo}} ({\mathbb R}^{n},{\mathbb R}^{n})$ with ${\mbox{\small\rm div}}\, G=0$. Then there exists some constant $C$, independent of $F$ and $G$, such that \begin{equation} \left\|F\cdot G\right\|_{{\mathcal H}^{\Phi}_\omega({\mathbb R}^{n})}\leq C\left\|F\right\|_{{\mathcal H}^{1}({\mathbb R}^{n},{\mathbb R}^{n})}\left\|G\right\|_{{\mathfrak {bmo}}({\mathbb R}^{n},{\mathbb R}^{n})}. \end{equation} \end{thm} The theorem is also valid for $G\in B\! M\! O$, but for local Hardy spaces $\mathfrak h^1$ and $\mathfrak h^\Phi_\omega$ instead of ${\mathcal H}^1$ and ${\mathcal H}^\Phi_\omega$. We do not know whether it is valid without this restriction on $F$ or $F\cdot G$. There is an ${\mathcal H}^p$ version of this theorem, for $p<1$, which we also give.
Note that ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemmas have also been developed in the context of local Hardy spaces by Dafni \cite{D}. These results can be compared to what can be said about products of holomorphic functions, for which analogous estimates are elementary and have a weak converse, see \cite{BG}. \medskip To simplify notation, we restricted ourselves to vector fields in the introduction, but below we shall state these results in the context of the wedge product of closed differential forms. Indeed, recall that a divergence free vector field can be identified with a closed $(n-1)$-form, while a ${\mbox{\small\rm curl}}\,$ free vector field identifies with a closed $1$-form, their scalar product being given by the wedge product of these two forms. The usual ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemma has been extended to wedge products of closed differential forms by Lou and McIntosh \cite{LM1, LM2} when $\frac 1p+\frac 1q=1$, with both $p$ and $q$ finite. We will do it in general. \medskip Our paper is organized as follows. We recall basic results about classical Hardy spaces in the second section. We define an appropriate grand maximal function, introduced in \cite{ART}, to characterize ${\mathcal H}^{p}({\mathbb R}^{n})$. In Section 3, after recalling some basic facts about differential forms, we give the analogue of the previous grand maximal function characterization in this context. In Section 4 we give the whole range of the ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemma for closed forms. Section 5 is devoted to assumptions of type $B\! M\! O$. \medskip Throughout this paper, $C$ denotes constants that are independent of the functions involved, with values which may differ from line to line. For two quantities $A$ and $B$, the notation $A\sim B$ means that there exist two positive constants $C_{1}$ and $C_{2}$ such that $C_{1}A\leq B\leq C_{2}A$.
If $E$ is a measurable subset of ${\mathbb R}^{n}$, then $\left|E\right|$ stands for its Lebesgue measure. \section{Some basic facts about classical Hardy spaces} We fix $\varphi\in{\mathcal S}({\mathbb R}^{n})$ having integral $1$ and support in the unit ball $\mathbb B=\left\{x\in{\mathbb R}^{n}:|x|<1\right\}$. For $f\in\mathcal S' ({\mathbb R}^{n})$ and $x\in{\mathbb R}^{n}$, we put \begin{equation} \left(f\ast\varphi\right)(x):=\left\langle f,\varphi(x-\cdot)\right\rangle, \end{equation} and define the maximal function $\mathcal M f=\mathcal M_{\varphi}f$ by \begin{equation}\label{maximal} \mathcal M f(x):=\sup_{t>0}\left|\left(f\ast\varphi_{t}\right)(x)\right|, \end{equation} where $\varphi_{t}(x)=t^{-n}\varphi\left(t^{-1}x\right)$. For $p>0$, a tempered distribution $f$ is said to belong to the Hardy space ${\mathcal H}^{p}({\mathbb R}^{n})$ if \begin{equation}\label{def} \|f\|_{{\mathcal H}^{p}({\mathbb R}^{n})}:=\left (\int_{{\mathbb R}^n}\mathcal M_{\varphi}f(x)^{p}dx\right)^{\frac 1p}=\left\|\mathcal M_{\varphi}f\right\|_{L^{p}} \end{equation} is finite. It is well known that, up to equivalence of corresponding norms, the space ${\mathcal H}^p({\mathbb R}^n)$ does not depend on the choice of the function $\varphi$. So, in the sequel, we shall use the notation ${\mathcal M} f$ instead of ${\mathcal M}_{\varphi}f$. For $\frac{n}{n+1}<p\leq 1$, an ${\mathcal H}^{p}$-atom (related to the ball $B$) is a bounded function $a$ supported in $B$ and satisfying the following conditions \begin{equation} \left\|a\right\|_{L^{\infty}}\leq\left|B\right|^{-\frac{1}{p}}\text{ and }\int_{{\mathbb R}^{n}}a(x)dx=0. 
\end{equation} The atomic decomposition of ${\mathcal H}^{p}$ states that a tempered distribution $f$ belongs to ${\mathcal H}^{p}$ if and only if there exist a sequence $(a_{j})$ of ${\mathcal H}^{p}$-atoms and a sequence $(\lambda_{j})$ of scalars such that \begin{equation}\label{atomic} f=\sum^{\infty}_{j=1}\lambda_{j}a_{j}\quad\text{ and } \quad\sum^{\infty}_{j=1}\left|\lambda_{j}\right|^{p}<\infty, \end{equation} where the first sum is assumed to converge in the sense of distributions. Moreover, $f$ is the limit of the partial sums in ${\mathcal H}^{p}$, and $\left\|f\right\|_{{\mathcal H}^{p}}$ is equivalent to the infimum, taken over all such decompositions of $f$, of the quantities $\left(\sum^{\infty}_{j=1}\left|\lambda_{j}\right|^{p}\right)^{\frac{1}{p}}$. We refer to \cite{St} for background on Hardy spaces. For the purpose of our main results, we are going to define an appropriate grand maximal function, which induces on ${\mathcal H}^{p}$ a semi-norm equivalent to the previous one. Let $q>n$. For $x\in{\mathbb R}^{n}$, we denote by ${\mathcal F}^{q}_{x}$ the set of all $\psi\in W^{1,q}({\mathbb R}^{n})$ supported in some ball $B(x,r)$ centered at $x$ with radius $r>0$ which satisfy \begin{equation} \left\|\psi\right\|_{L^{q}({\mathbb R}^{n})}+r\left\|\nabla\psi\right\|_{L^{q}({\mathbb R}^{n})} \leq|B_{(x,r)}|^{-\frac{1}{q'}}, \end{equation} where $\frac{1}{q}+\frac{1}{q'}=1$. Here $W^{1,q}({\mathbb R}^{n})$ denotes the Sobolev space of functions in $L^q$ with derivatives in $L^q$. Since $q>n$, the Sobolev embedding theorem guarantees that the test functions are bounded, which allows us to give the following definition. For $f\in L^{1}_{loc}({\mathbb R}^{n})$ and $x\in{\mathbb R}^{n}$, put \begin{equation} \mathcal M_{q} f(x):=\sup_{\psi\in {\mathcal F}^{q}_{x}}|\int_{{\mathbb R}^{n}}f\psi|.\label{grand-maximal} \end{equation} The following lemma is classical, but we give its proof for completeness. 
\begin{lem}\label{classical-result1} Let $f$ be a locally integrable function on ${\mathbb R}^{n}$. \begin{enumerate} \item [(i)] There exists a constant $C$, not depending on $f$, such that \begin{equation} \mathcal Mf\leq C\mathcal M_{\infty}f.\label{point-wise1} \end{equation} \item [(ii)] For $\frac{n}{n+1}<p\leq1$ and $\frac 1q <\frac {n+1}n-\frac 1p$, \begin{equation} \left\|\mathcal M_{q} f\right\|_{L^{p}({\mathbb R}^{n})}\sim \left\|f\right\|_{{\mathcal H}^{p}({\mathbb R}^{n})}.\label{norm-equi1} \end{equation} \end{enumerate} \end{lem} \begin{proof} Let $f\in L^{1}_{loc}({\mathbb R}^{n})$. To prove (i), it is sufficient to see that, for $\varphi$ the test function used in the definition of the Hardy space, there exists some constant $c$ such that, for all $x\in{\mathbb R}^{n}$ and $t>0$, the function $\varphi_{x,t}(y):=c\varphi_{t}(x-y)$ belongs to ${\mathcal F}^{\infty}_{x}$. One can choose $c=\left(\left\|\varphi\right\|_{L^{\infty}({\mathbb R}^{n})}+ \left\|\nabla\varphi\right\|_{L^{\infty}({\mathbb R}^{n})}\right)^{-1}$. Let us now prove (ii). It is sufficient to consider $q<\infty$ and to prove the inequality \begin{equation} \left\|\mathcal M_{q} f\right\|_{L^{p}}\leq C \left\|f\right\|_{{\mathcal H}^{p}},\label{norm-control1} \end{equation} since \begin{equation} \mathcal M f\leq C\mathcal M_{\infty}f\leq C\mathcal M_{q}f. \end{equation} By sub-linearity of the maximal operator ${\mathcal M}_q$, it is sufficient to prove a uniform estimate for atoms, \begin{equation} \left\|\mathcal M_{q} a\right\|_{L^{p}}\leq C, \label{norm-control2} \end{equation} for some uniform constant $C$. Indeed, once we have this, we conclude for $f=\sum \lambda_j a_j$ that $$\left\|\mathcal M_{q} f\right\|_{L^{p}}\leq \left(\sum |\lambda_j|^p\|\mathcal M_{q} a_j\|_{L^{p}}^p\right)^{1/p}\leq C \left(\sum|\lambda_j|^p\right)^{1/p}.$$ So let us prove \eqref{norm-control2}. 
Without loss of generality, using invariance by translation and dilation, we may assume that $a$ is a function with zero mean, supported in the unit ball $\mathbb B$ centered at $0$, and bounded by $1$. We prove that there exists $\varepsilon>0$ depending on $q$ such that $$|\mathcal M_{q} a(x)|\leq C(1+|x|)^{-n-1+\varepsilon}.$$ By assumption on $\psi\in{\mathcal F}^q_x$, using H\"older's inequality, we find that $\|\psi\|_1\leq 1$. So $\mathcal M_{q} a$ is bounded by $1$, and it is sufficient to prove that, for $\psi\in{\mathcal F}^q_x$, $$|\int_{\mathbb{B}}\psi a| \leq C|x|^{-n-1+\varepsilon}$$ for $|x|\geq 2$. Moreover, in this range of $x$, we may restrict to functions $\psi$ supported in $B(x,r)$ with $r>|x|/2$, so that $\|\nabla \psi\|_q\leq C|x|^{-\frac n {q'}-1}$. Since $a$ has mean zero, $$|\int_{\mathbb{B}}\psi a|=|\int_{\mathbb{B}}(\psi-\psi_{\mathbb{B}}) a|\leq C\|\nabla \psi\|_q.$$ We have used the Poincar\'e inequality for the last inequality. The condition on $q$ is required for $|x|^{-p(\frac n{q'}+1)}$ to be integrable at infinity. \end{proof} This discussion extends to local Hardy spaces, which we define now. We first define the truncated version of the maximal function, namely \begin{equation}\label{maximal-tr} {\mathcal M}_{\varphi}^{(1)}f(x):=\sup_{0<t<1}\left|\left(f\ast\varphi_{t}\right)(x)\right|. \end{equation} A tempered distribution $f$ is said to belong to the space $\mathfrak h^{p}({\mathbb R}^{n})$ if \begin{equation}\label{def-h} \|f\|_{\mathfrak h^{p}({\mathbb R}^{n})}:=\left (\int_{{\mathbb R}^n}{\mathcal M}_{\varphi}^{(1)}f(x)^{p}dx\right)^{\frac 1p}<\infty. \end{equation} The atomic decomposition holds for local Hardy spaces, with only the atoms associated to balls of radius less than $1$ required to satisfy the moment condition, see \cite{G}. The previous lemma remains valid in the context of $\mathfrak h^{p}({\mathbb R}^{n})$, with ${\mathcal M}^{(1)}$ in place of ${\mathcal M}$. 
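For the reader's convenience, let us make the condition on $q$ in Lemma \ref{classical-result1} explicit from the last step of the proof: with the decay $|\mathcal M_{q} a(x)|\leq C(1+|x|)^{-\frac n{q'}-1}$ obtained above, one has
\[
\int_{|x|\geq 2}|x|^{-p\left(\frac n{q'}+1\right)}dx<\infty
\iff p\left(\frac n{q'}+1\right)>n
\iff \frac 1q<\frac{n+1}n-\frac 1p,
\]
the last equivalence following from the identity $\frac n{q'}+1=n+1-\frac nq$.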
\section{Hardy spaces of differential forms}\label{difform} Let us first fix notations and recall standard properties. Let ${\mathbb R}^{n}$ be the Euclidean space equipped with its standard orthonormal basis $\mathcal B=\left\{e_{1},\ldots,e_{n}\right\}$, and let $\mathcal B^{\ast}=\left\{e^{1},\ldots,e^{n}\right\}$ be its dual basis. For $\ell\in\left\{1,\ldots,n\right\}$, denote by $\Lambda^{\ell}$ the space of $\ell$-linear alternating forms, which consists of linear combinations of exterior products \begin{equation} e^{I}=e^{i_{1}}\wedge\ldots\wedge e^{i_{\ell}}, \end{equation} where $I=(i_{1},\ldots,i_{\ell})$ is any $\ell$-tuple. The standard basis of $\Lambda^{\ell}$ is $\left\{e^{I}\right\}$, where $I$ is an ordered $\ell$-tuple, $1\leq i_{1}<\ldots<i_{\ell}\leq n$. For $\alpha=\sum_{I}\alpha_{I}e^{I}$ and $\beta=\sum_{I}\beta_{I}e^{I}$ in $\Lambda^{\ell}$, we define the inner product of $\alpha$ and $\beta$ by \begin{equation} \left\langle \alpha,\beta\right\rangle:=\sum\alpha_{I}\beta_{I}, \end{equation} where the summation is taken over all ordered $\ell$-tuples. The Hodge operator is the linear operator $\ast:\Lambda^{\ell}\rightarrow\Lambda^{n-\ell}$ defined by \begin{equation} \alpha\wedge\ast\beta=\left\langle \alpha,\beta\right\rangle e^{1}\wedge\ldots\wedge e^{n} \end{equation} for all $\alpha,\beta\in\Lambda^{\ell}$. An $\ell$-form on ${\mathbb R}^{n}$ is a function $u:{\mathbb R}^{n}\rightarrow\Lambda^{\ell}$, which may be written as \begin{equation} u=\sum_{I}u_{I}e^{I}, \end{equation} where the $u_{I}$'s are (real-valued) functions on ${\mathbb R}^{n}$ and all the $I$'s are of length $\ell$. \begin{defn} Let $\Omega\subset{\mathbb R}^{n}$ be an open set, $\ell$ a positive integer as above, and $\mathcal E(\Omega)$ a normed space of functions $f:\Omega\rightarrow{\mathbb R}$ equipped with the norm $\left\|f\right\|_{\mathcal E(\Omega)}$. 
We say that an $\ell$-form $\omega=\sum_{I}\omega_{I}e^{I}$ belongs to $\mathcal E(\Omega,\Lambda^{\ell})$ if $\omega_{I}\in\mathcal E(\Omega)$ for all ordered $\ell$-tuples $I$, and we set \begin{equation} \left\|\omega\right\|_{\mathcal E(\Omega,\Lambda^{\ell})}:=\sum_{I}\left\|\omega_{I}\right\|_{\mathcal E(\Omega)}. \end{equation} \end{defn} Let $d:\mathcal D'(\Omega,\Lambda^{\ell-1})\rightarrow\mathcal D'(\Omega,\Lambda^{\ell})$ denote the exterior derivative operator given by \begin{equation} d\omega=\sum_{k,I}\partial_{k}\omega_{I}e^{k}\wedge e^{I}, \end{equation} where $\partial_{k}\omega_{I}$ is the partial derivative with respect to the $k$-th variable. The co-differential $\delta:\Lambda^{\ell}\rightarrow\Lambda^{\ell-1}$, defined by $\delta=(-1)^{n(n-\ell)}\ast d \ast$, is the formal adjoint of $d$ in the sense that if $\alpha\in\mathcal C^{\infty}(\Omega,\Lambda^{\ell})$ and $\beta\in\mathcal C^{\infty}(\Omega,\Lambda^{\ell+1})$, then \begin{equation} \int_{\Omega}\left\langle \alpha,\delta\beta\right\rangle =-\int_{\Omega}\left\langle d\alpha,\beta\right\rangle , \end{equation} provided that one of these forms has compact support. We also define the Laplacian \begin{equation} \Delta_\ell=d\delta+\delta d:\mathcal D'({\mathbb R}^{n},\Lambda^{\ell})\rightarrow\mathcal D'({\mathbb R}^{n},\Lambda^{\ell}), \end{equation} and a simple calculation shows that for $\omega=\sum_{I}\omega_{I}e^{I}\in W^{2,p}({\mathbb R}^{n},\Lambda^{\ell})$ with $1\leq p\leq\infty$, \begin{equation} \Delta_\ell\omega=\sum_{I}\Delta\omega_{I}e^{I}, \end{equation} where $\Delta\omega_{I}$ is the usual Laplacian on functions. For $f=\sum_{I}f_{I}e^{I}\in \mathcal D'({\mathbb R}^{n},\Lambda^{\ell})$, we put \begin{equation} \partial_{j}f:=\sum_{I}\partial_{j}f_{I}e^{I}. \end{equation} \begin{defn} Let $\ell\in\left\{1,\ldots,n-1\right\}$, and $\frac{n}{n+1}<p\leq 1$. 
The Hardy space of closed $\ell$-forms is defined as \begin{equation} {\mathcal H}^{p}_{d}\left({\mathbb R}^{n},\Lambda^{\ell}\right):=\left\{f\in{\mathcal H}^{p}\left({\mathbb R}^{n},\Lambda^{\ell}\right): \quad df=0 \right\}, \end{equation} endowed with the norm of ${\mathcal H}^{p}({\mathbb R}^{n},\Lambda^{\ell})$. \end{defn} Recall that every closed $\ell$-form $f$ on ${\mathbb R}^{n}$ is exact, that is, there exists some $ g\in\mathcal D'({\mathbb R}^{n},\,\Lambda^{\ell-1})$ such that $f=dg$. We will need the analogue of the previous scalar characterizations of Hardy spaces. For $1\leq q\leq\infty$, we first define, for $ f\in L^{1}_{loc}({\mathbb R}^{n},\,\Lambda^{\ell})$, the grand maximal function $\vec{\mathcal M}_{q} f$ as follows: \begin{equation} \vec{\mathcal M}_{q} f(x):=\sup_{\Phi\in\vec{{\mathcal F}}^{q}_{x}}|\int_{{\mathbb R}^{n}} f\wedge\Phi|,\label{vect-max} \end{equation} where $\vec{{\mathcal F}}^{q}_{x}$ denotes the set of all $\Phi\in W^{1,q}({\mathbb R}^{n},\,\Lambda^{n-\ell})$ for which there exists $r>0$ such that $\Phi$ is supported in the ball $B_{(x,r)}$ and satisfies $$(*)\quad\left\|\Phi\right\|_{L^{q}({\mathbb R}^{n},\,\Lambda^{n-\ell})} +r\left\|\nabla\Phi\right\|_{L^{q}({\mathbb R}^{n},\,\Lambda^{n-\ell})}\leq\left|B_{(x,r)}\right|^{-\frac{1}{q'}}.$$ The next lemma is a direct consequence of Lemma \ref{classical-result1} and the fact that, for $ f=\sum_{I}f_{I}e^{I}\in{\mathcal H}^{p}({\mathbb R}^{n},\Lambda^{\ell})$, \begin{equation} \vec{\mathcal M}_{q} f\leq\sum_{I}\mathcal M_{q} f_{I}. \end{equation} \begin{lem}\label{control} Let $\frac{n}{n+1}<p\leq 1$ and $\frac 1q <\frac {n+1}n-\frac 1p$. There exists a constant $C$ such that, for all $f\in L^{1}_{loc}({\mathbb R}^{n},\Lambda^{\ell})$, \begin{equation} \left\|\vec{\mathcal M}_{q} f\right\|_{L^{p}({\mathbb R}^{n})}\leq C \left\|f\right\|_{{\mathcal H}^{p}({\mathbb R}^{n},\Lambda^{\ell})}\label{norm-control2a}. 
\end{equation} \end{lem} We need a weaker version of this grand maximal function, denoted by $\vec{\mathcal M}_{q,d} f$, which is adapted to Hardy spaces of closed forms. We define \begin{equation} \vec{\mathcal M}_{q,d} f(x):=\sup_{\Phi\in \vec{{\mathcal F}}^{q}_{x,d}}|\int_{{\mathbb R}^{n}} f\wedge\Phi|, \end{equation} where $\vec{{\mathcal F}}^{q}_{x,d}$ denotes the set of $ \Phi\in L^\infty({\mathbb R}^{n},\Lambda^{n-\ell})$ supported in some ball $B(x,r)$ satisfying $$(**)\quad\left\|\Phi\right\|_{L^{q}({\mathbb R}^{n},\Lambda^{n-\ell})} +r\left\|d\Phi\right\|_{L^{q}({\mathbb R}^{n},\Lambda^{n-\ell+1})}\leq\left|B_{(x,r)}\right|^{-\frac{1}{q'}}.$$ \begin{lem}\label{maximal-d} Let $q>n$ and $1\leq \ell \leq n-1$. For all $f\in L^{1}_{loc} ({\mathbb R}^{n},\Lambda^{\ell})$, the following inequality holds: \begin{equation} \vec{\mathcal M}_{q}f\leq\vec{\mathcal M}_{q,d}f.\label{pointwise2a} \end{equation} Moreover, if $f$ is a closed form, then \begin{equation} \vec{\mathcal M}_{q,d}f \leq C\vec{\mathcal M}_{q}f \label{pointwise2b} \end{equation} for some uniform constant $C$. \end{lem} \begin{proof} Let $\Phi=\sum_{I}\Phi_{I}e^{I}\in \mathcal C^{\infty} ({\mathbb R}^{n},\Lambda^{n-\ell})$. Since $\displaystyle d\Phi=\sum_{I,j}\partial_{j}\Phi_{I}e^{j}\wedge e^{I}$, we have \begin{equation} \left\|d\Phi\right\|_{L^{q}({\mathbb R}^{n},\Lambda^{n-\ell+1})}\leq\sum_{I,j}\left\|\partial_{j}\Phi_{I}\right\|_{L^{q}({\mathbb R}^{n})}\leq\left\|\nabla\Phi\right\|_{L^{q}({\mathbb R}^{n},\Lambda^{n-\ell})}. \end{equation} Thus, for all $x\in{\mathbb R}^{n}$, we have $\vec{{\mathcal F}}^{q}_{x}\subset\vec{{\mathcal F}}^{q}_{x,d}$, so that (\ref{pointwise2a}) follows from the definition of the maximal functions $\vec{\mathcal M}_{q}f$ and $\vec{\mathcal M}_{q,d}f$. Assume now that $f$ is a locally integrable closed form. 
Remark first that, for $\phi$ and $\psi$ bounded, compactly supported and such that $d\psi=d\phi$, we have \begin{equation}\label{egalite} \int f\wedge \phi =\int f\wedge \psi. \end{equation} Indeed, we can assume by regularization that $f$ is a smooth form on some open set containing the supports of $\phi$ and $\psi$. Moreover, $f$ may be written as $dg$, with $g$ a smooth form on this open set. So the equality follows from integration by parts. Now, let $x\in{\mathbb R}^{n}$ and $\Phi=\sum_{I}\Phi_{I}e^{I}\in\vec{{\mathcal F}}^{q}_{x,d}$ supported in $ B_{(x,r)}$. We put $\varphi(y)=r^{n}\Phi(x+ry)$ for all $y\in{\mathbb R}^{n}$. Then $\varphi$ is supported in ${\mathbb B}$ and \begin{equation} d\varphi(y)=r^{n+1}\sum_{I,j}(\partial_{j}\Phi_{I})(x+ry)e^{j}\wedge e^{I}=r^{n+1}d\Phi(x+ry). \end{equation} So, we obtain \begin{equation} \left\|d\varphi\right\|_{L^{q}({\mathbb B},\Lambda^{n-\ell+1})} = r^{1+\frac{n}{q'}}\left\|d\Phi\right\|_{L^{q}({\mathbb R}^{n},\Lambda^{n-\ell+1})}\leq\left|{\mathbb B}\right|^{-\frac{1}{q'}}, \end{equation} according to the definition of $\Phi\in\vec{{\mathcal F}}^{q}_{x,d}$ and the identity $|B_{(x,r)}|=r^{n}|{\mathbb B}|$. To conclude the proof of the lemma, it is sufficient to find $\psi$ in $ W^{1,q}({\mathbb R}^n,\Lambda^{n-\ell})$ supported in ${\mathbb B}$ and such that $d\psi=d\varphi$ with \begin{equation}\label{sol-d} \|\psi\|_{W^{1,q}({\mathbb R}^n,\Lambda^{n-\ell})} \leq C\left\|d\varphi\right\|_{L^{q}({\mathbb B},\Lambda^{n-\ell+1})}. \end{equation} Indeed, if we let $\Psi(y)=\psi_{r}(y-x)$, then $C^{-1}{\Psi}\in \vec{{\mathcal F}}^{q}_{x}$ and $d\Psi=d\Phi$, so that $\int f\wedge \Phi = \int f\wedge \Psi$. So we conclude easily from the following lemma. \begin{lem}\label{le-d} Let $1<q<\infty$ and $1\leq\ell\leq n-1$. Let ${\mathbb B}$ be the unit ball. Let $\varphi\in L^\infty({\mathbb R}^n, \Lambda^{\ell})$ be compactly supported in ${\mathbb B}$ and such that $d\varphi$ is in $L^q({\mathbb R}^n, \Lambda^{\ell+1})$. 
Then there exists $\psi\in W^{1,q}({\mathbb R}^n,\Lambda^{\ell})$, vanishing outside ${\mathbb B}$, such that $d\psi=d\varphi$. Moreover, we can choose $\psi$ such that $$\|\psi\|_{W^{1,q}({\mathbb R}^n,\Lambda^{\ell})} \leq C\left\|d\varphi\right\|_{L^{q}({\mathbb R}^n,\Lambda^{\ell+1})}$$ for some uniform constant $C$. \end{lem} \begin{proof} The existence of a form $\psi\in W^{1,q}_0({\mathbb B},\Lambda^{\ell})$ such that $d\psi=d\varphi$ is given by Theorem 3.3.3 of \cite{Sc}. Moreover, one has the inequality $$\|\psi\|_{W^{1,q}({\mathbb B},\Lambda^{\ell})} \leq C\left\|d\varphi\right\|_{L^{q}({\mathbb B},\Lambda^{\ell+1})}.$$ Then $\psi$ extends to a form in $W^{1,q}({\mathbb R}^n,\Lambda^{\ell})$ when given the value $0$ outside the unit ball. We still denote by $\psi$ the resulting form on ${\mathbb R}^n$, which is supported in ${\mathbb B}$. \end{proof} This concludes the proof of Lemma \ref{maximal-d}. \end{proof} \section{Wedge products} We are interested in estimates of wedge products of two differential forms of degree $\ell$ and $n-\ell$ respectively, with $1\leq \ell\leq n-1$. Recall that, for $f=\sum_{I}f_{I}e^{I}\in \mathcal{C}(\mathbb R^{n},\Lambda^{\ell})$ and $g=\sum_{J}g_{J}e^{J} \in \mathcal{C}(\mathbb R^{n},\Lambda^{n-\ell})$, with $I$ varying among all ordered $\ell$-tuples $1\leq i_{1}<\ldots<i_{\ell}\leq n$ and $J$ among all ordered $(n-\ell)$-tuples, we put \begin{equation} f\wedge g=\sum_{I,J}\left(f_{I}\cdot g_{J}\right)e^{I}\wedge e^{J}. \end{equation} The $n$-form $f\wedge g$ identifies with a function via the Hodge operator. It is clear that the wedge product can also be defined as soon as the products $f_{I}g_{J}$ are. In particular, this is the case when $f\in L^p(\mathbb R^{n},\Lambda^{\ell})$ and $g\in L^q(\mathbb R^{n},\Lambda^{n-\ell})$, with $\frac 1p+\frac 1q\leq 1$. 
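To connect this definition with the ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ setting of the introduction, consider the model case $n=3$, $\ell=1$: given two vector fields $E=(E_{1},E_{2},E_{3})$ and $B=(B_{1},B_{2},B_{3})$, set $f=\sum_{k}E_{k}\,e^{k}$ and $g=B_{1}\,e^{2}\wedge e^{3}+B_{2}\,e^{3}\wedge e^{1}+B_{3}\,e^{1}\wedge e^{2}$. Then
\[
df=\sum_{j<k}\left(\partial_{j}E_{k}-\partial_{k}E_{j}\right)e^{j}\wedge e^{k},
\qquad
dg=\left({\mbox{\small\rm div}}\, B\right)e^{1}\wedge e^{2}\wedge e^{3},
\]
so that $df=0$ if and only if ${\mbox{\small\rm curl}}\, E=0$ and $dg=0$ if and only if ${\mbox{\small\rm div}}\, B=0$, while
\[
f\wedge g=\left(E\cdot B\right)e^{1}\wedge e^{2}\wedge e^{3}.
\]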
Using the results of \cite{BIJZ} and \cite {BF}, it is also the case when one of the two forms belongs to the Hardy space ${\mathcal H}^p({\mathbb R}^n, \Lambda^\ell)$ while the other one is in the dual space. Moreover, it is proved there that \begin{equation} f\wedge g\in L^{1}({\mathbb R}^{n},\Lambda^{n})+{\mathcal H}^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n}) \end{equation} if $f\in {\mathcal H}^{1}(\mathbb R^{n},\Lambda^{\ell})$ and $g \in {\mathfrak {bmo}}(\mathbb R^{n},\Lambda^{n-\ell})$, while \begin{equation} f\wedge g\in L^{1}({\mathbb R}^{n},\Lambda^{n})+{\mathcal H}^{p}({\mathbb R}^{n},\Lambda^{n}) \end{equation} if $p<1$, $f\in {\mathcal H}^{p}(\mathbb R^{n},\Lambda^{\ell})$ and $g \in \Lambda_{n(\frac{1}{p}-1)}(\mathbb R^{n},\Lambda^{n-\ell})$. Here ${\mathcal H}^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n})$ is the Hardy-Orlicz space associated to the function $\Phi(t)=\frac{t}{\log(e+t)}$ and the weight $\omega(x)=(\log (e+|x|))^{-1}$. \medskip We are now interested in improving these estimates when $f$ and $g$ are closed. The ${\mbox{\small\rm div}}\,$-${\mbox{\small\rm curl}}\,$ lemma can be generalized to closed forms: this has already been observed by Lou and McIntosh in \cite{LM1} when $\frac 1p+\frac 1q=1$. In general, we can state the following. \begin{thm}\label{main1} Let $\frac{n}{n+1}<p\leq 1$ and $1\leq\ell\leq n-1$. Let $1<q\leq \infty$ be such that $\frac 1r:=\frac 1p+\frac 1q\leq \frac{n+1}{n}$. If $f\in {\mathcal H}^{p}_{d}(\mathbb R^{n},\Lambda^{\ell})\cap L^{q'}(\mathbb R^{n},\Lambda^{\ell})$ and $g \in L^{q}(\mathbb R^{n},\Lambda^{n-\ell})$ is such that $dg=0$, then $ f\wedge g\in{\mathcal H}^{r}_{d}({\mathbb R}^{n},\Lambda^{n})$. Moreover, there exists a constant $C$, not depending on $f$ and $g$, such that \begin{equation} \left\| f\wedge g\right\|_{{\mathcal H}^{r}_{d}({\mathbb R}^{n},\Lambda^{n})}\leq C \left\|g\right\|_{L^{q}({\mathbb R}^{n},\Lambda^{n-\ell})}\left\|f\right\|_{{\mathcal H}^{p}_{d}({\mathbb R}^{n},\Lambda^{\ell})}. 
\label{controle1} \end{equation} \end{thm} \begin{proof} Remark that the forms $f\in {\mathcal H}^{p}_{d}(\mathbb R^{n},\Lambda^{\ell})\cap L^{q'}(\mathbb R^{n},\Lambda^{\ell})$ are dense in ${\mathcal H}^{p}_{d}(\mathbb R^{n},\Lambda^{\ell})$: just take $f *P_\varepsilon$, where $P_t$ is the Poisson kernel, to approximate $f$. Remark also that the assumptions can be made symmetric: just replace $\ell$ by $n-\ell$. To adapt the proof given in \cite{CLMS}, the main point is the next lemma, which is of independent interest. \begin{lem}\label{lemmedeMeyer} Let $\frac{n}{n+1}<p\leq 1$ and $1\leq\ell\leq n-1$. Then, for $f\in {\mathcal H}^{p}_{d}(\mathbb R^{n},\Lambda^{\ell})$, there exists $h\in L^{p^*}(\mathbb R^{n},\Lambda^{\ell-1})$ such that $dh=f$ and $\delta h=0$, with $\frac 1{p^*}=\frac 1p-\frac 1n$. Moreover $h$ is unique up to the addition of a constant form, and there exists some uniform constant $C$ such that \begin{equation}\label{meyer} \|{\M_{ Sob}}(f)\|_p\leq C\|f\|_{\mathcal H^p}, \end{equation} with $${\M_{ Sob}} (f)(x)=\sup_{t>0}\frac 1{t|B(x,t)|}\int_{B(x,t)}|h(y)-h_{B(x,t)}|dy.$$ \end{lem} Recall that $h_{B(x,t)}$ is the mean of $h$ over the ball $B(x,t)$, which is well defined since $h$ is in $L^{p^*}$ and $p^*>1$. Remark that ${\M_{ Sob}} (f)$ is independent of the choice of $h$, since $h$ is unique up to the addition of a constant form. \begin{proof} The case $\ell=1$ is Lemma II.2 of \cite{CLMS}. So it is sufficient to consider $\ell>1$. Let us first remark that uniqueness is immediate, since in $L^{p^*}({\mathbb R}^n, \Lambda^{\ell-1}) $ only constants are $d$-closed and $\delta$-closed. Assume that $h$ is a solution. Then all the derivatives $\partial_j h_I$ are in the Hardy space ${\mathcal H}^p({\mathbb R}^n)$. 
Indeed, we use the fact that, by definition of $\Delta_{\ell-1}$, we have the identities $$\Delta_{\ell-1}h=\delta f, \qquad \qquad \partial_j h= (\partial_j (-\Delta_{\ell-1})^{-1/2})( (-\Delta_{\ell-1})^{-1/2}\delta)f.$$ But both operators arising in the last expression, that is $\partial_j (-\Delta_{\ell-1})^{-1/2}$ and $(-\Delta_{\ell-1})^{-1/2}\delta$, are linear combinations of Riesz transforms and preserve Hardy spaces. Indeed, since $\Delta_{\ell-1}$ is given by the Laplacian, coefficient by coefficient, the same is valid for all its powers. Furthermore, we have $$\Vert \nabla h\Vert_{\mathcal H^p}\le C\Vert f\Vert_{\mathcal H^p}.$$ Conversely, given $f$, we can use these formulas to fix the values of $\partial_j h_I$ in the Hardy space ${\mathcal H}^p({\mathbb R}^n)$. Using this lemma for $1$-forms, we know the existence of $h_I\in L^{p^*}({\mathbb R}^n)$ having these functions as derivatives. It is elementary to see that the form $h=\sum h_I e^I$ is such that $dh=f$ and $\delta h=0$, using commutation properties of the operators. Finally, we write \eqref{meyer} for each $f_j=\partial_jh_I$ to obtain the inequality for $f$. \end{proof} It is elementary to adapt the rest of the proof of Theorem II.3 in \cite{CLMS}, once the lemma has been established, and we leave it to the reader. This gives the proof of Theorem \ref{main1}. Remark that $f\wedge g$ can be defined in the distribution sense without the additional assumption that $f\in L^{q'}(\mathbb R^{n},\Lambda^{\ell})$: we set $f\wedge g = d(h\wedge g)$, with $h$ given by Lemma \ref{lemmedeMeyer}. Indeed, $h\wedge g$ is in the Lebesgue space $L^s$, with $s>1$ given by $\frac 1s = \frac 1r-\frac 1n$, so its exterior derivative is well defined as a distribution. \end{proof} \medskip Following the ideas of \cite{ART}, let us sketch another proof for the endpoint $q=\infty$. 
\begin{proof}[Proof for the endpoint] Let $f\in {\mathcal H}^{p}_{d}(\mathbb R^{n},\Lambda^{\ell})$ and $g \in L^{\infty}(\mathbb R^{n},\Lambda^{n-\ell})$ such that $dg=0$. We want to prove that $$ \mathcal M \left(f\wedge g\right)(x)\leq C \left\|g\right\|_{L^{\infty}({\mathbb R}^{n},\Lambda^{n-\ell})}\vec{\mathcal M}_{\infty,d} f(x), $$ from which we conclude directly by using Lemma \ref{control} and Lemma \ref{maximal-d}. By homogeneity we can assume that $\|g\|_\infty=1$. In order to estimate ${\mathcal M} (f\wedge g)(x)$, we have to consider $$\int \varphi_{x,t} f\wedge g, \qquad \qquad \varphi_{x,t}(y)=t^{-n}\varphi ((x-y)/t).$$ Here $\varphi$ is chosen smooth and supported in the unit ball as in \eqref{maximal}. It is sufficient to prove the existence of some uniform constant $c$ such that $c\varphi_{x,t}\, g$ belongs to $\vec{{\mathcal F}}^{\infty}_{x,d}$. This follows from the inequality $$\|\varphi_{x,t}\|_\infty + t\|d\varphi_{x,t}\|_\infty\leq C\,|B_{(x,t)}|^{-1}.$$ Indeed, since $g$ is closed, we have the equality $d(\varphi_{x,t}\, g)=d\varphi_{x,t}\wedge g$, and the uniform norm of a wedge product is bounded by the product of norms. This finishes the proof. \end{proof} \section{$B\! M\! O$ estimates} Let us first recall some facts on $BMO({\mathbb R}^{n})$ and weighted Hardy-Orlicz spaces. Given a continuous function $\mathcal P :\left[0,\infty\right)\rightarrow\left[0,\infty\right)$ increasing from 0 to $\infty$ (not necessarily convex; such a $\mathcal P$ is called an Orlicz function), and given a positive measurable function $\omega$, the weighted Orlicz space $L^{\mathcal P}_\omega({\mathbb R}^{n})$ consists of the measurable functions $f$ such that \begin{equation} \left\|f\right\|_{L^{\mathcal P}_\omega({\mathbb R}^{n})}:=\inf\left\{k>0:\int_{{\mathbb R}^{n}}\mathcal P(k^{-1}\left|f\right|)\omega(x)dx\leq 1\right\} \end{equation} is finite. The Hardy-Orlicz space ${\mathcal H}^{\mathcal P}_\omega({\mathbb R}^{n})$ (resp. 
the local Hardy-Orlicz space $\mathfrak h^{\mathcal P}_\omega({\mathbb R}^{n})$) is the space of tempered distributions $f$ such that $\mathcal Mf$ (resp. $\mathcal M^{(1)}f$) belongs to $L^{\mathcal P}_\omega({\mathbb R}^{n})$. We will consider here the Orlicz space associated to the function $\Phi(t)=\frac{t}{\log(e+t)}$ and the weight $\omega(x)=(\log (e+|x|))^{-1}$, as mentioned in the introduction. The space $L^{\Phi}_\omega({\mathbb R}^{n})$ is not a normed space. Let us state the following properties of the function $\Phi$ and of $L^{\Phi}_\omega$. \begin{eqnarray} \quad \Phi(s+t) &\leq & \Phi(s)+\Phi(t), \label{sub-add} \qquad\qquad \qquad\qquad \mbox{\rm for all }\ s,t>0.\\ \quad \Phi(st) &\geq & \Phi(s)\Phi(t), \label{upper-mult} \qquad\qquad \qquad\qquad \mbox{\rm for all }\ s,t>0.\\ \Phi(st)&\leq & s + e^t-1 \qquad\qquad\qquad\qquad \mbox {\rm for all }\ s,t>0.\label{conj}\\ \Phi(st)&\leq & \log (e+d) \;( s +\frac {1}d (e^t-1)) \qquad\qquad \mbox {\rm for all }\ s,t, d>0.\label{conjugate} \end{eqnarray} The first two inequalities are elementary. The third one is given in \cite{BIJZ}. For the fourth, given $d>0$, we write $\Phi(d)\Phi(st)\leq \Phi((sd)t)$, which follows from \eqref{upper-mult}, and use \eqref{conj} with $sd$ in place of $s$. Next, by using \eqref{sub-add}, \eqref{upper-mult} and the fact that $\Phi(4)>2$, we obtain the inequality $$\Phi\left(\frac{s+t}4\right)\leq \frac {\Phi(s)+\Phi(t)}2,$$ from which we conclude that \begin{equation}\label{additivity} \left\|f+g\right\|_{L^{\Phi}_\omega({\mathbb R}^{n})}\leq 4\left\|f\right\|_{L^{\Phi}_\omega({\mathbb R}^{n})}+4\left\|g\right\|_{L^{\Phi}_\omega({\mathbb R}^{n})}. \end{equation} We will also need the fact that products of integrable functions with functions in the exponential class are in $L^\Phi$. More precisely, we will use the following lemma, for $B$ a ball with radius $1$. It is a direct consequence of \eqref{conjugate}. 
\begin{lem}\label{holder} For $c>0$ given, there exists $C$ such that, for $d>0$, \begin{equation} \int_B \Phi(|fg|)\frac {dx}{\log (e+d)}\leq C \int_B |f| dx +\frac {C}d \int_B (e^{c|g|}-1)dx. \end{equation} \end{lem} \medskip Our main theorem is the following. \begin{thm}\label{bijz} Let $f\in {\mathcal H}^{1}_{d}({\mathbb R}^{n},\Lambda^{\ell})$ and $g\in {\mathfrak {bmo}} ({\mathbb R}^{n},\Lambda^{n-\ell})$ be such that $dg=0$. Then the product $f\wedge g$ is in ${\mathcal H}^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n})$. Moreover, there exists a uniform constant $C$ such that \begin{equation} \left\|f\wedge g\right\|_{{\mathcal H}^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n})} \leq C\left\|f\right\|_{{\mathcal H}^{1}({\mathbb R}^{n},\Lambda^{\ell})}\left\|g\right\|_{{\mathfrak {bmo}}({\mathbb R}^{n},\Lambda^{n-\ell})}. \end{equation} \end{thm} \begin{proof} Recall that the wedge product $f\wedge g$ is well defined in the distribution sense for $f\in{\mathcal H}^{1}_{d}({\mathbb R}^{n},\Lambda^{\ell})$ and $g\in{\mathfrak {bmo}}({\mathbb R}^{n},\Lambda^{n-\ell})$. It is sufficient to prove an a priori estimate when $g$ is moreover bounded, which we assume from now on. We also assume that $\left\|g\right\|_{{\mathfrak {bmo}}({\mathbb R}^{n},\Lambda^{n-\ell})}=1$ and $\left\|f\right\|_{{\mathcal H}^{1}({\mathbb R}^{n},\Lambda^{\ell})}=1$. Then, for $x\in{\mathbb R}^{n}$ and $\varphi\in {\mathcal F}^{\infty}_{x}$ supported in $B(x,r)$, we have \begin{equation}\label{maximal-function} \left|\int\left(f\wedge g\right)\varphi\right|\leq\left|\int f\wedge\left(g-g_{B(x,r)}\right)\varphi\right|+\left|\int f\wedge(g_{B(x,r)}\varphi)\right|, \end{equation} where \begin{equation} g_{B(x,r)}=\sum_{I}(g_{I})_{B(x,r)}e^{I}\text { if } g=\sum_{I}g_{I}e^{I}. \end{equation} Let us first evaluate the second term of the sum (\ref{maximal-function}). 
We have \begin{eqnarray} \left|\int f\wedge(g_{B(x,r)}\varphi)\right|&=&\left|\sum_{I}(g_{I})_{B(x,r)}\int f\wedge\varphi e^{I}\right|\nonumber\\&\leq&\left(1+\mathfrak M^{(1)} g(x)\right)\vec{\mathcal M}_{\infty}(f)(x)\label{second-part} \end{eqnarray} where $\mathfrak M^{(1)} g(x)=\sum_{I}\mathfrak M^{(1)} g_{I}(x)$, and, for $h\in L^{1}_{loc}({\mathbb R}^{n})$ a scalar valued function, \begin{equation} \mathfrak M^{(1)}h(x)=\sup\left\{\frac{1}{\left|B\right|}\int_{B}\left|h(y)\right|dy : x\in B \text { and } \left|B\right|<1\right\}. \end{equation} Indeed, when $r\geq 1$, the mean of $g$ is, by definition of ${\mathfrak {bmo}}$, bounded by $\left\|g\right\|_{{\mathfrak {bmo}}({\mathbb R}^{n},\Lambda^{n-\ell})}=1$, while, for $r<1$, it is bounded by the maximal function related to small balls. For the first term, we proceed as in the endpoint proof of the previous section. By the John-Nirenberg inequality, the form $\left(g-g_{B(x,r)}\right)\varphi$ satisfies Condition $(**)$ for all $q>1$, up to some uniform constant $C$. So, according to Lemma \ref{maximal-d}, for $q$ large enough, \begin{equation}\label{first-part} \left|\int f\wedge\left(g-g_{B(x,r)}\right)\varphi\right|\leq C\vec{\mathcal M}_{q}f(x). \end{equation} Taking the supremum over all $\varphi\in {\mathcal F}^{\infty}_{x}$ in (\ref{second-part}) and (\ref{first-part}) yields \begin{equation}\label{decomposition2} \mathcal M_{\infty}(f\wedge g)\leq C\left(\vec{\mathcal M}_{q}(f)+\mathfrak M^{(1)} (g)\vec{\mathcal M}_{\infty}(f)\right). \end{equation} The first term is in $L^{1}({\mathbb R}^{n})$ under the assumption that $f\in{\mathcal H}^{1}_{d}({\mathbb R}^{n},\Lambda^{\ell})$, according to Lemma \ref{control}. It remains to prove that the second term is in $L^{\Phi}_\omega({\mathbb R}^{n})$ to conclude the proof of our theorem. Roughly speaking, this is a consequence of the fact that the product of an integrable function with a function in the exponential class is in the Orlicz space $L^{\Phi}$. 
Indeed, we recall that, by the John-Nirenberg inequality, there exists $c>0$ such that, for each ball $B$, $$ \int_{B}e^{c\left|g-g_{B}\right|}dx\leq C.$$ The following lemma allows us to obtain the same kind of estimate with $\mathfrak M^{(1)}(g)$ in place of $g$. \begin{lem} Let $c>0$ be fixed. Then there exists some uniform constant $C$ such that \begin{equation} \int_{B}e^{\frac c2\left|\mathfrak M^{(1)}(g)\right|}dx\leq C \int_{3B}e^{c|g|}dx, \end{equation} for every ball $B$ with radius $1$. Here $3B$ is the ball with the same center as $B$ and radius $3$. \end{lem} \begin{proof} We have \begin{equation} \int_{B}e^{\frac c2\left|\mathfrak M^{(1)}(g)(x)\right|}dx\leq e^c|B|+C\int^{\infty}_{1}e^{\frac {cs}2}\left|\left\{x\in B: \mathfrak M^{(1)}(g)>s\right\}\right|ds. \end{equation} Let $g_{1}=g\chi_{3B}$, where $\chi_{3B}$ is the characteristic function of $3B$. It is easy to see that, for all $s>0$, we have $$ \left\{x\in B : \mathfrak M^{(1)}g(x)>s \right\}\subset\left\{x\in{\mathbb R}^{n}: \mathfrak M g_{1}(x)>s\right\}. $$ Thus, from the above inclusion and the weak type $(1,1)$ boundedness of the Hardy-Littlewood maximal function, we have $$ \left|\left\{x\in B: \mathfrak M^{(1)}(g)>s\right\}\right|\leq\left|\left\{x\in{\mathbb R}^{n}: \mathfrak M g_{1}(x)>s\right\}\right|\leq\frac{C}{s}\int_{\left\{\left|g_{1}\right|>\frac{s}{2}\right\}}\left|g_{1}(x)\right|dx $$ with $C$ independent of $B$ and $g$, so that \begin{eqnarray*} \int^{\infty}_{1}e^{\frac {cs}2}\left|\left\{x\in B: \mathfrak M^{(1)}(g)>s\right\}\right|ds &\leq& C\int^{\infty}_{1}\frac{e^{\frac {cs}2}}{s}\left(\int_{\left\{\left|g\right|>\frac{s}{2}\right\}\cap 3B}\left|g(x)\right|dx\right)ds\\ &\leq&C' \int_{3B}e^{c\left|g(x)\right|}dx \end{eqnarray*} by using Fubini's theorem. This proves the lemma. \end{proof} Let us come back to the proof of Theorem \ref{bijz}. 
We write ${\mathbb R}^n$ as the almost disjoint union of balls $B_j=:{\mathbb B}+j$, with $j\in {\mathbb Z}^n$, with ${\mathbb B}$ the unit ball centered at $0$. We make use of Lemma \ref{holder} on each of these balls, with $d:=d_j:=(1+|j|)^{N}$, with $N$ large enough so that $\sum d_j^{-1}<\infty$ while $\omega(x)\simeq (\log d_j)^{-1}$ for $x\in B_j$. We recall that by assumption $|g_{3B_j}|\leq C$, since $\left\|g\right\|_{{\mathfrak {bmo}}({\mathbb R}^{n},\Lambda^{n-\ell})}=1$. So \begin{equation}\label{from-bmo} \int_{3B_j} e^{c\left|g(x)\right|}dx \leq C. \end{equation} We finally have \begin{equation} \int_{{\mathbb R}^n}\Phi\left(\mathfrak M^{(1)}(g)(x) \vec{\mathcal M}_{\infty}(f)(x)\right)\omega(x) dx \leq C\sum_{j\in {\mathbb Z}^n} \int_{B_j} |f|dx, \end{equation} from which we conclude that $\mathfrak M^{(1)}(g)\vec{\mathcal M}_{\infty}(f)$ has bounded norm in $L^{\Phi}_\omega({\mathbb R}^{n})$, because of the finite overlap of the balls $B_j$. By finite additivity (\ref{additivity}) we have $$ \left\|\mathcal M(f\wedge g)\right\|_{L^{\Phi}_\omega({\mathbb R}^{n})}\leq 4C\left\|\vec{\mathcal M}_{q}f\right\|_{L^{\Phi}_\omega({\mathbb R}^{n})}+4C\left\|\mathfrak M^{(1)}(g)\vec{\mathcal M}_{\infty}(f)\right\|_{L^{\Phi}_\omega({\mathbb R}^{n})} $$ so that \begin{equation} \left\|f\wedge g\right\|_{{\mathcal H}^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n})}\leq C \end{equation} for $\left\|f\right\|_{{\mathcal H}^{1}({\mathbb R}^{n},\Lambda^{\ell})}=1$. \end{proof} We have as well the following theorem. \begin{thm}\label{bijz2} Let $f\in \mathfrak h^{1}_{d}({\mathbb R}^{n},\Lambda^{\ell})$ and $g\in B\! M\! O ({\mathbb R}^{n},\Lambda^{n-\ell})$ be such that $dg=0$. Then the product $f\wedge g$ is in $\mathfrak h^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n})$.
Moreover, there exists a uniform constant $C$ such that \begin{equation} \left\|f\wedge g\right\|_{\mathfrak h^{\Phi}_\omega({\mathbb R}^{n},\Lambda^{n})} \leq C\left\|f\right\|_{\mathfrak h^{1}({\mathbb R}^{n},\Lambda^{\ell})}\left\|g\right\|_{B\! M\! O({\mathbb R}^{n},\Lambda^{n-\ell})}. \end{equation} \end{thm} The key point is that, again, we only have to make use of $\mathfrak M^{(1)}g$ and not of $\mathfrak M g$. The only difference in the proof is the replacement of \eqref{from-bmo} by \begin{equation}\label{from-BMO} \int_{3B_j} e^{c\left|g(x)\right|}dx \leq C(1+|j|)^M \end{equation} for some $M>0$: use the well-known fact that $|g_{3B_j}|\leq C \log(2+|j|)$. \medskip The generalization of Theorem \ref{bijz} to ${\mathcal H}^p_\omega$ for $p<1$ follows directly from \eqref{decomposition2}: the product $f\wedge g$ then belongs to ${\mathcal H}^{\Phi_p}_\omega$, with $\Phi_p(t) =\left(\frac{t}{\log(e+t)}\right)^p$. Note that, in this case, the product of a function in ${\mathfrak {bmo}}$ and a function in ${\mathcal H}^p$ does not make sense as a distribution in general. But we can establish, as above, an a priori estimate, which allows us to give a meaning to the wedge product of two closed forms.
\section{Introduction} \label{sec:introduction} In the queueing literature one has traditionally studied queues with Poisson input. The Poisson assumption typically facilitates explicit analysis, but it does not always align well with actual data; see e.g.\ \cite{KW2014} and references therein. More specifically, statistical studies show that in many practical situations, Poisson processes underestimate the variability of the queue's input stream. This observation has motivated research on queues fed by arrival processes that better capture the burstiness observed in practice. \vspace{3mm} The extent to which burstiness takes place can be measured by the dispersion index, i.e.\ the ratio of the variance to the mean of the number of arrivals in a given interval. In arrival streams that display burstiness, the dispersion index is larger than unity (as opposed to Poisson processes, for which it is equal to unity), a phenomenon that is usually referred to as {\it overdispersion}. \begin{comment} In the literature there has been a myriad of efforts to realistically model arrival streams. In various papers the use of {\it time-inhomogeneous Poisson processes} has been advocated. In such processes the Poisson arrival rate is time-dependent but still deterministic, say with value $\lambda(t)$ at time $t$. The function $\lambda(\cdot)$ could for instance be chosen such that it captures a certain periodicity (to model e.g.\ daily or weekly patterns). Models of this type have been studied in a queueing context in e.g.\ \cite{EMW1993}. It is noted, however, that for these time-inhomogeneous Poisson processes the number of arrivals in a given time interval still has a Poisson distribution, and hence this class of models fails to incorporate overdispersion. \end{comment} It is desirable that the arrival process of the queueing model takes the observed overdispersion into account.
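As a quick numerical illustration of this notion (a sketch, not part of the analysis below): the plain-Python snippet estimates the dispersion index of a Poisson sample and of a mixed-Poisson sample whose random rate takes the values $1$ and $3$ with equal probability. By the law of total variance the latter has index $1+\Var(\Lambda)/\E\Lambda=3/2$, while the former has index $1$.

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's inversion method; adequate for moderate rates
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def dispersion_index(sample_rate, n=200_000):
    # ratio of sample variance to sample mean of the counts
    counts = [poisson(sample_rate()) for _ in range(n)]
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

idx_poisson = dispersion_index(lambda: 2.0)                    # deterministic rate
idx_cox = dispersion_index(lambda: random.choice([1.0, 3.0]))  # random (mixed) rate
```

With these illustrative parameters the estimates come out close to $1$ and $3/2$, respectively, confirming that randomizing the rate produces overdispersion.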
One way to achieve this is to make use of {\it Cox processes}, i.e.\ Poisson processes conditioned on a stochastic time-dependent intensity. It is an immediate consequence of the law of total variance that Cox processes \textit{do} have a dispersion index larger than unity. Therefore, this class of processes makes for a good candidate to model overdispersed input processes. \vspace{3mm} In this paper we contribute to the development of queueing models fed by input streams that exhibit overdispersion. We analyze infinite-server queues driven by a particular Cox process, in which the rate is a (stochastic) shot-noise process. The shot-noise process we use is one in which there are only upward jumps (or: shots), which arrive according to a homogeneous Poisson process. Furthermore, we employ an exponential `response' or `decay' function, which encodes how quickly the process will decline after a jump. In this case, the shot-noise process is a Markov process, see \cite[p.\ 393]{Ross}. There are several variations on shot-noise processes; see e.g. \cite{IJ2003} for a comprehensive overview. \vspace{3mm} It is not a novel idea to use a shot-noise process as a stochastic intensity. For instance, in insurance mathematics, the authors of \cite{D2003} use a shot-noise-driven Cox process to model the claim count. They assume that disasters happen according to a Poisson process, and each disaster can induce a cluster of arriving claims. The disaster corresponds to a shot upwards in the claim intensity. As time passes, the claim intensity process decreases, as more and more claims are settled. Another example of shot-noise arrival processes is found in the famous paper \cite{O1989}, where it is used to model the occurrences of earthquakes.
The arrival process considered in \cite{O1989} differs in one crucial respect from the one used in this paper: it makes use of Hawkes processes \cite{Hawkes}, which do have a shot-noise structure, but have the special feature that they are {\it self-exciting}. More specifically, in Hawkes processes, an arrival induces a shot in the arrival rate, whereas in our shot-noise-driven Cox model these shots are merely exogenous. The Hawkes process is less tractable than the shot-noise-driven Cox process. A very recent effort to analyze $\cdot/G/\infty$ queues that are driven by a Hawkes process has been made in \cite{Gao2016}, where a functional central limit theorem is derived for the number of jobs in the system. In this model, obtaining explicit results (in a non-asymptotic setting), as we are able to do in the shot-noise-driven Cox variant, is still an open problem. \vspace{3mm} \begin{comment} Even though shot-noise processes are not new in queueing theory, they have never been regarded as a stochastic arrival rate, to the best of our knowledge. A shot-noise process appears in a queueing context, if e.g.\ one considers an $M/G/1$ server which has a linear service rate depending on the current workload. For literature on this, consider \cite{KellaWhitt1999} and its follow up papers. Furthermore, storage models with perishable goods also admit shot-noise forms, see \cite{Britt}. \end{comment} In order to successfully implement a theoretical model, it is crucial to have methods to estimate its parameters from data. The shot-noise-driven Cox process is attractive since such methods are available. Statistical methods that filter the unobservable intensity process, based on Markov Chain Monte Carlo (MCMC) techniques, have been developed; see \cite{Centanni2006} and references therein. Here, filtering refers to the estimation of the intensity process on a given time interval, given a realized arrival process.
Subsequently, given this `filtered path' of the intensity process, the parameters of the shot-noise process can be estimated by a Monte Carlo version of the expectation-maximization (EM) method. Furthermore, the shot-noise-driven Cox process can also be easily simulated; see e.g.\ the thinning procedure described in \cite{sim}. \vspace{3mm} In this paper we study networks of infinite-server queues with shot-noise-driven Cox input. We assume that the service times at a given node are i.i.d.\ samples from a general distribution. The output of a queue is routed to a next queue, or leaves the network. Infinite-server queues have the inherent advantage that jobs do not interfere with one another, which considerably simplifies the analysis. Furthermore, infinite-server systems are frequently used to produce approximations for corresponding finite-server systems. In the network setting, we can model queueing systems that are driven by correlated shot-noise arrival processes. With regard to applications, such a system could, e.g., represent the call centers of a fire department and police department in the same town. \vspace{3mm} The contributions and organization of this paper are as follows. We derive both exact and asymptotic results. The main result of the exact analysis is Thm.\ \ref{thm:main}, where we find the joint Laplace transform of the numbers of jobs in the queues of a feedforward network, jointly with the shot-noise-driven arrival rates. We build up towards this result as follows. In Section \ref{sec:Notation and preliminaries} we introduce notation and state the important Lemma \ref{lemma}, on which we repeatedly rely. Then we derive exact results for the single infinite-server queue with a shot-noise arrival rate, in Section \ref{sec: Exact analysis}.
Subsequently, in Section \ref{sec:asymptotic analysis}, we show that after an appropriate scaling the number of jobs in the system satisfies a functional central limit theorem (Thm.\ \ref{thm:FCLT}); the limiting process is an Ornstein-Uhlenbeck (OU) process driven by a superposition of a Brownian motion and an integrated OU process. We then extend the theory to a network setting in Section \ref{sec:Networks}. Before we consider full-blown networks, we first consider a tandem system consisting of an arbitrary number of infinite-server queues in Section \ref{sec:tandem}. Then it is argued in Section \ref{sec:parallel} that a feedforward network can be seen as a number of tandem queues in parallel. We analyze two different ways in which dependence can enter the system through the arrival process. Firstly, in Model (M1), parallel service facilities are driven by a multidimensional shot-noise process in which the \textit{shots} are simultaneous (which includes the possibility that all shot-noise processes are equal). Secondly, in Model (M2), we assume that there is one shot-noise arrival intensity that generates simultaneous \textit{arrivals} in all tandems. In Section \ref{sec:concluding remarks} we finish with some concluding remarks. \section{Notation and preliminaries} \label{sec:Notation and preliminaries} Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\geq0},\Pb)$ be a filtered probability space, where the filtration $\{\mathcal{F}_t\}_{t\geq0}$ is such that the shot-noise process $\L(\cdot)$, introduced below, is adapted to it. A shot-noise process has random jumps at Poisson epochs, and a deterministic `response' or `decay' function, which governs the behavior of the process between jumps. See \cite[Section 8.7]{Ross} for a brief account of shot-noise processes.
The shot noise that we use in this paper has the following representation: \begin{equation} \label{eq:SN} \L(t) = \L(0)e^{-rt} + \sum_{i=1}^{P_B(t)} B_i e^{-r(t-t_i)}, \end{equation} where the $B_i\geq0$ are i.i.d.\ shots from a general distribution, the decay function is exponential with rate $r>0$, $P_B$ is a homogeneous Poisson process with rate $\nu$, and the epochs of the shots that arrived before time $t$ are labelled $t_1,t_2,\ldots,t_{P_B(t)}$. As explained in the introduction, the shot-noise process serves as a stochastic arrival rate to a queueing system. It is straightforward to simulate a shot-noise process; for an illustration of a sample path, consider Fig.\ \ref{fig:sn}. Using the thinning method for nonhomogeneous Poisson processes \cite{sim}, with the sample path of Fig.\ \ref{fig:sn} as the arrival rate, one can generate a corresponding sample path for the arrival process, as displayed in Fig.\ \ref{fig:arrivals}. As expected, most arrivals occur shortly after peaks in the shot-noise process in Fig.\ \ref{fig:sn}. \begin{figure}[ht!]
\centering \begin{tikzpicture} \begin{axis}[ ticks=none, scaled ticks=false, xmin=1, xmax=13, ymin=0, xlabel=$t$, ylabel=$\L(t)$, width=14cm, height=6cm, axis y line=left, axis x line=bottom ] \addplot[domain=0:1, black, thick, smooth] {0}; \addplot[black, thick, dashed, smooth]coordinates{(1,0)(1,3)}; \addplot[domain=1:3, black, thick,smooth] {3*exp(-(x-1))}; \addplot[black, thick, dashed, smooth]coordinates{(3,3*0.13533528323)(3,3*0.13533528323+2)}; \addplot[domain=3:4, black,thick,smooth]{2.40600584969*exp(-(x-3))}; \addplot[black, thick, dashed, smooth]coordinates{(4,0.88512008743)(4,1.18512008743)}; \addplot[domain=4:8, black,thick,smooth]{1.18512008743*exp(-(x-4))}; \addplot[black, thick, dashed, smooth]coordinates{(8,0.02170623156)(8,0.02170623156+1.5)}; \addplot[domain=8:9.75, black,thick,smooth]{(0.02170623156+1.5)*exp(-(x-8))}; \addplot[black, thick, dashed, smooth]coordinates{(9.75,0.26443289263)(9.75,0.26443289263+3.5)}; \addplot[domain=9.75:16, black,thick,smooth]{(0.26443289263+3.5)*exp(-(x-9.75))}; \end{axis} \end{tikzpicture} \caption{\textit{Sample path of shot-noise process}} \label{fig:sn} \centering \begin{tikzpicture} \begin{axis}[ ticks=none, scaled ticks=false, xmin=1.35, xmax=13, ymin=0, xlabel=$t$, ylabel=number of arrivals, width=14cm, height=6cm, axis y line=left, axis x line=bottom ] \addplot[thick,black] table {% 0 0 1.81 0 1.82 1 1.83 2 1.98 2 1.99 3 2.26 3 2.27 4 2.3 4 2.31 5 2.48 5 2.49 6 3.33 6 3.34 7 3.41 7 3.42 8 3.44 8 3.45 9 3.73 9 3.74 10 4.17 10 4.18 11 4.49 11 4.5 12 4.76 12 4.77 13 5.04 13 5.05 14 6.43 14 6.44 15 8.17 15 8.18 16 8.33 16 8.34 17 8.96 17 8.97 18 10.13 18 10.14 19 10.28 19 10.29 20 10.3 21 10.37 21 10.38 22 10.45 22 10.46 23 10.58 23 10.59 24 10.75 24 10.76 25 10.86 25 10.87 26 10.88 27 10.92 27 10.93 28 11.02 28 11.03 29 11.04 29 11.05 30 11.23 30 11.24 31 11.71 31 11.72 32 12.19 32 12.2 33 12.21 34 12.44 34 12.45 35 12.99 35 }; \end{axis} \end{tikzpicture}% \caption{\textit{A realization of arrival process 
corresponding to the sample path of the arrival rate process in Fig.\ \ref{fig:sn}}} \label{fig:arrivals} \end{figure} \vspace{3mm} We write $\L$ (i.e., without argument) for a random variable with distribution equal to that of $\lim_{t\to\infty}\L(t)$. We now present well-known transient and stationary moments of the shot-noise process, see Appendix \ref{app:cov} and e.g.\ \cite{Ross}: with $B$ distributed as $B_1$, \begin{align} \label{eq:momentsSN} \nonumber\E \L(t) = \L(0)e^{-rt} + \f{\nu\E B}{r}(1-e^{-rt})&, \quad\E\L=\f{\nu\E B}{r},\\ \Var \L(t)= \f{\nu\E B^2}{2r}(1-e^{-2rt})&, \quad\Var \L = \f{\nu\E B^2}{2r}, \\ \nonumber\Cov(\L(t),\L(t+\d))= e^{-r\d} \Var \L(t)&. \end{align} We remark that, for convenience, we assume throughout that $\Lambda(0)=0$. The results can be readily extended to the case in which $\L(0)$ is a non-negative random variable, at the cost of somewhat more cumbersome notation. \vspace{3mm} In the one-dimensional case we denote $\beta(s) = \E e^{-s B}$, while in the multidimensional case, in which $s=(s_1,s_2,\ldots,s_d)$ is a vector for some integer $d\geq2$, we write \[ \beta(s) = \E e^{-\sum_i s_i B_i}. \] The following lemma will be important for the derivation of the joint transform of $\Lambda(t)$ and the number of jobs in the system, in both the single-node and multi-node cases. \begin{lemma} \label{lemma} Let $\L(\cdot)$ be a shot-noise process. Let $f:\R\times\R^{d}\to\R$ be a function which is piecewise continuous in its first argument, with at most a countable number of discontinuities. Then it holds that \begin{align*} &\E \exp\left(\int_0^t f(u,z) \L(u) \dif u - s \L(t)\right)\\ &= \exp\left(\nu \int_0^t\left( \beta\left(s e^{-r(t-v)}-e^{rv}\int_{v}^t f(u,z) e^{-ru}\dif u \right) -1\right)\dif v\right). \end{align*} \end{lemma} \begin{proof} See Appendix \ref{app:proof}. \end{proof} \section{A single infinite-server queue} In this section we study the $M_{\rm S}/G/\infty$ queue.
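Since, conditional on their Poisson number, the shot epochs in $[0,t]$ are i.i.d.\ uniform, the representation \eqref{eq:SN} allows exact sampling of $\L(t)$, and the transient formulas in \eqref{eq:momentsSN} can be checked by simulation. A minimal sketch in plain Python (the ${\rm Exp}(1)$ shot distribution and the parameter values are illustrative choices):

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's inversion method; adequate for moderate rates
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def sample_shot_noise(nu, r, t, draw_shot):
    """Exact draw of Lambda(t) for Lambda(0) = 0: shots arrive on [0, t]
    as a rate-nu Poisson process and decay exponentially at rate r."""
    n = poisson(nu * t)
    return sum(draw_shot() * math.exp(-r * (t - random.uniform(0.0, t)))
               for _ in range(n))

nu, r, t = 2.0, 1.0, 3.0
draws = [sample_shot_noise(nu, r, t, lambda: random.expovariate(1.0))
         for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / (len(draws) - 1)

# Transient moments from the text (E B = 1, E B^2 = 2 for Exp(1) shots):
mean_theory = nu * 1.0 / r * (1 - math.exp(-r * t))
var_theory = nu * 2.0 / (2 * r) * (1 - math.exp(-2 * r * t))
```

The Monte Carlo estimates match $\E\L(t)\approx 1.90$ and $\Var\L(t)\approx 2.00$ up to sampling error.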
This is a single infinite-server queue, of which the arrival process is a Cox process driven by the shot-noise process $\L(\cdot)$, as defined in Section \ref{sec:Notation and preliminaries}. First we derive exact results in Section \ref{sec: Exact analysis}, where we find the joint transform of the number of jobs in the system and the shot-noise rate, and derive expressions for the expected value and variance. Subsequently, in Section \ref{sec:asymptotic analysis}, we derive a functional central limit theorem for this model. \label{sec:A single infinite-server queue} \subsection{Exact analysis} \label{sec: Exact analysis} We let $J_i$ be the service requirement of the $i$-th job, where $J_1,J_2,\ldots$ are assumed to be i.i.d.; in the sequel $J$ denotes a random variable that is equal in distribution to $J_1$. Our first objective is to find the distribution of the number of jobs in the system at time $t$, in the sequel denoted by $N(t)$. This can be found in several ways; because of the appealing underlying intuition, we here provide an argument in which we approximate the arrival rate on intervals of length $\Delta$ by a constant, and then let $\Delta\downarrow 0$. This procedure works as follows. We let $\L(t)=\Lambda(\omega,t)$ be an arbitrary sample path of the driving shot-noise process. Given $\Lambda(t)$, the number of jobs that arrived in the interval $[k\Delta,(k+1)\Delta)$ and are still in the system at time $t$, has a Poisson distribution with parameter $\Pb(J>t-(k\Delta+\Delta U_k))\cdot \Delta\Lambda(k\Delta) + o(\Delta)$, where $U_1, U_2,\ldots$ are i.i.d.\ standard uniform random variables. Summing over $k$ yields that the number of jobs in the system at time $t$ has a Poisson distribution with parameter \[ \sum_{k=0}^{t/\Delta-1} \Pb(J>t-(k\Delta+\Delta U_k)) \Delta\Lambda(k\Delta) + o(\Delta), \] which converges, as $\Delta\downarrow0$, to \begin{equation} \label{eq:randompar} \int_0^t \Pb(J>t-u) \L(u) \dif u. 
\end{equation} The argument above is not new: a similar observation was mentioned in e.g.\ \cite{EMW1993}, for deterministic rate functions. Since $\L(\cdot)$ is actually a stochastic process, we conclude that the number of jobs has a mixed Poisson distribution, with the expression in Eqn.\ \eqref{eq:randompar} as random parameter. As a consequence, we find by conditioning on $\mathcal{F}_t$, \begin{eqnarray} \label{trans1} \nonumber\xi(t,z,s)&:=&\E z^{N(t)} e^{-s\Lambda(t)} = \E\left(e^{-s\L(t)}\E\left(z^{N(t)}\,|\,\mathcal{F}_t\right)\right)\\ &=& \E\exp\left( \int_0^t (z-1) {\mathbb P}(J>t-u)\Lambda(u) \dif u - s\Lambda(t) \right). \end{eqnarray} We have found the following result. \begin{theorem}\label{THM1} Let $\L(\cdot)$ be a shot-noise process. Then \begin{equation} \label{eq:thm} \log \xi(t,z,s)= \nu \int_0^t\left( \beta\left((1-z) e^{rv} \int_{v}^t {\mathbb P}(J>t-u)e^{-ru}\dif u + s e^{-r(t-v)} \right) -1\right)\dif v. \end{equation} \end{theorem} \begin{proof} The result follows directly from Lemma \ref{lemma} and Eqn.\ \eqref{trans1}. \end{proof} \begin{comment} \begin{remark} Note that Eqn.\ \eqref{eq:thm} coincides with the result derived by Mayank for ${\mathbb P}(J>t-u) = \exp({-\mu(t-u)})$. Indeed, then we have \begin{align*} e^{rv} \int_v^t e^{-\mu(t-u) - ur} \dif u &= \f{1}{\mu-r} (e^{-r(t-v)}-e^{-\mu(t-v)}), \end{align*} thus Eqn.\ \eqref{eq:thm} becomes \[ \log {\mathbb E}\, z^{N(t)}e^{-s\Lambda(t)} =\nu \int_0^t\left( \beta\left( \f{1-z}{\mu-r} (e^{-r(t-v)}-e^{-\mu(t-v)}) + s e^{-r(t-v)} \right) -1\right)\dif v. \] Now substitute $t-v=u$, then this equals \[ \log {\mathbb E}\, z^{N(t)}e^{-s\Lambda(t)} =\nu \int_0^t\left( \beta\left( \f{1-z}{\mu-r} (e^{-ru}-e^{-\mu u}) + s e^{-ru} \right) -1\right)\dif v, \] in which we recognize the old result if we let $t\to\infty$. \end{remark} We can now obtain transient moments of the number of jobs: \begin{equation} \label{EN} \E N(t) = \left. 
\f{\partial \xi(t,z,0)}{{\partial} z} \right|_{z=1} = - \nu \beta'(0) \int_0^t \int_v^t \Pb(J>t-u) e^{r(v-u)} \dif u\; \dif v. \end{equation} Define the function $h_{r,\mu}$ by \[ t\mapsto \left\{ \begin{array}{ll} \f{\mu(1-e^{-rt})-r(1-e^{-\mu t})}{\mu - r} & \text{if } \mu\neq r\\ 1-e^{-rt}-rte^{-rt} & \text{if } \mu = r. \end{array} \right. \] In case $J$ is exponentially distributed with mean $1/\mu$, then we find \begin{align*} \E N(t) &= \f{\nu \E B}{r\mu} h_{r,\mu}(t) = \f{\E \Lambda}{\mu} h_{r,\mu}(t).\\ \Var N(t) &= \f{\nu \E B^2}{r\mu} h_{r,\mu}(t) = \f{2 \Var \Lambda}{\mu} h_{r,\mu}(t), \end{align*} where $\Lambda$ denotes $\lim_{t\to\infty} \Lambda(t)$. \begin{remark} From Eqn.\ \eqref{trans1} we would get \begin{equation} \label{EN2} \E N(t) = \int_0^t \E \Lambda(s) \Pb(J>t-s) \dif s \end{equation} which does not immediately seem to correspond to Eqn.\ \eqref{EN}. However, we have to recall that in Eqn.\ \eqref{EN} we implicitly assumed that $\Lambda(0)=0$, because of the choice of the representation of $\Lambda$. In that case, one should note that \[ \E \Lambda(s) = \f{\nu\E B }{r}(1-e^{-rs}). \] Then we can do the following calculation to show equality between Eqns. \eqref{EN} and \eqref{EN2} in this particular case: \begin{align*} \E N(t) &= \int_0^t \f{\nu\E B }{r}(1-e^{-rw})\Pb(J>t-w) \dif w\\ &=\nu\E B \int_0^t \int_0^s e^{-ru} \dif u \Pb(J>t-w)\dif w\\ &=\nu\E B \int_0^t \int_v^t e^{-r(u-(t-w))} \dif u \Pb(J>t-w) \dif w\\ &=\nu\E B \int_0^t \int_v^t e^{-r(u-v)} \Pb(J>v) \dif u \dif v. \end{align*} \end{remark} \end{comment} In Thm.\ \ref{THM1} we found that $N(t)$ has a Poisson distribution with the random parameter given in Eqn.\ \eqref{eq:randompar}. This leads to the following expression for the expected value \begin{equation} \label{eq:EN} \E N(t) = \int_0^t \E \Lambda(u) \Pb(J>t-u) \dif u. 
\end{equation} In addition, by the law of total variance we find \begin{equation}\label{eq:VN} \Var N(t) = \Var \left(\int_0^t \L(u) \Pb(J>t-u) \dif u\right) + \E \left(\int_0^t \L(u) \Pb(J>t-u) \dif u\right). \end{equation} The first term on the right-hand side can be evaluated further, using an approximation argument that resembles the one used above. Using a Riemann sum approximation, we find \begin{eqnarray*} \lefteqn{\Var \left(\int_0^t \Lambda(u) \Pb(J>t-u) \dif u\right) = \lim_{\Delta\downarrow0} \Var \left(\sum_{i=0}^{t/\Delta-1} \L(i \Delta) \Pb(J> t-i \Delta) \Delta\right)}\\ &=&2 \lim_{\Delta\downarrow0} \sum_{i=0}^{t/\Delta-1} \sum_{j>i}^{t/\Delta-1} \Cov(\L(i \Delta) \Pb(J>t-i \Delta) \Delta, \L(j \Delta) \Pb(J>t-j \Delta) \Delta)\\ &=& 2\int_0^t \int_v^t \Cov(\L(u),\L(v)) \Pb(J>t-u) \Pb(J>t-v) \dif u \dif v, \end{eqnarray*} where the diagonal terms, being of total order $\Delta$, vanish in the limit. For $u\geq v$, we know that $\Cov(\L(u),\L(v)) = e^{-r(u-v)} \Var \L(v)$ (cf. Lemma \ref{cov}). We thus have that (\ref{eq:VN}) equals \[2\int_0^t \int_v^t e^{-r(u-v)} \Var \L(v) \Pb(J>t-u) \Pb(J>t-v) \dif u \dif v + \int_0^t \E\L(u) \Pb(J>t-u) \dif u. \] We can make this more explicit using the corresponding formulas in \eqref{eq:momentsSN}. \begin{example}[Exponential case]\label{Ex32} Consider the case in which $J$ is exponentially distributed with mean $1/\mu$ and $\L(0)=0$. Then we can calculate the mean and variance explicitly. We have \[ \E N(t) = \f{\E \Lambda}{\mu} h_{r,\mu}(t), \] where the function $h_{r,\mu}(\cdot)$ is defined by \[ t\mapsto \left\{ \begin{array}{ll} {\displaystyle \f{\mu(1-e^{-rt})-r(1-e^{-\mu t})}{\mu - r} } & \text{if } \mu\neq r,\\ 1-e^{-rt}-rte^{-rt} & \text{if } \mu = r. \end{array} \right.
\] For the variance, we thus find for $\mu\neq r$ \[ \Var N(t) = \f{\nu\E B^2}{2r}\f{r^2(1-e^{-2\mu t})+\mu^2(1-e^{-2rt}) + \mu r(4e^{-t(\mu+r)}-e^{-2\mu t} - e^{-2rt} -2)}{\mu (\mu-r)^2 (\mu+r)}+\E N(t), \] and for $\mu=r$ \[ \Var N(t) = \f{\nu\E B^2}{4 r^3}\left(1-e^{-2rt}-2rt(1+rt)e^{-2rt}\right) + \E N(t). \] \end{example} \subsection{Asymptotic analysis} \label{sec:asymptotic analysis} This subsection focuses on deriving a functional central limit theorem (FCLT) for the model under study, after appropriately scaling the shot rate of the shot-noise process. In the following we assume that the service requirements are exponentially distributed with rate $\mu$; in Remark \ref{rem:kiefer} below we point out how this can be generalized to a general service-time distribution. We follow the standard approach to derive the FCLT for infinite-server queueing systems; we mimic the argumentation used in e.g.\ \cite{PW2007,Anderson2016}. As the proof has a relatively large number of standard elements, we restrict ourselves to the most important steps. \vspace{3mm} We apply a linear scaling to the shot rate of the shot-noise process, i.e.\ $\nu\mapsto n\nu$. It is readily checked that under this scaling, both the steady-state level of the shot-noise process and the steady-state number of jobs in the queue blow up by a factor $n$. It is our objective to prove that, after appropriate centering and normalization, the process recording the number of jobs in the system converges to a Gaussian process. In the $n$-th scaled model, the number of jobs in the system at time $t$, denoted by $N_n(t)$, has the following (obvious) representation: with $A_n(t)$ denoting the number of arrivals in $[0,t]$, and $D_n(t)$ the number of departures, \begin{equation} \label{eq:FCLTrep} N_n(t) = N_n(0) + A_n(t) - D_n(t).
\end{equation} Here, $A_n(t)$ corresponds to a Cox process with a shot-noise-driven rate, and therefore we have, with $\Lambda_n(s)$ the shot-noise in the scaled model at time $s$ and $S_A(\cdot)$ a unit-rate Poisson process, \[A_n(t) = S_A\left(\int_0^t \Lambda_n(u){\rm d}u\right);\] in line with our previous assumptions, we put $\L_n(0)=0.$ For our infinite-server model the departures $D_n(t)$ can be written as, with $S_D(\cdot)$ a unit-rate Poisson process (independent of $S_A(\cdot)$), \[D_n(t)=S_D\left(\int_0^t \mu N_n(u)\dif u\right).\] We start by identifying the average behavior of the process $N_n(t)$. Following the reasoning of \cite{Anderson2016}, assuming that $N_n(0)/n \Rightarrow \rho(0)$ (where `$\Rightarrow$' denotes weak convergence), $N_n(t)/n$ converges almost surely to the solution of \begin{equation}\label{RHO} \rho(t) = \rho(0) + \int_0^t \E\L(u)\dif u - \int_0^t \mu \rho(u)\dif u. \end{equation} \iffalse Let $\Rightarrow$ and $\stackrel\Pb\longrightarrow$ refer to weak convergence and convergence in probability, respectively, and define the function space $D=D([0,\infty),\R)$ as the set of all right-continuous functions with left limits, which we endow with the standard Skorohod $J_1$ topology, see e.g.\ \cite{Billingsley}. The following uniform convergence can be established: \[ \sup_{0\leq s\leq t} \left|\f1n A_n(t) - \int_0^t \E\L(u) \dif u\right| \stackrel{\Pb}{\longrightarrow}0. \] Furthermore, assume that $\f1n N_n(0) \Rightarrow \rho(0)$ in $\R$. Then $\f1n N_n(t)$ converges in probability, uniformly on compacts, to the solution of $ \rho(t) = \rho(0) + \int_0^t \E\L(u)\dif u - \int_0^t \mu \rho(s)\dif s. $ Assuming $\E\Lambda(0)=0$,\fi This equation is solved by $\rho(t)= \E N(t)$, with $\E N(t)$ provided in Example \ref{Ex32}. Now we move to the derivation of the FCLT. Following the approach used in \cite{Anderson2016}, we proceed by studying an FCLT for the input rate process. 
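As a numerical sanity check on \eqref{RHO} (a sketch with illustrative parameter values, not tied to the rest of the paper): integrating $\rho'(u)=\E\L(u)-\mu\rho(u)$, $\rho(0)=0$, with a standard fourth-order Runge-Kutta step reproduces the closed form $\E N(t)=(\E\Lambda/\mu)\,h_{r,\mu}(t)$ of Example \ref{Ex32}.

```python
import math

# Illustrative parameters (mu != r), with Lambda(0) = 0
nu, EB, r, mu = 2.0, 1.0, 0.5, 1.5

def ELam(u):
    # E Lambda(u) for Lambda(0) = 0, cf. the transient moment formulas
    return nu * EB / r * (1.0 - math.exp(-r * u))

def fluid_limit(t, steps=20_000):
    """RK4 integration of rho'(u) = E Lambda(u) - mu * rho(u), rho(0) = 0."""
    f = lambda u, y: ELam(u) - mu * y
    h, rho, u = t / steps, 0.0, 0.0
    for _ in range(steps):
        k1 = f(u, rho)
        k2 = f(u + h / 2, rho + h * k1 / 2)
        k3 = f(u + h / 2, rho + h * k2 / 2)
        k4 = f(u + h, rho + h * k3)
        rho += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        u += h
    return rho

def EN(t):
    # closed-form mean job count for exponential services, mu != r
    h = (mu * (1 - math.exp(-r * t)) - r * (1 - math.exp(-mu * t))) / (mu - r)
    return (nu * EB / r) / mu * h
```

For instance, `fluid_limit(4.0)` and `EN(4.0)` agree to many decimal places; both function names are ours, introduced only for this check.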
To this end, we first define \[\hat{\L}_n(t) := \sqrt n \left(\f1n\L_n(t)-\E\L(t)\right); \qquad \hat{K}_n(t) := \int_0^t \hat\L_n(u)\dif u.\] The following lemma states that $\hat K_n(\cdot)$ converges to an integrated Ornstein-Uhlenbeck (OU) process, corresponding to an OU process $\hat{\L}(\cdot)$ with a speed of mean reversion equal to $r$, long-run equilibrium level 0, and variance $\sigma_\L^2:=\nu\E B^2/(2r)$. \begin{lemma} \label{lem:IOU} Assume that the shot sizes, distributed as $B$, satisfy $\E B<\infty$ and $\E B^2<\infty$. Then $\hat{K}_n(\cdot)\Rightarrow \hat{K}(\cdot)$ as $n\to\infty$, where \begin{equation} \label{eq:barL} \hat{K}(t) = \int_0^t \hat{\L}(u) \dif u, \end{equation} in which $\hat{\L}$ satisfies, with $W_1(\cdot)$ a standard Brownian motion, \begin{equation} \label{eq:barhat} \hat{\L}(t) = \sigma_\Lambda W_1(t)- r \int_0^t \hat{\L}(u)\dif u. \end{equation} \end{lemma} \begin{proof} The proof is standard; for instance, by setting the $\l_d$ of \cite[Prop.\ 3]{Britt} to zero, it follows that $\hat\L_n(\cdot) \Rightarrow \hat\L(\cdot)$. This implies $\hat{K}_n(\cdot)\Rightarrow \hat{K}(\cdot)$, using \eqref{eq:barL} together with the continuous mapping theorem. \end{proof} Interestingly, the above result entails that the arrival rate process displays mean-reverting behavior. This also holds for the job count process in standard infinite-server queues. In other words, the job count process in the queueing system we are studying can be considered as the composition of two mean-reverting processes. We make this more precise in the following. \vspace{3mm} From now on we consider the following centered and normalized version of the number of jobs in the system: \[\hat N_n(t) := \sqrt n \left(\f1n N_n(t) - \rho(t)\right).\] We assume that $\hat N_n(0)\Rightarrow \hat N(0)$ as $n\to\infty$. To prove the FCLT, we rewrite $\hat N_n(t)$ in a convenient form.
Mimicking the steps performed in \cite{Anderson2016} or \cite{PW2007}, with $\bar S_A(t):=S_A(t)-t$, $\bar S_D(t):=S_D(t)-t$, \[R_n(t):= \bar S_A\left(\int_0^t \Lambda_n(u){\rm d}u\right) - \bar S_D\left(\mu\int_0^t N_n(u){\rm d}u\right),\] and using the relation (\ref{RHO}), we eventually obtain \[\hat N_n(t)=\hat N_n(0) +\frac{R_n(t)}{\sqrt{n}} +\hat K_n(t)-\mu\int_0^t \hat N_n(u){\rm d}u.\] Our next goal is to apply the martingale FCLT to the martingales $R_n(t)/\sqrt{n}$; see e.g.\ \cite{Ethier} and \cite{whitt2007} for background on the martingale FCLT. The quadratic variation equals \[\left[\frac{R_n}{\sqrt{n}}\right]_t = \frac{1}{n}\left(S_A\left(\int_0^t \Lambda_n(u){\rm d}u\right) + S_D\left(\mu\int_0^t N_n(u){\rm d}u\right)\right),\] which converges to $\int_0^t \E\L(u){\rm d}u+\mu\int_0^t\rho(u){\rm d}u.$ Appealing to the martingale FCLT, we obtain the following result. \begin{theorem} \label{thm:FCLT} The centered and normalized version of the number of jobs in the queue satisfies an FCLT: $\hat N_n(\cdot)\Rightarrow\hat N(\cdot)$ as $n\to\infty$, where $\hat N(t)$ solves the stochastic integral equation \[\hat N(t) =\hat N(0) +\int_0^t \sqrt{\E\L(u)+\mu \rho(u)} \,{\rm d}W_2(u) + \hat K(t) -\mu\int_0^t \hat N(u){\rm d}u,\] with $W_2(\cdot)$ a standard Brownian motion that is independent of the Brownian motion $W_1(\cdot)$ we introduced in the definition of $\hat K(\cdot)$. \end{theorem} \iffalse Furthermore, it is required that the arrival process satisfies an FCLT, i.e., \begin{equation} \label{eq:FCLTA} \sqrt n \left(\f1n A_n(t) - \int_0^t \E\L(u)\dif u\right)\Rightarrow A(t)\quad\text{in } D, \end{equation} for some (undetermined) process $A$. In the following we show that this FCLT holds, where $A$ is a sum of a Brownian motion and an Ornstein-Uhlenbeck process.
For notational convenience, define \begin{alignat}{2} \label{eq:hats} \nonumber\hat{\L}_n(t) &:= \sqrt n \left(\f1n\L_n(t)-\E\L(t)\right); &&\qquad \hat{K}_n(t) := \int_0^t \hat\L_n(u)\dif u;\\ \hat{C}_n(t) &:= \sqrt n \left(\f1n A_n(t) - \f1n\int_0^t \L_n(s) \dif s\right); &&\qquad\hat{A}_n(t) := \sqrt n \left(\f1n A_n(t)-\int_0^t \E\L(u)\dif u\right);\\ \nonumber\hat N_n(t) &:= \sqrt n \left(\f1n N_n(t) - \rho(t)\right). \end{alignat} Consider the following decomposition: \begin{align} \label{eq:decomp} \hat{A}_n(t) =\hat{C}_n(t) + \hat{K}_n(t). \end{align} This way, we essentially decompose the fluctuation of the arrival process into fluctuation of the Cox process and fluctuation of the underlying stochastic arrival rate. We are able to prove FCLTs for both terms separately, in Lemmas \ref{lem:IOU} and \ref{lem:det} below. \iffalse \begin{lemma} \label{lem:IOU} Assume that for a shot $B$ it holds that $\E B,\E B^2<\infty$. If $\hat{\L}_n(0)\Rightarrow \hat{\L}(0)$ in $\R$, then $\hat{K}_n(t)\Rightarrow \hat{K}(t)$ in $D$, as $n\to\infty$, where \begin{equation} \label{eq:barL} \hat{K}(t) = \int_0^t \hat{\L}(u) \dif u, \end{equation} in which $\hat{\L}$ satisfies \begin{equation} \label{eq:barhat} \hat{\L}(t) = \hat{\L}(0) - r \int_0^t \hat{\L}(u)\dif u + \sigma W_1(t), \end{equation} with $\sigma^2=\nu\E B/(2r^2)$ and where $W_1$ is a standard Brownian motion. In other words, $\hat{K}$ is an integrated Ornstein-Uhlenbeck (OU) process, of which the corresponding non-integrated OU process $\hat{\L}$ has a speed of mean reversion $r$, long-run equilibrium level 0 and variance $\sigma^2$. \end{lemma} \begin{proof} It can be seen that, according to e.g.\ Proposition 3 in \cite{Britt}, by putting the $\l_d$ in their paper to zero, $\hat\L_n \Rightarrow \hat\L$, where $\hat{K}$ satisfies \eqref{eq:barL}, with $\sigma^2,W_1$ as mentioned in the statement of this Lemma. Since integration is a continuous mapping, the lemma follows by the continuous mapping theorem. 
\end{proof}\fi For the next lemma we need the notion of quadratic variation, which can be defined as follows (cf. \cite{PW2007}, Thm. 3.3). \begin{definition}[Quadratic variation] If $M_1$ and $M_2$ are (local) martingales with $M_1(0)=M_2(0)=0$, then \[ [M_1,M_2](t)\equiv\lim_{n\to\infty} \sum_{i=1}^\infty (M_1(t_{n,i})-M_1(t_{n,i-1}))(M_2(t_{n,i})-M_2(t_{n,i-1})), \] where $t_{n,i}:= t\wedge(i2^{-n})$ and the convergence as $n\to\infty$ is convergence in probability. The limit is independent of the way that the points $t_{n,i}$ are chosen in the interval $[0,t]$, as long as $t_{n,i}>t_{n,i-1}$ and the maximum difference $t_{n,i}-t_{n,i-1}$ for points inside the interval $[0,t]$ goes to zero as $N$ tends to infinity. Furthermore, if $M_1=M_2$ with probability $1$, then we denote $[M_1]=[M_2]=[M_1,M_2]$. \end{definition} \begin{lemma} \label{lem:det} Under the same assumptions as in Lemma \ref{lem:IOU}, with $\L(\cdot)$ starting in stationarity, it holds that $\hat{C}_n \Rightarrow \sigma_C W_2$, where $W_2$ is a standard Brownian motion and $\sigma_C^2 = \nu \E B /r$. \end{lemma} \begin{proof} Note that $\hat{C}_n$ is a continuous-time martingale. The quadratic varation of $\hat{C}_n$ is given by \begin{align*} \label{eq:quadraticvar} [\hat{C}_n(t)] = \f1n C\Big(\int_0^t \L_n(u) \dif u\Big), \end{align*} because, in the second equality, the quadratic variation is the sum of the jumps squared, and all jumps are of size one. Since $\L(\cdot)$ is a stationary process, it follows that \[ [\hat{C}_n(t)] \Rightarrow \int_0^t \E\L(u)\dif u = \f{\nu\E B} r t. \] Using the Martingale FCLT in \cite{PW2007}, we conclude the proof. \end{proof} In view of Lemmas \ref{lem:IOU} and \ref{lem:det}, we have thus shown that the arrival process satisfies the following FCLT. 
\begin{proposition} \label{prop:A} Under the assumptions of Lemmas \ref{lem:IOU} and \ref{lem:det}, the scaled arrival process of Eqn.\ \eqref{eq:FCLTA} satisfies an FCLT, with limiting process the sum of a Brownian motion and an integrated OU process, \[ \hat{A} = \sigma_C W_2 + \bar\L, \] where $\bar\L$ is defined in \eqref{eq:barL}. \end{proposition} \vspace{3mm} Now we center $N_n(t)$ by its fluid limit $\rho(t)$ and scale by $\sqrt n$, we rewrite it using the representation in Eqn.\ \eqref{eq:FCLTrep}, yielding \begin{align*} \hat{N}_n(t) = \sqrt n \left(\f1n N_n(t) - \rho(t)\right) &= \sqrt n \left(\f1n N_n(t) - \left[\rho(0) + \int_0^t \E\L(u)\dif u - \int_0^t \mu\rho(u)\dif u\right]\right)\\ &=\sqrt n\left(\f1n N_n(0)-\rho(0)\right) + \hat{A}_n(t)\\ &- \sqrt n\left(\f1nS\left(\int_0^t\mu N_n(u)\dif u\right)-\int_0^t\mu\rho(u)\dif u\right), \end{align*} of which the former two terms converge to zero in $\R$ and $\hat{A}$ in $D$, respectively, as mentioned before, and the latter can be rewritten as follows \begin{equation} \label{eq:rewrite} -\sqrt n \left(\f1nS\left(\int_0^t \mu N_n(u)\dif u\right)-\int_0^t \mu \f1n N_n(u) \dif u \right) - \mu \sqrt n \left(\int_0^t \f1n N_n(u) \dif u - \int_0^t \rho(u)\dif u\right). \end{equation} Thus \begin{equation} \label{eq:cmt} \hat{N}_n(t) = X_n(t) - \mu \int_0^t \hat{N}_n(u)\dif u, \end{equation} with \[ X_n(t) := \sqrt n\left(\f1n N_n(0)-\rho(0)\right) +\hat{A}_n(t)-\sqrt n \left(\f1n S\left(\int_0^t \mu N_n(u)\dif u\right)-\int_0^t \mu \f1n N_n(u) \dif u \right). \] The first term tends to zero by assumption. Again using the martingale FCLT in \cite{PW2007}, now for the last term, and assuming that $N$ is a stationary process, yields that \[ \sqrt n \left(\f1nS\left(\int_0^t \mu N_n(u)\dif u\right)-\int_0^t \mu \f1n N_n(u) \dif u \right) \Rightarrow \sigma_S W_3(t), \] with $\sigma_S^2 = \mu\E N = {\E \L}$ (cf.\ Eqn.\ \eqref{eq:EN}) and where $W_3$ is a Brownian motion, independent of $(W_1,W_2)$. 
Note that, due to an equation in \eqref{eq:momentsSN}, we find $\sigma_S^2= {\nu\E B}/{r}$. Conclude that $X_n\Rightarrow X$ in $D$, with \[ X(t) = \hat A(t) - \sigma_S W_3(t), \] It holds that $\hat N_n(t)$ is a continuous function of $X_n(t)$, cf.\ Eqn.\ \eqref{eq:cmt}. Hence, by the continuous mapping theorem and by virtue of Eqn.\ \eqref{eq:cmt}, $\hat N_n$ converges in $D$ to the solution of the following equation \[ \hat N(t) = X(t) - \int_0^t \mu \hat N(u)\dif u. \] We summarize the results found so far in the following theorem. \begin{theorem} \label{thm:FCLT} Assume that \eqref{eq:simplicity} holds and the arrival process $\L(\cdot)$ is a shot-noise-driven Cox process. Furthermore, assume that $\L$ and $N$ are stationary processes. The scaled arrival process of Eqn.\ \eqref{eq:hats} satisfies the following FCLT \[ \hat A_n(t) \Rightarrow \bar\L(t) + \f{\nu \E B}{r} W_2(t), \] where $\bar\L$ is an integrated Ornstein-Uhlenbeck process, as defined in Lemma \ref{lem:IOU}, and $W_2$ is a standard Brownian motion independent of $W_1$ (which is the driving Brownian motion of $\bar\L$). Furthermore, $\hat N_n(t) \Rightarrow \hat N(t)$, where $\hat N(t)$ satisfies the stochastic differential equation \[ \dif \hat N(t) = - \mu \hat N(t) + \sqrt 2 \sigma_S \dif \tilde W(t) + \dif \bar\L(t), \] where $\sigma_S^2:= \E\L=\nu\E B/r$ and $\tilde W$ is a standard Brownian motion which is independent of $W_1$. In other words, the limiting number of jobs process $\hat N$ is an Ornstein-Uhlenbeck process, driven by a Brownian motion and an integrated Ornstein-Uhlenbeck process. \end{theorem} \fi \begin{remark} \label{rem:arrival} In passing, we have proven that the arrival process as such obeys an FCLT. 
With \[\hat{A}_n(t) := \sqrt n \left(\f1n A_n(t)-\int_0^t \E\L(u)\dif u\right),\] we find that $\hat{A}_n(t)\Rightarrow \hat A(t)$ as $n\to\infty$, where \[\hat A(t) := \int_0^t \sqrt{\E\L(u)+\mu \rho(u)} \,{\rm d}W_2(u) + \hat K(t)=\int_0^t \sqrt{2\mu \rho(u)+\rho'(u)} \,{\rm d}W_2(u) + \hat K(t);\] the last equality follows from the fact that $\rho(\cdot)$ satisfies (\ref{RHO}). \end{remark} \begin{remark} \label{rem:kiefer} The FCLT can be extended to non-exponential service requirements by making use of \cite[Thm.\ 3.2]{PW2010}. Their approach relies on two assumptions: \begin{itemize} \item[$\circ$] The arrival process should satisfy an FCLT; \item[$\circ$] The service times are i.i.d.\ non-negative random variables with a general c.d.f., independent of the arrival process. \end{itemize} As noted in Remark \ref{rem:arrival}, the first assumption is satisfied for the model in this paper. The second assumption holds as well. In the non-exponential case the results are less clean; in general, the limiting process can be expressed in terms of a Kiefer process, cf.\ e.g.\ \cite{strongapproximations}. \end{remark} \section{Networks} \label{sec:Networks} Now that the reader is familiar with the one-dimensional setting, we extend it to networks. In this section, we focus on feedforward networks in which each node corresponds to an infinite-server queue. Feedforward networks are defined as follows. \begin{definition}[feedforward network] \label{def:ff} Let $G=(V,E)$ be a directed graph with nodes $V$ and edges $E$. The nodes represent infinite-server queues, and the directed edges indicate how jobs move through the system. We suppose that there are no cycles in $G$, i.e., there is no sequence of nodes, starting and ending at the same node, in which each pair of consecutive nodes is joined by an edge, traversed consistently with its orientation. \end{definition} We focus on feedforward networks to keep the notation manageable.
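The reduction of a feedforward network to a collection of parallel tandems, used in the remainder of this section, amounts algorithmically to enumerating all directed entry-to-exit paths of the acyclic graph. A minimal sketch of this bookkeeping (the graph encoding and the function name are ours, purely illustrative):

```python
from itertools import chain

def tandem_decomposition(edges, nodes):
    """Enumerate all directed entry-to-exit paths of a feedforward
    (acyclic) network; each path is one tandem of the decomposition."""
    succ = {v: [] for v in nodes}
    for u, v in edges:
        succ[u].append(v)
    # entry nodes are those without incoming edges
    entries = [v for v in nodes if all(v != w for _, w in edges)]

    def extend(path):
        last = path[-1]
        if not succ[last]:          # exit node: path complete
            yield tuple(path)
        for v in succ[last]:
            yield from extend(path + [v])

    return list(chain.from_iterable(extend([v]) for v in entries))

# the merge example of the figure below: nodes 1 and 2 both feed node 3
print(tandem_decomposition([(1, 3), (2, 3)], [1, 2, 3]))
# → [(1, 3), (2, 3)]
```

Here the merged node $3$ appears in both tandems, in line with the duplication argument below.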
In Thm.\ \ref{thm:main}, we derive the transform of the numbers of jobs in all nodes, jointly with the shot-noise process(es), for feedforward networks. Nonetheless, we provide Remark \ref{example: network with loops} to show that analysis is in fact possible if there is a loop, but at the expense of more involved calculations. \vspace{3mm} Since all nodes represent infinite-server queues, one can see that whenever a node has multiple input streams, it is equivalent to multiple infinite-server queues that operate independently of each other, but have the same service speed and induce the same service requirements for arriving jobs. Consider Fig.\ \ref{fig:graaf} for an illustration. The reason why this holds is that different job streams move independently through the system, without creating waiting times for others. Therefore, merging streams does not increase the complexity of our network. The same holds for `splits' in job streams. By this we mean that after jobs have finished their service at a server, they move to server $i$ with probability $q_i$ (with $\sum_i q_i=1$). Then, one can simply sample the entire path that the job will take through the system, at the instant the job arrives at its first server.
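This path-sampling step can be sketched as follows (a toy illustration with hypothetical paths and split probabilities; the names are ours and not part of the model):

```python
import random

def sample_path(paths_with_probs, rng):
    """At a job's arrival instant, sample the entire path it will take:
    path i is chosen with probability q_i (a generalized Bernoulli,
    or `categorical', draw)."""
    u, acc = rng.random(), 0.0
    for path, q in paths_with_probs:
        acc += q
        if u < acc:
            return path
    return paths_with_probs[-1][0]  # guard against rounding

rng = random.Random(7)
paths = [(("a1", "a2"), 0.3), (("b1",), 0.7)]  # split: q_1 = 0.3, q_2 = 0.7
counts = {p: 0 for p, _ in paths}
for _ in range(10_000):
    counts[sample_path(paths, rng)] += 1
# the empirical split frequencies are close to (0.3, 0.7)
```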
\begin{figure} \centering \begin{tikzpicture}[>=latex ,auto ,node distance =4 cm and 5cm ,on grid , semithick , state/.style ={ circle , color =white , draw,processblue , text=black , minimum width =1 cm}] \node[state] (A) {$1$}; \node[state] (B) [right =2 of A] {$3$}; \node[state] (C) [below =2 of A] {$2$}; \node[state] (D) [below =2 of B] {$3'$}; \node[state] (E) [below left=1 and 5.2 of A]{$3$}; \node[state] (F) [below left=2 of E]{$2$}; \node[state] (G) [above left=2 of E]{$1$}; \path[->] (A) edge (B); \path[->] (C) edge (D); \path[->] (F) edge (E); \path[->] (G) edge (E); \coordinate[left =1.5 of G](c); \coordinate[left =1.5 of F](d); \coordinate[right=1.5 of E](e); \draw[->] (c) to [left] node[auto] {} (G); \draw[->] (d) to [left] node[auto] {} (F); \draw[->] (E) to [right] node[auto] {} (e); \coordinate[right =1.5 of D](f); \coordinate[right =1.5 of B](g); \draw[->] (B) to [right] node[auto] {} (g); \draw[->] (D) to [right] node[auto] {} (f); \coordinate[left = 1.5 of A](h); \coordinate[left = 1.5 of C](i); \draw[->] (h) to [right] node[auto] {} (A); \draw[->] (i) to [right] node[auto] {} (C); \end{tikzpicture} \caption{\textit{Since the jobs are not interfering with each other, the network on the left is equivalent to the graph on the right. Node 3' is a copy of node 3: it works at the same speed and induces the same service requirements.}} \label{fig:graaf} \end{figure} With the above observations in place, every feedforward network reduces to parallel tandem systems in which the first node of each tandem system is fed by external input. The procedure to decompose a network into parallel tandems consists of finding all paths from nodes at which jobs enter the system to nodes at which jobs leave the system. Each of these paths is subsequently considered as a tandem queue, and these tandem queues are then put in parallel. To build up to the main result, we first study tandem systems in Section \ref{sec:tandem}.
Subsequently, we put the tandem systems in parallel in Section \ref{sec:parallel} and finally we present the main theorem and some implications in Section \ref{sec:mainthm}. \subsection{Tandem systems} \label{sec:tandem} As announced, we proceed by studying tandem systems. In Section \ref{sec:parallel} below, we study $d$ parallel tandem systems, where $i=1,\ldots,d$. In this subsection we consider the $i$-th of these tandem systems. Suppose that tandem $i$ has $S_i$ service facilities and the input process at the first node is Poisson, with a shot-noise arrival rate $\L_i(\cdot)$. We assume that jobs enter node $i1$. When they finish service, they enter node $i2$, etc., until they enter node $iS_i$ after which they leave the system. We use $ij$ as a subscript referring to node $j$ in tandem system $i$ and we refer to the node as node $ij$. Hence $N_{ij}(t)$ and $ J_{ij}$ denote the number of jobs in node $ij$ at time $t$, and a copy of a service requirement, respectively, where $j=1,\ldots,S_i$. Fix some time $t>0$. Again we derive results by splitting time into intervals of length $\Delta$. Denote by $M_{ij}(k,\Delta)$ the number of jobs present in node $ij$ at time $t$ that have entered node $i1$ between time $k\Delta$ and $(k+1)\Delta$; as we keep $t$ fixed we suppress it in our notation. Because jobs are not interfering with each other in the infinite-server realm, we can decompose the transform of interest: \begin{equation} \label{eq:chop} \E \left(\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) = \lim_{\Delta\downarrow0}\prod_{k=0}^{t/\Delta-1} \E \left(\prod_{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta)}\right). 
\end{equation} Supposing that the arrival rate is a deterministic function of time $\l_i(\cdot)$, by conditioning on the number of arrivals in the $k$-th interval, \begin{align*} \E \left(\prod_{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta)} \right)&= \sum_{m=0}^\infty e^{-\l_i(k\Delta) \Delta} \f{(\l_i(k\Delta)\Delta)^m}{m!} \left( f_i(k\Delta,z) \right)^m\\ &= \exp\Big(\Delta \l_i(k\Delta)(f_i(k\Delta,z)-1) \Big), \end{align*} where \begin{equation} \label{eq:defpsi} f_i(u,z) := p_i(u) + \sum_{j=1}^{S_i} z_{ij} p_{ij}(u), \end{equation} in which $p_i(u)$ ($p_{ij}(u)$, respectively) denotes the probability that the job that entered tandem $i$ at time $u$ has already left the tandem (is in node $j$, respectively) at time $t$. Note that \begin{equation} \label{eq:p} p_i(u) = \Pb\left(\sum_{\ell=1}^{S_i} J_{i\ell} < t-u\right),\quad p_{ij}(u) = \Pb\left(\sum_{\ell=1}^{j-1} J_{i\ell} < t-u, \sum_{\ell=1}^{j} J_{i\ell}>t-u\right). \end{equation} Recognizing a Riemann sum and letting $\Delta\downarrow0$, we conclude that Eqn.\ \eqref{eq:chop} takes the following form: \[ \E \left(\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} \right)= \exp\left(\int_0^t \lambda_i(u) (f_i(u,z)-1)\dif u\right). \] In case of a stochastic rate process $\Lambda_i(\cdot)$, we obtain \begin{equation*} \E\left.\left(\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} \,\right|\,\L_i(\cdot)\right) = \exp\left(\int_0^t \Lambda_i(u) (f_i(u,z)-1)\dif u\right). \end{equation*} Therefore it holds that \begin{eqnarray*} \E\left( \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s\L_i(t)}\right)& =& \E\left(\E\left(\left.\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}e^{-s\L_i(t)}\,\right|\,\L_i(\cdot)\right)\right)\\& =& \E\left(e^{-s\L_i(t)} \E\left(\left.\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\,\right|\,\L_i(\cdot)\right)\right), \end{eqnarray*} and we consequently find \begin{equation} \label{eq:trans2} \E\left( \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s\L_i(t)}\right) = \E \exp\left(\int_0^t \Lambda_i(u) (f_i(u,z)-1)\dif u - s\L_i(t)\right). 
\end{equation} \subsection{Parallel (tandem) systems} \label{sec:parallel} Now that the tandem case has been analyzed, the next step is to put the tandem systems as described in Section \ref{sec:tandem} in parallel. We assume that there are $d$ parallel tandems. There are different ways in which dependence between the parallel systems can be created. Two relevant models are listed below, and illustrated in Fig.\ \ref{fig:M1M2}. \begin{itemize} \item[\textbf{(M1)}] Let $\L\equiv\L(\cdot)$ be a $d$-dimensional shot-noise process $(\L_1,\ldots,\L_d)$ where the shots in all $\L_i$ occur simultaneously (the shot distributions and decay rates may be different). The process $\L_i$, for $i=1,\ldots,d$, corresponds to the arrival rate of tandem system $i$. Each tandem system has its own arrival process; given the shot-noise arrival rates, these Cox processes are conditionally independent. \item[\textbf{(M2)}] Let $\L\equiv\L(\cdot)$ be the shot-noise rate of a Cox process. The corresponding Poisson process generates {\it simultaneous} arrivals in all tandems. \end{itemize} \begin{figure}[h!]
\centering \begin{tikzpicture}[>=latex ,auto ,node distance =4 cm and 5cm] \node (L1) {$\Lambda_1$}; \node (L2) [below =0.8 of L1] {$\Lambda_2$}; \draw[<->] (L1) to (L2); \node[box] (T1) [right = 1.2 of L1] {Tandem 1}; \path[->] (L1) edge (T1); \node[box] (T2) [right = 1.2 of L2] {Tandem 2}; \path[->] (L2) edge (T2); \coordinate[right =1.2 of T2](c2); \draw[->] (T2) to [right] node[auto] {} (c2); \coordinate[right =1.2 of T1](c1); \draw[->] (T1) to [right] node[auto] {} (c1); \node[box,right=4 of c1](T12){Tandem 1}; \coordinate[right =1.2 of T12](d1); \draw[->] (T12) to [right] node[auto] {} (d1); \node[box,right=4 of c2](T22){Tandem 2}; \coordinate[right =1.2 of T22](d2); \draw[->] (T22) to [right] node[auto] {} (d2); \node (L) [below right = 0.4 and 1 of c1] {$\Lambda$}; \node[phase] (split) [right=1 of L] {}; \draw[->] (L) to (split); \coordinate[left=0 of T12] (e1); \coordinate[left=0 of T22] (e2); \draw[->] (split) to (e1); \draw[->] (split) to (e2); \end{tikzpicture} \caption{\textit{Model (M1) is illustrated on the left, and Model (M2) is illustrated on the right. The rectangles represent tandem systems, which consist of an arbitrary number of nodes in series.}} \label{fig:M1M2} \end{figure} \begin{remark} The model in which there is essentially one shot-noise process that generates arrivals for all queues independently is a special case of Model (M1). This can be seen by setting all components of $\L=(\L_1,\ldots,\L_d)$ equal, i.e., by letting the shots and decay rates be identical. \end{remark} In Model (M1), correlation between the shot-noise arrival rates induces correlation between the numbers of jobs in the different queues. In Model (M2), correlation clearly appears because all tandem systems have the same input process. Of course, the tandem systems will not behave identically, because the jobs may have different service requirements.
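To make the shot-noise mechanics underlying both models concrete, a small numerical sketch (ours, purely illustrative): between shot epochs the rate decays exponentially, and in Model (M1) the two components share their shot epochs but have their own shot sizes and decay rates.

```python
import math

def shot_noise(t, shot_times, shot_sizes, r):
    """Shot-noise value at time t: every shot of size B_k at epoch
    T_k <= t contributes B_k * exp(-r * (t - T_k))."""
    return sum(b * math.exp(-r * (t - s))
               for s, b in zip(shot_times, shot_sizes) if s <= t)

# Model (M1): common shot epochs, component-specific sizes and decay rates
epochs = [0.5, 1.2, 2.0]
lam1 = shot_noise(2.5, epochs, [1.0, 0.4, 0.9], r=1.0)
lam2 = shot_noise(2.5, epochs, [0.2, 0.8, 0.5], r=0.5)
# sanity check: with no shots in (2.0, 2.5], the rate just decays,
# i.e. lam1 equals shot_noise(2.0, ...) * exp(-1.0 * 0.5)
```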
In short, correlation across different tandems in Model (M1) is due to linked arrival rates, and correlation in Model (M2) is due to simultaneous arrival epochs. We feel that both versions are relevant, depending on the application, and hence we analyze both. \paragraph{Analysis of (M1) ---} Suppose that the dependency is of the type as in Model (M1). This means that the shots, in each component of $\L$, occur simultaneously. Recall the definition of $f_i$ as stated in Eqn.\ \eqref{eq:defpsi}. It holds that \begin{align} \label{eq:prelemmaM1} \nonumber \E\left(\prod_{i=1}^d e^{-s_i\L_i(t)} \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) \nonumber &= \E\left(\prod_{i=1}^d \E\left(\left.\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s_i\L_i(t)} \,\right|\, \L_i(\cdot)\right)\right)\\ \nonumber &= \E\left(\prod_{i=1}^d\left( e^{-s_i\L_i(t)} \E\left(\left.\prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\, \right| \,\L_i(\cdot)\right)\right)\right)\\ &=\E\exp\left(\sum_{i=1}^d\left(\int_0^t \L_i(u)(f_i(u,z)-1)\dif u - s_i\L_i(t)\right)\right), \end{align} where the last equality holds due to (\ref{eq:trans2}). \paragraph{Analysis of (M2) ---} Now suppose that the dependency in this model is of type (M2), i.e., there is one shot-noise process that generates simultaneous arrivals in the parallel tandem systems. First we assume a deterministic arrival rate function $\l(\cdot)$. Let $M_{ij}(k,\Delta)$ be the number of jobs present in tandem system $i$ at node $j$ at time $t$ that have arrived in the system between $k\Delta$ and $(k+1)\Delta$. Note that \[ \E\left(\prod_{i=1}^d \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}\right) = \lim_{\Delta\downarrow0} \prod_{k=0}^{t/\Delta-1} \E\left(\prod_{i=1}^d\prod_{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta)}\right). 
\] To further evaluate the right-hand side of the previous display, we observe that we can write \[ \E\left(\prod_{i=1}^d \prod_{j=1}^{S_i} z_{ij}^{M_{ij}(k,\Delta)}\right) = \sum_{m=0}^\infty e^{-\l(k\Delta)\Delta}\f{(\l(k\Delta)\Delta)^m}{m!}(f(k\Delta,z))^m =e^{\Delta\l(k\Delta)(f(k\Delta,z)-1)},\]where \begin{equation} \label{eq:defupsilon} f(u,z) := \sum_{\ell_1=1}^{S_1+1}\cdots\sum_{\ell_d=1}^{S_d+1} p_{\ell_1,\ldots, \ell_d} \prod_{i=1}^d z_{i\ell_i}; \end{equation} in this definition $p_{\ell_1,\ldots, \ell_d}\equiv p_{\ell_1,\ldots, \ell_d}(u)$ equals the probability that the job that arrived at time $u$ is, for each $i=1,\ldots,d$, in node $\ell_i$ of tandem $i$ at time $t$ (cf.\ Eqn.\ \eqref{eq:p}). The case $\ell_i=S_i+1$ means that the job has left tandem system $i$; we define $z_{i,S_i+1} = 1$. In a similar fashion as before, we conclude that \begin{equation} \label{eq:prelemmaM2} \E\left( \prod_{i=1}^d \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s \L(t)} \right)= \E\exp\left(\int_0^t \L(u)(f(u,z)-1)\dif u - s\L(t)\right), \end{equation} with $f$ defined in Eqn.\ \eqref{eq:defupsilon}. \begin{example}[Two-node parallel system] \label{example:two-node parallel system} In the case of a parallel system of two infinite-server queues, $f(u,z)$ simplifies to \begin{align*} f(u,z_{11},z_{21}) = \sum_{\ell_1=1}^2 \sum_{\ell_2=1}^2 z_{1\ell_1}z_{2\ell_2} p_{\ell_1,\ell_2} = z_{11}z_{21} p_{11} + z_{21}p_{21} + z_{11}p_{12} + p_{22}. \end{align*} \end{example} \begin{remark}[Routing] Consider a feedforward network with routing. As argued in the beginning of this section, the network can be decomposed as a parallel tandem system. If there is splitting at some point, one decomposes the network as a parallel system, in which each tandem $i$ receives the job with probability $q_i$, such that $\sum q_i=1$.
This can be incorporated simply by adjusting the probabilities contained in $f_i$ in Eqn.\ \eqref{eq:prelemmaM1}, which are given in Eqn.\ \eqref{eq:p}, so that they include the event that the job joined the tandem under consideration. For instance, the expression for $p_i(u)$ in the left equation in \eqref{eq:p} would become \[ \Pb\bigg(Q=i, \sum_{\ell=1}^{S_i} J_{i\ell}<t-u\bigg), \] where $Q$ is a random variable with a generalized Bernoulli (also called `categorical') distribution, i.e., \[ \Pb(\text{job is assigned to tandem } i) = \Pb(Q=i)=q_i,\quad\text{for } i=1,\ldots,d, \] with $\sum q_i=1$; the right equation in \eqref{eq:p} is adjusted similarly. Other than that, the analysis is the same for the case of splits. \end{remark} \begin{remark}[Networks with loops] \label{example: network with loops} So far we only considered feedforward networks. Networks with loops can be analyzed as well, but the notation becomes quite cumbersome. To illustrate the method by which networks with loops and routing can be analyzed, we consider a specific example. Suppose that arrivals enter node one, after which they enter node two. After they have been served in node two, they go back to node one with probability $\eta$, or leave the system with probability $1-\eta$. In this case, with similar techniques as before, we can find \[ \E\left( z_1^{N_1(t)}z_2^{N_2(t)}\right) = \E\exp\left(\int_0^t \L(u)(f(u,z_1,z_2)-1)\dif u \right), \] with \[ f(u,z_1,z_2) = \Pb(\,\text{job($u$) left system}) + \sum_{i=1}^2 z_i \Pb(\,\text{job($u$) is in node $i$}), \] in which $\text{job}(u)$ is the job that arrived at time $u$ and we are examining the system at time $t$. Now, if we denote service times in the $j$-th node by $J^{(j)}$, then, at a specific time $t$, \[ \Pb(\,\text{job($u$) left system}) = \sum_{k=0}^\infty \Pb\left(\sum_{i=1}^{k+1} (J_i^{(1)}+J_i^{(2)}) \leq t-u \right) \eta^k(1-\eta).
\] Analogously, $\Pb(\,\text{job($u$) is in node $1$})$ equals, by conditioning on the job having taken $k$ loops, \[ \sum_{k=0}^\infty \eta^{k} \Pb\left(J_{k+1}^{(1)} + \sum_{i=1}^{k} (J_i^{(1)} + J_i^{(2)}) > t-u , \sum_{i=1}^{k} (J_i^{(1)} + J_i^{(2)}) \leq t-u\right); \] likewise, $\Pb(\,\text{job($u$) is in node $2$})$ equals \[ \sum_{k=0}^\infty \eta^{k} \Pb\left(\sum_{i=1}^{k+1}( J_i^{(1)} + J_i^{(2)}) > t-u , J^{(1)}_{k+1} + \sum_{i=1}^{k}( J_i^{(1)} + J_i^{(2)}) \leq t-u\right). \] For example, in case all $J_i^{(j)}$ are independent and exponentially distributed with mean $1/\mu$, we can calculate those probabilities explicitly. Indeed, if we denote by $Y$ a Poisson process with rate $\mu$, then e.g., \begin{align*} \Pb\left(\sum_{i=1}^{k+1} (J_i^{(1)} + J_i^{(2)}) > t-u , J^{(1)}_{k+1} + \sum_{i=1}^{k} (J_i^{(1)} + J_i^{(2)} )\leq t-u\right) &= \Pb(Y(t-u)=2k+1)\\ &= e^{-\mu (t-u)} \f{(\mu (t-u))^{2k+1}}{(2k+1)!} \end{align*} and thus \[ \Pb(\,\text{job($u$) is in node $2$}) = \sum_{m=0}^\infty \eta^m e^{-\mu (t-u)} \f{(\mu (t- u))^{2m+1}}{(2m+1)!}. \] A similar calculation can be done for the probability that the job is in node one. Recalling that a sum of exponentials has a Gamma distribution, we can write \begin{eqnarray*} f(u,z_1,z_2) &=& z_1\sum_{m=0}^\infty \eta^m e^{-\mu (t-u)} \f{(\mu (t-u))^{2m}}{(2m)!} + z_2 \sum_{m=0}^\infty \eta^m e^{-\mu (t-u)} \f{(\mu (t-u))^{2m+1}}{(2m+1)!}\\ &&+ \,\sum_{m=0}^\infty \eta^m(1-\eta) F_{\Gamma(2m+2,\mu)}(t-u)\\ &=& z_1 e^{-\mu(t-u)} \cosh\left( {\mu\sqrt{\eta} (t-u)} \right) + z_2 \f{e^{-\mu(t-u)}}{\sqrt{\eta}}\sinh\left(\mu\sqrt{\eta}(t-u) \right)\\ &&\,+ (1-\eta)\sum_{ m=0}^\infty \eta^m F_{\Gamma(2m+2,\mu)}(t-u), \end{eqnarray*} where $F_{\Gamma(2m+2, \mu)}$ denotes the distribution function of a $\Gamma$-distributed random variable with rate $\mu$ and shape parameter $2m+2$. 
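The series-to-closed-form steps above, $\sum_{m\ge0} \eta^m x^{2m}/(2m)! = \cosh(\sqrt{\eta}\,x)$ and $\sum_{m\ge0} \eta^m x^{2m+1}/(2m+1)! = \sinh(\sqrt{\eta}\,x)/\sqrt{\eta}$, admit a quick numerical sanity check (ours, with arbitrary test values $\eta=0.3$ and $x=\mu(t-u)=1.5$):

```python
import math

def node_series(eta, x, terms=60):
    """Truncated series for P(job(u) in node 1) and P(job(u) in node 2),
    up to the common factor exp(-x), with x = mu * (t - u)."""
    p1 = sum(eta**m * x**(2*m) / math.factorial(2*m) for m in range(terms))
    p2 = sum(eta**m * x**(2*m + 1) / math.factorial(2*m + 1) for m in range(terms))
    return p1, p2

eta, x = 0.3, 1.5
p1, p2 = node_series(eta, x)
s = math.sqrt(eta)
# closed forms: cosh(sqrt(eta) * x) and sinh(sqrt(eta) * x) / sqrt(eta)
assert abs(p1 - math.cosh(s * x)) < 1e-12
assert abs(p2 - math.sinh(s * x) / s) < 1e-12
```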
\end{remark} \subsection{Main result} \label{sec:mainthm} In this subsection we summarize the above and conclude with our main result. Recall Definition \ref{def:ff} of a feedforward network. In the beginning of Section \ref{sec:Networks} we argued that we can decompose a feedforward network into parallel tandems. In Section \ref{sec:parallel} we studied exactly those systems, leading up to the following result. \begin{theorem} \label{thm:main} Suppose we have a feedforward network of infinite-server queues, where the input process is a Poisson process with shot-noise arrival rate. Then the network can be decomposed into parallel tandem systems. In Model (M1), it holds that \begin{eqnarray*} \lefteqn{\hspace{-35mm}\E \left(\prod_{i=1}^d \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)} e^{-s_i\L_i(t)} \right)= \E\exp\left(\sum_{i=1}^d\left(\int_0^t \L_i(u)(f_i(u,z)-1)\dif u - s_i\L_i(t)\right)\right)}\\ &=&\exp\left(\nu \int_0^t \left(\beta(g(s,v))-1\right)\dif v\right), \end{eqnarray*} with $f_i(\cdot,\cdot)$ as defined in Eqn.\ \eqref{eq:defpsi} and where $g(s,v)$ is a vector-valued function in which component $i$ is given by \[ s_i e^{-r_i(t-v)} - e^{r_iv} \int_v^t (f_i(u,z)-1) e^{-r_iu} \dif u. \] Furthermore, in Model (M2), \begin{eqnarray*} \lefteqn{\E\left( \prod_{i=1}^d \prod_{j=1}^{S_i} z_{ij}^{N_{ij}(t)}e^{-s\L(t)} \right)=\E\exp\left(\int_0^t \L(u)(f(u,z)-1)\dif u - s\L(t)\right)}\\ &=& \exp\left(\nu\int_0^t \left(\b\left( se^{-r(t-v)} - e^{rv} \int_v^t (f(u,z)-1)e^{-ru}\dif u \right)-1\right)\dif v\right), \end{eqnarray*} with $f(\cdot,\cdot)$ as defined in Eqn.\ \eqref{eq:defupsilon}. \end{theorem} \begin{proof} These are Eqns.\ \eqref{eq:prelemmaM1} and \eqref{eq:prelemmaM2} to which we applied Lemma \ref{lemma}. \end{proof} Next we calculate covariances, first between nodes in tandem and thereafter between nodes in parallel.
\paragraph{Covariance in Tandem System ---}Consider a tandem system consisting of two nodes; we want to analyze the covariance between the numbers of jobs in the two nodes. Dropping the index of the tandem system, denote by $N_1(\cdot)$ and $N_2(\cdot)$ the numbers of jobs in node 1 and 2, respectively. Using Eqn.\ \eqref{eq:prelemmaM1}, differentiation yields \[ \E N_2(t) = \int_0^t \Pb(J_1<t-u, J_1+J_2>t-u) \E \L(u) \dif u \] and \[ \E N_1(t) N_2(t) = \E\left(\int_0^t \Pb(J_1<t-u, J_1+J_2>t-u) \L(u)\dif u \int_0^t \Pb(J_1>t-v)\L(v) \dif v\right) \] so that \begin{eqnarray*} \lefteqn{\Cov(N_1(t),N_2(t))}\\& =& \Cov\left(\int_0^t \Pb(J_1<t-u, J_1+J_2>t-u) \L(u)\dif u, \int_0^t \Pb(J_1>t-v)\L(v) \dif v\right)\\ &=& \int_0^t \int_0^t \Pb(J_1<t-u, J_1+J_2>t-u)\Pb(J_1>t-v) \Cov(\L(u),\L(v)) \dif u \dif v\\ &= & \int_0^t \int_0^t \Pb(J_1<t-u, J_1+J_2>t-u)\Pb(J_1>t-v) e^{-r|u-v|} \Var \L(u\wedge v) \dif u \dif v, \end{eqnarray*} cf.\ Eqn.\ \eqref{eq:momentsSN} for the last equality. \paragraph{Covariance parallel (M1) ---} Consider a parallel system consisting of two nodes only. In order to study covariance in the parallel (M1) case, we need a result about the covariance of the corresponding shot-noise process. \begin{lemma} \label{cov} Let $\L_1(\cdot),\L_2(\cdot)$ be shot-noise processes of which the jumps occur simultaneously according to a Poisson arrival process with rate $\nu$. Let the decay be exponential with rate $r_1, r_2$, respectively. Then it holds that, for $\d>0$, \begin{equation} \label{eq:covl1l2} \Cov(\L_1(t),\L_2(t+\d)) = e^{-r_2\d} \Cov(\L_1(t),\L_2(t)) = e^{-r_2\d} \f{\nu \E B_{11} B_{12}}{r_1+r_2}(1-e^{-(r_1+r_2)t}), \end{equation} which, in case $\L_1=\L_2$, reduces to \[ \Cov(\L_i(t),\L_i(t+\d)) = e^{-r_i\d} \Var \L_i(t),\quad\text{for } i=1,2, \] corresponding to {\em \cite[{\em p}.\ 394]{Ross}} and Eqn.\ \eqref{eq:momentsSN}. \end{lemma} \begin{proof} See Appendix \ref{app:cov}.
\end{proof} By making use of Eqn.\ \eqref{eq:prelemmaM1}, we find \[ \E N_1(t) N_2(t) = \E\left( \int_0^t \Lambda_1(u)\Pb(J_1>t-u)\dif u \int_0^t \Lambda_2(v)\Pb(J_2>t-v) \dif v\right). \] This implies \begin{eqnarray*} \lefteqn{\Cov(N_1(t),N_2(t)) = \Cov\left(\int_0^t \L_1(u)\Pb(J_1>t-u)\dif u, \int_0^t \L_2(v)\Pb(J_2>t-v)\dif v\right)}\\ &=&\int_0^t \int_0^t \Cov(\L_1(u), \L_2(v)) \Pb(J_1>t-u)\Pb(J_2>t-v) \dif u \dif v, \end{eqnarray*} where we can make use of the fact that, for $u\geq v$, \begin{align*} \Cov(\L_1(u),\L_2(v)) = \f{\nu \E B_{11}B_{12}}{r_1+r_2} \left(1-e^{-(r_1+r_2)v}\right)e^{-r_1(u-v)}, \end{align*} cf.\ Lemma \ref{cov}; for $u<v$, the roles of $\L_1$ and $\L_2$ (and hence of $r_1$ and $r_2$) are interchanged. \paragraph{Covariance parallel (M2) ---} Extracting the mixed moment from the transform in Eqn.\ \eqref{eq:prelemmaM2}, we derive directly that \begin{eqnarray*} \E N_1(t)N_2(t) &=& \E\left( \int_0^t \L(u) \Pb(J_1>t-u,J_2>t-u)\dif u\right)\\ && +\, \E \left(\int_0^t \L(u) \Pb(J_1>t-u)\dif u \int_0^t \L(u) \Pb(J_2>t-u)\dif u\right). \end{eqnarray*} This implies \begin{eqnarray*} \Cov(N_1(t),N_2(t)) &=& \Cov\left(\int_0^t \L(u)\Pb(J_1>t-u)\dif u, \int_0^t \L(u)\Pb(J_2>t-u)\dif u\right)\\ &&+ \,\int_0^t \E \Lambda(u) \Pb(J_1>t-u,J_2>t-u) \dif u. \end{eqnarray*} The following proposition compares the correlations present in Models (M1) and (M2). In the proposition we refer to the number of jobs in queue $j$ in Model (M$i$) at time $t$ as $N_j^{(i)}(t)$, for $i=1,2$. We find the anticipated result that the correlation in Model (M2) is stronger than in Model (M1). \begin{proposition} Let $\Lambda(\cdot)$ be the shot-noise process that generates simultaneous arrivals in both queues and let $\Lambda_1(\cdot), \Lambda_2(\cdot)$ be processes that have simultaneous jumps and generate arrivals in both queues independently.
Suppose that $ \Lambda_1(t) \stackrel{{\rm d}}{=} \Lambda_2(t) \stackrel{{\rm d}}{=} \Lambda(t)$, for $t\geq 0$. Then, for any $t\ge 0$, \[ \Corr(N_1^{(1)}(t),N_2^{(1)}(t)) \leq \Corr(N_1^{(2)}(t),N_2^{(2)}(t)). \] \end{proposition} \begin{proof} Because of the assumption $ \Lambda_1(t) \stackrel{{\rm d}}{=} \Lambda_2(t) \stackrel{{\rm d}}{=} \Lambda(t)$, we have that, for all combinations $i,j\in\{1,2\}$, the $N_i^{(j)}(t)$ are equal in distribution. Therefore it is sufficient to show that \[ \Cov(N_1^{(1)}(t), N_2^{(1)}(t)) \leq \Cov(N_1^{(2)}(t), N_2^{(2)}(t)). \] The expressions for the covariances, which are derived earlier in this section, imply that \[\Cov(N_1^{(2)}(t), N_2^{(2)}(t)) - \Cov(N_1^{(1)}(t), N_2^{(1)}(t))= \E \int_0^t \Lambda(u) \Pb(J_1>t-u, J_2>t-u)\dif u,\] which is non-negative, as desired. \end{proof} \section{Concluding remarks} \label{sec:concluding remarks} We have considered networks of infinite-server queues with shot-noise-driven Coxian input processes. For the single queue, we found explicit expressions for the Laplace transform of the joint distribution of the number of jobs and the driving shot-noise arrival rate, as well as a functional central limit theorem for the number of jobs in the system under a particular scaling. The results were then extended to a network context: we derived an expression for the joint transform of the numbers of jobs in the individual queues, jointly with the values of the driving shot-noise processes. We included the functional central limit theorem for the single queue, but it is anticipated that a similar setup carries over to the network context, albeit at the expense of considerably more involved notation. Our future research will include the study of the departure process of a single queue; the output stream should remain Coxian, though of a different type than the input process. \iffalse \section{Output analysis} In this section the departure (or: output) process is studied.
Intuitively, it is clear that if all jobs had some fixed deterministic size, then the departure process would be generated by a shifted version of the arrival process. In particular, in the $M_S/G/\infty$ model studied in this paper, one would find that the departure process is a shot-noise-driven Cox process as well. The question arises to what extent this holds if the service requirements are stochastic. The following proposition reveals that the departure process of an infinite-server queue fed by a Cox process is again a Cox process. This result generalizes the result in \cite{mirasol1963}, which states that the output process of an $M/G/\infty$ system is a Poisson process. We will use this proposition in the particular case that $\L$ is a shot-noise process, but it holds more generally. \begin{proposition} \label{prop:output} Let $\cdot/G/\infty$ be an infinite-server queue in which the arrival process is a Cox process with stochastic arrival rate $\{\L(t):t\in\R\}$. Then the departure process $D$ is a Cox process with rate \[ \delta(t) = \int_0^\infty \L(t-u) \dif \Pb(J\leq u), \] provided that the integral is well-defined and finite. \end{proposition} \begin{proof} We can employ an elegant argument that appeared in \cite{EMW1993}, which is about the nonhomogeneous Poisson arrival queue ($M_t/G/\infty$), and adjust it to the case of Cox processes. The argument was used before by Pr\'ekopa, who credits C. Ryll-Nardzewski for the idea during a 1953 lecture. Conditional on the entire $\L$ process (or, conditional on $\F_\infty^\L$), the arrival and service times generate a measure on $(-\infty,\infty)\times[0,\infty)$. That is, a point at $(u,v)$ corresponds to an arrival at time $u$ which has a service time $v$. The number of points in $(a,b]\times(c,d]$ is Poisson with mean $\Pb(c<J<d)\int_a^b \l(u)\dif u$. The numbers of points in disjoint rectangles $(a,b]\times(c_1,d_1]$ and $(a,b]\times(c_2,d_2]$, where $c_1<d_1<c_2<d_2$, are independent Poisson random variables.
Hence, the numbers of points in all sets in any finite collection of disjoint rectangles are independent Poisson random variables, which determines the Poisson random measure. Note that the number of departures in $[0,t)$, denoted by $D(t)$, is the number of pairs $(u,v)$ with $0\leq u+v\leq t$. Thus, conditional on $\F_\infty^\L$, the departure process $D$ is a Poisson process with rate \[ \delta(t) = \int_0^\infty \L(t-u) \dif \Pb(J\leq u), \] which concludes the proof. \end{proof} \begin{remark} \label{rem:output} In some cases this result has a clear intuitive interpretation. For example, one can verify that if $J=j$ with probability one, then the departure rate process is simply the shifted shot-noise sample path, $\delta(t)=\L(t-j)$. More generally, if we suppose that $J=j_i$ with probability $p_i$, for $i=1,\ldots,n$, then \[ \delta(t) = \sum_{i=1}^n p_i \L(t-j_i). \] In particular, as $\delta(t)$ is the sum of shot noises, it is shot noise itself. Furthermore, it can be seen that, under the assumption that $\L(t)$ starts at $t=-\infty$, and hence is in stationarity for any $t>0$, the expected departure rate equals $\E \delta(t) = \E\L = \E \L(t)$. Informally, if $\L$ is stationary, the expected departure and arrival rates are equal. Note that this does not imply that the departure process is actually a homogeneous Poisson process: for this to hold it is required that $\delta(t)=\E\L$. Additionally, if one takes $J$ to be uniformly distributed on $[0,b)$, then the conditional departure rate is, for $t\in\R$, \[ \d(t) = \f1b \int_0^{b} \L(t-u)\dif u. \] In other words, the departure rate is the average of the arrival rate over the last $b>0$ time units. Clearly, the output process is a Cox process, although its rate is not a shot-noise process. It can be seen that as $b$ grows large, $\delta(t)$ tends to a constant, which implies that the departure process tends to a homogeneous Poisson process.
\end{remark} \paragraph{Properties of departure process in case of exponential jobs ---} Now let us assume that $\L(0)=0$; we are interested in the expected value and variance of $D(t+\Delta)-D(t)$. First, considering Proposition \ref{prop:output}, note that \[ \E[D(t+\Delta)-D(t)|\F^D_t\vee\F^\L_\infty] = \int_t^{t+\Delta} \int_0^v \L(v-u)\dif \Pb(J\leq u)\dif v, \] and thus, if we assume an exponential job distribution with rate $\mu<\infty$, \begin{align*} \E[D(t+\Delta)-D(t)] &= \int_t^{t+\Delta} \int_0^v\mu \E \L(v-u)e^{-\mu u}\dif u\dif v\\ &=\f{\nu\E B}{r} \int_t^{t+\Delta} \int_0^v \mu(1-e^{-r(v-u)})e^{-\mu u}\dif u\dif v\\ &=\f{\nu\E B}{r}\left(\Delta+ \f{\mu^2 e^{-rt}(1-e^{-r\Delta}) - r^2e^{-\mu t}(1-e^{-\mu\Delta})}{r\mu(r-\mu)}\right). \end{align*} Note that this implies that, as $t\to\infty$, the expected departure rate tends to ${\nu\E B}/{r}=\E \L$, corresponding to the observation in Remark \ref{rem:output}, which stated that in stationarity the expected input and output rates are fixed and equal. Using the law of total variance, we find \begin{align} \label{eq:varD} \Var[D(t+\Delta)-D(t)]= \Var\left[\int_t^{t+\Delta} \int_0^v \L(v-u)\mu e^{-\mu u}\dif u\dif v\right] + \E[D(t+\Delta)-D(t)].
\end{align} By making use of Riemann sums and the identity expressing the variance of a sum as a sum of covariances, the first term on the right-hand side evaluates to \begin{align} \label{eq:iiiint} 2\mu^2 \int_t^{t+\Delta}\int_x^{t+\Delta}\int_0^w \int_0^x e^{-\mu v - \mu u} \Cov(\L(w-v),\L(x-u)) \dif u \dif v \dif w \dif x. \end{align} Using that \begin{align*} \Cov (\L(w-v),\L(x-u))&= \Var[\L(\min(w-v,x-u))] e^{-r|(w-v)-(x-u)|}\\ &=\f{\nu\E B^2}{2r} (1-e^{-2r\min(w-v,x-u)})e^{-r|(w-v)-(x-u)|}, \end{align*} cf.\ Eqn.\ \eqref{eq:momentsSN}, Eqn.\ \eqref{eq:iiiint} can be rewritten as \begin{equation} \label{eq:iiint2} \f{\mu^2\nu\E B^2}{r}\int_t^{t+\Delta}\int_x^{t+\Delta}\int_0^w \int_0^x e^{-\mu v - \mu u} (1-e^{-2r\min(w-v,x-u)})e^{-r|(w-v)-(x-u)|} \dif u \dif v \dif w \dif x. \end{equation} Note that the inner integral of the last expression can be rewritten as \begin{align*} \label{eq:innerint} \int_0^{(x+v-w)^+}f_1(u,v,w,x)\dif u +\int_{(x+v-w)^+}^x f_2(u,v,w,x) \dif u, \end{align*} where \[ f_1(u,v,w,x) = e^{-\mu(v+u)}(1-e^{-2r(w-v)})e^{-r((x-u)-(w-v))} \] and \[ f_2(u,v,w,x) = e^{-\mu(v+u)}(1-e^{-2r(x-u)})e^{-r((w-v)-(x-u))}. \] We split the integration in Eqn.\ \eqref{eq:iiint2} into the set of $v\geq0$ for which $x+v-w\geq0$ and the set for which $x+v-w\leq0$. Then Eqn.\ \eqref{eq:iiint2} turns into \begin{align*} \f{\mu^2\nu\E B^2}{r}\bigg(&\int_t^{t+\Delta}\int_x^{t+\Delta}\int_{w-x}^w\left(\int_0^{x+v-w} f_1(u,v,w,x)\dif u + \int_{x+v-w}^x f_2(u,v,w,x)\dif u \right)\dif v \dif w \dif x\\ +&\int_t^{t+\Delta}\int_x^{t+\Delta}\int_{0}^{w-x} \int_{0}^x f_2(u,v,w,x)\dif u\dif v \dif w \dif x\bigg).
\end{align*} After carrying out the calculations (using Mathematica) and some simplifications, this yields, for $\mu\neq r$, \begin{align*} &\frac{\nu\E B^2}{2 \mu r^3 (r-\mu)^2 (\mu +r)}\Big(2 r^3 e^{-\Delta \mu } (r-\mu )+2 r^3 (\mu +r) e^{\mu (-\Delta -2 t)}-r^3 (\mu +r) e^{-2 \mu (\Delta +t)}-r^3 (\mu +r) e^{-2\mu t}\\ &+2 (r-\mu )^2 \left(-\mu ^2+r^2 (\Delta \mu -1)+\mu r (\Delta \mu -1)\right)+4 \mu ^2 r^2 e^{(\mu +r) (-\Delta -t)}-4 \mu^2 r^2 e^{-r (\Delta +t)-\mu t}-4 \mu ^2 r^2 e^{-r t-\mu (\Delta +t)}\\ &+4 \mu ^2 r^2 e^{-t (\mu +r)}-2 \mu ^3 e^{-\Delta r}(r-\mu )-\mu ^3 (\mu +r) e^{-2 r (\Delta +t)}+2 \mu ^3 (\mu +r) e^{-r (\Delta +2 t)}-\mu ^3 (\mu +r) e^{-2 r t}\Big) \end{align*} and when we let $t\to\infty$ this yields, for $\mu\neq r$, \[ \nu \E B^2 \frac{r^3 \left(\Delta \mu +e^{-\Delta \mu }-1\right)-\Delta \mu ^3 r+\mu ^3 \left(1-e^{-\Delta r}\right)}{\mu r^3 \left(r^2-\mu ^2\right)}. \] When we let $\mu\downarrow0$ it can be verified that the above expression reduces to zero, and as $\mu\to\infty$ it tends to \[ \f{\nu\E B^2}{r^2}\left(\Delta - \f{1}{r}(1-e^{-r\Delta})\right). \] \fi \subsection*{Acknowledgements} The authors thank Marijn Jansen for insightful discussions about the functional central limit theorem in this paper. The research for this paper is partly funded by the NWO Gravitation Project NETWORKS, Grant Number 024.002.003. The research of Onno Boxma was also partly funded by the Belgian Government, via the IAP Bestcom Project. \begin{appendices} \section{Proof of Lemma \ref{lemma}} \label{app:proof} There are various ways to prove this result; we include here a procedure that relies heavily on the probabilistic properties of the shot-noise process involved. Observe that, recognizing a Riemann sum, \begin{equation} \label{RS} \int_0^t f(u,z)\Lambda(u)\,\dif u =\lim_{\Delta\downarrow 0} \Delta \sum_{k=1}^{t/\Delta} f(k\Delta,z)\Lambda(k\Delta).
\end{equation} With $P_B(t)$ a Poisson process with rate $\nu$ and the $U_i$ i.i.d.\ samples from a uniform distribution on $[0,1]$, it holds that \[\Lambda(k\Delta) = \sum_{\ell=1}^k \sum_{i = P_B({(\ell-1)\Delta})+1}^{P_B({\ell\Delta})} B_i e^{-r\Delta U_i} e^{-r(k-\ell)\Delta}.\] We thus obtain that the expression in (\ref{RS}) equals (where the equality follows by interchanging the order of summation) \begin{align*} &\lim_{\Delta\downarrow 0} \Delta \sum_{k=1}^{t/\Delta}f( k\Delta,z)\sum_{\ell=1}^k \sum_{i = P_B({(\ell-1)\Delta})+1}^{P_B({\ell\Delta})} B_i e^{-r\Delta U_i} e^{-r(k-\ell)\Delta}\\ &=\,\lim_{\Delta\downarrow 0} \Delta \sum_{\ell=1}^{t/\Delta}\sum_{i = P_B({(\ell-1)\Delta})+1}^{P_B({\ell\Delta})}B_ie^{-r\Delta U_i} \sum_{k=\ell}^{t/\Delta} f( k\Delta,z) e^{-r(k-\ell)\Delta}, \end{align*} which behaves as \[\lim_{\Delta\downarrow 0} \sum_{\ell=1}^{t/\Delta}e^{r\ell\Delta} \int_{\ell\Delta}^t f(u,z) e^{-ru}\dif u\sum_{i = P_B({(\ell-1)\Delta})+1}^{P_B({\ell\Delta})} B_i e^{-r\Delta U_i} .\] Furthermore, we have the representation \[ \Lambda(t) = \lim_{\Delta\downarrow0} \sum_{\ell=1}^{t/\Delta} \sum_{i=P_B((\ell-1)\Delta) +1}^{P_B(\ell\Delta)} B_i e^{-r(t-\ell\Delta + \Delta U_i)}. \] We conclude that $\E z^{N(t)} e^{-s\Lambda(t)}$ equals \begin{equation} \label{eq:v1} \lim_{\Delta\downarrow 0} {\mathbb E}\exp\left( \sum_{\ell=1}^{t/\Delta}\sum_{i = P_B({(\ell-1)\Delta})+1}^{P_B({\ell\Delta})} B_i \left(e^{-r\Delta U_i}e^{r\ell\Delta} \int_{\ell\Delta}^t f(u,z) e^{-ru}\dif u - s e^{-r(t-\ell\Delta+\Delta U_i)}\right)\right).
\end{equation} Conditioning on the values of $P_B(\ell\Delta)-P_B((\ell-1)\Delta)$, for $\ell=1,\ldots, t/\Delta$, and using that the pairs $(B_i,U_i)$ are i.i.d., we find that the expression in Eqn.\ \eqref{eq:v1} equals \begin{align*} &=\lim_{\Delta\downarrow 0 } \prod_{\ell=1}^{t/\Delta} e^{-\nu \Delta} \sum_{k_\ell=0}^\infty \f{(\nu\Delta)^{k_\ell}}{k_{\ell}!} \left(\E\exp\left(B_1\left(e^{-r\Delta U_1}e^{r\ell\Delta} \int_{\ell\Delta}^t f(u,z) e^{-ru} \dif u - s e^{-r(t-\ell\Delta+\Delta U_1)}\right) \right)\right)^{k_\ell}\\ &=\lim_{\Delta\downarrow 0 } \prod_{\ell=1}^{t/\Delta} e^{-\nu \Delta} \exp \left(\nu\Delta \E\exp\left(B_1\left(e^{-r\Delta U_1}e^{r\ell\Delta} \int_{\ell\Delta}^t f(u,z) e^{-ru} \dif u - s e^{-r(t-\ell\Delta+\Delta U_1)}\right) \right)\right), \end{align*} which can be written as \[\lim_{\Delta\downarrow 0}\exp\left(\nu\Delta \sum_{\ell=1}^{t/\Delta} \left(\beta\left( s e^{-r(t-\ell\Delta+\Delta U_1)} -e^{-r\Delta U_1}e^{r\ell\Delta} \int_{\ell\Delta}^t f(u,z) e^{-ru}\dif u \right) -1\right)\right).\] The lemma now follows from continuity of the exponential function and the definition of the Riemann integral. \section{Proof of Lemma \ref{cov}} \label{app:cov} Let $P_B(\cdot)$ be the Poisson process with rate $\nu$, corresponding to the occurrences of shots, and let ${\mathcal E}_{t,\delta}(n)$ be the event that $P_B(t+\d)-P_B(t)=n$. By conditioning on the number of shots in the interval $(t,t+\d]$, we find \begin{align*} \E \L_1(t) \L_2(t+\d) &= \sum_{n=0}^\infty \E(\L_1(t) \L_2(t+\d) \,|\, {\mathcal E}_{t,\delta}(n)) \Pb({\mathcal E}_{t,\delta}(n))\\ &= \sum_{n=0}^\infty \E(\L_1(t) \L_2(t+\d)\, |\,{\mathcal E}_{t,\delta}(n)) \; \f{ (\d \nu)^n}{n!} e^{-\nu\d}.
\end{align*} We proceed by rewriting the conditional expectation as \begin{align*} \E(\L_1(t) \L_2(t+\d) \,|\, {\mathcal E}_{t,\delta}(n) )= \f{1}{\delta^n} \int\limits_{t}^{t+\delta} \dots \int\limits_{t}^{t+\delta} \E(\L_1(t) \L_2(t+\d) \,|\, \mathcal{F}_{t_1,\ldots,t_n,\delta}(n)) \dif t_1 \dots \dif t_n, \end{align*} where $\mathcal{F}_{t_1,\ldots,t_n,\delta}(n)$ denotes the event that ${\mathcal E}_{t,\delta}(n)$ occurs and the arrival epochs are $t_1,\ldots,t_n$. Note that, due to Eqn.\ \eqref{eq:SN}, conditional on $\mathcal{F}_{t_1,\ldots,t_n,\delta}(n)$, we have the distributional equality \begin{equation} \label{eq:disteq} \L_2(t+\delta) = \L_2(t)e^{-r_2\d} + \sum_{i=1}^n B_{i2} e^{-r_2(t+\delta-t_i)}, \end{equation} and consequently \begin{equation} \label{eq:tbi} \E\left(\L_1(t)\L_2(t+\delta) \,|\, \mathcal{F}_{t_1,\ldots,t_n,\delta}(n) \right)=\E \L_1(t)\L_2(t) e^{-r_2\d} + \E \L_1(t) \sum_{i=1}^n \E B_{i2} e^{-r_2(t+\delta-t_i)}. \end{equation} Note that for all $i=1,\ldots,n$ we have \begin{align} \label{eq:int} \int\limits_{t}^{t+\delta} \dots \int\limits_{t}^{t+\delta}\int\limits_{t}^{t+\delta} e^{-r_2(t+\delta-t_i)} \dif t_1 \dif t_2 \dots \dif t_n = \f1{r_2}(1-e^{-r_2\d}) \d^{n-1}.
\end{align} After unconditioning Eqn.\ \eqref{eq:tbi} with respect to the arrival epochs by integrating over all $t_i$ from $t$ to $t+\d$ and dividing by $\d^n$, we thus obtain \[ \E(\L_1(t) \L_2(t+\delta) \,| \,{\mathcal E}_{t,\delta}(n) )=\E \L_1(t)\L_2(t) e^{-r_2\d} + \E \L_1(t) \f{1}{r_2\d}(1-e^{-r_2\d})n \E B_{12} \] and hence, denoting $\L_i:=\lim_{t\to\infty}\L_i (t)$ for $i=1,2$, \begin{eqnarray*} \E \L_1(t) \L_2(t+\d) &=& \sum_{n=0}^\infty \left(\E\L_1(t)\L_2(t)e^{-r_2\d} + \E \L_1(t)\f{1}{r_2\d}(1-e^{-r_2\d}) n \E B_{12}\right) \frac{(\d \nu)^n}{n!} e^{-\nu\d}\\ &=&\E\L_1(t)\L_2(t) e^{-r_2\d} + (1-e^{-r_2\d})\E \L_1(t)\E\L_2\\ &=& \E\L_1(t)\E\L_2 + e^{-r_2\d}\big(\E\L_1(t)\L_2(t) - \E\L_1(t)\E\L_2\big), \end{eqnarray*} where we made use of $\E\L_i = {\nu\E B_{1i}}/{r_i}$. It follows that \begin{eqnarray*} \Cov(\L_1(t), \L_2(t+\d)) &=& \E \L_1(t) \L_2(t+\d) - \E \L_1(t) \E \L_2(t+\d)\\ &=& \E \L_1(t)\L_2(t) e^{-r_2\d} + (1-e^{-r_2\d}) \E \L_1(t) \E \L_2\\ &&-\, \E \L_1(t)\big(\E\L_2(t) e^{-r_2\d} + (1-e^{-r_2\d})\E \L_2 \big), \end{eqnarray*} where the equality $\E\L_2(t+\delta)=\E\L_2(t)e^{-r_2\delta} + (1-e^{-r_2\delta})\E\L_2$ is used, which can be directly checked using the expressions for the mean in Eqn.\ \eqref{eq:momentsSN}. This proves the first equality in Eqn.\ \eqref{eq:covl1l2}.
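The first equality just established can be cross-checked against the closed form $\Cov(\L_1(t),\L_2(t))=\f{\nu \E B_{11}B_{12}}{r_1+r_2}(1-e^{-(r_1+r_2)t})$ via Campbell's formula: each shot at epoch $u\in(0,t]$ contributes $B_{i1}e^{-r_1(t-u)}$ to $\L_1(t)$ and $B_{i2}e^{-r_2(t-u)}$ to $\L_2(t)$, so $\Cov(\L_1(t),\L_2(t))=\nu\,\E B_{11}B_{12}\int_0^t e^{-(r_1+r_2)(t-u)}\dif u$. The following numerical sketch is ours, not part of the paper; it assumes \texttt{numpy} and the function names are hypothetical.

```python
import numpy as np

def cov_closed_form(nu, EB11B12, r1, r2, t):
    """Closed form nu E[B11 B12]/(r1+r2) * (1 - exp(-(r1+r2) t))."""
    return nu * EB11B12 / (r1 + r2) * (1.0 - np.exp(-(r1 + r2) * t))

def cov_campbell(nu, EB11B12, r1, r2, t, n=200_001):
    """Same quantity via Campbell's formula, integrating
    e^{-(r1+r2)(t-u)} over u in (0, t] with the trapezoid rule."""
    u = np.linspace(0.0, t, n)
    f = np.exp(-(r1 + r2) * (t - u))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u))
    return nu * EB11B12 * integral

# the two values agree up to discretisation error
print(cov_closed_form(2.0, 1.5, 0.7, 1.1, 3.0))
print(cov_campbell(2.0, 1.5, 0.7, 1.1, 3.0))
```

The agreement of the two numbers gives a quick sanity check of the algebra above for arbitrary parameter choices.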
The proof of the second equality follows from $\Cov(\L_1(t),\L_2(t))=\E \L_1(t)\L_2(t) - \E \L_1(t)\E\L_2(t)$, in which \begin{align*} &\E \L_1(t) \L_2(t) =\E\left[\left(\sum_{i=1}^{P_B(t)} B_{i1} e^{-r_1(t-U_i)}\right)\left(\sum_{j=1}^{P_B(t)} B_{j2} e^{-r_2(t-U_j)}\right)\right]\\ &=\sum_{n=0}^\infty e^{-{\nu t}}\f{(\nu t)^n}{n!} \sum_{i=1}^n \sum_{j=1}^n \E(B_{i1} B_{j2} e^{-r_1(t-U_i)}e^{-r_2(t-U_j)})\\ &=\sum_{n=0}^\infty e^{-{\nu t}}\f{(\nu t)^n}{n!} \Big(\f n t \E(B_{11}B_{12}) \int_0^t e^{-(r_1+r_2)(t-u)}\dif u\\ &+ \E B_{11} \E B_{12} \f{n(n-1)}{t^2} \int_0^t e^{-r_1(t-u)}\dif u \int_0^t e^{-r_2(t-v)}\dif v\Big)\\ &=\f{\nu^2\E B_{11}\E B_{12}(1-e^{-r_1 t})(1-e^{-r_2t})}{r_1r_2} + \f{\nu\E B_{11}B_{12}}{r_1+r_2}(1-e^{-(r_1+r_2)t}), \end{align*} and $\E\L_i(t)$, for $i=1,2$, is given in Eqn.\ \eqref{eq:momentsSN}. \end{appendices} {\small \bibliographystyle{plain}
\section{Introduction} Let $\phi:\mathbb{R}^n\to[0,\infty)$ be a norm in $\mathbb{R}^n$, $n\geq 2$. The associated \emph{Finsler} or \emph{anisotropic perimeter} of a Lebesgue measurable set $E\subset\mathbb{R}^n$ is defined as \[ P_{\phi}(E)=\sup \left \{ \int_{E} \mathrm{div}(V) \; dp : V \in \mathrm{C}^\infty_c(\mathbb{R}^n;\mathbb{R}^n) \textrm{ with } \max_{p\in \mathbb{R}^n} \phi(V (p)) \leq 1 \right \}. \] If $E$ is regular, $P_\phi(E)$ can be represented as a surface integral as follows \[ P_\phi(E)= \int_{\partial E}\phi^*(\nu_E)\;d\mathcal H^{n-1}, \] where $\nu_E$ is the inner unit normal to $\partial E$ and $\phi^*:\mathbb{R}^n\to[0,\infty)$ is the dual norm defined by \[ \phi^* (w ) = \max _{\phi(v) =1} \langle w, v\rangle,\quad w\in\mathbb{R}^n. \] Here, $\langle\cdot,\cdot\rangle$ denotes the standard Euclidean scalar product in $\mathbb{R}^n$ and $|\cdot|$ the Euclidean norm. In the theory of crystals, $\phi^*$ is the surface tension of the interface between an anisotropic material $E$ and a fluid, and $P_\phi(E)$ is the total free energy. In the case where $\phi=|\cdot|$, $P_\phi$ is the standard De Giorgi perimeter and isoperimetric sets (i.e., sets of fixed volume that minimize perimeter) are Euclidean balls. For a general norm $\phi$, isoperimetric sets are translations and dilations of the \emph{Wulff shape}, first considered by G.~Wulff in \cite{Wulff}. In our notation it corresponds to the unit ball of the $\phi$-norm. The first complete proof of the isoperimetric property of Wulff shapes in the class of Lebesgue measurable sets with given volume is contained in \cite{Fonseca91,FM91}, and based on the Brunn-Minkowski inequality. We refer to \cite{FMP} for a quantitative version. In this paper, we study the isoperimetric problem for sub-Finsler perimeter measures in the Heisenberg group $\mathbb H^1$. 
The latter is $\mathbb{R}^3$ endowed with the non-commutative group law \begin{equation}\label{eq:*Heis} (\xi,z)*(\xi',z')=\left(\xi+\xi',z+z'+\omega(\xi,\xi')\right),\quad \xi,\xi'\in\mathbb{R}^2,\quad z,z'\in\mathbb{R}, \end{equation} where $\omega:\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}$ is the symplectic form \begin{equation}\label{eq:omega} \omega(\xi,\xi') = \frac{1}{2}(xy'-x'y),\quad \xi=(x,y),\ \xi'=(x',y')\in\mathbb{R}^2. \end{equation} The vector fields \[ X=\frac{\partial}{\partial x}-\frac{y}{2} \frac{\partial}{\partial z} \qquad \text{and} \qquad Y=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial z} \] are left-invariant for the group action and span a two-dimensional distribution $\mathcal{D}(\H^1)$ in $T\H^1$, called the \emph{horizontal distribution}. We denote by $\mathcal{D}(p)$ the fiber of $\mathcal{D}$ at $p\in\H^1$. Given a norm $\phi:\mathbb{R}^2\to[0,\infty)$, the associated anisotropic perimeter measure in $\H^1$ is introduced in Definition \ref{d:per} and takes into account only horizontal directions. For a regular set $E\subset\mathbb{R}^3$ it can be represented as \[ \P_\phi(E)= \int_{\partial E}\phi^*(N_E)\;d\mathcal H^{2}, \] where $N_E$ is obtained by projecting the inner unit normal $\nu_E$ onto the horizontal distribution. A set $E\subset\H^1$ is said to be \emph{$\phi$-isoperimetric} if there exists $m>0$ such that $E$ minimizes \begin{equation}\label{eq:ip} \inf\left\{\P_\phi(E) : E\subset \H^1\text{ measurable, }\mathcal L^3(E)=m\right\}. \end{equation} If $\phi=|\cdot|$ is the Euclidean norm in $\mathbb{R}^2$, then $\P_\phi$ corresponds to the standard horizontal perimeter in $\H^1$, introduced and studied in \cite{CDG94,GN96,FSSC96}. In this case, the problem of characterizing $\phi$-isoperimetric sets in the class of Lebesgue measurable sets in $\H^1$ is open. 
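As a quick sanity check of the group law \eqref{eq:*Heis} (a numerical sketch of ours, not part of the paper; the helper names are hypothetical), one can verify that $*$ is associative and that the group inverse is $(\xi,z)^{-1}=(-\xi,-z)$, which follows from $\omega(\xi,-\xi)=0$.

```python
def omega(xi, eta):
    # symplectic form: omega(xi, xi') = (x y' - x' y)/2
    return 0.5 * (xi[0] * eta[1] - eta[0] * xi[1])

def mult(p, q):
    # Heisenberg product: (xi, z) * (xi', z') = (xi + xi', z + z' + omega(xi, xi'))
    (xi, z), (eta, w) = p, q
    return ((xi[0] + eta[0], xi[1] + eta[1]), z + w + omega(xi, eta))

def inv(p):
    # group inverse: (-xi, -z), using omega(xi, -xi) = 0
    (xi, z) = p
    return ((-xi[0], -xi[1]), -z)

def close(p, q, tol=1e-12):
    # componentwise comparison up to floating-point rounding
    return all(abs(a - b) < tol for a, b in zip(p[0] + (p[1],), q[0] + (q[1],)))

p, q, r = ((1.0, 2.0), 0.5), ((-0.3, 0.7), 1.0), ((2.0, -1.0), -0.25)
assert close(mult(mult(p, q), r), mult(p, mult(q, r)))  # associativity
assert close(mult(p, inv(p)), ((0.0, 0.0), 0.0))        # inverse
```

The same check also exhibits non-commutativity: the $z$-components of $p*q$ and $q*p$ differ by $2\omega(\xi,\xi')$.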
According to Pansu's conjecture \cite{P82}, $|\cdot|$-isoperimetric sets are obtained through \emph{left-translations} and \emph{anisotropic dilations} $\delta_\lambda:\H^1\to\H^1$, $\lambda>0$, \begin{equation}\label{eq:dilat} \delta_\lambda(\xi,z)=(\lambda \xi,\lambda^2 z), \end{equation} of the so-called \emph{Pansu's bubble}. An absolutely continuous curve $\gamma:I\to\H^1$ is said to be \emph{horizontal} if $\dot\gamma(t)\in\mathcal{D}(\gamma(t))$ for a.e.\ $t\in I$ and we call \emph{horizontal lift} of an absolutely continuous curve $\xi:I\to\mathbb{R}^2$ any horizontal curve $\gamma=(\xi,z)$ with \begin{equation*}\label{eq:hlift} \dot z=\omega(\xi,\dot\xi). \end{equation*} Pansu's bubble is the bounded set whose boundary is foliated by horizontal lifts of planar circles of a given radius, passing through the origin. Such horizontal curves are length minimizing between their endpoints for the sub-Riemannian distance in $\H^1$, so that Pansu's conjecture in $\H^1$ makes explicit a relation between isoperimetric sets and the geometry of the ambient space. The conjecture is supported by several results contained in \cite{RR08,M08,MR09,R12,FM16,FMM19}, but it is still unsolved in its full generality. A quantitative version of the Heisenberg isoperimetric inequality has been proposed in \cite{FLM15}. Very little is known on the isoperimetric problem when $\phi:\mathbb{R}^2\to[0,\infty)$ is a general norm in $\mathbb{R}^2$. While preparing the final version of this article, we became aware that J.~Pozuelo and M.~Ritor\'e have recently obtained several results on the problem, considering also the case where $\phi$ is convex and homogeneous, but not necessarily a norm (see \cite{PozueloRitore}). Existence of $\phi$-isoperimetric sets follows from the arguments of \cite{LR03}, see Section~\ref{s:ex}. The construction of Pansu's bubble can be generalized to the sub-Finsler context in the following way.
We call \emph{$\phi$-circle} of radius $r>0$ and center $\xi_0\in\mathbb{R}^2$ the set \begin{equation}\label{eq:qCircle} C_\phi(\xi_0,r) = \{\xi \in \mathbb{R}^2: \phi(\xi-\xi_0) =r \}, \end{equation} and we call \emph{$\phi$-bubble} the bounded set $E_\phi$ whose boundary is foliated by horizontal lifts of $\phi$-circles in the plane of a given radius, passing through the origin. In Figure~\ref{f:bubbles} we represent two $\phi$-bubbles, corresponding to $\phi=\ell^p$, with $\ell^p(x,y)=(|x|^p+|y|^p)^{\frac{1}{p}}$, in the cases $p=3$ and $p=100$. The latter can be seen as an approximation of the crystalline case. \begin{figure} \includegraphics[width=.4\textwidth]{FG3.pdf}\hspace{1cm} \includegraphics[width=.44\textwidth]{FG100.png} \caption{The $\ell^{p}$-bubbles with $p=3$ (left) and $p=100$ (right). In blue we outlined three horizontal lifts of $\ell^{p}$-circles foliating the $\ell^{p}$-bubble. } \label{f:bubbles} \end{figure} Our main result is the characterization of $\mathrm{C}^2$-smooth $\phi$-isoperimetric sets when $\phi$ and $\phi^*$ are $\mathrm{C}^2$-smooth. This result suggests that the $\phi$-bubble is the solution to the isoperimetric problem for $\P_\phi$. Here and in the following, if $\phi\in \mathrm{C}^k(\mathbb{R}^2\setminus\{0\})$ we say that $\phi$ is of class $\mathrm{C}^k$, for $k\in \mathbb{N}$. \begin{theorem}\label{thmi:class} Let $\phi$ be a norm of class $\mathrm{C}^2$ such that $\phi^*$ is of class $\mathrm{C}^2$. If $E\subset \mathbb H^1$ is a $\phi$-isoperimetric set of class $\mathrm{C}^2$ then we have $E=E_\phi$, up to left-translations and anisotropic dilations. \end{theorem} The proof of Theorem~\ref{thmi:class} is presented in Section~\ref{s:proof} and is based on a fine study of the \emph{characteristic set} of isoperimetric sets. 
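Since horizontal lifts satisfy $\dot z=\omega(\xi,\dot\xi)$, the height gained along the lift of a closed planar loop equals the signed area $\frac12\oint(x\,\mathrm{d}y-y\,\mathrm{d}x)$ enclosed by the loop. The following numerical sketch (ours, not part of the paper; it assumes \texttt{numpy}) checks this on a Euclidean circle of radius $r$ through the origin, where the increment must be $\pi r^2$; replacing the parametrization below by that of an $\ell^p$-circle produces the curves foliating $\partial E_\phi$.

```python
import numpy as np

def lift_height_increment(xi, dxi, theta):
    """Integrate z' = (x y' - y x')/2 along theta -> xi(theta) (trapezoid rule)."""
    zdot = 0.5 * (xi[0] * dxi[1] - xi[1] * dxi[0])
    return np.sum(0.5 * (zdot[1:] + zdot[:-1]) * np.diff(theta))

r = 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 400_001)
# Euclidean circle of radius r through the origin, centred at (r, 0):
xi  = np.array([r + r * np.cos(theta), r * np.sin(theta)])
dxi = np.array([-r * np.sin(theta),    r * np.cos(theta)])
dz = lift_height_increment(xi, dxi, theta)
# the two values agree up to discretisation error
print(dz, np.pi * r**2)
```

The nonzero height increment over one loop is precisely why the lifted circles spiral in the $z$-direction and close up only after translating by the enclosed area.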
The characteristic set of a set $E\subset\H^1$ of class $\mathrm{C}^1$ (equivalently, of its boundary $\partial E$) is \begin{equation}\label{eq:char} \mathcal{C}(E)=\mathcal{C}(\partial E)=\{p \in \partial E : T_p \partial E = \mathcal{D}(p)\}. \end{equation} In Section~\ref{s:char} we characterize the structure of $\mathcal{C}(E)$ for a $\mathrm{C}^2$-smooth $\phi$-isoperimetric set $E\subset\H^1$, proving that $\mathcal{C}(E)$ is made of isolated points. For the more general case of $\phi$-critical surfaces we obtain the following result, which we prove by adapting to the sub-Finsler case the theory of Jacobi fields of \cite{RR08}. Any $\phi$-critical surface has constant $\phi$-curvature and the definition is presented in Section~\ref{s:char}. \begin{theorem}\label{thmi:char} Let $\phi$ and $\phi^*$ be of class $\mathrm{C}^2$ and let $\Sigma\subset \H^1$ be a complete and oriented surface of class $\mathrm{C}^2$. If $\Sigma$ is $\phi$-critical with non-vanishing $\phi$-curvature then $\mathcal{C}(\Sigma)$ consists of isolated points and $\mathrm{C}^2$ curves that are either horizontal lines or horizontal lifts of simple closed curves. \end{theorem} The simple closed curves of Theorem~\ref{thmi:char} are described by a suitable ordinary differential equation. We expect that these curves are $\phi^\dagger$-circles, where $\phi^\dagger$ is the norm defined as \begin{equation*} \phi ^\dag (\xi) = \phi^*(\xi^\perp),\quad \xi\in\mathbb{R}^2. \end{equation*} Here and hereafter, $\perp:\mathbb{R}^2\to\mathbb{R}^2$ denotes the perp-operator $\perp\!\!(\xi) =\xi^\perp$, with \[ \xi^\perp = (x,y)^\perp = (-y,x),\quad \xi=(x,y)\in\mathbb{R}^2. \] Theorem~\ref{thmi:class} then follows by combining the results of Sections~\ref{ss:fv-reg}, \ref{s:integ}, and \ref{s:char}.
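As a concrete illustration (ours, not spelled out in the text), for the $\ell^p$ norms the operation $\phi\mapsto\phi^\dagger$ can be computed in closed form, since the perp-operator is an isometry for every $\ell^q$:

```latex
% worked example (ours): for 1 < p < infinity, Hoelder duality gives
% (\ell^p)^* = \ell^q with 1/p + 1/q = 1, and \ell^q(-y,x) = \ell^q(x,y), so
\phi=\ell^p
\quad\Longrightarrow\quad
\phi^\dagger(x,y)=\phi^*\big((x,y)^\perp\big)=\ell^q(-y,x)=\ell^q(x,y),
\qquad \tfrac1p+\tfrac1q=1.
```

In particular $\phi^\dagger=\phi$ precisely for $p=2$, consistent with the Euclidean case recalled below.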
In particular, starting from a first variation analysis, we establish a foliation property outside the characteristic set for $\mathrm{C}^2$-smooth $\phi$-isoperimetric sets (and more generally for constant $\phi$-curvature surfaces). Theorem \ref{thmi:char} is a key step for concluding the proof. We also identify an explicit relation between $\phi$-isoperimetric sets and geodesics in the ambient space. In Section~\ref{s:PMP}, we show that, outside the characteristic set, $\phi$-isoperimetric sets are foliated by \emph{sub-Finsler geodesics} in $\H^1$ relative to the norm $\phi^\dag$. We refer to Corollary~\ref{c:isop-lm} for a statement of the result. Notice that when $\phi=|\cdot|$ is the Euclidean norm, $\phi^\dagger$ reduces to $|\cdot|$, and we recover the foliation by sub-Riemannian geodesics of $\mathrm{C}^2$-smooth $|\cdot|$-isoperimetric sets. The \emph{regularity} of the candidate isoperimetric sets $E_\phi$ is a major issue that we treat in Section~\ref{s:proof}. While it is rather easy to check that $\phi$-Pansu bubbles have the same regularity as $\phi$ outside the characteristic set (at least if $\phi$-circles are strictly convex, see Lemma~\ref{l:sm}), it is not clear what regularity is inherited from $\phi$ at characteristic points. In Section~\ref{ss:regE} we prove the following. \begin{theorem} \label{iFlynt} Assume that $\phi$ is of class $\mathrm{C}^4$ and that $\phi$-circles have strictly positive curvature. Then $\partial E_\phi$ is an embedded surface of class $\mathrm{C}^2$. \end{theorem} In the case where $\phi$ or $\phi^*$ are not differentiable, Theorems~\ref{thmi:class} and \ref{iFlynt} cannot be applied in a direct way. In Corollary~\ref{rem:pieceC2} we show that the foliation property by horizontal lifts of $\phi$-circles outside the characteristic set can be recovered when $\phi^*$ is only \emph{piecewise $\mathrm{C}^2$}, thus allowing us to cover the case $\phi=\ell^p$ for $p>2$.
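For the case $\phi=\ell^p$ just mentioned, the dual norm is $\phi^*=\ell^q$ with $1/p+1/q=1$, and this can be checked directly against the definition $\phi^*(w)=\max_{\phi(v)=1}\langle w,v\rangle$. The following numerical sketch is ours, not part of the paper; it assumes \texttt{numpy} and the function name is hypothetical.

```python
import numpy as np

def dual_lp(w, p, n=200_001):
    """Approximate the dual norm of l^p at w by maximising <w, v> over a
    dense sample of the l^p unit sphere |v1|^p + |v2|^p = 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    c, s = np.cos(theta), np.sin(theta)
    # v = (sgn(c)|c|^{2/p}, sgn(s)|s|^{2/p}) satisfies |v1|^p + |v2|^p = 1
    v1 = np.sign(c) * np.abs(c) ** (2.0 / p)
    v2 = np.sign(s) * np.abs(s) ** (2.0 / p)
    return np.max(w[0] * v1 + w[1] * v2)

w, p = np.array([1.0, 2.0]), 3.0
q = p / (p - 1.0)
# the two values agree up to discretisation error
print(dual_lp(w, p), (np.abs(w[0]) ** q + np.abs(w[1]) ** q) ** (1.0 / q))
```

For $p=2$ the same routine recovers the Euclidean norm, matching the self-dual case used throughout the sub-Riemannian literature cited above.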
For general non-differentiable norms, our next result is conditional on the validity of the following conjecture. \begin{ass} \label{iBenbow} For any norm $\phi$ of class $\mathrm{C}^\infty_+$, $\phi$-isoperimetric sets are of class $\mathrm{C}^2$. \end{ass} Here, a norm $\phi$ in $\mathbb{R}^2$ is said to be of class $\mathrm{C}^\infty_+$ if $\phi \in \mathrm{C}^\infty(\mathbb{R}^2\setminus\{0\})$ and $\phi$-circles have strictly positive curvature. The proof of the following result is presented in Section~\ref{s:cr}. \begin{theorem} \label{thmi:appr} Assume that Conjecture \ref{iBenbow} holds true. Then for any norm $\phi$ in $\mathbb{R}^2$ the $\phi$-bubble $E_\phi\subset \mathbb H^1$ is $\phi$-isoperimetric. \end{theorem} Of particular interest is the case of a \emph{crystalline norm}. A norm $\phi:\mathbb{R}^2\to[0,\infty)$ is called crystalline if the $\phi$-circle $C_\phi=C_\phi(0,1)$ is a convex polygon centrally symmetric with respect to the origin. Let $v_1,\ldots,v_{2N}\in C_\phi$ be the ordered vertices of this polygon, and denote by $e_i = v_i-v_{i-1}$, $i=1,\ldots,2N$, the edges of $C_\phi$, where $v_0 =v_{2N}$. We consider the left-invariant vector fields \begin{equation}\label{eq:X_i} X_i:=e_{i,1}X+e_{i,2}Y,\quad i=1,\ldots, 2N, \end{equation} where $e_i =(e_{i,1}, e_{i,2})$, and we notice that $X_{i+N}=-X_i$ for $i=1,\dots,N$. By a first variation argument, we deduce a foliation property for $\phi$-isoperimetric sets by integral curves of the $X_i$, see Section~\ref{ss:fv-cr}. \begin{theorem}\label{thmi:folz} Let $E\subset\H^1$ be $\phi$-isoperimetric for a crystalline norm $\phi$. Let $A\subset\H^1$ be an open set such that $\partial E\cap A$ is a connected $z$-graph of class $\mathrm{C}^2$. Then there exists $i=1,\dots,N$ such that $\partial E\cap A$ is foliated by integral curves of $X_i$.
\end{theorem} Geodesics of sub-Finsler structures on the Heisenberg group and other Carnot groups have been studied in several papers (see, in particular, \cite{ALDS,BBLS,Ber94,L19,S20}). Unfortunately, Theorem \ref{thmi:folz} does not provide enough information to establish the global foliation property by $\phi^\dagger$-geodesics in the crystalline case. \subsection{Structure of the paper} In Section~\ref{s:per} we introduce the notion of sub-Finsler perimeter and we deduce a representation formula for Lipschitz sets (see Proposition~\ref{p:repr}), holding for any norm $\phi$ in $\mathbb{R}^2$. In Section~\ref{s:ex} we prove existence of $\phi$-isoperimetric sets for a general norm $\phi$, following the arguments in \cite{LR03}. In Section~\ref{s:fv} we derive first-variation necessary conditions for $\phi$-isoperimetric sets, both when $\phi^*$ is of class $\mathrm{C}^1$ (see Section~\ref{ss:fv-reg}) and when $\phi^*$ is not differentiable (see Section~\ref{ss:fv-cr}). In the former case, we introduce the notion of $\phi$-curvature of $\mathrm{C}^2$-smooth surfaces (when $\phi^*$ is $\mathrm{C}^2$) and of $\phi$-critical surfaces. In the latter case, we deduce the (partial) foliation property stated in Theorem~\ref{thmi:folz} for crystalline norms. In Section~\ref{s:integ} we deduce a foliation property outside the characteristic set for $\phi$-isoperimetric sets of class $\mathrm{C}^2$, assuming $\phi$ and $\phi^*$ to be regular enough. We then study such a foliation from the point of view of geodesics in the ambient space in Section~\ref{s:PMP}, and in Section~\ref{s:char} we study the characteristic set of $\mathrm{C}^2$-smooth $\phi$-critical surfaces and of $\phi$-isoperimetric sets, assuming $\phi$ and $\phi^*$ to be $\mathrm{C}^2$ (Theorem~\ref{thmi:char}). In Section~\ref{s:proof} we then prove Theorem~\ref{thmi:class} and we study regularity of $\phi$-bubbles, summarized in Theorem~\ref{iFlynt}.
Finally, Section~\ref{s:cr} is dedicated to general norms and contains the proof of Theorem~\ref{thmi:appr}. \subsection*{Acknowledgments} The authors thank M.~Ritor\'e and C.~Rosales for pointing out a gap in a preliminary version of the paper. The first and third authors acknowledge the support of ANR-15-CE40-0018 project \textit{SRGI - Sub-Riemannian Geometry and Interactions}. The first author acknowledges the support received from the European Union's Horizon 2020 research and innovation programme under the \emph{Marie Sklodowska-Curie grant agreement No.~794592}, of the INdAM--GNAMPA project \emph{Problemi isoperimetrici con anisotropie}, and of a public grant of the French National Research Agency (ANR) as part of the Investissement d'avenir program, through the iCODE project funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02. \section{Sub-Finsler perimeter}\label{s:per} In this section, we introduce the notion of $\phi$-perimeter in $\H^1$ for a norm $\phi$ in $\mathbb{R}^2$. We start by fixing the notation relative to horizontal vector fields and sub-Finsler norms in $\H^1$. A smooth horizontal vector field is a vector field $V$ on $\mathbb{R}^3$ that can be written as $V=aX+bY$ where $a,b\in \mathrm{C}^\infty(\H^1)$. When $A\subset \H^1$ is an open set and $ a,b \in \mathrm{C}^\infty_c(A)$ have compact support in $A$ we shall write $V\in \mathcal{D}_c(A)$. We fix on $\mathcal{D}(\H^1)$ the scalar product $\langle \cdot ,\cdot\rangle_\mathcal{D}$ that makes $X,Y$ pointwise orthonormal. Then each fiber $\mathcal{D}(p)$ can be identified with the Euclidean plane $\mathbb{R}^2$. Let $\phi:\mathbb{R}^2\to[0,\infty)$ be a norm. We fix on $\mathcal{D}(\H^1)$ the left-invariant norm associated with $\phi$. Namely, with a slight abuse of notation, for any $p\in \H^1$ and with $a,b\in \mathbb{R}$ we define \[ \phi(aX(p)+bY(p)) = \phi\big( (a ,b )\big) . 
\] Since the Haar measure of $\H^1$ is the Lebesgue measure of $\mathbb{R}^3$, the divergence in $\H^1$ is the standard divergence. Therefore, for a smooth horizontal vector field $V = a X+ b Y$ we have $\mathrm{div}(V) = Xa + Yb$. \begin{definition}\label{d:per} The \emph{$\phi$-perimeter} of a Lebesgue measurable set $E \subset \H^1$ in an open set $A\subset \H^1$ is \begin{equation*} \P_{\phi}(E;A)=\sup \left \{ \int_{E} \mathrm{div}(V) dp : V \in \mathcal{D}_c(A) \textrm{ with } \max_{\xi\in A} \phi(V (\xi)) \leq 1 \right \}. \end{equation*} When $\P_{\phi }(E;A)<\infty$ we say that $E$ has finite perimeter in $A$. When $A=\H^1$, we let $ \P_{\phi} (E) = \P_{\phi}(E;\H^1) $. \end{definition} Since all the left-invariant norms in the horizontal distribution are equivalent, we have $\P_{\phi}(E)<\infty$ if and only if the set $E$ has finite horizontal perimeter in the sense of \cite{CDG94,FSSC96,GN96}. For regular sets, we can represent $\P_\phi(E)$ integrating on $\partial E$ a kernel related to the normal. Let $\nu_E$ be the Euclidean unit inner normal to $\partial E$. We define the horizontal vector field $N_E:\partial E\to \mathcal{D}(\H^1)$ by \begin{equation*} \label{N_E} N_E=\langle \nu_E ,X \rangle X+\langle \nu_E , Y \rangle Y, \end{equation*} where $\langle\cdot,\cdot\rangle$ denotes the Euclidean scalar product in $\mathbb{R}^3$. \begin{proposition}[Representation formula]\label{p:repr} Let $E\subset \H^1$ be a set with Lipschitz boundary. Then for every open set $A\subset \H^1$ we have \begin{equation}\label{eq:repr} \P_\phi (E;A) = \int_{\partial E\cap A} \phi^* ( N_E) \; d\mathcal{H}^{2}, \end{equation} where $\mathcal{H}^{2}$ is the standard $2$-Hausdorff measure in $\mathbb{R}^3$. \end{proposition} \begin{proof} Let $V \in \mathcal{D}_c (A)$ be such that $\phi( V) \leq 1$. 
By the standard divergence theorem and by the definition of dual norm, we have \begin{equation*} \begin{split} \int_{E} \mathrm{div} (V) \,d\xi & = -\int_{\partial E} \langle V, N_E\rangle_\mathcal{D} \; d\mathcal{H}^2 \leq \int_{\partial E\cap A} \phi(V) \phi ^* (N_E) d\mathcal{H}^{2} \\ & \leq \int_{\partial E\cap A } \phi^*(N_E) d\mathcal{H}^{2}. \end{split} \end{equation*} By taking the supremum over all admissible $V $ we then obtain \begin{equation*} \P_\phi (E;A) \leq \int_{\partial E\cap A} \phi^*( N_E) d\mathcal{H}^{2}. \end{equation*} To get the opposite inequality it is sufficient to prove that for every $\varepsilon >0$ there exists $V \in \mathcal{D} _c (A) $ such that $\phi( V ) \leq 1$ and \begin{equation*}\label{eq:Rappr1} \int_{\partial E} \langle V, N_E\rangle_\mathcal{D} \; d\mathcal{H}^2 \geq \int_{\partial E\cap A} \phi^* ( N_E) d\mathcal{H}^{2} - \varepsilon. \end{equation*} Indeed, since $\phi(-V)=\phi(V)\leq 1$, the field $-V$ is admissible in the supremum defining $\P_\phi(E;A)$, and $\int_E \mathrm{div}(-V)\,d\xi = \int_{\partial E} \langle V, N_E\rangle_\mathcal{D}\, d\mathcal{H}^2$. Here, without loss of generality, we assume that $A$ is bounded. We will construct such a $V$ with continuous coefficients and with compact support in $A$. The smooth case $V\in \mathcal{D}_c(A)$ will follow by a standard regularization argument. Let us define the sets \begin{equation*} \begin{split} \mathcal{U} &= \big \{ p \in \partial E \cap A : N_E(p ) \textrm{ is defined} \big \},\quad \mathcal{Z} = \big \{ p \in \mathcal{U} : N_E (p) =0 \big \}. \end{split} \end{equation*} From the results of \cite{B03} it follows that $\mathcal Z$ has vanishing $\mathcal{H}^2$-measure. For any $p \in \mathcal{U} \setminus \mathcal{Z} $ we take $ V \in \mathcal{D}(p)$ such that $\phi(V)=1 $ and \[ \langle V, N_E\rangle_\mathcal{D} = \phi^* ( N_E). \] In general, this choice is not unique. However, there is a selection $p\mapsto V (p)$ that is measurable (this follows since the coordinates are measurable, see for instance \cite[Theorem~8.1.3]{AubinFrank}). We extend $ V$ to $\mathcal Z$ letting $ V =0$ there. This extension is still measurable.
Since $\partial E \cap A$ has finite $\mathcal{H}^{2}$-measure, by Lusin's theorem there exists a compact set $K_\varepsilon \subset \partial E \cap A$ such that $\mathcal{H}^{2}\big( (\partial E \cap A) \setminus K_\varepsilon \big) < \varepsilon$ and the restriction of $ V $ to $K_\varepsilon$ is continuous. Now, by the Tietze--Urysohn theorem we extend $V$ from $K_\varepsilon$ to $A$ in such a way that the extended map, still denoted by $V$, is continuous with compact support in $A$ and satisfies $\phi (V) \leq 1$ everywhere. Our construction yields the following \begin{equation*} \begin{split} \int_{\partial E \cap A} \phi^* ( N_E) d\mathcal{H}^{2} &= \int_{K_\varepsilon} \langle V, N_E\rangle_\mathcal{D} \, d\mathcal{H}^{2} + \int_{(\partial E \cap A)\setminus K_\varepsilon} \phi^* ( N_E ) d\mathcal{H}^{2} \\ &= \int_{\partial E \cap A } \langle V, N_E\rangle_\mathcal{D} \, d\mathcal{H}^{2} - \int_{(\partial E \cap A)\setminus K_\varepsilon} \left( \langle V, N_E\rangle_\mathcal{D} - \phi^* (N_E) \right) d\mathcal{H}^{2} \\ &\leq \int_{\partial E \cap A} \langle V, N_E\rangle_\mathcal{D} \, d\mathcal{H}^{2} + C\varepsilon. \end{split} \end{equation*} In the last inequality we used the fact that $\langle V, N_E\rangle_\mathcal{D} - \phi^*( N_E) $ is bounded and $\mathcal{H}^{2}\big( (\partial E \cap A) \setminus K_\varepsilon \big) < \varepsilon$. The claim follows. \end{proof} \section{Existence of isoperimetric sets} \label{s:ex} For a measurable set $E\subset \H^1$ with positive and finite measure and a given norm $\phi$ on $\mathbb{R}^2$ we define the $\phi$-\emph{isoperimetric quotient} as \[ \operatorname{Isop_{\phi }}(E)=\dfrac{\P_{\phi}(E)}{\mathcal L^3(E)^{\frac{3}{4}}}, \] where $\mathcal L^3$ denotes the Lebesgue measure of $\mathbb{R}^3$.
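The exponent $3/4$ reflects the homogeneous dimension $Q=4$ of $\H^1$. As a sanity check (a standard scaling computation, recalled here; it uses the dilations $\delta_\lambda(x,y,z)=(\lambda x,\lambda y,\lambda^2 z)$, $\lambda>0$, and the $(Q-1)$-homogeneity of the horizontal perimeter):

```latex
% Volume scales with the homogeneous dimension Q = 4,
% perimeter with degree Q - 1 = 3:
\mathcal L^3(\delta_\lambda E)=\lambda^{4}\,\mathcal L^3(E),
\qquad
\P_{\phi}(\delta_\lambda E)=\lambda^{3}\,\P_{\phi}(E),
% hence the isoperimetric quotient is dilation invariant:
\operatorname{Isop_{\phi }}(\delta_\lambda E)
=\frac{\lambda^{3}\,\P_{\phi}(E)}{\big(\lambda^{4}\,\mathcal L^3(E)\big)^{\frac{3}{4}}}
=\operatorname{Isop_{\phi }}(E).
```

Any other exponent in the denominator would make the infimum of the quotient over all admissible sets equal to zero, by letting $\lambda\to 0$ or $\lambda\to\infty$.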
The isoperimetric quotient is invariant under left-translation (w.r.t.\ the operation in \eqref{eq:*Heis}), i.e., $\operatorname{Isop}_{\phi}(p*E)=\operatorname{Isop}_{\phi}(E)$ for any $p\in\H^1$ and $E\subset\H^1$ admissible, and it is $0$-homogeneous with respect to the one-parameter family of automorphisms \eqref{eq:dilat}, i.e., $\operatorname{Isop}_{\phi}(\lambda E )=\operatorname{Isop}_{\phi}(E)$, where $\lambda E = \delta_\lambda(E)$. The isoperimetric problem \eqref{eq:ip} is then equivalent to minimizing the isoperimetric quotient among all admissible sets. Namely, given $m\in(0,\infty)$, any isoperimetric set $E\subset\H^1$ with $\mathcal L^3(E)=m$ is a solution to \begin{equation}\label{eq:isop1} C_I=\inf\left\{\operatorname{Isop}_{\phi}(E) : E\subset \H^1\text{ measurable, }0<\mathcal L^3(E)<\infty\right\}, \end{equation} and, \emph{vice versa}, any solution $E\subset\H^1$ to \eqref{eq:isop1} solves \eqref{eq:ip} within its volume class, i.e., with $m=\mathcal L^3(E)$. In particular, we have \begin{equation}\label{eq:cisop} C_I=\inf\left\{\P_{\phi}(E) : E\subset \H^1\text{ measurable, }\mathcal L^3(E)=1\right\}. \end{equation} The constant $C_I$ depends on $\phi$. Since $\P_\phi$ is equivalent to the standard horizontal perimeter, the isoperimetric inequality in \cite{GN96} implies that $C_I>0$ and the validity of the following inequality for any measurable set $E$ with finite measure: \begin{equation}\label{eq:iisop} \P_{\phi}(E)\geq C_I \mathcal L^3(E)^{\frac{3}{4}}. \end{equation} The constant $C_I$ is the largest one for which the above inequality holds, and isoperimetric sets are precisely those for which \eqref{eq:iisop} is an equality. \begin{theorem}[Existence of isoperimetric sets]\label{thm:ex} Let $\phi$ be any norm on $\mathbb{R}^2$. There exists a set $E\subset\H^1$ with non-zero and finite $\phi$-perimeter such that \begin{equation}\label{eq:ex} \P_\phi(E)=C_I \mathcal L^3(E)^{\frac{3}{4}}.
\end{equation} \end{theorem} Theorem~\ref{thm:ex} follows by applying the strategy of \cite[Section 4]{LR03}. In the sequel we denote the left-invariant homogeneous ball centered at $p\in\H^1$ of radius $r>0$ by $B(p,r)$. \begin{proof}[Proof of Theorem~\ref{thm:ex}] We give a sketch of the proof. By perimeter and volume homogeneity with respect to $\{\delta_\lambda\}_{\lambda>0}$ it is enough to prove the existence of a minimizing set in the class of volume $\mathcal L^3(E)=1$. Let $\{E_k\}_{k\in\mathbb{N}}$ be a minimizing sequence for \eqref{eq:cisop} such that for $k\in\mathbb{N}$ we have \[ \mathcal L^3(E_k)=1,\qquad \P_{\phi}(E_k)\leq C_I\left(1+\frac{1}{k}\right). \] Assume that there exists $m_0\in(0,1/2)$ such that for any $k\in\mathbb{N}$ there exists $p_k\in\H^1$ satisfying \begin{equation}\label{eq:ass} \mathcal L^3(E_k\cap B(p_k,1))\geq m_0. \end{equation} Then, the translated sequence $\{-p_k*E_k\}_{k\in\mathbb{N}}$, still denoted $\{E_k\}_{k\in\mathbb{N}}$, is also minimizing for \eqref{eq:cisop} and satisfies $\mathcal L^3( E_k \cap B(0,1)) \geq m_0$. Since $\P_\phi$ is equivalent to the standard horizontal perimeter, we have a compactness theorem for sets of finite $\phi$-perimeter as in \cite[Theorem~1.28]{GN96}. Then, we can extract a subsequence, still denoted $\{ E_k\}_{k\in\mathbb{N}}$, converging in the $L^1_{\mathrm{loc}}(\H^1)$ sense to a set $E\subset\H^1$ of finite $\phi$-perimeter. The lower semi-continuity of $\P_\phi$ therefore implies \begin{equation*}\label{eq:existence4} \P_{\phi}(E) \leq \liminf_{k\to \infty} \P_{\phi}( E_k) \leq C_I. \end{equation*} Moreover, we have \begin{equation}\label{eq:existence5} \begin{split} & \mathcal L^3(E) \leq \liminf_{k\to \infty} \mathcal L^3( E_k)=1 \quad \text{ and } \\ & \mathcal L^3(E \cap B(0,1) ) = \lim_{k\to \infty} \mathcal L^3( E_k \cap B(0,1)) \geq m_0.
\end{split} \end{equation} To prove \eqref{eq:ex} we are left to show that $\mathcal L^3(E)=1$, which follows by applying a sub-Finsler version of \cite[Lemma~4.2]{LR03}, ensuring existence of a radius $R>0$ such that $\mathcal L^3(E\cap B(0,R))=1$. This is based on \eqref{eq:existence5} and on a canonical relation between perimeter and derivative of volume in balls with respect to the radius, which is valid in quite general metric structures, including sub-Finsler ones, see \cite[Lemma~3.5]{Amb01}. We conclude by justifying the assumption \eqref{eq:ass}. This follows by a sub-Finsler version of \cite[Lemma~4.1]{LR03}. Using once more the equivalence of $\P_{\phi}$ with the standard horizontal perimeter, we deduce from \cite[Theorem~1.18]{GN96} the validity of the following \emph{relative isoperimetric inequality}, holding for a constant $C>0$ and any measurable set $E$ with finite measure: \begin{equation*}\label{eq:riisop} \min \left\{ \mathcal L^3(B\cap E)^{\frac{3}{4}}, \mathcal L^3(B\setminus E)^\frac{3}{4} \right\} \leq C \P_{\phi}(E, \lambda B), \end{equation*} where $\lambda\geq 1$ is a constant depending only on $\phi$, and $B$ is any left-invariant homogeneous ball. Together with the fact that the family $\{B(p,\lambda) : p\in\H^1\}$ has bounded overlap, we can reproduce the argument of \cite[Lemma~4.1]{LR03} and prove the claim. \end{proof} \begin{remark}\label{rem:bdd} Following the arguments of \cite[Lemma~4.2]{LR03}, one also shows that any isoperimetric set is equivalent to a bounded and connected one (i.e., it is bounded and connected up to sets of zero Lebesgue measure). \end{remark} \section{First variation of the isoperimetric quotient}\label{s:fv} In this section we derive a first-order necessary condition for $\phi$-isoperimetric sets, both when $\phi$ is regular and when it is not. \subsection{Notation} We now introduce some notation that will be used throughout the paper.
Let $E,A\subset \H^1$ be sets such that $E$ is closed, $A$ is open and there exists a function $g\in \mathrm{C}^1( A)$, called \emph{defining function for $\partial E\cap A$}, such that $\partial E\cap A=\{p\in A : g(p)=0\}$ and $\nabla g(p)\ne 0$ for every $p\in \partial E\cap A$. We say that $ E\cap A$ is a \emph{$z$-subgraph} if there exist an open set $D\subset \mathbb{R}^2$ and a function $ f\in \mathrm{C}^1 (D)$, called \emph{graph function} for $\partial E\cap A$, such that \[ E\cap A = \{ (\xi, z)\in A : \xi \in D \textrm{ and } z\leq f(\xi) \}. \] In this case, $g(\xi,z)=f(\xi)-z$ is a defining function for $\partial E\cap A$. The definition of \emph{$z$-epigraph} is analogous and all results given below for $z$-subgraphs have a straightforward counterpart for $z$-epigraphs. In a similar way, one can also define \emph{$x$-subgraphs}, \emph{$y$-subgraphs}, \emph{$x$-epigraphs}, and \emph{$y$-epigraphs}. Given a function $g\in \mathrm{C}^1(A) $, we denote by $\mathcal G=(Xg) X+(Yg) Y$ the \emph{horizontal gradient} of $g$ and we define the \emph{projected horizontal gradient} as \begin{equation}\label{egge} G=(Xg,Yg) \in \mathbb{R}^2. \end{equation} If $\partial E\cap A$ is a $z$-graph with graph function $f\in\mathrm{C}^1 (D)$, we define $F: D\to \mathbb{R}^2$ by \begin{equation}\label{effe} F(\xi)=G(\xi,f(\xi))=\nabla f(\xi)-\frac12 \xi^\perp, \end{equation} and \begin{equation} \label{eq:charf} \mathcal{C}(f) = \{ \xi\in D : F(\xi)=0\}. \end{equation} Hence $\mathcal{C}(E)\cap A=\{(\xi,f(\xi)):\xi\in \mathcal{C}(f)\}$, where $\mathcal{C}(E)$ is the characteristic set of $E$, defined in \eqref{eq:char}. The set $\mathcal{C}(f)$ has zero Lebesgue measure in $D$. If $E\cap A$ is the $z$-subgraph of a function $f\in\mathrm{C}^1 (D)$, by the representation formula \eqref{eq:repr} we have \[ \P_\phi(E;A) = \int _D \phi^*(F (\xi) ) d\xi.
\] When the dual norm $\phi^*$ is of class $\mathrm{C}^1$, starting from a graph function $f\in\mathrm{C}^1 (D)$ we define the vector field $\X: D\to \mathbb{R}^2$ by \begin{equation*}\label{eq:Xf} \X (\xi ) = \nabla \phi^* (F(\xi )),\quad \xi \in D. \end{equation*} The geometric meaning of the vector field $\X$ will be clarified in the next section, see \eqref{eq:cN}. \begin{remark} At any point $\xi\in D$ such that $F(\xi) \neq 0$ the vector field $\X$ satisfies \begin{equation}\label{l:norm1} \phi(\X(\xi)) =1, \end{equation} since the gradient of $\phi^*$ at any nonzero point has norm $\phi$ equal to one (even when $\phi^*$ is not regular, by replacing the gradient by any element of the subgradient; see, for instance, \cite[Example 3.6.5]{Lange16}). \end{remark} \subsection{Regular norms}\label{ss:fv-reg} \begin{proposition}[First variation for isoperimetric sets] \label{pix} Let $\phi$ be a norm such that $\phi^*$ is of class $\mathrm{C}^1$. Let $E\subset\H^1$ be a $\phi$-isoperimetric set such that, for some open set $A\subset \H^1$, $E\cap A$ is a $z$-subgraph of class $\mathrm{C}^1$. Then the graph function $f\in \mathrm{C}^1(D)$ satisfies in the weak sense the partial differential equation \begin{equation}\label{EQU} \mathrm{div}\big (\X\big) = - \frac{3}{4} \frac{\P_{\phi}(E)}{\mathcal {L} ^3(E)} \quad \textrm{in } D. \end{equation} \end{proposition} \begin{proof} For small $\varepsilon\in\mathbb{R}$ and $\varphi\in \mathrm{C}^\infty_c(D)$ let $E_\varepsilon\subset \H^1$ be the set such that \[ E_\varepsilon \cap A =\{(\xi ,z)\in A : z\leq f(\xi)+\varepsilon\varphi(\xi ),\ \xi \in D\}, \] and $E_\varepsilon \setminus A =E\setminus A $. 
Starting from the representation formula \begin{equation}\label{eq:svi1} \P_{\phi}(E_\varepsilon;A) = \int_{\partial E_\varepsilon \cap A} \phi^*( N _{E_\varepsilon}) d \mathcal{H}^2 = \int_D \phi^* \big( F+\varepsilon (X\varphi,Y\varphi) \big) d\xi, \end{equation} we compute the derivative \begin{equation}\label{eq:der-per} {\P_\phi}' = \left. \frac{d}{d\varepsilon}\P_{\phi}(E_\varepsilon;A) \right|_{\varepsilon=0} = \int_D \langle \X , (X\varphi,Y\varphi) \rangle d\xi =\int_D \langle \X , \nabla \varphi \rangle d\xi. \end{equation} On the other hand, the derivative of the volume is \[ \mathcal {V} ' = \left. \frac{d}{d\varepsilon}\mathcal {L}^3 (E_\varepsilon ) \right|_{\varepsilon=0} = \int_D \varphi\, d\xi. \] Inserting these formulas into \[ 0 = \left. \frac{d}{d\varepsilon} \frac{ \P_{\phi}(E_\varepsilon )^{4}}{\mathcal {L}^3(E_\varepsilon )^{3}} \right|_{\varepsilon=0} = \frac {\P_{\phi}(E)^{3}}{\mathcal {L}^3(E )^{4}}\big( 4 {\P_{\phi}}' \mathcal {L}^3(E )-3 \mathcal {V}' \P_{\phi}(E ) \big), \] we obtain \[ \int_D \langle \X , \nabla \varphi \rangle d\xi = \frac 34 \frac{\P_{\phi}(E)} {\mathcal {L}^3(E )} \int_D \varphi\, d\xi \] for any test function $\varphi\in \mathrm{C}^\infty_c(D)$. This is our claim. \end{proof} { Proposition \ref{pix} still holds if we only have $f\in\mathrm{Lip} (D)$. } If $\phi^*$ is of class $\mathrm{C}^2$ and $f \in \mathrm{C}^2(D)$ then we have $\X \in \mathrm{C}^1(D\setminus \mathcal{C}(f) ;\mathbb{R}^2)$. So equation \eqref{EQU} is satisfied pointwise in $D\setminus \mathcal{C}(f) $ in the strong sense. \begin{definition} Let $f\in \mathrm{C}^2(D)$. We call the function $H_\phi : D\setminus \mathcal{C}(f)\to\mathbb{R}$ \begin{equation} \label{eq:curvz} H_{\phi}(\xi) = \mathrm{div}\big (\X (\xi) \big) ,\quad \xi \in D\setminus \mathcal{C}(f), \end{equation} the \emph{$\phi$-curvature of the graph $\mathrm{gr}(f)$}.
{We say that $\mathrm{gr}(f)$ has \emph{constant $\phi$-curvature} if there exists $h\in\mathbb{R}$ such that $H_\phi=h$ on $D\setminus\mathcal{C}(f)$. Finally, we say that $\mathrm{gr}(f)$ is \emph{$\phi$-critical} if there exists $h\in\mathbb{R}$ such that \begin{equation}\label{eq:daverif} \int_D \langle \X , \nabla \varphi \rangle d\xi = - h \int_D \varphi\, d\xi \end{equation} is satisfied for every $\varphi\in C^\infty_c(D)$. } \end{definition} Proposition \ref{pix} then asserts that the part of the boundary of a $\phi$-isoperimetric set of class $\mathrm{C}^2$ that can be represented as a $z$-graph {is $\phi$-critical and, in particular,} has constant $\phi$-curvature at noncharacteristic points. { \begin{remark}\label{rem:x-graphs} Let us discuss how the proof of Proposition~\ref{pix} can be adapted to the case where $E\cap A$ is an $x$-subgraph of class $\mathrm{C}^2$. The case of $y$-subgraphs is analogous. We have a defining function for $\partial E\cap A$ of the type $g(x,y,z)=f(y,z)-x$ with $f\in \mathrm{C}^2 (D)$. The projected horizontal gradient in \eqref{egge} reads \[ G(y,z) = \Big(-1-\frac 1 2 yf_z, f_y +\frac 12 f f_z\Big). \] For $\varepsilon \in\mathbb{R} $ and $\varphi\in \mathrm{C}_c^\infty (D)$ let $E_\varepsilon $ be the $x$-subgraph in $A$ of $f+\varepsilon\varphi$. Then the derivative of the $\phi$-perimeter of $E_\varepsilon$ is \begin{equation*} \begin{split} \left.
\frac{d} {d\varepsilon} \P_{\phi}(E_\varepsilon;A)\right|_{\varepsilon=0} & = \int_D \big \langle \nabla \phi^* (G) , \big( -y \varphi_z /2 , \varphi _y +(\varphi f)_z /2 \big) \big \rangle dydz \\ & = - \int_D \varphi (y,z)\, \mathcal L f(y,z) \, dydz , \end{split} \end{equation*} where $\mathcal L : \mathrm{C}^2(D)\to \mathrm{C}(D)$ is the partial differential operator \begin{equation} \label{eq:Delta} \mathcal L f = \Big( \frac {\partial }{\partial y}+ \frac {f}{2} \frac {\partial }{\partial z}\Big) {\phi}^*_b(G)-\frac y 2 \frac {\partial }{\partial z}{\phi}^*_a(G), \end{equation} {with} $\nabla {\phi}^* = ({\phi}_a^*,{\phi}_b^*)$. The statement for $x$-graphs is then that if $E\subset\H^1$ is $\phi$-isoperimetric and $E\cap A$ is an $x$-subgraph with graph function $f\in \mathrm{C}^2 (D)$, then \[ \mathcal L f = - \frac{3}{4} \frac{\P_{\phi}(E)}{\mathcal {L} ^3(E)}\quad \textrm {in }D. \] When we only have $f\in\mathrm {Lip }(D)$, $\mathcal L f $ is well-defined in the distributional sense. \end{remark} } \subsection{Crystalline norms} \label{ss:fv-cr} In this section we focus on a norm $\phi$ having non-differentiability points, and in particular on the case where it is crystalline. Recall that the dual norm $\phi^*$ of a non-differentiable norm is not strictly convex, so that $\nabla \phi^*$ is constant on subsets of $\mathbb{R}^2$ having nonempty interior. { \begin{lemma}\label{lem:foliazione1} Let $O$ be a subset of $\mathbb{R}^2$ where $\nabla \phi^*$ exists and is constant. Let $E\subset\H^1$ be such that $E\cap A =\{(\xi,z)\in A : z\leq f(\xi),\ \xi\in D\}$ for some open set $A\subset \H^1$ and $f\in \mathrm{Lip}(D)$. If $F(\xi)\in O$ for almost every $\xi\in D$ then $E$ is not $\phi$-isoperimetric.
\end{lemma} } \begin{proof} As in the proof of Proposition~\ref{pix}, consider $\varphi\in \mathrm{C}^\infty_c(D)$ and, for $\varepsilon\in\mathbb{R}$ small, let $E_\varepsilon\subset \H^1$ be the set such that \[ E_\varepsilon \cap A =\{(\xi,z)\in A : z\leq f(\xi)+\varepsilon\varphi(\xi),\ \xi\in D\}, \] and $E_\varepsilon \setminus A =E\setminus A $. Then, as in \eqref{eq:der-per}, \[ {\P_{\phi}}' = \left. \frac{d}{d\varepsilon}\P_{\phi}(E_\varepsilon;A) \right|_{\varepsilon=0} = \int_D \langle {\nabla\phi^*(F)}, \nabla \varphi\rangle d\xi. \] \noindent By hypothesis, {$\nabla\phi^*(F)$} is constant on $D$, so that ${\P_{\phi}}' =0$. Now, choosing $\varphi\neq 0$ with constant sign, we deduce that \[\left. \frac{d}{d\varepsilon} \frac{ \P_{\phi}(E_\varepsilon )^{4}}{\mathcal {L}^3(E_\varepsilon )^{3}} \right|_{\varepsilon=0} = { -3\frac {\P_{\phi}(E)^{4}}{\mathcal {L}^3(E )^{4}} \int_D \varphi(\xi)d\xi \ne 0, } \] contradicting the extremality of $E$ for the isoperimetric quotient. \end{proof} We are ready for the proof of Theorem \ref{thmi:folz}. This theorem disproves Conjecture~8.0.1 in \cite{Sanchez-thesis}, where Pansu's bubble was conjectured to solve the isoperimetric problem for crystalline norms. Let $\phi$ be a crystalline norm and denote by $v_1,\ldots, v_{2N}\in \mathbb{R}^2 $ the ordered vertices of the polygon $C_\phi=C_\phi(0,1)$. Notice that $v_{i+N}=-v_{i}$ for $i=1,\dots,N$. The dual norm $\phi^*$ is also crystalline and the vertices of $C_{\phi^*}(0,1)$ are in one-to-one correspondence with the edges $e_i = v_i-v_{i-1}$ of $C_\phi(0,1)$ (with $v_0=v_{2N}$). Namely, $C_{\phi^*}(0,1)$ is the convex hull of $v_1^*,\ldots, v_{2N}^*$ where, for $i=1,\dots,2N$, the vertex $v_i^*$ is the unique vector of $\mathbb{R}^2$ such that \begin{equation} \label{ORTO} \langle v_i^*, e_i\rangle =0 \end{equation} and $\langle v_i^*,v_i\rangle = \langle v_i^*,v_{i-1} \rangle=1$. In particular, $v_{i+N}^*=-v_{i}^*$ for $i=1,\dots,N$.
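To illustrate the duality \eqref{ORTO} on a concrete case (a standard example, not part of the surrounding argument), take $\phi=\ell^1$, so that $C_\phi$ is the square and $N=2$:

```latex
% Ordered vertices of C_{\ell^1} (with v_0 = v_4) and edges e_i = v_i - v_{i-1}:
v_1=(1,0),\quad v_2=(0,1),\quad v_3=(-1,0),\quad v_4=(0,-1),
\qquad e_1=(1,1),\quad e_2=(-1,1).
% Solving \langle v_i^*,e_i\rangle = 0 together with
% \langle v_i^*,v_i\rangle = \langle v_i^*,v_{i-1}\rangle = 1 gives:
v_1^*=(1,-1),\qquad v_2^*=(1,1).
```

Then $C_{\phi^*}(0,1)=\operatorname{conv}\{\pm v_1^*,\pm v_2^*\}=[-1,1]^2$, i.e., $\phi^*=\ell^\infty$, and the fields \eqref{eq:X_i} are $X_1=X+Y$ and $X_2=-X+Y$; the lines $\mathbb{R} v_1^*$ and $\mathbb{R} v_2^*$ are the two diagonals, along which $\ell^\infty$ indeed fails to be differentiable.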
Along the lines $L_i =\mathbb{R} v_i^*$, the norm $\phi^*$ is not differentiable. In the positive convex cone bounded by $\mathbb{R}^+ v_i^*$ and $\mathbb{R}^+ v_{i+1}^*$ the gradient $\nabla \phi^*$ exists and is constant, and we have $\nabla \phi^* = v_i$. For piecewise $\mathrm{C}^1$-smooth $\phi$-isoperimetric sets the projected horizontal gradient $F$ takes values in $L_1\cup\ldots\cup L_{N}$, by Lemma~\ref{lem:foliazione1}. \begin{proof}[Proof of Theorem~\ref{thmi:folz}] Let $f\in \mathrm{C}^2(D)$ be the graph function of $\partial E\cap A$. For $i=1,\ldots, N$, we let \begin{equation*}\label{eq:SXY} D_i=\{ \xi \in D: F(\xi)\in {L_i = \mathbb{R} v_i^* } \}. \end{equation*} If $\xi\in D_i$ then by \eqref{ORTO} we have \[ F(\xi)^\perp \in \mathbb{R}( v_i^*)^\perp = \mathbb{R} e_i. \] This implies that the vector field $X_i$ in \eqref{eq:X_i} is tangent to $\partial E\cap A$ at the point $(\xi,f(\xi))$. We are going to prove the theorem by showing that $D=D_i$ for some~$i\in \{1,\dots,N\}$. Notice that, for $i,j\in \{1,\dots,N\}$ and $i\ne j$, $v_i$ and $v_j$ are linearly independent. By Lemma~\ref{lem:foliazione1} we have that $D=\cup_{i=1}^N D_i$. We claim, moreover, that \begin{equation}\label{eq:cover} \overline{D}= \cup_{i=1}^{N} \overline{\operatorname{int}D_i}. \end{equation} In order to check the claim, pick $\xi\in D$ and assume by contradiction that $\xi\not\in \overline{\operatorname{int}D_i}$ for $i=1,\dots,N$. Let $i_1$ be such that $\xi\in D_{i_1}$. Since $\xi\not\in \operatorname{int}D_{i_1}$, for every $\epsilon>0$ the set $D\setminus D_{i_1}$ intersects the disc of radius $\epsilon$ centered at $\xi$. Hence, there exist $i_2\ne i_1$, and a sequence $(\xi_n)_{n\in\mathbb{N}}$ in $D_{i_2}\setminus D_{i_1}$ converging to $\xi$. Now, either $\xi_n\in \operatorname{int}D_{i_2}$ for infinitely many $n$ or $\xi_n\not\in \operatorname{int}D_{i_2}$ for $n$ large enough. 
In the first case $\xi\in \overline{\operatorname{int}D_{i_2}}$, leading to a contradiction. In the second case, we repeat the reasoning leading to $(\xi_n)_{n\in\mathbb{N}}$, replacing $D_{i_1}$ by $D_{i_2}$ and $\xi$ by $\xi_n$ for every $n\in \mathbb{N}$, and, by a diagonal argument, we obtain $i_3\ne i_1,i_2$, and a sequence $(\hat \xi_n)_{n\in\mathbb{N}}$ in $D_{i_3}\setminus(D_{i_1}\cup D_{i_2})$ converging to $\xi$. Repeating the argument finitely many times, we end up with $i_{N}\in \{1,\dots,N\}$ and a sequence $(\tilde \xi_n)_{n\in\mathbb{N}}$ in $D_{i_N}\setminus(\cup_{j=1}^{N-1}D_{i_j})$ converging to $\xi$ with $D=D_{i_1}\cup \dots \cup D_{i_{N}}$. Since $D_{i_N}\setminus(\cup_{j=1}^{N-1}D_{i_j})=D\setminus(\cup_{j=1}^{N-1}D_{i_j})$ is open, we deduce that $\xi\in \overline{\operatorname{int}D_{i_N}}$. This concludes the contradiction argument, proving \eqref{eq:cover}. Let $v_i$ and $v_j$ be linearly independent. We claim that \begin{equation}\label{eq:claimint} \overline{\operatorname{int}(D_i)}\cap \overline{\operatorname{int}(D_j)} =\emptyset. \end{equation} Consider the vector field $X'$ on $D\times \mathbb{R}$ defined by $X'(\xi,z)=(e_i,e_{i,1}f_x(\xi)+e_{i,2}f_y(\xi))$. Then $X'$ is $\mathrm{C}^1$ and both $X'$ and $X_j$ are tangent to $\partial E\cap A$ in a neighborhood of any point of $S_j$, where \[S_k=\{(\xi,f(\xi)): \xi\in \operatorname{int} (D_k) \},\quad k=1,\dots,N. \] Hence $[X',X_j]\in T_\xi(\partial E\cap A)$ for every $\xi\in S_j$. On the other hand, $X'$ coincides with $X_i$ on $S_i\times \mathbb{R}$, and therefore $[X',X_j]=[X_i,X_j]=c_{ij} Z$ on $S_i$, with $c_{ij}\in \mathbb{R}\setminus\{0\}$. Assume by contradiction that $\overline{S}_i\cap \overline{S}_j$ contains at least one point $\xi$. By continuity of $[X',X_j]$, we deduce from the above reasoning that $Z(\xi)\in T_\xi(\partial E\cap A)$.
The contradiction comes from the remark that, by definition of $S_i$ and $S_j$, also $X_i(\xi)$ and $X_j(\xi)$ are in $T_\xi(\partial E\cap A)$, so that the tangent plane would contain three linearly independent vectors. We proved \eqref{eq:claimint}. We deduce from \eqref{eq:cover} and \eqref{eq:claimint} that $\{\overline{\operatorname{int}(D_1)},\dots, \overline{\operatorname{int}(D_N)}\}$ is a cover of $\overline{D}$ by pairwise disjoint closed sets. We conclude by connectedness of $D$. \end{proof} \section{Integration of the curvature equation}\label{s:integ} Throughout this section $\phi^*$ is a norm of class $\mathrm{C}^2$, unless explicitly mentioned otherwise. Let $A\subset \H^1$ be open and $g\in \mathrm{C}^{1,1}(A)$ be such that $\nabla g(p)\ne 0$ for every $p$ in $\Sigma=\{p\in A : g(p)=0\}$. The projected horizontal gradient $G:A\to\mathbb{R}^2$ introduced in \eqref{egge} is Lipschitz continuous. Assume that $\Sigma$ has no characteristic points, that is, $G(p)\ne 0$ for every $p\in \Sigma$. We use the coordinates $G=(a,b)$ with $a,b\in \mathrm{Lip}(A)$ and we consider $G^\perp=(-b,a)$. The horizontal vector field $\mathcal G ^\perp =-bX+aY$ is tangent to $\Sigma$. \begin{definition}\label{def:Legendre} A curve $\gamma\in \mathrm{C}^1( I;\Sigma) $ is said to be a \emph{Legendre curve} of $\Sigma$ if $\dot\gamma(t) = \mathcal G^\perp (\gamma(t))$ for all $t\in I$. \end{definition} \noindent In coordinates, a curve $\gamma=(\xi,z)$ in $\Sigma$ is a Legendre curve if and only if \begin{equation*} \dot \xi =G^\perp(\gamma)\quad\textrm{and}\quad \dot z=\omega (\xi, \dot\xi) . \end{equation*} Since $\mathcal G^\perp$ is Lipschitz continuous, the surface $\Sigma$ is foliated by Legendre curves: for any $p\in \Sigma$ there exists a unique (maximal) Legendre curve passing through $p$. Consider now the case where $\Sigma$ is a $z$-graph with graph function $f\in \mathrm{C}^{1,1}(D)$, where $D$ is an open subset of $\mathbb{R}^2$.
Then $G(\xi,f(\xi))=F(\xi)$, where $F$ is defined as in \eqref{effe}, and a Legendre curve $\gamma=(\xi,z)$ satisfies \begin{equation}\label{eq:legendre} \dot \xi =F^\perp(\xi)\quad\textrm{and}\quad \dot z=\omega (\xi, \dot\xi) . \end{equation} The domain $D$ is foliated by integral curves of $F^\perp$. On $D$ we define the vector field $\mathcal N \in \mathrm{Lip}(D;\mathbb{R}^2)$ by \begin{equation}\label{eq:cN} \mathcal N (\xi) = \X (\xi) = \nabla \phi^* (F(\xi)),\quad \xi\in D. \end{equation} We know that $\phi(\mathcal N)=1$, by \eqref{l:norm1}. We may call $\mathcal N$ the $\phi$-\emph{normal} to the foliation of $D$ by integral curves of $F^\perp$. We denote by $H_\phi =\mathrm{div}(\mathcal N)$ the divergence of $\mathcal N$. \begin{theorem}\label{thm:CMCfol} Let $\phi^*$ be of class $\mathrm{C}^2$. Let $\Sigma$ be the $z$-graph of a function $f\in \mathrm{C}^{2}(D)$ with $\mathcal{C}(f)=\emptyset$. Then any Legendre curve $\gamma\in \mathrm{C}^1(I;\Sigma)$, with $\gamma = (\xi, z)$, satisfies \begin{equation}\label{eq:dotNperp} \frac{d}{dt}\mathcal N(\xi)= H_\phi(\xi) \dot \xi\quad \textrm{and} \quad \dot z=\omega(\xi,\dot\xi). \end{equation} \end{theorem} \begin{proof} The second equality in \eqref{eq:dotNperp} is part of the definition of a Legendre curve. We prove the first equality. We identify $\mathcal N(\xi)$ and $\dot \xi =F^\perp(\xi )$ with column vectors and we denote by $Jg$ the Jacobian matrix of a differentiable mapping $g$. 
By the chain rule, using the coordinates $F =(a,b)$ and $\dot \xi = ( -b(\xi), a(\xi))$ we obtain \begin{equation}\label{eq:rperp'} \begin{split} &\frac{d}{dt}\mathcal N (\xi )= {\mathcal H \phi^*} ( F(\xi) ) JF (\xi) \dot \xi =\begin{pmatrix} -b a_x \phi^*_{{aa}} - b b_x \phi^*_{{ab}} +a a_y \phi^*_{{aa}} +a b_y \phi^*_{{ab}} \\ -b a_x \phi^*_{{ab}} -b b_x \phi^*_{{bb}} +a a_y \phi^*_{{ab}} + a b_y\phi^*_{{bb}} \end{pmatrix}, \end{split} \end{equation} where $\mathcal H\phi^*$ is the Hessian matrix of $\phi^*$ and the second order derivatives of $\phi^*$ are evaluated at $F(\xi)$. Since $\phi^*$ is of class $\mathrm{C}^2$, we may identify $\phi^*_{{ab}}=\phi^*_{{ba}}$. By Euler's homogeneous function theorem, since $\nabla\phi^*$ is positively homogeneous of degree $0$ there holds $\langle \nabla\phi^*_a(F),F\rangle=0$ and $\langle \nabla\phi^*_b(F),F\rangle=0$. These formulas read \[ a\phi^*_{{aa}} +b \phi^*_{{ab}}=0\quad\textrm{and}\quad a \phi^*_{{ab}}+b\phi^*_{{bb}}=0. \] Plugging these relations into \eqref{eq:rperp'}, we obtain \begin{equation}\label{eq:rperp'1} \frac{d}{dt}\mathcal N(\xi)= \left(a_x \phi^*_{{aa}} +b_x \phi^*_{{ab}} +a_y \phi^*_{{ab}} +b_y \phi^*_{{bb}} \right)\dot \xi . \end{equation} On the other hand, we have \[ \operatorname{div}(\mathcal N)= \operatorname{div}(\X)=a_x \phi^*_{{aa}} +b_x \phi^*_{{ab}} +a_y \phi^*_{{ab}} +b_y \phi^*_{{bb}}, \] so that \eqref{eq:rperp'1} yields the claim. \end{proof} { \begin{remark}\label{rem:x-graphs2} An analogue of Theorem~\ref{thm:CMCfol} holds true for $x$-graphs. Let $\Sigma$ be an $x$-graph without characteristic points and with defining function $g(x,y,z)=f(y,z)-x$ for some $f$ of class $\mathrm{C}^2$. Let $\gamma\in \mathrm{C}^1(I;\Sigma)$ be a Legendre curve with coordinates $\gamma(t) = ( f(\zeta(t)), \zeta(t))$ for $t\in I$ and consider the vector field $\mathcal N(y,z)=\nabla \phi^* (G(y,z) )$.
Following the same steps as in the proof of Theorem~\ref{thm:CMCfol}, one gets \[ \frac{d}{dt} {\mathcal N}(\zeta )=\mathcal L f (\zeta) G^\perp (\zeta) \quad \textrm {on } I. \] Hence, the conclusion of Theorem~\ref{thm:CMCfol} holds with $H_\phi=\mathrm{div}\big (\X\big) $ replaced by the quantity $\mathcal L f$ defined in Remark~\ref{rem:x-graphs}. Notice that $H_\phi$ and $\mathcal L f $ coincide on surfaces that are both $x$-graphs and $z$-graphs. An analogous remark can be made for $y$-graphs. \end{remark} } \begin{corollary}\label{l:conto} Let $\phi^*$ be of class $\mathrm{C}^2$. Let $\Sigma$ be the $z$-graph of a function $f\in \mathrm{C}^2(D)$ with $\mathcal{C}(f) =\emptyset$. If $\Sigma$ has constant $\phi$-curvature $ {h}\neq 0$ then it is foliated by Legendre curves that are horizontal lifts of $\phi$-circles in $D$ with radius $1/|{h}|$, followed in clockwise sense if ${h}>0$ and in anti-clockwise sense if ${h}<0$. \end{corollary} \begin{proof} Having constant $\phi$-curvature ${h}$ means that \[ \mathrm{div}(\mathcal N)= \mathrm{div}(\X) = {h}\quad\textrm{in } D. \] By Theorem \ref{thm:CMCfol}, for any Legendre curve $\gamma =(\xi,z)$ we then have \[ \frac{d}{dt}\mathcal N(\xi)- {h} \dot \xi=0. \] We may then integrate this equation and deduce that there exists $\xi_0\in\mathbb{R}^2$ such that along $\xi$ we have \begin{equation} \label{eq:NP} \mathcal N (\xi )-{h}\xi =-{h}\xi_0. \end{equation} From \eqref{l:norm1} and \eqref{eq:cN} we conclude that \[ |{h}| \phi (\xi-\xi_0) = \phi( {h}(\xi-\xi_0) ) =\phi(\mathcal N)=1. \] Finally, notice that $\langle \mathcal N(\xi),F(\xi)\rangle>0$ if $F(\xi)\ne 0$, so that $t\mapsto F(\xi(t))$ rotates clockwise if ${h}>0$ and anti-clockwise if ${h}<0$, according to \eqref{eq:dotNperp}. Hence, $t\mapsto F(\xi(t))^\perp$ and $t\mapsto \xi(t)$ also rotate clockwise if ${h}>0$, and anti-clockwise if ${h}<0$.
\end{proof} { Let us discuss an extension of Corollary~\ref{l:conto} to the case in which we replace the assumption that $\phi^*$ is $\mathrm{C}^2$ by the weaker assumption that $\phi^*$ is \emph{piecewise $\mathrm{C}^2$}, in the following sense: there exist $k\in \mathbb{N}$ and $A_1,\dots,A_k\in \mathbb{R}^2$ such that $\phi^*$ is $\mathrm{C}^2$ on $\mathbb{R}^2\setminus \cup_{j=1}^k {\rm span}(A_j)$. A relevant case where this assumption holds true is when $\phi$ is the $\ell^{p}$ norm \[\ell^{p}(x,y)=(|x|^p+|y|^p)^{\frac{1}{p}},\qquad x,y\in\mathbb{R},\] with $p>2$. Indeed, the dual norm $(\ell^{p})^*$ coincides with the norm $\ell^q$, with $q=p/(p-1)<2$, which is $\mathrm{C}^2$ away from the coordinate axes, but not on the whole punctured plane $\mathbb{R}^2\setminus\{0\}$. We can prove the following. \begin{corollary}\label{rem:pieceC2} Let $\phi^*$ be piecewise $\mathrm{C}^2$. Let $\Sigma$ be the $z$-graph of a function $f\in \mathrm{C}^2(D)$ with $\mathcal{C}(f) =\emptyset$. If $\Sigma$ has constant $\phi$-curvature ${h}\neq 0$ then it is foliated by Legendre curves that are horizontal lifts of $\phi$-circles in $D$ with radius $1/|{h}|$, followed in clockwise sense if ${h}>0$ and in anti-clockwise sense if ${h}<0$. \end{corollary} \begin{proof} Under the assumptions of the corollary, the projected horizontal gradient is $\mathrm{C}^1$ on $D$ and Legendre curves can be introduced as in Definition~\ref{def:Legendre}. Consider any Legendre curve $\gamma=(\xi,z)$ on $\Sigma$. Let us denote by $I\subset \mathbb{R}$ the maximal interval of definition of $\gamma$ and by $J$ the open subset of $I$ defined as follows: $t\in J$ if and only if $F(\xi(t))$ is in the region where $\phi^*$ is $\mathrm{C}^2$. For the restriction of $\gamma$ to a connected component $J_0$ of $J$, the conclusion of Theorem~\ref{thm:CMCfol} can be recovered.
In particular, since $\Sigma$ has constant $\phi$-curvature ${h}\ne 0$, then $\gamma|_{J_0}$ is the lift of a $\phi$-circle of radius $1/|{h}|$, followed clockwise or anti-clockwise depending on the sign of ${h}$. If $t\in I\setminus J$, then $F(\xi(t))$ belongs to one of the lines ${\rm span}(A_1),\dots,{\rm span}(A_k)$ on which $\phi^*$ may lose the $\mathrm{C}^2$ regularity. Notice that the restriction of $\xi$ to a connected component of $J$ compactly contained in $I$ follows an arc of $\phi$-circle connecting two lines of the type ${\rm span}(A_j)$. In particular, it cannot have arbitrarily small length. If $I\setminus J$ is made of isolated points, then $\gamma:I\to \Sigma$ is the lift of a $\phi$-circle of radius $1/|{h}|$. Indeed, an arc of $\phi$-circle of prescribed radius followed in a prescribed sense is uniquely determined by its initial point and its tangent line there. Since $\gamma$ is an arbitrary Legendre curve on $\Sigma$, the proof is complete if we show that $I\setminus J$ does not contain intervals of positive length. Assume by contradiction that $[t_0,t_1]$ is contained in $I\setminus J$ with $t_0<t_1$. Then $F(\xi(t))$ is constantly equal to some $A\in \mathbb{R}^2$ for $t\in [t_0,t_1]$. Let $\delta>0$ and $\kappa:(-\delta,\delta)\to \Sigma$ be a $\mathrm{C}^1$ curve such that $\kappa(0)=\gamma(t_0)$ and $\kappa'(0)$ is not proportional to $\gamma'(t_0)$. Write $\kappa(s)=(\xi_s,z_s)$ and notice that $F(\xi_s)$ converges to $A$ as $s\to 0$. Consider for each $s\in (-\delta,\delta)$ the Legendre curve $\gamma_s$ such that $\gamma_s(t_0)=\kappa(s)$. Then $\gamma_s$ converges to $\gamma$ and $F\circ \gamma_s$ converges to $F\circ \gamma$, uniformly on $[t_0,t_1]$, as $s\to 0$. Hence, for $\varepsilon>0$ and $|s|$ small enough, the restriction of $\gamma_s$ to $(t_0+\varepsilon,t_1-\varepsilon)$ cannot contain the lift of any arc of $\phi$-circle of radius $1/|{h}|$.
This implies that there exists a nonempty open region of $\Sigma$ of the form $\{\gamma_s(t) : t\in(t_0+\varepsilon,t_1-\varepsilon),\;|s|<\bar \delta\}$ on which $F(\xi)=A$, contradicting the assumption that $\Sigma$ has constant nonzero $\phi$-curvature. \end{proof} } \section{Foliation property with geodesics}\label{s:PMP} In this section we prove that the Legendre foliation of a surface (a $z$-graph) with constant $\phi$-curvature consists of length minimizing curves in the ambient space (geodesics) relative to the norm $\phi^\dag$ in $\mathbb{R}^2$ defined by \[ \phi ^\dag (\xi) = \phi^*(\xi^\perp),\quad \xi\in\mathbb{R}^2. \] We consider a general norm $\psi$ in $\mathbb{R}^2$ and for $T\geq0 $ we introduce the class of curves \[ \mathcal A_ T=\big\{ \gamma =(\xi,z) \in {\rm AC}([0,T];\H^1) : \textrm{$\dot z =\omega(\xi,\dot\xi)$ and $\psi( \dot \xi) \leq 1$ a.e.}\big\}, \] where $\omega$ is the symplectic form introduced in \eqref{eq:omega}. In the sequel, we denote by $u=\dot \xi \in L^1([0,T];\mathbb{R}^2)$ the \emph{control} of $\gamma$. For given points $p_0,p_1\in \H^1$ we consider the \emph{optimal time problem} \begin{equation} \label{eq:TOC} \inf \big \{ T\geq 0 : \textrm{ there exists $\gamma\in\mathcal A_T$ such that $\gamma(0)=p_0$ and $\gamma(T) =p_1$} \big\}. \end{equation} We call a curve $\gamma$ realizing the minimum in \eqref{eq:TOC} a \emph{$\psi$-time minimizer} between $p_0$ and $p_1$. In this case, we call the pair $(\gamma, u)$ with $u=\dot\xi$ an \emph{optimal pair.} A $\psi$-time minimizer is always parameterized by $\psi$-arclength, i.e., $\psi(u) =1$. So, $\psi$-time minimizers are $\psi$-length minimizers parameterized by $\psi$-arclength. An optimal pair $(\gamma,u)$ satisfies the necessary conditions given by Pontryagin's Maximum Principle. As observed in \cite{Ber94}, it is necessarily a \emph{normal extremal}, whose definition is recalled below.
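As a concrete illustration of the rotated norm $\phi^\dag$ (a worked example, not needed in the sequel), recall that $\xi^\perp=(-y,x)$ for $\xi=(x,y)$. For the norm $\phi=\ell^p$ considered in the previous section we then compute \[ \phi^\dag(x,y)=\phi^*(-y,x)=\big(|y|^q+|x|^q\big)^{\frac1q}=\ell^q(x,y),\qquad q=\frac{p}{p-1}, \] while for the Euclidean norm $\phi=|\cdot|$ we get $\phi^\dag=\phi^*=\phi=|\cdot|$, in which case $\psi$-length minimizers with $\psi=\phi^\dag$ are the usual sub-Riemannian geodesics of $\H^1$.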
The Hamiltonian associated with the optimal time problem \eqref{eq:TOC} is ${\mathfrak H}:\H^1\times \mathbb{R}^3\times\mathbb{R}^2\to\mathbb{R}$ \[ \begin{split} {\mathfrak H}(p,\lambda,u) & =\left(\lambda_x-\frac{y}{2}\lambda_z\right)u_1+\left(\lambda_y+\frac{x}{2}\lambda_z\right)u_2 = \langle \lambda_\xi +\frac 12 \lambda _z \xi^\perp, u \rangle, \end{split} \] where $\lambda = (\lambda_\xi,\lambda_z)\in\mathbb{R}^2\times\mathbb{R}$. \begin{definition} The pair $(\gamma, u)\in {\rm AC}([0,T];\H^1)\times L^1([0,T];\mathbb{R}^2)$ is a \emph{normal extremal} if there exists a nowhere vanishing curve $\lambda \in {\rm AC}([0,T];\mathbb{R}^3)$ such that $(\gamma,\lambda)$ solves a.e.~the Hamiltonian system \begin{equation*}\label{eq:sys:1} \begin{cases} \dot \gamma = {\mathfrak H}_\lambda (\gamma,\lambda,u) \\ \dot\lambda = - {\mathfrak H} _ p (\gamma, \lambda, u), \end{cases} \end{equation*} and for every $t\in [0,T] $ we have \begin{equation}\label{eq:Hmaxx} 1={\mathfrak H}(\gamma(t),\lambda(t) ,u(t))=\max_{\psi(u)\leq 1}{\mathfrak H}(\gamma(t),\lambda(t),u). \end{equation} \end{definition} In the coordinates $\gamma =(\xi,z)$ and $\lambda =(\lambda_\xi,\lambda_z)$, the Hamiltonian system reads \begin{equation}\label{eq:sys} \begin{cases} \dot \xi =u,\\ \dot z=\omega( \xi, u) , \end{cases} \begin{cases} \dot \lambda_\xi =\frac 12 \lambda_ z u^\perp ,\\ \dot \lambda_z=0. \end{cases} \end{equation} \begin{theorem}\label{l:PMP} Let $\psi$ be of class $\mathrm{C}^1$ and let $\gamma = (\xi,z) \in {\rm AC}([0,T];\H^1)$ be a horizontal curve. The following statements (i) and (ii) are equivalent: \begin{itemize} \item[(i)] $\gamma$ is a local $\psi$-length minimizer parametrized by $\psi$-arclength; \item[(ii)] the pair $(\gamma, u)$ with $u=\dot\xi$ is a normal extremal. 
\end{itemize} \noindent Moreover, if $\psi$ is of class $\mathrm{C}^2$ then each of (i) and (ii) is equivalent to \begin{itemize} \item[(iii)] $\gamma$ is of class $\mathrm{C}^2$ and parameterized by $\psi$-arclength, and there is $\lambda_0 \in\mathbb{R}$ such that \begin{equation}\label{eq:dotN} \mathcal H \psi( \dot \xi)\ddot \xi =\lambda_0 \dot \xi ^\perp, \end{equation} where $\mathcal H \psi$ is the Hessian matrix of $\psi$. \end{itemize} \end{theorem} \begin{proof} The equivalence between (i) and (ii) is \cite[Theorem~1]{Ber94}. Let us show that (ii) implies (iii). We set \begin{equation} \label{eq:rl} {\mathcal M}(t)=\lambda_\xi(t) +\frac 12 \lambda_z (t)\xi(t)^\perp, \quad t\in [0,T], \end{equation} where $\lambda = (\lambda_\xi,\lambda_z)$ is the curve given by the definition of extremal. Then the maximality condition in \eqref{eq:Hmaxx} for normal extremals reads \begin{equation}\label{eq:Hmax} 1=\langle{\mathcal M}(t),u(t)\rangle=\max_{\psi(u)\leq 1}\left\langle {\mathcal M}(t),u\right\rangle=\psi^*({\mathcal M}(t)). \end{equation} This is equivalent to the identity \begin{equation}\label{eq:N} {\mathcal M}(t)=\nabla\psi(u(t)). \end{equation} When $\psi$ is of class $\mathrm{C}^2$, from \eqref{eq:N}, \eqref{eq:rl}, and \eqref{eq:sys} we obtain the differential equation for $u=\dot\xi$ \begin{equation} \label{EMME} \mathcal H \psi (u) \dot u = \dot {\mathcal M }= \dot\lambda_\xi +\frac 12 \dot \lambda_z \xi^\perp +\frac12 \lambda _z u^\perp =\lambda_z u^\perp. \end{equation} This is \eqref{eq:dotN} with $\lambda_0: = \lambda_z$. Now we show that (ii) is implied by (iii). Consistently with \eqref{eq:N}, we define $ \mathcal M (t) = \nabla \psi(u(t))$, for $t\in [0,T]$. Then $\psi^*({\mathcal M})=1$. We define the curve $\lambda = (\lambda_\xi, \lambda_z)$ letting $\lambda_z =\lambda_0$ and $ \lambda_\xi = \mathcal M -\frac 12 \lambda _z \xi^\perp$.
When $\psi$ is of class $\mathrm{C}^2$, we obtain \[ \dot\lambda _\xi = \dot{\mathcal M} -\frac 12 \lambda_z \dot\xi^\perp = \mathcal H \psi(\dot \xi) \ddot\xi -\frac 12 \lambda_z \dot\xi^\perp =\frac 12 \lambda_z u^\perp. \] Hence, all equations in \eqref{eq:sys} are satisfied, showing that the pair $(\gamma, u)$ is a normal extremal. This proves that (iii) implies (ii). \end{proof} \begin{remark} \label{PALLO} When $\lambda_0\ne 0$, equation \eqref{eq:dotN} can be integrated in the following way. Using \eqref{EMME}, the equation is equivalent to $ \dot{\mathcal M} = \lambda_0 \dot \xi^\perp , $ which implies $\mathcal M = \lambda _0 (\xi^\perp -\xi_0^\perp)$ for some constant $\xi_0\in\mathbb{R}^2$. So from \eqref{eq:Hmax} we deduce that $ | \lambda_0| \psi^*(\xi^\perp -\xi_0^\perp) = 1. $ If we choose $\psi = \phi^\dag$ then we have $\psi^*(\xi^\perp) =\phi(\xi)$. So the previous equation becomes the equation for a $\phi$-circle \[ \phi(\xi-\xi_0) = {1}/{|\lambda_0|}. \] \end{remark} \begin{corollary}\label{c:isop-lm} Let $\phi$ be a norm with piecewise $\mathrm{C}^2$ dual norm $\phi^*$ and let $f\in \mathrm{C}^2(D)$ be such that $\mathcal{C}(f) =\emptyset$. If $\mathrm{gr}(f)$ has constant $\phi$-curvature, then it is foliated by geodesics of $\H^1$ relative to the norm $\phi^\dag$. \end{corollary} The proof follows by combining Corollary~\ref{rem:pieceC2} with Remark~\ref{PALLO} and Theorem~\ref{l:PMP}. \section{Characteristic set of $\phi$-critical surfaces}\label{s:char} In this section we study the characteristic set of $\phi$-critical surfaces and then apply the results to $\phi$-isoperimetric sets. For a $\mathrm{C}^2$ surface $\Sigma\subset\H^1$, the characteristic set is \begin{equation} \mathcal{C}(\Sigma)=\{p \in \Sigma : T_p \Sigma = \mathcal{D}(p)\}. \end{equation} Note that any $\mathrm{C}^2$ surface $\Sigma\subset\H^1$ is a $z$-graph around any of its characteristic points $p\in\mathcal{C}(\Sigma)$.
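As a simple illustration (a worked example; the formula for $F$ below is the one implied by the frame conventions used in Section~\ref{s:PMP}, and may differ in sign from the convention fixed in \eqref{effe}), consider the plane $\Sigma=\{z=0\}$, i.e., the $z$-graph of $f\equiv 0$. One computes $F(\xi)=\pm\tfrac12\xi^\perp$, so the projected horizontal gradient vanishes only at $\xi=0$: the origin is the unique characteristic point, the single point where the tangent plane $T_0\Sigma=\mathbb{R}^2\times\{0\}$ coincides with the horizontal plane $\mathcal{D}(0)$.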
When $\Sigma$ is oriented, the $\phi$-curvature $H_\phi$ of $\Sigma$ can be defined in a globally coherent way. When $\Sigma$ is a $z$-graph at the point $p=(\xi, z)=(x,y,z)\in\Sigma$ we let $H_\phi(p) = \mathrm {div}(\X)(\xi)$ where $f$ is a $z$-graph function; when $\Sigma$ is an $x$-graph, we let $H_\phi(p) = \mathcal L f(y,z)$, where now $f$ is an $x$-graph function and $\mathcal L f$ is defined in \eqref{eq:Delta}; when $\Sigma$ is a $y$-graph we proceed analogously. We say that $\Sigma$ is \emph{$\phi$-critical} if it is closed, has constant $\phi$-curvature away from $\mathcal{C}(\Sigma)$, and the constant $\phi$-curvature equation holds in the weak (distributional) sense in a neighborhood of any characteristic point. Our goal is to prove Theorem~\ref{thmi:char}. The proof is obtained by combining Lemma~\ref{l:char1} and Theorem~\ref{thm:ccurves} below. In this section, $\phi$ and $\phi^*$ are two norms of class $\mathrm{C}^2$. We will not repeat this assumption in the various statements. \subsection{Qualitative structure of the characteristic set} \begin{lemma}\label{l:char1} Let $\Sigma\subset \H^1$ be a $\mathrm{C}^2$ surface with constant $\phi$-curvature. Then $\mathcal{C}(\Sigma)$ consists of isolated points and $\mathrm{C}^1$ curves. Moreover, for every isolated point $p_0=(\xi_0,z_0)\in \mathcal{C}(\Sigma)$ and every $f$ such that $p_0\in \mathrm{gr}(f)\subset \Sigma$, we have $\operatorname{rank}(JF(\xi_0))=2$, {where $F$ is the projected horizontal gradient introduced in \eqref{effe}}. \end{lemma} \begin{proof} We let $\mathcal{C}(f)$ be as in \eqref{eq:charf}. For any $\xi_0 \in \mathcal{C}(f)$, the Jacobian matrix $JF(\xi_0)$ has rank 1 or 2. Indeed, an explicit calculation shows that $JF(\xi_0)\neq 0$ for all $\xi_0 \in D$. If $\mathrm{rank}(JF(\xi_0))=2$ then $\xi_0$ is an isolated point of $\mathcal{C}(f)$. We study the case $\mathrm{rank}(JF(\xi_0))=1$. We claim that in this case $\mathcal{C}(f)$ is a curve of class $\mathrm{C}^1$ in a neighborhood of $\xi_0$.
{The argument that we use here is inspired by \cite{CHMY}.} For $b\in \mathbb{R}^2$ we define $F_b:D\to \mathbb{R}$, $F_b =\langle F,b\rangle$. When $b\notin \operatorname{ker}(JF(\xi_0))$, the equation $F_b=0$ defines a $\mathrm{C}^1$ curve $\Gamma_b$ near and through $\xi_0$. We have $\mathcal{C}(f)\subset \Gamma_b$. Since $\nabla F_b (\xi_0)$ is in the image of $JF(\xi_0)$, which is a line independent of $b$, the normal direction to $\Gamma_b$ at $\xi_0$ does not depend on $b$. We choose one of the two unit normals and we call it $N\in \mathbb{R}^2$. We claim that there exist $a,b\in \mathbb S^1$, where $\mathbb S^1 := \{w\in\mathbb{R}^2 : |w|=1\}$, such that \begin{equation} \label{nonno} a\notin \{b,-b\},\quad a,b \notin \operatorname{ker}(JF(\xi_0)),\quad |\langle \nabla\phi^*(b^\perp ), N\rangle|\neq |\langle \nabla\phi^*(a^\perp), N\rangle|. \end{equation} To prove the claim, pick $b\in\mathbb S^1\setminus \operatorname{ker}(JF(\xi_0))$ (this is possible since $\operatorname{rank}(JF(\xi_0))\neq0$), and define the set \[ K_b:=\left\{v\in C_\phi: |\langle v, N\rangle|=|\langle \nabla\phi^*(b^\perp ), N\rangle|\right\}. \] Since the map $\nabla\phi^*:\mathbb S^1\to C_\phi$ is continuous, the set $(\nabla\phi^*)^{-1}(K_b)\subset \mathbb S^1$ is closed in $\mathbb S^1$. Moreover, $\nabla\phi^*:\mathbb S^1\to C_\phi$ is surjective, since for every $w\in C_\phi$ and every $v$ in the subdifferential of $\phi$ at $w$, we have $w=\nabla\phi^*(v)$ (see, e.g., \cite[Theorem 23.5]{Rockafellar}). As a consequence, $(\nabla\phi^*)^{-1}(K_b)\neq \mathbb S^1$, since otherwise we would have $K_b=C_\phi$, which is impossible. The set \[\Upsilon=\operatorname{ker}(JF(\xi_0))^\perp\cup(\nabla\phi^*)^{-1}(K_b)\cup\{b^\perp,-b^\perp\}\] is therefore a proper closed subset of $\mathbb S^1$, and the claim follows by choosing $a^\perp\in\mathbb S^1\setminus \Upsilon$. \begin{figure} \includegraphics[width=.85\textwidth]{coni.jpeg} \caption{The cone $C_\alpha$ and the region $A$.
On the left, $A$ does not touch $\partial\{|\xi-\xi_0|<\delta\}$, while it does on the right. We can always restrict our attention to the case on the left when $\xi_0$ is a density point of $\mathcal{C}(f)$. } \label{f:cono} \end{figure} Fix $a,b\in\mathbb S^1$ such that \eqref{nonno} holds and, for $\alpha\in(0,1)$, let $C_\alpha:=\{\xi\in\mathbb{R}^2 : |\langle N,\xi-\xi_0\rangle|< |\xi-\xi_0|\sin\alpha\}$ be the cone with vertex $\xi_0$, axis parallel to $N^\perp$ and aperture $2\alpha$. Since $\Gamma_a,\Gamma_b$ are $\mathrm{C}^1$, there exists $\delta\in(0,1)$ such that \begin{equation}\label{eq:cont} \{\xi\in\Gamma_a\cup\Gamma_b : |\xi-\xi_0|<\delta\}\subset C_{\alpha,\delta}, \end{equation} where we set $C_{\alpha,\delta}=\{\xi\in C_\alpha: |\xi-\xi_0|<\delta\}$. Let us assume by contradiction that $\mathcal{C}(f)$ is not a $\mathrm{C}^1$ curve near $\xi_0$. Then there exists a nonempty connected component $A$ of $C_{\alpha,\delta}\setminus (\Gamma_a\cup \Gamma_b)$ such that, letting \[ \Lambda_a:=\Gamma_a\cap \partial A, \quad \Lambda_b:=\Gamma_b\cap \partial A, \quad \Lambda_\partial:=\partial \{|\xi-\xi_0|<\delta\}\cap \partial A, \] we have \begin{equation}\label{eq:hp} \Lambda_a\neq \emptyset,\quad \Lambda_b\neq \emptyset,\quad \partial A=\Lambda_a\cup\Lambda_b\cup\Lambda_\partial, \quad \sharp(\Lambda_a\cap\Lambda_b)\leq 2. \end{equation} See Figure~\ref{f:cono}. Notice that $A$, $\Lambda_a$, $\Lambda_b$, and $\Lambda_\partial$ depend on $\delta$. By \eqref{eq:cont} (see also Figure~\ref{fig:conto_trig}), we have \begin{equation}\label{eq:area} \mathcal L^2(A) \leq \delta^2\tan(\alpha).
\end{equation} \begin{figure} \includegraphics[width=.5\textwidth]{conti.jpeg} \caption{Proportions in $C_{\alpha,\delta}$.} \label{fig:conto_trig} \end{figure} By \eqref{eq:hp} and since $\mathcal{C}(f)\subset \Lambda_a\cap \Lambda_b$, for $\xi\in \operatorname{int}(\Lambda_a)\cup\operatorname{int}(\Lambda_b)$ we have $F(\xi)\neq 0$, where we endow $\Lambda_a$ and $\Lambda_b$ with their relative topologies. We deduce that $F(\xi)=c_a(\xi)a^\perp$ with $c_a(\xi)\ne 0$ for $\xi\in \operatorname{int}(\Lambda_a)$ and $F(\xi)=c_b(\xi)b^\perp$ with $c_b(\xi)\ne 0$ for $\xi\in \operatorname{int}(\Lambda_b)$. Using the fact that $\nabla\phi^*$ is positively $0$-homogeneous it then follows that the vector field $\mathcal N: D\setminus \mathcal{C}(f)\to \mathbb{R}^2$, $\mathcal N (\xi) = \nabla \phi^* (F(\xi) )$, is constant along $\Lambda_a$ and $\Lambda_b$. Namely, \[ \begin{split} \mathcal N (\xi) & = \operatorname{sgn}(c_a) \nabla\phi^*(a^\perp)=:\mathcal N_a ,\quad \xi \in \mathrm{int}(\Lambda_a), \\ \mathcal N (\xi) &= \operatorname{sgn}(c_b) \nabla\phi^*(b^\perp)=:\mathcal N_b ,\quad \xi \in \mathrm{int}(\Lambda_b). \end{split} \] By assumption, and since $\phi^*\in \mathrm{C}^2$, there exists a constant $h \in\mathbb{R}$ such that \[ \mathrm{div}(\mathcal N (\xi) ) = h , \quad \xi\in D\setminus \mathcal{C}(f), \] in the strong sense. Then by the divergence theorem, and since $A\cap \mathcal{C}(f) =\emptyset$, we have \[ h \mathcal L^2(A) =\int_A\mathrm{div} (\mathcal N) dxdy = \int _{\Lambda _a} \langle \mathcal N_a, N_a\rangle d \mathcal H^1 + \int _{\Lambda_b} \langle \mathcal N_b, N_b\rangle d \mathcal H^1 + \int _{\Lambda_\partial} \langle \mathcal N, N_\partial\rangle d \mathcal H^1, \] where $N_a$, $N_b$, and $N_\partial$ are, respectively, the normals to $\Lambda_a$, $\Lambda_b$, and $ \Lambda_\partial$, exterior with respect to $A$.
For $\alpha\to0^+$ we have \[ \begin{split} & \int _{\Lambda _a} N_a \,d \mathcal H^1 = \delta(-N +o(1)), \\ & \int _{\Lambda _b} N_b \,d \mathcal H^1 = \delta(N +o(1)), \\ &\left|\int _{\Lambda_\partial} \langle \mathcal N, N_\partial\rangle d \mathcal H^1\right|\le C\delta\alpha, \end{split} \] where $o(1)\to 0 $ as $ \alpha \to 0^+$ and $C>0$ denotes a suitable constant. Now from \eqref{eq:area} we deduce that \[ | \delta\tan(\alpha) h | \geq |\langle \mathcal N_b-\mathcal N _a, N\rangle + o(1)|-C\alpha, \] which, letting $\alpha\to0^+$, implies $ \langle \mathcal N_b-\mathcal N _a, N\rangle=0$, in contradiction with \eqref{nonno}. This proves that $\mathcal{C}(f)$ is a $\mathrm{C}^1$ curve around any point $\xi_0$ with $\mathrm{rank}(JF(\xi_0))=1$. \end{proof} \subsection{Characteristic curves in $\phi$-critical surfaces} Given a surface $\Sigma\subset\H^1$, we call a \emph{characteristic curve on $\Sigma$} any (nontrivial) curve $\Gamma\subset \mathcal{C}(\Sigma)$. In this section we prove the following result. \begin{theorem}\label{thm:ccurves} Let $\Sigma$ be a complete and oriented surface of class $\mathrm{C}^2$. If $\Sigma$ {is $\phi$-critical with non-vanishing $\phi$-curvature $h\neq 0$} then any characteristic curve on $\Sigma$ is either a horizontal line or the horizontal lift of a simple closed curve. \end{theorem} For a characteristic curve $\Gamma$ in $\Sigma$ we denote its coordinates by $\Gamma=(\Xi,\zeta)\in\mathbb{R}^2\times\mathbb{R}$. For any $p_0=(\xi_0,z_0)$ on $ \Gamma$, let $\delta>0$ be small enough to have \begin{equation}\label{eq:divi} \{\xi\in\mathbb{R}^2 : |\xi-\xi_0|<\delta\}\setminus \mathrm {supp}( \Xi)=B^+\cup B^-, \end{equation} where $B^+,B^-\subset\mathbb{R}^2$ are disjoint open connected sets. The $\phi$-normal $\mathcal N$ in \eqref{eq:cN} is well-defined in $B^+\cup B^-$. \begin{lemma}\label{p:cont} Let $\Sigma$ be a $\mathrm{C}^2$ surface with constant $\phi$-curvature.
With the above notation, the following limits exist \begin{equation}\label{eq:limits} \mathcal N^\pm(\xi_0):=\lim_{{B^\pm\ni \xi}\to \xi_0} \mathcal N(\xi) \end{equation} and satisfy $\mathcal N^+(\xi_0)=-\mathcal N^-(\xi_0)$. \end{lemma} \begin{proof} This is a straightforward corollary of ~\cite[Proposition~3.5]{CHMY}. \end{proof} \begin{proposition}\label{p:orto} Let {$\Sigma$ be a $\phi$-critical surface} of class $\mathrm{C}^2$ and let $\Gamma=(\Xi,\zeta)$ be a characteristic curve on $\Sigma$. Then for every $p_0=(\xi_0,z_0)$ in $ \Gamma$ we have \begin{equation}\label{eq:orto} {\mathcal N^{\pm}(\xi_0)\in T_{\xi_0}\Xi}, \end{equation} where $\mathcal N^\pm$ is defined as in Lemma~\ref{p:cont}. \end{proposition} \begin{proof} Let $f\in\mathrm{C}^2(D)$ be a graph function for $\Sigma$ with $\xi_0\in D\subset\mathbb{R}^2$. Without loss of generality we assume $D=\{|\xi-\xi_0|<\delta\}$ and let $D^\pm:=D\cap B^\pm$, where $B^\pm$ are {as in} \eqref{eq:divi}. Let $h\in\mathbb{R}$ be the $\phi$-curvature of $\Sigma$. Since $\Sigma$ is $\phi$-critical, for any $\varphi \in \mathrm{C}^\infty_c(D)$ we have \[ \int_{D }\langle\X,\nabla \varphi\rangle\;d\xi = - \int_D h\varphi\;d\xi \] and $\operatorname{div}(\X)=h$ pointwise in $D^+\cup D^-$. Then, { denoting by $N_\Xi$ the normal to $\Xi$ pointing towards $D^-$,} by the divergence theorem we have \[\begin{split} \int_D h\varphi\;d\xi&=\int_{D^+}\operatorname{div}(\X)\varphi\;d\xi+\int_{D^-}\operatorname{div}(\X)\varphi\;d\xi\\ &=-\int_{D^+\cup D^-}\langle\X,\nabla \varphi\rangle\;d\xi+\int_\Xi\varphi\langle\mathcal N^+,N_\Xi\rangle\;d\mathcal H^1-\int_\Xi\varphi\langle\mathcal N^-,N_\Xi\rangle\;d\mathcal H^1\\ &=\int_D h\varphi\;d\xi+\int_\Xi\varphi\langle\mathcal N^+-\mathcal N^-,N_\Xi\rangle\;d\mathcal H^1. \end{split} \] By Lemma~\ref{p:cont}, this implies that \[ \int_\Xi \varphi\langle\mathcal N^ + ,N_\Xi\rangle\;d\mathcal H^1=0 \] and since $\varphi$ is arbitrary, this yields the claim. 
\end{proof} { \begin{remark} \label{p:chc2} Under the assumptions of the previous proposition, the characteristic curves $\Gamma=(\Xi,\zeta)$ of $\Sigma$ are of class $\mathrm{C}^2$. This can be proved exactly as in Proposition~4.20 of \cite{RR08} using {condition} \eqref{eq:orto}. In particular, $\Xi$ is of class $\mathrm{C}^2$. \end{remark} } \subsubsection{Parametrization of constant $\phi$-curvature surfaces around characteristic curves} In this section, we study a $\phi$-{critical} surface $\Sigma$ of class $\mathrm{C}^2$ having constant $\phi$-curvature $h\neq 0$ near a characteristic curve. Without loss of generality we assume $h>0$. We assume $\phi$ to be normalized in such a way that $\phi(1,0)=1$ and we fix a parametrization $\mu:[0,M]\to\mathbb{R}^2$ of $C_\phi$ such that $\phi^\dagger( \dot\mu )= 1$, $\mu([0,M])=C_\phi$, with initial and end-point $\mu(0)=\mu(M)$. We choose the clockwise orientation and we extend $\mu$ to the whole $\mathbb{R}$ by $M$-periodicity. We have $\mu\in \mathrm{C}^2(\mathbb{R};\mathbb{R}^2)$ and \begin{equation}\label{eq:phiphis} \mu(\tau)=\nabla\phi^*(\dot \mu(\tau)^\perp),\qquad\mbox{ for all }\tau\in \mathbb{R}. \end{equation} In fact, letting $\mathcal N(t)=\nabla\phi^*(\dot\mu(t)^\perp)$, we have $\dot{\mathcal N}=\dot\mu$ as in \eqref{eq:dotNperp}. Equation~\eqref{eq:phiphis} then follows by integration using the fact that $0$ is the center of $C_\phi$. Let $\Gamma=(\Xi,\zeta)\in \mathrm{C}^2(I; \Sigma)$ be a characteristic curve parameterized in such a way that \begin{equation}\label{eq:paraXi} \phi(\dot \Xi) = 1\quad \text{on }I. \end{equation} Locally, $\Gamma$ disconnects $\Sigma$ and there are no other characteristic points of $\Sigma$ close to $\Gamma$, by Lemma~\ref{l:char1}. According to Corollary~\ref{l:conto}, $\Sigma\setminus\mathcal{C}(\Sigma)$ admits near $\Gamma$ a Legendre foliation made of horizontal lifts of $\phi$-circles of radius $1/h$, followed in the clockwise sense.
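As a sanity check of \eqref{eq:phiphis} in the Euclidean case (a worked example: $\phi=|\cdot|$, so that $C_\phi$ is the unit circle, $M=2\pi$ and $\nabla\phi^*(w)=w/|w|$), the clockwise arclength parametrization $\mu(\tau)=(\cos\tau,-\sin\tau)$ gives \[ \dot\mu(\tau)=(-\sin\tau,-\cos\tau),\qquad \dot\mu(\tau)^\perp=(\cos\tau,-\sin\tau)=\mu(\tau), \] so that $\nabla\phi^*(\dot\mu(\tau)^\perp)=\mu(\tau)$, as claimed.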
Hence, given a point $(\xi_0,z_0)\in \Sigma\setminus\mathcal{C}(\Sigma)$ near $\Gamma$, there exist $c\in \mathbb{R}^2$ and $\tau\in [0,M]$ such that the horizontal lift of \[\xi(s)=c+h^{-1}\mu(\tau+h s)\] passing through $(\xi_0,z_0)$ at $s=0$ stays in $\Sigma$ until it meets a characteristic point. Here, $c$ is the center of the $\phi$-circle. Notice that $\nabla \phi^*(\dot\xi(s)^\perp)=\mathcal N(\xi(s))$, so that, by Lemma~\ref{p:cont} and \eqref{eq:orto}, $\nabla \phi^*(\dot\xi(0)^\perp)$ converges to a vector collinear to $\dot\Xi(t)$ as $\xi_0$ approaches $\Xi(t)$ for some $t\in I$. By \eqref{l:norm1} and \eqref{eq:paraXi}, $\nabla \phi^*(\dot\xi(0)^\perp)$ converges either to $\dot \Xi(t)$ or to $-\dot \Xi(t)$ as $\xi_0$ approaches $\Xi(t)$. Since $\Xi$ locally disconnects the plane, we can fix a side from which $\xi_0$ approaches $\Xi$ and, up to reversing the parameterization of $\Gamma$, we can assume that $\nabla \phi^*(\dot\xi(0)^\perp)$ converges to $\dot \Xi(t)$ as $\xi_0$ converges to $\Xi(t)$. Thanks to \eqref{eq:phiphis} and since $\dot\xi(0)=\dot\mu(\tau)$, we deduce that $\mu(\tau)=\nabla \phi^*(\dot\xi(0)^\perp)$ converges to $\dot \Xi(t)$ as $\xi_0\to \Xi(t)$. In particular, the limit direction of $\dot \xi(0)$ as $\xi_0\to \Xi(t)$ is transversal to $\Xi$. By local compactness of the set of $\phi$-circles with radius $1/h$, the horizontal lift passing through $\Gamma(t)$ at $s=0$ of a curve $c+h^{-1}\mu(\tau+h s)$ with $\mu(\tau)=\dot \Xi(t)$ is a Legendre curve contained in $\Sigma$, for $s$ either in a positive or a negative neighborhood of $0$. To fix notation, we assume that $s$ is in a positive neighborhood of $0$, the computations being equivalent in the other case.
Moreover, there is no other Legendre curve having $\Gamma(t)$ in its closure and whose projection on the $xy$-plane stays in the chosen side of $\Xi$, since $\tau\in [0,M)$ and $c\in \mathbb{R}^2$ are uniquely determined by \[\mu(\tau)=\dot \Xi(t),\qquad c=\Xi(t)-h^{-1}\mu(\tau)=\Xi(t)-h^{-1}\dot\Xi(t).\] It is then possible to parameterize locally near $\Gamma$ one of the two connected components of $\Sigma\setminus \Gamma$ by Legendre curves using the function \begin{equation}\label{eq:gamma} (t,s)\mapsto\gamma(t,s)=(\xi(t,s),z(t,s)) \end{equation} where \begin{equation}\label{eq:xi} \xi(t,s)=h^{-1}\mu(\tau(t)+hs)+\Xi(t)-h^{-1}\dot\Xi(t),\quad t\in I,\ s>0, \end{equation} with $\tau$ uniquely defined via the equation \begin{equation}\label{eq:tau} \mu(\tau(t))=\dot\Xi(t),\quad t\in I, \end{equation} and $z$ defined by \begin{equation}\label{eq:z} z(t,s)=\zeta(t)+\int_0^s \omega(\xi(t,\sigma),\xi_s(t,\sigma))d\sigma. \end{equation} As discussed above, we have \begin{align} \nabla\phi^*(\xi_s(t, 0)^\perp)&=\dot\Xi(t),\label{eq:NXi}\\ \label{eq:parad} \phi^\dagger(\xi_s) &=1. \end{align} For $t\in I$, we define the \emph{characteristic time} $s(t)$ as the first positive time $s>0$ such that $\gamma(t,s)\in\mathcal{C}(\Sigma)$. We will prove later that such an $s(t)$ exists. Finally, we let $S:=\{(t,s) : t\in I,\ 0\leq s\leq s(t)\}$ and we consider the surface $\gamma(S)\subset \Sigma$. \begin{lemma}\label{p:parag} We have $\gamma \in \mathrm{C}^1(S;\Sigma)$ with $\gamma(\cdot,0)=\Gamma $. Moreover, the second order derivatives $\gamma_{ss}, \gamma_{ts},\gamma_{st}$ are well-defined and \begin{equation} \label{eq:schwarz}\gamma_{ts}=\gamma_{st}. \end{equation} \end{lemma} \begin{proof} By \eqref{eq:xi} and \eqref{eq:z}, we see that $\gamma_{ss}$ exists and that $\xi_{ts}=\xi_{st}$.
Moreover, \begin{align*} z_{st}&= \omega(\xi_t(t,\cdot),\xi_{s}(t,\cdot))+\omega(\xi(t,\cdot),\xi_{st}(t,\cdot))\\ &=\omega(\xi_t(t,\cdot),\xi_{s}(t,\cdot))+\omega(\xi(t,\cdot),\xi_{ts}(t,\cdot)) = z_{ts}. \end{align*} \end{proof} On the surface $\gamma(S)$ we consider the vector field \begin{equation} V(t,s):=\gamma_t(t,s)=(\xi_t(t,s),z_t(t,s))\in\mathbb{R}^3. \end{equation} It plays the role of the Jacobi vector field $V$ in \cite[Lemma~6.2]{RR08}. The characteristic time $s(t)$ is precisely the first positive time such that $\langle V(t,s(t)), Z\rangle_{\mathcal D}=0$. Here, with a slight abuse of notation, $\langle\cdot,\cdot\rangle_{\mathcal D}$ denotes the scalar product that makes $X,Y,Z$ orthonormal. The following computation is crucial for what follows. We recall that we are assuming the $\phi$-curvature to be a constant $h\neq 0$. \begin{lemma}\label{l:contoVZ} We have the identity \[ \langle V(t,s),Z \rangle_{\mathcal D}= 2\big[h^{-2}\omega(\ddot\Xi,\dot\Xi) +\omega(\dot\Xi-h^{-1}\ddot\Xi,h^{-1}\mu(\tau+hs))\big]. \] \end{lemma} \begin{proof} First notice that \begin{equation}\label{eq:VZ} \langle V,Z\rangle_{\mathcal D}=z_t+\omega(\xi_t,\xi), \end{equation} where \[ z_t(t,s)=z_t(t,0)+\int_0^s\omega(\xi_t(t,\sigma),\xi_s(t,\sigma))\;d\sigma+\int_0^s\omega(\xi(t,\sigma),\xi_{st}(t,\sigma))\;d\sigma.
\] Using \eqref{eq:xi}, \eqref{eq:tau}, and the skew-symmetry of $\omega$, the above implies \[ \begin{split} z_t(\cdot,s)&=\omega(\Xi,\dot\Xi) +\int_0^s\omega(\dot\Xi-h^{-1}\ddot\Xi+h^{-1}\dot\tau\dot\mu(\tau+h\sigma),\dot\mu(\tau+h\sigma))\;d\sigma\\ &\quad +\int_0^s\omega(\Xi-h^{-1}\dot\Xi+h^{-1}\mu(\tau+h\sigma),\dot \tau\ddot\mu(\tau+h\sigma))\;d\sigma\\ &=\omega(\Xi,\dot\Xi) +h^{-1}\omega(\dot\Xi-h^{-1}\ddot\Xi,\mu(\tau+hs)-\mu(\tau)) \\&\quad +h^{-1}\omega(\Xi-h^{-1}\dot\Xi,\dot \tau\dot\mu(\tau+hs)-\dot \tau\dot\mu(\tau) )\\ &\quad +h^{-2}\omega(\mu(\tau+hs),\dot \tau\dot\mu(\tau+hs))-h^{-2}\omega(\mu(\tau),\dot \tau\dot\mu(\tau))\\ &=\omega(\Xi,\dot\Xi) +h^{-1}\omega(\dot\Xi-h^{-1}\ddot\Xi,\mu(\tau+hs)) -h^{-1}\omega(\dot\Xi-h^{-1}\ddot\Xi,\dot\Xi)\\ &\quad+h^{-1}\omega(\Xi-h^{-1}\dot\Xi,\dot\tau\dot\mu(\tau+hs)) -h^{-1}\omega(\Xi-h^{-1}\dot\Xi,\ddot\Xi) \\&\quad +h^{-2}\omega(\mu(\tau+hs),\dot\tau\dot\mu(\tau+hs)) -h^{-2}\omega(\dot\Xi,\ddot\Xi)\\ &=\omega(\Xi,\dot\Xi) -h^{-1}\omega(\Xi,\ddot\Xi) +h^{-2}\omega(\ddot\Xi,\dot\Xi) +\omega(\dot\Xi-h^{-1}\ddot\Xi,h^{-1}\mu(\tau+hs))\\ &\quad +h^{-1}\omega(\Xi-h^{-1}\dot\Xi+h^{-1}\mu(\tau+hs),\dot\tau\dot\mu(\tau+hs)). \end{split} \] Moreover, we have \[\begin{split} \omega(\xi_t,\xi)&= \omega(\dot\Xi-h^{-1}\ddot\Xi+h^{-1}\dot\tau\dot\mu(\tau+hs),\Xi-h^{-1}\dot\Xi+h^{-1}\mu(\tau+hs)) \\ &=h^{-1}\omega(\dot\tau\dot\mu(\tau+hs),\Xi-h^{-1}\dot\Xi +h^{-1}\mu(\tau+hs)) +\omega(\dot\Xi,\Xi) \\&\quad -h^{-1}\omega(\ddot\Xi,\Xi) +\omega(\dot\Xi-h^{-1}\ddot\Xi,h^{-1}\mu(\tau+hs)) +h^{-2}\omega(\ddot\Xi,\dot\Xi). \end{split} \] Summing up, we obtain the claim. \end{proof} We show next that for every $t\in I$, the Legendre curve $s\mapsto \gamma(t,s)$ meets a characteristic point before $\xi(t,s)$ returns to the point $\xi(t,0)=\Xi(t)$, i.e., $hs(t)<M$. \begin{lemma}\label{l:exss} For any $t\in I$, there exists $s(t)\in(0,M/h)$ such that $\langle V(t,s(t)),Z\rangle_{\mathcal D}=0$.
\end{lemma} \begin{proof} For fixed $t$, consider the function $\theta:\mathbb{R}\to\mathbb{R}$, defined by \[ \theta(s)=\omega(\dot\Xi-h^{-1}\ddot\Xi,h^{-1}\mu(\tau+hs)). \] By Lemma~\ref{l:contoVZ}, we have that $\langle V(t,s),Z\rangle_{\mathcal D}=0$ if and only if $\theta(s)=b$ with $b:=h^{-2}\omega(\dot\Xi,\ddot\Xi)$. The equation $\theta(s)=b$ is certainly satisfied for $hs=nM$, $n\in\mathbb{N}$. This follows by the $M$-periodicity of $\mu$ and the fact that $V(t,0)=\dot \Gamma(t)$ is horizontal. It is enough to consider the case $b\geq0$, the case $b<0$ being analogous. By \eqref{eq:tau} we have \[ \dot\theta(0)=\omega(\dot\Xi-h^{-1}\ddot\Xi,\dot\mu(\tau)) =\omega(\mu(\tau),\dot\mu(\tau)). \] By the fact that $C_\phi$ is a convex curve around $0$, it follows that $\dot\theta(0)\neq 0$. If $\dot\theta(0)>0$, there exists $s^*\in (0,M/(2h))$ such that $\theta(s^*)>\theta(0)=b$. In this case, by symmetry of $C_\phi$ we have $\mu(\tau+h(s^*+M/(2h)))=-\mu(\tau+hs^*)$, thus implying $\theta(s^*+M/(2h))=-\theta(s^*)<-b\leq 0$. By continuity of $\theta$, we deduce the existence of $\bar s\in(0,M/h)$ satisfying $\theta(\bar s)=b$. We argue in the same way in the case $\dot\theta(0)<0$. \end{proof} We now determine a quantity that remains constant along the Legendre curves $s\mapsto \gamma(t,s)$. \begin{proposition}\label{p:constant} For any $t\in I$ and for all $s\in [0,s(t)]$ we have \begin{equation} \langle V(t,s),Z\rangle_{\mathcal D}+h\langle\nabla\phi^\dagger(\xi_s(t,s)),\xi_t(t,s)\rangle = 0.
\end{equation} \end{proposition} \begin{proof} By \eqref{eq:VZ}, \eqref{eq:z} and \eqref{eq:schwarz}, we have \begin{align} \frac{\partial}{\partial s}\langle V,Z\rangle_{\mathcal D} &=z_{ts}+\omega(\xi_{ts},\xi)+\omega(\xi_t,\xi_s) =\frac{\partial}{\partial t}\omega(\xi,\xi_s)+\omega(\xi_{st},\xi)+\omega(\xi_t,\xi_s)\nonumber\\ &=\omega(\xi_t,\xi_s)+\omega(\xi,\xi_{st})+\omega(\xi_{st},\xi)+\omega(\xi_t,\xi_s)=2\omega(\xi_t,\xi_s).\label{eq:claim1cc} \end{align} We claim that \begin{equation}\label{eq:claim2cc} h\frac{\partial}{\partial s}(\langle\nabla\phi^\dagger(\xi_s),\xi_t\rangle)=2\omega(\xi_s,\xi_t). \end{equation} Indeed, by Theorem~\ref{l:PMP} and Remark~\ref{PALLO}, we have \begin{equation} \frac{\partial}{\partial s}\nabla\phi^\dagger(\xi_s)=\mathcal H\phi^\dagger(\xi_s)\xi_{ss}=\frac{1}{h}\xi_s^\perp, \end{equation} and therefore \[ \begin{split} \frac{\partial}{\partial s}\langle\nabla\phi^\dagger(\xi_s(t,s)),\xi_t(t,s)\rangle &= \frac{1}{h}\langle\xi_s^\perp,\xi_t\rangle +\langle\nabla\phi^\dagger(\xi_s),\xi_{st}\rangle \end{split} \] On differentiating \eqref{eq:parad} w.r.t.\ $t$ we see that $\langle\nabla\phi^\dagger(\xi_s),\xi_{st}\rangle=0$. This is \eqref{eq:claim2cc}. Summing up \eqref{eq:claim1cc} and \eqref{eq:claim2cc}, we deduce that the function $\Lambda_{t}(s)=\langle V(t,s),Z\rangle_{\mathcal D}+h\langle\nabla\phi^\dagger(\xi_s(t,s)),\xi_t(t,s)\rangle$ is constant. To conclude the proof it is enough to check that $\Lambda_t(0)=0$. On the one hand, we have $\langle V(t,0),Z\rangle_{\mathcal D}=\langle \dot\Gamma(t),Z\rangle_{\mathcal D}=0$, since $\Gamma$ is horizontal. On the other hand, since $\nabla\phi^\dagger(v)=-\nabla\phi^*(v^\perp)^\perp$ for any $v\neq 0$, using \eqref{eq:NXi} we finally obtain \[ \langle\nabla\phi^\dagger(\xi_s(t,0)),\xi_t(t,0)\rangle= -\langle\nabla\phi^*(\xi_s(t,0)^\perp)^\perp,\dot\Xi(t)\rangle=0. 
\qedhere \] \end{proof} Since the set $\Gamma_1:=\{\gamma(t,s(t)) : t\in I\}$ is made of characteristic points, it is either an isolated point or a nontrivial characteristic curve (Lemma~\ref{l:char1}). We will see in the proof of Theorem~\ref{thmi:class}, contained in Section \ref{s:proof2}, that if $\Gamma_1$ were an isolated characteristic point, then the same would be true for $\Gamma$. We stress that the argument leading to such a conclusion does not rely on the characterization of $\Gamma$ provided in this section. We then have that $\Gamma_1$ is a nontrivial characteristic curve. \begin{proposition}\label{p:sconstant} The function $t\mapsto s(t)$ is constant. \end{proposition} \begin{proof} Let $t\in I$. Since $\langle V(t,s(t)),Z\rangle_{\mathcal D}=0$, the point $\gamma(t,s(t))$ is characteristic for $\Sigma$. Then, by Lemma~\ref{l:char1} and Remark~\ref{p:chc2}, $\Gamma_1$ is a $\mathrm{C}^2$ characteristic curve. By the implicit function theorem, the function $t\mapsto s(t)$ is $\mathrm{C}^1$-smooth and for $t\in I$ we have \[ \dot\Gamma_1(t)=V(t,s(t))+\dot s(t)\gamma_s(t,s(t)). \] The curve $\Xi_1$ obtained by projecting $\Gamma_1$ on the $xy$-plane then satisfies \[ \dot\Xi_{1}(t)=\xi_t(t,s(t))+\dot s(t)\xi_s(t,s(t)). \] Since $\gamma(t,s(t))\in\mathcal{C}(\Sigma)$, by Proposition~\ref{p:orto}, and using the fact that $\nabla\phi^\dagger(v)=-\nabla\phi^*(v^\perp)^\perp$ for any $v\neq 0$ we have \[ \langle\nabla\phi^\dagger(\xi_s(t, s(t))),\dot\Xi_1(t)\rangle=-\langle\nabla\phi^*(\xi_s(t, s(t))^\perp)^\perp,\dot\Xi_1(t)\rangle=0.
\] Therefore we obtain \begin{equation}\label{eq:ppp} 0=\langle\nabla\phi^\dagger(\xi_s(t, s(t))),\xi_t(t,s(t))\rangle+\dot s(t)\langle\nabla\phi^\dagger(\xi_s(t, s(t))),\xi_s(t,s(t))\rangle, \end{equation} where, by Proposition~\ref{p:constant}, \[ \langle\nabla\phi^\dagger(\xi_s(t, s(t))),\xi_t(t,s(t))\rangle=0, \] and moreover, by \eqref{eq:parad} and Euler's relation for the $1$-homogeneous function $\phi^\dagger$, \[ \langle\nabla\phi^\dagger(\xi_s(t, s(t))),\xi_s(t,s(t))\rangle=\phi^\dagger(\xi_s(t,s(t)))=1. \] Equation \eqref{eq:ppp} thus implies $\dot s= 0$, which concludes the proof. \end{proof} We are now ready to prove Theorem~\ref{thm:ccurves}. \begin{proof}[Proof of Theorem~\ref{thm:ccurves}] Without loss of generality we assume $h>0$. By Remark~\ref{p:chc2}, $\Gamma$ is of class $\mathrm{C}^2$ and we denote by $I$ an interval of parametrization of $\Gamma=(\Xi,\zeta)$ satisfying \eqref{eq:paraXi}. We consider the parametrization $\gamma$ given by Lemma~\ref{p:parag}. By Proposition~\ref{p:sconstant} the characteristic time $s(t)$ is constant on $I$ and we let $s(t)=\bar s\in\mathbb{R}$. Since $\langle V(t,\bar s),Z\rangle_{\mathcal D}=0$, by Lemma~\ref{l:contoVZ} we thus have \[ h^{-2}\omega(\ddot\Xi(t),\dot\Xi(t)) +\omega(\dot\Xi(t)-h^{-1}\ddot\Xi(t),h^{-1}\mu(\tau(t)+h\bar s))=0. \] Using \eqref{eq:tau}, the last equation reads \begin{equation}\label{eq:gatto} \dot\tau\omega(\dot\mu(\tau),\mu(\tau)-\mu(\tau+h\bar s))=h\omega(\mu(\tau+h\bar s),\mu(\tau)). \end{equation} If the right-hand side is $0$ at some $t\in I$, then $\mu(\tau(t))$ and $\mu(\tau(t)+h\bar s)$ are parallel by definition of $\omega$ (cf.~\eqref{eq:omega}). Since $h\bar s\in (0,M)$ by Lemma~\ref{l:exss}, the only possible choice is $h\bar s=M/2$. Plugging this choice into the left-hand side and using the fact that $\mu(\tau+M/2)=-\mu(\tau)$, we obtain \[ 2\dot\tau\omega(\dot\mu(\tau),\mu(\tau))= 0\quad\text{on}\quad I. \] This implies that $\dot\tau=0$ on $I$ and therefore that $\tau$ is constant on $I$.
By \eqref{eq:tau} we deduce that $\dot \Xi$ is constant on $I$, implying that $\Xi$ is a straight line. We are now left to consider the case $h\bar s\in(0,M)$, $h\bar s\neq M/2$, so that $\omega(\mu(\tau(t)+h\bar s),\mu(\tau(t)))\neq 0$ for every $t\in I$. Equation~\eqref{eq:gatto} reads \[ \dot\tau= f(\tau)\quad\text{with}\quad f(\tau):=\frac{h\omega(\mu(\tau+h\bar s),\mu(\tau))}{\omega(\dot\mu(\tau),\mu(\tau)-\mu(\tau+h\bar s))}. \] For the sake of simplicity, assume $0\in I$. Notice that $f$ is $M/2$-periodic and of class $\mathrm{C}^1$ as a function of $\tau$. Hence, given $\tau_0\in\mathbb{R}$ satisfying $\mu(\tau_0)=\dot \Xi(0)$, there is a unique maximal solution $\tau$ to the differential equation with the initial condition $\tau(0)=\tau_0$. Since $h\bar s\in (0,M)$, $h\bar s\neq M/2$, we have $f(\tau)\neq 0$, yielding that $\dot\tau$ has constant sign. To fix ideas, assume that $\operatorname{sign}(\dot\tau)=1$. Then, there exists $T_0>0$ such that $\tau(T_0)=\tau_0+M/2$. We claim that \begin{equation} \label{eq:pert} \tau(t+T_0)=\tau(t)+\frac{M}{2}\qquad\text{for all\ } t\in\mathbb{R}. \end{equation} This follows from the fact that $\tau_1(t):=\tau(T_0+t)$ and $\tau_2(t):=\tau(t)+M/2$ for $t\in\mathbb{R}$ solve the same Cauchy problem $\dot\tau(t)=f(\tau)$, $\tau(0)=\tau_0+M/2$. Then, by \eqref{eq:pert}, the $M$-periodicity of $\mu$, and \eqref{eq:tau}, we have for every $t\in\mathbb{R}$ \[ \dot\Xi(t+2T_0)=\mu(\tau(t+2T_0))=\mu(\tau(t)+M)=\mu(\tau(t))=\dot\Xi(t), \] i.e., $\dot\Xi$ is $2T_0$-periodic. This implies that $\Xi$ is also $2T_0$-periodic.
Indeed, for $t\in\mathbb{R}$ we have \[ \begin{split} \Xi(t+2T_0)-\Xi(t)&=\int_t^{t+2T_0}\dot\Xi(\sigma)\;d\sigma=\int_t^{t+2T_0}\mu(\tau(\sigma))\;d\sigma\\ &= \int_t^{t+T_0}\mu(\tau(\sigma))\;d\sigma+ \int_{t}^{t+T_0}\mu(\tau(\sigma+T_0))\;d\sigma\\ &= \int_t^{t+T_0}\mu(\tau(\sigma))\;d\sigma- \int_{t}^{t+T_0}\mu(\tau(\sigma))\;d\sigma=0, \end{split}\] where we have used again the symmetry of $C_\phi$ and \eqref{eq:pert}. We are left to show that $\Xi(\bar \sigma)\neq \Xi(\bar t)$ for any $0\le \bar \sigma<\bar t<2T_0$. Assume that $\Xi(\bar \sigma)=\Xi(\bar t)$ for some $0\le \bar \sigma<\bar t\le 2T_0$. Then we have $0=\int_{\bar \sigma}^{\bar t}\dot\Xi(t)\;dt= \int_{\bar \sigma}^{\bar{t}}\mu(\tau(t))\;dt$. Now, letting $v:=\mu(\tau(\bar \sigma))$, by the symmetry of $C_\phi$ the function \[ \sigma\mapsto \int_{\bar \sigma}^\sigma\langle\mu(\tau(t)),v\rangle\;dt \] is monotone increasing for $\sigma\in[\bar \sigma,\bar \sigma+T_0]$ and decreasing for $\sigma\in[\bar \sigma+T_0,\bar \sigma+2T_0]$. Hence, the equation $\int_{\bar \sigma}^{\bar{t}}\mu(\tau(t))\;dt=0$ implies $\bar \sigma=0$ and $\bar t= 2T_0$. \end{proof} \subsection{Characteristic set of isoperimetric sets} In this section we apply the previous results to the study of the characteristic set of $\phi$-isoperimetric sets. As a corollary of Theorem~\ref{thm:ccurves} we have the following \begin{corollary}\label{l:char1old} Let $\phi^*$ be of class $\mathrm{C}^2$ and let $E\subset \H^1$ be a $\phi$-isoperimetric set of class $\mathrm{C}^2$. Then $\mathcal{C}(E)$ consists of isolated points. Moreover, for every $p_0=(\xi_0,z_0)\in \mathcal{C}(E)$ and every $f$ such that $p_0\in \mathrm{gr}(f)\subset \partial E$, we have $\operatorname{rank}(JF(\xi_0))=2$. \end{corollary} \begin{proof} By Remark~\ref{rem:bdd}, we know that $\partial E$ is bounded. Therefore we exclude the possibility that $\mathcal{C}(\partial E)$ contains complete (unbounded) lifts of simple curves.
\end{proof} \begin{lemma}\label{l:ch2} Let $\phi^*$ be of class $\mathrm{C}^2$ and $E\subset \H^1$ be a $\phi$-isoperimetric set of class $\mathrm{C}^2$. Let $p_0\in \mathcal{C}(E)$. There exists $r>0$ such that for $p\in \partial E \cap B(p_0,r) $, $p\neq p_0$, the maximal horizontal lift of the $\phi$-circle in $\partial E$ through $p$ meets $p_0$. \end{lemma} \begin{proof} The surface $\partial E\cap B(p_0,r)$ is the $z$-graph of $f\in \mathrm{C}^2(D)$ and $p_0 = (\xi_0, f(\xi_0)) $ with $\mathcal{C}(f) \cap \{|\xi-\xi_0|<r\} = \{\xi_0\}$. Let $\Theta_\xi\subset D$ be the maximal $\phi$-circle (integral curve of $F^\perp$) passing through $\xi\in D\setminus\{\xi_0\}$. Notice that the radius of $\Theta_\xi$ does not depend on $\xi$, as it follows from Corollary~\ref{l:conto}. If $\xi_0\notin \Theta_\xi$, then the normal vector $\mathcal N_\xi =\nabla \phi^*(F)$ is continuously defined on $\Theta_\xi$. Assume that there exists a sequence of such $\xi$ with $\xi\to \xi_0$. By an elementary compactness argument it follows that there exists a $\phi$-circle $\Theta$ passing through $\xi_0$ and there exists a normal $\mathcal N$ that is continuously defined along $\Theta$ and, in particular, through $\xi_0$. Outside $\xi_0$ we have $\mathcal N = \nabla \phi^*(F)$. Let $b\in\mathbb{R}^2$ be the unit vector tangent to $\Theta$ at $\xi_0$. Then we have \[ F(\xi_0+ tb) = F(\xi_0) + t J F(\xi_0) b+ o(t) = t J F(\xi_0) b+ o(t), \] with $JF(\xi_0) b \neq 0$, because $JF(\xi_0)$ has rank $2$ by Lemma~\ref{l:char1}. Since $\nabla \phi^* (-v) = - \nabla \phi^*(v)$ for $v\in\mathbb{R}^2\setminus\{0\}$, it follows that \[ \begin{split} & \lim _{t\to 0^+} \nabla \phi^*(F(\xi_0+ tb)) = \nabla \phi^*(J F(\xi_0) b), \\ & \lim _{t\to 0^-} \nabla \phi^*(F(\xi_0+ tb)) = -\nabla \phi^*(J F(\xi_0) b). \end{split} \] This contradicts the continuity of $\mathcal N$ along $\Theta$ at $\xi_0$.
\end{proof} \section{Classification of $\phi$-isoperimetric sets of class $\mathrm{C}^2$}\label{s:proof} \subsection{Construction of $\phi$-bubbles} Let $\phi$ be a norm in $\mathbb{R}^2$ that we normalize by $\phi(1,0)=1$. For $\xi_0\in \mathbb{R}^2$ and $r>0$, $\phi$-circles are defined in \eqref{eq:qCircle} and we let the \emph{$\phi$-disk} of radius $r$ and center $\xi_0$ be \[ D_\phi(\xi_0,r) = \{\xi \in \mathbb{R}^2: \phi(\xi-\xi_0) <r \}. \] We also let $ C_\phi(r)= C_\phi(0,r)$, $C_\phi= C_\phi(1)$ and $D_\phi(r)= D_\phi(0,r)$, $D_\phi= D_\phi(1)$. The circle $C_\phi$ is a Lipschitz curve and we denote by $L=L_\phi > 0$ its Euclidean length. We parametrize $C_\phi$ by arc-length through $\kappa \in \mathrm{Lip} \big( [0,L];\mathbb{R}^2 \big)$ such that $\kappa([0,L]) = C_\phi$ with initial and end-point $\kappa(0) = \kappa(L) = (-1,0)$. We choose the anti-clockwise orientation and we extend $\kappa$ to the whole $\mathbb{R}$ by $L$-periodicity. Then we have $ \kappa \in \mathrm{Lip} (\mathbb{R};\mathbb{R}^2)$. The map $\xi : \mathbb{R}^2 \to \mathbb{R}^2$, $\xi(t,\tau) = \kappa(t) + \kappa(\tau)$, is in $\mathrm{Lip} (\mathbb{R}^2;\mathbb{R}^2)$. We restrict $\xi$ to the domain \[ D = \big\{ (t,\tau) \in \mathbb{R}^2: \tau \in [0,L], t \in [\tau +{L}/{2}, \tau +{3L}/{2}] \big\}. \] Notice that $\xi (\tau+{L}/{2}, \tau) = \xi (\tau + {3L}/{2}, \tau) = 0$ for any $\tau\in [0,L]$. We define the function $z\in \mathrm{Lip}(D)$, \begin{equation}\label{eq:Psi3} z(t,\tau) = \int _{\tau + {L}/{2}}^t \omega \big (\xi(s,\tau), \xi_s(s,\tau)\big) ds . \end{equation} The map $\Phi:D\to\mathbb{R}^3$ defined by $\Phi = (\xi,z)$ is Lipschitz continuous. Moreover, $\Phi$ is $\mathrm{C}^k$ if $\phi$ is $\mathrm{C}^k$. We define the Lipschitz surface $ \Sigma_\phi= \Phi (D) \subset \mathbb{R}^3 $ and call $S=\Phi (\tau + {L}/{2},\tau) = 0\in \Sigma_\phi$ the south pole of $\Sigma_\phi$ and $N=\Phi(\tau + {3L}/{2},\tau) = (0,0,z(\tau + {3L}/{2},\tau))$ the north pole. 
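The construction of $\Sigma_\phi$ is explicit enough to be checked numerically. The following sketch (not part of the paper; function names and discretization parameters are ad-hoc choices) discretizes \eqref{eq:Psi3} for the Euclidean norm, for which $L=2\pi$ and $\kappa(t)=(-\cos t,-\sin t)$ is the anti-clockwise arc-length parametrization, assuming the normalization $\omega(a,b)=\tfrac12(a_1b_2-a_2b_1)$: the south pole sits at the origin for every $\tau$, and the height of the north pole equals the area $\pi$ of the unit disk, independently of $\tau$.

```python
import math

# Illustrative sanity check (not from the paper) of the Phi-bubble
# construction for the Euclidean norm: L = 2*pi, and
# kappa(t) = (-cos t, -sin t) parametrizes C_phi anti-clockwise by
# arc length with kappa(0) = (-1, 0).
# Assumed normalization: omega(a, b) = (a1*b2 - a2*b1) / 2.

L = 2 * math.pi

def kappa(t):
    return (-math.cos(t), -math.sin(t))

def kappa_dot(t):
    return (math.sin(t), -math.cos(t))

def omega(a, b):
    return 0.5 * (a[0] * b[1] - a[1] * b[0])

def z_value(t, tau, n=20000):
    # z(t, tau) = int_{tau + L/2}^{t} omega(xi(s, tau), xi_s(s, tau)) ds,
    # approximated by the composite midpoint rule.
    a = tau + L / 2
    h = (t - a) / n
    total = 0.0
    for k in range(n):
        s = a + (k + 0.5) * h
        xi = (kappa(s)[0] + kappa(tau)[0], kappa(s)[1] + kappa(tau)[1])
        total += omega(xi, kappa_dot(s)) * h
    return total

for tau in (0.0, 0.7, 2.1):
    # South pole: xi(tau + L/2, tau) = kappa(tau + L/2) + kappa(tau) = 0.
    x = kappa(tau + L / 2)[0] + kappa(tau)[0]
    y = kappa(tau + L / 2)[1] + kappa(tau)[1]
    assert abs(x) < 1e-12 and abs(y) < 1e-12
    # North pole: z(tau + 3L/2, tau) = area of the unit disk = pi.
    assert abs(z_value(tau + 3 * L / 2, tau) - math.pi) < 1e-9
```

For a non-Euclidean smooth norm the same discretization applies once an arc-length parametrization of $C_\phi$ is available; only the height of the north pole changes, to twice the half-area of $D_\phi$.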
We call the bounded region $E_\phi \subset \mathbb{R}^3$ enclosed by $\Sigma_\phi$ the $\phi$-bubble. $E_\phi$ is a topological ball and it is the candidate solution to the $\phi$-isoperimetric problem. When $\phi$ is the Euclidean norm in the plane, the set $E_\phi$ is the well-known Pansu's ball. \subsection{Classification of $\phi$-isoperimetric sets of class $\mathrm{C}^2$}\label{s:proof2} We are ready to prove the main theorem of the paper. \begin{proof}[Proof of Theorem~\ref{thmi:class}] The set $E$ is bounded and connected, by Remark \ref{rem:bdd}. We may also assume that it is open. It follows from Corollary~\ref{l:conto} (and from the analogous result for $x$-graphs and $y$-graphs based on Remark~\ref{rem:x-graphs2}) that, out of the characteristic set $\mathcal{C}(E)$, the surface $\partial E$ is foliated by horizontal lifts of $\phi$-circles. Then $\mathcal{C}(E)$ contains at least one point, since otherwise, $\partial E$ would contain an unbounded curve, contradicting the boundedness of $E$. Let $f\in \mathrm{C}^2 (D)$, with $D\subset \mathbb{R}^2$ open, be a maximal function such that $\mathrm{gr}(f)\subset \partial E$ and $\mathcal C(f)\neq \emptyset$. We may assume that $0\in \mathcal C(f)$, $f(0)=0$ and that $E$ lies above the graph of $f$ near $0$. Around the characteristic point $0$, the function $f$ must have the structure described in Lemma \ref{l:ch2}. It follows that, up to a dilation, we have $\mathrm{gr}(f)\subset \partial E_\phi$. The maximal domain for $f$ must be $D= D_\phi(2)$. Otherwise, at each point $\xi\in \partial D\setminus \partial D_\phi(2)$ the space $T_{(\xi,f(\xi))}\partial E=T_{(\xi,f(\xi))}\partial E_\phi$ is not vertical, contradicting the maximality of $D$. This shows that the graph of $f$ is the `lower hemisphere' of $\partial E_\phi$. Up to extending $f$ by continuity to $\partial D$, we have $(\xi,f(\xi))\notin\mathcal{C}(E)$ for each $\xi\in\partial D$. 
Hence there exists a $\phi$-circle passing through $0$ whose horizontal lift stays in $\partial E$ and passes through $(\xi,f(\xi))$. The collection of all the maximal extensions of such horizontal lifts completes the upper hemisphere of $\partial E_\phi$, thus implying that $\partial E_\phi\subset \partial E$. Moreover, since $\partial E$ is $\mathrm{C}^2$, we deduce that $\partial E_\phi$ is a connected component of $\partial E$. In conclusion we have proved that $\partial E$ is a finite union of boundaries of $\phi$-bubbles having the same curvature. Since $E$ is connected, this concludes the proof. \end{proof} In general, $\phi$-bubbles are not of class $\mathrm{C}^2$, nor even of class $\mathrm{C}^1$, e.g., in the case of a crystalline norm. Even when $\phi$ is regular, there may be a loss of regularity at the poles of $E_\phi$. \subsection{Regularity of $\phi$-bubbles}\label{ss:regE} We first show that $\phi$-bubbles have the same regularity as $\phi$ outside the poles. \begin{lemma}\label{l:sm} If $\phi$ is strictly convex and of class $\mathrm{C}^k$, for some $k\geq 1$, then the set $\Sigma _\phi \setminus\{ S,N\}$ is an embedded surface of class $\mathrm{C}^k$. \end{lemma} \begin{proof} If the Jacobian of $\Phi$ has rank 2 at the point $(t,\tau)\in D$, then $\Sigma_\phi$ is an embedded surface of class $\mathrm{C}^k$ around the point $\Phi(t,\tau)$. A sufficient condition for this is $ \det J\xi (t,\tau) \neq 0$. The Jacobian of $\xi: D\to\mathbb{R}^2$ satisfies \begin{equation*}\label{eq:JPsi} \det J\xi (t,\tau) = 0\quad \textrm{if and only if}\quad \dot{\kappa}(t) = \pm \dot{\kappa}(\tau). \end{equation*} The case $\dot{\kappa}(t) = -\dot{\kappa}(\tau)$ is equivalent to ${\kappa}(t) = -{\kappa}(\tau)$, by the strict convexity of the norm. This is in turn equivalent to $t=\tau+L/2$ or $t = \tau+3L/2$. In the former case we have $\Phi(t,\tau) = S$, in the latter $\Phi(t,\tau) = N$.
We are left to consider the case $\dot{\kappa}(t) = \dot{\kappa}(\tau)$. By strict convexity of $\phi$, this implies ${\kappa}(t) = {\kappa}(\tau)$, which is equivalent to $t =\tau + L$. In this case, we have $\xi(t,\tau) = 2 \kappa(\tau) \in C_\phi(2)$. The point $\Phi(t,\tau)$ is on the `equator' of $\Sigma_\phi$. We study the regularity of $\Sigma_\phi$ at points $\Phi(\tau+L,\tau)$. The height $z (\tau+L,\tau)$ does not depend on $\tau$ because it is half the area of the disk $D_\phi$. It follows that $ 0 = \partial_\tau \big ( z (\tau+L,\tau) \big) = z_t (\tau+L,\tau) + z_\tau (\tau+L,\tau) $ and this implies that \begin{equation}\label{eq:dPsi3t} z_t (\tau+L,\tau) \neq z_\tau (\tau+L,\tau), \end{equation} once we show that the left-hand side does not vanish. Indeed, differentiating \eqref{eq:Psi3} we obtain \begin{equation*} z_t (\tau+L ,\tau) = 2 \omega\big( \kappa(\tau),\dot\kappa(\tau) \big) \neq 0, \end{equation*} because $\kappa(\tau)$ and $\dot{\kappa}(\tau)$ are not proportional. From $\dot{\kappa}(\tau+L) = \dot{\kappa}(\tau) \neq 0$ and \eqref{eq:dPsi3t}, we deduce that the Jacobian matrix $ J\Phi (\tau+L,\tau)$ has rank 2. This shows that $\Sigma_\phi$ is of class $\mathrm{C}^k$ also around the `equator'. \end{proof} The regularity of $\Sigma_\phi$ at the poles is much more subtle. We study the problem in Theorem~\ref{iFlynt}, whose proof is presented below. \begin{proof}[Proof of Theorem~\ref{iFlynt}] We study the regularity at the south pole. By Lemma \ref{l:sm} there exists a function $ f \in \mathrm{C}^2(D_\phi(2)\setminus\{0\})$ such that the graph of $f$ is the lower hemisphere of $\Sigma_\phi$ without the south pole. We shall show that $f$ can be extended to a function $f\in \mathrm{C}^2(D_\phi(2))$ satisfying $\nabla f(0) =0$ and $\mathcal{H}\! f(0) =0$. Here and in the sequel, we denote by $\mathcal{H}\! f$ the Hessian matrix of $f$.
Differentiating the identity \[ z(t,\tau) = f(\xi(t,\tau)),\qquad \tau \in [0,L],\ t \in (\tau +{L}/{2}, \tau +L), \] we find the identities \begin{align} \label{E3} z_t(t,\tau) &= \langle \nabla f, \dot{\k}(t) \rangle, \\ \label{E3p}z_{\tau}(t,\tau) &=\langle \nabla f, \dot{\k}(\tau) \rangle, \\ \label{E41} z_{tt}(t,\tau) &= \langle \mathcal{H}\!f \dot{\k}(t), \dot{\k}(t) \rangle + \langle \nabla f, \ddot{\k}(t) \rangle, \\ \label{E42} z_{\tau \tau}(t,\tau) &= \langle \mathcal{H}\!f \dot{\k}(\tau), \dot{\k}(\tau) \rangle + \langle \nabla f, \ddot{\k}(\tau) \rangle, \\ \label{E43} z_{t \tau}(t,\tau) &= z_{\tau t} (t,\tau)= \langle \mathcal{H}\!f \dot{\k}(t), \dot{\k}(\tau) \rangle, \end{align} where, above and in the following, $\mathcal{ H}\! f$ and $\nabla f$ are evaluated at $\xi(t,\tau)$. On the other hand, from \eqref{eq:Psi3} we compute the derivatives \begin{align} \label{E2} z_t(t,\tau) &= \omega (\k(t)+\k(\tau), \dot{\k}(t)), \\ z_{\tau}(t,\tau) \label{E2p} &= \omega(\dot{\k}(\tau), \k(t) + \k(\tau)), \\ \label{E201} z_{tt}(t,\tau) &= \omega( \k(t) + \k(\tau),\ddot{\k}(t)), \\ \label{E202} z_{\tau \tau}(t,\tau) &=\omega (\ddot{\k}(\tau), \k(t) + \k(\tau) ), \\ \label{E203} z_{t\tau}(t,\tau) &= \omega ( \dot{\k}(\tau), \dot{\k}(t)). \end{align} In formulas \eqref{E3}--\eqref{E203}, we will replace $\k(\tau),\dot\k(\tau),$ and $\ddot\k (\tau)$ with their Taylor expansions at the point $t-L/2$. By assumption, the arc-length parameterization of the circle $C_\phi$ satisfies $ \kappa \in \mathrm{C}^4(\mathbb{R};\mathbb{R}^2)$ and \begin{equation}\label{eq:curv} \ddot\k(t)=\lambda(t)\dot\k(t)^\perp,\quad t\in [0,L], \end{equation} for a function (the curvature) $\lambda \in \mathrm{C}^2(\mathbb{R})$ that is $L$-periodic and strictly positive. So there exist $0<\lambda_0\leq \Lambda _0<\infty$ such that \begin{equation*}\label{eq:bounds} 0 < \lambda_0 \leq \lambda \leq \Lambda_0 ,\quad |\dot{\lambda}| \leq \Lambda_0 ,\quad |\ddot{\lambda}| \leq \Lambda_0 . 
\end{equation*} The third and fourth derivatives of $\kappa$ have the representation: \begin{equation} \label{eq:dcurv} \k^{(3)} = \dot{\lambda}\dot{\k}^{\perp} - \lambda^2 \dot{\k} \qquad\textrm{and} \qquad \k^{(4)} = (\ddot{\lambda} - \lambda^3) \dot{\k}^{\perp} - 3\lambda \dot{\lambda} \dot{\k}. \end{equation} In the following, we let $\delta = t-\tau -L/2>0$. The third order Taylor expansion for $\k(\tau)$ at $ t- {L}/{2} $ is \begin{align} \k(\tau) &= \k(t- {L}/{2}) - \delta \dot{\k}(t- {L}/{2}) + \frac {\delta^2 }{ 2} \ddot{\k}(t-{L}/{2}) - \frac {\delta^3 }{ 6} {\k}^{(3)}(t-{L}/{2}) + o(\delta^3)\nonumber \\ &=-\k(t) + \delta \dot{\k}(t) - \frac{\delta^2 }{2}\ddot{\k}(t) + \frac{\delta^3}{6} {\k}^{(3)}(t) + o(\delta^3)\nonumber\\ &= -\k(t) + \delta \dot{\k}(t) -\frac{\delta^2}{2}\lambda(t) \dot{\k}(t)^{\perp} + \frac{\delta^3}{6} (\dot{\lambda}(t)\dot{\k}(t)^{\perp}-\lambda(t)^2 \dot{\k}(t)) + o(\delta^3).\label{E5} \end{align} Hereafter, when not explicit, the functions $\kappa$ and $\lambda$ and their derivatives are evaluated at $t$. The little-$o$ remainders are uniform with respect to the base point $t-L/2$. By a similar computation, using \eqref{eq:dcurv} we also obtain \begin{align} \label{E9} \dot{\k}(\tau) &= -\dot{\k} + \delta\lambda \dot{\k}^{\perp} -\frac{\delta^2}{2}\big( \dot{\lambda}\dot{\k}^{\perp}-\lambda^2\dot{\k} \big) {+\frac{\delta^3}{6}(\ddot\lambda-\lambda^3)\dot\k^\perp-\frac{\delta^3}{2}\lambda\dot\lambda\dot\k + o(\delta^3)},\\ \label{E10} \ddot{\k}(\tau) &= -\lambda \dot{\k}^{\perp} + \delta (\dot{\lambda} \dot{\k}^{\perp} - \lambda^2 \dot{\k}) -\frac{\delta^2}{2} \big( (\ddot{\lambda} - \lambda^3) \dot{\k}^{\perp} - 3\lambda \dot{\lambda} \dot{\k} \big) + o(\delta^2). \end{align} We are ready to start the proof. We will use the identities \begin{equation}\label{eq:ome} \omega(\dot\k,\dot\k^\perp)=-\omega(\dot\k^\perp,\dot\k)=\frac{1}{2}. \end{equation} Recall our notation $\delta= t-\tau -L/2$. 
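For the reader's convenience, we note that the second equality in \eqref{E5} (and the analogous steps in \eqref{E9} and \eqref{E10}) rests on the central symmetry of $C_\phi$: since $\kappa(t+L/2)=-\kappa(t)$ for every $t\in\mathbb{R}$, differentiating gives

```latex
\kappa(t-L/2)=-\kappa(t),\qquad
\dot\kappa(t-L/2)=-\dot\kappa(t),\qquad
\ddot\kappa(t-L/2)=-\ddot\kappa(t),\qquad
\kappa^{(3)}(t-L/2)=-\kappa^{(3)}(t),
```

so that every derivative of $\kappa$ evaluated at the base point $t-L/2$ can be rewritten at $t$ at the sole cost of a sign change. In particular, by $\ddot\kappa=\lambda\dot\kappa^\perp$, the curvature satisfies $\lambda(t-L/2)=\lambda(t)$.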
\medskip \textit{Step 1.} We claim that there exists $C > 0$ such that \begin{equation}\label{E19} |\nabla f(\xi(t,\tau))| \leq C \delta ^2 \quad \textrm{for all } \tau \in [0,L],\ t \in (\tau +{L}/{2}, \tau +L). \end{equation} This estimate implies that $f$ can be extended to a function $f\in \mathrm{C}^1(D_\phi(2))$ satisfying $\nabla f(0) =0$. \medskip Inserting \eqref{E5} and \eqref{E9} into \eqref{E2} and \eqref{E2p} yields \begin{align} \label{E11} z_t(t,\tau) &= \omega \big( (-\tfrac{\delta^2}{2} \lambda + \tfrac{\delta^3}{6} \dot{\lambda})\dot{\k}(t)^{\perp}, \dot{\k}(t) \big) = \frac{\delta^2}{4} \lambda - \frac{\delta^3}{12} \dot{\lambda}+ o(\delta^3),\\ \nonumber z_{\tau}(t,\tau) &= \omega \big( (-1+\tfrac{\delta^2}2\lambda^2 ) \dot{\k}, (-\tfrac{\delta^2}{2}\lambda + \tfrac{\delta^3}{6}\dot{\lambda} ) \dot{\k}^{\perp} \big) + \omega \big( (\delta\lambda - \tfrac{\delta^2}2\dot{\lambda} ) \dot{\k}^{\perp}, (\delta - \tfrac{\delta^3}{6}\lambda^2 ) \dot{\k} \big) + o(\delta^3) \\ \label{E12} &= -\frac{\delta^2}{4} \lambda +\frac{ \delta^3}{ {6}} \dot{\lambda} + o(\delta^3). \end{align} Now, plugging \eqref{E9} and \eqref{E12} into \eqref{E3p} and then using \eqref{E3} and \eqref{E11} we obtain \begin{equation*} \label{E13} \begin{split} -\frac{\delta^2}{4} \lambda +\frac{\delta^3}{ {6}}\dot\lambda& =\langle \nabla f, \dot{\k}(t)\rangle\left(-1+\tfrac{\delta^2}{2}\lambda^2 {-\tfrac{\delta^3}{2}\lambda\dot\lambda}\right) \\ &\quad +\langle \nabla f, \dot{\k}(t)^{\perp}\rangle\left(\delta\lambda -\tfrac{\delta^2}{2} \dot{\lambda} {+\tfrac{\delta^3}{6}(\ddot\lambda-\lambda^3)}\right) + o(\delta^3) \\ &=-\frac{\delta^2}{4} \lambda+ \frac{\delta^3}{12} \dot{\lambda}+\langle \nabla f, \dot{\k}(t)^{\perp}\rangle\left(\delta\lambda -\tfrac{\delta^2}{2}\dot{\lambda} {+\tfrac{\delta^3}{6}(\ddot\lambda-\lambda^3)}\right) + o(\delta^3). 
\end{split} \end{equation*} Dividing the last equation by $\lambda \delta>0$, we get \begin{equation}\label{E16} \langle \nabla f,\dot{\k}^{\perp} \rangle =\frac{\delta^2}{12}\frac{\dot{\lambda}}{ \lambda} + o(\delta^2). \end{equation} Thus, there exists $C>0$ such that \begin{equation*}\label{E17} \big| \langle \nabla f, \dot{\k}^{\perp} \rangle \big| \leq C \delta^2. \end{equation*} On the other hand, by \eqref{E3} and \eqref{E11}, possibly changing $C>0$ we also obtain \begin{equation*}\label{E18} \big| \langle \nabla f, \dot{\k} \rangle \big| = |z_t(t,\tau)| \leq C \delta^2 \end{equation*} thus yielding \eqref{E19}. \medskip \textit{ Step 2.} We claim that the norm of the Hessian matrix $\mathcal H\! f$ satisfies \begin{equation}\label{eq:claim2} |\mathcal H\! f(\xi(t,\tau))|= o(1)\quad \textrm{for } \tau \in [0,L],\ t \in (\tau +{L}/{2}, \tau +L), \end{equation} where $o(1)\to 0$ as $\delta = t-\tau -L/2\to0$. This implies that $f $ can be extended to a function $f\in \mathrm{C}^2(D_\phi(2))$ satisfying $ \mathcal H\! f (0) = 0$. 
\medskip Plugging \eqref{E201} and \eqref{E5} into \eqref{E41}, and then using \eqref{eq:curv}, \eqref{eq:ome}, and \eqref{E16} yields \begin{align} \langle \mathcal{H}\!f \dot{\k}, \dot{\k} \rangle & = z_{tt}(t,\tau) - \langle \nabla f, \ddot{\k} \rangle = \omega \big( \k(t) + \k(\tau), \ddot{\k} \big) - \langle \nabla f, \ddot{\k} \rangle \nonumber \\ &= \omega \big( (\delta -\tfrac{\delta^3}{6}\lambda^2)\dot{\k} +(- \tfrac{\delta^2}{2}\lambda + \tfrac{\delta^3}{6} \dot{\lambda}) \dot{\k}^{\perp}, \lambda \dot{\k}^{\perp} \big) - \langle \nabla f, \lambda \dot{\k}^{\perp} \rangle + o(\delta^3)\nonumber \\ & = \frac{ \delta}{2}\lambda - \frac{\delta^2}{ {12}} \dot{\lambda} + o(\delta^2).\label{eq:H11} \end{align} On the other hand, plugging \eqref{E9} into \eqref{E43}, and then using \eqref{eq:H11} we get \[ \begin{split} z_{t\tau}(t,\tau) &= \langle \mathcal{H}\!f \dot{\k}(t), \dot{\k}(t) \rangle \left( -1 + \frac{\delta^2 }{2}\lambda^2 \right) + \langle \mathcal{H}\!f \dot{\k}, \dot{\k}^{\perp} \rangle \left( \delta\lambda - \frac{\delta^2}{2} \dot{\lambda} \right) + o(\delta^2)\\ &=-\frac{ \delta}{2}\lambda + \frac{\delta^2}{ {12}} \dot{\lambda} + \langle \mathcal{H}\!f \dot{\k}, \dot{\k}^{\perp} \rangle \left( \delta\lambda - \frac{\delta^2}{2} \dot{\lambda} \right) + o(\delta^2), \end{split} \] while from \eqref{E203}, \eqref{E9} and \eqref{eq:ome} we get \[ \begin{split} z_{t\tau}(t,\tau) &=- \frac{1}{2} \left( \delta\lambda - \frac{\delta^2}{2}\dot{\lambda} \right) + o(\delta^2).
\end{split} \] Therefore we obtain the identity \[ \begin{split} \left(\delta \lambda - \frac{\delta^2}{2}\dot{\lambda} \right) \langle \mathcal{H}\!f \dot{\k}, \dot{\k}^{\perp} \rangle &= z_{t\tau}(t,\tau) +\frac{ \delta}{2}\lambda - \frac{\delta^2}{ {12}} \dot{\lambda} + o(\delta^2) \\ &=- \frac{1}{2} \big( \delta\lambda - \frac{\delta^2}{2}\dot{\lambda} \big) +\frac{ \delta}{2}\lambda - \frac{\delta^2}{ {12}} \dot{\lambda} + o(\delta^2)\\& = {\frac{\delta^2}{6} \dot{\lambda}}+o(\delta^2), \end{split} \] and dividing by $\lambda \delta>0$ we get \begin{equation}\label{eq:H12} \langle \mathcal{H}\!f \dot{\k}, \dot{\k}^{\perp} \rangle = {\frac{\delta}{6}\frac{\dot{\lambda}}{\lambda} }+o(\delta). \end{equation} By symmetry of the Hessian matrix, we also have \begin{equation}\label{eq:H21} \langle \mathcal{H}\!f \dot{\k}^{\perp}, \dot{\k} \rangle = {\frac{\delta}{6}\frac{\dot{\lambda}}{\lambda}}+o(\delta). \end{equation} We are left to estimate $\langle \mathcal{H}\!f \dot{\k}^{\perp}, \dot{\k} ^\perp \rangle$. By \eqref{E10}, \eqref{E16}, \eqref{E3}, and \eqref{E11} we obtain \begin{align} \langle\nabla f,\ddot\k(\tau)\rangle&=(-\lambda + \delta \dot{\lambda} -\tfrac{\delta^2}{2} (\ddot{\lambda} - \lambda^3)) \langle\nabla f, \dot{\k}^{\perp}\rangle +(- \delta \lambda^2 +\tfrac{3}{2}\delta^2\lambda \dot{\lambda}) \langle\nabla f, \dot{\k}\rangle + o(\delta^2)\nonumber\\ &=(-\lambda + \delta \dot{\lambda} -\tfrac{\delta^2}{2} (\ddot{\lambda} - \lambda^3)) \tfrac{\delta^2}{12}\tfrac{\dot{\lambda}}{ \lambda} +(- \delta \lambda^2 +\tfrac{3}{2}\delta^2\lambda \dot{\lambda}) \tfrac{\delta^2}{4} \lambda + o(\delta^2)\nonumber\\ &=-\frac{\delta^2}{ {12}} \dot{\lambda} +o(\delta^2).\label{E112} \end{align} On the other hand, by \eqref{E9}, \eqref{eq:H11}, \eqref{eq:H12}, \eqref{eq:H21} we have \begin{align} \langle \mathcal{H}\!f \dot{\k}(\tau), \dot{\k}(\tau) \rangle & = (-1 +\tfrac{\delta^2}{2}\lambda^2)^2\langle \mathcal H\!
f \dot{\k}, \dot{\k}\rangle +(\delta\lambda -\tfrac{\delta^2}{2}\dot{\lambda})^2\langle \mathcal H\! f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle\nonumber\\ &\qquad +2 (-1 +\tfrac{\delta^2}{2}\lambda^2)(\delta\lambda -\tfrac{\delta^2}{2}\dot{\lambda})\langle\mathcal H\! f \dot{\k},\dot{\k}^{\perp}\rangle+o(\delta^2)\nonumber\\ &= (-1 +\tfrac{\delta^2}{2}\lambda^2)^2(\tfrac{\delta}{2}\lambda - \tfrac{\delta^2}{ {12}} \dot{\lambda} + o(\delta^2)) +(\delta\lambda -\tfrac{\delta^2}{2}\dot{\lambda})^2\langle \mathcal H\! f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle\nonumber\\ &\qquad +2 (-1 +\tfrac{\delta^2}{2}\lambda^2)(\delta\lambda -\tfrac{\delta^2}{2}\dot{\lambda}) ( {\tfrac{\delta}{6}\tfrac{\dot{\lambda}}{\lambda} }+o(\delta))\nonumber\\ &=\frac{\delta}{2}\lambda -\frac{5}{12} \delta^2\dot\lambda+\delta^2\lambda^2\langle \mathcal H\! f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle+o(\delta^2).\label{E114} \end{align} Plugging \eqref{E112} and \eqref{E114} into \eqref{E42} we get \begin{equation}\label{eq:comp1} \begin{split} z_{\tau \tau}(t,\tau) &= \frac{ \delta}{2}\lambda-\frac{5}{12}\delta^2\dot\lambda+\delta^2\lambda^2\langle \mathcal H\! f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle-\frac{\delta^2}{ {12}} \dot{\lambda} +o(\delta^2) \\&=\frac{\delta}{2}\lambda -\frac{\delta^2}{ {2}}\dot\lambda+\delta^2\lambda^2\langle \mathcal H\! f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle +o(\delta^2). \end{split} \end{equation} Moreover, plugging \eqref{E5} and \eqref{E10} into \eqref{E202}, and using \eqref{eq:ome}, we get \begin{equation}\label{eq:comp2} \begin{split} z_{\tau \tau} (t,\tau) &= \omega \big( -\delta\lambda^2 \dot{\k} +(-\lambda + \delta\dot{\lambda})\dot{\k}^{\perp}, \delta \dot{\k} -\tfrac{\delta^2}{2}\lambda \dot{\k}^{\perp} \big)+o(\delta^2) =-\frac{\delta}{2}(-\lambda + \delta\dot{\lambda})+o(\delta^2). \end{split} \end{equation} Comparing \eqref{eq:comp1} and \eqref{eq:comp2} we therefore obtain \[ \delta^2\lambda^2\langle \mathcal H\!
f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle=-\frac{ \delta}{2}\lambda+\frac{\delta^2}{ {2}}\dot\lambda-\frac{\delta}{2}(-\lambda + \delta\dot{\lambda})+o(\delta^2)=o(\delta^2). \] This yields $\langle \mathcal H \! f\dot{\k}^{\perp},\dot{\k}^{\perp}\rangle=o(1)$ as $\delta\to 0$. Together with \eqref{eq:H11}, \eqref{eq:H12}, and \eqref{eq:H21} this implies \eqref{eq:claim2} and concludes the proof of the theorem. \end{proof} \section{The isoperimetric problem for general norms}\label{s:cr} In the case of crystalline norms, the first order necessary conditions satisfied by an isoperimetric set are not sufficient to reconstruct its structure, even assuming sufficient regularity. In this section we show that the $\phi$-isoperimetric problem for a general norm -- in particular for a crystalline norm -- can be approximated by the isoperimetric problem for smooth norms. By Theorem \ref{iFlynt}, we know that if $\phi$ is of class $\mathrm{C}^\infty_+$ then the $\phi$-bubble $E_\phi$ is of class $\mathrm{C}^2$. In this section, we show that the validity of Conjecture~\ref{iBenbow} implies the $\phi$-isoperimetric property for the $\phi$-bubble of any (crystalline) norm. \subsection{Smooth approximation of norms in the plane} We start with the mollification of a norm. \begin{proposition}\label{p:approx} Let $\phi$ be a norm in $\mathbb{R}^2$. Then, for any $\varepsilon>0$ there exists a norm $\phi_\varepsilon$ of class $\mathrm{C}^\infty_+$ with dual norm of class $\mathrm{C}^\infty$, such that for all $\xi\in\mathbb{R}^2$ we have \begin{equation} \label{piff} (1-\eta( \epsilon) ) \phi_\epsilon(\xi)\leq \phi(\xi)\leq (1+\eta( \epsilon)) \phi_\epsilon(\xi), \end{equation} and $\eta( \epsilon) \to0$ as $\epsilon\to0^+$. 
\end{proposition} \begin{proof} For $\varepsilon>0$, we introduce the smooth mollifiers $\varrho_\varepsilon:\mathbb{R}\to \mathbb{R}$, supported in $[-\varepsilon\pi,\varepsilon\pi]$ and defined by \[ \varrho_\varepsilon(t)= \begin{cases} c_\varepsilon \exp\Big( {\frac{\pi^2\epsilon^2}{t^2-\pi^2\epsilon^2}}\Big) &\text{if }|t|<\pi\epsilon,\\ 0&\text{if }|t|\geq \pi\epsilon, \end{cases} \] where $c_\varepsilon$ is chosen in such a way that $\int_\mathbb{R}\varrho_\varepsilon(t)\;dt=1$. Following \cite{DWM99,MBC06}, we define the function $\psi_\varepsilon:\mathbb{R}^2\to[0,\infty)$ by letting \[ \psi_\varepsilon(\xi):=\int_{\mathbb{R}}\varrho_\varepsilon(t)\phi(R_t \xi)\;dt, \] where $R_t$ denotes the anti-clockwise rotation matrix of angle $t$. The function $\psi_\varepsilon$ is a $\mathrm{C}^\infty$ norm. On the circle $\mathbb S^1=\{\xi\in\mathbb{R}^2:|\xi|=1\}$, the norms $\psi_\epsilon$ converge uniformly to $\phi$ as $\epsilon\to0^+$. So our claim \eqref{piff} {with $\eta(\epsilon)\to 0$} holds with $\psi_\epsilon$ replacing $\phi_\epsilon$, by the positive $1$-homogeneity of norms. We let $\phi_\varepsilon:\mathbb{R}^2\to [0,\infty)$ be defined by \[ \phi_\varepsilon(\xi):=\sqrt{\psi_\varepsilon(\xi)^2+\varepsilon|\xi|^2},\quad \xi\in\mathbb{R}^2. \] This is a $\mathrm{C}^\infty$ norm in $\mathbb{R}^2$ and \eqref{piff} is satisfied with $\eta(\epsilon)\to 0$. The {unit} $\phi_\epsilon$-circle {centered at the origin} is the $0$-level set of the function \[ F_\epsilon (\xi)=\psi_\varepsilon^2(\xi)+\varepsilon|\xi|^2-1, \quad \xi\in\mathbb{R}^2. \] Since the Hessian matrix of the squared Euclidean norm is proportional to the identity matrix $I_2$ and $\psi_\varepsilon^2$ is convex, we have that $\mathcal H\! F_\epsilon \geq 2\varepsilon I_2$ in the sense of matrices.
Then the curvature $\lambda_\epsilon$ of a {unit} $\phi_\epsilon$-circle satisfies \begin{equation*} \label{eq:kurv}\begin{split} \lambda_\epsilon &=\frac{\langle \mathcal H F_\epsilon \nabla F_\epsilon ^\perp,\nabla F_\epsilon ^\perp\rangle}{|\nabla F_\epsilon |^3}\geq \frac{ 2\varepsilon}{|\nabla F_\epsilon |}>0. \end{split} \end{equation*} The proof that the dual norm {of a norm of class $\mathrm{C}^\infty_+$ is itself} of class $\mathrm{C}^\infty$ is standard and {we omit it}. \end{proof} \subsection{Crystalline $\phi$-bubbles as limits of smooth isoperimetric sets} Let $\phi$ be any norm in $\mathbb{R}^2$ and let $\{\phi_\varepsilon\}_{\varepsilon>0}$ be the smooth approximating norms found in Proposition~\ref{p:approx}. Given a Lebesgue measurable set $F\subset \mathbb{R}^2$, from \eqref{piff} and from the definition of perimeter (Definition~\ref{d:per}), we have \begin{equation} \label{Jim} (1-\eta(\epsilon) ) \P_{\phi}(F) \leq \P_{\phi_\epsilon}(F) \leq (1+\eta(\epsilon) ) \P_{\phi}(F) . \end{equation} The $\phi_\epsilon$-circles $C_{\phi_\epsilon}$ converge in Hausdorff distance to the circle $C_\phi$. This implies that the $\phi_\epsilon$-bubbles $E_{\phi_\epsilon}$ converge in the Hausdorff distance to the limit bubble $E_\phi$. This in turn implies the convergence in $L^1(\mathbb H^1)$, namely, \begin{equation}\label{tesoro} \lim_{\epsilon \to0^+} \mathcal L^3( E_{\phi_\epsilon}\Delta E_\phi)=0, \end{equation} where $\Delta$ denotes the symmetric difference of sets. \begin{proof}[Proof of Theorem~\ref{thmi:appr}] Let $F\subset \mathbb H^1 $ be any Lebesgue measurable set with $0<\mathcal L^3(F) <\infty$. Assuming the validity of Conjecture \ref{c:isop-lm}, $E_{\phi_\epsilon}$ is isoperimetric for any $\epsilon>0$. 
So, using \eqref{Jim} twice, we find \[ \operatorname{Isop_{\phi}}(F) \geq \frac{ \operatorname{Isop_{\phi_\epsilon}}(F)}{ 1+\eta(\epsilon)} \geq \frac{\operatorname{Isop_{\phi_\epsilon}}(E_{\phi_\epsilon}) }{ 1+\eta(\epsilon)} \geq \frac{1-\eta(\epsilon) }{ 1+\eta(\epsilon)} \operatorname{Isop_{\phi}}(E_{\phi_\epsilon}). \] By the lower semicontinuity of the perimeter with respect to the $L^1$ convergence and from \eqref{tesoro}, we deduce that \[ \liminf_{\epsilon\to0^+} \operatorname{Isop_{\phi }}(E_{\phi_\epsilon})\geq \operatorname{Isop_{\phi}}(E_{\phi }), \] and using the fact that $\eta(\epsilon)\to 0$ we conclude that $ \operatorname{Isop_{\phi}}(F) \geq \operatorname{Isop_{\phi}}(E_\phi). $ \end{proof}
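The mollification in Proposition~\ref{p:approx} is concrete enough to test numerically. The following Python sketch is an illustration added here, not part of the original argument: taking the crystalline norm $\phi(\xi)=|\xi_1|+|\xi_2|$ as a test case, it evaluates $\psi_\varepsilon(\xi)=\int\varrho_\varepsilon(t)\,\phi(R_t\xi)\,dt$ on $\mathbb S^1$ by quadrature and estimates the constant $\eta(\varepsilon)$ in \eqref{piff}, checking that it decreases as $\varepsilon\to0^+$.

```python
import numpy as np

def phi(x, y):
    # the crystalline l^1 norm, used here as the test norm to be mollified
    return np.abs(x) + np.abs(y)

def eta(eps, nt=4001, ntheta=400):
    """Estimate sup_{S^1} |phi/psi_eps - 1| for the rotational mollification."""
    a = np.pi * eps
    t = np.linspace(-a, a, nt)
    w = np.zeros_like(t)
    inside = np.abs(t) < a
    w[inside] = np.exp(a**2 / (t[inside]**2 - a**2))
    w /= np.sum(w) * (t[1] - t[0])          # normalisation constant c_eps
    theta = np.linspace(0.0, 2 * np.pi, ntheta)
    # psi_eps(xi) = int rho_eps(t) phi(R_t xi) dt, for xi = (cos th, sin th)
    vals = phi(np.cos(theta[:, None] + t), np.sin(theta[:, None] + t))
    psi = np.sum(w * vals, axis=1) * (t[1] - t[0])
    ratio = phi(np.cos(theta), np.sin(theta)) / psi
    return max(ratio.max() - 1.0, 1.0 - ratio.min())

print(eta(0.05), eta(0.01))   # eta shrinks as eps -> 0+
```

The deviation is largest at the corners of the unit $\ell^1$-ball, where the rotational average smooths the kink at scale $\varepsilon$; away from the corners the error is of higher order in $\varepsilon$.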
\section{Introduction} One of the major challenges in group cohomology is the computation of the cohomology of nilpotent groups. Such computations are important because general questions about modular group cohomology can often be reduced to questions that concern only the cohomology of its Sylow $p$-subgroup. The structure of the cohomology of $p$-groups can be quite complicated, but in the case when the cohomology ring is Cohen-Macaulay (i.e., when the depth equals the Krull dimension), the homological algebra of the representation theory has more orderly structural features. In the realm of Lie theory and modular representations of algebraic groups, the nilpotent restricted $p$-Lie algebras play a similar role to that of the $p$-groups in group representations. For example, an element in the restricted cohomology ring of a restricted Lie algebra is nilpotent if and only if its restriction to every nilpotent subalgebra is nilpotent. In this paper we address some basic questions on the structure of the cohomology rings for these algebras. Suppose that $({\mathfrak n},[p])$ is a nilpotent restricted $p$-Lie algebra and that $k$ is an algebraically closed field of characteristic $p>0$. The spectrum of the cohomology ring identifies with the restricted nullcone ${\mathcal N}_{1}({\mathfrak n})=\{x\in {\mathfrak n}:\ x^{[p]}=0\}$. If we assume that $p > 2$, then there is a spectral sequence \cite{FP} \[ E_2^{2i,j} = S^{2i}(\mathfrak n^*)^{(1)}\otimes \operatorname{H}\nolimits^{j}(\mathfrak n,k) \Rightarrow \operatorname{H}\nolimits^{2i+j}(u(\mathfrak n), k) \] where $S^{*}({\mathfrak n}^{*})^{(1)}$ is the Frobenius twist of the symmetric algebra on the dual of the underlying vector space of $\mathfrak n$, $\operatorname{H}\nolimits^{*}(\mathfrak n,k)$ is the ordinary Lie algebra cohomology of $\mathfrak n$, and $\operatorname{H}\nolimits^{*}(u(\mathfrak n), k)$ is the cohomology ring of the restricted enveloping algebra $u(\mathfrak n)$ of $\mathfrak n$.
There are many cases in which it is known that the spectral sequence collapses at the $E_2$ page, so that $E_2^{*,*}$ is isomorphic to the associated graded ring of $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$. In this situation, the cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is a free module over the symmetric algebra $S^{*}({\mathfrak n}^{*})^{(1)}$, and the cohomology ring is Cohen-Macaulay. In 1986, Friedlander and Parshall \cite{FP} showed this happens when $\mathfrak n$ is the nilpotent radical of the Borel subalgebra of the restricted Lie algebra of an algebraic group, provided $p>h$ where $h$ is the Coxeter number of the associated root system. Twenty-five years later, Drupieski, Ngo and the second author \cite{DNN} showed that a stronger result holds when $p\geq 2(h-1)$, that is, there is a ring isomorphism $\operatorname{H}\nolimits^*(u(\mathfrak n), k) \cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}(\mathfrak n,k)$. The investigations of this paper were originally inspired by a computer calculation by the first author which demonstrated that if $\mathfrak n$ is the Lie algebra of the nilpotent radical of a Borel subalgebra of a group of type $B_2$ and if $p=5$ (which is larger than $h$ but {\em not} larger than $2(h-1)$), then the isomorphism $\operatorname{H}\nolimits^*(u(\mathfrak n), k) \cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}(\mathfrak n,k)$ holds as modules over the symmetric algebra $S^{*}({\mathfrak n}^{*})^{(1)}$ (so that the cohomology ring is Cohen-Macaulay), but it does {\em not} hold as rings. A search for the reason for this phenomenon led to the discovery of a one-dimensional central extension of $\mathfrak n$ (with trivial $p$th power) whose cohomology ring is easily proved to be not Cohen-Macaulay. Indeed, this illustrates a general situation.
One of the theorems in the paper is that if $\operatorname{H}\nolimits^*(u(\mathfrak n), k) \cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}(\mathfrak n,k)$ holds as an isomorphism of rings, then when $\mathfrak n$ is replaced by a one-dimensional extension, there is the same isomorphism, though perhaps only as an isomorphism of $S^{*}({\mathfrak n}^{*})^{(1)}$-modules. In the paper, we present numerous examples of this phenomenon. Cohen-Macaulay rings are of general interest, in part, because they have very nice structural properties. For example, it can be shown that for any restricted Lie algebra, if its cohomology ring is Cohen-Macaulay, then the cohomology ring admits a formal Poincar\'e duality and its Poincar\'e series, as a rational polynomial, satisfies a functional equation \cite{BC1, BC2}. In the case of a $p$-nilpotent Lie algebra with vanishing $p$-power operation, we show that the cohomology ring is Cohen-Macaulay if and only if the spectral sequence given above collapses at the $E_2$-page. The paper is organized as follows. In the following section of the paper, we present preliminaries about cohomology rings and the definitions from commutative algebra. A proof that the cohomology ring is Cohen-Macaulay if and only if the spectral sequence collapses is given in Section 3. We also prove results which describe how the spectral sequence behaves under central extensions in that section. The next four sections are concerned with specific examples. In the case that $\mathfrak n$ is the nilpotent radical of a Borel subalgebra of $\mathfrak{sl}_3$ (i.e., type $A_2$) and the field has characteristic 3, the isomorphism $\operatorname{H}\nolimits^*(u(\mathfrak n), k) \cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}(\mathfrak n,k)$ holds as modules over the symmetric algebra, but not as rings. Similar examples are given in type $B_2$ in characteristic 5 and in type $G_2$ in characteristic 7.
In each example, there is a one-dimensional extension of the Lie algebra whose cohomology ring is not Cohen-Macaulay. Examples of Lie algebras whose cohomology rings are not Cohen-Macaulay are given in all characteristics. In Section 8, we outline which Lie algebras of dimension 5 have cohomology rings that are Cohen-Macaulay. In Section 9, we look at the special case of the nilpotent radical of the Borel subalgebra of $\mathfrak{sl}_4$ when $h<p<2(h-1)$ and show that the cohomology ring can be identified with the symmetric algebra tensored with the ordinary Lie algebra cohomology as rings. \section{Preliminaries} Let $({\mathfrak g},[p])$ be a restricted Lie algebra over an algebraically closed field of characteristic $p>0$. Throughout this paper we will work with the added assumption that $p\geq 3$. The restricted representations for ${\mathfrak g}$ correspond to modules for the restricted enveloping algebra $u({\mathfrak g})$. Since $u({\mathfrak g})$ is a finite-dimensional cocommutative Hopf algebra, the cohomology ring $\text{H}^{*}(u({\mathfrak g}),k)$ is a finitely generated graded-commutative $k$-algebra. For these types of rings, notions like Krull dimension and spectrum are well defined. The spectrum of the cohomology ring ${\mathcal V}_{\mathfrak g}$ is homeomorphic to the restricted nullcone: \[ {\mathcal N}_{1}({\mathfrak g}):= \{x\in {\mathfrak g}:\ x^{[p]}=0\}. \] \vskip.1in When $p\geq 3$ there exists a spectral sequence \begin{equation}\label{specseq1} E_2^{2i,j} \ \ = \ \ S^i({\mathfrak g}^*)^{(1)} \otimes \operatorname{H}\nolimits^j({\mathfrak g},k) \Rightarrow \operatorname{H}\nolimits^{2i+j}(u({\mathfrak g}), k) \end{equation} where $\text{H}^{*}({\mathfrak g},k)$ is the ordinary Lie algebra cohomology. In particular, the map of the universal enveloping algebra of $\mathfrak g$ to the restricted enveloping algebra of $\mathfrak g$ induces an edge homomorphism from the restricted cohomology to the ordinary Lie algebra cohomology.
On the other hand, there is another edge homomorphism $\Phi: S^{*}({\mathfrak g}^{*})^{(1)}\rightarrow \operatorname{H}\nolimits^{2*}(u({\mathfrak g}),k)$ which induces an inclusion of $R=k[{\mathcal N}_{1}({\mathfrak g})]\hookrightarrow \operatorname{H}\nolimits^{*}(u({\mathfrak g}),k)$. Furthermore, $\operatorname{H}\nolimits^{*}(u({\mathfrak g}),k)$ is an integral extension of $R$. The following observation is a consequence of these facts in the event that the $p$-power operation vanishes on a nilpotent restricted $p$-Lie algebra $\mathfrak n$. In this situation, ${\mathcal{N}}_1(\mathfrak n) = \mathfrak n$ and its coordinate ring is $S^*(\mathfrak n^*)^{(1)}$. Indeed, any restricted Lie algebra with vanishing $p$-power operation is nilpotent. \begin{prop} \label{prop:symmetric} Suppose that $\mathfrak n$ is a restricted Lie algebra such that $x^{[p]} = 0$ for all $x \in \mathfrak n$. Then the edge homomorphism $S^*(\mathfrak n^*)^{(1)} \to \operatorname{H}\nolimits^{2*}(u(\mathfrak n),k)$ is an injection. \end{prop} We require several ring theoretic notions which, though usually defined for commutative rings or commutative local rings, apply also to graded-commutative $k$-algebras. Suppose that $A = \sum_{i \geq 0} A_i$ is a graded-commutative $k$-algebra and that $M = \sum_{i \geq 0} M_i$ is a graded $A$-module. A sequence $x_1, \dots, x_r$ of homogeneous elements of $A$ is said to be a {\it regular sequence} for $M$ if for every $i = 1, \dots, r$, we have that multiplication by $x_i$ is an injective map from $M/(x_1, \dots, x_{i-1})M$ to itself. The {\it depth} of $M$ is the length of the longest regular sequence for $M$, and the depth of $A$ is the depth of $A$ as a module over itself. A sequence of homogeneous elements $x_1, \dots, x_r$ is a homogeneous set of parameters for $A$ if $x_1, \dots, x_r$ generate a polynomial subring $R = k[x_1, \dots, x_r] \subseteq A$ and $A$ is finitely generated as a module over $R$.
In this case, the number $r$ must be the Krull dimension of $A$. The module $M$ is {\it Cohen-Macaulay} if its depth is equal to the Krull dimension of $A$. The algebra $A$ is Cohen-Macaulay if it is Cohen-Macaulay as a module over itself. This is equivalent to the condition that there be a homogeneous set of parameters $x_1, \dots, x_r$ for $A$ such that $A$ is a finitely generated free module over the polynomial subring $k[x_1, \dots, x_r]$. It is a theorem that if there is a homogeneous set of parameters $x_1, \dots, x_r$ such that $A$ is a finitely generated free module over $k[x_1, \dots, x_r]$, then $A$ is a finitely generated free module over $k[y_1, \dots, y_r]$ for any homogeneous set of parameters $y_1, \dots, y_r$. For a reference, see Proposition 3.1 of \cite{St} or Theorem 2, page IV-20 of \cite{Se}. Note that in the book of Serre, the proof is given only for commutative local rings, but can be easily adapted to the graded-commutative case. With these preliminaries, we can extract the results that we need for this paper. \begin{thm}\label{thm:CM-prelim} Suppose that $\mathfrak n$ is a $p$-nilpotent Lie algebra with trivial $p$th-power operation (i.e., $x^{[p]} = 0$ for all $x \in \mathfrak n$). Then the cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is Cohen-Macaulay if and only if it is a free module over the polynomial subring $S^{*}(\mathfrak n^*)^{(1)}$. In particular, no nonzero element of $S^{*}(\mathfrak n^*)^{(1)}$ can be a divisor of zero if the cohomology ring is Cohen-Macaulay. \end{thm} \begin{proof} The last statement clearly follows from the first part of the theorem. Suppose that $x_1, \dots, x_r$ is a basis for $\mathfrak n^*$. 
By Proposition~\ref{prop:symmetric}, $$S^{*}(\mathfrak n^*)^{(1)}= k[x_1, \dots, x_r] \subseteq \operatorname{H}\nolimits^*(u(\mathfrak n),k).$$ Because the ordinary Lie algebra cohomology $\operatorname{H}\nolimits^*(\mathfrak n,k)$ is finite dimensional, we have that $x_1, \dots, x_r$ is a homogeneous set of parameters for $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$. If $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is a finitely generated free module over $S(\mathfrak n^*)^{(1)}$, then it is Cohen-Macaulay. On the other hand, if $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is Cohen-Macaulay, then it must be free as a module over $S(\mathfrak n^*)^{(1)}$, by the results cited above. \end{proof} \section{Consequences of the Lie cohomology spectral sequence} Let $\mathfrak n$ be a restricted Lie algebra with trivial $p$-restriction. We consider the first quadrant spectral sequence given as \begin{equation} \label{eq:specseq} E_2^{2i,j} \ \ = \ \ S^i(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^j(\mathfrak n,k) \Rightarrow \operatorname{H}\nolimits^{2i+j}(u(\mathfrak n), k) \end{equation} One important fact to note about the spectral sequence is that $E_2^{i,j} = \{0\}$ if $j > \operatorname{Dim}\nolimits(\mathfrak n)$. This is simply because the ordinary Lie algebra cohomology of $\mathfrak n$ has the property that $\operatorname{H}\nolimits^j(\mathfrak n, k) = \{0\}$ for $j > \operatorname{Dim}\nolimits(\mathfrak n)$. This spectral sequence can be used to show that the cohomology of $u({\mathfrak n})$ is Cohen-Macaulay. \begin{prop}\label{prop:specseq} If the spectral sequence $E_*^{*,*}$ collapses at the $E_2$ page, then $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is free as a module over the polynomial subalgebra $S= S^{*}(\mathfrak n^*)^{(1)}.$ In particular, it is Cohen-Macaulay. \end{prop} \begin{proof} The spectral sequence is a filtered version of the cohomology $\operatorname{H}\nolimits^*(u({\mathfrak n}),k)$.
That is, if $\zeta \in E_2^{i,j}$ and $\eta \in E_2^{r,s}$ then $\zeta\eta \in \sum_{\ell \geq 0} E_2^{i+r+\ell, j+s-\ell}$. For this reason, for any $m$ the collection of the lowest $m$ rows ($U_m = \sum E_2^{i,j}$ with $i \geq 0$ and $0 \leq j \leq m$) is a module over $S$, which lies in the bottom row. Because the spectral sequence collapses, $E_2 = E_{\infty}$ and the quotients $U_m/U_{m-1}$ are free $S$-modules. Hence, the proposition follows from the fact that every one of the quotient maps $U_m \to U_m/U_{m-1}$ must split as a map of $S$-modules. \end{proof} Many of the results of \cite{BC1} apply in the case that the cohomology ring is Cohen-Macaulay. In particular, we have the following adaptation of \cite[Theorem 1.1]{BC1}. We refer the reader to that paper for the proof which carries over from group cohomology to restricted Lie algebra cohomology with only minimal changes. \begin{thm} \label{thm:bc} Suppose that $\mathfrak n$ is a restricted $p$-Lie algebra with trivial $p$th-power operation. Let $d$ denote the dimension of $\mathfrak n$. If the cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n), k)$ is Cohen-Macaulay, then any basis for $\mathfrak n^*$, meaning any complete linearly independent set of degree-two generators $X_1, \dots, X_d$ for the symmetric algebra $S^{*}(\mathfrak n^*)^{(1)}$, is a homogeneous set of parameters for $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ and the quotient \[ \operatorname{H}\nolimits^*(u(\mathfrak n),k)/(X_1, \dots, X_d) \] satisfies Poincar\'e duality in formal dimension $d$. Moreover, the Poincar\'e series $P_k(t) = \sum_{i \geq 0} \operatorname{Dim}\nolimits \operatorname{H}\nolimits^i(u(\mathfrak n),k)\ t^{i}$, regarded as a rational function of $t$ satisfies the functional equation \[ P_k(1/t) = (-t)^d P_k(t). \] \end{thm} A consequence of the preceding result is the following.
\begin{cor}\label{cor:bc} Suppose that $\mathfrak n$ is a restricted $p$-Lie algebra of dimension $d$ whose $p$-power operation is trivial. Then the spectral sequence (\ref{eq:specseq}) collapses at the $E_2$ page if and only if the cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is Cohen-Macaulay. In this case, the edge homomorphism $\operatorname{H}\nolimits^*(u(\mathfrak n),k) \to \operatorname{H}\nolimits^*(\mathfrak n,k)$ is surjective and the Poincar\'e series, as a rational function has the form \[ P_k(t) = \frac{f_k(t)}{(1-t^2)^d}, \] where $f_k(t)$ is the Poincar\'e polynomial for the ordinary Lie algebra cohomology $\operatorname{H}\nolimits^*(\mathfrak n,k)$. \end{cor} \begin{proof} For convenience of notation let $S = S^{*}(\mathfrak n^*)^{(1)}$ and let $\operatorname{H}\nolimits^* = \operatorname{H}\nolimits^*(u(\mathfrak n),k)$. By Proposition \ref{prop:specseq}, if the spectral sequence (\ref{eq:specseq}) collapses at the $E_2$ page then $\operatorname{H}\nolimits^*$ is Cohen-Macaulay. We need to prove the converse. So assume that $\operatorname{H}\nolimits^*$ is Cohen-Macaulay. For $t \geq 0$, let $M_t = S \cdot \sum_{j=0}^t \operatorname{H}\nolimits^j$, the $S$-submodule of $\operatorname{H}\nolimits^*$ generated by elements of degree at most $t$. By Theorem \ref{thm:bc}, this is a free $S$-module. That is, let $I = (X_1, \dots, X_d)$ (in the notation of the theorem). Then $\operatorname{H}\nolimits^*$ is a free $S$-module on a set of homogeneous elements $\zeta_1, \dots, \zeta_m$ whose classes form a basis of $\operatorname{H}\nolimits^*/I$. If $\zeta_1, \dots, \zeta_\ell$ are all of those elements of degree at most $t$, then it is easily seen that $M_t$ is a free module on these elements.
Now we proceed by induction on $t$ to prove that \[ M_t/I \cdot M_t \ \cong \ \sum_{i = 0}^t E_2^{0,i}, \] as vector spaces, and that no differential $d_r$ on the $r^{th}$ page of the spectral sequence has a nonzero image on any of the lines $E_r^{*,0}, \dots, E_r^{*,t}$. This is true if $t = 0$, by Proposition \ref{prop:symmetric}. That is, the differential $d_r$ on the $r^{th}$-page cannot have a nonzero image $d_r:E_r^{j,r-1} \to E_r^{j+r,0}$ as otherwise the edge homomorphism onto the bottom row would not be injective. So assume that the statement is true for a certain value of $t$. Then the differential $d_r$ on the $r^{th}$ page of the spectral sequence must vanish on $E_r^{*,t+1}$, as otherwise there would be a nonzero image on one of the lower lines. Therefore, $E_2^{0,j} = E_\infty^{0,j}$ for all $j$ with $0 \leq j \leq t+1$. As a consequence, $M_{t+1}$ is generated as an $S$-module by elements representing a $k$-basis for $\sum_{i=0}^{t+1} E_\infty^{0,i} \cong \sum_{i=0}^{t+1} E_2^{0,i}$. That is, $M_{t+1} \cong \sum_{i=1}^{t+1} E_\infty^{*,i}$. Because this is a free $S$-module, it must be that \[ \sum_{i=1}^{t+1} E_\infty^{*,i} \ \cong \ \sum_{i=1}^{t+1} E_2^{*,i} \] since these are both free modules on the same number of generators. It follows that no differential of the spectral sequence can have a nonzero image on row $t+1$. The induction proves the corollary. \end{proof} We record the following lemma which will be useful later in comparing spectral sequences. \begin{lemma} \label{lem:eqdim} Suppose that $\operatorname{Dim}\nolimits \operatorname{H}\nolimits^m(u(\mathfrak n),k) = \sum_{2i+j = m} \operatorname{Dim}\nolimits(S^{2i}(\mathfrak n^*) \otimes \operatorname{H}\nolimits^j(\mathfrak n,k))$ for all $m \geq 0$. Then the spectral sequence (\ref{eq:specseq}) collapses at the $E_2$ page and the cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is Cohen-Macaulay.
\end{lemma} \begin{proof} The hypothesis of the lemma asserts that $\operatorname{Dim}\nolimits \operatorname{H}\nolimits^m(u({\mathfrak n}),k) = \sum_{m = 2i+j} \operatorname{Dim}\nolimits E_2^{2i,j}$. The condition forces the spectral sequence to collapse at the $E_2$ page, because otherwise there would be some further nontrivial differential that would reduce the dimension. \end{proof} Nilpotent Lie algebras can be built up from central extensions. The next theorem provides conditions on when the spectral sequence will collapse at $E_{2}$ under a central extension and yield an isomorphism of $S^{*}({\mathfrak n}^{*})^{(1)}$-modules. \begin{thm}\label{thm:tensoralg} Let $\mathfrak n$ be a nilpotent restricted $p$-Lie algebra with trivial $p$-power operation. Assume that $\mathfrak z$ is a central ideal of dimension one. Suppose that we have an isomorphism of rings \[ \operatorname{H}\nolimits^*(u(\mathfrak n/\mathfrak z), k) \cong S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n/\mathfrak z,k). \] Then \[ \operatorname{H}\nolimits^*(u(\mathfrak n), k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n,k) \] as modules over $S^{*}(\mathfrak n^*)^{(1)}$. In particular, $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is Cohen-Macaulay. \end{thm} \begin{proof} We use the Lyndon-Hochschild-Serre (LHS) spectral sequence \[ E_2^{i,j} = \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak z), \operatorname{H}\nolimits^j(u(\mathfrak z),k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(u(\mathfrak n),k). \] Since ${\mathfrak z}$ is central, $\mathfrak n$ acts trivially on $\mathfrak z$ and hence also trivially on its cohomology. Thus we have $E_{2}^{i,j}\cong \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak z),k)\otimes \operatorname{H}\nolimits^j(u(\mathfrak z),k)$. Next note that $E_2^{0,1}$ has dimension one, and is spanned by an element $x$.
Then $d_2(x) = s + v$, where $s$ is in $S^1((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes 1$ and $v$ is in $1 \otimes \operatorname{H}\nolimits^2(\mathfrak n/\mathfrak z,k)$ by the hypothesis. Also by the hypothesis, we have that $v^n = 0$ for $n$ sufficiently large, since $\operatorname{H}\nolimits^*(\mathfrak n/\mathfrak z,k)$ has finite dimension. Consequently, for $r$ sufficiently large we have that $0 = (s+v)^{p^r} = s^{p^r} + v^{p^r}= s^{p^r}$ on the $E_3$ page of the spectral sequence. However, if $s$ is not zero, then we have a contradiction to the fact that $S^{*}(\mathfrak n^*)^{(1)}$ injects into the cohomology ring $\operatorname{H}\nolimits^{2*}(u(\mathfrak n),k)$. So $d_2(x) \in 1 \otimes \operatorname{H}\nolimits^2(\mathfrak n/\mathfrak z,k)$. Next we see that the rows $E_2^{*,0}$ and $E_2^{*,1}$ are both isomorphic to $\operatorname{H}\nolimits^*(u(\mathfrak n/\mathfrak z), k)$ as modules over the symmetric algebra $S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)}$. Moreover, the differential $d_2: E_2^{*,1} \to E_2^{*+2,0}$ is a homomorphism of $S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)}$-modules. More specifically, we have that the differential \[ d_2 : S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes \operatorname{H}\nolimits^{*}(\mathfrak n/\mathfrak z,k) \cong E_2^{*,1} \longrightarrow E_2^{*+2,0} \cong S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes \operatorname{H}\nolimits^{*}(\mathfrak n/\mathfrak z,k) \] is multiplication by $d_2(x)$ which has the form $d_2(x) = 1 \otimes w \in 1 \otimes \operatorname{H}\nolimits^2(\mathfrak n/\mathfrak z,k)$. Hence, by the hypothesis, we have that $E_3^{*,1} \cong S((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes K$ and $E_3^{*,0} \cong S((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes C$, where $K$ and $C$ are respectively the kernel and cokernel of multiplication by $w$ on $\operatorname{H}\nolimits^*(\mathfrak n/\mathfrak z,k)$.
We now observe that the element $w \in \operatorname{H}\nolimits^2(\mathfrak n/\mathfrak z, k)$ is the extension class associated to the extension $\mathfrak z \hookrightarrow \mathfrak n \rightarrow \mathfrak n/\mathfrak z$. In the LHS spectral sequence $\hat{E}_2^{i,j} = \operatorname{H}\nolimits^i(\mathfrak n/\mathfrak z, \operatorname{H}\nolimits^j(\mathfrak z,k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(\mathfrak n,k)$, of ordinary Lie algebra cohomology associated to that sequence, the differential $d_2: \hat{E}_2^{*,1} \to \hat{E}_2^{*+2,0}$ is multiplication by $w$. In addition, there can be no further differentials in that spectral sequence, since the sequence has only two nonzero rows. It follows that $E_3^{*,0} \oplus E_3^{*,1} \cong S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n, k)$ as (free) modules over $S^{*}((\mathfrak n/\mathfrak z)^*)^{(1)}$. Finally, we note that $E_3 = E_\infty$. The reason is that the $E_3$ page is generated, as a ring, by the elements on the bottom two rows ($E_3^{*,0}$ and $E_3^{*,1}$) and an element $X$ in $E_3^{0,2}$ that represents the class of a generator in $S^{*}(\mathfrak z^*)^{(1)} \subseteq S^{*}(\mathfrak n^*)^{(1)}$. The class $X$ must survive until the $E_\infty$ page of the spectral sequence. So $d_3(X) = 0$, and we must have that $XE_3^{i,j} = E_3^{i, j+2}$ for all $i, j \geq 0$. Consequently, the differential $d_3$ must vanish, because it vanishes on a collection of ring generators. The same holds for all further differentials in the spectral sequence. We have verified the hypothesis of Lemma \ref{lem:eqdim} and the theorem is proved. \end{proof} As an initial application we offer the following. Notice that the algebra $\mathfrak n$ in the corollary must be the direct sum of a commutative Lie algebra and a Heisenberg Lie algebra. Assuming $p \geq 3$, any such ordinary Lie algebra can be made into a restricted Lie algebra by assuming a vanishing $p$-power operation.
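The Heisenberg case just mentioned can be made concrete. For the three-dimensional Heisenberg algebra $\mathfrak h$ with basis $e_1,e_2,e_3$, $[e_1,e_2]=e_3$, and trivial $p$-power operation, the ordinary cohomology is read off from the Chevalley--Eilenberg complex, where $d\,e_3^*=-e_1^*\wedge e_2^*$ and $d\,e_1^*=d\,e_2^*=0$, and the resulting Poincar\'e series satisfies the functional equation of Theorem \ref{thm:bc}. The SymPy sketch below is an independent illustration (the paper's actual computations were done in Magma); the differential matrices are hardcoded, and the rank computation over $\mathbb Q$ returns the same Betti numbers as in characteristic $p\geq 3$ here, since the matrices have entries $0,\pm1$.

```python
import sympy as sp

t = sp.symbols('t')

# Chevalley-Eilenberg complex for the 3-dim Heisenberg algebra h:
# d e3* = -e1*^e2*, d e1* = d e2* = 0.
# Lambda^1 basis: (e1*, e2*, e3*); Lambda^2 basis: (e12, e13, e23).
d1 = sp.Matrix([[0, 0, -1],
                [0, 0,  0],
                [0, 0,  0]])          # Lambda^1 -> Lambda^2
d2 = sp.zeros(1, 3)                   # Lambda^2 -> Lambda^3 (vanishes here)

dims = [1,                            # H^0
        3 - d1.rank(),                # H^1 = ker d1  (d0 = 0)
        (3 - d2.rank()) - d1.rank(),  # H^2 = ker d2 / im d1
        1 - d2.rank()]                # H^3 = Lambda^3 / im d2
assert dims == [1, 2, 2, 1]           # Betti numbers of the Heisenberg algebra

# Poincare series of H^*(u(h),k) as in Corollary cor:bc, with d = dim h = 3:
f = sum(c * t**i for i, c in enumerate(dims))   # f_k(t) = 1 + 2t + 2t^2 + t^3
P = f / (1 - t**2)**3
# functional equation P(1/t) = (-t)^d P(t) from Theorem thm:bc:
assert sp.cancel(P.subs(t, 1/t) - (-t)**3 * P) == 0
print(dims, sp.factor(f))
```

Note that the Betti numbers $1,2,2,1$ are palindromic, as the Poincar\'e duality in Theorem \ref{thm:bc} predicts for a Cohen-Macaulay cohomology ring.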
\begin{cor} \label{cor:dim1comm} Suppose that $\operatorname{Dim}\nolimits([\mathfrak n, \mathfrak n]) = 1$. Then $\operatorname{H}\nolimits^*(u({\mathfrak n}), k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n,k)$, as $S^{*}(\mathfrak n^*)^{(1)}$-modules, and $\operatorname{H}\nolimits^*(u({\mathfrak n}),k)$ is Cohen-Macaulay. \end{cor} \begin{proof} Observe that the quotient algebra $\mathfrak v = \mathfrak n/[\mathfrak n,\mathfrak n]$ is commutative and hence we have that $\operatorname{H}\nolimits^*(u(\mathfrak v), k) \cong S^{*}(\mathfrak v^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak v,k)$ as rings. Thus, Theorem~\ref{thm:tensoralg} implies the corollary. \end{proof} The next theorem provides stronger conditions which will show when one can identify $\operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)$ with $S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}({\mathfrak n},k)$ as rings. This theorem, in conjunction with Theorem~\ref{thm:tensoralg}, can be applied to inductively compute cohomology rings. \begin{thm}\label{th:splitting} Let ${\mathfrak n}$ be a $p$-nilpotent Lie algebra and suppose that there is an isomorphism of $S^{*}({\mathfrak n}^{*})^{(1)}$-modules, $$ \operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)} \otimes \operatorname{H}\nolimits^{*}({\mathfrak n},k). $$ Moreover, assume that there exists a subalgebra $B$ in $\operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)$ such that $B\cong \operatorname{H}\nolimits^{*}({\mathfrak n},k)$ under the map $\phi:\operatorname{H}\nolimits^{*} (u({\mathfrak n}),k) \rightarrow \operatorname{H}\nolimits^{*}({\mathfrak n},k)$. Then $\operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}({\mathfrak n},k)$ as rings.
\end{thm} \begin{proof} Let $A$ be the subalgebra of $\operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)$ isomorphic to $S^{*}({\mathfrak n}^{*})^{(1)}$. We have an algebra homomorphism $\Gamma$ defined by the composite \[ S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}({\mathfrak n},k) \rightarrow A\otimes B \rightarrow \operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)\otimes \operatorname{H}\nolimits^{*}(u({\mathfrak n}),k) \rightarrow \operatorname{H}\nolimits^{*}(u({\mathfrak n}),k), \] where the last map is given by the cup product. This map is bijective because \[ \operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}\nolimits^{*}({\mathfrak n},k) \] as $S^{*}({\mathfrak n}^{*})^{(1)}$-modules. \end{proof} \section{Some examples of type A} We begin with the example of the nilpotent radical $\mathfrak n$ of a Borel subalgebra of $\mathfrak{sl}_3$. The relations for the cohomology in characteristic 3 were calculated by computer using the system Magma \cite{BoCa}. Specifically, we use the package for basic algebras written by the first author. Two of the three unusual relations can be derived from the second example in this section. Note that $\mathfrak n$ has an action of a two-dimensional torus. Let $\alpha$ and $\beta$ be the simple roots. By convention the weights of $\mathfrak n$ consist of sums of negative roots, so that its cohomology has weights in the positive cone of roots. For convenience, we subscript elements by their weights whenever this causes no problems. \begin{lemma}\label{sl3-ordin} Let $\mathfrak n$ be the nilpotent radical of a Borel subalgebra of $\mathfrak{sl}_3$ over a field of characteristic at least three.
Then the ordinary Lie algebra cohomology of $\mathfrak n$ is given as \[ \operatorname{H}\nolimits^*(\mathfrak n, k) = k[\eta_\alpha, \eta_\beta, \eta_{2\alpha+\beta}, \eta_{\alpha+2\beta}]/I \] where $I$ is the ideal generated by \[ \eta_\alpha^2, \ \eta_\alpha\eta_\beta, \ \eta_\beta^2, \ \eta_\alpha\eta_{2\alpha+\beta}, \ \eta_\beta\eta_{\alpha+2\beta}, \ \eta_\beta\eta_{2\alpha+\beta} +\eta_{\alpha}\eta_{\alpha+2\beta}, \ \eta_{2\alpha+\beta}^2, \ \eta_{\alpha+2\beta}^2, \ \eta_{2\alpha+\beta}\eta_{\alpha+2\beta}. \] \end{lemma} \begin{proof} The result follows easily from the LHS spectral sequence $E_2^{i,j} = \operatorname{H}\nolimits^i(\mathfrak n/\mathfrak z, \operatorname{H}\nolimits^j(\mathfrak z,k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(\mathfrak n,k)$, where $\mathfrak z$ is the (one-dimensional) center of $\mathfrak n$. Note that the torus $T$ acts on the spectral sequence. We let $\eta_\alpha$ and $\eta_\beta$ be the generators of $E_2^{1,0}$ having weights $\alpha$ and $\beta$, and let $\eta_{\alpha+\beta}$ be the generator of $E_2^{0,1}$ with weight $\alpha+\beta$. It is easy to check that $d_2(\eta_{\alpha+\beta}) = \eta_\alpha\eta_\beta$, which is the extension class. The elements $\eta_\alpha\eta_{\alpha+\beta}$ and $\eta_{\beta}\eta_{\alpha+\beta}$ survive to the $E_3 = E_{\infty}$ page and are ring generators, which we call $\eta_{2\alpha+\beta}$ and $\eta_{\alpha+2\beta}$, respectively. The relations follow easily from the relations in the exterior algebra of ${\mathfrak n}^{*}$. \end{proof} With this lemma, we can compute the cohomology of the restricted Lie algebra. The following calculation is computer generated in part. However, as we shall see in the next example, all but one of the relations can be derived by hand. \begin{prop}\label{sl3-rest} Let $k$ be a field of characteristic 3. Let $u(\mathfrak n)$ be the restricted enveloping algebra of the Lie algebra $\mathfrak n$ which is the nilpotent radical of the Borel subalgebra of $\mathfrak{sl}_3$.
Then the cohomology ring $\operatorname{H}\nolimits^{*}(u(\mathfrak n),k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n, k)$ is a free $S^{*}(\mathfrak n^*)^{(1)}$-module with basis consisting of the images of a basis of the ordinary Lie algebra cohomology of $\mathfrak n$ as in Lemma \ref{sl3-ordin}. The ring $S^{*}(\mathfrak n^*)^{(1)}$ is a polynomial ring in variables $X_\alpha, X_\beta$ and $X_{\alpha+\beta}$ having weights $3\alpha, 3\beta$ and $3(\alpha+\beta)$ under the action of the torus. The multiplicative relations are given by \[ \eta_\alpha^2, \ \eta_\alpha\eta_\beta, \ \eta_\beta^2, \ \eta_\beta\eta_{2\alpha+\beta} +\eta_{\alpha}\eta_{\alpha+2\beta}, \ \eta_{2\alpha+\beta}^2, \ \eta_{\alpha+2\beta}^2, \] \[ \eta_\alpha\eta_{2\alpha+\beta}- \eta_\beta X_\alpha, \ \eta_\beta\eta_{\alpha+2\beta} - \eta_\alpha X_\beta, \ \eta_{2\alpha+\beta}\eta_{\alpha+2\beta} - X_\alpha X_\beta. \] \end{prop} \begin{proof} First note that the isomorphism $\operatorname{H}\nolimits^{*}(u(\mathfrak n),k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n, k)$ is a consequence of Corollary \ref{cor:dim1comm}. Consider the LHS spectral sequence \[ E_2^{i,j} = \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak z), \operatorname{H}\nolimits^j(u(\mathfrak z),k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(u(\mathfrak n),k). \] We have elements $a,b$ in $E_2^{1,0}$ and $u$ in $E_2^{0,1}$. The differential on the $E_2$ page has the form $d_2(u) = ab$ as in the proof of Lemma~\ref{sl3-ordin}. A representative of $X_{\alpha+\beta}$ lies in $E_2^{0,2}$, and this element must survive to the $E_{\infty}$ page. Because the resulting ring at the $E_3$ page is generated in degrees one and two, we conclude that all further differentials must vanish and $E_3 = E_\infty$. The problem is to ungrade the spectral sequence. The first six relations are forced by the grading on the spectral sequence and by the action of the torus.
For example, the relation $\eta_\beta\eta_{2\alpha+\beta} + \eta_{\alpha}\eta_{\alpha+2\beta} = 0$ holds on the (graded) $E_3$ page and must hold after ungrading because there is no nonzero element of that weight ($2\alpha+2\beta$) in $E_3^{3,0}$. For the last three relations, we rely on the computer. For example, $\eta_\alpha\eta_{2\alpha+\beta} = 0$ on the graded $E_3$ page. This element lies in $E_3^{2,1}$; in the ungrading it is equal to $\eta_\beta X_\alpha \in E_3^{4,0}$. Note that both elements have weight $3\alpha+\beta$. The other two relations are similar. \end{proof} Next we extend the example slightly to get a nilpotent Lie algebra (with trivial $p$-power operation) whose restricted cohomology fails to be Cohen-Macaulay. We consider the algebra $\mathfrak n$ labeled $L_{5,9}$ in the list of de Graaf \cite{deG}. This algebra is isomorphic to the quotient of the Lie algebra of all strictly upper triangular $4 \times 4$ matrices by its center. The algebra can also be represented as the algebra of all strictly upper triangular matrices in which all entries in the third (and fourth) row are zero. Notice that in this representation, the algebra has trivial $p$-power operation. The algebra has a basis consisting of $u_1, \dots, u_5$, where $u_4$ and $u_5$ are central, and with the additional relations \[ [u_1, u_2] = u_4, \quad [u_2, u_3] = u_5, \quad [u_1, u_3] = 0. \] The reader should be aware that this is a minor change from the presentation in \cite{deG}. \begin{prop}\label{prop:notcm1} Suppose that the characteristic of $k$ is 3, and let $\mathfrak n$ be as above. The cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is not Cohen-Macaulay. In particular, it has an associated prime $\mathfrak P$ such that $\operatorname{H}\nolimits^*(u(\mathfrak n),k)/\mathfrak P$ has Krull dimension four.
\end{prop} \begin{proof} First we observe that the algebra has an action of a three-dimensional torus, $T$ (in the representation as strictly upper triangular $4 \times 4$ matrices modulo the center of that algebra). With this action, the basis elements $u_1, \dots, u_5$ have weights $-\alpha, \ -\beta, \ -\gamma, \ -\alpha-\beta$ and $-\beta- \gamma$, respectively. The torus also acts on the cohomology. Let $\mathfrak v$ be the commutative subalgebra of $\mathfrak n$ spanned by $u_1, u_3, u_4, u_5$. Its Lie algebra cohomology is an exterior algebra generated by elements $\eta_\alpha, \eta_{\alpha+\beta}, \eta_{\beta+\gamma}, \eta_\gamma$, all in degree one and having weights as indicated by the subscripts. The element $u_2$ acts on $\mathfrak v$ and on its cohomology. After a possible rescaling, we have that $u_2\eta_{\alpha+\beta} = \eta_\alpha$ and $u_2\eta_{\beta+\gamma} = \eta_\gamma.$ Recall that the action on the cohomology is dual to the action on the algebra, so multiplying an element of cohomology by $u_2$ subtracts the root $\beta$ from its weight. Now we consider the spectral sequence \[ E_2^{i,j} = \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak v), \operatorname{H}\nolimits^j(u(\mathfrak v),k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(u(\mathfrak n), k). \] As a module over $u(\mathfrak n/\mathfrak v)$, \[ \operatorname{H}\nolimits^1(u(\mathfrak v),k) \ \cong \ M_1 \oplus M_2 \] where $M_1$ is the span of $\{\eta_\alpha, \eta_{\alpha+\beta} \}$, $M_2$ is the span of $\{ \eta_\gamma, \eta_{\beta+\gamma} \}$, and the action of the class of $u_2$ is given as above. In degree 2, we have that \[ \operatorname{H}\nolimits^2(u(\mathfrak v), k) \ \cong \Lambda^{2}(M_1) \ \ \oplus \ \ M_1 \otimes M_2 \ \ \oplus \ \ \Lambda^{2}(M_2) \ \ \oplus \ \ k^4 \] where the last summand is spanned by the generators $X_\alpha, X_\gamma, X_{\alpha+\beta}, X_{\beta+\gamma}$ (each having weight equal to three times its index) of $S^{*}(\mathfrak v^*)^{(1)}$, which are fixed by the action of $u_2$.
The interesting part of this is the tensor product $M_1 \otimes M_2$, which is the direct sum of a trivial module spanned by $\eta_\alpha \wedge \eta_{\beta+\gamma} - \eta_{\alpha+\beta} \wedge \eta_\gamma$ and an indecomposable three-dimensional module generated by $\eta_{\alpha+\beta} \wedge \eta_{\beta+\gamma}$ with socle spanned by $w = \eta_\alpha \wedge \eta_\gamma$. Because the characteristic is 3, the latter is a free module over $u(\mathfrak n/\mathfrak v)$. Thus there is an element of $E_2^{0,2} \cong \operatorname{H}\nolimits^2(u(\mathfrak v),k)^{\mathfrak n/\mathfrak v}$ that is determined by $w$. Write $M_1 \otimes M_2 \cong k \oplus N$, where $N$ is the free submodule with socle spanned by $w$. Let $X_\beta \in E_2^{2,0}$ be the generator of $S^{1}((\mathfrak n/\mathfrak v)^{*})^{(1)}$. This element survives to the $E_\infty$ page. Moreover, every element in $E_2^{i,j}$ for $i \geq 2$ is a multiple of this element. We claim that this element must be contained in an associated prime of $\operatorname{H}\nolimits^*(u(\mathfrak n), k)$. Specifically, the element $\hat{w} \in \operatorname{H}\nolimits^0(u(\mathfrak n/\mathfrak v),N) \subseteq E_2^{0,2}$ determined by $w$ has the property that $X_\beta\hat{w} = 0$ on the $E_2$ page, because $N$ is free over $u(\mathfrak n/\mathfrak v)$. So we only need to show that $\hat{w}$ survives to the $E_\infty$ page and that the product $X_\beta\hat{w}$ does not ungrade to something nonzero. Both of these statements can be deduced by looking at the action of the torus. That is, $w$ has weight $\alpha + \gamma$, and there is no element of that weight in either $E_2^{2,1}$ or $E_3^{3,0}$. Hence $d_2$ and $d_3$ must both vanish on the class of $w$. Likewise, $X_\beta\hat{w}$ has weight $\alpha +3\beta +\gamma$, and there is no element of that weight in $E_\infty^{3,1}$ or $E_\infty^{4,0}$.
So we must have that $X_\beta$ annihilates the class of $w$ in $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$. \end{proof} \begin{rem} \label{rem:derive-rel} As mentioned earlier in the paper, two of the computer-generated relations in Proposition \ref{sl3-rest} are indicated by the calculation above. For this we require the spectral sequence \[ E_2^{i,j} \ = \ \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak z), \operatorname{H}\nolimits^j(u(\mathfrak z), k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(u(\mathfrak n),k) \] where $\mathfrak n$ is the 5-dimensional Lie algebra as above and $\mathfrak z$ is the one-dimensional subalgebra spanned by $u_5$. Note that $\mathfrak n/\mathfrak z \cong \mathfrak a \oplus \mathfrak b$, where $\mathfrak a$ (generated by the classes of $u_1, u_2, u_4$) is the nilpotent radical of a Borel subalgebra of $\mathfrak{sl}_3$, and $\mathfrak b$ (generated by $u_3$) is a one-dimensional Lie algebra. The bottom row of the spectral sequence is generated by elements $\eta_\alpha, \eta_\beta, \eta_{2\alpha+\beta}, \eta_{\alpha+2\beta}, X_\alpha, X_\beta,X_{\alpha+\beta}$, generating $\operatorname{H}\nolimits^*(u(\mathfrak a),k)$, and $\eta_\gamma, X_\gamma$, generating $\operatorname{H}\nolimits^*(u(\mathfrak b),k)$. The weights are as indicated by the subscripts. The term $E_2^{0,1}$ is spanned by an element $\eta_{\beta+\gamma}$. Its image under the differential $d_2$ is $\eta_\beta \eta_\gamma$. What we know from the proof of Proposition~\ref{prop:notcm1} is that $\eta_\alpha \eta_\gamma X_\beta = 0$. This element can only be zero if it is in the image of $d_2$. Then by an examination of weights we see that (up to some nonzero scalar multiple) $d_2(\eta_{\beta+\gamma}\eta_{\alpha+2\beta}) = \eta_\beta \eta_\gamma \eta_{\alpha+2\beta} = \eta_\alpha \eta_\gamma X_\beta$. Note that this relation occurs in the ring $E_2^{*,0} \cong \operatorname{H}\nolimits^*(u(\mathfrak a),k) \otimes \operatorname{H}\nolimits^*(u(\mathfrak b),k)$.
Consequently, we must have that $\eta_\beta \eta_{\alpha+2\beta} = \eta_\alpha X_\beta$ in $\operatorname{H}\nolimits^*(u(\mathfrak a),k)$, as asserted. The relation $\eta_\alpha \eta_{2\alpha+\beta} = \eta_\beta X_\alpha$ follows by symmetry (interchanging $u_1$ with $u_3$ and $u_4$ with $u_5$). \end{rem} We shall see in Section \ref{sec:metabelian} that the example in Proposition \ref{prop:notcm1} can be generalized, giving a metabelian Lie algebra whose cohomology ring is not Cohen-Macaulay for any prime $p$. \section{Some examples of type B} In this section, we consider the nilpotent radical $\mathfrak n$ of the Borel subalgebra of a Lie algebra of type $B_2$ and some extensions thereof. We show that in characteristic~5, the cohomology of $\mathfrak n$ has the form $\operatorname{H}\nolimits^*(u(\mathfrak n),k) \cong S(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n,k)$ as a module over $S(\mathfrak n^*)^{(1)}$, but not as a ring. Moreover, there is a one-dimensional extension of this Lie algebra whose cohomology is not Cohen-Macaulay. The basic idea of the construction applies to other non-simply-laced cases in other characteristics. Note that if the characteristic of $k$ is greater than $6 = 2(h-1)$, then the isomorphism $\operatorname{H}\nolimits^*(u(\mathfrak n),k) \cong S(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n,k)$ is an isomorphism of rings by \cite[Theorem 3.1.1]{DNN}. The Lie algebra $\mathfrak n$ has a basis $u_1, u_2, u_3, u_4$, and the Lie bracket is given by $[u_1,u_2] = u_3$, $[u_1, u_3] = u_4$, $[u_2,u_3] = 0$, with $u_4$ being central. There is an action of a two-dimensional torus, $T$, relative to which the basis elements have weights $-\alpha, -\beta, -\alpha-\beta, -2\alpha-\beta$, respectively ($\alpha$ being the short simple root). We begin with a calculation of the ordinary Lie algebra cohomology of $\mathfrak n$. As in the last section, we let the subscripts of the elements denote their weights.
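Before beginning the calculation, we record where the extension class comes from; the sign below depends on the choice of dual bases. The center of $\mathfrak n$ is $\mathfrak z = \langle u_4 \rangle$, and the quotient $\mathfrak n/\mathfrak z$ is the Heisenberg algebra of Lemma \ref{sl3-ordin}. The class in $\operatorname{H}\nolimits^2(\mathfrak n/\mathfrak z, k)$ of the extension $\mathfrak z \hookrightarrow \mathfrak n \rightarrow \mathfrak n/\mathfrak z$ is represented by the cocycle dual to the bracket $[u_1, u_3] = u_4$, namely (up to sign)
\[
\eta_\alpha \wedge \eta_{\alpha+\beta},
\]
which has weight $2\alpha+\beta$ and represents the generator $\eta_{2\alpha+\beta}$ in the notation of Lemma \ref{sl3-ordin}.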
\begin{lemma}\label{lem:ordcohob2} The cohomology ring $\operatorname{H}\nolimits^*(\mathfrak n,k)$ is generated by elements which we denote $\eta_\alpha, \eta_\beta, \eta_{\alpha+2\beta}, \eta_{3\alpha+\beta}, \eta_{4\alpha+2\beta}, \eta_{3\alpha+3\beta}$ in degrees 1,1,2,2,3,3. All products of two of the given generators are zero except for the products \[ \eta_{\alpha} \eta_{3\alpha+3\beta} = -\eta_\beta\eta_{4\alpha+2\beta} = \eta_{3\alpha+\beta} \eta_{\alpha+2\beta}. \] \end{lemma} \begin{proof} Let $\mathfrak z$ be the subalgebra spanned by $u_4$. Then we have a spectral sequence given as \[ E_2^{i,j} = \operatorname{H}\nolimits^i(\mathfrak n/\mathfrak z, \operatorname{H}\nolimits^j(\mathfrak z,k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(\mathfrak n,k). \] The bottom row is the cohomology of the nilpotent radical of a Borel subalgebra of $\mathfrak{sl}_3$, which is provided in Lemma \ref{sl3-ordin}. We adopt the notation for the elements as given in that lemma. The $E_2$ term of the spectral sequence is generated as a ring by one additional element $\zeta = \zeta_{2\alpha+\beta} \in E_2^{0,1}$. Its image under the $d_2$ differential is the extension class $\eta_{2\alpha+\beta}$ (having the same weight). The differential vanishes on every product of $\zeta$ with an element of the bottom row except $\eta_\beta \zeta$, where $d_2(\eta_\beta \zeta) = \eta_\beta \eta_{2\alpha+\beta} = -\eta_\alpha \eta_{\alpha+2\beta}$. The product structure is derived from the product on the $E_2$ page as well as weight considerations. \end{proof} Now we extend this to the restricted Lie algebra cohomology. Some of the relations in the proposition given below were calculated using the basic algebra package in Magma. \begin{prop} \label{prop:b2rest} Suppose that $k$ is a field of characteristic 5. Let $\mathfrak n$ be the nilpotent radical of the Borel subalgebra of a Lie algebra of type $B_2$.
The cohomology ring $\operatorname{H}\nolimits^{*}(u(\mathfrak n),k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n, k)$ is a free $S^{*}(\mathfrak n^*)^{(1)}$-module with basis consisting of the images of a basis of the ordinary Lie algebra cohomology of $\mathfrak n$ as in Lemma \ref{lem:ordcohob2}. The ring $S^{*}(\mathfrak n^*)^{(1)}$ is a polynomial ring in variables $X_\alpha, X_\beta, X_{\alpha+\beta}$ and $X_{2\alpha+\beta}$ having weights $5\alpha, 5\beta, 5(\alpha+\beta)$ and $5(2\alpha+\beta)$ under the action of the torus. The multiplicative relations among the generators of $1 \otimes \operatorname{H}\nolimits^*(\mathfrak n,k) \subseteq \operatorname{H}\nolimits^*(u(\mathfrak n),k)$ are exactly as given in Lemma \ref{lem:ordcohob2}, except that $\eta_{3\alpha+\beta}^2 = \eta_{\alpha+2\beta}X_\alpha$. In particular, the isomorphism $\operatorname{H}\nolimits^{*}(u(\mathfrak n),k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n, k)$ does not hold as rings. \end{prop} \begin{proof} The first statement follows from \cite[Theorem 3.1.1]{DNN}. A large part of the remainder of the proof can be derived from the LHS spectral sequence and weight considerations. The unusual relation was verified by computer. \end{proof} Next we consider an extension of the algebra whose cohomology ring is not Cohen-Macaulay. As in the case of Remark \ref{rem:derive-rel}, we can derive the unusual relation in the above proposition from the calculation of the cohomology of the extension. Let $\mathfrak n$ be the restricted Lie algebra of dimension~5, with basis $u_1, \dots, u_5$ and Lie bracket given by $[u_1,u_2] = u_3$, $[u_1, u_3] = u_4$, $[u_1,u_4] = u_5$, with $u_2, \dots, u_5$ spanning a commutative subalgebra that we denote $\mathfrak v$. We continue to assume that the characteristic of $k$ is~5.
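Observe (this is immediate from the brackets just given) that the adjoint action of $u_1$ on the commutative ideal $\mathfrak v$ is a single nilpotent Jordan block:
\[
\operatorname{ad}(u_1)\colon u_2 \mapsto u_3 \mapsto u_4 \mapsto u_5 \mapsto 0.
\]
In particular $(\operatorname{ad} u_1)^4 = 0$, so the trivial $p$-power operation is admissible for $p = 5$, and $u(\mathfrak n/\mathfrak v) \cong k[t]/(t^5)$, where $t$ denotes the class of $u_1$.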
The algebra $\mathfrak n$ has an action of a two-dimensional torus, so that the elements $u_1, \dots, u_5$ have weights $-\alpha$, $-\beta$, $-\alpha-\beta$, $-2\alpha-\beta$ and $-3\alpha-\beta$, respectively. \begin{prop}\label{prop:b2notcm} The cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is not Cohen-Macaulay. In particular, there is an element in $\operatorname{H}\nolimits^2(u(\mathfrak n),k)$ whose annihilator $\mathfrak P$ has the property that $\operatorname{H}\nolimits^*(u(\mathfrak n),k)/\mathfrak P$ has Krull dimension four. \end{prop} \begin{proof} We consider the spectral sequence $E_2^{i,j} = \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak v), \operatorname{H}\nolimits^j(u(\mathfrak v),k))\Rightarrow \operatorname{H}\nolimits^{i+j}(u(\mathfrak n),k)$. As a module over $u(\mathfrak n/\mathfrak v)$, $M = \operatorname{H}\nolimits^1(u(\mathfrak v),k)$ is uniserial of length~4, generated by an element $\eta_{3\alpha + \beta}$. Then $\operatorname{H}\nolimits^2(u(\mathfrak v),k)$ contains the exterior square of $M$, which has a free $u(\mathfrak n/\mathfrak v)$-summand generated by $\eta_{3\alpha+\beta} \wedge u_1\eta_{3\alpha+\beta}$ and having socle generated by $\eta_{\alpha+\beta} \wedge \eta_\beta$. Hence there is a class $\zeta$ in $E_2^{0,2}$ represented by a $u(\mathfrak n/\mathfrak v)$-homomorphism of $k$ onto the socle of this summand. This class is annihilated on the $E_2$ page of the spectral sequence by the class $X_\alpha$ in $E_2^{2,0}$. As in the case of Proposition \ref{prop:notcm1}, we can argue by weights that $\zeta$ is represented by a nonzero class $\hat{\zeta} \in \operatorname{H}\nolimits^2(u(\mathfrak n),k)$ such that $X_\alpha \hat{\zeta} =0$, and the annihilator of $\hat{\zeta}$ is contained in a prime $\mathfrak P$ having the asserted properties. \end{proof} \begin{rem}\label{rem:notorus} We should note that the action of the torus is not required to show that the example is not Cohen-Macaulay.
If it were the case that one of the differentials $d_2$ or $d_3$ failed to vanish on the class $\zeta$, then we would have that $d_2(\zeta) = X_\alpha \mu$ for some $\mu$ in $E_2^{0,1}$ or that $d_3(\zeta) = X_\alpha \mu$ for some $\mu \in E_3^{1,0}$. In either case we would have a class in $\operatorname{H}\nolimits^1(u(\mathfrak n),k)$ that is annihilated by $X_\alpha$. Similarly, there is no way to ungrade the spectral sequence so as to avoid having a large associated prime in the cohomology ring. \end{rem} \begin{rem}\label{rem:derive-rel2} We remark, as in Remark \ref{rem:derive-rel}, that the unusual relation $\eta_{3\alpha+\beta}^2 = \eta_{\alpha+2\beta} X_\alpha$ in Proposition \ref{prop:b2rest} can be derived from the last proposition. The proof is almost exactly the same as in Remark~\ref{rem:derive-rel}, and we leave the details to the interested reader. \end{rem} \begin{rem}\label{rem:b2other-primes} The situation in Proposition \ref{prop:b2notcm} can be extended to give examples for other primes. Suppose that $p > 5$, and let $\mathfrak n$ be the restricted Lie algebra of dimension $n+1$ for some $n$ with $(p+3)/2 \leq n < p$, defined as follows. A basis for $\mathfrak n$ consists of the elements $v, u_1, \dots, u_n$, where $u_1, \dots, u_n$ span a commutative subalgebra, which we denote $\mathfrak v$. The bracket is given by $[v,u_i] = u_{i+1}$ for $i = 1, \dots, n-1$, and $[v,u_n]= 0$. The $p$th-power operation is zero on $\mathfrak n$. The algebra $\mathfrak n$ has an action of a two-dimensional torus such that the basis elements $v, u_1, \dots, u_n$ have weights $-\alpha, -\beta, -\alpha-\beta, \dots, -(n-1)\alpha-\beta$, respectively. Then $M = \operatorname{H}\nolimits^1(u(\mathfrak v), k)$ is an indecomposable uniserial module of dimension $n$ over the algebra $u(\mathfrak n/\mathfrak v)$. Because $n \geq (p+3)/2$, its exterior square $\Lambda^{2}(M)$ has a free summand.
Hence, considering the spectral sequence with $E_2$ term $E_2^{i,j} = \operatorname{H}\nolimits^i(u(\mathfrak n/\mathfrak v), \operatorname{H}\nolimits^j(u(\mathfrak v),k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(u(\mathfrak n),k)$ and arguing exactly as in the proof of Proposition \ref{prop:b2notcm}, we get that $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ has an associated prime $\mathfrak P$ such that $\operatorname{H}\nolimits^*(u(\mathfrak n),k)/\mathfrak P$ has Krull dimension at most $n$. \end{rem} \section{An example of type $G_2$ in characteristic 7} In this section we consider the nilpotent radical of a Borel subalgebra of the restricted Lie algebra of type $G_2$. We obtain a result similar to those in Proposition \ref{sl3-rest} and Proposition \ref{prop:b2rest}. Because the methods are also very similar to those in the aforementioned propositions, we give only a sketch. \begin{prop}\label{prop:g2rest} Suppose that $\mathfrak n$ is the nilpotent radical of a Borel subalgebra of the restricted Lie algebra of type $G_2$, over a field $k$ of characteristic 7. Then as a module over the symmetric algebra $S^{*}(\mathfrak n^*)^{(1)}$, we have that $\operatorname{H}\nolimits^*(u(\mathfrak n),k) \cong S^{*}(\mathfrak n^*)^{(1)} \otimes \operatorname{H}\nolimits^*(\mathfrak n,k)$. However, this is not an isomorphism of rings. \end{prop} \begin{proof} The characteristic of the field $k$ is larger than the Coxeter number, and hence the first statement is a consequence of \cite[(3.5) Prop.]{FP}. Our task is to prove the second statement. Suppose that $\alpha$ and $\beta$ are the simple roots for the root system of type $G_2$, with $\alpha$ the short root. The other positive roots are $\alpha+\beta, 2\alpha+\beta, 3\alpha+\beta, 3\alpha+2\beta$.
We construct the central extension $\mathfrak g$ \[ \xymatrix{ 0 \ar[r] & \mathfrak a \ar[r] & \mathfrak g \ar[r] & \mathfrak n \ar[r] & 0 } \] where $\mathfrak a$ has dimension one, and $\mathfrak g$ has an action of the two-dimensional torus so that an element of $\mathfrak a$ has weight $-4\alpha-\beta$. Thus $\mathfrak g$ has basis $u_\alpha, u_\beta, u_{\alpha+\beta}, u_{2\alpha+\beta}, u_{3\alpha+\beta}, u_{4\alpha+\beta}, u_{3\alpha+2\beta}$, where the subscript on each element indicates the negative of its weight. Let $\mathfrak v$ be the subalgebra generated by $u_\beta, u_{\alpha+\beta}, u_{2\alpha+\beta}, u_{3\alpha+\beta}, u_{4\alpha+\beta}, u_{3\alpha+2\beta}$, and let $\mathfrak z$ be the subalgebra generated by $u_{3\alpha+2\beta}$. The cohomology of $\mathfrak v$ can be computed from the spectral sequence $E_2^{i,j} = \operatorname{H}\nolimits^i(\mathfrak v/\mathfrak z, \operatorname{H}\nolimits^j(\mathfrak z,k)) \Rightarrow \operatorname{H}\nolimits^{i+j}(\mathfrak v,k)$. Let $\eta_\gamma$ denote an element of weight $\gamma$ on this $E_2$ page. The differential must take the element $\eta_{3\alpha+2\beta} \in E_2^{0,1}$ to the extension class $d_2(\eta_{3\alpha+2\beta}) = \eta_\beta \wedge \eta_{3\alpha+\beta} - \eta_{\alpha+\beta} \wedge \eta_{2\alpha+\beta} \in E_2^{2,0}$. Note that this element is annihilated by the action of $u_\alpha$, as is $\eta_{2\alpha+\beta}$. The point of this calculation is that $\operatorname{H}\nolimits^2(\mathfrak v,k)$ contains a free module under the action of $u(\mathfrak g/\mathfrak v)$. That is, the element $\eta_{4\alpha+\beta} \wedge \eta_{3\alpha+\beta}$ generates a uniserial module of dimension $7$ over $u(\mathfrak g/\mathfrak v)$, whose socle (the submodule annihilated by the action of $u_\alpha$) is spanned by $\eta_{\alpha+\beta} \wedge \eta_{\beta}$, a class that survives to the $E_\infty$ page of the spectral sequence.
This implies that $\operatorname{H}\nolimits^{*}(u({\mathfrak g}),k)$ is not Cohen-Macaulay. That is, we see from the spectral sequence $E_2^{i,j} = \operatorname{H}\nolimits^i(u(\mathfrak g/\mathfrak v), \operatorname{H}\nolimits^j(u(\mathfrak v),k))\Rightarrow \operatorname{H}\nolimits^{i+j}(u({\mathfrak g}),k)$ that the element $X_\alpha \in E_2^{2,0}$ in the symmetric algebra annihilates the element corresponding to $\eta_{\alpha+\beta} \wedge \eta_{\beta} \in E_2^{0,2}$. This spectral sequence collapses at the $E_2$ page, and hence the relation exists in $\operatorname{H}\nolimits^*(u(\mathfrak g),k)$. The proposition is a consequence of Theorem \ref{thm:tensoralg}. More specifically, by following the arguments in Proposition \ref{sl3-rest} and using the weight information, we see that in $\operatorname{H}\nolimits^4(u(\mathfrak n),k)$ there must be a relation having roughly the form $(\eta_\alpha \wedge \eta_{3\alpha+\beta})^2 = (\eta_{\alpha+\beta} \wedge \eta_{\beta}) X_\alpha$. Note that both sides are elements of weight $8\alpha+2\beta$. \end{proof} \section{A metabelian example} \label{sec:metabelian} In this section we present an example of a metabelian restricted Lie algebra with the property that its cohomology ring is not Cohen-Macaulay. Such an example can be constructed for any prime $p\geq 3$, except that the dimension of the example depends on the prime $p$. Let $\mathfrak n$ be the nilpotent restricted Lie algebra with basis consisting of the elements $u, v_i, w_i$ for $i = 1, \dots, n$, and Lie bracket defined by the rule \[ [u,v_i] = w_i, \quad [u,w_i] = 0 = [v_i, v_j] = [v_i, w_j] = [w_i, w_j] \] for all $i,j$ such that $1 \leq i, j \leq n$. Note that $\mathfrak n$ is isomorphic to a subalgebra of $\mathfrak{sl}_{n+2}$. That is, we can define a homomorphism $\varphi: \mathfrak n \to \mathfrak{sl}_{n+2}$ as follows. Let $E_{i,j}$ be the matrix with $1 \in k$ in the $(i,j)$ position and 0 elsewhere.
Then define $\varphi$ by $\varphi(u) = E_{1,2}$, $\varphi(v_i) = E_{2,i+2}$ and $\varphi(w_i) = E_{1,i+2}$. The image of $\varphi$ has the property that the $p$th power of any element in the algebra is zero, since $p \geq 3$. Also, we note that the algebra has an action of the diagonal torus of $\mathfrak{sl}_{n+2}$, of dimension $n+1$. Let $\mathfrak v$ be the subalgebra with basis consisting of all of the elements $v_i, w_i$ for $i = 1, \dots, n$. This subalgebra is commutative. We consider the spectral sequence $E_2^{r,s} = \operatorname{H}\nolimits^r(u(\mathfrak n/\mathfrak v), \operatorname{H}\nolimits^s(u(\mathfrak v), k)) \Rightarrow \operatorname{H}\nolimits^{r+s}(u(\mathfrak n),k)$. The cohomology group $\operatorname{H}\nolimits^1(u(\mathfrak v),k) = \operatorname{H}\nolimits^1(\mathfrak v,k)$ has dimension $2n$ and is spanned by elements $\gamma_i$ (of weight $\alpha_2 +\dots + \alpha_{i+1}$) and $\eta_i$ (of weight $\alpha_1 + \dots + \alpha_{i+1}$), for $i = 1, \dots, n$. The action of the element $u \in \mathfrak n/\mathfrak v$ on $\operatorname{H}\nolimits^1(u(\mathfrak v),k)$ is given by $u \cdot \eta_i = \gamma_i$ and $u \cdot \gamma_i = 0$. Thus, $\operatorname{H}\nolimits^1(u(\mathfrak v),k)$ is a direct sum of $n$ uniserial $u(\mathfrak n/\mathfrak v)$-modules of dimension 2. With this information, we can prove the following. \begin{prop}\label{prop:metabelian} Let $n = p-1$. Then the cohomology ring $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is not Cohen-Macaulay. \end{prop} \begin{proof} Let $M$ denote the uniserial $u(\mathfrak n/\mathfrak v)$-module of dimension 2. As noted, $\operatorname{H}\nolimits^1(u(\mathfrak v),k)$ is a direct sum of $p-1$ copies of $M$. Hence, $\operatorname{H}\nolimits^{p-1}(u(\mathfrak v),k)$ contains the $(p-1)$st exterior power of $\operatorname{H}\nolimits^1(u(\mathfrak v),k)$, which includes the $(p-1)$st tensor power of $M$.
The $(p-1)$st tensor power of $M$ has a projective $u(\mathfrak n/\mathfrak v)$-summand, namely the uniserial module of dimension $p$ generated by $\eta_1 \wedge \dots \wedge \eta_{p-1}$ and having socle spanned by $\gamma_1 \wedge \dots \wedge \gamma_{p-1}$. Consequently, $E_2^{0,p-1}$ has an element $y$ of weight $\alpha_2 + \dots + \alpha_{p}$. This element must survive to the $E_{\infty}$ page of the spectral sequence, because there is no element of the same weight in $E_r^{r,p-r}$ for any value of $r \geq 2$. On the other hand, because $\gamma_1 \wedge \dots \wedge \gamma_{p-1}$ is in the socle of a projective $u(\mathfrak n/\mathfrak v)$-module, we must have that $Xy = 0$ for $X$ the generator of the symmetric algebra in $E_2^{2,0}$. Again, by a weight argument we see that the product $Xy$ cannot ungrade to an element that is not zero. Consequently, this relation must exist in $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$. This implies that $X$ is not a regular element and that the depth of $\operatorname{H}\nolimits^*(u(\mathfrak n),k)$ is less than the Krull dimension. \end{proof} \section{Nilpotent Lie algebras of dimension $\leq 5$} In this section we will use de Graaf's classification of indecomposable nilpotent Lie algebras over fields of characteristic not equal to 2. Again our interest is in whether there is an isomorphism \[ \operatorname{H}\nolimits^*(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \Lambda^{*}({\mathfrak n}^{*}) \cong S^{*}({\mathfrak n}^{*})^{(1)} \otimes \operatorname{H}\nolimits^{*}({\mathfrak n},k) \] as rings or as modules over the symmetric algebra. If the algebra is commutative, then the isomorphism holds as rings. On the other hand, if the $p$-power operation on the Lie algebra fails to vanish ($x^{[p]} \neq 0$ for some $x$ in $\mathfrak n$), then the above isomorphism cannot hold.
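A minimal illustration of the last point (this example is for illustration only and is not drawn from de Graaf's list): let $\mathfrak n$ be the two-dimensional commutative restricted Lie algebra with basis $x, y$ and $p$-power operation $x^{[p]} = y$, $y^{[p]} = 0$. Then $\mathfrak n$ is $p$-nilpotent, and
\[
u(\mathfrak n) = k[x,y]/(x^p - y,\, y^p) \cong k[x]/(x^{p^2}),
\]
so $\operatorname{H}\nolimits^*(u(\mathfrak n),k) \cong \Lambda^{*}(\lambda)\otimes k[X]$ has Krull dimension one, while $S^{*}(\mathfrak n^*)^{(1)}$ is a polynomial ring in two variables. Hence no isomorphism as above, even of $S^{*}(\mathfrak n^*)^{(1)}$-modules, can exist.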
Given a finite-dimensional nilpotent Lie algebra, it is not always possible to make it a restricted Lie algebra with trivial $p$-power operation. For one thing, the adjoint action of any element on the algebra would have to be nilpotent of degree less than $p$. We have listed below the indecomposable non-abelian nilpotent Lie algebras of dimension less than or equal to 5, along with the restrictions on the prime $p$ that are necessary to impose a trivial $p$-power operation. Under these restrictions, our standing assumption that ${\mathcal N}_{1}({\mathfrak n})\cong {\mathfrak n}$ holds. The notation for the Lie algebras is taken from de Graaf's list \cite{deG}. \vskip .25cm \noindent Dimension 3: \ $L_{3,2}$ ($p\geq 3$) \vskip .5cm \noindent Dimension 4: \ $L_{4,3}$ ($p \geq 5$) \vskip .5cm The Lie algebras $L_{3,2}$ (resp. $L_{4,3}$) arise naturally as the unipotent radicals of the Borel subalgebras of simple Lie algebras of type $A_{2}$ (resp. $B_{2}$). We have $L_{3,2}=\langle x_{-\alpha}, x_{-\beta}, x_{-\alpha-\beta} \rangle$ and $L_{4,3}=\langle x_{-\alpha}, x_{-\beta}, x_{-\alpha-\beta}, x_{-2\alpha-\beta} \rangle$. The restricted cohomology rings of these algebras are given in Propositions \ref{sl3-rest} and \ref{prop:b2rest}. \vskip .5cm \noindent Dimension 5: $L_{5,4}$ ($p\geq 3$), $L_{5,5}$ ($p\geq 5$), $L_{5,6}$ ($p\geq 5$), $L_{5,7}$ ($p\geq 5$), $L_{5,8}$ ($p\geq 3$), $L_{5,9}$ ($p\geq 5$). \vskip .5cm We now use de Graaf's description of the five-dimensional nilpotent Lie algebras (cf. \cite[Section 4]{deG}) and describe natural gradings on these Lie algebras. The natural gradings are induced by toral actions given by outer automorphisms. When we compute the cohomology of the algebras, the differentials in the spectral sequences respect the actions of these tori. \vskip .25cm \noindent $L_{5,4}$: This nilpotent Lie algebra arises as a subalgebra of the nilpotent radical of a simple Lie algebra of type $A_{3}$.
Let $\alpha_{1},\alpha_{2},\alpha_{3}$ denote the simple roots. The Lie algebra $L_{5,4}$ is the span of the root vectors $\{x_{-\alpha_{1}},x_{-\alpha_{3}}, x_{-\alpha_{1}-\alpha_{2}},x_{-\alpha_{2}-\alpha_{3}}, x_{-\alpha_{1}-\alpha_{2}-\alpha_{3}}\}$. \vskip .25cm \noindent $L_{5,5}$: This Lie algebra has a double grading with basis $\langle x_{-\alpha},x_{-\beta},x_{-\alpha-\beta}, x_{-2\alpha-\beta},x_{-2\alpha} \rangle$. \vskip .25cm \noindent $L_{5,6}$: Let $W(1)=\langle e_{i}:\ i\in {\mathbb Z}\rangle$ be the Witt algebra defined over ${\mathbb Z}$ with Lie bracket $[e_{i},e_{j}]=(j-i)e_{i+j}$. One can consider the subalgebra ${\mathfrak a}=\langle e_{i}:\ i< 0\rangle$ and factor it out by the ideal ${\mathfrak z}=\langle e_{i}:\ i\leq -6\rangle$. The Lie algebra $L_{5,6}$ is the Lie algebra ${\mathfrak a}/{\mathfrak z}$ tensored with $k$. This has a natural ${\mathbb Z}$-grading, thus an action of a one-dimensional torus on $L_{5,6}$. This Lie algebra can also be viewed as a non-graded central extension of the unipotent radical of type $B_{2}$. \vskip .25cm \noindent $L_{5,7}$: The Lie algebra $L_{5,7}$ is a graded central extension of $L_{4,3}$. The Lie algebra can be graded by a two-dimensional torus and has basis given by $\langle x_{-\alpha},x_{-\beta},x_{-\alpha-\beta}, x_{-2\alpha-\beta},x_{-3\alpha-\beta} \rangle$. \vskip .25cm \noindent $L_{5,8}$: Let ${\mathfrak a}$ be the nilpotent radical of the Borel subalgebra of a Lie algebra of type $A_{3}$, and let ${\mathfrak z}$ be the center of this Lie algebra. The Lie algebra $L_{5,8}$ can be realized as ${\mathfrak a}/{\mathfrak z}$ and has basis (with an action of a three-dimensional torus) given by $\langle x_{-\alpha_{1}},x_{-\alpha_{2}},x_{-\alpha_{3}}, x_{-\alpha_{1}-\alpha_{2}},x_{-\alpha_{2}-\alpha_{3}} \rangle$. \vskip .25cm \noindent $L_{5,9}$: One can realize this Lie algebra as another graded central extension of $L_{4,3}$.
This Lie algebra has a double grading with basis $\langle x_{-\alpha},x_{-\beta},x_{-\alpha-\beta}, x_{-2\alpha-\beta}, x_{-\alpha-2\beta} \rangle$. \vskip .5cm The ordinary Lie algebra cohomology can be computed recursively using central extensions and the LHS spectral sequence. For example, if ${\mathfrak a}$ is a nilpotent Lie algebra and ${\mathfrak z}$ is a one-dimensional central subalgebra, then the LHS spectral sequence $$E_{2}^{i,j}=\operatorname{H}^{i}({\mathfrak a}/{\mathfrak z},k) \otimes \operatorname{H}^{j}({\mathfrak z},k) \Rightarrow \operatorname{H}^{i+j}({\mathfrak a},k)$$ will degenerate at the third page (i.e., $E_{3}\cong E_{\infty}$). We have $$\operatorname{H}^{2}({\mathfrak a},k)\cong \operatorname{H}^{2}({\mathfrak a}/{\mathfrak z},k)/\langle \text{Im }\delta_{2} \rangle \oplus \langle \text{Ker }\hat{\delta}_{2} \rangle,$$ where $\delta_{2}:E_{2}^{0,1}\rightarrow E_{2}^{2,0}$ and $\hat{\delta}_{2}:E_{2}^{1,1}\rightarrow E_{2}^{3,0}$. With appropriate choices of central subalgebras one can guarantee that the differentials respect the gradings above. This allows us to compute the differentials $\delta_{2}$ and $\hat{\delta}_{2}$ inductively. The weight spaces for the ordinary Lie algebra cohomology of these Lie algebras are multiplicity free (i.e., one-dimensional) and given in the following tables. To aid in the computation we use some facts about the ordinary Lie algebra cohomology: for example, if ${\mathfrak n}$ has dimension $d$, then the ordinary Lie algebra cohomology vanishes in degrees greater than $d$. In addition, there is a Poincar\'e duality that is also respected by the action of the tori.
So for example, if $d$ is the dimension of the algebra $\mathfrak n$ and if the element in $\operatorname{H}\nolimits^d(\mathfrak n,k)$ has weight $\gamma$, then the weights of the cohomology elements in degree $d-1$ will be $\gamma-\zeta_1, \gamma-\zeta_2, \dots $ where $\zeta_1, \zeta_2, \dots$ are the weights of the cohomology elements in degree 1. \vskip.3in \begin{table}[htbp] \begin{tabular}{||r|r| |r|r||} \hline $L_{3,2}$ & & $L_{4,3}$ & \\ \hline \hline degree & weights & degree & weights \\ \hline 0 & $0$ & 0 & $0$ \\ 1 & $\alpha$, $\beta$ & 1 & $\alpha$, $\beta$ \\ 2 & $\alpha+2\beta$, $2\alpha+\beta$ & 2 & $\alpha+2\beta$, $3\alpha+\beta$ \\ 3 & $2\alpha+2\beta$ & 3 & $3\alpha+3\beta$, $4\alpha+2\beta$ \\ & & 4 & $4\alpha+3\beta$ \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \begin{tabular}{||r|r||} \hline $L_{5,4}$ & \\ \hline \hline degree & weights \\ \hline 0 & $0$ \\ 1 & $\alpha_{1}$, $\alpha_{3}$, $\alpha_{1}+\alpha_{2}$, $\alpha_{2}+\alpha_{3}$ \\ 2 & $\alpha_{1}+\alpha_{3}$, $2\alpha_{1}+\alpha_{2}$, $\alpha_{1}+\alpha_{2}+\alpha_{3}$, \\ & $\alpha_{2}+2\alpha_{3}$, $\alpha_{1}+2\alpha_{2}+\alpha_{3}$ \\ 3 & $2\alpha_{1}+3\alpha_{2}+2\alpha_{3}$, $\alpha_{1}+2\alpha_{2}+3\alpha_{3}$, $2\alpha_{1}+2\alpha_{2}+2\alpha_{3}$, \\ & $3\alpha_{1}+2\alpha_{2}+\alpha_{3}$, $2\alpha_{1}+\alpha_{2}+2\alpha_{3}$ \\ 4 & $2\alpha_{1}+3\alpha_{2}+3\alpha_{3}$, $3\alpha_{1}+3\alpha_{2}+2\alpha_{3}$, $2\alpha_{1}+2\alpha_{2}+3\alpha_{3}$, \\ & $3\alpha_{1}+2\alpha_{2}+2\alpha_{3}$ \\ 5 & $3\alpha_{1}+3\alpha_{2}+3\alpha_{3}$ \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \begin{tabular}{||r|r||} \hline $L_{5,8}$ & \\ \hline \hline degree & weights \\ \hline 0 & $0$ \\ 1 & $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$ \\ 2 & $\alpha_{1}+2\alpha_{2}$, $2\alpha_{1}+\alpha_{2}$, $\alpha_{1}+\alpha_{3}$, \\ & $2\alpha_{2}+\alpha_{3}$, $\alpha_{2}+2\alpha_{3}$ \\ 3 & $\alpha_{1}+\alpha_{2}+2\alpha_{3}$, $2\alpha_{2}+2\alpha_{3}$,
$\alpha_{1}+3\alpha_{2}+\alpha_{3}$, \\ & $2\alpha_{1}+\alpha_{2}+\alpha_{3}$, $2\alpha_{1}+2\alpha_{2}$ \\ 4 & $\alpha_{1}+3\alpha_{2}+2\alpha_{3}$, $2\alpha_{1}+2\alpha_{2}+2\alpha_{3}$, $2\alpha_{1}+3\alpha_{2}+\alpha_{3}$ \\ 5 & $2\alpha_{1}+3\alpha_{2}+2\alpha_{3}$ \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \begin{tabular}{||r|r| |r|r||} \hline $L_{5,5}$ & & $L_{5,6}$ & \\ \hline \hline degree & weights & degree & weights \\ \hline 0 & $0$ & 0 & $0$ \\ 1 & $\alpha$, $\beta$, $2\alpha$ & 1 & $1$, $2$ \\ 2 & $2\alpha+\beta$, $\alpha+2\beta$, $3\alpha$ & 2 & $3$ \\ 3 & $4\alpha+2\beta$, $5\alpha+\beta$, $3\alpha+3\beta$ & 3 & $12$ \\ 4 & $5\alpha+3\beta$, $6\alpha+2\beta$, $4\alpha+3\beta$ & 4 & $13$, $14$ \\ 5 & $6\alpha+3\beta$ & 5 & $15$ \\ \hline \end{tabular} \end{table} \begin{table}[htbp] \begin{tabular}{||r|r| |r|r||} \hline $L_{5,7}$ & & $L_{5,9}$ & \\ \hline \hline degree & weights & degree & weights \\ \hline 0 & $0$ & 0 & $0$ \\ 1 & $\alpha$, $\beta$ & 1 & $\alpha$, $\beta$ \\ 2 & $\alpha+2\beta$, $4\alpha+\beta$ & 2 & $3\alpha+\beta$, $\alpha+3\beta$ \\ 3 & $6\alpha+2\beta$, $3\alpha+3\beta$ & 3 & $2\alpha+4\beta$, $4\alpha+2\beta$ \\ 4 & $6\alpha+4\beta$, $7\alpha+3\beta$ & 4 & $4\alpha+5\beta$, $5\alpha+4\beta$ \\ 5 & $7\alpha+4\beta$ & 5 & $5\alpha+5\beta$ \\ \hline \end{tabular} \end{table} \vfill\eject With this information about the ordinary Lie algebra cohomology $\text{H}^{*}({\mathfrak n},k)$ we can deduce structural properties of the restricted Lie algebra cohomology. \begin{thm} Let ${\mathfrak n}$ be a five-dimensional nilpotent Lie algebra. Then \begin{itemize} \item[(a)] $\operatorname{H}^{*}(u({\mathfrak n}),k) \cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}^{*}({\mathfrak n},k)$ as an $S^{*}({\mathfrak n}^{*})^{(1)}$-module for $L_{5,4}$ ($p\geq 3$), $L_{5,5}$ ($p\geq 5$), $L_{5,6}$ ($p\geq 7$), $L_{5,7}$ ($p\geq 7$), $L_{5,8}$ ($p\geq 5$), and $L_{5,9}$ ($p\geq 5$).
In these cases the cohomology ring $\operatorname{H}\nolimits^{*}(u({\mathfrak n}),k)$ is Cohen-Macaulay. \item[(b)] In addition, the above isomorphism holds as rings for the algebras $L_{5,4}$ ($p\geq 5$), $L_{5,5}$ ($p\geq 7$), $L_{5,6}$ ($p\geq 13$), $L_{5,7}$ ($p\geq 11$), $L_{5,8}$ ($p\geq 5$), and $L_{5,9}$ ($p\geq 5$). \end{itemize} \end{thm} \begin{proof} (a) There exists a one-dimensional central ideal ${\mathfrak z}$ such that ${\mathfrak n}/{\mathfrak z}$ is isomorphic to (i) the nilpotent radical of a simple Lie algebra of type $B_{2}$ for $L_{5,5}$, $L_{5,6}$, $L_{5,7}$ and $L_{5,9}$, (ii) an abelian Lie algebra for $L_{5,4}$, and (iii) the nilpotent radical of type $A_{2}\times A_{1}$ for $L_{5,8}$. Now assume that $p\geq 7$ in case (i), $p\geq 3$ in case (ii), and $p\geq 5$ in case (iii). Then $\operatorname{H}^{*}(u({\mathfrak n}/{\mathfrak z}),k)\cong S^{*}(({\mathfrak n}/{\mathfrak z})^{*})^{(1)}\otimes \operatorname{H}^{*}({\mathfrak n}/{\mathfrak z},k)$ as rings. Hence, by Theorem~\ref{thm:tensoralg}, $\operatorname{H}^{*}(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}^{*}({\mathfrak n},k)$ as an $S^{*}({\mathfrak n}^{*})^{(1)}$-module. The remaining cases when $L_{5,8}$ ($p=3$), $L_{5,5}$ ($p=5$) and $L_{5,9}$ ($p=5$) can be verified directly by showing that the spectral sequence (\ref{eq:specseq}) collapses. (b) We construct a subalgebra $B$ in $\text{H}^{*}(u({\mathfrak n}),k)$ which is isomorphic to $\text{H}^{*}({\mathfrak n},k)$. This is accomplished using the gradings to show that the following conditions cannot simultaneously hold. First, \[ \gamma_{1}+\gamma_{2}=\gamma_{3}+p\sigma \] where $\gamma_{j}$ is a weight of $\text{H}^{a_{j}}({\mathfrak n},k)$ for $j=1,2,3$ and $\sigma\neq 0$ is a weight of $S^{*}({\mathfrak n}^{*})^{(1)}$. Second, $$a_{1}+a_{2}=a_{3}+\text{deg}(\sigma)$$ where $\text{deg}(\sigma)$ is the cohomological degree of the element that $\sigma$ represents.
In all of the cases listed in the statement this was verified, which proves that there do not exist two elements of weights $\gamma_1$ and $\gamma_2$ whose product is the product of an element of the symmetric algebra with an element of weight $\gamma_3$. Furthermore, this shows that the span of a basis of weight vectors in $\text{H}^{*}({\mathfrak n},k)$ forms a subalgebra of $\text{H}^{*}(u({\mathfrak n}),k)$. \end{proof} \begin{thm} Let ${\mathfrak n}$ be a five-dimensional nilpotent Lie algebra. Then $\operatorname{H}^{*}(u({\mathfrak n}),k)$ is not Cohen-Macaulay in the cases that $\mathfrak n$ is $L_{5,7}$ for $p=5$ and $L_{5, 8}$ for $p=3$. In these cases, the depth of the cohomology ring is one less than the Krull dimension. \end{thm} \begin{proof} The results are proved in Propositions \ref{prop:notcm1} and \ref{prop:b2notcm}. \end{proof} We remark that the work in \cite{BC1, BC2} can be adapted for restricted Lie algebra cohomology to show that, in the case when the cohomology ring has depth at least one less than the Krull dimension, the cohomology ring is Cohen-Macaulay if and only if it satisfies the functional equation in Theorem~\ref{thm:bc}. We can conclude that the cohomology rings for $L_{5,7}$ ($p=5$) and $L_{5, 8}$ ($p=3$) do not satisfy the functional equation. \section{Type $A_{3}$, $p>h$} From Propositions \ref{sl3-rest}, \ref{prop:b2rest} and \ref{prop:g2rest}, one might get the impression that if ${\mathfrak n}$ is the unipotent radical of a Borel subalgebra of a simple Lie algebra and $h<p <2(h-1)$, then the restricted cohomology is not isomorphic as a ring to the tensor product of the symmetric algebra and the ordinary Lie algebra cohomology. However, this is not true. In this section we analyze the smallest example of this sort where the ring isomorphism does exist. In this case, the root system $\Phi$ is of type $A_{3}$ and ${\mathfrak n}$ is the six-dimensional Lie algebra of strictly upper triangular $4\times 4$ matrices over a field of characteristic $5$.
In de Graaf's notation this is the Lie algebra $L_{6,19}(\epsilon)$. Since $p>h$, the ordinary Lie algebra cohomology is given by Kostant's theorem. A proof in characteristic $p$ with $p>h$ is found in \cite[Theorem 4.1.1]{UGA}. As a module for $T$ we have \begin{equation} \label{eq:Kostant} \operatorname{H}^{n}({\mathfrak n},k)\cong \bigoplus_{w\in W,\ l(w)=n} -w\cdot 0. \end{equation} As before we are using the convention that ${\mathfrak n}$ consists of negative root vectors. Let $\Delta=\{\alpha_{1},\alpha_{2},\alpha_{3}\}$ denote the simple roots. Then (\ref{eq:Kostant}) can be used to produce the following table, which describes the weights in the cohomology groups $\operatorname{H}^{n}({\mathfrak n},k)$. \begin{table}[htbp] \begin{tabular}{||r|r||} \hline $L_{6,19}(\epsilon)$ & \\ \hline \hline degree & weights \\ \hline 0 & $0$ \\ 1 & $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$ \\ 2 & $\alpha_{1}+2\alpha_{2}$, $\alpha_{1}+\alpha_{3}$, $2\alpha_{1}+\alpha_{2}$, \\ & $\alpha_{2}+2\alpha_{3}$, $2\alpha_{2}+\alpha_{3}$ \\ 3 & $2\alpha_{1}+2\alpha_{2}$, $\alpha_{1}+2\alpha_{2}+3\alpha_{3}$, $\alpha_{1}+3\alpha_{2}+\alpha_{3}$, \\ & $2\alpha_{1}+\alpha_{2}+2\alpha_{3}$, $2\alpha_{2}+2\alpha_{3}$, $3\alpha_{1}+2\alpha_{2}+\alpha_{3}$ \\ 4 & $2\alpha_{1}+2\alpha_{2}+3\alpha_{3}$, $2\alpha_{1}+4\alpha_{2}+2\alpha_{3}$, $\alpha_{1}+3\alpha_{2}+3\alpha_{3}$, \\ & $3\alpha_{1}+3\alpha_{2}+\alpha_{3}$, $3\alpha_{1}+2\alpha_{2}+2\alpha_{3}$ \\ 5 & $2\alpha_{1}+4\alpha_{2}+3\alpha_{3}$, $3\alpha_{1}+3\alpha_{2}+3\alpha_{3}$, $3\alpha_{1}+4\alpha_{2}+2\alpha_{3}$ \\ 6 & $3\alpha_{1}+4\alpha_{2}+3\alpha_{3}$ \\ \hline \end{tabular} \end{table} \begin{thm} Let ${\mathfrak n}$ be the unipotent radical corresponding to the simple group with root system $A_{3}$ (i.e., strictly upper triangular $4\times 4$ matrices) with $p>h$. Then $$\operatorname{H}^{*}(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}^{*}({\mathfrak n},k)$$ as rings.
\end{thm} \begin{proof} For $p>2(h-1)$ we can apply \cite[Theorem 3.1.1]{DNN}. This leaves us with the case when $p=5$. In order to invoke Theorem~\ref{th:splitting}, we need to analyze the equation \begin{equation} \label{eq:condition1} -w_{1}\cdot 0-w_{2}\cdot 0=p\sigma-w_{3}\cdot 0 \end{equation} where $-w_{j}\cdot 0$ is a weight of $\text{H}^{l(w_{j})}({\mathfrak n},k)$ for $j=1,2,3$ and $\sigma$ is a weight of $S^{*}({\mathfrak n}^{*})^{(1)}$. Furthermore, by consideration of cohomological degrees we must have that \begin{equation} \label{eq:condition2} l(w_{1})+l(w_{2})-l(w_{3})=2\text{deg}(\sigma) \end{equation} where $\text{deg}(\sigma)$ is the degree in the symmetric algebra of the element of weight $\sigma$. The first equation (\ref{eq:condition1}) does have solutions. For example, $$(2\alpha_{1}+4\alpha_{2}+3\alpha_{3})+(3\alpha_{1}+3\alpha_{2}+3\alpha_{3}) =(2\alpha_{2}+\alpha_{3})+5(\alpha_{1}+\alpha_{2}+\alpha_{3}).$$ Here $l(w_{1})=l(w_{2})=5$ and $l(w_{3})=2$, so (\ref{eq:condition2}) would require $\text{deg}(\sigma)=4$. However, an element of $S^{*}({\mathfrak n}^{*})^{(1)}$ of weight $\sigma=\alpha_{1}+\alpha_{2}+\alpha_{3}$ has degree at most $3$, since $\sigma$ can be written as a sum of at most three positive roots. Thus, the second equation (\ref{eq:condition2}) cannot hold. A careful, case-by-case analysis rules out the possibility that both equations can be simultaneously satisfied. \end{proof} We end the paper with the following intriguing question. \begin{quest} Suppose that $\mathfrak n$ is the nilpotent radical of a Borel subalgebra of a Lie algebra arising from a reductive algebraic group. In the case that the root system $\Phi$ is simply laced and that $p >h$, is there always a ring isomorphism $$\operatorname{H}^{*}(u({\mathfrak n}),k)\cong S^{*}({\mathfrak n}^{*})^{(1)}\otimes \operatorname{H}^{*}({\mathfrak n},k)?$$ \end{quest}
\section{Introduction \label{intro}} In a series of previous works \cite{deriglazov-bmt:2013,DPM3,DPM1,deriglazov2014Monster} we developed a Poincar\'e-invariant variational formulation describing a particle with spin. This classical model provides a unified description of both the Frenkel and BMT equations \cite{bmt:59}. The latter are considered a basic tool in the analysis of the polarization precession measurements \cite{miller2007muon}. In \cite{DPW2} we extended the variational formulation to general relativity, where classical models of a spinning particle are widely used to describe a rotating body in the pole-dipole approximation \cite{dixon:1964,Khriplovich1989,Khriplovich2008,alwal2015_3,alwal2015_2,alwal2015_1,Koch2015,Balakin2015,Frob2016}. Another possible application is related to the kinetic theory of chiral media, where, in the regime of weak external fields and weak interactions between spinning (quasi-)particles, each particle can be considered as moving along a classical trajectory \cite{chen2014}. Since variational formulations provide a starting point for canonical quantization \cite{dirac1958}, they are of great theoretical importance, connecting the classical and quantum descriptions of nature. Canonical quantization of the free spinning particle (within our variational formulation, \cite{deriglazov2013quant}) leads to a one-particle relativistic quantum mechanics with positive-energy states. It also identifies the non-commutative Pryce d-type center-of-mass operator \cite{deriglazov2013quant} \footnote{See also \cite{gitman1990quantization}, where the same result was obtained for the classical particle with anticommuting spin variables.} as the quantum observable which corresponds to the classical position variable. Non-commutative position variables were constructed already by Pryce \cite{pryce1948mass}. He showed that the coordinates of the relativistic center of mass have to obey non-trivial Poisson brackets.
As a result, the corresponding quantum observables do not commute. Therefore physically meaningful position operators of a spin-$1/2$ particle should be non-commutative. Recent theoretical studies revive Snyder's attempts \cite{snyder1947quantized} to solve fundamental physical problems by introducing non-commutativity of space \cite{deriglazov-nc:2003}. It is believed that this fundamental non-commutativity may be important at the Planck length scale $\lambda_P$. Extensive studies of non-commutativity cover both classical and quantum theories, as well as relativistic and non-relativistic situations. Postulating a non-commutative deformation of the position operators \cite{Ferrari:2012bv}, one can study physical consequences and estimate possible effects. Calculations of corrections to the hydrogen spectrum strongly limit the possible non-commutativity of coordinates in the Dirac equation \cite{sheikh2001,Gomes2010spin-non-commutativity,khodja2012ncommutative-coulomb,santos2012zeeman-non-commutative,kupriyanov2013hydrogen-non-commutativity}. In the present work we study the effects of the non-commutativity of the Pryce d-type center of mass (at both the classical and quantum levels) in the description of an electron interacting with an electromagnetic background. Our considerations extend the results of \cite{deriglazov2013quant} towards a quantization of the interacting spinning particle. In the free theory, different candidates for the position operator are almost indistinguishable. All these operators obey the same Heisenberg equations (uniform rectilinear motion), and the difference in their expectation values is of the order of the Compton wavelength, $\lambda_C$. In the interacting case, the problem of the identification of quantum position observables becomes more complicated.\footnote{ Another related problem is the definition of the spin operator, since it correlates with the definition of the center of mass.
Bauke in \cite{bauke2014spin} compares the Pauli, Foldy-Wouthuysen, Czachor, Frenkel, Chakrabarti, Pryce, and Fradkin-Good spin operators in different physical situations and concludes that interaction with electromagnetic potentials allows one to distinguish between the various spin operators experimentally. } Fleming \cite{fleming1965nonlocal} noted: {\it "The simplest form of interaction is that due to a static potential which may be expressed in terms of the position operator of the particle. For a relativistic particle, however, the important question arises of which position operator should be used. The conventional approach, in which the position operator is assumed to be local, forces the choice of the center of spin."}\footnote{Fleming calls the Newton-Wigner position operator the center of spin, while the Pryce d-type operator is called the center of mass.} He also observed that a formal substitution of the Pryce d-type operator into the potential leads to some reasonable corrections: {\it "The first correction term to a spherically symmetric local potential will be recognized as the spin-orbit coupling that Thomas derived many years ago as a consequence of classical relativity and which appears in the nonrelativistic limit of the Dirac equation for spin particles."} An analogous situation was observed in general relativity \cite{feynman1961,fleming1965covariant,corben:1968,Pomeranskii1998}, where a formal substitution of a non-local position variable into the potential results in the correct equations of motion for the spinning particle. Restricting ourselves to the case of special relativity, in the present work we provide some theoretical grounds for such a substitution. The paper is organized as follows. In Sect. 2 we formulate the problem of the covariant formalism \cite{Pomeranskii1998} concerning the discrepancy between the relativistic and Pauli Hamiltonians. In Sect.
3 we give a brief description of the vector model for the classical description of a relativistic spinning particle. In Sect. 4 we consider canonical quantization of the model in the physical-time parametrization and realize the classical algebra of Dirac brackets by quantum operators in the case of a stationary electromagnetic background. This realization deforms the covariant Hamiltonian and at low energies gives the Pauli Hamiltonian with the correct spin-orbital interaction. In the conclusion we discuss the obtained results. \section{Model independent discussion of the quantum and classical Hamiltonians of a spinning particle} From the quantum point of view, at low energies an electron interacting with a background electromagnetic field is described by the two-component Schr\"{o}dinger equation. The Pauli Hamiltonian\footnote{We will write quantum Hamiltonians and other operators with a hat; the same observables without a hat correspond to the classical theory. Thus (\ref{pauli-hamiltonian}) also defines a classical Pauli-like Hamiltonian.} includes the spin-orbital and Zeeman interactions \begin{eqnarray} \hat H_{ph}=\frac{1}{2m}(\hat{{\rm {\bf p}}} - \frac{e}{c}{\rm {\bf A}})^2 - eA_0 + \frac{e(g-1)}{2m^2c^2}\hat{{\rm {\bf S}}}[\hat{\bf p}\times{\bf E}]- \frac{eg}{2mc}{\rm {\bf B}}\hat{{\rm {\bf S}}} =\hat H_{charge}+\hat H_{spin-em}\,, \qquad \qquad \label{pauli-hamiltonian} \end{eqnarray} where the spin operator is proportional to the Pauli matrices, $\hat S^i=\frac{\hbar}{2}\sigma^i$. The gyromagnetic ratio $g$ is the coupling constant of spin with the electromagnetic field. In principle, in a non-relativistic theory one can expect different coupling constants for the third and the fourth terms of the Hamiltonian. Experimental observations of the hydrogen spectrum lead to the factor $g-1$ in the third term and to the factor $g$ in the last term. Thus, the Hamiltonian explains the Zeeman effect and reproduces the fine structure of the energy levels of the hydrogen atom.
This Hamiltonian also follows from the non-relativistic limit of the Dirac equation in the Foldy-Wouthuysen representation \cite{dirac1958,foldy:1978}. From the classical point of view, models of spinning particles are based on a Lagrangian or Hamiltonian mechanics, both in the relativistic and non-relativistic regimes \cite{AAD2010}. In a covariant formulation, the spin part of the Hamiltonian describing the interaction between spin $S$ and the electromagnetic field reads \begin{eqnarray} H_{spin-em-cov}\sim\frac{eg}{2m^2c^2}{\rm {\bf S}}[{\bf p}\times{\bf E}]- \frac{eg}{2mc}{\rm {\bf B}}{\rm {\bf S}}\,. \label{FF0.2.1} \end{eqnarray} We emphasize that the expression (\ref{FF0.2.1}) follows from the analysis of all possible terms in covariant equations of motion and thus is model-independent \cite{Pomeranskii1998}. It can also be predicted from symmetry considerations at the level of the Hamiltonian. For instance, if we take the Frenkel spin-tensor $S^{\mu\nu}$ with the covariant condition $S^{\mu\nu}P_\nu=0$, the only Lorentz-invariant combination that could give the desired terms written in (\ref{FF0.2.1}) is $F_{\mu\nu}S^{\mu\nu}=2E^iS^{i0}+\epsilon^{ijk}S^{ij}B^k$ (see our notations in the Appendix). For the classical gyromagnetic ratio $g=2$, the classical spin-orbital interaction in (\ref{FF0.2.1}) differs by the famous and troublesome factor\footnote{This factor is often attributed to the Thomas precession \cite{thomas1926motion}. We will not touch this delicate and controversial issue \cite{frenkel:1926,stepanov2012}, since the covariant formalism automatically accounts for the Thomas precession \cite{weinbergGC}.} of $\frac12$ from its quantum counterpart in (\ref{pauli-hamiltonian}). It seems that quantization of $H_{spin-em-cov}$ will not reproduce the quantum behavior given by $\hat H_{spin-em}$. The issue about this difference was raised already in 1926 \cite{frenkel:1926} and still remains under discussion \cite{Pomeranskii1998}.
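To sketch how the structure of (\ref{FF0.2.1}) emerges from the invariant $F_{\mu\nu}S^{\mu\nu}$ (the precise overall signs depend on the metric and field conventions collected in the Appendix), note that the condition $S^{\mu\nu}P_\nu=0$ expresses the electric components of the spin-tensor through the magnetic ones, $S^{i0}=-S^{ij}P_j/P_0\approx S^{ij}p_j/(mc)$ for $P_0\sim -mc$. Writing $S^{ij}=\epsilon^{ijk}S^k$, we obtain
\begin{eqnarray*}
F_{\mu\nu}S^{\mu\nu}=2E^iS^{i0}+\epsilon^{ijk}S^{ij}B^k\approx
-\frac{2}{mc}\,{\rm {\bf S}}[{\bf p}\times{\bf E}]+2\,{\rm {\bf B}}{\rm {\bf S}},
\end{eqnarray*}
so this single invariant indeed produces both terms of (\ref{FF0.2.1}) with the common factor $g$, in contrast with the factors $g-1$ and $g$ appearing in (\ref{pauli-hamiltonian}).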
In principle, the Hamiltonian $H_{spin-em}$ can be obtained if one imposes a non-covariant supplementary condition on spin, $2S^{i0}P_0+S^{ij}P_j=0$, where $P_0\sim -mc$ in the leading approximation. At first glance, any covariant spin-supplementary condition \cite{frenkel:1926,Papapetrou:1951pa,pirani:1956,dixon:1964,tulczyjew:1959} would give $H_{spin-em-cov}$ and the discrepancy factor of $\frac12$. In the next section we study this issue in the framework of the vector model of a spinning particle \cite{deriglazov2014Monster}. We show that the vector model provides an answer on purely classical grounds, without appeal to the Dirac equation. In a few words, it can be described as follows. The relativistic vector model involves second-class constraints, which should be taken into account by passing from the Poisson to the Dirac bracket. The emergence of highly nonlinear classical brackets that accompany the relativistic Hamiltonian (\ref{FF0.2.1}) is a novel point, which apparently has not been taken into account in the literature. If we intend to quantize the model, it is desirable to find a set of variables with canonical brackets. The relativistic Hamiltonian (\ref{FF0.2.1}), when written in the canonical variables, just gives (\ref{pauli-hamiltonian}). \section{Vector model of spinning particle in the parametrization of physical time} To find the classical brackets that accompany $H_{cov}$ we need a systematically developed model of a spinning particle. Here we consider the vector model and briefly describe the construction of the Hamiltonian and the brackets in a stationary electromagnetic field. For a detailed discussion of the model, see \cite{deriglazov2014Monster}. The configuration space of the vector model of a spinning particle is parameterized by a point $x^\mu(\tau)$ of a world-line and a vector $\omega^\mu(\tau)$ attached to that point. The configuration-space variables are taken in an arbitrary parametrization $\tau$ of the world-line.
The conjugate momenta of the variables are denoted by $p^\mu$ and $\pi^\mu$, respectively. The Frenkel spin-tensor in the vector model is a composite quantity, $S^{\mu\nu}=2(\omega^\mu\pi^\nu-\omega^\nu\pi^\mu)$. The free Lagrangian can be written in a number of equivalent forms \cite{alwal2015_3,deriglazov2013quant}. To describe the spin-field interaction through the gyromagnetic ratio $g$, we use the Lagrangian with an auxiliary variable $\lambda(\tau)$ \begin{eqnarray}\label{m.1} S=\int d\tau\frac{1}{4\lambda}\left[\dot xN\dot x+D\omega ND\omega- \sqrt{\left[\dot xN\dot x+D\omega ND\omega\right]^2-4(\dot xND\omega)^2}\right]- \cr \frac{\lambda}{2}(m^2c^2-\frac{\alpha}{\omega^2})+\frac{e}{c}A\dot x, \qquad \qquad \qquad \end{eqnarray} where $D\omega^\mu\equiv\dot\omega^\mu-\lambda\frac{eg}{2c}F^{\mu\nu}\omega_\nu$. The auxiliary variable provides a homogeneous transformation law of $D\omega$ under reparametrizations, $D_{\tau'}\omega=\frac{d\tau}{d\tau'}D\omega$. The matrix $N_{\mu\nu}$ is the projector on the plane orthogonal to $\omega^\mu$, $N_{\mu\nu}= \eta_{\mu\nu}-\frac{\omega_\mu \omega_\nu}{\omega^2}$. The parameter $m$ is the mass, while $\alpha$ determines the value of spin. The value $\alpha=\frac{3\hbar^2}{4}$ is fixed by quantization conditions and corresponds to an elementary spin one-half particle. In the spinless limit, $\alpha=0$ and $\omega^\mu=0$, the functional (\ref{m.1}) reduces to the well-known Lagrangian of the relativistic particle, $\frac{1}{2\lambda}\dot x^2-\frac{\lambda}{2}m^2c^2+\frac{e}{c}A\dot x$. Frenkel considered the case $g=2$ and found approximate equations of motion neglecting quadratic and higher terms in spin, fields and field gradients. The equations of motion obtained from (\ref{m.1}) coincide with those of Frenkel in this approximation \cite{frenkel:1926}.
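As a consistency check (assuming a signature for which $\dot x^2<0$ on a timelike world-line), the auxiliary variable can be eliminated in the spinless limit: varying $L=\frac{1}{2\lambda}\dot x^2-\frac{\lambda}{2}m^2c^2+\frac{e}{c}A\dot x$ with respect to $\lambda$ and substituting the result back gives
\begin{eqnarray*}
\frac{\partial L}{\partial\lambda}=-\frac{\dot x^2}{2\lambda^2}-\frac{m^2c^2}{2}=0
\quad\Longrightarrow\quad \lambda=\frac{\sqrt{-\dot x^2}}{mc}, \qquad
L\Big|_{\lambda}=-mc\sqrt{-\dot x^\mu\dot x_\mu}+\frac{e}{c}A\dot x,
\end{eqnarray*}
the standard Lagrangian of a relativistic charged particle.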
To find the relativistic Hamiltonian in the physical-time parametrization\footnote{Which is necessary for the canonical quantization.}, we use the Hamiltonian action associated with (\ref{m.1}). This reads \cite{deriglazov2014Monster}, $\int d\tau ~ p\dot x+\pi\dot\omega-\lambda_iT_i$, where $\lambda_i$ are Lagrange multipliers associated with the primary constraints $T_i$. The variational problem provides both the equations of motion and the constraints of the vector model in an arbitrary parametrization. Using the reparametrization invariance of the functional, we take the physical time as the evolution parameter, $\tau=\frac{x^0}{c}=t$; then the functional reads \begin{eqnarray}\label{ch09:eqn9.7.1} S_H=\int dt ~ c\tilde{\cal P}_0-eA^0+p_i\dot x^i+\pi_\mu\dot\omega^\mu- \qquad \qquad \qquad \cr \left[\frac{\lambda}{2}\left(-\tilde{\cal P}_0^2+{\cal P}_i^2-\frac{eg}{4c}(FS)+m^2c^2+\pi^2-\frac{\alpha}{\omega^2}\right)+ \lambda_2\omega\pi+\lambda_3{\cal P}\omega+\lambda_4{\cal P}\pi\right], \end{eqnarray} where $\tilde{\cal P}_0=p_0-\frac{e}{c}A_0$ and ${\cal P}^i=p^i-\frac{e}{c}A^i$ is the $U(1)$\,-invariant canonical momentum. We can treat the term associated with $\lambda$ as a kinematic constraint of the variational problem. Following the known prescription of classical mechanics, we solve the constraint, \begin{eqnarray}\label{ch09:eqn9.7.2} \tilde{\cal P}_0=-\tilde{\cal P}^0=-\sqrt{{\cal P}_i^2-\frac{eg}{4c}(FS)+m^2c^2+\pi^2-\frac{\alpha}{\omega^2}}, \end{eqnarray} and substitute the result back into Eq. (\ref{ch09:eqn9.7.1}); this gives an equivalent form of the functional \begin{eqnarray}\label{ch09:eqn9.7.3} S_H=\int dt ~ p_i\dot x^i+\pi_\mu\dot\omega^\mu- \left[c\sqrt{{\cal P}_i^2-\frac{eg}{4c}(FS)+ m^2c^2+\pi^2-\frac{\alpha}{\omega^2}}+eA^0+ \right. \cr \left. \lambda_2\omega_\mu\pi^\mu+\lambda_3{\cal P}_\mu\omega^\mu+\lambda_4{\cal P}_\mu\pi^\mu\right], \qquad \qquad \qquad \end{eqnarray} where the substitution (\ref{ch09:eqn9.7.2}) is implied in the last two terms as well.
The sign in front of the square root (\ref{ch09:eqn9.7.2}) was chosen to reproduce the correct spinless limit, $L=-mc\sqrt{-\dot x^\mu\dot x_\mu}$. The expression in square brackets is the Hamiltonian. The variational problem implies the first-class constraints $T_2\equiv\omega\pi=0$, $T_5\equiv\pi^2-\frac{\alpha}{\omega^2}=0$. They determine the gauge symmetries and physical observables of the theory. The quantities $x^i(t)$, ${\cal P}^i(t)$ and $S^{\mu\nu}(t)$ have vanishing Poisson brackets with the constraints and hence are candidates for observables. The set \begin{eqnarray}\label{ch09:eqn9.7.4} T_3=-{\cal P}^0\omega^0+{\cal P}^i\omega^i=0, \qquad T_4=-{\cal P}^0\pi^0+{\cal P}^i\pi^i=0, \end{eqnarray} where \begin{eqnarray}\label{ch09:eqn9.7.5} {\cal P}^0\equiv\sqrt{{\cal P}_i^2-\frac{eg}{4c}(FS)+m^2c^2}, \end{eqnarray} represents a pair of second-class constraints. In all expressions below the symbol ${\cal P}^0$ represents the function (\ref{ch09:eqn9.7.5}). The constraints imply the spin-supplementary condition \begin{eqnarray}\label{ssc} S^{\mu\nu}{\cal P}_\nu=0, \end{eqnarray} as well as the value-of-spin condition $S^{\mu\nu}S_{\mu\nu}=8\alpha$. To represent the Hamiltonian in a more familiar form, we take the second-class constraints into account by passing from the Poisson to the Dirac bracket. As the constraints involve the conjugate momenta of the position ${\bf x}$, this leads to nonvanishing brackets for the position variables. As a result, the position space is endowed, in a natural way, with a noncommutative structure which originates from taking the spin degrees of freedom into account. For convenience, the exact form of the Dirac brackets of our observables is presented in the Appendix. Since the Dirac bracket of any quantity with the second-class constraints vanishes, we can omit them from the Hamiltonian. The first-class constraints can be omitted as well, as they do not contribute to the equations of motion for the physical variables.
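The mechanism that allows the second-class constraints to be dropped from the Hamiltonian can be illustrated on a toy phase space. The following sketch is our illustration only: sympy, the toy constraint pair $T_3=q_1$, $T_4=p_1$ (for which $\{T_3,T_4\}=1$), and the sample Hamiltonian are all assumptions standing in for the actual constraints of the model. It implements the standard two-constraint Dirac bracket and checks that it annihilates the constraints:

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords, momenta = [q1, q2], [p1, p2]

def pb(A, B):
    """Canonical Poisson bracket on the toy phase space (q_i, p_i)."""
    return sum(sp.diff(A, q)*sp.diff(B, p) - sp.diff(A, p)*sp.diff(B, q)
               for q, p in zip(coords, momenta))

T3, T4 = q1, p1                   # toy second-class pair with {T3, T4} = 1

def dirac(A, B):
    """Dirac bracket built from the pair (T3, T4)."""
    return (pb(A, B)
            - pb(A, T3)*pb(T4, B)/pb(T4, T3)
            - pb(A, T4)*pb(T3, B)/pb(T3, T4))

H = (p1**2 + p2**2)/2 + q1**2*q2  # an arbitrary sample Hamiltonian

# any quantity has vanishing Dirac bracket with the constraints,
# so T3 and T4 may be set to zero inside the Hamiltonian
assert sp.simplify(dirac(T3, H)) == 0
assert sp.simplify(dirac(T4, H)) == 0
# canonical pairs untouched by the constraints keep their bracket
assert sp.simplify(dirac(q2, p2)) == 1
```

The same cancellation happens, term by term, for the pair $(T_3, T_4)$ of the vector model; only the explicit form of the resulting brackets is more involved.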
As a result, we obtain the relativistic Hamiltonian \begin{eqnarray}\label{pht.16} H_{cov}=c\sqrt{\vec{\cal P}^2-\frac{eg}{4c}F_{\mu\nu}S^{\mu\nu}+m^2c^2}+eA^0. \end{eqnarray} Note that the Dirac brackets (\ref{pht14.2}), (\ref{pht15.1}) encode most of the spin-field interaction; for this reason we have arrived at a rather simple form for the physical Hamiltonian. Equations of motion follow from this Hamiltonian with the use of the Dirac bracket\footnote{We emphasize that the use of canonical brackets will lead to different equations. In our opinion, this turns out to be the reason for the debates around the controversial results obtained by different groups, see the discussion in \cite{Pomeranskii1998}.}: $\frac{d z}{dt}=\{z, H_{cov}\}_D$. \section{First Relativistic Corrections and Fine Structure of Hydrogen Spectrum} \label{ch09:sec9.7} To quantize our relativistic theory we need to find a quantum realization of the highly nonlinear classical brackets (\ref{pht14.2}). They remain noncanonical even in the absence of interaction. For instance, the first equation from (\ref{pht14.2}) in the free theory reads $\{x^i, x^j\}=\frac{1}{2mcp^0}S^{ij}$. It is worth noting that the nonrelativistic spinning particle \cite{AAD2010,DPM1} implies canonical brackets, so the deformation arises as a relativistic correction induced by the spin of the particle. Technically, the deformation arises from the fact that the constraints (\ref{ch09:eqn9.7.4}), used to construct the Dirac bracket, mix up the space-time and inner-spin coordinates. Concerning the quantum realization of the brackets in the free theory and the relativistic covariance of the resulting quantum mechanics, see \cite{deriglazov2013quant}. In an interacting theory, the explicit form of the brackets is not known. Therefore we quantize the interacting theory perturbatively, considering $c^{-1}$ as a small parameter and expanding all quantities in power series.
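The first step of this expansion, applied to the free part of the square root in (\ref{pht.16}), can be checked symbolically. A minimal sympy sketch (our verification, not part of the original derivation; we substitute $u=1/c$ as the small parameter) confirms $c\sqrt{{\boldsymbol{\cal P}}^2+m^2c^2}=mc^2+\frac{{\boldsymbol{\cal P}}^2}{2m}-\frac{{\boldsymbol{\cal P}}^4}{8m^3c^2}+O(c^{-4})$:

```python
import sympy as sp

P, m, u = sp.symbols('P m u', positive=True)   # u = 1/c is the small parameter

# c*sqrt(P^2 + m^2 c^2) rewritten in u: sqrt(P^2 u^2 + m^2)/u^2
root = sp.series(sp.sqrt(P**2*u**2 + m**2), u, 0, 6).removeO()
series = sp.expand(root / u**2)

# mc^2 + P^2/(2m) - P^4/(8 m^3 c^2), with c = 1/u
expected = m/u**2 + P**2/(2*m) - P**4*u**2/(8*m**3)
assert sp.simplify(series - expected) == 0
```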
Let us consider the approximation $o(c^{-2})$, neglecting $c^{-3}$ and higher order terms. For the Hamiltonian (\ref{pht.16}) we have $H_{cov}\approx mc^2+\frac{\boldsymbol{\cal P}^2}{2m}-\frac{\boldsymbol{\cal P}^4}{8m^3c^2}+eA^0-\frac{eg}{8mc}(FS)$. Since the last term is of order $(mc)^{-1}$, resolving the constraint $S^{\mu\nu}{\cal P}_\nu=0$ with respect to $S^{i0}$ we can approximate ${\cal P}^0=mc$, then $S^{i0}=\frac{1}{mc}S^{ij}{\cal P}^j$. Using this expression we obtain \begin{eqnarray}\label{pht.16.0} H_{cov}= mc^2+\frac{\boldsymbol{\cal P}^2}{2m}-\frac{\boldsymbol{\cal P}^4}{8m^3c^2}+eA^0+ \frac{eg}{2mc} \left[\frac{1}{mc}{\rm {\bf S}}[{\boldsymbol{\cal P}}\times{\bf E}]- {\rm {\bf B}}{\rm {\bf S}}\right]+o\left(\frac{1}{c^2}\right)\cr =H_{charge}+H_{spin-em-cov}+o\left(\frac{1}{c^2}\right). \qquad \qquad \end{eqnarray} Due to the second and fourth terms, we need to know the operators $\hat{\cal P}^i$ and $\hat x^i$ up to order $c^{-2}$, while $\hat S^{ij}\sim\hat{\bf S}$ should be found up to order $c^{-1}$. With this approximation, the commutators $[\hat x, \hat x]$, $[\hat x, \hat{\cal P}]$, and $[\hat{\cal P}, \hat{\cal P}]$ can be computed up to order $c^{-2}$, while the remaining commutators can be written only up to $c^{-1}$. Therefore, we expand the right-hand sides of the Dirac brackets (\ref{pht14.2}) in this approximation \begin{eqnarray}\label{16.1} \{x^i, x^j\} & = & \frac{1}{2m^2c^2}S^{ij}+o\left(\frac{1}{c^2}\right), \nonumber \\ \label{16.2} \{x^i, {\cal P}^j\} & = & \delta^{ij}+o\left(\frac{1}{c^2}\right), \nonumber \\ \label{16.3} \{x^i, S^{jk}\} &= & 0+o\left(\frac{1}{c}\right), \\ \label{16.4} \{{\cal P}^i, {\cal P}^j\} &= &\frac{e}{c} F^{ij}+o\left(\frac{1}{c^3}\right), \nonumber \\ \label{16.5} \{{\cal P}^i, S^{jk}\} &= & o\left(\frac{1}{c^2}\right), \nonumber \\ \label{16.6} \{S^{ij}, S^{kl}\} &= & 2(\delta^{ik}S^{jl}-\delta^{il}S^{jk}-\delta^{jk}S^{il}+\delta^{jl}S^{ik})+o\left(\frac{1}{c}\right).
\nonumber \end{eqnarray} Only the first bracket acquires a non-standard form in the leading approximation. An operator realization of these brackets on the space of two-component Weyl spinors reads \begin{eqnarray} \hat{\cal P}_i &=& -i\hbar\frac{\partial}{\partial x^i}-\frac{e}{c}{A}_i({\bf x}), \label{16.7} \\ \hat{x}_i & =& x_i-\frac{\hbar}{4m^2c^2}\epsilon_{ijk}\hat{\cal P}^j\sigma^k, \label{16.8} \\ \hat{S}^{ij}& = &\hbar\epsilon_{ijk}\sigma^k, \label{16.9} \qquad \end{eqnarray} then \begin{eqnarray} \hat S^i & = & \frac14\epsilon_{ijk}\hat S^{jk}=\frac{\hbar}{2}\sigma^i, \label{pht.16.9} \\ \hat{S}^{i0} & =& \frac{\hbar}{mc}\epsilon_{ijk}\hat{\cal P}^j\sigma^k. \label{pht.16.10} \end{eqnarray} By construction of the Dirac bracket, the operator $\hat S^{i0}$ automatically obeys the desired commutators up to order $c^{-1}$. The operator $\hat x_i$ coincides with the positive-energy part of the Pryce (d) operator in the Foldy-Wouthuysen representation, see \cite{deriglazov2013quant} for details. We substitute these operators into the classical Hamiltonian (\ref{pht.16.0}). Expanding $A^0(\hat{\bf x})$ in a power series, we obtain an additional contribution of order $c^{-2}$ to the potential due to the noncommutativity of the position operator \begin{eqnarray}\label{17} eA^0\left(x_i-(2mc)^{-2}\epsilon_{ijk}\hat{\cal P}^j\hat S^k\right) \approx eA^0({\bf x})-\frac{e}{2m^2c^2}\hat{\bf S}[\hat{\boldsymbol{\cal P}}\times{\bf E}]. \end{eqnarray} The contribution has the same structure as the fifth term in the Hamiltonian (\ref{pht.16.0}). As a result, the quantum Hamiltonian up to order $c^{-2}$ reads \begin{equation}\label{18} \hat H_{cov}= mc^2+\frac{\hat{\boldsymbol{\cal P}}^2}{2m}-\frac{\hat{\boldsymbol{\cal P}}^4}{8m^3c^2}+eA^0+\frac{e(g-1)}{2m^2c^2} \hat{\bf S}[\hat{\boldsymbol{\cal P}}\times{\bf E}]-\frac{eg}{2mc}{\bf B}\hat{\bf S}. \end{equation} The first three terms correspond to an increase of the relativistic mass. The last two terms coincide with those in Eq.
(\ref{pauli-hamiltonian}). We could carry out the same reasoning in the classical theory, by asking for new variables $z'$ that obey canonical brackets as a consequence of Eq. (\ref{16.3}). In the desired approximation they are ${\cal P}^i={\cal P}'^i-\frac{e}{c}A^i(x'^j)$, $x^i=x'^i-\frac{1}{4m^2c^2}S'^{ij}{\cal P}'^j$ and $S^{ij}=S'^{ij}$. As a result, we have shown that the noncommutativity of the electron's position at the Compton scale is responsible for the fine structure of the hydrogen atom. \section{Conclusions} The vector model of the relativistic spinning particle gives an example of a noncommutative system. In the leading approximation, the noncommutative geometry induced by spin affects only the brackets of the position variables, see the first equation from (\ref{16.1}). The ``parameter of noncommutativity'' is proportional to the spin-tensor. As a consequence, canonical quantization of the model in the leading approximation gives the Pauli Hamiltonian. Our calculations show that\\ 1) the classical interaction of spin with an electromagnetic field can be described by the manifestly covariant term $S^{\mu\nu}F_{\mu\nu}$, accompanied by the covariant spin-supplementary condition $S^{\mu\nu}P_\nu=0$;\\ 2) the phase space is endowed with a non-trivial symplectic structure (Dirac brackets); in particular, the position variables become noncommutative due to non-vanishing Dirac brackets;\\ 3) the Thomas precession automatically appears in the equations of motion \cite{deriglazov2014Monster} due to the non-trivial Dirac bracket, without modification of the Hamiltonian;\\ 4) quantization of the vector model for the free electron leads to one-particle relativistic quantum mechanics with positive-energy states.
The free Hamiltonian acts in the space of two-component spinors and reads $\hat H_{phys}=c\sqrt{\hat{\bf p}{\,}^2+m^2c^2}$, and the position operator of the free electron is of Pryce d-type \cite{pryce1948mass,deriglazov2013quant};\\ 5) quantization of the model in the case of a stationary electromagnetic background formally leads to the Hamiltonian $${\hat H_{cov}(F)= c\sqrt{(\hat{\bf p}{\,}-\frac{e}{c}{\bf A}(\hat{\bf x}))^2 -\frac{eg}{4c}\hat{S}^{\mu\nu}F_{\mu\nu}(\hat{\bf x})+m^2c^2}+eA^0(\hat{\bf x})}\,,$$ which, up to order $o(c^{-2})$, coincides with the positive-energy part of the Dirac Hamiltonian in the Foldy-Wouthuysen representation. It would be interesting to compare higher-order terms;\\ 6) the noncommutativity of the position operator results in the Thomas $1/2$-correction of the spin-orbital interaction coming from the $eA^0(\hat{\bf x})$ term\footnote{Similar corrections were obtained in \cite{kupriyanov2013hydrogen-non-commutativity}. However, they appear from the noncommutativity introduced in the Dirac representation, therefore they give an additional contribution to the correct spectrum, as if the noncommutativity acted twice.}. In the considered approximation our Hamiltonian $\hat H_{phys}(F)$ coincides with the Pauli Hamiltonian for the case of stationary fields. Therefore, within this approximation there is no difference between the standard and noncommutative approaches to the spin-orbital interaction except a conceptual one. However, in the case of non-stationary fields the classical Hamiltonian changes its form. Further studies of time-dependent electromagnetic fields and next-order corrections may give suggestions for the experimental searches of effects produced by the noncommutativity. {\bf Acknowledgments} The work of AAD has been supported by the Brazilian foundations CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnol\'{o}gico - Brasil) and FAPEMIG (Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de Minas Gerais - Brasil).
The work of AMPM is supported by FAPEMIG (Demanda Universal 2015). \section{Appendix} {\bf Notation.} Our variables are taken in an arbitrary parametrization $\tau$, then $\dot x^\mu=\frac{dx^\mu}{d\tau}$. The square brackets mean antisymmetrization, $\omega^{[\mu}\pi^{\nu]}=\omega^\mu\pi^\nu-\omega^\nu\pi^\mu$. For the four-dimensional quantities we suppress the contracted indexes and use the notation $\dot x^\mu N_{\mu\nu}\dot x^\nu=\dot xN\dot x$, $N^\mu{}_\nu\dot x^\nu=(N\dot x)^\mu$, $\omega^2=\eta_{\mu\nu}\omega^\mu\omega^\nu$, $\eta_{\mu\nu}=(-, +, +, +)$, $\mu=(0, i)$, $i=1, 2, 3$. The notation for the scalar functions constructed from second-rank tensors is $FS= F_{\mu\nu}S^{\mu\nu}$, $S^2=S^{\mu\nu}S_{\mu\nu}$. Electromagnetic field: \begin{eqnarray}\label{L.0} F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu=(F_{0i}=-E_i, ~ F_{ij}= \epsilon_{ijk}B_k), \cr E_i=-\frac{1}{c}\partial_tA_i+\partial_i A_0, \quad B_i=\frac12\epsilon_{ijk}F_{jk}=\epsilon_{ijk}\partial_j A_k. \nonumber \end{eqnarray} Spin-tensor: \begin{eqnarray}\label{L.0.1} S^{\mu\nu}=2(\omega^\mu\pi^\nu-\omega^\nu\pi^\mu)=(S^{i0}=D^i, ~ S_{ij}=2\epsilon_{ijk}S_k), \nonumber \end{eqnarray} then $S_i=\epsilon_{ijk}\omega_j\pi_k=\frac{1}{4}\epsilon_{ijk}S_{jk}$. Here $S_i$ is the three-dimensional Frenkel spin-vector and $D_i$ is the electric dipole moment. {\bf Dirac bracket.} The Dirac bracket for the constraints (\ref{ch09:eqn9.7.4}) reads \begin{eqnarray}\label{pht.13} \{A, B\}_{D}=\{A, B\}-\{A, T_3\}\{T_4, T_3\}^{-1}\{T_4, B\}- \{A, T_4\}\{T_3, T_4\}^{-1}\{T_3, B\}. \qquad \qquad \nonumber \end{eqnarray} The complete list of brackets computed in an arbitrary parametrization can be found in \cite{DPM3}. Here we present the brackets of the observables $x^i(t)$, ${\cal P}^i(t)$ and $S^{\mu\nu}(t)$. To compute them, we use the auxiliary Poisson brackets shown in Table~\ref{tabular:poissonbr2}.
\begin{table} \caption{Auxiliary Poisson brackets} \label{tabular:poissonbr2} \begin{center} \begin{tabular}{c|c|c|c} ${}$ & $\{{\cal P}^0, *\}$ & $\{T_3, *\}$ & $\{T_4, *\}$ \\ \hline \hline $x^i$ & $-\frac{{\cal P}^i}{{\cal P}^0}$ & $-\omega^i+\frac{\omega^0{\cal P}^i}{{\cal P}^0}$ & $-\pi^i+\frac{\pi^0{\cal P}^i}{{\cal P}^0}$ \\ \hline ${\cal P}^i$ & $-\frac{e}{{\cal P}^0c}[(F\vec {\cal P})^i+\frac{g}{8}\partial^i(SF)]$ & $\frac{e\omega^0}{{\cal P}^0c}[(F\vec {\cal P})^i+\frac{g}{8}\partial^i(SF)]-$ & $\frac{e\pi^0}{{\cal P}^0c}[(F\vec {\cal P})^i+\frac{g}{8}\partial^i(SF)]-$ \\ & & $\frac{e}{c}(F\vec\omega)^i$ & $\frac{e}{c}(F\vec\pi)^i$ \\ \hline ${\cal P}^0$ & $0$ & $\frac{e}{2{\cal P}^0c}[(g-2)(\vec{\cal P}F\vec\omega)+$ & $\frac{e}{2{\cal P}^0c}[(g-2)(\vec{\cal P} F\vec\pi)+$ \\ & & $\frac{g}{8}\omega^i\partial^i(SF)-\frac{g}{2} F^{0i}{\cal P}^{[0}\omega^{i]}]$ & $\frac{g}{8}\pi^i\partial^i(SF)-\frac{g}{2} F^{0i}{\cal P}^{[0}\pi^{i]}]$ \\ \hline $\omega^\mu$ & $-\frac{eg}{2{\cal P}^0c}(F\omega)^\mu$ & $\frac{\omega^0eg}{2{\cal P}^0c}(F\omega)^\mu$ & $-{\cal P}^\mu+\frac{\pi^0eg}{2{\cal P}^0c}(F\omega)^\mu$ \\ \hline $\pi^\mu$ & $-\frac{eg}{2{\cal P}^0c}(F\pi)^\mu$ & ${\cal P}^\mu+\frac{\omega^0eg}{2{\cal P}^0c}(F\pi)^\mu$ & $\frac{\pi^0eg}{2{\cal P}^0c}(F\pi)^\mu$ \\ \hline $J^{\mu\nu}$ & $-\frac{eg}{2{\cal P}^0c}(FS)^{[\mu\nu]}$ & $\frac{\omega^0eg}{2{\cal P}^0c}(FS)^{[\mu\nu]}-2{\cal P}^{[\mu}\omega^{\nu]}$ & $\frac{\pi^0eg}{2{\cal P}^0c}(FS)^{[\mu\nu]}-2{\cal P}^{[\mu}\pi^{\nu]}$ \\ \hline \end{tabular} \end{center} \end{table} We will use the notation \begin{eqnarray}\label{pht15.1} u^0 &=& {\cal P}^0-\frac{(g-2)a}{2}(SF{\cal P})^0+\frac{ga}{8}S^{0\mu}\partial_\mu(FS), \qquad a=\frac{-2e}{4m^2c^3-e(g+1)(SF)}, \cr \triangle^{\mu\nu} &=& -\frac{2ca}{eu^0}{\cal P}^{(0}S^{\mu\nu)}, \qquad {\cal P}^{(0}S^{\mu\nu)}={\cal P}^0S^{\mu\nu}+{\cal P}^\mu S^{\nu 0}+{\cal P}^\nu S^{0\mu}, \cr K^{\mu\nu}&=& -\frac{gca}{4eu^0}S^{0\mu}\partial^\nu(SF), \qquad L^{\mu\nu\alpha}
=-\frac{ga}{u^0}(FS)^{[\mu\nu]}S^{0\alpha}, \cr g^{\mu\nu} &=& \eta^{\mu\nu}-\frac{2ca{\cal P}^0}{eu^0}{\cal P}^\mu{\cal P}^\nu. \qquad \qquad \qquad \end{eqnarray} Using the table, we obtain $\{T_3, T_4\}=\frac{eu^0}{2ca{\cal P}^0}$. Then \begin{eqnarray}\label{pht14} \{x^i, x^j\}_{D}=\frac12\triangle^{ij}, \qquad \{x^i, {\cal P}^j\}_{D}=\delta^{ij}-\frac{e}{2c}\left[\triangle^{ik}F^{kj}-K^{ij}\right], \nonumber \end{eqnarray} \begin{eqnarray}\label{pht14.1} \{{\cal P}^i, {\cal P}^j\}_{D}=\frac{e}{c}F^{ij}-\frac{e^2}{2c^2}\left[F^{ik}\triangle^{kn}F^{nj}-F^{[ik}K^{kj]}\right], \nonumber \end{eqnarray} \begin{eqnarray}\label{pht14.2} \{S^{\mu\nu}, S^{\alpha\beta}\}_{D}= 2(g^{\mu\alpha} S^{\nu\beta}-g^{\mu\beta} S^{\nu\alpha}-g^{\nu\alpha} S^{\mu\beta} +g^{\nu\beta} S^{\mu\alpha})+L^{\mu\nu[\alpha}{\cal P}^{\beta]}, \qquad \quad \end{eqnarray} \begin{eqnarray}\label{pht14.3} \{S^{\mu\nu}, x^j\}_{D}={\cal P}^{[\mu}\triangle^{\nu]j}+\frac12 L^{\mu\nu j}, \nonumber \end{eqnarray} \begin{eqnarray}\label{pht14.4} \{S^{\mu\nu}, {\cal P}^j\}_{D}=\frac{e}{c}\left[-{\cal P}^\mu(\triangle^{\nu k}F^{kj}-K^{\nu j})-(\mu\leftrightarrow\nu)+\frac12 L^{\mu\nu k}F^{kj}\right]. \nonumber \end{eqnarray}
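A quick consistency check on the spin sector: in the free limit ($F_{\mu\nu}=0$, $g^{\mu\nu}\to\eta^{\mu\nu}$ up to $c^{-2}$ corrections) the spatial part of the spin brackets reduces to (twice) the su(2) algebra, and the realization of the spin operator through Pauli matrices, $\hat S^i=\frac{\hbar}{2}\sigma^i$, reproduces it. The numpy sketch below is our own verification (units with $\hbar=1$ are an assumption):

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (our choice for the check)

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# spin operator realization S^i = (hbar/2) sigma^i on Weyl spinors
S = [hbar / 2 * s for s in sigma]

def comm(A, B):
    return A @ B - B @ A

# totally antisymmetric epsilon symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# su(2) algebra: [S^i, S^j] = i hbar eps_{ijk} S^k
for i in range(3):
    for j in range(3):
        rhs = 1j * hbar * sum(eps[i, j, k] * S[k] for k in range(3))
        assert np.allclose(comm(S[i], S[j]), rhs)
```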
\section{Method} \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{figures/pipeline3.pdf} \end{center} \caption{The pipeline of SelfRecon. We simultaneously maintain the explicit and implicit geometry representations and use forward deformation fields to transform the canonical geometry to the current frame space. For the explicit representation, we mainly use a differentiable mask loss to recover the overall shape. For the implicit representation, a sampled neural rendering loss and predicted normals are used to refine geometry details. Finally, a consistency loss is used to keep both geometric representations matched.} \label{fig:pipeline} \end{figure*} SelfRecon aims to reconstruct a high-fidelity and space-time coherent clothed body shape from a monocular video depicting a self-rotating person; the whole algorithm pipeline is given in Fig.~\ref{fig:pipeline}. Both explicit and implicit geometric representations are utilized to achieve this target. Specifically, we utilize the forward deformation field to generate space-time coherent explicit meshes. The deformation fields are decomposed into two parts, where the first represents each frame's non-rigid deformation with a learnable MLP, and the second is the skinning deformation field. Differentiable mask, regularization, and smoothness losses are adopted to control the shape of the explicit meshes. To update the shape of the implicit neural representation, we use non-rigid ray-casting (Sec.~\ref{sec:nonrigid_raycast}) to find the differentiable intersection points of rays and the deformed implicit surface. Then, the implicit rendering network (Sec.~\ref{sec:implicit_render}) utilizes the rays' color information to improve the geometry. Unless otherwise indicated, we also utilize the predicted normal map~\cite{saito2020pifuhd} to refine the details. Finally, a consistency loss is designed to match both representations.
For a self-rotating video with $N$ frames, we adopt the method described in VideoAvatar~\cite{alldieck2018video} to generate the initial shape parameter $\boldsymbol{\beta}$ and per-frame pose parameters $\{\boldsymbol{\theta}_i|i\in\{1,...,N\}\}$ of the SMPL model. We pre-define a template pose and generate the initial canonical SMPL body mesh $\mathcal{B}$ with $\boldsymbol{\beta}$ and this pose parameter. Our implicit and explicit representations are both initialized with $\mathcal{B}$. In the following, we present the algorithm details of each component. \subsection{Canonical Implicit SDF} The related work VideoAvatar~\cite{alldieck2018video} adopts the SMPL+D representation for the clothed human body. However, SMPL+D has limited resolution and representation ability, and thus it cannot represent high-fidelity geometry and various clothing types. In this work, we represent the canonical template shape $\mathcal{S}_{\eta}$ as the zero isosurface of an SDF, which is expressed by an MLP $f$ with learnable weights $\eta$: \begin{equation} \mathcal{S}_{\eta} = \{\mathbf{p}\in{\mathbb{R}^3}|f(\mathbf{p};\eta) = 0\}. \end{equation} To avoid unexpected solutions, we use IGR~\cite{gropp2020implicit} to initialize $\mathcal{S}_{\eta}$ to the initial canonical body $\mathcal{B}$. \subsection{Deformation Fields} \label{sec:deform} Following prior works~\cite{huang2020arch,he2021arch++}, we utilize a skeleton skinning transformation to control the human body's large-scale movements due to its articulated structure. However, garments' non-rigid deformations cannot be fully represented by the skinning transformation. Therefore, we additionally model non-rigid deformation with another MLP. \textbf{Non-rigid Deformation Field.} We use an MLP $d$ with learnable weights $\phi$ to represent the non-rigid deformation field.
For the $i$-th frame, $d$ takes its optimizable conditional variable $\mathbf{h}_i$ as input and deforms points in the canonical space with the $i$-th frame's specific non-rigid deformation. \textbf{Skinning Transformation Field.} Given the $i$-th frame's pose parameter $\boldsymbol{\theta}_i$, we have to define a canonical-to-current space skinning transformation field $\mathcal{W}$. As the initial template body $\mathcal{B}$ has well-defined skinning weights related to its SMPL skeleton, an intuitive idea is to propagate the skinning weights of $\mathcal{B}$'s vertices to the whole canonical space to define the skinning transformation field. Specifically, we first pre-define a sparse grid containing $\mathcal{B}$ in the canonical space. For each grid point, we find its nearest $30$ vertices on $\mathcal{B}$ and average their skinning weights with inverse distance weighting (IDW) as its initial weight. Then, we smooth all grid points' weights with Laplacian smoothing. Finally, given a point in the canonical space, we compute its skinning weights by trilinear interpolation in the grid. During our optimization, the grid is pre-computed and fixed. This forward deformation design avoids the trouble of inverse skinning transformations~\cite{weng2020vid2actor,deng2020nasa,jeruzalski2020nilbs,chen2021snarf} and provides a regularizing constraint for articulated human movement. Finally, by composing $d$ and $\mathcal{W}$, we get the final deformation field $\mathcal{D} = \mathcal{W}(d(\cdot))$. It takes the $i$-th frame's conditional variable $\mathbf{h}_i$ and SMPL pose parameter $\boldsymbol{\theta}_i$ as input, and transforms canonical points to the $i$-th frame space. For brevity, we use $\mathcal{D}_i$ to denote the $i$-th frame's deformation field, $\mathcal{S}_i$ for the $i$-th frame's zero isosurface $\mathcal{D}_i(\mathcal{S}_{\eta})$ and $\psi_i$ for $\mathcal{D}_i$'s optimizable parameters $\{\phi,\mathbf{h}_i,\boldsymbol{\theta}_i\}$.
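The IDW averaging step of the grid construction can be sketched in a few lines. The numpy snippet below is a simplified stand-in (synthetic vertices, two bones, $k=10$ neighbors instead of $30$, and no Laplacian smoothing pass); it checks that the resulting grid weights remain a convex combination:

```python
import numpy as np

def grid_skinning_weights(grid_pts, verts, vert_weights, k=10, eps=1e-8):
    """IDW average of the k nearest vertices' skinning weights (the text
    uses k = 30 and adds a Laplacian smoothing pass, both omitted here)."""
    d = np.linalg.norm(grid_pts[:, None, :] - verts[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]                    # k nearest vertices
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + eps)  # inverse distances
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum('gk,gkb->gb', w, vert_weights[idx])

rng = np.random.default_rng(0)
verts = rng.uniform(-1, 1, size=(200, 3))      # synthetic body vertices
skin = np.zeros((200, 2))                      # one-hot weights over 2 bones
skin[verts[:, 2] > 0, 0] = 1.0
skin[verts[:, 2] <= 0, 1] = 1.0

grid = rng.uniform(-1, 1, size=(50, 3))
gw = grid_skinning_weights(grid, verts, skin)

assert np.allclose(gw.sum(axis=1), 1.0)        # still a convex combination
assert gw.min() >= 0.0
```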
\subsection{Differentiable Non-rigid Ray-casting} \label{sec:nonrigid_raycast} For rigid scenes, the sphere tracing algorithm~\cite{hart1996sphere,jiang2020sdfdiff,yariv2020multiview} is widely used to find the intersection point of a ray and the SDF. However, it is not feasible here due to the deformation fields. Inspired by the method in~\cite{seyb2019non}, which proposes a strategy to render a deformed SDF, we utilize the explicit mesh to help find the intersection point of a ray and $\mathcal{S}_i$. As shown in Fig.~\ref{fig:nonrigid_raycast}, we extract an explicit template mesh $\mathbf{T}$ from the canonical surface $\mathcal{S}_{\eta}$. With the deformation $\mathcal{D}_i$, we can get the $i$-th frame's mesh $\mathbf{T}_i$. Theoretically, $\mathbf{T}_i$ is a piecewise linear approximation of $\mathcal{S}_i$. Therefore, for a ray emitted from the camera position $\mathbf{c}$ along the direction $\mathbf{v}$, its first intersection $\hat{\mathbf{x}}$ with $\mathbf{T}_i$ is a good approximation of its intersection with $\mathcal{S}_i$. Moreover, with the intersected triangle on $\mathbf{T}_i$, we can find $\hat{\mathbf{x}}$'s corresponding point $\hat{\mathbf{p}}$ on the template $\mathbf{T}$ by consistent barycentric weights. Obviously, $\hat{\mathbf{p}}$ is close to $\mathcal{S}_{\eta}$ and is a good approximation of $\mathcal{D}_i^{-1}(\hat{\mathbf{x}})$. With $\hat{\mathbf{p}}$ as a good initialization, we can find a point $\mathbf{p}$ on $\mathcal{S}_{\eta}$ whose deformed point $\mathbf{x}=\mathcal{D}_i(\mathbf{p})$ is exactly the intersection point of the ray and $\mathcal{S}_i$.
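The barycentric pull-back just described can be illustrated on a single triangle. In the toy numpy sketch below (our illustration; the deformation is taken to be affine, for which the pull-back is exact rather than approximate), applying the intersection's barycentric weights to the canonical triangle recovers the preimage:

```python
import numpy as np

rng = np.random.default_rng(3)

T_canon = rng.normal(size=(3, 3))              # canonical triangle (rows = verts)
A = np.eye(3) + 0.2 * rng.normal(size=(3, 3))  # toy affine deformation D_i
t = rng.normal(size=3)
T_def = T_canon @ A.T + t                      # deformed triangle on T_i

w = np.array([0.2, 0.3, 0.5])                  # barycentric weights of x_hat
x_hat = w @ T_def                              # ray/mesh intersection on T_i
p_hat = w @ T_canon                            # same weights on the template T

# for an affine deformation the pull-back is exact: D_i(p_hat) == x_hat
assert np.allclose(p_hat @ A.T + t, x_hat)
```

For the learned, non-affine $\mathcal{D}_i$, the pulled-back point is only an approximation, which is why it serves as an initialization for the refinement that follows.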
Specifically, we solve for $\mathbf{p}$ by: \begin{equation} \label{equ:nonrigid_raycast} \mathbf{p}=\mathop{\arg\min}_{\hat{\mathbf{p}}} \ \omega|f(\hat{\mathbf{p}})|+ \frac{\|(\mathcal{D}_i(\hat{\mathbf{p}})-\mathbf{c})\times \mathbf{v}\|_2}{\|\mathcal{D}_i(\hat{\mathbf{p}})-\mathbf{c}\|_2}, \end{equation} where the first term constrains $\hat{\mathbf{p}}$ to be close to $\mathcal{S}_{\eta}$ and the second term restricts $\mathcal{D}_i(\hat{\mathbf{p}})$ to lie on the ray. In our implementation, we set $\omega = 3.05$ and execute $10$ gradient descent iterations to solve for $\mathbf{p}$. To guarantee accuracy, we reject those samples with large losses. \textbf{Differentiable Formula.} The above solution process for $\mathbf{p}$ is an iterative optimization, which is not differentiable. For a ray in the $i$-th frame, the camera position $\mathbf{c}$, the view direction $\mathbf{v}$, $\mathcal{D}_i$'s parameters $\psi_i$ and $f$'s parameters $\eta$ uniquely determine $\mathbf{p}$. Therefore, $\mathbf{p}$ can be seen as a function of these parameters, and we need to compute the partial derivatives of $\mathbf{p}$ with respect to all of them. For brevity, we only detail the calculation for $\eta$ here; the other partial derivatives are computed similarly. Through the above analysis, $\mathbf{p}$ satisfies the surface and ray constraints: $f(\mathbf{p})\equiv 0$ and $(\mathcal{D}_i(\mathbf{p})-\mathbf{c})\times \mathbf{v}\equiv 0$. We differentiate these two equations w.r.t $\eta$ to get: \begin{equation} {\frac{\partial f}{\partial \mathbf{p}}}^T\frac{\partial \mathbf{p}}{\partial \eta} = -\frac{\partial f}{\partial \eta}, \quad [\mathbf{v}]_{\times}\frac{\partial \mathbf{x}}{\partial \mathbf{p}} \frac{\partial \mathbf{p}}{\partial \eta} = 0, \end{equation} where $\mathbf{x} = \mathcal{D}_i(\mathbf{p})$ and $[\mathbf{v}]_{\times}$ is $\mathbf{v}$'s cross-product matrix.
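Eq.~\eqref{equ:nonrigid_raycast} can be exercised on a toy configuration. The sketch below is our illustration only: a unit-sphere SDF stands in for $f$, the identity for $\mathcal{D}_i$, gradients are numerical, and a decaying step size with many more iterations replaces the $10$ steps that suffice when the mesh provides a good initialization. It descends the same two-term objective until the point lies on both the surface and the ray:

```python
import numpy as np

def f(p):                       # stand-in canonical SDF: unit sphere
    return np.linalg.norm(p) - 1.0

def D(p):                       # toy deformation field: identity
    return p

c = np.array([0.0, 0.0, -3.0])  # camera position
v = np.array([0.0, 0.0, 1.0])   # unit view direction

def loss(p, w=3.05):            # the two-term objective of the equation above
    r = D(p) - c
    return w*abs(f(p)) + np.linalg.norm(np.cross(r, v)) / np.linalg.norm(r)

def grad(p, h=1e-6):            # central-difference gradient
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (loss(p + e) - loss(p - e)) / (2*h)
    return g

p = np.array([0.1, 0.1, -1.2])  # initialization near the surface (mesh guess)
for it in range(300):
    p -= 0.05 * 0.98**it * grad(p)

assert abs(f(p)) < 2e-2                            # p lies on the surface
assert np.linalg.norm(np.cross(p - c, v)) < 2e-2   # D(p) lies on the ray
```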
We concatenate these two equations to get a $4\times 3$ linear system; then $\frac{\partial \mathbf{p}}{\partial \eta}$ is computed by solving its normal equation. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{figures/nonrigid_raycast.pdf} \end{center} \caption{Related symbols and an illustration of differentiable non-rigid ray-casting and implicit neural rendering.} \label{fig:nonrigid_raycast} \end{figure} \subsection{Implicit Rendering Network} \label{sec:implicit_render} IDR~\cite{yariv2020multiview} proposes an MLP $M$ to approximate the rendering equation and demonstrates a certain ability to disentangle lighting and material. In their rigid configuration, $M$ takes a point on the zero isosurface, its normal, the view direction and its global geometry feature vector as input to estimate the point's color along the view direction. We transfer their design to non-rigid scenarios by converting the relevant current-frame attributes to the canonical space. As shown in Fig.~\ref{fig:nonrigid_raycast}, considering a ray emitted from the camera center $\mathbf{c}$ through a sampled pixel, whose direction $\mathbf{v}$ is determined by the camera's intrinsic parameters $\tau$, we compute its intersection point $\mathbf{p}$ on $\mathcal{S}_{\eta}$ with the algorithm described in Sec.~\ref{sec:nonrigid_raycast}. Meanwhile, we compute its normal $\mathbf{n}_{\mathbf{p}} = \nabla f(\mathbf{p};\eta)$ by gradient calculation. Then, the view direction $\mathbf{v}_{\mathbf{p}}$ in the canonical space can be computed by transferring $\mathbf{v}$ with the Jacobian matrix $J_{\mathbf{x}}(\mathbf{p})$ of the deformed point $\mathbf{x}=\mathcal{D}_i(\mathbf{p})$ w.r.t $\mathbf{p}$.
As for the global geometry feature, we similarly use a larger MLP $F(\mathbf{p};\eta) = (f(\mathbf{p};\eta),\mathbf{z}(\mathbf{p};\eta))$ to additionally compute it; the feature $\mathbf{z}$ encodes the geometry information around $\mathbf{p}$ and can be used to help the prediction of global shadows~\cite{yariv2020multiview}. Finally, we use an MLP $M$ with learnable weights $\gamma$ to compute $\mathbf{p}$'s color $L_{\mathbf{p}}(\eta,\psi_i,\gamma,\tau)$, formulated as: \begin{equation} \label{equ:neural_render} \begin{aligned} L_{\mathbf{p}}(\eta,\psi_i,\gamma,\tau)&=M(\mathbf{p},\mathbf{n}_{\mathbf{p}},\mathbf{v}_{\mathbf{p}},\mathbf{z}(\mathbf{p};\eta);\gamma) \\ \mathbf{n}_{\mathbf{p}}&=\nabla f(\mathbf{p};\eta) \\ \mathbf{x}&=\mathcal{D}_i(\mathbf{p}) \\ \mathbf{v}_{\mathbf{p}}&=J_{\mathbf{x}}(\mathbf{p})^{-1}\mathbf{v}, \end{aligned} \end{equation} where the related symbols have been described above. It can be seen that the color along direction $\mathbf{v}$ of the deformed point $\mathbf{x}$ in the $i$-th frame is determined by the MLP weights $\eta$ and $\gamma$, the camera parameters $\tau$ and the deformation field parameters $\psi_i$. \subsection{Loss Function} According to the description above, for an $N$-frame self-rotating video, the set $\mathcal{X}$ of all optimizable parameters is: $$ \mathcal{X}=\{\eta,\gamma,\phi,\tau\}\cup\{\mathbf{h}_i,\boldsymbol{\theta}_i|i\in{1,...,N}\}, $$ which includes the camera parameters, the learnable weights of the MLPs shared by the whole sequence, and each frame's specific pose parameters and non-rigid deformation conditional variable. Our target is to design a loss function and optimize $\mathcal{X}$ to match the masks and RGB images $\{O_i,I_i|i\in{1,...,N}\}$ of the input video. Besides, predicted normal maps $\{N_i|i\in{1,...,N}\}$ are added to the optimization. As SelfRecon maintains both explicit and implicit geometry, the loss terms can be divided into two parts.
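The Jacobian transforms in Eq.~\eqref{equ:neural_render} follow the usual tangent/covector distinction: directions such as $\mathbf{v}$ pull back with $J_{\mathbf{x}}(\mathbf{p})^{-1}$, while normals pull back with $J_{\mathbf{x}}(\mathbf{p})^T$ (as used later in the normal loss). A small numpy sanity check with a synthetic Jacobian (our illustration, not part of the method):

```python
import numpy as np

rng = np.random.default_rng(4)

def unit(x):
    return x / np.linalg.norm(x)

J = np.eye(3) + 0.1 * rng.normal(size=(3, 3))  # synthetic Jacobian J_x(p)

# directions (tangent vectors) pull back with J^{-1}, as in v_p = J^{-1} v
v = unit(rng.normal(size=3))
v_p = np.linalg.solve(J, v)
assert np.allclose(J @ v_p, v)                 # pushing forward recovers v

# normals (covectors) pull back with J^T instead: if N is orthogonal to a
# frame-space tangent u, then J^T N is orthogonal to u's canonical preimage
N = unit(rng.normal(size=3))
u = rng.normal(size=3); u -= (u @ N) * N       # frame-space tangent: u . N = 0
w = np.linalg.solve(J, u)                      # canonical preimage of u
assert abs((J.T @ N) @ w) < 1e-10
```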
\subsubsection{Explicit Loss} During the computation of the explicit losses, we temporarily regard the canonical mesh vertices $\mathbf{T}$ as an optimizable variable and compute their gradients together with $\mathcal{X}$. Then, in the consistency loss, we associate their variations with our implicit representation. At present, the explicit losses mainly include the mask loss, the deformation regularization loss, and the skeleton smoothness loss. \textbf{Mask Loss.} We utilize a point-cloud-based differentiable renderer~\cite{wiles2020synsin} to render the mask $O(\mathbf{T}_i)$ of the $i$-th frame's mesh $\mathbf{T}_i = \mathcal{D}_i(\mathbf{T})$ with the camera parameters, and compute the IoU loss with the target mask $O_i$: \begin{equation} loss_{IoU} = 1 - \frac{\|O(\mathbf{T}_i)\otimes O_i\|_1}{\|O(\mathbf{T}_i)\oplus O_i - O(\mathbf{T}_i)\otimes O_i\|_1}, \end{equation} where $\otimes$ and $\oplus$ are the operators that perform element-wise product and sum, respectively. \textbf{Deformation Regularization Loss.} As stated in Sec.~\ref{sec:deform}, the $i$-th frame's deformation field $\mathcal{D}_i$ contains the learnable $d$ and the fixed $\mathcal{W}$. $d$ represents the deformation that cannot be represented by the skinning transformation $\mathcal{W}$, and this deformation should be relatively small. To tie the deformation to the skeleton pose, we design the following regularization loss: \begin{equation} \label{equ:w3_defregu} \begin{aligned} loss_{regu}=\frac{1}{|\mathbf{T}|}\sum_{\mathbf{t}\in{\mathbf{T}}}\rho(\|\mathcal{W}(\mathbf{t};\boldsymbol{\theta}_i)-\mathcal{D}_i(\mathbf{t})\|_2), \end{aligned} \end{equation} where $\mathbf{t}$ is a vertex coordinate of $\mathbf{T}$, $|\mathbf{T}|$ is the number of vertices of $\mathbf{T}$, and $\rho$ is the Geman-McClure robust loss~\cite{ganan1985bayesian}. \textbf{Skeleton Smoothness Loss.} The motion trajectories of the joints should be of low frequency.
Similar to MonoPerfCap~\cite{xu2018monoperfcap}, we smooth the skeleton coordinates of $30$ consecutive frames by minimizing the distance to a $10$-dimensional linear subspace $\mathbf{B}\in{\mathbb{R}^{30\times10}}$ spanned by the $10$ lowest frequency basis vectors of the discrete cosine transform: \begin{equation} loss_{ske} = \frac{1}{30} \|\mathbf{J}\text{Null}(\mathbf{B})\|_F^2. \end{equation} Here, $\text{Null}(\mathbf{B})$ denotes a basis of the nullspace of $\mathbf{B}^T$, i.e., of the orthogonal complement of $\mathbf{B}$'s column space, the matrix $\mathbf{J}\in{\mathbb{R}^{72\times30}}$ stacks all skeleton coordinates of the $30$ consecutive frames, and $\|\cdot\|_F$ denotes the Frobenius norm. Finally, the loss for the explicit representation is: \begin{equation} Loss_{exp}=loss_{IoU}+\lambda_{e1}loss_{regu}+\lambda_{e2}loss_{ske}, \end{equation} where $\lambda_{e1}$ and $\lambda_{e2}$ adjust the weights of the related losses. After each iteration, we retain $\mathcal{X}$'s gradients and wait for the implicit loss iteration to update them jointly. For the canonical mesh vertices, we use SGD to update $\mathbf{T}$ to $\hat{\mathbf{T}}$, which will be used in the consistency loss to match the two representations. \subsubsection{Implicit Loss} We sample pixels within the ground truth mask and utilize the method of Sec.~\ref{sec:nonrigid_raycast} to get each ray's intersection $\mathbf{p}$ on $\mathcal{S}_{\eta}$ and its corresponding ground truth color $I_{\mathbf{p}}$ and predicted normal $N_{\mathbf{p}}$, if available. Then, based on this set of sampled points $\text{P}$, we construct two losses. \textbf{Color Loss.} Referring to Eq.~\eqref{equ:neural_render}, we formulate the color loss as: \begin{equation} loss_{RGB} = \frac{1}{|\text{P}|}\sum_{\mathbf{p}\in \text{P}}|L_{\mathbf{p}}(\mathcal{X})-I_{\mathbf{p}}|. \end{equation} Here, we use $\mathcal{X}$ to substitute the related parameters in Eq.~\eqref{equ:neural_render}. Intuitively, this loss requires that the rendered images match the input images.
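The skeleton smoothness term defined above is a projection onto the high-frequency complement of a low-frequency DCT basis. The sketch below is our construction (an orthonormal DCT-II basis, with the projector $I-\mathbf{B}\mathbf{B}^T$ standing in for multiplication by an orthonormal complement basis, which leaves the Frobenius norm unchanged; data is synthetic). Trajectories inside the low-frequency subspace incur almost no loss, noisy ones do:

```python
import numpy as np

N, K = 30, 10
n = np.arange(N)
# orthonormal DCT-II basis restricted to the K lowest frequencies (30 x 10)
B = np.cos(np.pi * (n[:, None] + 0.5) * np.arange(K)[None, :] / N)
B /= np.linalg.norm(B, axis=0, keepdims=True)

P_null = np.eye(N) - B @ B.T       # projector onto the complement of span(B)

def loss_ske(J):
    """J: 72 x 30 matrix of stacked joint coordinates over 30 frames."""
    return np.linalg.norm(J @ P_null, 'fro')**2 / 30

rng = np.random.default_rng(2)
C = rng.normal(size=(K, 72))
smooth = (B @ C).T                 # every trajectory lies in the low-freq subspace
noisy = smooth + 0.5 * rng.normal(size=smooth.shape)

assert loss_ske(smooth) < 1e-10    # smooth motion is not penalized
assert loss_ske(noisy) > 1.0       # high-frequency jitter is
```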
\textbf{Normal Loss.} We utilize the normal maps predicted by PIFuHD~\cite{saito2020pifuhd} to further refine the geometric shape. Referring to Eq.~\eqref{equ:neural_render}, we can easily compute $\mathbf{p}$'s normal $\mathbf{n}_\mathbf{p}$. Besides, we need to transform the corresponding predicted normal $N_{\mathbf{p}}$ from the space of the current frame to canonical space, which can be computed as $J_{\mathbf{x}}(\mathbf{p})^T N_{\mathbf{p}}$, where $J_{\mathbf{x}}(\mathbf{p})$ is the Jacobian matrix of the forward deformation field at $\mathbf{p}$~\cite{seyb2019non}. Thus, the normal loss is: \begin{equation} \label{equ:norm} loss_{norm}=\frac{1}{|\text{P}|}\sum_{\mathbf{p}\in{\text{P}}}\omega_\mathbf{p}\|\mathbf{n}_\mathbf{p}-unit(J_{\mathbf{x}}(\mathbf{p})^T N_{\mathbf{p}})\|_2. \end{equation} Here, $unit(\cdot)$ normalizes a vector to unit length, and $\omega_\mathbf{p}$ is a weight defined by the cosine of the angle between $\mathbf{n}_\mathbf{p}$ and the corresponding view direction. Since the predicted normals are noisy and inconsistent between frames, we use these weights to attenuate the influence of normals that deviate from their view directions and to avoid geometry artifacts. We also design regularization losses for the implicit representation; these losses are defined on a set $\text{S}$ of points sampled near the implicit surface~\cite{gropp2020implicit}. \textbf{Rigidity Loss.} We require the first deformation field $d$ to be as rigid as possible to avoid distortion. Following Park \textit{et al.}~\cite{park2021nerfies}, we design our loss as: \begin{equation} \begin{aligned} loss_{rigid}&=\frac{1}{|\text{S}|}\sum_{\mathbf{p}\in \text{S}}\rho(\|\text{log}\Sigma_{\mathbf{p}}\|_F), \end{aligned} \end{equation} where $\Sigma_{\mathbf{p}}$ is the diagonal matrix of singular values of the Jacobian of $d$ at $\mathbf{p}$ and $\rho$ is the robust function~\cite{ganan1985bayesian}.
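A minimal NumPy sketch of the normal loss follows. It is an illustration under our own assumptions: we take the cosine weight as the (clamped) dot product between the predicted normal and the negated view direction, and all names are hypothetical rather than the paper's code.

```python
import numpy as np

def unit(v, eps=1e-8):
    """Normalize vectors along the last axis."""
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def normal_loss(n_pred, N_img, J_fwd, view_dirs):
    """n_pred:    (P,3) canonical-space normals from the SDF gradient
    N_img:     (P,3) predicted normals in the current frame's space
    J_fwd:     (P,3,3) Jacobians of the forward deformation at each point
    view_dirs: (P,3) unit view directions of the corresponding rays"""
    # pull the frame-space normals back to canonical space via J^T N
    N_canon = unit(np.einsum('pji,pj->pi', J_fwd, N_img))
    # cosine weights: down-weight normals deviating from the view direction
    w = np.clip(np.sum(N_img * (-view_dirs), axis=-1), 0.0, 1.0)
    return np.mean(w * np.linalg.norm(unit(n_pred) - N_canon, axis=-1))
```

With an identity Jacobian and predicted normals equal to the SDF normals, the residual vanishes regardless of the weights, as expected.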
\textbf{Eikonal Loss.} We adopt the regularization loss of IGR~\cite{gropp2020implicit} to encourage $f$ to be a signed distance function: \begin{equation} loss_{sdf} = \frac{1}{|\text{S}|}\sum_{\mathbf{p}\in{\text{S}}}(\|\mathbf{n}_{\mathbf{p}}\|_2-1)^2, \end{equation} where $\mathbf{n}_\mathbf{p}$ is obtained by differentiating $f$ at $\mathbf{p}$. Finally, the implicit loss can be represented as: \begin{equation} \label{equ:implicit} Loss_{imp} = loss_{RGB}+\lambda_{i1}loss_{norm}+\lambda_{i2}loss_{rigid}+\lambda_{i3}loss_{sdf}, \end{equation} where $\lambda_{i1}$, $\lambda_{i2}$ and $\lambda_{i3}$ are balancing weights. \subsubsection{Explicit/Implicit Consistency} After the explicit iteration, the canonical mesh has been updated to $\hat{\mathbf{T}}$. To make the implicit SDF consistent with the updated explicit mesh during the implicit iteration, we design a consistency loss: \begin{equation} Loss_{cons} = \frac{1}{|\hat{\mathbf{T}}|}\sum_{\hat{\mathbf{t}}\in{\hat{\mathbf{T}}}}|f(\hat{\mathbf{t}};\eta)|, \end{equation} where $\hat{\mathbf{t}}$ is a vertex coordinate of $\hat{\mathbf{T}}$. Intuitively, this loss requires $\hat{\mathbf{T}}$ to match the implicit surface $\mathcal{S}_{\eta}$. In each optimization step, we first perform the explicit iteration to obtain $\hat{\mathbf{T}}$ and retain $\mathcal{X}$'s gradients. Then, we compute the implicit and consistency losses to accumulate further gradients for $\mathcal{X}$. Finally, Adam is utilized to update $\mathcal{X}$ with the accumulated gradients. \section{Conclusion and Discussion} We proposed SelfRecon, a self-supervised reconstruction method based on neural implicit representation and neural rendering. With forward deformation, our method can be easily applied to body movement and recovers space-time coherent surfaces, which is convenient for downstream applications.
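The eikonal and consistency terms can be sketched numerically as below. In the actual pipeline the SDF gradient would come from automatic differentiation of the network; here we substitute a finite-difference gradient on a toy scalar field, and the function names are our own.

```python
import numpy as np

def sdf_gradient(f, pts, eps=1e-4):
    """Central finite-difference gradient of a scalar field f at pts (P,3);
    a stand-in for automatic differentiation of the SDF network."""
    g = np.zeros_like(pts)
    for k in range(3):
        d = np.zeros(3)
        d[k] = eps
        g[:, k] = (f(pts + d) - f(pts - d)) / (2.0 * eps)
    return g

def eikonal_loss(f, samples):
    """Penalize deviation of |grad f| from 1 on points sampled near the surface."""
    g = sdf_gradient(f, samples)
    return np.mean((np.linalg.norm(g, axis=-1) - 1.0) ** 2)

def consistency_loss(f, verts):
    """Mean |f| over the SGD-updated explicit mesh vertices."""
    return np.mean(np.abs(f(verts)))
```

On an exact sphere SDF $f(\mathbf{p}) = \|\mathbf{p}\| - 0.5$, the eikonal loss is numerically zero everywhere away from the origin, and the consistency loss vanishes for vertices lying on the zero level set.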
Moreover, combining the explicit representation, we proposed a non-rigid ray casting algorithm, which makes differentiable intersection with the deformed implicit surface possible. SelfRecon can reconstruct a high-fidelity clothed body shape from a self-rotating video without pre-computed templates. We also show high-fidelity avatar generation with our tracking results, demonstrating potential applications of SelfRecon. SelfRecon still has several limitations. First, it requires a relatively long optimization time, which limits its convenience in applications. However, this problem can be alleviated with the help of body priors and the fast-growing field of neural rendering. Second, the current method relies on the predicted normal maps to improve the geometric details. How to recover the geometric details directly from the self-supervised rendering loss is worthy of future study. Third, the proposed method mainly works well for self-rotating motions, and extending it to more general motion sequences is worth further study. {\small{\paragraph{Acknowledgement} This research was supported by the National Natural Science Foundation of China (No. 62122071), the Youth Innovation Promotion Association CAS (No. 2018495), and ``the Fundamental Research Funds for the Central Universities'' (No. WK3470000021).}} \section{Experiments} \begin{figure*} \begin{center} \includegraphics[width=\linewidth]{figures/syn_compare.pdf} \end{center} \caption{Reconstructions in the canonical pose and their error maps for four synthetic self-rotating sequences. In each group, we show the GT mesh and the results of VideoAvatar and SelfRecon in turn (red means $\geq6$cm).} \label{fig:syn_compare} \end{figure*} We conduct quantitative and qualitative experiments to demonstrate the effectiveness of SelfRecon. For quantitative evaluation, we synthesize several sequences with commercial software due to the lack of high-quality geometry data of human bodies with general clothing.
For qualitative evaluation, we mainly utilize the PeopleSnapshot~\cite{alldieck2018video} dataset and several real sequences collected by ourselves. We also present an ablation study for the loss term design and an avatar generation application. \begin{table} \centering \caption{The errors (cm) on the five synthetic sequences. We report three error metrics: the average distance from the reconstructed to the GT meshes (Recon), the average distance from the GT to the reconstructed meshes (GT), and the Chamfer distance. For each error metric, we report the values of VideoAvatar and ours in two consecutive rows.} \label{tab:syn_compare} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Subject & f1 & f2 & f3 & m1 & m2 & mean\\ \hline \multirow{2}{*}{Recon} & \textbf{1.59} & 1.71 & 1.93 & 1.81 & 1.27 & 1.66 \\ \cline{2-7} & 1.67 & \textbf{1.32} & \textbf{1.63} & \textbf{1.53} & \textbf{1.17} & \textbf{1.46} \\ \hline \multirow{2}{*}{GT} & 2.08 & 1.50 & 2.40 & 1.92 & 1.42 & 1.86 \\ \cline{2-7} & \textbf{1.62} & \textbf{1.16} & \textbf{1.92} & \textbf{1.53} & \textbf{1.17} & \textbf{1.48} \\ \hline \multirow{2}{*}{Chamfer} & 1.84 & 1.60 & 2.17 & 1.86 & 1.34 & 1.76 \\ \cline{2-7} & \textbf{1.64} & \textbf{1.24} & \textbf{1.77} & \textbf{1.53} & \textbf{1.17} & \textbf{1.47} \\ \hline \end{tabular} \end{table} \subsection{Quantitative Evaluation} We synthesize data to quantitatively evaluate our reconstruction algorithm. Specifically, we use Blender~\cite{Blender} to design the self-rotating motions for male and female avatars. Then, we utilize CLO3D~\cite{CLO3D} to design several clothes and animate the clothed bodies with the motions. Finally, we synthesize two male and three female dressed-body sequences. We reconstruct these sequences with VideoAvatar~\cite{alldieck2018video} and our method, and report the registration errors for the canonical posture results in Tab.~\ref{tab:syn_compare}. Compared with VideoAvatar, our method significantly reduces the values of all error metrics.
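The three error metrics in Tab.~\ref{tab:syn_compare} can be sketched as follows. The paper measures point-to-surface distances between meshes; this simplified sketch approximates both surfaces by sampled point clouds with a brute-force nearest-neighbor search, and the function names are ours.

```python
import numpy as np

def avg_nn_distance(src, dst):
    """Average distance from each src point to its nearest neighbor in dst."""
    d2 = np.sum((src[:, None, :] - dst[None, :, :]) ** 2, axis=-1)
    return float(np.mean(np.sqrt(d2.min(axis=1))))

def mesh_errors(recon_pts, gt_pts):
    """Recon-to-GT, GT-to-recon, and symmetric Chamfer distances."""
    recon = avg_nn_distance(recon_pts, gt_pts)
    gt = avg_nn_distance(gt_pts, recon_pts)
    return recon, gt, 0.5 * (recon + gt)
```

Reporting both one-sided averages alongside the Chamfer distance exposes asymmetric failure modes, e.g. a reconstruction that covers the ground truth well but also hallucinates extra geometry.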
In Fig.~\ref{fig:syn_compare}, we also present four groups of results and their error maps. Intuitively, our results capture the overall shape and have reasonable details. As VideoAvatar is based on the SMPL+D representation, it produces plausible results for tighter clothing, like the male examples, but lacks detailed reconstruction ability. Moreover, it cannot correctly reconstruct loose clothing, especially for the females dressed in skirts. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{figures/compare2.pdf} \end{center} \caption{Comparison results with methods that use video or multi-frame images, including PaMIR~\cite{zheng2021pamir}, NeuralBody~\cite{peng2021neural} and VideoAvatar~\cite{alldieck2018video}. For the second comparison group, the method in the first two rows is PaMIR, and the rest is NeuralBody. We also present our rendered image as reference. SelfRecon can reconstruct high-fidelity geometry of the standing posture, including facial features and clothing folds.} \label{fig:compare} \end{figure} \subsection{Qualitative Evaluation} We also qualitatively compare SelfRecon with the multi-frame prediction algorithm PaMIR~\cite{zheng2021pamir}, the optimization method VideoAvatar~\cite{alldieck2018video}, and the NeRF~\cite{mildenhall2020nerf} based neural rendering method NeuralBody~\cite{peng2021neural} on several sequences of the PeopleSnapshot dataset. In Fig.~\ref{fig:compare}, we present the first frame of the input video, our rendered image, and the reconstruction results of all methods from two perspectives. We compare with PaMIR in the first two rows and with NeuralBody in the others. As we can see, VideoAvatar, based on SMPL+D, can only approximately capture the overall shape, and details such as hairstyle and clothing folds are lost. PaMIR uses multi-frame input to improve its results, but still suffers from depth ambiguity. In the second example, for instance, its reconstructed human is not upright in the side view.
Besides, its results have some details but miss facial features, while our method has better details and can recover certain facial features. Similar to SelfRecon, NeuralBody also takes a video as input for self-supervised optimization. It mainly focuses on novel view synthesis, but can still extract geometry from the underlying NeRF representation. We can see that its reconstructions alleviate the depth ambiguity and conform to the human's overall structure, while suffering from large noise on the surface, which may be caused by the excessive freedom of volume rendering. In contrast, SelfRecon, based on an implicit surface representation, can recover high-fidelity geometry without such noise. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{figures/present.pdf} \end{center} \caption{Reconstruction results from videos taken by smartphones. Each group shows the first image of the video, the corresponding neural rendered image and the reconstructed shape.} \label{fig:present} \end{figure} Fig.~\ref{fig:present} shows reconstruction results on videos we collected with smartphones. For each group, we present the first frame of the video, our rendered image and the reconstruction. Our results have high-fidelity geometry for various kinds of clothing and bodies, and our neural rendered images are also quite close to the input images. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{figures/ablation.pdf} \end{center} \caption{Ablation study for the color, mask and normal losses. With the mask loss only, the results lack details and have much concave geometry. After adding the color loss, the geometry is significantly improved, but the concavities are not completely eliminated. With the predicted normals, the results are further improved. In the last column, we also show the neural rendering images as reference.} \label{fig:ablation} \end{figure} \subsection{Ablation Study} Our complete algorithm requires color images, masks and normal maps as inputs.
Fig.~\ref{fig:ablation} shows ablation experiments of two examples on the three inputs. As the results show, when only the mask loss is used, the recovered geometry lies inside the visual hull formed by the silhouettes but lacks details and has noticeable concavities. Adding the color loss significantly improves the details and reduces the unnatural concavities. For the second example, the result is already very close to that obtained by also adding the normal loss. However, for the first example, the depressed geometry cannot be completely eliminated without the normal loss. This may be caused by the lack of rich texture and multi-view observations in these areas. With the normal loss, our results are further improved, and the unnatural pits are eliminated while the details are preserved. Since the normal prediction network~\cite{saito2020pifuhd} is trained with synthetic images, its predictions may not be accurate for real tests and may not be consistent across frames. As shown in Fig.~\ref{fig:norm}, without the adaptive weights in Eq.~\eqref{equ:norm}, the normal loss might produce unexpected results. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{figures/normal.pdf} \end{center} \caption{Without the adaptive weights in Eq.~\eqref{equ:norm} and a small $\lambda_{i1}$ in Eq.~\eqref{equ:implicit}, the normal loss will produce artifacts (middle). After adjusting the weights, the corresponding result (right) is more plausible.} \label{fig:norm} \end{figure} \subsection{Avatar Generation} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{figures/animation.pdf} \end{center} \caption{The reconstructed textured mesh and driving results. The left shows the reference image and the textured meshes generated by VideoAvatar and SelfRecon.
On the right, we use three pose parameters to drive our textured mesh and generate plausible results.} \label{fig:animation} \end{figure} Thanks to our forward deformation field design, we can extract a mesh sequence with consistent topology. Based on the tracking results, we can extract a textured template mesh from the images, with skinning weights bound from the skinning transformation field. Then, an animatable avatar is generated and can be driven with SMPL pose parameters. For texture extraction, we follow the method of VideoAvatar~\cite{alldieck2018video}. Fig.~\ref{fig:animation} shows two examples of texture generation and driving from the PeopleSnapshot dataset. Our method recovers better geometric details, such as facial features, shoes and clothing folds, thanks to more accurate tracking results. Besides, our driving results look plausible and may be of sufficient quality for some applications. \section{Introduction} Clothed body reconstruction has been an important and challenging research topic in the community for years. In the film and gaming industry, high-fidelity human reconstruction usually requires pre-captured templates, multi-camera systems, controlled studios, and the long-term work of talented artists. However, these requirements exceed the application scenarios of general consumers, such as personalized avatars for telepresence, AR/VR, anthropometry, virtual try-on, etc. Therefore, directly reconstructing a high-fidelity digital avatar from monocular video has significant practical value. The state-of-the-art marker-less monocular human performance capture approaches~\cite{xu2018monoperfcap,habermann2019livecap,habermann2020deepcap} are mainly designed based on explicit mesh representations. They require actor-specific rigged templates and utilize detected 2D/3D joints and silhouettes to estimate each frame's posture and non-rigid deformation.
DeepCap~\cite{habermann2020deepcap} additionally uses multi-view information during training to resolve the depth ambiguity and improve tracking accuracy for monocular inference. The explicit representation has some advantages, including space-time coherence and compatibility with existing graphics control pipelines, like texture editing and reposing. Moreover, skinning deformation is suitable under this paradigm to model the body's large-scale articulated deformations. However, actor-specific templates limit the extension of these methods to unseen human sequences. For videos of self-rotating humans in a rough A-pose, VideoAvatar~\cite{alldieck2018video} can estimate general clothed humans with the SMPL+D parametric representation~\cite{loper2015smpl,bhatnagar2019mgn,alldieck2019tex2shape,alldieck19cvpr,alldieck2018video,xiang2020monoclothcap}, but it cannot recover folds and loose clothing, like skirts. Recently, some neural implicit representation based monocular human reconstruction approaches have demonstrated compelling results~\cite{zheng2019deephuman,saito2019pifu,saito2020pifuhd,zheng2021pamir,MetaAvatar:NeurIPS:2021,huang2020arch,he2021arch++,he2020geopifu,burov2021dsfn,xiu2021icon}. These methods can handle various topologies, and thus can represent various clothing and hairstyles. However, they require high-quality 3D data for supervision, and they only reconstruct a specific frame and cannot maintain the space-time coherence of surface vertices over the whole sequence. A simple solution to guarantee coherence and a correct body structure is to maintain an implicit template surface in the canonical space, and then utilize backward deformation fields to map current points to the canonical space to assist the implicit function queries. The backward deformation strategy has been widely applied recently and works well for small-scale deformations~\cite{park2021nerfies,pumarola2021d,tretschk2021non,li2021neural}.
However, it is not very suitable for articulated skinning deformation due to its irreversibility in some parts of the current space~\cite{jeruzalski2020nilbs,chen2021snarf}. To this end, technologies such as pose-related skinning weight prediction~\cite{weng2020vid2actor,peng2021animatable,jeruzalski2020nilbs} and specific inverse articulated deformation designs~\cite{deng2020nasa} have been proposed, at the cost of high complexity and poor generalization. In this work, we propose SelfRecon, which combines the explicit and implicit representations to reconstruct a high-fidelity digital avatar from a monocular video. Specifically, SelfRecon utilizes a learnable signed distance field (SDF) rather than a template with fixed topology to represent the canonical shape. To improve the generalization of the deformation and reduce the optimization difficulty, we adopt a forward deformation to map canonical points to the current frame space~\cite{chen2021snarf,zheng2021avatar}. During optimization, we periodically extract the explicit canonical mesh and warp it to each frame with the deformation fields. For these meshes, we utilize the mask loss and smoothness constraints to recover the overall shape. For the implicit part, a differentiable formulation is designed to intersect rays with the deformed surface, and we follow IDR's neural rendering~\cite{yariv2020multiview} to refine the geometry. A consistency loss is designed to match the two geometric representations as closely as possible. SelfRecon alleviates the dependence on actor-specific templates and extracts a space-time coherent mesh sequence from a monocular video. Extensive evaluations on self-rotating human videos demonstrate that it outperforms existing methods. We believe that SelfRecon will inspire more studies on combining implicit and explicit representations for the 3D reconstruction of articulated objects.
\section{Related Work} \textbf{Implicit Human Reconstruction.} PIFu~\cite{saito2019pifu} adopts a deep network to extract image features and concatenates each pixel's feature with its corresponding 3D point's depth as the input of a Multi-Layer Perceptron (MLP) to obtain a high-fidelity occupancy field of the clothed human. However, it may generate incorrect body structures for humans under challenging poses. StereoPIFu~\cite{yang2021stereopifu} targets binocular images; it utilizes volume-aligned features and predicted high-precision depth to guide the implicit function prediction, which effectively alleviates the depth ambiguity and restores absolute scale information. PIFuHD~\cite{saito2020pifuhd} utilizes higher-resolution features and predicted normal information to refine the geometric details of PIFu. PaMIR~\cite{zheng2021pamir} utilizes a parametric human body model to decrease the influence of the depth ambiguity in implicit function training, reduces the occurrence of abnormal body structures, and improves the reconstruction accuracy. These methods train an MLP to represent the human's implicit geometry from one or several images and achieve impressive results. However, they require high-quality 3D data corresponding to the color images to train the model, which is hard to obtain and thus limits their generalization to in-the-wild images. Besides, overfitting an implicit neural representation to a person's movement sequence to acquire actor-specific reconstructions has become popular. NASA~\cite{deng2020nasa} coarsely models the naked body as the union of articulated parts, each of which is an implicit occupancy field. SCANimate~\cite{saito2021scanimate} proposes an end-to-end trainable framework that turns raw 3D scans of a clothed human into an animatable avatar. SNARF~\cite{chen2021snarf} learns a forward deformation field to improve its generalization to unseen human poses.
All these methods need 4D scan data to train their clothed body representations, and thus are difficult to apply widely to general image data. Recently, implicit representation methods that can extract geometry and synthesize novel views from multi-view images have attracted researchers' attention. NeuralBody~\cite{peng2021neural} reconstructs each frame's NeRF~\cite{mildenhall2020nerf} field conditioned on body-structured latent codes and utilizes the NeRF field to synthesize new images. However, the geometry extracted from NeRF suffers from noise. H-NeRF~\cite{xu2021h} utilizes an implicit parametric model~\cite{alldieck2021imghum} to reconstruct the temporal motion of humans. Neural Actor~\cite{liu2021neuralactor} integrates texture map features to refine volume rendering. IDR~\cite{yariv2020multiview} combines an implicit signed distance field and differentiable neural rendering to generate high-quality rigid reconstructions from multi-view images. The concurrent IMAvatar~\cite{zheng2021avatar} extends IDR to learn implicit head avatars from monocular videos. \textbf{Explicit Human Reconstruction.} With the help of human statistical models~\cite{anguelov2005scape,loper2015smpl,Jiang2020HumanBody}, some works utilize image cues to automatically obtain model parameters~\cite{bogo2016keep,guler2018densepose,kanazawa2018end,omran2018neural}. To represent human clothing, some methods add displacements on SMPL~\cite{loper2015smpl} vertices to model tight clothing~\cite{pavlakos2018learning,bhatnagar2019multi,ma2020learning,alldieck2018video}. However, this SMPL+D representation can only support tight clothing types and recovers only coarse-level geometry. To improve the representation ability, some works adopt separate clothing representations combined with the SMPL body for reconstruction~\cite{jiang2020bcnet,ma2021scale}, but they need the clothing type and high-quality 3D supervision.
Besides, to capture the performance of a specific person, many prior works use an actor-specific template to assist tracking. MonoPerfCap~\cite{xu2018monoperfcap} optimizes the deformation of the template mesh to match 2D cues. LiveCap~\cite{habermann2019livecap} refines the optimization pipeline and achieves real-time tracking for a specific person with monocular RGB input. DeepCap~\cite{habermann2020deepcap} adopts a network to predict each frame's template deformation for a specific person. However, the requirement for pre-defined templates limits their broader application.
\section{Introduction} \label{secintro} The Moran process \cite{moran} is a \textit{stochastic} model for the evolution of a \textit{finite} population of several types of individuals with different fitnesses, asexual reproduction and \textit{no mutations}. Another model for the same situation is the Wright-Fisher process \cite{fisher, wright}. Mathematically speaking, both models are discrete-time Markov chains \cite{allen} with a finite set of states. Due to the no-mutations hypothesis, the states in which all individuals are of the same type are absorbing. As all the other states are transient, with probability 1 and after sufficient time, one of the absorbing states will be reached \cite{allen}. This is the phenomenon of \textit{fixation} and we say that the only type of individuals present in the final state is fixated in the population. One important problem is calculating, as a function of the initial state of the population, i.e. how many individuals of each type are initially present, the probability of attaining each of the absorbing states. One important difference between the Moran and Wright-Fisher processes is that in the simplest cases the fixation probabilities may be exactly calculated in the former, whereas in the latter we must use approximations \cite{ewens}. By simplest cases, we mean that the number of types of individuals is only 2 and that the population is \textit{fully mixed}. This exact solution, see eq. (\ref{exactpii}), has been known since the original work by Moran \cite{moran}, and extends also to the case in which the fitnesses of the individuals depend on their population frequencies \cite{nowaknature, taylor}, i.e. Evolutionary Game Theory. If the number of types of individuals in the population is three or more \cite{wang2007evolutionary, moran3}, we can no longer calculate the fixation probabilities exactly, but we can find useful upper and lower bounds for them \cite{moran3}.
As we are interested here in exact fixation probabilities, from now on we will consider only cases in which there are individuals of only two types in the population. Most models in Mathematical Biology, e.g. the Lotka-Volterra predator-prey model, the SIR model for epidemics, the simpler versions of both the Wright-Fisher and Moran models, and many others, suppose that populations are \textit{fully mixed} or, in different words, have \textit{no structure}. This hypothesis means that all individuals in the population can interact with equal probability with any other individual. In real populations, on the contrary, there exist in general small communities and individuals with more or fewer contacts outside their community. In recent times, research has focused on the statistical properties of networks of individuals in real populations \cite{newman}, and on what the effects on mathematical models may be of substituting realistic networks for the simple full-mixing hypothesis \cite{pastorvespignani1, pastorvespignani2}. The Moran process in a structured population was introduced in \cite{lieberman}. Population structure is modeled by a \textit{directed} and \textit{weighted} graph. Several different classes of graphs were described in the above reference, in which edges may be directional or not, and weighted. The authors gave, without proof, asymptotic expressions (in the infinite population limit) for the fixation probability of a randomly placed mutant individual for star and super-star graphs in the case of frequency-independent fitnesses. In particular, these asymptotic expressions imply that both the star and the super-star are \textit{amplifiers} of selection, i.e. one individual fitter (respectively less fit) than the rest of the population will fixate with larger (resp. smaller) probability than in an unstructured population. Many papers followed the introduction of the subject and reviews are available \cite{szabo, rocacuestasanchez, shakarianroosjohnson, allennowak}.
Broom and Rycht\'{a}\v{r} \cite{broomrychtar} developed the theory further, showing that, in general, fixation probabilities for the Moran process on a graph may be calculated by solving a huge system of $2^N$ linear equations, where $N$ is the population size, i.e. the number of nodes in the graph. But they showed that for symmetric graphs the number of equations to be solved may be much smaller. Approximate calculations of fixation probabilities in general graphs may use Monte Carlo simulations \cite{barbosa} or other algorithms \cite{shakarianroosmoores, hindersin}. One very simple symmetric graph for which the fixation probabilities were explicitly calculated \cite{broomrychtar} is the \textit{star graph}. The star is a graph with $N=n+1$ vertices: the \textit{center} and $n$ leaves. The center is linked to all leaves and the leaves connect only to the center, see Fig. \ref{figstar}. All edges are bidirectional and all weights are equal. \begin{figure} \begin{center} \includegraphics[width=0.4 \textwidth]{grafoestrela} \caption{\label{figstar} The star graph with $n=5$ leaves.} \end{center} \end{figure} The asymptotic formula in \cite{lieberman}, with a correction given by Chalub \cite{chalubstar}, follows from the exact solution of Broom and Rycht\'{a}\v{r}. A different derivation of the exact solution, using martingales, is given by Monk, Green and Paulin \cite{monk}. In this paper we will generalize the exact result of \cite{broomrychtar, monk} for the fixation probabilities of the star graph. That result was derived under the hypotheses of frequency-independent fitnesses and birth-death (BD) updating. Our derivation allows the fitnesses to depend on the population frequencies of the individuals and also covers the death-birth (DB) updating. We also provide explicit formulae for the fixation probabilities in the star graph for any initial configuration of A and B individuals, allowing us to study their asymptotic limits when the total population tends to infinity.
In Sect. \ref{secMoran} we will introduce the Moran process for frequency-dependent fitnesses, both for structured and unstructured populations. For the former, we will define the BD and DB updating rules. In Sect. \ref{secExact} we will derive the exact expressions for the fixation probabilities in the star graph. In Sect. \ref{secAsymp} we derive, separately for each update rule, asymptotic expressions for the fixation probabilities in the limit of infinite populations. Some conclusions are drawn in Sect. \ref{secconc}. \section{The Moran process}\label{secMoran} In order to explain the Moran process on a structured population, we describe at first the standard, or \textit{unstructured}, Moran process. Consider a population with fixed size of $N$ individuals of two types A and B. Suppose that time is discrete and at each time step we draw random individuals, one for reproduction and one for death. The death lottery is uniform, but the reproduction lottery -- defined precisely below -- is performed in such a way that fitter individuals reproduce more frequently. The offspring of the reproducing individual is a single individual having the same type A or B as its parent. This is known as the \textit{no mutations} hypothesis. This offspring replaces the dead individual and population size $N$ thus remains constant. Unstructured means here that both lotteries are realized among all individuals in the population. Let $f_i$ and $g_i$ be the fitnesses respectively of type A and type B individuals when the number of A individuals in the population is $i\in \{0, 1, \dots, N\}$. In the Evolutionary Game Theory context, these fitnesses are calculated in terms of a pay-off matrix \cite{nowakbook} and generally depend on $i$. In such a case, we say that fitnesses are frequency-dependent. In many cases we may consider that $f_i$ and $g_i$ do not depend on $i$ and we say that fitnesses are frequency-independent. 
The \textit{relative fitness} of A individuals is defined as \begin{equation} \label{defri} r_i = \frac{f_i}{g_i}\;. \end{equation} The precise definition of the reproduction lottery is that the probabilities of drawing an A or a B for reproduction are respectively \begin{equation} \label{replott} \frac{i f_i}{i f_i+ (N-i)g_i}\;\;\textrm{and}\;\;\; \frac{(N-i)g_i}{i f_i+ (N-i)g_i}\;, \end{equation} i.e. proportional to the type's fitness. It can be shown \cite{ewens,nowakbook} that the fixation probability of A individuals when their initial number is $i$ is exactly given by \begin{equation} \label{exactpii} \pi_i= \frac{1+\sum_{j=1}^{i-1}\prod_{k=1}^{j} r_k^{-1}}{1+ \sum_{j=1}^{N-1}\prod_{k=1}^{j} r_k^{-1}}\;. \end{equation} We will now describe a particular case of the Moran process for a structured population. Let $\cal G$ be a graph with $N$ vertices. We suppose that there is exactly one individual at each vertex of $\cal G$. We interpret an edge linking vertices $a$ and $b$ as meaning that an offspring of the individual at $a$ can occupy the vertex $b$, and vice-versa. We say that $a$ and $b$ are neighbors in $\cal G$ if there is an edge between them. For a more general situation, in which the graph $\cal G$ describing the structure of the population is directed and weighted, and in which update rules other than the BD and DB rules considered here are used, see e.g. \cite{allennowak}. For simplicity, we restrict our description to the case in which $\cal G$ is not directed and all edges have the same weight. The Moran process on $\cal G$ with BD updating is similar to the unstructured Moran process, but at each time step we first draw an individual for reproduction with probabilities given by (\ref{replott}) and then we draw an individual for death uniformly \textit{only among the neighbor vertices} in $\cal G$ of the reproducing individual.
The offspring of the reproducing individual has the same type as its parent, \textit{no mutation} again, and occupies the vertex of the dead individual. In the DB updating, the order of the lotteries is reversed: we first draw with uniform probability an individual for dying and then, only among its neighbors in $\cal G$, we draw with probabilities proportional to fitness an individual to reproduce and have its offspring substitute the one that died. We will see soon that the order of the draws matters. In any case, the graph $\cal{G}$ provides a structure for the population, in which an individual does not necessarily interact with all other individuals. The standard Moran process is recovered if $\cal{G}$ is the complete graph on $N$ vertices with all edges having the same weight. \section{Exact fixation probabilities in the star graph}\label{secExact} Let $n$ be the number of leaves in the star graph. A \textit{configuration} is described by the type A or B of the individual occupying the center and by the number $i \in \{0, 1, \dots, n\}$ of A individuals occupying the leaves. Let $(0,i)$ denote the configuration in which there is a B at the center and $i$ A individuals at the leaves. Accordingly, $(1,i)$ denotes the configuration in which there is an A at the center and $i$ As at the leaves. Fig. \ref{startransitions} illustrates, taking the case $n=5$ as an example, the possible configuration transitions in one time step, i.e. one reproduction and one death lottery, for the Moran process in the star graph for either BD or DB updates. \begin{figure} \begin{center} \includegraphics[width=0.85 \textwidth]{startransitions} \caption{\label{startransitions} The possible transitions between configurations of the star graph with $n=5$ leaves.} \end{center} \end{figure} According to the figure, the lower configurations $(0,i)$ may in general move to the left neighbor $(0,i-1)$, upwards to $(1,i)$, or remain fixed. 
The \textit{minus} transition $(0,i) \rightarrow (0,i-1)$ happens when one draws the center vertex occupied by a B individual for reproduction and a leaf vertex occupied by an A for death. The probability of that transition is denoted $t_{i,-}$. Similarly, the \textit{up} transition $(0,i) \rightarrow (1,i)$ happens when one draws one of the $i$ leaves occupied by an A individual for reproduction and the center occupied by a B for death. The corresponding transition probability is denoted $t_{i,u}$. The other possible non-trivial transitions are the \textit{plus} transition $(1,i) \rightarrow (1,i+1)$, with probability $t_{i,+}$, and the \textit{down} transition $(1,i) \rightarrow (0,i)$, with probability $t_{i,d}$. All the transition probabilities introduced above may be easily calculated according to the definitions of the process. As an example, for the BD updating, $t_{i,+}$ is the probability of drawing for reproduction the center vertex occupied by an A times the probability of drawing any of the $n-i$ leaves occupied by a B individual for death. The first of these, taking into account (\ref{replott}) and that the total number of A individuals in the population is $i+1$, is $r_{i+1}/[(i+1)r_{i+1}+n-i]$. The second probability, taking into account that the death lottery is uniform, is simply $(n-i)/n$. The complete set of non-trivial transition probabilities is given below. In deducing these formulae we also remind the reader that in the BD case the death probability for the center is 1 if a leaf is drawn for reproduction. In the DB case, the reproduction probability of the center is 1 if a leaf is drawn for death.
The result is: \textbf{BD case}: \begin{align} t_{i,u} &= \frac{ir_i}{ir_i+ n-i+1} &t_{i,d} &= \frac{n-i}{(i+1)r_{i+1}+n-i} \nonumber \\ t_{i,-} &= \frac{1}{ir_i+n-i+1}\,\frac{i}{n} &t_{i,+} &= \frac{r_{i+1}}{(i+1)r_{i+1}+n-i}\,\frac{n-i}{n} \; \label{BDprobs} \end{align} \textbf{DB case}: \begin{align} t_{i,u} &= \frac{1}{n+1}\, \frac{ir_i}{ir_i+n-i} & t_{i,d} &= \frac{1}{n+1}\, \frac{n-i}{ir_i+n-i} \nonumber\\ t_{i,-} &= \frac{i}{n+1} &t_{i,+} &= \frac{n-i}{n+1} \;\label{DBprobs} \end{align} Let $P_i^0$ be the A fixation probability with initial condition $(0,i)$. Similarly, $P_i^1$ will denote the A fixation probability for initial condition $(1,i)$. Due to the no-mutation hypothesis, configurations $(0,0)$ and $(1,n)$, in which one single type is present, are absorbing. We have thus boundary conditions \begin{equation} \label{bc} P_0^0=0 \;\; \textrm{and} \;\; P_n^1=1\;. \end{equation} The equations for calculating the fixation probabilities in both BD and DB are \begin{eqnarray} P_i^0 &=& t_{i,u} P_i^1 + t_{i,-} P_{i-1}^0+ (1-t_{i,u}-t_{i,-})P_i^0 \nonumber\\ P_i^1 &=& t_{i,d} P_i^0 + t_{i,+} P_{i+1}^1+ (1-t_{i,d}-t_{i,+})P_i^1 \nonumber \;, \end{eqnarray} where $i$ runs between 1 and $n$ for the first line and from 0 to $n-1$ in the second. In order to find the fixation probabilities, we have thus to solve the above system of $2n$ equations, taking into account the boundary conditions (\ref{bc}). The above equations can be rewritten as \begin{eqnarray} P^0_i&=&\beta_i P^0_{i-1}+(1-\beta_i)P^1_i \label{eqp0i} \\ P^1_i&=&\alpha_i P^1_{i+1}+(1-\alpha_i)P^0_i \label{eqp1i}\;, \end{eqnarray} with \begin{equation} \label{defalphabeta} \beta_i= \frac{t_{i,-}}{t_{i,u}+t_{i,-}} \;\;\; \alpha_i= \frac{t_{i,+}}{t_{i,d}+t_{i,+}} \;. \end{equation} We may now look again at (\ref{BDprobs}) and (\ref{DBprobs}) and understand why the order BD or DB of the lotteries is so important for the star graph. 
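The transition probabilities (\ref{BDprobs})--(\ref{DBprobs}) and the quantities $\alpha_i$, $\beta_i$ of (\ref{defalphabeta}) can be transcribed directly into code. A Python sketch (names are ours; recall that $\beta_i$ is only defined for $i \geq 1$ and $\alpha_i$ for $i \leq n-1$):

```python
def star_transitions(i, n, r, update):
    """Non-trivial transition probabilities (t_u, t_d, t_minus, t_plus)
    for the star with n leaves, Eqs. (BDprobs) and (DBprobs); r(k) is
    the relative fitness when k A-individuals are present."""
    if update == "BD":
        t_u = i * r(i) / (i * r(i) + n - i + 1)
        t_d = (n - i) / ((i + 1) * r(i + 1) + n - i)
        t_m = i / (n * (i * r(i) + n - i + 1))
        t_p = (n - i) * r(i + 1) / (n * ((i + 1) * r(i + 1) + n - i))
    else:  # "DB"
        t_u = i * r(i) / ((n + 1) * (i * r(i) + n - i))
        t_d = (n - i) / ((n + 1) * (i * r(i) + n - i))
        t_m = i / (n + 1)
        t_p = (n - i) / (n + 1)
    return t_u, t_d, t_m, t_p

def alpha_beta(i, n, r, update):
    """alpha_i and beta_i of Eq. (defalphabeta); beta_i needs i >= 1
    and alpha_i needs i <= n - 1."""
    t_u, t_d, t_m, t_p = star_transitions(i, n, r, update)
    alpha = t_p / (t_d + t_p) if i < n else float("nan")
    beta = t_m / (t_u + t_m) if i > 0 else float("nan")
    return alpha, beta
```

A quick check: in the neutral case $r_i \equiv 1$ a direct computation gives $\alpha_i = \beta_i$ for both update rules ($1/(n+1)$ for BD and $n/(n+1)$ for DB), so all ratios $\alpha_i/\beta_i$ equal 1.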
We let $n \rightarrow \infty$ and fix the fraction $x=i/(n+1)$ of A individuals in the population. In this limit, for fixed $x$, $i$ is of the order of $n$. Then in the BD case the probabilities $t_{i,\pm}$ involving drawing the center for reproduction are $O(1/n)$, i.e. small. On the contrary, the probabilities of drawing some leaf for reproduction are $O(1)$, and so are $t_{i,u}$ and $t_{i,d}$. The center in the BD case is very much influenced by the leaves. The reader may repeat a similar reasoning and see that in the DB case, on the contrary, the center influences very much the leaves. As a consequence, one should expect that whether the center is occupied by an A or a B should not influence very much the fixation probability of the A individuals in the BD case. On the contrary, we expect that in the DB case the occupation of the center by an A should increase substantially the fixation probability of the A individuals, and occupation of the center by a B should decrease substantially the A fixation probability. This strong difference between BD and DB is apparent in Figs. \ref{figindep} and \ref{figcoord}. In order to solve the set of equations (\ref{eqp0i}) and (\ref{eqp1i}), we start by defining the differences \[ d_i^0= P_i^0-P_{i-1}^0, \;\;\; d_i^1= P_i^1-P_{i-1}^1\] and \[d_i^{10}= P_i^1-P_i^0 \;.\] We may then rewrite equations (\ref{eqp0i}) and (\ref{eqp1i}) respectively as \begin{eqnarray} d^0_i&=&\frac{1-\beta_i}{\beta_i}d^{10}_i\nonumber\\ d^1_i&=&\frac{1-\alpha_{i-1}}{\alpha_{i-1}}d^{10}_{i-1}\nonumber \end{eqnarray} and also, by the definitions of the differences, we get \begin{equation*} d^{10}_i=d^1_i-d^0_i+d^{10}_{i-1}\;. \end{equation*} We may consider the last three equations as a linear system for obtaining $d_i^0$, $d_i^1$ and $d_i^{10}$ all in terms of $d_{i-1}^{10}$. 
Solving it, we get \begin{eqnarray} d^1_i&=&\frac{1-\alpha_{i-1}}{\alpha_{i-1}}d^{10}_{i-1}\nonumber\\ d^{10}_{i}&=&\frac{\beta_i}{\alpha_{i-1}}d^{10}_{i-1}\label{soldiff}\\ d^0_i&=&\frac{1-\beta_i}{\alpha_{i-1}}d^{10}_{i-1}\nonumber \end{eqnarray} Solving the recursion given by the second of (\ref{soldiff}), we get \begin{eqnarray} d^{10}_i&=&\prod_{j=1}^{i}\left(\frac{\beta_j}{\alpha_{j-1}}\right)d_0^{10}\nonumber\\ &=&\prod_{j=1}^{i}\left(\frac{\beta_j}{\alpha_{j-1}}\right)P^1_0 \label{sold10i}\;, \end{eqnarray} because, by the first of (\ref{bc}), we have $d^{10}_0= P^1_0-P^0_0=P^1_0$. Observe here that this formula proves that for all $i$ we have $P^1_i>P^0_i$. Substituting (\ref{sold10i}) in the first of (\ref{soldiff}), we have \begin{eqnarray} d^1_i&=&\frac{1-\alpha_{i-1}}{\alpha_{i-1}}\prod_{j=1}^{i-1}\left( \frac{\beta_j}{\alpha_{j-1}} \right)P^1_0\nonumber\\ &=& \frac{1-\alpha_{i-1}}{\alpha_0}\, \prod_{j=1}^{i-1} \frac{\beta_j}{\alpha_{j}}\,P^1_0\;.\label{sold1i} \end{eqnarray} An explicit formula for the $P^1_0$ can now be found, because, due to the second in (\ref{bc}), \begin{eqnarray} 1&=&P_n^1= d_n^1+d_{n-1}^1+ \dots +d_1^1+P^1_0\nonumber\\ &=& \frac{P_0^1}{\alpha_0}\left(1+ \sum_{j=1}^{n-1}(1-\alpha_j)\prod_{k=1}^j \frac{\beta_k}{\alpha_k}\right)\nonumber\;. \end{eqnarray} Solving this for $P_0^1$, we get \begin{equation}\label{solP10} P^1_0=\frac{\alpha_0}{1+\sum_{j=1}^{n-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}}\;. \end{equation} The $P^1_i$, $i=1,2, \dots, n-1$, may be obtained recursively by $P^1_i=P^1_{i-1}+d^1_i$ and using (\ref{sold1i}) and (\ref{solP10}). The result is \begin{equation} \label{solP1i} P^1_i=\frac{1+\sum_{j=1}^{i-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}}{1+\sum_{j=1}^{n-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}}\;. \end{equation} Finally, we may calculate the $P^0_i$, $i=1,2, \dots, n$ as $P^1_i-d^{10}_i$. 
Using (\ref{solP1i}), (\ref{sold10i}) and (\ref{solP10}) we obtain \begin{eqnarray} P^0_i&=&\frac{1+\sum_{j=1}^{i-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}- \beta_i \prod_{j=1}^{i-1} \frac{\beta_j}{\alpha_j}}{1+\sum_{j=1}^{n-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}} \nonumber\\ &=& \frac{\sum_{j=1}^{i}(1-\beta_j) \prod_{k=1}^{j-1}\frac{\beta_k}{\alpha_k}} {1+\sum_{j=1}^{n-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}} \;. \label{solP0i} \end{eqnarray} Formulae (\ref{solP10}), (\ref{solP1i}) and (\ref{solP0i}) are the exact and explicit solution for the fixation probability of A individuals at any configuration in the star graph for frequency-dependent fitnesses and either BD or DB updating. If Ewens \cite{ewens} termed ``unwieldy'' the analogous and simpler formula (\ref{exactpii}) for the unstructured population, then these formulae deserve all the more a better comprehension. Before we do that in the next section, repeating some work similar to \cite{graphshapes} for the unstructured case, let us simply plot the results of (\ref{solP10}), (\ref{solP1i}) and (\ref{solP0i}) in two illustrative cases and for both update rules. In the first example, we take $f_i=r$, $g_i=1$, with $r>0$, so that $r_i=r$ in (\ref{defri}). This choice defines the transition probabilities in (\ref{BDprobs}) and (\ref{DBprobs}) to be substituted in (\ref{defalphabeta}). The interpretation of $r$ is that in the reproduction lottery the probability of a single chosen A individual being drawn is $r$ times the probability of a single chosen B being drawn. If $r>1$, A individuals are fitter; if $0<r<1$, Bs are fitter. The case $r=1$ is called \textit{neutral}. As $r$ is independent of $i$, we are in the simpler context of \textit{frequency-independent} fitness. Fig. \ref{figindep} illustrates the behavior of the fixation probability of A individuals as a function of the number of A individuals in the leaves of the star graph, both for BD and DB.
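Formulae (\ref{solP10}), (\ref{solP1i}) and (\ref{solP0i}) translate into a short routine. A Python sketch (our own transcription; \texttt{alpha[i]} must be supplied for $i=0,\dots,n-1$ and \texttt{beta[i]} for $i=1,\dots,n$):

```python
def star_fixation(n, alpha, beta):
    """Exact fixation probabilities (solP10), (solP1i), (solP0i) on the
    star with n leaves.  alpha[i] is used for i = 0, ..., n-1 and
    beta[i] for i = 1, ..., n; returns the lists P0[0..n], P1[0..n]."""
    ratio = [1.0]        # ratio[j] = prod_{k=1}^{j} beta_k / alpha_k
    for k in range(1, n):
        ratio.append(ratio[-1] * beta[k] / alpha[k])
    den = 1.0 + sum((1.0 - alpha[j]) * ratio[j] for j in range(1, n))
    P1 = [alpha[0] / den] + [
        (1.0 + sum((1.0 - alpha[j]) * ratio[j] for j in range(1, i))) / den
        for i in range(1, n + 1)]
    P0 = [0.0] + [
        sum((1.0 - beta[j]) * ratio[j - 1] for j in range(1, i + 1)) / den
        for i in range(1, n + 1)]
    return P0, P1
```

In the neutral BD case, where $\alpha_i=\beta_i=1/(n+1)$, the routine returns $P^1_0 = 1/(n^2+1)$, a direct consequence of (\ref{solP10}).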
We take frequency-independent fitness with $r=1.2$. \begin{figure} \begin{center} \includegraphics[width= \textwidth]{indepBDDB} \caption{\label{figindep} Left panel: plots of the fixation probabilities in the star graph with $n=20$ leaves, BD updating and frequency-independent relative fitness $r=1.2$. For the same value of $i$ the difference between $P^0_i$ and $P^1_i$ is so small that it is almost invisible. Right panel: the same for DB updating. In this case, the blue dots are the $P^0_i$, noticeably smaller than the $P^1_i$, represented by orange dots.} \end{center} \end{figure} For a general pay-off matrix $M$, in which types A and B are numbered respectively as 1 and 2, $m_{k \ell}$ is the pay-off of type $k$ when interacting with type $\ell$. The standard Evolutionary Game Theory \cite{nowakbook, graphshapes} fitnesses are \begin{eqnarray} \label{fi} f_i &=& m_{11} \, \frac{i-1}{N-1} \,+\, m_{12}\, \frac{N-i}{N-1} \\ \label{gi} g_i &=&m_{21} \, \frac{i}{N-1} \,+\, m_{22}\, \frac{N-i-1}{N-1} \;. \end{eqnarray} In the second example, illustrated in Fig. \ref{figcoord}, the pay-off matrix is \begin{equation} \label{payoffcoord} M=\left( \begin{array}{cc} 10 & 5 \\ 6 & 10 \\ \end{array} \right) \;, \end{equation} i.e. a \textit{coordination} or \textit{stag hunt} game. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{coordBDDB} \caption{\label{figcoord} As in Fig. \ref{figindep}, the left and right panels refer respectively to the BD and DB updatings. Here we use the pay-off matrix (\ref{payoffcoord}) to define frequency-dependent fitnesses by (\ref{fi}) and (\ref{gi}). The number of leaves in the star graph is $n=10$. Again, as in Fig. \ref{figindep}, the difference between $P^0_i$ and $P^1_i$ is almost invisible in the BD plot.
For the DB plot, the blue dots are the $P^0_i$ and the orange dots are the $P^1_i$.} \end{center} \end{figure} \section{Asymptotics}\label{secAsymp} Taking the ratio between (\ref{fi}) and (\ref{gi}), it is easy to see that \begin{equation} \label{rR} r_i=R(i/n)+O(1/n)\;, \end{equation} where \begin{equation} \label{defR} R(x)=\frac{m_{11}x + m_{12} (1-x)} {m_{21}x+m_{22}(1-x)} \end{equation} is independent of $n$. Let $x \in [0,1]$ be a fixed fraction of A individuals in the leaves of the star graph and $[n x]$ be the integer closest to $nx$. We define the asymptotic fixation probabilities as \begin{equation*} \pi^0 (x)=\lim\limits_{n \rightarrow \infty} P^0_{[nx]} \;\;\; \textrm{and} \;\;\; \pi^1 (x)=\lim\limits_{n \rightarrow \infty} P^1_{[nx]}\;. \end{equation*} In (\ref{solP10}), (\ref{solP1i}) and (\ref{solP0i}) the ratio $\alpha_k/\beta_k$ assumes a role similar to the relative fitness $r_k$ in the unstructured case (\ref{exactpii}). The strong qualitative differences between BD and DB seen in Figs. \ref{figindep} and \ref{figcoord} are justified by the fact that $\alpha_k/\beta_k$ is so different in the two cases when $n \rightarrow \infty$. We now separate the two cases. \subsection{DB case} Using (\ref{defalphabeta}) and (\ref{DBprobs}), in the DB case we get \begin{equation} \label{DBratio} \frac{\alpha_i}{\beta_i}=1+\frac{1}{n} \, \frac{r_i-1}{1+ \frac{i}{n}(r_i-1)}+O(\frac{1}{n^2})\;, \end{equation} \begin{eqnarray} 1-\alpha_i&=&\frac{1}{n} \ \frac{1}{1+\frac{i}{n}(r_i-1)}+O(\frac{1}{n^2}) \label{1-aDB}\\ 1-\beta_i&=&\frac{1}{n} \ \frac{r_i}{1+\frac{i}{n}(r_i-1)}+O(\frac{1}{n^2})\label{1-bDB}\;. \end{eqnarray} We proceed by writing the product of the $\alpha_k/\beta_k$ as an exponential of a sum.
Because, by (\ref{DBratio}), $\log(\alpha_k/\beta_k)$ is $O(1/n)$, the sum conveniently converges to an integral: \begin{eqnarray*} \prod_{k=1}^{[nx]}\frac{\beta_k}{\alpha_k}&=&\exp \left[- \sum_{k=1}^{[nx]}\log \frac{\alpha_k}{\beta_k}\right] \nonumber\\ &=&\exp \left[-\sum_{k=1}^{[nx]}\frac{1}{n}\left( \frac{r_k-1}{1+\frac{k}{n}(r_k-1)}+O(\frac{1}{n}) \right)\right]\\ &\stackrel{n \rightarrow \infty}{\longrightarrow}&\exp\left[-\displaystyle\int_{0}^{x}\frac{R(z)-1}{1+z(R(z)-1)} dz\right]\;. \end{eqnarray*} Observe now that the convenient $1/n$ leading behavior is present also in (\ref{1-aDB}) and (\ref{1-bDB}). So the sums in (\ref{solP1i}) and (\ref{solP0i}) converge to integrals, too. We may then define functions which are the limits of the sums appearing in the numerators and denominators of (\ref{solP1i}) and (\ref{solP0i}): \begin{equation*} \varTheta(x)\equiv \int_{0}^{x}\frac{R(y)}{1+y \ (R(y)-1)} \ e^{-\int_{0}^{y}\frac{R(z)-1}{1+z(R(z)-1)} \ dz}\ dy \end{equation*} and \begin{equation*} \varXi(x)\equiv\int_{0}^{x}\frac{1}{1+y \ (R(y)-1)} \ e^{-\int_{0}^{y}\frac{R(z)-1}{1+z(R(z)-1)} \ dz}\ dy\;. \end{equation*} In terms of these functions, we prove that in the DB case \begin{equation*} \pi^0(x)\,=\, \frac{\varTheta(x)}{1+\varXi(1)}\;\;\; \textrm{and}\;\;\; \pi^1(x)\,=\,\frac{1+\varXi(x)}{1+\varXi(1)}\;. \end{equation*} \subsection{BD case} The formulae analogous to (\ref{DBratio}-\ref{1-bDB}) in the BD case are \begin{equation}\label{BDratio} \frac{\alpha_i}{\beta_i}=r_i\, r_{i+1}\left[1+\frac{1}{n}\, \frac{1-r_i r_{i+1}}{r_i}+O(\frac{1}{n^2})\right]\;, \end{equation} \begin{eqnarray} 1-\alpha_i&=& 1-\frac{r_{i+1}}{n}+O(\frac{1}{n^2})\label{1-aBD} \\ 1-\beta_i&=&1-\frac{1}{n\,r_i}+O(\frac{1}{n^2})\label{1-bBD}\;. \end{eqnarray} Lieberman et al. 
\cite{lieberman} had already noticed -- for the BD updating and frequency-independent fitness -- that the fixation probability of a single A individual in a star graph is asymptotically equal to the fixation probability in an unstructured population with relative fitness $r$ replaced by $r^2$. This replacement is responsible for the fact that the star is an amplifier of selection. At first sight, if we take $r_i=r$, neglect the corrections tending to 0 when $n \rightarrow \infty$ in (\ref{BDratio}-\ref{1-bBD}) and use the exact formula (\ref{solP1i}), we see such a result. But the above argument is not strictly true, because Chalub \cite{chalubstar} noticed that the asymptotic expression of Lieberman et al. has to be corrected. The source for this correction is that the $O(1/n)$ contribution in (\ref{BDratio}) cannot be simply neglected. The analysis of the BD case is more involved, but similar to the asymptotics for the unstructured population case, thoroughly explained in \cite{graphshapes}. Part of the same analysis had been done before by Antal and Scheuring \cite{AntalScheuring}, but \cite{graphshapes} is more complete and also corrects some mistakes. In the following we will try to give a reasonable account of this analysis, referring the reader to the cited papers for the technical details. From a mathematical point of view, the main difference between BD and DB is that, contrary to the DB case, $\log(\alpha_k/\beta_k)$ is $O(1)$, not $O(1/n)$. This is what makes the BD case similar to the unstructured case. For $x \in [0,1]$ and $R$ given by (\ref{defR}), define, as in \cite{graphshapes}, the \textit{fitness potential} \begin{equation} \label{defL} L(x) \,\equiv \, - \, \int_{0}^{x} \log R(t) \,dt \;. \end{equation} A similar definition was given by Chalub and Souza \cite{chalubsouza1} and further explored by the same authors in \cite{chalubsouza2}. It can be seen that $L$ always has a single maximum point $x^* \in [0,1]$.
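Since $L'(x) = -\log R(x)$, an interior maximum point $x^*$ lies where $R$ crosses 1. A Python sketch (trapezoidal quadrature; helper names are ours) locating $x^*$ for the coordination matrix (\ref{payoffcoord}):

```python
import math

def fitness_potential(R, x, m=400):
    """L(x) = -int_0^x log R(t) dt, Eq. (defL), by the trapezoidal rule."""
    h = x / m
    s = 0.0
    for k in range(m):
        s += 0.5 * h * (math.log(R(k * h)) + math.log(R((k + 1) * h)))
    return -s

# R(x) for the pay-off matrix (payoffcoord), via Eq. (defR):
R = lambda x: (10.0 * x + 5.0 * (1.0 - x)) / (6.0 * x + 10.0 * (1.0 - x))

# L increases while R < 1 and decreases once R > 1, so the maximiser
# is the interior root of R(x) = 1, here x* = 5/9.
grid = [k / 1000.0 for k in range(1001)]
x_star = max(grid, key=lambda x: fitness_potential(R, x))
```

The grid search recovers the interior maximiser $x^* = 5/9$ up to the grid resolution.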
The location of $x^*$ in the interior of the interval, or in one of its boundary points depends on the \textit{invasion scenario} \cite{taylor, graphshapes}, i.e. on the possible signs of $R(0)-1$ and $R(1)-1$. In order to deal with the correction by Chalub, related to the $O(1/n)$ terms in (\ref{BDratio}), we define also \begin{equation} \label{defC} C(x) \,\equiv \, \int_{0}^{x} \left(\frac{R(t)^2-1}{R(t)}- \frac{\det M}{m_{12}m_{22}}\right) \,dt \;. \end{equation} Using a reasoning similar to the DB case, we write \begin{eqnarray} \prod_{k=1}^{[nx]}\frac{\beta_k}{\alpha_k}&=&\exp \left[-\sum_{k=1}^{[nx]}\log \frac{\alpha_k}{\beta_k}\right] \nonumber\\ &=&\exp\left[-n \sum_{k=1}^{[n x]}\frac{1}{n} (\log r_k+\log r_{k+1}) \right]\, \exp \left[-\sum_{k=1}^{[nx]}\frac{1}{n}\left( \frac{1-r_k r_{k+1}}{r_k}+O(\frac{1}{n}) \right)\right]\nonumber\\ &=& \exp\left[-2n \sum_{k=1}^{[n x]}\frac{1}{n} \log r_k \right]\, \exp \left[-\sum_{k=1}^{[nx]}\frac{1}{n}\left( \frac{1-r_k r_{k+1}}{r_k}+ \frac{\det M}{m_{12}{m_{22}}}+O(\frac{1}{n}) \right)\right] \nonumber \\ &\stackrel{n \rightarrow \infty}{\sim}& e^{C(x)+E(x)}\, e^{2nL(x)} \label{asympprod} \;, \end{eqnarray} where the $E(x)$ term will be explained below. In the above asymptotic formula, the main term is $e^{2n L(x)}$. It appears both in \cite{graphshapes, AntalScheuring} and also, as $r^{-2n}$, for the frequency-independent fitnesses case in \cite{lieberman}. The $e^{C(x)}$ generalizes the correction by Chalub \cite{chalubstar}, and the $e^{E(x)}$ is of the type called ``continuation error" in \cite{graphshapes}, because it appears when the sum $\sum_{k=1}^{[n x]}\frac{1}{n} \log r_k$ is replaced by the integral $-L(x)$. The first thing to be explained in the BD case is why the difference $P^1_i-P^0_i = d^{10}_i$ is so small when $n$ is large. 
This can be seen by substituting (\ref{solP10}) in (\ref{sold10i}) and obtaining \begin{equation}\label{altd10i} d^{10}_i = \frac{\beta_i \, \prod_{j=1}^{i-1} \frac{\beta_j}{\alpha_j}} {1+\sum_{j=1}^{n-1}(1-\alpha_j)\prod_{k=1}^{j}\frac{\beta_k}{\alpha_k}}\;, \end{equation} which holds also in the DB case. Using the asymptotic expression (\ref{asympprod}), if $n$ is large, then the sum in the denominator of (\ref{altd10i}) is dominated by the term such that $L(j/n)$ is maximum, which tends to $x^*$ when $n \rightarrow \infty$. Using (\ref{asympprod}) also in the numerator of (\ref{altd10i}), we get for large $n$ and fixed $x \in [0,1]$, \[d^{10}_{[n x]} \approx \frac{1}{n} \, c_n(x) \, e^{2n (L(x)-L(x^*))} \;,\] where the $1/n$ factor comes from the $\beta_i=1/(n r_i+1)$ in the numerator and $c_n(x)$ may be either $O(1)$ if $x^*$ is a boundary point, or $O(n^{-1/2})$ if $x^*$ is an interior point. We omit here some technicalities which can be found in \cite{graphshapes}. In any case, if $x \neq x^*$, $d^{10}_{[n x]}$ is exponentially small in $n$, and, even if $x=x^*$, it tends to 0 as $n \rightarrow \infty$, although more slowly. In the example plotted in the left panel of Fig. \ref{figcoord} we have $x^*=5/9$, which locates correctly the region in which the difference $P^1_i-P^0_i$ is more visible. Having accepted that the difference between $P^1_i$ and $P^0_i$ is small, we may then use the expression (\ref{solP1i}) for the former to approximate both. We use it, because we can now approximate $1-\alpha_j$ by 1, see (\ref{1-aBD}), and (\ref{solP1i}) takes the same form as (\ref{exactpii}) with $r_k$ replaced by $\alpha_k/ \beta_k$. The possible graph shapes and asymptotic behavior for (\ref{exactpii}) were studied in \cite{graphshapes} and hold here as approximations for the $P^1_i$ and $P^0_i$ in the BD case. In particular, for fixed $x \in(0,1)$, $P^1_{[n x]}$ and $P^0_{[n x]}$ both tend to 0 when $n \rightarrow \infty$ if $x<x^*$ and to 1 if $x>x^*$. 
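Closing the asymptotic analysis, we note that the DB-case limits $\pi^0$ and $\pi^1$, given earlier in this section in terms of $\varTheta$ and $\varXi$, are straightforward to evaluate numerically. A Python sketch (our own discretisation; the quadrature step and function names are assumptions, and the inner exponent is accumulated along the same grid):

```python
import math

def theta_xi(R, x, m=2000):
    """Trapezoidal evaluation of Theta(x) and Xi(x) from the DB case."""
    h = x / m
    def w(y):
        return 1.0 + y * (R(y) - 1.0)
    inner = theta = xi = 0.0   # inner approximates int_0^y (R(z)-1)/w(z) dz
    for k in range(m):
        y0, y1 = k * h, (k + 1) * h
        e0 = math.exp(-inner)
        inner += 0.5 * h * ((R(y0) - 1.0) / w(y0) + (R(y1) - 1.0) / w(y1))
        e1 = math.exp(-inner)
        theta += 0.5 * h * (R(y0) * e0 / w(y0) + R(y1) * e1 / w(y1))
        xi += 0.5 * h * (e0 / w(y0) + e1 / w(y1))
    return theta, xi

def db_pi(R, x):
    """Asymptotic DB fixation probabilities (pi^0(x), pi^1(x))."""
    theta_x, xi_x = theta_xi(R, x)
    _, xi_1 = theta_xi(R, 1.0)
    return theta_x / (1.0 + xi_1), (1.0 + xi_x) / (1.0 + xi_1)
```

For frequency-independent fitness $R \equiv r$ the integrals are elementary, e.g. $\varXi(x) = x/(1+x(r-1))$, which provides a convenient check of the quadrature.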
\section{Conclusions}\label{secconc} The discovery of an exact solution for a simplified problem, such as the Onsager solution \cite{onsager} for the two-dimensional Ising model in Statistical Mechanics, is an important result. In fact, one can proceed to more realistic models and develop approximation techniques based on the increased understanding gained by the exact solution. In this paper we have generalized the exact solution found by Broom and Rycht\'{a}\v{r} \cite{broomrychtar} and derived by another method by Monk et al. \cite{monk}. Both papers provide explicit formulae only when the fitnesses of A and B individuals are frequency-independent and the updating rule is BD. Our formulae (\ref{solP10}), (\ref{solP1i}) and (\ref{solP0i}) are rather complicated, but we can understand them very well in the important asymptotic limit when the number of leaves $n$ in the star graph tends to $\infty$. We see rather important differences between the DB and BD cases. In the DB case we obtain asymptotic formulae in terms of integrals. The BD case is harder, but it can be understood by techniques introduced elsewhere \cite{AntalScheuring, graphshapes}. We hope that the results of this paper may encourage other researchers to understand more thoroughly the fixation probabilities of the Moran process in more general graphs. \section*{Acknowledgments} This study was financed in part by the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior - Brasil (CAPES) - Finance Code 001. \bibliographystyle{plain}
\section{Introduction} A tired and inattentive driver often breaks driving regulations by entering, for example, the opposite lane. This abnormal driving behaviour is usually detected early by the participating human drivers, who react early enough to prevent harmful situations. Similar to humans, autonomous vehicles should perform anomaly detection as part of the automated driving modules~\cite{engel2019deeplocalization, strohbeck2020multiple, wiederer2020tcg}. Learning normal driving behaviour is thus necessary for detecting anomalies. Anomaly detection is a long-developed approach in computer vision, for instance, to spot abnormal human behaviour~\citep{lu2020few, morais2019learning} or vehicle motion in traffic~\citep{chen2021dual}. In robotics, the approach detects hardware failures of self-flying delivery drones~\citep{sindhwani2020unsupervised} or helps a wheeled robot to navigate around unseen obstacles~\citep{mcallister2019robustness}. Although these approaches can be transferred to automated driving, they only consider a single agent in a static environment. That is barely the case for autonomous vehicles, where multiple agents influence each other through constant interactions. In this work, we present an approach to detect anomalies of multiple agents based on their trajectories. We propose a spatio-temporal graph auto-encoder (STGAE) for trajectory embedding. Similar to the standard auto-encoder, it learns a latent representation of multi-agent trajectories. The main innovation of the STGAE is the ability to simultaneously learn multiple trajectories for a dynamic number of agents. In a second step, we perform kernel density estimation (KDE) on the latent representation of the STGAE. We empirically observe that the KDE captures well the distribution of the normal trajectory data. During the test phase, we detect anomalies in low-density regions of the estimated density.
To evaluate our approach, we introduce a new dataset for multi-agent trajectory anomaly detection for automated driving. The current automotive datasets~\citep{caesar2020nuscenes, braun2019eurocity} contain many hours of recordings, but lack anomalies due to the rareness of abnormal driving situations, whereas anomaly detection datasets~\citep{liu2018ano_pred, lu2013abnormal} have the required anomaly labels but are not relevant to automotive and automated driving problems. Finally, scenario staging is often applied in behaviour modelling~\citep{kooij2014context}, but it is prohibitive for driving anomalies since it would put the actors in danger of their lives. For these reasons, we develop a multi-agent simulation and create a dataset with normal and abnormal manoeuvres. Then, we evaluate our method for single- and multi-agent configurations, including comparisons with deep sequential auto-encoders and linear models. Moreover, we rely on the standard metrics for anomaly detection to show that our approach delivers promising results compared to the related methods. \section{Related Work} \label{sec:related_work} \mypara{Multi-Agent Trajectory Modelling.} Trajectory prediction is essential for automated driving~\citep{elnagar2001automation, zernetsch2016network}. Modelling the interaction with the environment and between the participants improves the prediction quality~\citep{kooij2014context, kitani2012activity}. The idea of information exchange across agents is actively studied in the literature~\citep{gupta2018social, sadeghian2019sophie, lee2017desire}. For example, Alahi~\textit{et al.} introduced the social-pooling layer into LSTMs to incorporate interaction features between agents~\citep{alahi2016social}. Recently, graph neural networks (GNN) have outperformed traditional sequential models on trajectory prediction benchmarks~\citep{tang2019multiple, ivanovic2019trajectron}.
GNNs explicitly model the agents as nodes and their connection as edges to represent the social interaction graph. Similarly, the social spatio-temporal graph convolution neural network (ST-GCNN)~\citep{morais2019learning} extracts spatial and temporal dependencies between agents. Also, we use a related architecture to design our spatio-temporal graph auto-encoder for learning the normal data representation. \mypara{Anomaly Detection.} On image and video data, anomaly detection is a long-standing topic. Hand-crafted and motion features were traditionally employed for anomaly detection based on probabilistic PCA~\citep{kim2009observe} or one-class SVM~\citep{hu2009abnormal}. At the moment, deep neural networks dominate within the anomaly detection approaches~\citep{xu2017detecting, kim2015deep}. The usual approach is training an auto-encoder with normal data and then measuring the deviation of the test samples from the learned representation. For instance, Morais \textit{et al.}~\citep{morais2019learning} proposed a sequential auto-encoder to encode skeleton-based human motion and considered the reconstruction error as a measure for the detection of irregular motion patterns. Also, the variational auto-encoder (VAE) in~\citep{park2018multimodal} can learn the normal data distribution from a set of raw sensor signals in combination with a long short-term memory (LSTM) network. Unlike our work, these approaches assume a fixed number of input streams, i.e. sensor signals, instead of a varying number of trajectories, i.e. agents. Here, we formulate the idea of learning the normal data distribution with the spatio-temporal graph auto-encoder. Furthermore, we estimate the normal data density function instead of relying on the reconstruction error. 
\textbf{Anomaly Detection in Trajectory Prediction and Control.} Anomaly detection has been extensively studied in robotics, e.g.~model predictive control~\citep{lee2020control}, collaborative robots~\citep{hayes2017interpretable}, autonomous drones~\citep{sindhwani2020unsupervised}, robot navigation in crowds~\citep{bera2016realtime} and uncertain environments~\citep{ji2020multi, mcallister2019robustness}. Nevertheless, the prior works only address the problem of single-agent anomaly detection. We tackle the problem of detecting anomalies in multi-agent trajectories. \section{Anomaly Detection in Multi-Agent Trajectories} \label{sec:method} We study the problem of anomaly detection for multi-agent trajectories. The input to our approach is a scene with $N$ agent trajectories of length $T$, where $N$ is dynamic over scenarios. We describe the observed trajectory of agent $i$ with the agent states $\mathbf{s}^i = \{\mathbf{s}_t^i\}_{t=1}^T$, where $\mathbf{s}_t^i = (x_t^i, y_t^i)$ denotes the agent location in x- and y-coordinates. Our goal is to estimate the anomaly score $\alpha_t \in [0, 1]$, i.e. normal or abnormal, for each time step of an unseen scene during testing, while only showing normal scenes during training. We present a two-stage approach, where first a spatio-temporal graph convolution network auto-encoder is trained to represent normal trajectories in the feature space (Sec.~\ref{subsec:STGAE}). Second, we use the latent representation to fit a probabilistic density to the normal trajectories with kernel density estimation (Sec.~\ref{subsec:KDE}). Finally, we present the anomaly detection score (Sec.~\ref{subsec:AD}) given the estimated density. The supplementary material provides a visualisation of the training and inference process.
\subsection{Spatio-Temporal Graph Auto-Encoder} \label{subsec:STGAE} We define the spatio-temporal graph auto-encoder (STGAE) as the composition of the multi-agent trajectory encoder $g(\cdot)$ and the trajectory decoder $f(\cdot)$. The encoder maps a set of agent trajectories on the latent representation. The decoder transforms the latent representation back to trajectories. \mypara{Trajectory Encoding.} The encoder is designed as a spatio-temporal graph convolution neural network (ST-GCNN)~\citep{yan2018spatial}. Given a set of $N$ agent trajectories of length $T$, we define the spatio-temporal graph $\mathcal{G} = \{\mathcal{G}_t\}_{t=1}^{T}$ as a set of directed spatial graphs $\mathcal{G}_t = (\mathcal{V}_t, \mathcal{E}_t)$. The spatial graphs model the multiple agents as nodes $\mathcal{V}_t$ and their connectivity as edges $\mathcal{E}_t$ to compute pairwise influence. The set of graph nodes $\mathcal{V}_t = \{\mathbf{v}_t^i\}_{i=1}^{N}$ represent the agent states in terms of the relative location $\mathbf{v}_t^i = (x_t^i-x_{t-1}^i, y_t^i-y_{t-1}^i)$. We define the edges $\mathcal{E}_t = \{ e_t^{ij}\}_{i,j=1}^{N}$ to model how strongly node $i$ influences node $j$ at time step $t$. To this end, a kernel function $e_t^{ij}=\kappa_{edge}(\mathbf{v}_t^i, \mathbf{v}_t^j)$~\citep{mohamed2020social} measures the similarity between two agents in the same time step, defined as \begin{equation} \label{eq:adjacency_weighting} e_{t}^{ij} = \begin{cases} 1/\lVert \mathbf{v}^i_t-\mathbf{v}^j_t\rVert_2 &\text{, } \lVert \mathbf{v}^i_t-\mathbf{v}^j_t\rVert_2 \neq 0 \\ 0& \text{, otherwise.} \end{cases} \end{equation} The influence is high for similar agent states and low otherwise. In the rare case of two agents sharing the same location, we set $e_{t}^{ij}=0$. We define the weighted adjacency matrix $\mathbf{A}_t \in \mathbb{R}^{N\times N}$ based on the connectivity parameters $e_t^{ij}$.
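A minimal NumPy sketch of the edge weighting (\ref{eq:adjacency_weighting}), together with the symmetric normalisation applied to $\mathbf{A}_t$ before the spatial graph convolution (function names are ours, not from the cited works):

```python
import numpy as np

def weighted_adjacency(V_t):
    """A_t from Eq. (eq:adjacency_weighting): inverse Euclidean distance
    between the node features of one time step (V_t is an (N, 2) array);
    zero for coinciding states, which also zeroes the diagonal."""
    N = len(V_t)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            d = np.linalg.norm(V_t[i] - V_t[j])
            if d > 0.0:
                A[i, j] = 1.0 / d
    return A

def normalized_adjacency(A):
    """hat{A} = D^{-1/2} (A + I) D^{-1/2}, with D_ii = sum_j (A + I)_ij."""
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

Since the kernel is symmetric in its two arguments, both $\mathbf{A}_t$ and its normalised version are symmetric matrices.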
We follow the procedure of Kipf~\textit{et al.}~\citep{kipf2016semi} and compute the symmetrically normalised adjacency matrix $\hat{\mathbf{A}}_t = \mathbf{D}_t^{-\frac{1}{2}} \tilde{\mathbf{A}}_t \mathbf{D}_t^{-\frac{1}{2}}$, where $\tilde{\mathbf{A}}_t = \mathbf{A}_t + \mathbf{I}$ is the adjacency matrix with added self-connections, $\mathbf{I}$ denotes the identity matrix, and $\mathbf{D}_t$ is the node degree matrix with diagonal entries $\mathbf{D}_t^{ii} = \sum_{j}\tilde{\mathbf{A}}_t^{ij}$. As introduced by Yan~\textit{et al.}~\citep{yan2018spatial}, we aggregate over neighbouring agents using spatial graph convolutions $g_s(\mathcal{V}, \hat{\mathbf{A}}) = \sigma(\hat{\mathbf{A}}\mathcal{V}\bm{\theta}_s)$ with the activation function $\sigma(\cdot)$ and the network weights $\bm{\theta}_{s}$. We denote $\hat{\mathbf{A}}$ and $\mathcal{V}$ as the concatenation of the weighted adjacency matrices $\{\hat{\mathbf{A}}_t\}_{t=1}^{T}$ and the node features $\{\mathcal{V}_t\}_{t=1}^{T}$ over all time steps, respectively. In dynamical systems, spatial features alone are not expressive enough since they ignore important temporal relationships. To include the time dimension, we connect the same node over consecutive frames using temporal convolutions $g_t(\cdot)$ as introduced in~\citep{lea2017temporal}. We define the encoder $g(\cdot)$ as a composite of spatial graph convolution and temporal convolution layers. As a result, the encoder computes the latent representation $\mathbf{Z} \in \mathbb{R}^{T\times N\times F_{g}}$ with latent feature dimension $F_{g}$. \mypara{Trajectory Decoding.} Given the encoded representation, we define a decoder to reconstruct the set of input trajectories. The decoder applies multiple 2D convolution layers on the temporal dimension of the latent features~\citep{mohamed2020social}. To include agent interactions, the convolutions aggregate features across agents.
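The normalisation and spatial aggregation step of the encoder described above can be sketched as follows (a minimal numpy version with dense matrices and ReLU standing in for the activation $\sigma$; the layer shapes are illustrative):

```python
import numpy as np

def spatial_graph_conv(V_t, A_t, theta_s):
    """One spatial graph convolution g_s(V, A_hat) = sigma(A_hat V theta_s).

    V_t: (N, F) node features, A_t: (N, N) weighted adjacency,
    theta_s: (F, F') layer weights."""
    A_tilde = A_t + np.eye(A_t.shape[0])           # add self-connections
    d = A_tilde.sum(axis=1)                        # node degrees D^ii
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # symmetric normalisation
    return np.maximum(0.0, A_hat @ V_t @ theta_s)  # ReLU activation
```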
We denote the decoder output as $\hat{\bm{\mathcal{V}}}\in \mathbb{R}^{T \times N \times F_{f}}$ with output feature dimension $F_{f}$. \mypara{Training.} For training, we assume that the state of agent $i$ in time step $t$ follows a bi-variate Gaussian distribution $\mathbf{s}_t^i \sim \mathcal{N}_2\left(\bm{\mu}_t^i, \bm{\sigma}_t^i, \rho_t^i\right)$, where $\bm{\mu}_t^i, \bm{\sigma}_t^i\in\mathbb{R}^2$ are the mean and the standard deviation of the location, respectively, and $\rho_t^i\in\mathbb{R}$ is the correlation coefficient. We estimate the parameters of the bi-variate distribution with the decoder output $\hat{\mathbf{v}}_t^i = \{\hat{\bm{\mu}}_t^i, \hat{\bm{\sigma}}_t^i, \hat{\rho}_t^i\}$. We denote the estimated probability density function as $q(\mathbf{s}_t^i| \hat{\bm{\mu}}_t^i, \hat{\bm{\sigma}}_t^i, \hat{\rho}_t^i)$ and train the model to minimise the negative log-likelihood \begin{equation} \label{eq:loss} \mathcal{L} = - \sum_{t = 1}^{T} \log\left(q\left(\mathbf{s}_t^i|\hat{\bm{\mu}}_{t}^i,\hat{\bm{\sigma}}_{t}^{i}, \hat{\rho}_t^i \right)\right). \end{equation} Similar to other sequential models~\citep{morais2019learning}, our STGAE requires inputs of a fixed temporal length. Therefore, we split each scene in the training set into smaller fixed-size segments of length $T'$ using a sliding window approach. \subsection{Kernel Density Estimation for Normal Trajectories} \label{subsec:KDE} We rely on kernel density estimation (KDE) to approximate the probability density function of the normal trajectories from the latent feature representation of the STGAE. The idea is based on the assumption that normal trajectories fall in high-density regions and anomalies occur in regions of lower density. Given the trained STGAE, we encode the training segments and combine all latent representations in the set $Z_{kde}$. The KDE assumes all samples to be i.i.d.
random variables drawn from an unknown true distribution $p$~\citep{parzen1962estimation}. We can approximate the true density of a new feature vector $\mathbf{z}$ by $\hat{p}$ defined as \begin{equation} \hat{p}(\mathbf{z}) = \frac{1}{|Z_{kde}|h} \sum_{i=1}^{|Z_{kde}|} \kappa_{kde}\left(\frac{\mathbf{z}-\mathbf{z}_i}{h} \right) \; \textrm{with the Gaussian kernel~\citep{nicolau2016one}} \;\; \kappa_{kde}(\mathbf{x}) \propto \exp\left(- \frac{\norm{\mathbf{x}}^2}{2}\right). \label{eq:kde} \end{equation} The kernel function $\kappa_{kde}(\mathbf{x})$ weights the observations according to their similarity to the query point. The bandwidth $h$ controls the amount of smoothing. In summary, Eq.~\ref{eq:kde} computes the probability density of the agent's feature vector $\mathbf{z}$. \subsection{Abnormal Trajectory Detection} \label{subsec:AD} During inference, we use the same sliding window approach as during training and obtain the set of all test segments. For a test segment, the STGAE encoder computes the latent representation and the KDE from Eq.~\ref{eq:kde} estimates the density for each agent and time step, given the latent feature vector. We take the estimated density as a measure for anomalies. The feature decoder of the STGAE is not required during testing. \mypara{Anomaly Score.} We follow a similar approach as introduced in~\citep{morais2019learning} for the anomaly scoring. First, we compute $N$ anomaly scores $\alpha_t^i$ for all agents present in the same time step, and second, compute the anomaly score $\alpha_t$ to measure whether time step $t$ is abnormal.
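A minimal numpy sketch of the density estimate in Eq.~\ref{eq:kde} (the constant normalisation factor of the Gaussian kernel is dropped, which rescales all densities equally and does not affect the ranking of anomalies):

```python
import numpy as np

def kde_density(z, Z_kde, h):
    """Gaussian KDE value of a query feature vector z.

    Z_kde: (M, F) array of training feature vectors, h: bandwidth."""
    u = (Z_kde - z) / h                        # scaled differences to all samples
    k = np.exp(-0.5 * np.sum(u ** 2, axis=1))  # unnormalised Gaussian kernel
    return k.sum() / (len(Z_kde) * h)
```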
To score one agent, we average the anomaly scores over all segments in which the agent occurs: \begin{equation} \label{eq:agent_anomaly_score} \alpha_t^i = \frac{\sum_{o \in S_o}\hat{p}(\mathbf{z}_{t, o}^i)} {\lvert S_o \rvert}, \end{equation} where $S_o$ are the overlapping sliding window segments in which agent $i$ is present and $\mathbf{z}_{t,o}^i$ are the corresponding feature vectors from the STGAE encoder at time step $t$. This results in one anomaly score per agent for a specific time step. To decide whether a time step is normal or abnormal, we compute the anomaly score $\alpha_t$ by taking the maximum over all agents: \begin{equation}\label{eq:frame_agent_anomaly_score} \alpha_t = \max_{i \in \{1, \ldots, N\}} \alpha_t^i. \end{equation} Compared to the mean, the max-operation avoids missing anomalies caused by a single agent. We use $\alpha_t$ for the calculation of the metrics in our evaluations. \section{Experimental Results} \label{sec:result} We present the first dataset for anomaly detection in multi-agent trajectories and evaluate our method in comparison to seven baselines. See the supplementary material for more results. \subsection{Dataset Development} To evaluate our algorithm, we propose MAAD, a dataset for multi-agent anomaly detection based on the OpenAI Gym \textit{MultiCarRacing-v0} environment~\citep{brockman2016openai}. The environment was originally released for learning visual control policies in multi-agent racing~\citep{schwarting2021deep}; we adapt it for anomaly simulation. We design the scenario of a two-lane highway shared by two vehicles, which naturally leads to interaction, e.g.~speed adjustments, lane changes or overtaking actions. The vehicles are controlled by human players to record multiple expert trajectories. For every sequence, we randomly initialise the agent starting positions to increase trajectory diversity. In total, we create a dataset with 113 normal and 33 abnormal scenes.
Beforehand, 11 different types of anomalies are defined in terms of breaking driving rules or careless behaviour\footnote{We define the 11 anomaly sub-classes as \textit{leave road}, \textit{left spreading}, \textit{aggressive overtaking}, \textit{pushing aside}, \textit{aggressive reeving}, \textit{right spreading}, \textit{skidding}, \textit{staggering}, \textit{tailgating}, \textit{thwarting} and \textit{wrong-way-driving}.}. Each abnormal scenario is recorded three times to incorporate more variations (see Fig.~\ref{fig:maad_dataset}). After all recordings, the sequences are annotated with frame-wise labels by human experts with the ELAN annotation software~\citep{sloetjes2008elan}. We use 80 randomly selected normal sequences for training and the remaining 66 sequences for testing. The sequences are sub-sampled to 10 Hz with a segment length of $T'=15$ time steps, which corresponds to 1.5 seconds. \begin{figure*}[t] \centering \begin{subfigure}[t]{0.24\linewidth} \centering \includegraphics[width=0.99\textwidth]{figures/dataset/000417_normal_overtaking.png} \end{subfigure} \centering \begin{subfigure}[t]{0.24\linewidth} \centering \includegraphics[width=0.99\textwidth]{figures/dataset/000325_abnormal_pushing_aside.png} \end{subfigure} \begin{subfigure}[t]{0.24\linewidth} \centering \includegraphics[width=0.99\textwidth]{figures/dataset/000349_abnormal_aggressive_reeving.png} \end{subfigure} \begin{subfigure}[t]{0.24\linewidth} \centering \includegraphics[width=0.99\textwidth]{figures/dataset/000422_abnormal_wrong_way_driving.png} \end{subfigure} \caption{Example sequences from the proposed MAAD dataset. We show the observed trajectory of each agent. The first frame shows a safe overtaking manoeuvre from the normal set.
The other three frames contain abnormal actions: the blue vehicle pushing the red vehicle aside, an aggressive reeving action of the red vehicle and a wrong-way driver who even collides with the oncoming traffic.} \label{fig:maad_dataset} \end{figure*} \subsection{Baselines} Our baselines include \textit{multi-agent} models as variants of our method, which explicitly model interaction, as well as \textit{single-agent} models, which ignore interaction. The baselines can be further categorised as \textit{one-class} and \textit{reconstruction} methods. \mypara{Single-Agent.} To examine the effect of interaction, we define four interaction-free models, two parameter-free and two neural network approaches. As a simple parameter-free reconstruction method, we employ the constant velocity model (\textit{CVM}) from~\citep{scholler2020constant}. Second, we approximate the trajectory by linear interpolation between the first and the last time step of the observed trajectory; we denote this model as linear temporal interpolation (\textit{LTI}). The linear models score well if the velocity profile of abnormal trajectories deviates strongly from that of normal trajectories. As a first single-agent neural network, we adapt the \textit{Seq2Seq} model from~\citep{park2018sequencetosequence}. It is composed of an encoder LSTM and a decoder LSTM. The encoder computes a feature vector representing one trajectory. After the last input is processed, the decoder tries to reconstruct the input trajectory from the feature vector. Next, we implement an interaction-free variant of STGAE by setting $A_t = I$. This reduces the model to a spatio-temporal auto-encoder, which is why we call it \textit{STAE}. \mypara{Multi-Agent.} Based on the proposed STGAE, we evaluate three multi-agent baselines. For the first two variants, we use STGAE as a reconstruction method.
We train one STGAE with a bi-variate loss (\textit{STGAE-biv}) and a second with a classical MSE loss (\textit{STGAE-mse}) on the trajectory reconstruction task. Similar to our approach, the third variant is a one-class classification method. We replace the KDE of our method with a one-class SVM (OC-SVM), an adaptation of the traditional SVM to one-class classification. It takes the encoder features as input and finds a hyperplane that separates the data points from the origin while having maximum distance from the origin~\citep{scholkopf1999support}. The baseline is denoted as \textit{STGAE+OC-SVM}. \subsection{Evaluation Metrics} We quantitatively evaluate our approach following the standard evaluation metrics used in the anomaly detection literature~\citep{Corbiere2019confidence, DBLP:journals/corr/HendrycksG16c, zaheer2020old}, namely AUROC, AUPR-Abnormal, AUPR-Normal and FPR at 95\% TPR. The AUROC metric measures the area under the Receiver Operating Characteristic (ROC) curve and yields a threshold-independent evaluation. Note that a classifier with 50\% AUROC is equal to a random classifier, while 100\% is the upper limit and denotes the best possible classifier. We use the Area Under the Precision-Recall (AUPR) curve as our second metric. Unlike AUROC, it accounts for class imbalance, which is inherent to anomaly detection, i.e.~the number of abnormal samples is small compared to the number of normal samples~\citep{DBLP:journals/corr/HendrycksG16c}. We show both AUPR metrics: AUPR-Abnormal, where we treat the abnormal class as positive, and AUPR-Normal, where we treat the normal class as positive. Additionally, we show FPR-95\%-TPR, the False Positive Rate (FPR) at 95\% True Positive Rate (TPR).
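To make the metrics concrete, AUROC and FPR-95\%-TPR can be computed from per-frame scores as follows (a minimal numpy sketch under the convention that higher scores mean more abnormal; this is for illustration, not the evaluation code behind the tables):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the probability that a randomly
    drawn abnormal frame (label 1) scores higher than a normal one (label 0)."""
    diff = scores[labels == 1][:, None] - scores[labels == 0][None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def fpr_at_95_tpr(scores, labels):
    """False positive rate at the threshold that keeps 95% of abnormal frames."""
    pos = np.sort(scores[labels == 1])
    thresh = pos[int(np.floor(0.05 * len(pos)))]  # 95% of positives score >= thresh
    return np.mean(scores[labels == 0] >= thresh)
```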
For the one-class methods, we apply the metrics directly to the output score; for the reconstruction methods, we take the mean squared error (MSE) between the given and the reconstructed trajectory as anomaly score, following~\citep{morais2019learning}. \subsection{Implementation Details} The CVM approximates the agent trajectories assuming the same velocity for all time steps. The velocity is estimated from the first two time steps of a trajectory. For LTI, the reconstruction error is defined as the distance between the ground-truth trajectory and equidistantly sampled locations on a straight line between the beginning and the end of the trajectory. Both the encoder and the decoder of the Seq2Seq model are implemented with $3$ stacked LSTM layers and $15$ hidden features. We train the network for $1500$ epochs using the Adam optimiser with a learning rate of $0.01$. Our STGAE method is implemented as an ST-GCNN encoder~\citep{mohamed2020social} and a TCN decoder~\citep{lea2017temporal}. The encoder is composed of one spatial graph convolution layer and one TCN layer, both with five latent features, followed by the decoder consisting of five convolution layers for reconstruction. We train for 250 epochs using Stochastic Gradient Descent with a learning rate of $0.01$, decayed to $0.002$ after $150$ epochs. For evaluating STGAE-biv, we sample $20$ reconstructed trajectories from the bi-variate Gaussian distribution. STGAE with MSE loss does not require sampling. Both one-class classification methods (OC-SVM and KDE) are implemented with a Gaussian kernel, and the best hyperparameters are selected via grid search. Note that OC-SVM has a minor supervised advantage, since validation of the model takes place on a holdout test set, i.e.~20\% randomly selected from the test data. Hyperparameter tuning of $\gamma$ and $\nu$ for OC-SVM is performed via grid search with $\gamma \in \{2^{-10},2^{-9},...,2^{-1}\}$ and $\nu \in \{0.01, 0.1\}$.
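The hyperparameter selection for the one-class stage can be sketched as a plain grid search; below, a minimal numpy version that picks a KDE bandwidth by held-out log-likelihood (function and variable names are illustrative, and the number of folds as well as the candidate grid are assumptions of this sketch):

```python
import numpy as np

def select_bandwidth(Z, candidates, folds=5):
    """Pick the bandwidth h maximising the mean held-out log-likelihood.

    Z: (M, F) feature matrix, candidates: list of bandwidths to try."""
    parts = np.array_split(np.arange(len(Z)), folds)

    def loglik(h, train, val):
        u = (train[None, :, :] - val[:, None, :]) / h  # (V, Tr, F) scaled diffs
        dens = np.exp(-0.5 * np.sum(u ** 2, axis=-1)).sum(axis=1) / (len(train) * h)
        return np.mean(np.log(dens + 1e-300))          # guard against log(0)

    scores = [np.mean([loglik(h, np.delete(Z, p, axis=0), Z[p]) for p in parts])
              for h in candidates]
    return candidates[int(np.argmax(scores))]
```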
The bandwidth of KDE is selected from $h \in \{2^{-4.5}, 2^{-4}, ..., 2^{5} \}$ via 5-fold cross-validation with the log-likelihood score as in~\citep{ruff2018deep}. \begin{table}[t!] \caption{Comparison of the proposed method with the baselines on the proposed MAAD dataset using four metrics: AUROC, AUPR-Abnormal, AUPR-Normal and FPR-95\%-TPR. We differentiate between single- and multi-agent and further categorise into reconstruction and one-class classification methods. Highest scores are written in \textbf{bold}. For the trained models we provide the mean and standard deviation of ten runs. Note that CVM and LTI are deterministic with zero standard deviation.} \centering \label{tab:res_comparison} \resizebox{\textwidth}{!}{\begin{tabular}{p{0.06\linewidth}llcccc} \toprule & \textbf{Method} & \begin{tabular}[c]{@{}c@{}}\textbf{One-class vs.}\\\textbf{Reconstruction}\end{tabular} & \textbf{AUROC} $\uparrow$ & \textbf{AUPR-Abnormal} $\uparrow$ & \textbf{AUPR-Normal} $\uparrow$ & \textbf{FPR-95\%-TPR} $\downarrow$\\ \midrule \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\begin{tabular}[c]{@{}c@{}}{Single-}\\{Agent}\end{tabular}}}} & CVM~\citep{scholler2020constant} & reconstruction & 83.11 ($\pm$0.00) & 54.47 ($\pm$0.00) & 95.99 ($\pm$0.00) & 74.62 ($\pm$0.00) \\ & LTI & reconstruction & 75.47 ($\pm$0.00) & 48.89 ($\pm$0.00) & 92.37 ($\pm$0.00) & 95.03 ($\pm$0.00) \\ & Seq2Seq~\citep{park2018sequencetosequence} & reconstruction & 56.15 ($\pm$0.68) & 16.92 ($\pm$1.01) & 89.54 ($\pm$0.13) & 84.62 ($\pm$0.26) \\ & STAE-biv $| A_t = I$ & reconstruction & 57.54 ($\pm$11.77) & 21.77 ($\pm$8.48) & 89.47 ($\pm$3.26) & 84.42 ($\pm$2.50) \\ \cmidrule(lr{1em}){1-7} \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\begin{tabular}[c]{@{}c@{}}{Multi-}\\{Agent}\end{tabular}}}} & STGAE-mse & reconstruction & 81.53 ($\pm$3.16) & 50.76 ($\pm$4.47) & 95.90 ($\pm$0.93) & 67.36 ($\pm$9.30) \\ & STGAE-biv & reconstruction & 74.82 ($\pm$5.10) & 37.79 ($\pm$7.16) & 94.10
($\pm$1.31) & 77.80 ($\pm$9.77) \\ & STGAE-biv+OC-SVM & one-class & 85.97 ($\pm$2.40) & 52.37 ($\pm$8.85) & 97.11 ($\pm$0.59) & \textbf{49.90} ($\pm$6.33) \\ \cmidrule(lr{1em}){2-7} & \cellcolor{Gray} {Ours} & \cellcolor{Gray} one-class & \cellcolor{Gray} \textbf{86.28} ($\pm$1.73) & \cellcolor{Gray} \textbf{55.20} ($\pm$7.74) & \cellcolor{Gray} \textbf{97.15} ($\pm$0.54) & \cellcolor{Gray} 50.02 ($\pm$7.97) \\ \bottomrule \end{tabular}} \end{table} \begin{figure}[t!] \centering \begin{minipage}[b]{.49\textwidth} \centering \includegraphics[]{figures/roc_curves.pdf} \captionof{figure}{The ROC curves of the single-agent models (dashed lines) and the multi-agent models (solid lines).} \label{fig:roc_curves} \end{minipage}% \hspace{0.01\textwidth} \begin{minipage}[b]{.49\textwidth} \centering \includegraphics[]{figures/aurocs_over_seq_len_all.pdf} \captionof{figure}{AUROC metric over the segment length. Our method is more stable for various segment lengths.} \label{fig:aurocs_seq_len} \end{minipage} \end{figure} \begin{table}[t] \centering \caption{AUROC on different types of anomalies. We differentiate between non-interactive and interactive anomalies. The multi-agent models reach high scores in both interactive and non-interactive anomalies.
Highest scores are written in \textbf{bold}.} \resizebox{\textwidth}{!}{ \begin{tabular}{lc|cccc|ccc|c} \toprule \textbf{Abnormal class} & \begin{tabular}[c]{@{}c@{}}\textbf{Non-interactive vs.}\\ \textbf{Interactive}\end{tabular} & \textbf{CVM} & \textbf{LTI} & \textbf{Seq2Seq} & \begin{tabular}[c]{@{}c@{}}\textbf{STAE-biv}\\\textbf{$A_t = I$}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{STGAE}\\\textbf{-mse}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{STGAE}\\\textbf{-biv}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{STGAE-biv}\\\textbf{+SVM}\end{tabular} & \textbf{Ours} \\ \midrule \rowcolor{Gray} leave road & non-interactive & 88.66 & 32.30 & 91.06 & 92.95 & 99.24 & 94.32 & \textbf{99.57} & 98.22 \\ left spreading & interactive & 90.88 & \textbf{96.90} & 47.06 & 55.06 & 91.92 & 52.76 & 94.16 & 96.56 \\ \rowcolor{Gray} aggressive overtaking & interactive & 91.78 & 78.30 & 59.09 & 50.30 & 92.52 & 59.92 & 91.80 & \textbf{93.24} \\ pushing aside & interactive & 91.54 & 79.98 & 75.55 & 82.02 & \textbf{98.73} & 72.51 & 89.84 & 90.44 \\ \rowcolor{Gray} aggressive reeving & interactive & \textbf{96.67} & 61.15 & 79.00 & 81.97 & 96.33 & 86.37 & 92.31 & 94.57 \\ right spreading & interactive & 87.66 & 89.83 & 45.22 & 61.75 & \textbf{96.33} & 53.57 & 95.55 & 96.24 \\ \rowcolor{Gray} skidding & non-interactive & 96.90 & 94.69 & 90.00 & 98.84 & 98.31 & 70.24 & 98.80 & \textbf{99.65} \\ staggering & non-interactive & 86.87 & 85.30 & 57.83 & 72.64 & 90.58 & 66.53 & 94.91 & \textbf{96.01} \\ \rowcolor{Gray} tailgating & interactive & 77.18 & 70.20 & 35.88 & 74.11 & \textbf{88.71} & 86.42 & 83.55 & 84.87 \\ thwarting & interactive & \textbf{98.08} & 90.75 & 88.34 & 92.56 & 92.68 & 96.58 & 81.95 & 81.59 \\ \rowcolor{Gray} wrong-way driving & interactive & 63.16 & 64.63 & 68.86 & 53.90 & 61.46 & 57.88 & 73.11 & \textbf{73.15} \\ \midrule overall & & 83.11 & 75.47 & 58.06 & 70.01 & 87.70 & 72.67 & 87.38 & \textbf{88.34} \\ \bottomrule \end{tabular}}
\label{tab:res_anomaly_types} \end{table} \subsection{Results} \label{subsec:results} We present our results in comparison to the baselines. Afterwards, we evaluate different anomaly types and finally the performance stability over the input sequence length and beyond pairs of agents. \mypara{Comparison with the Baselines.} The comparison of our method with the baselines is presented in Table~\ref{tab:res_comparison}. Additionally, we show the ROC curves of all approaches in Figure~\ref{fig:roc_curves}. Our STGAE-biv+KDE outperforms both the linear and the deep methods in three out of four metrics, namely AUROC, AUPR-Abnormal and AUPR-Normal, and is on par with STGAE-biv+OC-SVM for FPR-95\%-TPR. The second one-class classification approach, STGAE-biv+OC-SVM, reaches similar performance and is the best for FPR-95\%-TPR, but does not reach the highest score in the remaining metrics. Considering the ROC curves, both one-class methods follow a similar course, with slight advantages for our method using KDE for anomaly detection. In general, the one-class methods are more stable over multiple runs. Our approach has a practical advantage over the baselines, as it remains superior even when the standard deviations of all other models are taken into account. Overall, besides STGAE-biv, the multi-agent models show higher accuracy compared to methods considering each agent individually. This indicates that some manoeuvres are only anomalous in the context of other traffic participants and could be considered normal if the agent were alone on the street. This also explains the performance drop of STGAE-biv when the adjacency matrix $A_t$ is reduced to the identity matrix, i.e.~without feature aggregation over neighbours. The linear models perform competitively, which suggests that the difference between normal and abnormal trajectories lies in their degree of linearity.
We empirically observe that Seq2Seq cannot differentiate between normal and abnormal trajectories in most experiments, because it fails to learn the sequential dependencies. Overall, our approach can detect most of the anomalies compared to the baselines. In practice, high detection rates directly support the decision making process: once an anomaly is detected, the automated vehicle can react or pass control to the human driver. \mypara{Detection Score on Anomaly Types.} Table \ref{tab:res_anomaly_types} shows the evaluation of the eleven anomaly types. To compute the ROC curve for an anomaly type, we consider this anomaly as positive and ignore all frames labelled with another anomaly category. Our method outperforms the others in four of the eleven classes and is competitive for the remaining ones. In particular, we significantly outperform the single-agent baselines on the \textit{aggressive overtaking} category. We argue that interaction modelling is required to distinguish an aggressive from a normal manoeuvre. Overall, the \textit{wrong-way-driving} category leaves space for improvement. Including absolute coordinates in addition to velocity could help to learn more meaningful interaction features. Some abnormal trajectories can be detected without considering interaction. This is where the linear models show their benefits, see \textit{left spreading} or \textit{thwarting} for LTI and CVM, respectively. The \textit{left spreading} action is highly non-linear, such that the LTI model fails to approximate it correctly, which results in high anomaly scores. Similarly, \textit{thwarting} involves strong braking, which violates the constant velocity assumption of CVM. Again, STGAE-mse is the best reconstruction method and outperforms on three sub-classes, however with the downside of a computationally expensive trajectory decoder.
\mypara{Ablation Study on Observed Sequence Length.} Figure \ref{fig:aurocs_seq_len} shows the influence of different segment lengths $T' \in \{4, 8, 10, 15, 20, 30, 40\}$ on the recognition performance. For this comparison, we re-train all models for each segment length. Although we see a correlation between STGAE-biv and our method, our results remain stable over a large interval. As reported before, we reach the best performance for $T'=15$. The performance of the linear models decreases for higher input lengths, which means that linear models are good trajectory approximators only for short sequences. In general, the reconstruction methods drop in performance for longer sequences. Both CVM and LTI have peaks at 0.8 seconds; however, our method remains the best overall with 88.34\% AUROC. \mypara{Ablation Study on Scalability Beyond Pairs of Agents.} The proposed MAAD dataset includes diverse interactive anomalies between pairs of agents. However, the proposed model is flexible with respect to the number of agents in a scene and can process more than two agents without adaptation. We evaluate the recognition performance for a highway with higher traffic density on two models. In detail, we compare our approach with STGAE-biv. Both models are trained on the original MAAD training set with two agents in each scene. For testing, we create three ablation test sets with $N=2$, $N=3$ and $N=4$ agents; see details in the supplementary material. The results are shown in Table \ref{tab:agent_scalability}. As before, our approach reaches higher scores compared to the reconstruction method. Interestingly, the metrics of the reconstruction method vary strongly with the number of agents. This indicates that, once trained on two agents, the reconstruction is confused by the features from the additional agents. Our method behaves differently.
Even though we see a small performance decrease with an increasing number of agents, our method is more stable and can reliably detect anomalies even on highways with higher traffic density. \begin{table}[t] \centering \caption{Comparison of STGAE-biv and our method on test sets with two, three, and four agents on the highway. Unlike the reconstruction method, our one-class approach remains more stable in all metrics with increasing traffic density.} \resizebox{\textwidth}{!}{ \begin{tabular}{ll|cccc} \toprule \textbf{Test Agents} & \textbf{Model} & \textbf{AUROC} $\uparrow$ & \textbf{AUPR-Abnormal} $\uparrow$ & \textbf{AUPR-Normal} $\uparrow$ & \textbf{FPR-95\%-TPR} $\downarrow$ \\ \midrule $N=2$ & STGAE-biv & 69.08 & 39.28 & 90.07 & 92.19 \\ \rowcolor{Gray} $N=2$ & Ours & 92.34 & 66.75 & 98.48 & 35.20 \\ \midrule $N=3$ & STGAE-biv & 78.20 & 42.52 & 94.73 & 78.23 \\ \rowcolor{Gray} $N=3$ & Ours & 91.26 & 63.57 & 98.36 & 35.38 \\ \midrule $N=4$ & STGAE-biv & 52.42 & 17.95 & 88.03 & 88.19 \\ \rowcolor{Gray} $N=4$ & Ours & 89.41 & 60.52 & 97.87 & 42.55 \\ \bottomrule \end{tabular}} \label{tab:agent_scalability} \end{table} \section{Conclusion} \label{sec:conclusion} We presented the spatio-temporal graph auto-encoder for trajectory representation learning. Our main contribution is the ability to simultaneously learn multiple trajectories for a dynamic number of agents. Our model learns normal driving behaviour for subsequent anomaly detection. To this end, we performed kernel density estimation on the latent representation of the model. During testing, we detect anomalies in low-density regions of the estimated density. Due to the lack of datasets for multi-agent trajectory anomaly detection in automated driving, we presented a synthetic multi-agent dataset with normal and abnormal manoeuvres. In our evaluations, we compared our approach with several baselines and showed superior performance.
Although our study focuses on driving trajectories, our approach can learn joint feature spaces in other multi-agent domains such as verbal and non-verbal communication, sports or human-robot interaction; we leave this for future work.
\section{Introduction} In the course of their evolution, circumstellar disks transform from optically thick, gas-dominated protoplanetary disks to optically thin, almost gas-free debris disks. How the gas is removed is not known in detail, but the removal is thought to be the result of a quick inside-out process at an age of $\sim 10$~Myr \citep{alexander-2008,hillenbrand-2008}. It may be either due to the UV switch mechanism resulting from an interplay between photoevaporation and viscous accretion \citep[e.g.][]{hollenbach-et-al-2000,clarke-et-al-2001,takeuchi-lin-2005,takeuchi-et-al-2005,alexander-armitage-2007} or due to gap opening by hidden giant planets \citep[e.g.][]{lubow-et-al-1999,lubow-dangelo-2006}. Although the former effect seems to be slightly preferred by observational statistics \citep[e.g.][]{cieza-et-al-2008}, it is not yet possible to distinguish between them with certainty \citep{najita-et-al-2007,alexander-2008,hillenbrand-2008}. In this paper, we concentrate on a more advanced phase of a system's evolution: the debris disk stage. Apart from possible planets, a debris disk system contains remnant planetesimals and the dust into which they are ground through collisions \citep[see, e.g.][and references therein]{wyatt-2008}. Debris disks are expected to be nearly gas-free, or at least extremely gas-poor compared to protoplanetary disks. Even so, in the case of $\beta$~Pic, gas was detected very early on in absorption \citep{slettebak-1975,hobbs-et-al-1985}, and later in emission \citep{olofsson-et-al-2001}, thanks to the favorable edge-on orientation of the disk. The observed gas around $\beta$~Pic is most likely replenished, i.e.\ secondary, as opposed to a remnant from the initial star-forming cloud.
Evidence for this comes from the presence of CO \citep{vidalmadjar-et-al-1994,jolly-et-al-1998,roberge-et-al-2000}, which would be dissociated on time-scales of $\sim$200\,yr \citep{vandishoeck-black-1988,roberge-et-al-2000}, and from the presence of neutral gas elements in the disk \citep{olofsson-et-al-2001,brandeker-et-al-2004}, subject to short removal times \citep{fernandez-et-al-2006}. Possible mechanisms for producing secondary gas include photon-induced desorption from solids \citep{chen-et-al-2007} and grain-grain collisions \citep{czechowski-mann-2007}. Part of the observed gas may also stem from comet evaporation, as inferred from observed time-variable absorption lines \citep[e.g.][]{ferlet-et-al-1987,beust-valiron-2007}. However, in general, observations of gas are much more difficult than those of dust. Standard detection techniques either use CO as a tracer of hydrogen, which can be observed at radio frequencies, as done by \citet{hughes-et-al-2008} for 49~Cet, or measure $\mathrm{H}_2$ emission lines which are pumped by stellar emission lines originating from the chromospheric and coronal regions, as done for AU~Mic by \citet{france-et-al-2007}. A potentially more sensitive way of finding gas is to look for it in absorption, as was done for $\beta$~Pic. The downside is that this requires the special edge-on geometry of the disk, but the method has nevertheless been successfully used by \citet{redfield-2007} to detect circumstellar Na\,I absorption towards HD~32297, a star with a known disk. Conversely, stars which are known to exhibit circumstellar absorption lines, so-called \textit{shell stars}, can be searched for evidence of circumstellar material, as done by \citet{roberge-weinberger-2008} using Spitzer/MIPS data. Out of 16 surveyed shell stars, they found infrared excess, and thus evidence for circumstellar dust, around four stars: HD\,21620, HD\,118232, HD\,142926 and HD\,158352.
Despite substantial efforts, the gas component of debris disks remains much less constrained observationally than the dusty one. It is quite possible that primordial gas survives longer than usually assumed, at least in the outer parts of the disks, or is present in larger amounts than expected, without showing up in observations. In fact, about ten Earth masses of gas, if not more, could still remain in many young debris disks where gas was searched for and not found, without violating observations \citep{hillenbrand-2008}. If present, this barely detectable gas would heavily affect the disk's physics and evolution and could necessitate revisions to standard theories of disk evolution and planet formation. The goal of our work is to analyze the effects of gas on the dynamical evolution of solids. We would like to find out whether gas, if present in larger amounts or with a different radial distribution than usually assumed, would alter the dust distribution and thus the brightness profile of a debris disk in such a way as to show up in the observations. We follow the approach first suggested by \citet{thebault-augereau-2005}, who applied it to the $\beta$~Pic system: we first postulate a certain amount and spatial distribution of gas in one or another debris disk system, then compute a steady-state distribution of dust in it, calculate observables such as the brightness profile, and compare them with available observations. In Sect.~2 we select and analyze three young debris disk systems relevant for this study. Sect.~3 lays down the basic theory of dust production and dynamical evolution in a debris disk with a gas component. Sect.~4 describes numerical simulations and Sect.~5 their results. In Sect.~6 we devise an analytical model and use it to interpret the numerical results. Sect.~7 contains our conclusions.
\section{Systems} \subsection{Selection criteria} We wish to choose several young debris disks in which the presence of gas in small to moderate amounts has been reported. Ideally, these should be edge-on systems, so that better constraints on gas are available from the presence or absence of absorption lines, not just CO mm emission. We need dust disks that are spatially resolved, preferably in scattered light, so that the radial profile of brightness is known. The age of the disks should not be very far from the boundary that separates gas- and dust-rich, optically thick protoplanetary disks from nearly gasless, optically thin debris disks, which is believed to lie at $\approx 10$~Myr. The best ages would thus be 10--30~Myr. We find that three systems best satisfy these criteria: $\beta$~Pic, HD~32297, and AU~Mic. Known facts and key parameters of these systems relevant to our study are presented in the subsequent sections. We stress, however, that all three systems can be regarded as ``typical'' examples of their classes and may be used as proxies for other systems. Thus our results, while of interest in the context of the particular objects, can at the same time be considered generic. \subsection{$\beta$~Pic} \mbox{} \vspace*{-\parindent} {\em Star.} A 12~Myr-old \citep{zuckerman-et-al-2001} A5V star at $d = 19.44 \pm 0.05$~pc. {\em Dust and parent bodies.} The debris disk was first resolved by \citet{smith-terrile-1984} and later at various wavelengths \citep[][and references therein]{artymowicz-2000}. According to \citet{artymowicz-clampin-1997}, the vertical optical depth of the dust disk has a maximum of $1.53 \times 10^{-2}$ at $60$\,AU and a slope of $-1.7$...$-2$ in the outer part. \citet{mouillet-et-al-1997} give $5 \times 10^{-3}$ at 100\,AU with an outer slope of $-1.7$.
The dust mass is roughly $0.05$...$0.5 M_\oplus$ \citep{artymowicz-2000,thebault-augereau-2005}, with $0.1 M_\oplus$ probably being the best estimate \citep{zuckerman-becklin-1993,lagrange-et-al-2000}. The analysis by \citet{augereau-et-al-2001} (their Fig.~1) shows that an extended dust disk produced by the planetesimal belt (``birth ring'') between $80$--$120$\,AU would closely match the resolved scattered-light images together with the long-wavelength photometric data. The dust distribution itself is given in Fig.~2 of \citet{augereau-et-al-2001}. The radial $\mathit{SB}$ profile of the mid-plane scattered-light images shows a slope of $-3$...$-4$ outside $130$--$260$\,AU \citep{golimowski-et-al-2006}. The dust disk around $\beta$~Pic is famous for its large-scale asymmetries, which might be caused by a sub-stellar companion in the disk \citep[see, e.g.][]{mouillet-et-al-1997,augereau-et-al-2001}. {\em Gas.} Before $\beta$\,Pic was known to harbor a debris disk, it was classified as a shell star by \citet{slettebak-1975} due to its prominent Ca\,II H \& K absorption. Gas was then re-discovered in absorption by \citet{hobbs-et-al-1985} and in spatially resolved Na\,I emission by \citet{olofsson-et-al-2001}. The atomic hydrogen content of the disk was constrained by \citet{freudling-et-al-1995} to be $< 2$\,$M_{\oplus}$, and the molecular column density to be $< 3 \times 10^{18}$\,cm$^{-2}$ by \citet{desetangs-et-al-2001}, which corresponds to $\lesssim 0.2$\,M$_{\oplus}$, assuming the gas to be distributed in the disk \citep{brandeker-et-al-2004}.
\citet{brandeker-et-al-2004} observed spatially extended gas emission from a number of elements (including Na\,I, Fe\,I, and Ca\,II), and derived a spatial distribution for the gas \begin{equation} n(H) = n_0 \left[ \left( r \over r_0 \right)^{2.4} + \left( r \over r_0 \right)^{5.3} \right]^{-1/2} \label{bpic_gas_profile} \end{equation} with $n_0 = 2.25 \times 10^3 \,\hbox{cm}^{-3}$ for an assumed solar composition, and $r_0 = 117$\,AU, leading to a total gas mass of $0.1\,M_\oplus$. They also investigated a metal-depleted case $n_0 = 10^6 \,\hbox{cm}^{-3}$ (implying a total gas mass of $40\,M_\oplus$), which they found to be in contradiction with the observed upper limits on hydrogen. \subsection{HD 32297} \mbox{} \vspace*{-\parindent} {\em Star.} A 30~Myr-old A5V star \citep{maness-et-al-2008} at $d = 113 \pm 12$~pc. {\em Dust and parent bodies.} The dust disk was first resolved with HST/NICMOS in scattered light by \citet{schneider-et-al-2005} up to 400\,AU. The surface brightness ($\mathit{SB}$) of the SW wing is fitted by a power law with index $-3.6$, while the NE side shows a break at 200\,AU: the inner part has a slope of $-3.7$, whereas the outer one $-2.7$. \citet{kalas-2005} resolved the disk in the R-band between 560 and 1680\,AU. The mid-plane slopes were found to be $-2.7$ and $-3.1$ for NE and SW wings, respectively, with strong asymmetries. \citet{moerchen-et-al-2007b} resolved the disk with Gemini South/T-ReCS in thermal emission at 12 and $18\,\mu\hbox{m}$ up to 150\,AU. Resolved images by \citet{fitzgerald-et-al-2007} taken with Gemini North at $11\,\mu\hbox{m}$ revealed a bilobed structure with peaks at $\sim 65$\,AU from the star. \citet{maness-et-al-2008} marginally resolved the disk with CARMA at 1.3~mm. The spectral energy distribution (SED) fitting by \citet{fitzgerald-et-al-2007} suggests a population of larger grains~-- and therefore a location of the birth ring~-- at $\approx 70$--$80$\,AU. 
The vertical optical depth of the dust disk at the same distance is $4 \times 10^{-3}$ \citep{maness-et-al-2008}. The 1.3\,mm measurements by \citet{maness-et-al-2008} point to the existence of a third population of even larger grains at a characteristic stellar distance of 50\,AU, which probably comprises $\ge 95$~\% of the total dust mass. The dust mass required to fit the SED up to far-infrared wavelengths is roughly $0.02\,M_\oplus$, but the 1.3\,mm flux may require as much as $1\,M_\oplus$ of dust \citep{maness-et-al-2008}. {\em Gas.} \citet{redfield-2007} found an intriguingly strong Na\,I absorption. Assuming the morphology and abundances of the stable gas component to be the same as for $\beta$~Pic, and that the gas disk extends up to 1680\,AU as the debris disk does, he derived a total gas mass of $\sim 0.3\,M_\oplus$. The absence of observable CO $J=2-1$ emission with CARMA places an upper limit on the gas mass of $\sim 100\,M_\oplus$ \citep{maness-et-al-2008}. \subsection{AU Mic} \mbox{} \vspace*{-\parindent} {\em Star.} A 12 Myr-old dM1e flare star, a member of the $\beta$~Pictoris Moving Group, at $d = 9.94 \pm 0.13$\,pc. It is the closest known debris disk resolved in scattered light. {\em Dust and parent bodies.} The debris disk was first resolved in the R-band by \citet{kalas-et-al-2004} and \citet{liu-2004}. Later on, it was resolved with HST/ACS by \citet{krist-et-al-2005} and in the H-band with Keck AO by \citet{metchev-et-al-2005}. The dust fractional luminosity is $6 \times 10^{-4}$ \citep{liu-2004}. The dust mass (up to 1~mm) is estimated to be $\sim 2\times 10^{-4} M_\oplus$ \citep{augereau-beust-2006}, but sub-mm fluxes require $1 \times 10^{-2} M_\oplus$ \citep{liu-et-al-2004}. The birth ring of planetesimals is believed to be located at 35\,AU \citep{augereau-beust-2006}.
An R-band $\mathit{SB}$ profile slope of $-3.8$ between 35--200\,AU was found by \citet{kalas-et-al-2004}, whereas \citet{liu-2004}, \citet{krist-et-al-2005}, and \citet{fitzgerald-et-al-2007b} derived $\mathit{SB}$ slopes in the range $-3.8$...$-4.7$. Like those of $\beta$\,Pic and HD~32297, the disk of AU~Mic possesses asymmetries, which are probably formed by the dynamical influence of planets \citep{liu-2004}. {\em Gas.} Non-stringent upper limits on the gas mass were found from the non-detection of CO $J=3-2$ emission by \citet{liu-et-al-2004} ($< 1.3\,M_\oplus$) and of $\mathrm{H}_2$ UV absorption by \citet{roberge-et-al-2005} ($< 7 \times 10^{-2}\,M_\oplus$). \citet{france-et-al-2007} tentatively detected and analyzed fluorescent $\mathrm{H}_2$ emission. Within the observational uncertainties, the data are consistent with gas residing in the debris disk, although other possibilities, such as a cloud that extends beyond the disk, cannot be completely ruled out. They found a very low total gas mass between $\sim 6 \times 10^{-6}\,M_\oplus$ and $\sim 4 \times 10^{-4}\,M_\oplus$, consistent with the upper limits $\lesssim 10^{-4}$\,$M_\oplus$ obtained from a search for optical absorption lines of Ca\,I, Ca\,II and Fe\,I by \citet{brandeker-jayawardhana-2008}. \section{Basic theory} \subsection{General picture} Throughout this paper, we adopt the following standard scenario of debris disk evolution \citep[e.g.][]{krivov-et-al-2006,strubbe-chiang-2006,thebault-augereau-2007,krivov-et-al-2008,thebault-wu-2008}: \begin{itemize} \item There is a relatively narrow belt of planetesimals (``birth ring'') in orbits with moderate eccentricities and inclinations. We assume that this birth ring is located where the scattered-light image brightness peaks. Note that the systems resolved at (sub)-mm wavelengths usually exhibit a bright ring of approximately the same radius. \item Orbiting planetesimals in the birth ring undergo a collisional cascade that grinds the solids down to dust.
We assume that the dust grains with radii $[s, s+ds]$ are produced in the birth ring at a constant rate $\dot{N} ds$, where \begin{equation} \dot{N} \propto s^{-q} . \label{Ndot} \end{equation} The parameter $q$ is unknown. However, a usual assumption~-- which we will follow unless stated otherwise~-- is $q=3.5$. \item At the smallest dust sizes, stellar radiation pressure effectively reduces the mass of the central star and quickly (on the dynamical timescale) sends the grains into more eccentric orbits, with their pericenters still residing within the birth ring while the apocenters are located outside the ring. As a result, the dust disk spreads outward from the planetesimal belt. The smaller the grains, the more extended their ``partial'' disk. \item The dust grain orbits undergo slower modifications due to gas drag and experience gradual loss due to mutual collisions. \end{itemize} \subsection{Stellar gravity and radiation pressure} We require that the disk is optically thin, so that each dust grain is fully exposed to stellar radiation at any location in the disk. Since the radiation pressure is proportional to $r^{-2}$, as is the stellar gravity, a dust grain experiences ``photogravity'', i.e.\ the gravity of a star with an ``effective stellar mass'' $M_\mathrm{eff}$: \begin{equation} M_\mathrm{eff} = M_\star \, (1 - \beta) \, , \end{equation} where $\beta$ is the ratio of radiation pressure to gravity \citep{burns-et-al-1979}: \begin{equation} \beta = 0.5738 \; Q_{\mathrm{pr}} \left( 1 \,\hbox{g} \,\hbox{cm}^{-3} \over \rho_\mathrm{bulk} \right) \left( 1 \,\mu\hbox{m} \over s \right) {L_\star / L_\odot \over M_\star / M_\odot} . \label{beta} \end{equation} Here, $Q_{\mathrm{pr}}$ is the radiation pressure efficiency (henceforth set to unity), $\rho_\mathrm{bulk}$ the material density of particles (we set it to $3.3\,\hbox{g}\,\hbox{cm}^{-3}$), $s$ their radius, and $L_\star$ and $M_\star$ the stellar luminosity and mass, respectively.
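For orientation, Eq.~(\ref{beta}) is easy to evaluate and to invert. The following Python sketch (an illustration of ours, with $Q_{\mathrm{pr}}=1$ and $\rho_\mathrm{bulk}=3.3\,\hbox{g}\,\hbox{cm}^{-3}$ as adopted above, and the $\beta$~Pic stellar parameters of Table~\ref{tab_systems} taken as inputs):

```python
def beta(s_um, L_star, M_star, q_pr=1.0, rho_bulk=3.3):
    """Eq. (beta): ratio of radiation pressure to gravity for a grain of
    radius s_um (micrometers); L_star, M_star in solar units."""
    return 0.5738 * q_pr * (1.0 / rho_bulk) * (1.0 / s_um) * (L_star / M_star)

def grain_radius(beta_val, L_star, M_star, q_pr=1.0, rho_bulk=3.3):
    """Inverse of Eq. (beta): grain radius (micrometers) for a given beta."""
    return 0.5738 * q_pr * (1.0 / rho_bulk) * (L_star / M_star) / beta_val

# beta Pic parameters (L = 8.7 L_sun, M = 1.75 M_sun): a 1-micron grain
# has beta ~ 0.86, and beta = 0.9 ... 0.01 maps to radii of roughly
# 1 ... 86 microns.
print(beta(1.0, 8.7, 1.75))           # ~0.864
print(grain_radius(0.01, 8.7, 1.75))  # ~86.4
```

The same inversion with the stellar parameters of the other two systems yields the size intervals quoted later in Table~\ref{tab_systems}.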
\subsection{Gas drag} We assume that the gas distribution remains unaffected by that of dust and that gas simply exerts a drag force on the dust particles. If the gas mass is larger than the dust mass, this assumption is natural. In the case where both are comparable, the validity of this assumption will be checked a posteriori. Indeed, we will choose the same initial distributions for dust and gas and will see that these do not diverge considerably in the course of the disk evolution. Gas orbits the star at a sub-Keplerian speed \begin{equation} v_\mathrm{g} = v_\mathrm{K} \, \sqrt{1 - \eta} \, , \end{equation} where $v_\mathrm{K}$ is the circular Keplerian velocity and $\eta$ is the ratio of the force that supports gas against gravity to the gravity force. There are two possible reasons for the sub-Keplerian rotation of a gas disk. One is the case of a thermally-supported disk, in which the sub-Keplerian rotation stems from the gas pressure gradient. However, the gas observed around $\beta$~Pic has a non-solar composition and is dominantly supported by radiation pressure rather than gas pressure \citep{fernandez-et-al-2006,roberge-et-al-2006}. Thus we also consider another case, in which gas is supported against stellar gravity by radiation pressure. For a thermally-supported gas disk, we follow a standard description of dust aerodynamics \citep{weidenschilling-1977}, generalized to the presence of radiation pressure \citep{takeuchi-artymowicz-2001,thebault-augereau-2005,herrmann-krivov-2007}. The factor $\eta$ is \begin{equation} \eta = - (GM_\star)^{-1} {r^2 \over \rho_\mathrm{g}} {d P \over d r} .
\label{eta def} \end{equation} Here, $GM_\star$ is the gravitational parameter of the central star and $P$ the gas pressure, proportional to the gas density $\rho_\mathrm{g}$ and gas temperature $T_\mathrm{g}$: \begin{equation} P = \rho_\mathrm{g} {k T_\mathrm{g} \over \mu_\mathrm{g} m_\mathrm{H}} \end{equation} with the Boltzmann constant $k$, the mean molecular weight $\mu_\mathrm{g}$, and the mass of a hydrogen atom $m_\mathrm{H}$. The gas temperature and density are usually taken to be power laws \begin{equation} T_\mathrm{g} \propto r^{-p} \label{T_g} \end{equation} and \begin{equation} \rho_\mathrm{g} \propto r^{-\xi}. \label{xi} \end{equation} Then, Eq.~(\ref{eta def}) takes the form \begin{equation} \eta = (p+\xi) {k T_\mathrm{g}^0 \over \mu_\mathrm{g} m_\mathrm{H}} {r_0 \over GM_\star} \left( r \over r_0 \right)^{1-p} , \label{eta} \end{equation} where $T_\mathrm{g}^0$ is the gas temperature at a reference distance $r_0$. The $\eta$ ratio depends only on the gas temperature slope $p$ and its density slope $\xi$, but not on the gas density at a given distance. (Note that sub-Keplerian rotation requires $p+\xi > 0$.) As $T_\mathrm{g}^0 \propto L_\star^{1/4}$, the dependence on the stellar parameters is weak: $\eta \propto L_\star^{1/4}/M_\star$. We now consider the case where the gas is supported against stellar gravity by radiation pressure. Denoting by $\beta_{\mathrm{gas}}$ the effective radiation pressure coefficient acting on the gas, the gas speed is \begin{equation} v_\mathrm{g} = v_\mathrm{K} \, \sqrt{1 - \beta_{\mathrm{gas}}} \, , \end{equation} yielding the simple relation \begin{equation} \eta = \beta_{\mathrm{gas}} .
\label{eta rad} \end{equation} \bigskip Regardless of the mechanism that supports the gas disk against gravity, the gas drag force on a dust grain is expressed by \citep{takeuchi-artymowicz-2001} \begin{equation} \vecbold{F}_\mathrm{D} = - \pi \rho_\mathrm{g} s^2 \left(v_\mathrm{T}^2+ \Delta v^2\right)^{1/2} \Delta \vecbold{v} \, , \end{equation} which combines the subsonic and supersonic regimes. Here, $\Delta \vecbold{v} \equiv \vecbold{v}_\mathrm{d} - \vecbold{v}_\mathrm{g}$ is the difference between the dust velocity $\vecbold{v}_\mathrm{d}$ and the gas velocity $\vecbold{v}_\mathrm{g}$, and $v_\mathrm{T}$ is the gas thermal velocity: \begin{equation} v_\mathrm{T} = {4 \over 3} \left( 8 k T_\mathrm{g} \over \pi \mu_\mathrm{g} m_\mathrm{H} \right)^{1/2} . \end{equation} For later discussions of the timescales, we also define the stopping time, \begin{equation} T_{\mathrm{stop}} = {4 \over 3} {\rho_\mathrm{d} \over \rho_\mathrm{g} } {s \over v_\mathrm{T}} {1 \over \sqrt{1 + \Delta v^2 / v_\mathrm{T}^2} } , \label{T_stop} \end{equation} the time interval over which $\Delta \vecbold{v}$ would be reduced by a factor of $e$ if the drag force were constant; here $\rho_\mathrm{d}$ denotes the material density of the grain, $\rho_\mathrm{bulk}$. \subsection{Gas temperature} \label{sect:gastemp} As we saw in the previous section, dust dynamics is expected to depend sensitively on the gas temperature, in particular on its radial gradient $p$. A commonly used assumption is that the gas shares the dust temperature profile \citep{kamp-vanzadelhoff-2001}, which in the simple blackbody approximation gives $p=1/2$. This is a reasonable assumption if the gas-dust interaction is strong and the photo-electric heating weak, but may not be valid in general. Indeed, in the case of strong UV environments the photo-electric effect on dust can be the dominant heating source of the gas and lead to a dust drift instability \citep{klahr-lin-2005,besla-wu-2007}.
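As a numerical aside, the stopping time of Eq.~(\ref{T_stop}), which depends on the gas temperature through $v_\mathrm{T}$, can be evaluated directly. The following sketch is ours; the gas density, temperature, and grain size below are arbitrary illustrative values, not fitted to any of the three systems:

```python
import math

K_B = 1.380649e-16   # Boltzmann constant, erg/K
M_H = 1.6726e-24     # hydrogen atom mass, g

def v_thermal(T_gas, mu=1.3):
    """Gas thermal velocity v_T = (4/3) sqrt(8 k T / (pi mu m_H)), cm/s."""
    return (4.0 / 3.0) * math.sqrt(8.0 * K_B * T_gas / (math.pi * mu * M_H))

def t_stop(s, rho_gas, T_gas, dv=0.0, rho_d=3.3, mu=1.3):
    """Stopping time of Eq. (T_stop) in seconds; s in cm, rho_gas in g/cm^3,
    dv = |Delta v| in cm/s, rho_d the grain material density."""
    vT = v_thermal(T_gas, mu)
    return (4.0 / 3.0) * (rho_d / rho_gas) * (s / vT) \
        / math.sqrt(1.0 + (dv / vT) ** 2)

# Illustrative (assumed) numbers: a 10-micron grain in gas with
# rho_g ~ 2e-21 g/cm^3 at T_g ~ 50 K, in the subsonic limit.
tau = t_stop(1e-3, 2e-21, 50.0)
print(tau / 3.156e7, "yr")
```

Note that larger relative velocities shorten the stopping time, as the supersonic correction factor in Eq.~(\ref{T_stop}) prescribes.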
For a sufficiently high dust content, one may expect (at every distance) a power-law relation between the gas temperature and the number density of dust. Following \citet{klahr-lin-2005}, Eq.~(\ref{T_g}) generalizes to \begin{equation} T_\mathrm{g} \propto \rho_\mathrm{dust}^\gamma \; r^{-p} \label{gamma} \end{equation} with $\gamma \ge 0$. A first-order heating-cooling analysis suggests $\gamma \sim 1$ as a typical value \citep[see the Appendix of][]{klahr-lin-2005}. To evaluate how valid the $p=1/2$ assumption is for a more detailed model of the thermal balance, we used the code \textsc{ontario} (Zagorovsky, Brandeker, \& Wu, in prep.). \textsc{ontario} is tuned to model gas emission during the debris disk phase, given input parameters related to the gas/dust disk structure, elemental abundances, and the stellar luminosity spectrum. It computes the ionization and thermal balance self-consistently, with particular care devoted to heating/cooling mechanisms (the most important being photo-electric and ionization heating, and cooling by C\,II $158\,\mu\hbox{m}$). The major simplifying assumptions are that the gas is considered to be in atomic/ion form (no molecules) and that the disks are optically thin (i.e.\ no chemistry and simplified radiative transfer), conditions that are expected to be closely met by debris disks around A and F stars, but not necessarily around stars of later spectral type.
Using the same gas and dust profiles as in the dynamical simulations (i.e., a surface density slope of $-1.5$, corresponding to a mid-plane density slope of $-2.5$), we computed the mid-plane temperature for three different cases, as shown in Fig.~\ref{fig_T_gas} (top): \begin{enumerate} \item an $M_{\mathrm{gas}} = 0.1\,M_{\oplus}$ disk at solar abundance with hydrogen entirely in atomic form (giving the mean molecular weight $\mu_\mathrm{g} = 1.3$); \item a similar model but with $M_{\mathrm{gas}} = 10\,M_{\oplus}$; \item an $M_{\mathrm{gas}} = 0.1\,M_{\oplus}$ model with $\beta$\,Pic abundances (i.e.\ solar abundance except 20$\times$ carbon, no helium, and $10^{-3}\times$ hydrogen, giving $\mu_\mathrm{g} = 11.3$, as motivated by the inventory of gas observed around $\beta$\,Pic, compiled by \citealt{roberge-et-al-2006}). \end{enumerate} For comparison, the $T_\mathrm{g} \propto r^{-1/2}$ profile is overplotted (with arbitrary normalization). The bottom panel of Fig.~\ref{fig_T_gas} shows the local exponent $p = -d\log T /d\log r$, which can be compared to the $p=1/2$ value. The numerical noise seen in the plot is due to the limited precision of the code ($dT \sim$ 0.1\,K). The temperature is not a power law, but varies smoothly with a local slope $d\log T/d\log r$ constrained to the interval $-1$...$0$ (i.e.\ $p$ between 0 and 1); a $p=1/2$ power law therefore seems to be a reasonable approximation. \begin{figure}[bth!] \centerline{\includegraphics[width=0.45\textwidth]{12917f01.eps}} \caption{ Top: Gas temperature in the $\beta$~Pic disk as a function of distance from the star, for three different assumptions about gas composition and density. (Dashed, dash-dotted, and solid lines correspond to the tns, ths, and rns models, respectively, of Table~\ref{tab_systems} below.) Bottom: Radial slope $p$ of the gas temperature. \label{fig_T_gas} } \end{figure} \subsection{Radial dynamics of dust under photogravity and gas drag} \label{sect_rad_dyn} We start with a thermally-supported gas disk.
Since gas adds or removes angular momentum to or from solid particles, it causes them to spiral outward or inward, until a certain size-dependent stability distance is reached, at which the gas pressure gradient and the stellar radiation pressure balance each other \citep{takeuchi-artymowicz-2001}: \begin{equation} \beta(s) = \eta(s, r_\mathrm{stab}) , \label{stability} \end{equation} which can be solved for $r_\mathrm{stab}$ for a given $s$ or vice versa. Note that additional radial forces such as photophoresis \citep{herrmann-krivov-2007} can be included by simply adding them to the left-hand side of Eq.~(\ref{stability}). We now turn to the case where the gas is radiation pressure-supported. Then $\eta$ is independent of distance and particle size, taking a single value throughout the disk. In this case, the particles with $\beta < \eta$ ($\beta > \eta$) will spiral inward (outward) from the birth ring; those with exactly $\beta = \eta$ will have no radial motion and will stay in the parent belt. Therefore, only grains with $\beta > \eta$ contribute to the outer disk. In a steady-state regime, they all drift outward through the entire disk. \subsection{Vertical dynamics of dust under photogravity and gas drag} Apart from the radial dynamics described above, dust grain orbits also evolve vertically. The essential effect is dust sedimentation (or settling) toward the mid-plane of the disk. It occurs on a timescale $T_{\mathrm{sett}}$ that depends on the particle size and the vertical profile of the gas density. Generally, $T_{\mathrm{sett}}$ also changes with time because of the grain's radial drift. If, for dust grains that contribute to the visible brightness, the settling timescale is comparable to the stopping time ($T_{\mathrm{sett}} \sim T_{\mathrm{stop}}$), a combination of settling and radial drift would cause the aspect ratio of the outer disk to decrease with distance (``anti-flaring'').
Thus in principle, a comparison of the observed vertical distribution of brightness in an edge-on debris disk with a modeled one may offer another method of constraining the gas densities. In the present paper, however, we confine our analysis to the radial distribution and radial brightness profiles. \subsection{Collisions} Collisions are the main mechanism that limits the lifetime of dust grains in the outer disk, provided they are large enough not to be blown away by radiation pressure or rapidly dragged away by gas. The collisional outcome is known to depend sensitively on the relative velocity. Both colliders are disrupted if their relative velocity $v_\mathrm{rel}$ exceeds \citep[see, e.g.][their Eq.~5.2]{krivov-et-al-2005} \begin{equation} v_{\mathrm{cr}} = \sqrt{ {2 (m_\mathrm{t} + m_\mathrm{p})^2 \over m_\mathrm{t} m_\mathrm{p}} Q_D^\star } , \label{disruption} \end{equation} where $m_\mathrm{t}$ and $m_\mathrm{p}$ are the masses of the two colliders and $Q_D^\star$ is the critical energy for fragmentation and dispersal, which is $\sim 10^8\,\hbox{erg}\,\hbox{g}^{-1}$ at dust sizes \citep[e.g.][]{benz-asphaug-1999}. If $v_\mathrm{rel} < v_{\mathrm{cr}}$, the collision may result in partial fragmentation (cratering), restitution or, at low velocities, merging of the two colliders. The relative velocity $v_\mathrm{rel}$ mainly stems from the difference in the radial velocities $|v_r|$ of different-sized grains, as set by stellar photogravity and gas drag. In Sect.~5, we will see that $v_\mathrm{rel}$ is typically high enough for catastrophic collisions to occur even if the gas density is high. \section{Numerical simulations} \subsection{Setups for simulations} For each of the three systems, we adopt a fixed set of parameters for the central star and solids (planetesimals and dust) and test various gas density models (Table~\ref{tab_systems}).
One model may differ from another in four respects: \newcommand{\tb}[1]{\makebox[6mm][c]{#1}} \begin{table*}[t] \begin{center} \caption{Models for $\beta$~Pic, HD~32297, and AU Mic. \label{tab_systems} } \begin{tabular}{l|cccccccccc|cc|cc} \hline & \multicolumn{10}{|c|}{\makebox[3cm]{$\beta$~Pic}} & \multicolumn{2}{|c|}{\makebox[1.2cm]{HD~32297}} & \multicolumn{2}{|c}{\makebox[1.2cm]{AU~Mic}}\\ \hline \multicolumn{15}{l}{\em Star} \\ Luminosity, $L_\odot$ & \multicolumn{10}{|c|}{$8.70$} & \multicolumn{2}{|c|}{$5.40$} & \multicolumn{2}{|c}{$0.092$}\\ Mass, $M_\odot$ & \multicolumn{10}{|c|}{$1.75$} & \multicolumn{2}{|c|}{$1.80$} & \multicolumn{2}{|c}{$0.50$}\\ Age, Myr & \multicolumn{10}{|c|}{$12$} & \multicolumn{2}{|c|}{$30$} & \multicolumn{2}{|c}{$12$}\\ \hline \multicolumn{15}{l}{\em Planetesimals and dust} \\ Birth ring, AU & \multicolumn{10}{|c|}{110--130} & \multicolumn{2}{|c|}{60--80} & \multicolumn{2}{|c}{30--40}\\ Max optical depth & \multicolumn{10}{|c|}{$5 \times 10^{-3}$} & \multicolumn{2}{|c|}{$4 \times 10^{-3}$} & \multicolumn{2}{|c}{$6 \times 10^{-4}$}\\ Init. surf.
density slope & \multicolumn{10}{|c|}{$-1.5$} & \multicolumn{2}{|c|}{$-1.5$} & \multicolumn{2}{|c}{$-1.5$}\\ $[s_{min}, s_{max}]$, $\,\mu\hbox{m}$ & \multicolumn{10}{|c|}{$[1, 86]$} & \multicolumn{2}{|c|}{$[0.6, 52]$} & \multicolumn{2}{|c}{$[0.04, 3.2]$}\\ \hline \multicolumn{15}{l}{\em Gas} \\ Model identifier &\tb{000}&\tb{tns}&\tb{t0ns}&\tb{t1ns}&\tb{rns}&\tb{r0ns}&\tb{r1ns}&\tb{ths}&\tb{thf}&\tb{tvs} &\tb{tns}&\tb{ths} &\tb{tns} &\tb{ths} \\ t- or r-support & - & t & t & t & r & r & r & t & t & t & t & t & t & t \\ Temperature slope $p$ & - & $1/2$ & $0$ & $1$ & $1/2$ & $0$ & $1$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ \\ Mass, $M_\oplus$ & - & $0.05$ & $0.05$ & $0.05$ & $0.05$ & $0.05$ & $0.05$ & $5$ & $5$ & $50$ & $0.15$ & $15$ & $2\times 10^{-4}$ & $2\times 10^{-2}$ \\ Surface density slope & - & $-1.5$ & $-1.5$ & $-1.5$ & $-1.5$ & $-1.5$ & $-1.5$ & $-1.5$ & $-1.0$ & $-1.5$ & $-1.5$ & $-1.5$ & $-1.5$ & $-1.5$ \\ Outer edge, AU$^\star$ & \multicolumn{10}{|c|}{$600$} & \multicolumn{2}{|c|}{$350$} & \multicolumn{2}{|c}{$175$}\\ \hline \end{tabular} \end{center} \smallskip {\small $^\star$ The arbitrary truncation distance (we set it to five times the central distance of the respective birth ring). It only affects the conversion of the total gas mass to gas density. In that conversion, we also use a constant semi-opening angle of $15^\circ$ for all disks, assuming the density to be constant with height.} \end{table*} \begin{enumerate} \item {\em Gas support mechanism.} We consider two possible reasons for the sub-Keplerian rotation of a gas disk: a thermally-supported disk, which seems reasonable in the case of primordial gas at solar composition, and a radiation pressure-supported disk, which could be more appropriate for secondary gas with a non-solar composition. \citet{fernandez-et-al-2006} computed the effective radiation pressure on the ionized gas to be $\beta \sim 4$ in the $\beta$~Pic disk (see their Fig.~4), assuming solar abundances.
Since the mass is dominated by (inert) carbon, which was later found to be overabundant by a factor of 20 \citep{roberge-et-al-2006}, the effective radiation pressure coefficient acting on the gas including 20$\times$ carbon is estimated to be $\beta_{\mathrm{gas}} \sim 4/20 = 0.2$. Thus for the radiation pressure-supported disks we set $\eta = \beta_{\mathrm{gas}} = 0.2$. \item {\em Radial slope of the gas temperature.} In most of the models, we assume $p=1/2$, but we also test $p=0$ and $p=1$ to bracket the behavior of expected temperature profiles (Sect.~\ref{sect:gastemp}). \item {\em Total amount of gas.} In the nominal gas models, the total gas mass is taken as retrieved from the gas observations: $0.1 M_\oplus$ for $\beta$~Pic, $0.3 M_\oplus$ for HD~32297, and $4 \times 10^{-4} M_\oplus$ for AU~Mic. As we consider here only the outer part of the disks, outside the birth ring, we simply halve these masses. Indeed, in the $\beta$~Pic gas disk, the masses of the inner and outer disks are nearly equal (the two terms in Eq.~(4) or (5) of \citet{brandeker-et-al-2004} make comparable contributions). Interestingly, the above gas masses are roughly comparable with the dust mass (gas:dust ratio $\approx$ 1:1). However, as explained in the introduction, the systems may contain much more gas than is evident in the observations. For this reason, we consider high gas mass models, in which the total gas mass is 100 times the nominal mass. With this choice, the gas-to-dust ratio is a standard 100:1. Finally, we try a very high gas case, in which the gas mass is ten times higher than in the high gas models. \item {\em Radial slope of the gas surface density.} If the gas is secondary, one could expect the gas profile to approximately follow the dust profile \citep[e.g.,][]{czechowski-mann-2007}. For our three systems, the latter falls off as $\propto r^{-1}$...$\propto r^{-2}$.
Slopes in this range are also expected on theoretical grounds in the standard ``birth ring -- collisionally evolving gas-free disk'' model explained in Sect.~3.1. Thus our standard choice is to set the surface density of both gas and dust to be $\propto r^{-1.5}$ initially. However, if the gas is primordial, i.e.\ a remnant of an accretion disk, the profile is more uncertain. One could still expect $\propto r^{-1.5}$ (consistent with an isothermal steady-state solution for a viscous accretion disk), but, for instance, a flat density profile $\propto r^{-1}$ (another known steady-state solution) could also be possible. \end{enumerate} Accordingly, each model has an identifier $XpYZ$, where \\[-5mm] \begin{itemize} \item $X$ indicates the type of the gas disk: \begin{itemize} \item 0 (no gas) \item t (thermally-supported) \item r (radiation pressure-supported) \end{itemize} \item $p$ indicates the slope of the gas temperature: \begin{itemize} \item 0 ($p=0$) \item 1 ($p=1$) \item nothing ($p=1/2$) \end{itemize} \item $Y$ indicates the total amount of gas: \begin{itemize} \item 0 (no gas) \item n (``nominal'' gas mass) \item h (``high'' gas mass) \item v (``very high'' gas mass) \end{itemize} \item $Z$ indicates the slope of the gas density profile: \begin{itemize} \item 0 (no gas) \item s (standard, surface density falls off as $r^{-1.5}$) \item f (flat, surface density falls off as $r^{-1}$) \end{itemize} \end{itemize} Our ``tpns'' and ``rpns'' models (a thermally- or radiation pressure-supported gas disk with various gas temperature profiles, a nominal gas content, and a standard surface density slope) are expected to simulate {\em secondary gas} whose production is somehow related to dust. In contrast, the ``ths'', ``tvs'', and ``thf'' models would emulate a possible remnant of {\em primordial gas} in the outer part of the system.
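The identifier scheme above can be summarized in a few lines; a toy Python sketch (ours) that assembles the labels used in Table~\ref{tab_systems}:

```python
def model_id(support, mass, slope, p=0.5):
    """Build an XpYZ model identifier: support in {'0','t','r'},
    mass in {'0','n','h','v'}, slope in {'0','s','f'};
    p = 0.5 is encoded by omission, p = 0 or p = 1 explicitly."""
    if support == "0":
        return "000"  # the gas-free reference model
    p_code = "" if p == 0.5 else str(int(p))
    return support + p_code + mass + slope

print(model_id("t", "n", "s"))          # -> 'tns'
print(model_id("r", "n", "s", p=1.0))   # -> 'r1ns'
print(model_id("t", "h", "f"))          # -> 'thf'
```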
As reference models, we treat the ``tns'' and ``ths'' cases (a thermally-supported gas disk with a nominal or high gas content and a standard surface density slope). We ran these models for all three systems. In the case of $\beta$~Pic, we additionally tested other disks, as listed in Table~\ref{tab_systems}. \subsection{Numerical integrations} We now describe the procedure used to compute individual trajectories of dust grains as well as the overall brightness profiles of the disks. First, we assumed that dust parent bodies are orbiting in the ``birth ring''. Their orbital semimajor axes were uniformly distributed within the birth ring as specified in Table~\ref{tab_systems} and their eccentricities within $[0.0, 0.1]$. In each run with a thermally-supported gas disk, we launched 500 particles with radii distributed uniformly on a logarithmic scale between $s_{min}$ and $s_{max}$. The size ranges $[s_{min}, s_{max}]$ used correspond to a $\beta$-ratio interval from $0.9$ down to $0.01$. The minimum $\beta$ value of 0.01 is chosen to cover all grains that can have their stability distance in or outside the center of the birth ring (see Fig.~\ref{fig_r_stab} below). Upon release, the particles instantaneously acquire orbits with semimajor axes and eccentricities different from those of their parent bodies, which is a standard radiation pressure effect \citep{burns-et-al-1979}. The forces included stellar gravity, direct radiation pressure, the Poynting-Robertson force, and gas drag. The drag force due to the stellar wind, which is known to be important for AU Mic \citep{strubbe-chiang-2006,augereau-beust-2006}, is not included. The reason is that a nearly radial stellar wind could no longer exist in the outer disk considered here, between roughly 35 and 300\,AU, because of the interactions with the presumed rotating gas.
Indeed, a simple estimate, assuming a mass loss rate of $\sim 10^{-12} M_\odot \,\hbox{yr}^{-1}$ and a stellar wind velocity of $\sim 400\,\hbox{km}\,\hbox{s}^{-1}$, yields a total mass of the stellar wind particles in the outer disk of $\la 10^{-6} M_\oplus$, about three orders of magnitude less than the mass of rotating gas in the nominal model. The particle orbits were followed with the 15th-order integrator of \citet{everhart-1974,everhart-1985} with an adaptive step size. The integrations ended upon the earliest of the following: (i) a grain came as close as 10\,AU to the star; (ii) a grain reached the distance of 1000\,AU; (iii) $10^5$~yr of integration elapsed. Instantaneous positions of particles were stored every 500 years for bound grains and every 5 years for unbound ones. A typical number of instantaneous positions per system and gas model was $\sim 10^6$. The setup for the runs with radiation pressure-supported gas disks was different from what is described above, because using the same setup would lead to the following problem. As explained in Sect. 4.3, we normalize the calculated dust density in such a way as to arrive at the correct maximum geometrical optical depth $\tau_0$ (Table~\ref{tab_systems}). In the usual runs for thermally-supported disks, $\tau_0$ in the birth ring is dominated by the particles with $\beta < 0.2$. But in the ``rns'' run these drift inward, so that $\tau_0$ comes from grains with $\beta \ge 0.2$. These have smaller cross sections, and thus their number density turns out to be two orders of magnitude higher than in standard runs. Thus the collisional lifetime becomes quite short, $\sim 1$...$10$ years. For these reasons, the ``rns'', ``r0ns'', and ``r1ns'' models were run with just 15 grains (instead of 500) over $10^3$ years (instead of $10^5$), and the recording time step was as small as 0.5 years for bound grains and 0.005 years for unbound ones.
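The order-of-magnitude stellar-wind mass estimate quoted above is easy to reproduce: the steady-state wind mass residing in the 35--300\,AU region is the mass-loss rate times the wind crossing time of that region. A minimal numerical sketch (in Python, using only the numbers quoted in the text):

```python
# Order-of-magnitude check: wind particles cross the 35-300 AU outer disk
# in a time dr / v_wind, so the mass residing there at any moment is
# M_wind ~ Mdot * dr / v_wind.
AU = 1.496e13               # cm
YR = 3.156e7                # s
M_SUN_IN_M_EARTH = 332946.0

mdot = 1e-12                # stellar mass-loss rate, M_sun / yr (as quoted)
v_wind = 400e5              # wind speed, cm / s (as quoted)

dr = (300.0 - 35.0) * AU                     # radial extent of the outer disk
t_cross = dr / v_wind / YR                   # crossing time, yr (~3 yr)
m_wind = mdot * t_cross * M_SUN_IN_M_EARTH   # wind mass, Earth masses

print(f"crossing time ~ {t_cross:.1f} yr")
print(f"wind mass in outer disk ~ {m_wind:.1e} M_Earth")  # ~1e-6 M_Earth
```

The result, $\sim 10^{-6} M_\oplus$, matches the value quoted in the text.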
The minimum $\beta$ value was set to $\eta = 0.2$ (instead of 0.01), because in the radiation pressure-supported disks only grains with $\beta > \eta$ drift outward from the birth ring. \subsection{Collisional post-processing} \label{collpost} The collisions were applied to the numerical integration results through the following post-processing algorithm: \begin{enumerate} \item The instantaneous positions of grains stored during the numerical integrations are distributed into two-dimensional size-distance bins, $(s_i, r_j)$ or $(\beta_i, r_j)$. The number of occurrences in each bin $N_{\mathrm{bin}}(\beta_i, r_j)$ is converted into the absolute number density $n(\beta_i,r_j)$ from the known normal geometrical optical depth at the birth ring (Table~\ref{tab_systems}). Besides, for each bin we calculate the average radial velocity of its grains, $v_r(\beta_i,r_j)$. \item For each $(\beta_i,r_j)$-bin, the collisional lifetime of its particles is calculated as follows. We consider all bins with various $\beta_k$ at the same distance $r_j$ and check if Eq.(\ref{disruption}) is fulfilled. The reciprocal of the collisional lifetime is \begin{eqnarray} T_{\mathrm{coll}}(\beta_i,r_j)^{-1} &=& \sum\limits_k n(\beta_k,r_j) \nonumber\\ &\times& \max [ v_r(\beta_i,r_j), v_r(\beta_k,r_j) ] \nonumber\\ &\times& \sigma (\beta_i,\beta_k) , \label{T_coll} \end{eqnarray} where $\sigma$ is the collisional cross section and the summation is performed over those projectiles that satisfy Eq.(\ref{disruption}). \item We then go back to the stored numerical integration results and go along each trajectory again. For a particle with $\beta_i$ at each time step $t_l$, we calculate the probability that the particle survives destructive collisions over the current time step, $p(\beta_i, t_l) = \exp [-\Delta t / T_{\mathrm{coll}}(\beta_i,r_j)]$, where $r_j$ is the distance at the moment $t_l$ and $\Delta t = t_l - t_{l-1}$ is the time step between two successive stored positions of the particle.
Having $p$ for all previous time steps for the same grain, we determine the probability for that grain to survive collisions up to the current moment of time: \begin{equation} P(\beta_i, t_l) = \prod\limits_{m=0}^l p(\beta_i,t_m) . \end{equation} \item Steps 1--3 are repeated iteratively. From now on, when determining the number densities in step 1, we use $N_{\mathrm{bin}}(\beta_i, r_j) P(\beta_i, t_l)$ instead of just $N_{\mathrm{bin}}(\beta_i, r_j)$. \end{enumerate} Note that we do not compute the exact relative velocities between impacting grains, since this procedure would be too time consuming, but assume that this velocity is of the order of the $v_r$ of the grain having the highest radial velocity. In doing so, we implicitly neglect the azimuthal component of the relative velocity, and our procedure should thus be considered as giving a lower estimate for impact speeds and thus shattering efficiency of collisions. The procedure is computationally fast and converges rapidly to a steady state. In practice, we perform five iterations. \subsection{Calculation of surface brightness profiles} From $N_{\mathrm{bin}}(\beta_i, r_j)$ recalculated after several iterations, the surface brightness of the disk is computed as \begin{equation} \mathit{SB}(r_j) = \mathrm{const} \times \sum\limits_i N_{\mathrm{bin}}(\beta_i, r_j) s_i^{3-q} r_j^{-3} \end{equation} (we set $q=3.5$). Here, the factor $s^{3-q}$ provides conversion from particle numbers to their cross section area ($s^2$), accounts for a logarithmic binning of sizes ($dN/d{\ln s} = s dN/ds$), and includes the size distribution at production ($s^{-q}$, Eq.~\ref{Ndot}). In turn, the factor $r^{-1}$ is needed because the surface area of a radial annulus is proportional to $r$, whereas $r^{-2}$ is a conversion from optical depth to the $\mathit{SB}$. The latter conversion needs to be explained in more detail. For the sake of simplicity, we assume here gray, isotropic scattering. 
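The bookkeeping in steps 2--3 of the post-processing algorithm can be sketched in a few lines of Python. The sketch below is a toy stand-in, not the actual pipeline: all densities, velocities, and cross sections are made-up illustrative numbers. It evaluates the inverse collisional lifetime of Eq.~(\ref{T_coll}) for one target bin and the cumulative survival probability $P$ along a stored trajectory.

```python
import numpy as np

def inv_t_coll(n_k, vr_i, vr_k, sigma_ik, destructive):
    """Eq. (T_coll): 1/T_coll for a target bin (beta_i, r_j).
    Arrays run over projectile bins beta_k at the same distance r_j;
    `destructive` flags bins satisfying the disruption condition."""
    v_imp = np.maximum(vr_i, vr_k)            # impact speed estimate
    return np.sum(n_k * v_imp * sigma_ik * destructive)

def survival(times, t_coll):
    """Cumulative survival probability P(t_l) = prod_m exp(-dt_m / T_coll),
    with T_coll evaluated at the grain's distance at each stored step."""
    dt = np.diff(times, prepend=times[0])     # first stored point: dt = 0
    return np.cumprod(np.exp(-dt / t_coll))

# toy numbers for one target bin (units cm^-3, cm/s, cm^2; illustrative only)
rate = inv_t_coll(n_k=np.array([1e-12, 5e-13]),
                  vr_i=2.0e4, vr_k=np.array([1.0e4, 5.0e4]),
                  sigma_ik=np.array([1e-6, 2e-6]),
                  destructive=np.array([1.0, 1.0]))

# positions stored every 500 yr; constant T_coll = 1e4 yr for simplicity
times = np.arange(0.0, 5.0e4, 500.0)
P = survival(times, t_coll=1.0e4)             # decays as exp(-t / T_coll)
```

In the real procedure, $T_{\mathrm{coll}}$ varies with the grain's current distance bin, and the reweighted occurrence counts $N_{\mathrm{bin}} P$ are fed back into step 1 until the iteration converges.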
In that case, the brightness of the edge-on disk is known to come mostly from the part of the line of sight closest to the star \citep[e.g.][]{nakano-1990,thebault-augereau-2007}. Further, as mentioned above, we assume a non-flared disk, with the dust scale height being linearly proportional to $r$. With these assumptions, the standard brightness integral gives $\mathit{SB}(r) \propto \tau(r) r^{-2}$. Therefore, if \begin{equation} \tau(r) \propto r^{-\alpha} \end{equation} then \begin{equation} \mathit{SB}(r) \propto r^{-\mu} \end{equation} with \begin{equation} \mu = \alpha + 2 . \label{mu_alpha} \end{equation} \section{Results} In this section, we start with an analysis of typical single-grain dynamics and then proceed with $\mathit{SB}$ profiles. \subsection{Radial drift of dust grains} We start with the case of a thermally-supported gas disk. The numerical solution of Eq.~(\ref{stability}) with $p=1/2$ is shown in Fig.~\ref{fig_r_stab}. It shows that the stability distance is less than 1000\,AU only for grains with $\beta \la 0.15$--$0.20$. Grains with higher $\beta$ ratios sweep outward all the way through the disk. \begin{figure}[bth!] \centerline{\includegraphics[width=0.45\textwidth]{12917f02.eps}} \caption{ Loci of stability in the $\beta$ ratio -- distance plane in thermally-supported gas disks (curves) and in a radiation pressure-supported disk (vertical straight line). Grains to the left of the stability lines drift inward, those to the right move outward. Thin horizontal lines mark the distances at which the birth rings around all three stars are centered. \label{fig_r_stab} } \end{figure} \begin{figure}[bth!] \centerline{\includegraphics[width=0.40\textwidth]{12917f03.eps}} \caption{ Radial drift of different-size particles in the $\beta$~Pic nominal gas (top) and high gas models (bottom).
\label{fig_r_of_t} } \end{figure} Fig.~\ref{fig_r_of_t} illustrates how grains with different $\beta$ ratios drift toward their stability distances in the nominal and high gas $\beta$~Pic models. It shows in particular how sensitive the drift timescale $T_{\mathrm{drift}}$ (the time needed for a grain to reach its stability distance) is to the grain size and to the gas content. Further, it tells us that $T_{\mathrm{drift}} \gg T_{\mathrm{stop}}$. One reason for this is that $T_{\mathrm{stop}}$ increases rapidly with increasing distance. For instance, in the nominal gas case, the $\beta=0.1$ grains have the stability distance at $\sim 500$\,AU, but are still at $\sim 150$\,AU after 1~Myr of evolution. Hence they have $T_{\mathrm{drift}} \gg 1$~Myr, whereas their stopping time at the birth ring distance is $T_{\mathrm{stop}} \sim 2 \times 10^3$~yr. We now turn to the case where the gas is radiation pressure-supported. As explained in Sect.~\ref{sect_rad_dyn}, only the particles with $\beta > \eta$ will spiral outward from the birth ring, and the stability distance does not exist. Thus the outer disk will only be composed of relatively small grains in unbound orbits. \subsection{Radial velocities and collisional outcomes} \begin{figure*}[bth!] \centerline{\includegraphics[width=0.65\textwidth]{12917f04.eps}} \caption{ Radial velocity of different-sized grains in the $\beta$~Pic disk for the 000 (top left), rns (top right), tns (bottom left) and ths (bottom right) gas models, obtained with the numerical integrations described in Sect.~4. For comparison, a horizontal straight line gives the minimum relative velocity $v_\mathrm{cr}$ needed to break up two like-sized colliders (this value is independent of sizes, since we assume a constant critical energy for fragmentation and dispersal $Q_D^\star = 10^8 \,\hbox{erg}\,\hbox{g}^{-1}$ for all dust sizes).
\label{fig_rad_vel} } \end{figure*} The relative velocity of impacting grains is a crucial parameter, leading to their destruction if it exceeds $v_\mathrm{cr}$ (Eq.~\ref{disruption}). As discussed in Sect.~\ref{collpost}, we assume that these impact speeds are of the order of the particle radial velocities. Figure~\ref{fig_rad_vel} plots $v_r$ for grains of various sizes, in the $\beta$~Pic disk without gas and for two gas models, and compares them to the value of $v_\mathrm{cr}\sim 300\,\hbox{m}\,\hbox{s}^{-1}$ for impacts between \emph{equal-sized} particles. The top panel reveals Keplerian U-shape curves, where $v_r>v_\mathrm{cr}$ for all grains smaller than $s \sim 17 \,\mu\hbox{m}$ ($\beta > 0.05$), except near the apsides. In the presence of gas (middle and bottom panels), radial velocities are damped, and only $\beta \sim 0.1$ ($s \sim 10 \,\mu\hbox{m}$) grains have $v_r$ exceeding the disruption threshold in the nominal gas case. In the high gas case, this damping is much more efficient, so that after a few pseudo-orbits even $\beta \sim 0.2$ ($s \sim 4 \,\mu\hbox{m}$) grains have $v_r<v_\mathrm{cr}$. The results are similar for the other two systems, i.e., low $v_r$ for all grains larger than the size corresponding to $\beta \sim 0.05$ in the gas-free case and $\beta \sim 0.2$ in the high gas case. However, collisional disruption plays a significant role in all the systems, for all gas models considered, and across the whole range of distances in the outer disk. This is because of the crucial role of collisions between grains of \emph{unequal} sizes. This is easy to understand. Indeed, Fig.~\ref{fig_rad_vel} shows that very small grains always maintain large radial velocities. Hence, for a grain of a given size, there is a significant number of somewhat smaller grains that are fast enough and still massive enough to satisfy Eq.~(\ref{disruption}).
To illustrate this, we plot in Figure~\ref{fig_T_coll} the collisional lifetimes of different-sized grains around all three stars, calculated with the algorithm described in Sect.~\ref{collpost}. We see that for $\beta \leq 0.2$, $T_\mathrm{coll}$ is more or less independent of particle size in all considered cases. Overall, $T_\mathrm{coll}$ is longer in high-gas models and shorter in nominal-gas ones. For bound grains ($\beta < 0.5$), $T_\mathrm{coll}$ never exceeds $4 \times 10^4$~yr in any model. \subsection{Surface brightness profiles without gas} We now turn to $\mathit{SB}$ profiles and start with a disk that does not contain gas (the ``000'' model for $\beta$~Pic), both with and without collisions taken into account. For comparison, we ran our full collisional code ACE \citep{krivov-et-al-2008} that provides a detailed treatment of collisions with a multitude of collisional outcomes. The resulting profiles are shown in Fig.~\ref{fig_sb_profiles_nogas}. Without collisions we get a steep ($\approx -5$) $\mathit{SB}$ profile. This result is expected when assuming that grains down to the radiation pressure blow-out limit ($\beta = 0.5$) are produced in the birth ring with a $q=3.5$ size distribution and that the smallest of these grains, which dominate the outer disk, are then simply diluted along their eccentric orbits and thus become underabundant in the birth ring. \citet{strubbe-chiang-2006} showed that in this case the resulting surface density profile approximately decreases as $r^{-2.5}$, yielding an $\mathit{SB}$ slope of $-4.5$ \citep[see][for a more detailed discussion]{thebault-wu-2008}. In contrast, the $\mathit{SB}$ profile with collisions turned out to be close to $-3.5$...$-4.0$, which corresponds to a surface density slope of $\approx -1.5$...$-2.0$. This slope agrees reasonably well with the one calculated with a full collisional simulation with ACE. A radial $\mathit{SB}$ slope close to $-3.5$ is theoretically expected, too.
The difference with the collisionless case is that the small high-$\beta$ grains can only be produced and destroyed in the birth ring but spend most of their time in collisionally inactive regions beyond it. As a result, their number density follows the $q=3.5$ size distribution \emph{within} the birth ring (instead of being underabundant as in the collisionless case), so that the total integrated number of small grains (taking into account the large fraction outside the birth ring) is much higher than the one derived from a $q=3.5$ law \citep[see][]{strubbe-chiang-2006,thebault-wu-2008}. \begin{figure}[bth!] \centerline{\includegraphics[width=0.40\textwidth]{12917f05.eps}} \caption{ Collisional lifetime of different-sized grains (i.e. those with different $\beta$ ratios), obtained in the numerical simulations described in Sect.~4. \label{fig_T_coll} } \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=0.45\textwidth]{12917f06.eps}} \caption{$\mathit{SB}$ profiles for $\beta$~Pic without gas (thick curves). Dashed and solid lines are without and with collisions taken into account, respectively. The dotted line is the profile obtained with an elaborate collisional code ACE \citep{krivov-et-al-2008}. All profiles are normalized to unity at the outer edge of the birth ring. The filled vertical bar shows the location of the birth ring. The grey-shaded area is bordered by power laws with indices $-3.0$ and $-4.0$, and a $-3.5$ slope is marked with a thin straight line. \label{fig_sb_profiles_nogas} } \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=0.45\textwidth]{12917f07.eps}} \caption{ Same as Fig.~\ref{fig_sb_profiles_nogas}, but for several gas models (thick curves). Collisions are included. \label{fig_sb_profiles_bpic} } \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=0.45\textwidth]{12917f08.eps}} \caption{Same as Fig.~\ref{fig_sb_profiles_bpic}, but for models assuming different gas temperature profiles.
Since all the curves fall almost on top of each other, we artificially lowered each curve listed in the legend (starting from ``tns'') relative to the previous one by a factor of 1.5. \label{fig_sb_profiles_bpic_p} } \end{figure} \subsection{Surface brightness profiles with gas: $\beta$~Pic} Fig.~\ref{fig_sb_profiles_bpic} compares $\mathit{SB}$ profiles for the $\beta$~Pic system in several gas models. It shows that the nominal amount of gas leads to almost the same profile as the one without gas (close to $-3.5$...$-4.0$). Furthermore, and surprisingly, the profiles with high and very high gas masses (``ths'', ``tvs'', and ``thf'') are all close to each other and to those without or with little gas ($\approx -3.5$...$-4.0$). Our ``tvs'' and ``000'' models are close, although not completely identical, to the ``high gas'' and ``no gas'' models in \citet{thebault-augereau-2005}. In those two cases they obtained slopes of $\approx -2.5$ and $\approx -4.0$, respectively (their Figs.~3 and~4). Thus our result ($\approx -3.0$ and $\approx -3.5$...$-4.0$) slightly differs from theirs. Most of the difference comes from the fact that they used a more realistic, extended distribution of parent planetesimals, but a much simpler, monosized collisional model without a collisional disruption threshold. Next, we compared the ``rns'' profile with the ``tns'' one. The two turned out to be very similar (Fig.~\ref{fig_sb_profiles_bpic_p}). This implies that thermally-supported and radiation pressure-supported gas disks with the same amount of gas may yield similar radial distributions of dust. Finally, we have also tested the influence of the radial profile of the gas temperature on the $\mathit{SB}$ profiles. To this end, we compared the $\mathit{SB}$ profiles of $\beta$~Pic obtained in the ``tns'' model ($p=1/2$) with those in the ``t0ns'' and ``t1ns'' models ($p=0$ and $1$, respectively). Similarly, the ``rns'' results were compared with ``r0ns'' and ``r1ns''.
Again, the $\mathit{SB}$ profiles turned out to be almost indistinguishable (Fig.~\ref{fig_sb_profiles_bpic_p}). \subsection{Surface brightness profiles with gas: all systems} We now proceed with the numerical runs for all three systems and two gas models (``tns'' and ``ths'') for each system (Table~\ref{tab_systems}). The resulting profiles are shown in Fig.~\ref{fig_sb_profiles}. One lesson from the plots concerns the role of collisions. In systems with a high dust density (or a large optical depth) and low gas density, collisions flatten the profile ($\beta$~Pic, nominal gas; see also Fig.~\ref{fig_sb_profiles_nogas} without gas). When the dust density is lower (AU~Mic), collisions have little influence on the $\mathit{SB}$ slope. The same is true in the case of a high gas density (high gas models for all three systems). This is mostly because the strong gas drag sustains sufficiently high radial velocities even far from the star, so that $v_r(r)$ does not decrease with increasing $r$ as abruptly as in the nominal gas cases (see Fig.~\ref{fig_rad_vel}). \begin{figure*}[bth!] \centerline{\includegraphics[width=0.75\textwidth]{12917f09.eps}} \caption{ $\mathit{SB}$ profiles for $\beta$~Pic (top), HD~32297 (middle), and AU~Mic (bottom), for a nominal gas model (left) and a high gas one (right). Thick dashed and solid lines are without and with collisions taken into account, respectively. Their normalization is the same as in Fig.~\ref{fig_sb_profiles_nogas}. Thin curves show the partial contributions of three $\beta$-intervals to the total profile with collisions. Note that the shaded areas have a different meaning than in Figs.~\ref{fig_sb_profiles_nogas}--\ref{fig_sb_profiles_bpic_p}. They now indicate an approximate range of profiles, as deduced from observations (see Sect. 2): $-3.0$...$-4.0$ for $\beta$~Pic, $-2.7$...$-3.7$ for HD~32297, and $-3.8$...$-4.7$ for AU~Mic.
\label{fig_sb_profiles} } \end{figure*} In addition, Fig.~\ref{fig_sb_profiles} depicts partial contributions to the $\mathit{SB}$ profiles by different-sized grains. The largest contribution typically comes from medium-sized grains with $0.05 < \beta < 0.5$, most of which have stability distances outside the disk (Fig.~\ref{fig_r_stab}) and can be treated as ``effectively unbound'' ones. The relative contribution of the small $\beta > 0.5$ grains slightly rises with increasing distance, but never becomes comparable to that of the medium-sized particles. Large grains with $\beta < 0.05$ do not make any appreciable contribution to the $\mathit{SB}$ profiles in any of the systems. However, the most important conclusion from Fig.~\ref{fig_sb_profiles} is that the slopes differ only moderately for the two extreme gas models, an effect that we have already seen for $\beta$~Pic and now see for the other two systems. In the nominal gas case, a distinctive feature of the profiles is their slight ``curvature''~--- they do not follow a single power law across the entire disk. This effect mostly comes from collisions rather than gas, and it can also be seen in the gasless case (Fig.~\ref{fig_sb_profiles_nogas}). The profiles are steeper close to the birth ring and are more gently sloping farther out. Between 2 and $3 r_\mathrm{birth}$ ($r_\mathrm{birth}$ being the birth ring distance), the slopes are $-4.6$ for $\beta$~Pic, $-4.5$ for HD~32297, and $-4.7$ for AU~Mic. Outside $3 r_\mathrm{birth}$, the slopes flatten to $-3.5$, $-3.2$, and $-3.3$, respectively. In contrast, in the high gas case the curvature effect is only present close to the birth ring. Outside $2 r_\mathrm{birth}$, all three $\mathit{SB}$ profiles have slopes in the $-3.6$...$-4.0$ range. Finally, the model slopes have to be compared with the observed slopes: $-3.0$...$-4.0$ ($\beta$~Pic), $-2.7$...$-3.7$ (HD~32297), and $-3.8$...$-4.7$ (AU~Mic).
From all these values and from Fig.~\ref{fig_sb_profiles}, it is hardly possible to judge whether nominal-gas or high-gas models match the observations better. In particular, this depends on the radial zone of the disk considered. Besides, one should keep in mind that our models rest on many simplifying assumptions (for instance, grey isotropic scattering) and have limited accuracy (e.g., contain some numerical noise). Equally, the slopes retrieved from observations inherit uncertainties from the data and are sensitive to the specific procedure of data reduction (see, e.g., a discussion in Sect. 3.2 of \citeauthor{fitzgerald-et-al-2007b} \citeyear{fitzgerald-et-al-2007b}) and should be treated with caution, too. Thus the only conclusion we can make is that nominal-gas and high-gas models are both in reasonable agreement with the observations. \section{Analytic model} To better understand the numerical simulation results, in this section we address the dust distributions analytically. \subsection{``Static'' model in the case of a thermally-supported gas disk} We start with a simple model that assumes dust to drift swiftly by gas drag: $T_{\mathrm{drift}} \ll T_{\mathrm{life}}$, where $T_{\mathrm{drift}}$ is the dust radial drift time and $T_{\mathrm{life}}$ is the grain lifetime, e.g. the collisional one. With this assumption, a dust grain of radius $s$ is expected to ``instantaneously'' arrive at an equilibrium distance (\ref{stability}) from the star $r$ (for brevity the subscript ``stab'' will be omitted). This model ignores the fact that grains spend a finite time on their way to their stability distances, and thus also contribute to the optical depth and brightness closer to the star than their parking distances. It also ignores the fact that the stability distances of smaller grains (if $p<1$) are located outside the disk, so that these grains never arrive there.
And, even if stability distances are well inside the disk, grains may not arrive there (or not all of them may), if they are collisionally eliminated on shorter timescales, meaning that the assumption $T_{\mathrm{drift}} \ll T_{\mathrm{life}}$ fails. Some of these assumptions fail in the systems considered in this paper. For instance, in Sect. 5.5 we showed that the largest contribution to the $\mathit{SB}$ profiles comes from ``effectively unbound'' grains. Nevertheless, we deem this ``static'' model useful. First, the ``static'' case is fully tractable analytically. Second, it gives a rough idea of the extent to which the ambient gas can change the dust profiles, and it can be considered a limiting case for the more realistic ``dynamic'' model that will be worked out later. The grains with radii $[s, s+ds]$ are produced in the birth ring at a constant rate $\dot{N} ds$, see Eq.~(\ref{Ndot}). These grains drift to a size-dependent stability distance, which is easy to find. In the limit of geometric optics, $\beta \propto s^{-1}$, Eq.~(\ref{beta}). Next, $\eta$ is given by Eq.~(\ref{eta}). Equating $\beta$ and $\eta$, we find a simple relation between the grain radius and stability distance (one-to-one if $p \ne 1$): \begin{equation} s \propto r^{p-1}. \label{s_of_r} \end{equation} Grains with radii $[s, s+ds]$ are located in an annulus $[r, r+dr]$. Their steady-state number is \begin{equation} dN = \dot{N} T_{\mathrm{life}} ds, \end{equation} and the normal geometrical optical depth of the annulus is \begin{equation} \tau (r) = \left| dN \over 2 \pi r dr \right| \pi s^2 = {1 \over 2} {s(r)^2 \over r} \dot{N}(s(r)) T_{\mathrm{life}} (s(r)) \left| ds \over dr \right| . \label{tau_general} \end{equation} We now assume that the grain lifetime is independent of size (or distance). This is a reasonable approximation, as can be seen, for instance, from Fig.~\ref{fig_T_coll}.
Substituting Eqs.~(\ref{s_of_r}) and (\ref{Ndot}) into Eq.~(\ref{tau_general}) yields a radial dependence \begin{equation} \tau (r) \propto T_{\mathrm{life}} \; r^{-\alpha} \label{tau_general1} \end{equation} where \begin{equation} \alpha = 5 - 3p + (p-1)q . \label{alpha_wo_coll} \end{equation} According to Eq.~(\ref{mu_alpha}), the $\mathit{SB}$ slope is \begin{equation} \mu = 7 - 3p + (p-1)q . \label{mu_wo_coll} \end{equation} In the above derivations we adopted a gas temperature that is independent of the dust distribution. However, as discussed above (see Eq.~\ref{gamma}), in the high dust density limit the gas temperature may become proportional to a certain power of the dust density (or equivalently the optical depth $\tau$ divided by the distance $r$). Using Eq.~(\ref{gamma}) instead of Eq.~(\ref{T_g}), Eq.~(\ref{alpha_wo_coll}) for the optical depth slope is replaced by \begin{equation} \alpha = {5 - 3p + (p-1)q - (3-q)\gamma \over 1 + \gamma(3-q)} , \label{alpha_wo_coll_mod} \end{equation} and Eq.~(\ref{mu_wo_coll}) for the $\mathit{SB}$ slope by \begin{equation} \mu = 2 + {5 - 3p + (p-1)q - (3-q)\gamma \over 1 + \gamma(3-q)} . \label{mu_wo_coll_mod} \end{equation} \begin{figure}[bth!] \vspace*{5mm} \centerline{\includegraphics[width=0.43\textwidth]{12917f10.eps}} \caption{ Radial slope of the surface brightness for different $p$, $q$, $\gamma$, according to Eq.~(\ref{mu_wo_coll_mod}). Thin and thick dotted lines ($q=3.0$) coincide with each other. The asterisk marks a standard model with a blackbody gas temperature ($\gamma=0$ and $p=1/2$) and a canonical size distribution of dust at production ($q=3.5$). For comparison, best-fit slopes inferred from the analysis of scattered light images of our three systems are shown on the right. \label{fig_sb_slope} } \end{figure} Figure~\ref{fig_sb_slope} gives the slopes $\mu$ of the surface brightness (without grain-grain collisions) for various $p$, $q$, and for $\gamma=0$ and $1$. These results can easily be understood.
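Eqs.~(\ref{mu_wo_coll}) and (\ref{mu_wo_coll_mod}) are straightforward to tabulate; a minimal Python sketch evaluating them for a few representative $(p, q, \gamma)$ combinations:

```python
def sb_slope(p, q, gamma=0.0):
    """SB slope mu = 2 + alpha, with alpha from Eq. (alpha_wo_coll_mod);
    gamma = 0 reduces to the standard result mu = 7 - 3p + (p - 1) q."""
    alpha = (5.0 - 3.0 * p + (p - 1.0) * q - (3.0 - q) * gamma) \
            / (1.0 + gamma * (3.0 - q))
    return 2.0 + alpha

print(sb_slope(0.5, 3.5))              # standard case: 3.75
print(sb_slope(0.5, 3.5, gamma=1.0))   # dust-induced heating: 6.5
# at q = 3 the gamma terms drop out, so mu is gamma-independent (= 4):
print(sb_slope(0.5, 3.0), sb_slope(0.5, 3.0, gamma=1.0))
```

The $q=3$ degeneracy is the reason why the thin and thick dotted lines coincide in Fig.~\ref{fig_sb_slope}.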
Consider, for instance, the standard gas heating model ($\gamma = 0$). We start with the dependence on $q$ for a fixed $p$. If the temperature gradient is not too steep ($p<1$), then $s$ decreases with increasing $r$. In that case, assuming a steeper size distribution (larger $q$) makes the profile flatter, as it should. For a steep temperature drop-off ($p>1$), it is the other way round: the steeper the size distribution, the steeper the profile. The dependence on $p$ for a fixed $q$ is also obvious. For all $q > 3.0$ (which is expected), a steeper temperature gradient steepens the $\mathit{SB}$ profile. For all $q < 3.0$, the opposite is true. In the standard case that we took in most of the numerical simulations ($p = 1/2$, $q = 3.5$), we get $\mu = 3.75$ (asterisk in Fig.~\ref{fig_sb_slope}). In the case of dust-induced gas heating ($\gamma = 1$), Eq.~(\ref{mu_wo_coll_mod}) and Fig.~\ref{fig_sb_slope} suggest that a much wider range of $\mu$ is possible, and that the results are more sensitive to $q$. For example, the same standard case $p = 1/2$, $q = 3.5$ would result in an extremely steep $\mu = 6.5$. However, in view of the temperature calculation results presented in Fig.~\ref{fig_T_gas}, we do not expect Eq.~(\ref{mu_wo_coll_mod}) to be a better approximation to reality than Eq.~(\ref{mu_wo_coll}). Rather, it is meant to show that the radial distribution of dust is quite sensitive to the assumed gas heating model. \subsection{``Dynamic'' model} Instead of considering a system in which all grains reside at their stability distances, we now allow grains to drift through the disk towards their respective parking positions. We start with a thermally-supported gas disk and, for the time being, we neglect collisions.
Equation~(\ref{tau_general}) for the normal geometrical optical depth of the disk is replaced by \begin{equation} \tau (r) = {1 \over 2r} \int_0^{s_0(r)} {\dot{N}(s) s^2 \over v_r(r,s)} ds \propto {1 \over r} \int_0^{s_0(r)} {s^{2-q} \over v_r(r,s)} ds , \label{tau_unbound} \end{equation} and the surface brightness profile is now given by \begin{equation} \mathit{SB} (r) \propto {1 \over r^3} \int_0^{s_0(r)} {s^{2-q} \over v_r(r,s)} ds . \label{mu_full} \end{equation} Here, $s_0(r)$ is the radius of those grains whose stability distance is $r$. For disks with $p<1$, $s_0$ decreases with $r$. The radius $s_0$, or equivalently the radiation pressure-to-gravity ratio of grains with radius $s_0$, which we denote by $\beta_0$, can be read off from Fig.~\ref{fig_r_stab} or calculated analytically from Eq.~(\ref{eta}): \begin{equation} \beta_0 (r) = (p+\xi) {k T_\mathrm{g}^0 \over \mu_\mathrm{g} m_\mathrm{H}} {r_0 \over GM_\star} \left( r \over r_0 \right)^{1-p} . \label{boundary} \end{equation} As a numerical example, for $\beta$~Pic at $r=500$\,AU we have $\beta_0 \approx 0.10$...$0.13$ and $s_0 \approx 6.6$...$8.6\,\mu\hbox{m}$ for all thermally-supported disks modeled. It is only grains with $s \la 6.6$...$8.6\,\mu\hbox{m}$ that are present at the distance of $500$\,AU. Unfortunately, $v_r(r,s)$ is a complicated, non-power-law function of both arguments \citep[see, e.g., Eq. 23 in][]{takeuchi-artymowicz-2001}. What is more, the distance-dependent integration limit $s_0(r)$ in Eq.~(\ref{mu_full}) would result in a non-power-law brightness profile even if $v_r$ were a power law. The only straightforward particular case is that of blowout grains with $\beta \ga 0.5$. Fig.~\ref{fig_rad_vel} shows that their radial velocity is nearly constant, yielding $\mathit{SB} \propto r^{-3}$. This is consistent with the numerical simulations (see dotted lines in Fig.~\ref{fig_sb_profiles}). For the bound grains with moderate $\beta$ that actually dominate the outer disks, there are two competing effects.
One is the distance-dependent integration limit $s_0 (r)$. Its physical meaning is as follows. The larger the distance, the smaller the grains need to be to be able to reach it by gas drag. Thus larger grains are only present closer to the star, which steepens the $\mathit{SB}$ slope. Another effect is that $v_r$ decreases with $r$ (see Fig.~\ref{fig_rad_vel}), making the $\mathit{SB}$ slope flatter than $-3$. For instance, if we adopt $v_r \propto r^{-1}$ as a very rough approximation for the outer disk, we will have $\mathit{SB} \propto r^{-2}$. Further complications are expected from collisions. The grains then have a limited lifetime, and the integrand in Eq.~(\ref{mu_full}) would have to be weighted with the fraction of particles that survive collisions before they arrive at a distance $r$. This would generally affect the $\mathit{SB}$ slope. Because of the complexity of $v_r(r,s)$ and the limited grain lifetime due to collisions, it is difficult to extend our ``dynamic'' model further. However, the model is still useful, as it uncovers the reason why $\mathit{SB}$ slopes in systems with gas may be flatter than $-3$ \citep[like in][see their Fig.~3]{thebault-augereau-2005}: it is the slow-down of the radial drift velocity with increasing distance. On the other hand, the model demonstrates that slopes steeper than $-3$ are also possible, because larger grains can only drift to limited distances and only contribute to the parts of the disk close to the birth ring. Equations (\ref{tau_unbound})--(\ref{mu_full}) also hold for the radiation pressure-supported disks. However, in this case the upper integration limit $s_0$ has a different meaning: it is the radius of grains whose $\beta$ ratio is equal to $\beta_{\mathrm{gas}}$, Eq.~(\ref{eta rad}). The above discussion applies in large part to this case, too. The main conclusion is that a slope around $-3.0$ is expected, possible deviations from which stem from a size-dependent radial drift velocity and collisions.
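The two limiting behaviors of the ``dynamic'' model can be checked with a toy numerical evaluation of Eq.~(\ref{mu_full}). In the Python sketch below, all grids and the two $v_r$ laws are illustrative assumptions, and $s_0$ is held fixed to isolate the velocity effect; fitting a power law to the resulting $\mathit{SB}(r)$ recovers the slopes $-3$ (constant $v_r$) and $-2$ ($v_r \propto r^{-1}$) quoted above:

```python
import numpy as np

q = 3.5
s = np.linspace(0.1, 10.0, 400)   # grain radii, arbitrary units (the lower
                                  # cutoff avoids the divergence at s -> 0)
r = np.logspace(0.5, 1.5, 50)     # distances outside the birth ring

def trap(y, x):
    """Simple trapezoidal rule (keeps the sketch NumPy-version agnostic)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def sb(ri, vr):
    """Eq. (mu_full) with a fixed upper integration limit s_0."""
    return trap(s ** (2.0 - q) / vr(ri, s), s) / ri ** 3

slopes = {}
for label, vr in [("const", lambda ri, ss: np.ones_like(ss)),
                  ("1/r",   lambda ri, ss: np.ones_like(ss) / ri)]:
    vals = np.array([sb(ri, vr) for ri in r])
    slopes[label] = np.polyfit(np.log(r), np.log(vals), 1)[0]
    print(f"v_r ~ {label}: SB slope ~ {slopes[label]:.2f}")  # -3.00, -2.00
```

With a distance-dependent $s_0(r)$ the profile is no longer a single power law, which is the steepening effect discussed above.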
\section{Conclusions} We considered young debris disks, in which there is observational evidence for a rotating gas component (either primordial or secondary). We assumed that dust is replenished from parent bodies that are located in a ``birth ring'' (which usually shows up in the resolved images). We then modeled the dust distribution and scattered-light brightness profile in the outer part of the disk, exterior to the birth ring, under different assumptions about the possible amount and distribution of gas. Our main conclusions are as follows: 1. Our numerical simulations revealed that the radial profile of dust density, and thus the surface brightness profile of a dusty disk, are surprisingly insensitive to the parameters of the central star, the location of the dust-producing planetesimal belt, the dustiness of the disk and, most interestingly, the parameters of the ambient gas. The radial brightness slopes in the outer disks are all typically in the range $-3$...$-4$. This result holds for gas densities varying by three orders of magnitude and for different radial profiles of the gas temperature. Gas of solar composition supported against gravity by the gas pressure gradient and gas of strongly non-solar composition that must be supported by radiation pressure both lead to similar profiles. The slopes of $-3$...$-4$ are the same as those theoretically found for gas-free debris disks, and the same as those actually retrieved from observations of many debris disks. 2. Although the slopes roughly fall into the range $-3$...$-4$, the numerical simulations made it apparent that the exact slope depends on the total amount of gas in the disk and the gas density distribution slope, as well as on the dust density (through collisional timescales). 3. We developed a simple analytic description of the radial distribution of dust brightness in an optically thin disk. The analytic model explains the numerical results 1.--2.
and provides guidelines as to what can be expected in young debris disks, prior to any detailed numerical modeling. Assuming gray isotropic scattering by dust, the analytic model predicts a range of slopes around $-3$, due to the dominant contribution of small high-$\beta$ grains. It shows that deviations from this nominal value may come from the slow-down of the radial drift of bigger grains at larger distances (flattening), from the fact that larger grains cannot reach larger distances (steepening), and from the collisional elimination of dust particles. In the limiting case of very high gas densities and low dust densities (where the ``visible'' dust drifts through the outer disk over timescales shorter than its collisional timescale), if the dust size distribution at production follows a power law with an index $-3.5$, and assuming a black-body gas temperature, the model predicts a slope of $-3.75$. 4. Our results for three young (10--30~Myr old), spatially resolved, edge-on debris disks ($\beta$~Pic, HD~32297, and AU Mic) show that the observed radial profiles of the surface brightness do not pose any stringent constraints on the gas component of the disk. At least for gas densities falling within the observationally derived density limits, we do not see any significant effect of gas on dust distributions. Thus we cannot exclude that the outer parts of the systems may have retained substantial amounts of primordial gas which is not evident in the gas observations (e.g. as much as $50$ Earth masses for $\beta$~Pic). However, the possibility that gas is only present in small to moderate amounts, as deduced from gas detections (e.g. $\sim 0.05$ Earth masses in the $\beta$~Pic disk or even less), remains open, too. In that case, gas would be secondary, stemming for instance from grain-grain collisions or photo-desorption of dust. \acknowledgements We thank Torsten L\"ohne for many useful discussions.
Comments by an anonymous referee that helped to improve the paper are appreciated. Part of this work was supported by the \emph{Deut\-sche For\-schungs\-ge\-mein\-schaft, DFG\/} project number Kr~2164/8-1, by the {\em Deutscher Akademischer Austauschdienst} (DAAD), project D/0707543, and by the International Space Science Institute Bern, Switzerland (``Exozodiacal Dust Disks and Darwin'' working group, http://www.issibern.ch/teams/exodust/). FH was partly funded by the graduate student fellowship of the Thuringia State. AB was funded by the \textit{Swedish National Space Board} (contract 84/08:1).
\section{Introduction} The classical kink soliton solution of the $\lambda\phi^4$ theory has found many applications. One such use has been in models of extra-dimensions, where a background scalar field assumes the kink solution and becomes a domain-wall brane, a specific realisation of the generic idea of a brane world. From the point of view of a model builder, the kink can be used to localise fermions~\cite{Rubakov:1983bb, ArkaniHamed:1999dc}, gauge fields~\cite{Dvali:1996xe}, Higgs fields~\cite{George:2006gk} and gravity~\cite{Gremm:1999pj, Csaki:2000fc} (building on~\cite{Randall:1999vf}). Giving the kink a non-trivial representation under some internal symmetry allows for exciting symmetry breaking opportunities, such as GUT breaking~\cite{Pogosian:2000xv, Vachaspati:2001pw, Pogosian:2001fm, Davidson:2002eu} and supersymmetry breaking~\cite{Dvali:1996bg}. All these ingredients are able to play together in a comprehensive model of extra-dimensions, and a domain-wall-localised standard model can be implemented~\cite{Davies:2007xr}. Even though the kink has played a central role in domain-wall models for many decades now, there are some interesting and important technical properties of the kink that have been overlooked. These loose ends were alluded to in a previous work by the authors~\cite{George:2006gk}, and relate to the precise nature of the zero mode of translation of the kink, the thin-kink limit, and the implicit collective coordinate treatment.\footnote{For earlier analyses of the thin kink limit, see~\cite{Gregory:1990pm, Carter:1994ag}.} In this paper we shall resolve these issues and make clear the following two facts: first, that the kink zero mode corresponding to translations is almost completely frozen out in the thin-kink limit, and, second, that the implicit collective coordinate expansion (ICCE) does not capture all physically-acceptable classical field configurations.
Both of these results appear to be contrary to common understanding; they impact the conclusions of previous work, and they must be taken into account in future studies of kinks. Our analysis is chiefly mathematical and the results are valid for any application of the kink solution, not just domain-wall brane theories. But to aid in physical understanding and help the flow of our argument, we have in mind the specific scenario of a five-dimensional theory with a bulk scalar field forming a kink. We are interested in integrating out the extra dimension to determine the equivalent four-dimensional theory, and we shall elucidate the scalar degrees of freedom present in this reduced spacetime. The thin-kink limit is an important phenomenological limit for such a model, as the masses of the Kaluza-Klein (KK) modes are pushed to infinity. In addition, the action for a thin kink can be compared with the Nambu-Goto action for a fundamental brane. For the case of the infinitely-thin kink, we show that the Nambu-Goldstone boson, related to the spontaneous breaking of the translation symmetry, is not fully dynamical: the only remnant of translation invariance in the four-dimensional theory is the allowance of a single frequency massless mode. When dealing with translation invariance, one usually employs the ICCE; see Rajaraman~\cite{Rajaraman:1982is} and references therein. We shall demonstrate that such an expansion must be used with caution, as it is not able to adequately encode all field configurations of the original five-dimensional field and does not properly handle the non-linear interactions of the zero-mode at high order. The paper is organised as follows. In Section~\ref{sec:frozen} we review the kink solution, its energy density and its behaviour in the thin-kink limit. We demonstrate the existence of a `wavy kink' solution of a fixed frequency, which persists in the thin-kink limit.
We then argue that, in such a limit, this fixed frequency wave is the only remaining dynamical behaviour and hence the four-dimensional zero mode~--- the Nambu-Goldstone boson corresponding to translations of the kink~--- is almost completely frozen out. In Section~\ref{sec:modes} we analyse the modes of the kink, showing that the ICCE is not completely general, and we use the fully-general Fourier expansion to show that the zero mode is truly frozen out. We make some further remarks regarding dimensional reduction and then conclude in Section~\ref{sec:concl}. \section{The `wavy kink' and the frozen zero mode} \label{sec:frozen} The set-up of the problem is quite simple; we consider five-dimensional Minkowski spacetime, and a single scalar field with a quartic potential. The action is \begin{equation} \label{eq:phi-act-5d} \mathcal{S} = \int d^5x \left[ \frac{1}{2} \partial^M \Phi \partial_M \Phi - V(\Phi) \right] \:, \end{equation} where $\Phi$ is the scalar field, and the potential is \begin{equation} V(\Phi) = \frac{\lambda}{4} \left( \Phi^2 - v^2 \right)^2 \:. \end{equation} Indices $M, N$ run over the spacetime coordinates $(t,x,y,z,w)$, the Minkowski metric is $\eta_{MN}=\diag(+1,-1,-1,-1,-1)$, and $\lambda$ and $v$ are real parameters. The equation of motion for $\Phi$ is \begin{equation} \label{eq:phi-el} \partial^M \partial_M \Phi - v^2 \lambda \Phi + \lambda \Phi^3 = 0 \:. \end{equation} The well-known classical kink solution to Eq.~\eqref{eq:phi-el} is \begin{equation} \label{eq:phi-clas} \phi_c(w) = v \tanh \left( k w \right) \:, \end{equation} where $k=v\sqrt{\lambda/2}$ is the inverse width of the kink. Here, we have chosen the kink profile to depend on the extra-dimensional coordinate $w$, as this is the dimension we want to eliminate when constructing the equivalent four-dimensional theory.
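As a quick symbolic sanity check (ours, using sympy; not part of the original analysis), one can verify that the profile $\phi_c(w)=v\tanh(kw)$ with $k=v\sqrt{\lambda/2}$ solves the static Euler-Lagrange equation, and recompute its energy density per unit four-volume, $\varepsilon = \tfrac{2}{3}v^3\sqrt{2\lambda}$, quoted in the text:

```python
import sympy as sp

# Check that phi_c(w) = v*tanh(k*w), k = v*sqrt(lam/2), solves the static
# form of Eq. (eq:phi-el), and reproduces eps = (2/3) v^3 sqrt(2 lam).

w, v, lam = sp.symbols('w v lam', positive=True)
u = sp.symbols('u', real=True)
k = v * sp.sqrt(lam / 2)
phi = v * sp.tanh(k * w)

# Static Euler-Lagrange equation: partial^M partial_M -> -d^2/dw^2 here.
eom = -sp.diff(phi, w, 2) - v**2 * lam * phi + lam * phi**3

# Energy integrand (1/2) phi'^2 + V(phi); both pieces are proportional to
# sech^4(kw) = (1 - tanh^2(kw))^2, so substitute u = tanh(kw),
# dw = du / (k (1 - u^2)), and integrate a polynomial in u instead.
integrand = (sp.diff(phi, w) ** 2 / 2
             + sp.Rational(1, 4) * lam * (phi**2 - v**2) ** 2)
poly_u = sp.simplify(integrand.subs(sp.tanh(k * w), u) / (k * (1 - u**2)))
eps = sp.integrate(poly_u, (u, -1, 1))
```

Both checks are exact: `eom` simplifies to zero, and `eps` reduces to $\tfrac{2}{3}v^3\sqrt{2\lambda}$, the quantity held fixed in the thin-kink limit below.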
Integrating over $w$, one obtains the energy density per unit four-volume of the kink: \begin{equation} \varepsilon = \int dw \left[ \frac{1}{2} \phi_c'^2 + V(\phi_c) \right] = \frac{2}{3} v^3 \sqrt{2\lambda} \:. \end{equation} The thin-kink limit has the width of the hyperbolic tangent profile tending to zero, and is defined by $k\to\infty$ while $\varepsilon$ is kept finite. For the two parameters of the model, this limit translates to $\lambda\to\infty$ and $v\to0$, with $v^6 \lambda$ finite. We would now like to make a less restrictive ansatz for the solution to the five-dimensional Euler-Lagrange equation, an ansatz which can describe degrees of freedom on top of the static kink profile. Due to the Poincar\'e invariance of the action, any $w$-translated form of Eq.~\eqref{eq:phi-clas} is also a valid solution for $\Phi$. Using this fact as a hint, we try the more general translated ansatz \begin{align} \Phi(x^M) &= \phi_c\left(w-Z(x^\mu)\right) \\ \label{eq:phi-wavy} &= v \tanh \left[ k \left(w-Z(x^\mu)\right) \right] \:. \end{align} Here, the index $\mu$ runs over the four-dimensional subspace $(t,x,y,z)$. $Z(x^\mu)$ is a real scalar field which acts to translate the kink by an $x^\mu$-dependent amount, and includes as a particular case any constant shift of the kink. This ansatz is in fact of the same form as the first term in the ICCE approach to redescribing the five-dimensional scalar field as an infinite tower of four-dimensional KK scalar fields. In terms of the ICCE, we have here taken the solutions for all massive KK four-dimensional fields to be zero. We shall examine the general expansion, which retains all KK modes, in the next section. The fundamental theory is that of a five-dimensional scalar field. To ensure that all of the physics is retained, the correct approach to finding solutions for $Z(x^\mu)$ is therefore to substitute the ansatz into the five-dimensional Euler-Lagrange equation~\eqref{eq:phi-el}. 
Doing this gives \begin{equation} \label{eq:wavy-el} - \phi'_c(w-Z) \partial^\mu\partial_\mu Z + \phi''_c(w-Z) \partial^\mu Z \partial_\mu Z = 0 \:, \end{equation} where prime denotes derivative with respect to $w$. Since $\phi_c''$ is an odd function, integrating this equation over $w$ eliminates the second term,\footnote{Note that in doing the integration over $w$ we have terms such as $\int \phi'_c(w-Z) dw$ which look like functions of $x^\mu$. These terms actually yield the same value for each point in the four-space (which can be seen by changing the integration variable independently at each $x^\mu$), and so the integral results in an $x^\mu$-independent answer.} and so the most general solution obeys \begin{equation} \label{eq:z-constrain} \partial^\mu\partial_\mu Z = \partial^\mu Z \partial_\mu Z = 0 \:. \end{equation} Solutions for $Z(x^\mu)$ are massless plane waves \emph{of a single frequency only}. The usual equation of motion for a zero mode, $\partial^\mu\partial_\mu Z=0$, now has an auxiliary constraint, $\partial^\mu Z \partial_\mu Z = 0$, and one can no longer take a Fourier sum of all frequencies. The most general solution to both of these equations is \begin{equation} \label{eq:z-soln} Z(x^\mu) = A\cos(p_\mu x^\mu) + B\sin(p_\mu x^\mu) + C \:, \end{equation} with $A$, $B$ and $C$ arbitrary real numbers and $p_\mu p^\mu=0$. Notice that this solution solves the five-dimensional equation of motion~\eqref{eq:wavy-el} irrespective of the values of the parameters $\lambda$ and $v$; in particular, it remains valid in the thin kink limit. The auxiliary constraint means that, as an effective four-dimensional field, $Z$ does not manifest as a standard dynamical scalar field in the four-dimensional theory. Let us now compute the energy density per unit four-volume for the more general kink solution given by Eqs.~\eqref{eq:phi-wavy} and~\eqref{eq:z-constrain}.
It is\footnote{As before, one will encounter terms such as $\int \phi'^2_c(w-Z) dw$ which are actually $x^\mu$-independent.} \begin{align} E &= \int dw \left[ \frac{1}{2} \dot{\Phi}^2 + \frac{1}{2} \left( \nabla\Phi\cdot\nabla\Phi \right) + \frac{1}{2} \Phi'^2 + V(\Phi) \right] \\ &= \varepsilon + \frac{1}{2} \varepsilon \dot{Z}^2 + \frac{1}{2} \varepsilon \left( \nabla Z \cdot \nabla Z \right) \:. \end{align} Here, an over-dot denotes derivative with respect to $t$. $E$ is the energy density of the original kink background, $\varepsilon$, plus the kinetic and gradient energy of $Z$, with larger energy for higher frequency $Z$ solutions. The energy density is not sensitive to the individual parameters $v$ and $\lambda$, only their combination $\varepsilon$. Importantly, in the infinitely-thin kink limit, we are allowed a non-zero form for $Z$, as its contribution to the total energy density remains finite (assuming the spacetime derivatives of $Z$ are finite). Summarising, we have found a slightly more general kink solution, given by Eq.~\eqref{eq:phi-wavy}, which is an \emph{exact} solution of the five-dimensional Euler-Lagrange equation so long as $Z$ takes the form of Eq.~\eqref{eq:z-soln}. Due to the fact that this solution for $Z$ must be of a fixed frequency (with arbitrary phase and amplitude), we shall call the resulting solution the `wavy kink' solution. The `wave' appears along the length of the kink such that the hyperbolic tangent profile is shifted in the $w$-direction by an amount that varies sinusoidally in the three-space $(x,y,z)$. This wave oscillates in time at a fixed frequency, and, from the point of view of a four-dimensional observer, is the only dynamical behaviour that can be observed given the ansatz~\eqref{eq:phi-wavy}. Consequently, $Z$ cannot be called a proper four-dimensional mode, as, from a momentum-space perspective, its degrees of freedom consist of a set of measure zero: a single frequency.
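The single-frequency solution~\eqref{eq:z-soln} can be checked directly. The sketch below (ours, using sympy) verifies that, for an illustrative choice of null four-vector $p^\mu=(E,E,0,0)$ in signature $(+,-,-,-)$, the profile $Z$ satisfies both conditions of Eq.~\eqref{eq:z-constrain}, so every term of Eq.~\eqref{eq:wavy-el} vanishes separately:

```python
import sympy as sp

# Check Eq. (z-soln): Z = A cos(p.x) + B sin(p.x) + C with a null p
# satisfies box Z = 0 and (dZ)^2 = 0 (Eq. z-constrain). The null vector
# p^mu = (E, E, 0, 0) is an illustrative choice.

t, x, y, z, A, B, C, E = sp.symbols('t x y z A B C E', real=True)
coords = (t, x, y, z)
eta = (1, -1, -1, -1)                  # diagonal inverse Minkowski metric

phase = E * t - E * x                  # p_mu x^mu for p^mu = (E, E, 0, 0)
Z = A * sp.cos(phase) + B * sp.sin(phase) + C

# box Z = eta^{mu nu} d_mu d_nu Z  and  (dZ)^2 = eta^{mu nu} d_mu Z d_nu Z:
box_Z = sum(eta[m] * sp.diff(Z, coords[m], 2) for m in range(4))
dZ_dZ = sum(eta[m] * sp.diff(Z, coords[m]) ** 2 for m in range(4))
```

Both `box_Z` and `dZ_dZ` simplify to zero for arbitrary amplitudes $A$, $B$, $C$, independently of $\lambda$ and $v$, in line with the observation that the wavy kink survives the thin-kink limit.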
Now, it may seem that we have been too restrictive in our ansatz for the scalar field. After all, conventional wisdom has it that the kink spontaneously breaks translation invariance, and so there should be a massless Nambu-Goldstone boson at the effective four-dimensional level. This boson would correspond to translations of the kink. In fact, at first order, this Nambu-Goldstone boson is exactly $Z$: for small $Z$, where we ignore $Z^2$ and higher terms, the auxiliary constraint in Eq.~\eqref{eq:z-constrain} is eliminated. In this approximation, we are left only with the usual massless wave equation describing the behaviour of $Z$, and so the system admits a fully-dynamical scalar mode at the four-dimensional level. One may think that this Nambu-Goldstone zero mode should persist, even if we move outside the regime of the approximation and keep higher order terms for $Z$. We shall show that this is not actually the case, and that the necessary coupling of $Z$ to higher-mass modes constrains its dynamics. What follows is a brief intuitive argument for such behaviour. In the next section we provide a more rigorous mathematical analysis. Consider what happens if one excites the scalar field $\Phi$ to a configuration $\Phi(w-Z)$ where the form of $Z$ consists of multiple frequencies. Obviously, such an excitation does not satisfy the five-dimensional Euler-Lagrange equation~\eqref{eq:phi-el} with our restricted ansatz~\eqref{eq:phi-wavy}. What will happen is that, as the system evolves in time, the KK fields set to zero in our ansatz will be excited. It is important to realise that if we prohibit the excitation of such modes, then we can only have solutions of the form $\Phi(w-Z)$ with $Z$ a single frequency massless plane wave.
Now we are in a position to state one of the main results of this paper: in the infinitely-thin kink limit, such extra modes are infinitely heavy (they are frozen out), and so the zero mode $Z$ is dynamically constrained to a single frequency. As a consequence, from the effective four-dimensional point of view, $Z$ does not have enough degrees of freedom to look anything like a traditional scalar field, and so this Nambu-Goldstone boson is not present in the four-dimensional spectrum. In fact, in the infinitely-thin kink limit, the four-dimensional spectrum contains no propagating degrees of freedom at all. \section{Collective coordinates and the general expansion} \label{sec:modes} In this section we proceed to analyse the full spectrum of modes of the kink, and demonstrate that all the propagating degrees of freedom are frozen out in the thin-kink limit. To do this, we shall expand the five-dimensional field $\Phi$ in a set of complete four-dimensional modes~--- a generalised Fourier transformation, or Kaluza-Klein decomposition. The extra dimension can then be integrated out to obtain a four-dimensional action, giving an equivalent, but alternative, description of the original theory. The appropriate expansion is written as \begin{equation} \label{eq:mode-expansion} \Phi(x^\mu,w) = \phi_c(w) + \sum_i \phi_i(x^\mu) \eta_i(w) \:, \end{equation} where the sum over $i$ includes two discrete modes ($i=0,1$) and an integral over a continuum ($i=q$, where $q\in\mathbb{R}$). The profiles $\eta_i(w)$ form a complete basis (in the sense that \emph{any} physically-acceptable five-dimensional field configuration $\Phi(x^\mu,w)$ can be represented by suitable choice of $\phi_i(x^\mu)$) and are determined by linearising the five-dimensional Euler-Lagrange equation about the kink background; see reference~\cite{George:2006gk} for explicit forms of the basis functions $\eta_i$.
Note that even though the $\eta_i$ were determined after linearising, they still form a complete basis in the exact regime, and can be used for a general expansion with no loss of information. The fields $\phi_i$ are a tower of scalar fields, and serve to faithfully represent, at the four-dimensional level, all degrees of freedom inherent in $\Phi$. The tower consists of a zero-mass mode, followed by a discrete massive mode, followed by a massive continuum. Before using this expansion, we shall discuss a slightly different version of Eq.~\eqref{eq:mode-expansion}, the aforementioned ICCE~\cite{Rajaraman:1982is}. Since any translated version of the background kink $\phi_c$ is just as good as any other, there exists an entire class of basis functions $\eta_i$ which are also translated by an equivalent amount. The first basis function $\eta_0$ is proportional to the first derivative of $\phi_c$ and corresponds to infinitesimal (first order) translations of the static kink profile. The mode $\eta_0$ therefore plays a unique role, and it should perhaps be treated differently from the other $\eta_i$ modes. The ICCE is motivated by this observation, and removes the zero mode from the tower of modes, placing it in a more `obvious' spot: \begin{equation} \label{eq:cc-expansion} \Phi(x^\mu,w) = \phi_c\left(w-Z(x^\mu)\right) + \sum_{i\ne0} \tilde{\phi}_i(x^\mu) \eta_i\left(w-Z(x^\mu)\right) \:. \end{equation} The idea now is that the four-dimensional scalar fields $Z(x^\mu)$ and $\tilde{\phi}_{1,q}(x^\mu)$ can faithfully encode all degrees of freedom of $\Phi$. Note that, in this expansion, $\phi_c$ and $\eta_{1,q}$ have the same form as they do in Eq.~\eqref{eq:mode-expansion}, but now the sum excludes $i=0$. The ICCE has seen numerous applications to problems where continuous symmetries and zero modes are present.
For example, in a perturbative quantum field theory analysis, the zero mode can potentially lead to divergent energy contributions in higher-order terms~\cite{Rajaraman:1982is}. The ICCE allows one to treat the zero mode separately and avoid such difficulties. Although Eq.~\eqref{eq:cc-expansion} looks quite reasonable, it is actually not general enough to expand an arbitrary field $\Phi(x^\mu,w)$. For example, there are no (finite) choices of $Z(x^\mu)$ and $\tilde{\phi}_{1,q}(x^\mu)$ which yield $\Phi(x^\mu,w)=\omega(x^\mu)\eta_0(w)$ for \emph{any} non-zero choice for $\omega(x^\mu)$.\footnote{Note that it is not necessary for the configuration $\omega(x^\mu)\eta_0(w)$ to be a classical solution. It is enough that it exists in the space of all possible configurations. At the level of the action, the field $\Phi$ is, of course, taken to be a variable and this configuration is one possible `value' this variable can take. At the quantum level, the path integral must include this configuration in the domain of functional integration.} If there were, then we could write \begin{equation} \label{eq:cc-proof-1} \omega(x^\mu)\eta_0(w) = \phi_c(w-Z) + \sum_{i\ne0} \tilde{\phi}_i(x^\mu) \eta_i(w-Z) \:. \end{equation} Keep in mind that $Z$ may depend on $x^\mu$; we have just neglected to write this explicitly to keep the equation clear. Now, multiply through by $\eta_0(w-Z)$ and integrate over $w$: \begin{equation} \begin{aligned} \omega(x^\mu) \int \eta_0(w)\eta_0(w-Z) \; dw &= \int \phi_c(w-Z)\eta_0(w-Z) \; dw \\ &\quad+ \sum_{i\ne0} \tilde{\phi}_i(x^\mu) \int \eta_i(w-Z)\eta_0(w-Z) \; dw \:.
\end{aligned} \end{equation} There is the freedom to shift the integrals on the right-hand side by $Z$, and then, because $\eta_0$ is orthogonal to $\phi_c$ and $\eta_{1,q}$, we have \begin{equation} \label{eq:cc-proof-2} \omega(x^\mu) \int \eta_0(w)\eta_0\left(w-Z(x^\mu)\right) \; dw = 0 \:. \end{equation} Since $\eta_0$ is strictly positive (or strictly negative, depending on the normalisation convention), the integral in this equation will always be positive, regardless of the form of $Z(x^\mu)$, and so it must be that $\omega(x^\mu)=0$. (We shall discuss shortly the possibility that $Z$ is infinite.) Hence we have shown that the implicit collective coordinate expansion~\eqref{eq:cc-expansion} cannot faithfully represent all possible configurations of $\Phi$, and so is less general than the mode expansion~\eqref{eq:mode-expansion}. We should make clear what we mean by an expansion being general enough to represent any (physically-acceptable) classical field configuration. In one-dimensional, non-relativistic quantum mechanics, one looks for the eigenfunctions of a time-independent Schr\"odinger equation, and builds a set out of those eigenfunctions which are \emph{bounded at infinity}. Relying on Sturm-Liouville theory, one can make the statement that this set forms a complete set of modes, and any function that is also bounded at infinity can be expanded as a linear combination of the eigenfunctions. It is this idea of completeness that we have in mind throughout the current paper. Our argument above demonstrates that the ICCE is not general enough to represent an arbitrary configuration which is bounded at infinity.
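The positivity of the overlap integral in Eq.~\eqref{eq:cc-proof-2} is easy to illustrate numerically. In the sketch below (ours), we use $\eta_0(w)\propto\phi_c'(w)\propto\operatorname{sech}^2(kw)$ with $k=1$ and unit amplitude for simplicity; the overlap stays strictly positive for every finite shift $Z$ and only decays towards zero as $|Z|$ grows:

```python
import numpy as np

# Numerical illustration of Eq. (cc-proof-2): the zero-mode profile
# eta_0(w) ∝ sech^2(kw) is strictly positive, so the overlap
# ∫ eta_0(w) eta_0(w - Z) dw is positive for every finite shift Z.
# (k = 1 and unit amplitude assumed here.)

def eta0(w):
    return 1.0 / np.cosh(w) ** 2       # zero mode, up to normalisation

w = np.linspace(-40.0, 40.0, 200_001)  # grid wide enough to capture tails
dw = w[1] - w[0]

def overlap(Z):
    return float(np.sum(eta0(w) * eta0(w - Z)) * dw)

shifts = (0.0, 1.0, 5.0, 10.0)
vals = [overlap(Z) for Z in shifts]    # positive, monotonically decreasing
```

At $Z=0$ the integral is $\int \operatorname{sech}^4 w \, dw = 4/3$, and it decreases monotonically with $|Z|$ without ever reaching zero, which is exactly why $\omega(x^\mu)=0$ is forced for finite $Z$.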
In contrast, the mode expansion given by Eq.~\eqref{eq:mode-expansion} is determined from a Schr\"odinger-like equation as the set of eigenfunctions which are bounded at infinity, and so is able to represent a more general, and in fact adequate, class of configurations than the ICCE.\footnote{We have checked explicitly that the expansion~\eqref{eq:mode-expansion} can represent the configuration $\Phi(x^\mu,w)=\omega(x^\mu)\eta_0(w)$ for any choice of $\omega(x^\mu)$. Essentially, there exists a linear combination of the massive modes $\eta_{1,q}(w)$ which cancels the kink configuration $\phi_c(w)$, leaving the zero mode $\eta_0(w)$.} In an attempt to satisfy Eq.~\eqref{eq:cc-proof-2}, one may try to take $Z(x^\mu)\to\infty$, in which case the overlap of the two $\eta_0$ profiles becomes infinitesimally small and the integral vanishes~\cite{Burnier:2009}. If we allow $Z$ to be infinite, our argument above (that the ICCE is not general) breaks down, because multiplying Eq.~\eqref{eq:cc-proof-1} through by $\eta_0(w-Z)$ is essentially multiplying through by zero. To understand what is happening, we must look back at the actual definition of the ICCE, Eq.~\eqref{eq:cc-expansion}, and consider the effect of taking $Z\to\infty$. Mathematically, the $\phi_c(w-Z)$ term becomes a constant ($+v$ or $-v$, depending on whether $Z\to-\infty$ or $Z\to\infty$, respectively), the discrete mode $\eta_1(w-Z)$ vanishes, and the continuum modes $\eta_q(w-Z)$ become plane waves with frequency beginning at zero. In such a limit, the ICCE thus reduces to the standard Fourier transform of sines and cosines. Physically, one has translated the kink away, off to infinity, leaving a homogeneous vacuum (which is of course an allowed solution of the theory). Using the ICCE with $Z$ of infinite magnitude is therefore equivalent to doing a mode decomposition around a homogeneous vacuum background, rather than around the kink background.
Although the underlying five-dimensional theory can be equally-well recast into equivalent four-dimensional forms using either decomposition (around the kink background, or around a homogeneous vacuum), for a given application one re-description will be more convenient than the other. For the application of concern to us here, the kink-background approach is clearly the more convenient. If one were to try to do the analysis using the homogeneous-vacuum mode (standard Fourier) basis, one would first have to understand how to choose the Fourier coefficients to produce the kink background plus excitations. This can be done, of course, but it is an awkward way to proceed. So, with a finite $Z$ the ICCE is not fully general, while for infinite $Z$ one recovers a mode basis that is not convenient for studying kink-related physics. There are certain regimes of analysis where the ICCE is adequate. This includes the case where one restricts oneself to look only at small perturbations of the kink background, as there are no troubles expanding a perturbed kink using the ICCE. The configuration used in the above argument~--- the one which cannot be represented by the ICCE, $\Phi(x^\mu,w)=\omega(x^\mu)\eta_0(w)$~--- is not a small perturbation of the kink since its asymptotic behaviour differs from that of $\phi_c(w)$. In this paper we are interested in determining the full, non-linear behaviour of the kink \emph{exactly}, and must allow for the possibility that the kink background configuration is significantly modified. The ICCE approach is therefore unsuitable. Instead, using the more general expansion~\eqref{eq:mode-expansion} allows us to transform the five-dimensional kink into an equivalent description in terms of four-dimensional scalar modes. 
Analysing these modes is then straightforward because the resulting Lagrangian contains just massive interacting fields, with the usual Klein-Gordon equations of motion (as opposed to the ICCE, which yields difficult-to-interpret derivative couplings). This is due to the proper choice of basis functions $\eta_i$.\footnote{At the very least, regardless of the regime of validity of the ICCE, the mode expansion given by Eq.~\eqref{eq:mode-expansion} is general enough to describe all configurations which remain bounded at infinity, and we can be confident in using it to obtain an equivalent four-dimensional action. If the ICCE is equally valid, it should produce the same results.} Substituting Eq.~\eqref{eq:mode-expansion} in the five-dimensional action~\eqref{eq:phi-act-5d} and integrating out the extra dimension yields the equivalent four-dimensional action: \begin{equation} \mathcal{S}_\Phi = \int d^4x \left[ -\varepsilon_{\phi_c} + \mathcal{L}_\phi \right], \end{equation} where the kinetic, mass and self-coupling terms for the scalar modes are (see~\cite{George:2006gk}) \begin{equation} \begin{aligned} \label{eq:phi-lag-4d} \mathcal{L}_\phi &= \frac{1}{2} \partial^\mu \phi_0 \partial_\mu \phi_0 + \frac{1}{2} \partial^\mu \phi_1 \partial_\mu \phi_1 - \frac{3}{4} v^2 \lambda \phi_1^2 \\ & \quad + \int_{-\infty}^{\infty} dq \left[ \frac{1}{2} \partial^\mu \phi_q^* \partial_\mu \phi_q - \frac{1}{4} (q^2 + 4) v^2 \lambda \phi_q^* \phi_q \right] \\ & \quad - \kappa^{(3)}_{i j k} \phi_i \phi_j \phi_k - \kappa^{(4)}_{i j k l} \phi_i \phi_j \phi_k \phi_l \:. \end{aligned} \end{equation} In the last line here there are implicit sums over discrete modes, and integrals over continuum modes, for each of the indices $i$, $j$, $k$ and $l$.
For these terms, the cubic and quartic coupling coefficients are, respectively, \begin{align} \kappa^{(3)}_{i j k} &= \lambda \int_{-\infty}^\infty \phi_c \eta_i \eta_j \eta_k \; dw \:, \\ \kappa^{(4)}_{i j k l} &= \frac{\lambda}{4} \int_{-\infty}^\infty \eta_i \eta_j \eta_k \eta_l \; dw \:. \end{align} The four-dimensional equivalent theory described by Eq.~\eqref{eq:phi-lag-4d} contains a massless scalar field $\phi_0$, a massive scalar $\phi_1$, and a continuum of massive fields $\phi_q$. There exist cubic and quartic couplings among these fields, and, importantly, a quartic \emph{self-coupling} term for $\phi_0$; this is due to the non-zero value of $\kappa^{(4)}_{0000}$: \begin{equation} \kappa^{(4)}_{0000} = \frac{9}{70} \left(\frac{3\varepsilon}{8}\right)^{1/3} \lambda^{4/3} \:. \end{equation} Determining the Euler-Lagrange equations for each of the fields is a straightforward task. For our purposes, it suffices to examine the two discrete modes: \begin{align} \label{eq:phi0-el} \partial^\mu\partial_\mu \phi_0 + 6 \kappa^{(3)}_{001} \phi_0 \phi_1 + 4 \kappa^{(4)}_{0000} \phi_0^3 + 12 \kappa^{(4)}_{0011} \phi_0 \phi_1^2 \nonumber\\ + \; \text{(terms involving continuum modes)} &= 0 \:,\\ \label{eq:phi1-el} \partial^\mu\partial_\mu \phi_1 + \frac{3}{2} v^2 \lambda \phi_1 + 3 \kappa^{(3)}_{001} \phi_0^2 + 3 \kappa^{(3)}_{111} \phi_1^2 + 12 \kappa^{(4)}_{0011} \phi_0^2 \phi_1 + 4 \kappa^{(4)}_{1111} \phi_1^3 \nonumber\\ + \; \text{(terms involving continuum modes)} &= 0 \:. \end{align} Given this rather neat, and exact, dimensional reduction of the original $\Phi$ model, we can now make a rigorous conclusion regarding the thin-kink limit. In this limit, $v^2 \lambda \to \infty$ and so the mass terms in the Lagrangian, Eq.~\eqref{eq:phi-lag-4d}, become infinitely large.
From the point of view of the Euler-Lagrange equations for $\phi_1$ and $\phi_q$, the mass terms for these fields have an infinite coefficient, and these equations of motion can only be generally satisfied if the associated fields are identically zero. We therefore conclude that, in the infinitely-thin kink limit, the massive modes $\phi_1$ and $\phi_q$ are frozen out. Since $\phi_1$ and the continuum modes must be zero, Eq.~\eqref{eq:phi1-el} reduces to $3\kappa^{(3)}_{001}\phi_0^2=0$, implying that $\phi_0$ must also be zero.\footnote{Analysis of the Euler-Lagrange equations for the continuum modes reveals similar constraints, such as $\kappa^{(3)}_{00q}\phi_0^2=0$ ($q$ corresponding to an odd mode) and $\kappa^{(4)}_{000p}\phi_0^3=0$ ($p$ corresponding to an even mode).} This is the central part of the argument, and supports our earlier claim that $Z$ is constrained due to its coupling to massive, frozen modes. Here, the dynamics dictate that $\phi_0$ must excite $\phi_1$ (if $\phi_1$ begins as zero) and so if $\phi_1$ is forbidden (for example, if it is infinitely heavy), then $\phi_0$ cannot be excited at all. Similar statements can be made regarding the coupling of $\phi_0$ to the massive continuum modes. Furthermore, the quartic coupling of $\phi_0$ to itself also prevents it from being excited: in the thin kink limit, $\kappa^{(4)}_{0000}\to\infty$, and, in order to satisfy Eq.~\eqref{eq:phi0-el}, $\phi_0$ is driven to zero. From a slightly different point of view, consider all fields $\phi_i$ to be identically zero to begin with, and attempt to excite them individually. In the thin kink limit, all of the Euler-Lagrange equations contain potential terms that are infinite if any one of the fields is independently excited. In the equation for $\phi_0$, this term has coefficient $4\kappa^{(4)}_{0000}$, for $\phi_1$ it has $\tfrac{3}{2}v^2\lambda$, and for $\phi_q$ it has $\tfrac{1}{2}(q^2+4)v^2\lambda$. Thus, each field is individually frozen.
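The closed form quoted above for $\kappa^{(4)}_{0000}$ can be cross-checked numerically. The following is a minimal sketch, assuming the standard kink profile $\phi_c(w)=v\tanh(\sqrt{\lambda/2}\,v\,w)$ for the potential $V=(\lambda/4)(\Phi^2-v^2)^2$ and identifying $\varepsilon$ with the wall tension $\int\phi_c'^2\,dw$; both normalisation assumptions are ours, not taken from the text:

```python
import mpmath as mp

# Arbitrary test values; profile phi_c = v*tanh(sqrt(lam/2)*v*w) is an
# assumed normalisation for V = (lam/4)*(Phi^2 - v^2)^2.
lam, v = 2.0, 1.0
m = mp.sqrt(lam / 2) * v
dphi = lambda w: v * m * mp.sech(m * w) ** 2        # phi_c'(w)

# Wall tension eps = int phi_c'^2 dw; normalised zero mode eta_0 = phi_c'/sqrt(eps)
eps = mp.quad(lambda w: dphi(w) ** 2, [-mp.inf, mp.inf])
I4 = mp.quad(lambda w: dphi(w) ** 4, [-mp.inf, mp.inf])
kappa4 = (lam / 4) * I4 / eps ** 2                  # (lam/4) * int eta_0^4 dw

# Closed form quoted in the text
claimed = mp.mpf(9) / 70 * (3 * eps / 8) ** (mp.mpf(1) / 3) * lam ** (mp.mpf(4) / 3)
print(float(kappa4), float(claimed))
```

With these conventions the numerically evaluated coupling agrees with the quoted closed form ($\kappa^{(4)}_{0000}=\tfrac{9\sqrt{2}}{140}\lambda^{3/2}v$ analytically), lending support to the freezing argument above.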
We are essentially arguing that, in the Euler-Lagrange equations for the four-dimensional fields (and also in the four-dimensional action), there are coefficients which become infinite in the thin kink limit, and so the fields that make up such terms must be zero at the solution level. The reader may wonder if there exists some special combination of these fields which conspires to cancel the infinities. This is indeed the case. The special combination of $\phi_0$ and the massive modes that persists in the thin-kink limit is the fixed frequency, wavy kink solution that we found in Section~\ref{sec:frozen}. But this is not a true four-dimensional dynamical field. In reference~\cite{George:2006gk} it is shown that there is no other special combination that manifests as a proper four-dimensional scalar field with a canonical kinetic term. The mode expansion given by Eq.~\eqref{eq:mode-expansion} retains all degrees of freedom of $\Phi$, and our analysis shows that these degrees of freedom are all driven to zero in the thin kink limit. Furthermore, there is no special combination of the modes which yields a field with a proper kinetic term. We have therefore shown that there are no observable dynamics, at the four-dimensional level, of the infinitely-thin kink. It is perhaps best, then, to consider a thin kink as also being a rigid kink; that is, it cannot be perturbed. For a thin kink (but not infinitely thin), one can set up a finite-energy-density configuration $\phi_c(w-Z(x^\mu))$ with arbitrary form for $Z(x^\mu)$. But, if the kink is made thinner, and hence more rigid, the same configuration will have a greater energy cost and will dissipate more rapidly to a wavy kink of fixed frequency (possibly zero frequency: the usual, static kink). For the case of the infinitely-thin kink limit, the initial configuration must \emph{begin} as a fixed frequency wavy kink.
\section{Discussion and conclusion} \label{sec:concl} It must be stressed again that a five-dimensional theory is ruled by the five-dimensional Euler-Lagrange equations. Four-dimensional Euler-Lagrange equations provide an equivalent description only when all five-dimensional fields have been expanded in a full, or general, set of modes. To go to an equivalent four-dimensional theory, one should \emph{not} make a non-general ansatz for the five-dimensional fields, then integrate out the extra dimension. To find a four-dimensional theory which is equivalent to the original five-dimensional one, all degrees of freedom must be kept to begin with, and then the irrelevant ones eliminated at the four-dimensional level. If one does not begin with a general expansion of the five-dimensional fields, then one may miss some important low-energy dynamics, dynamics which influence the behaviour of other low-energy degrees of freedom that have been included. For a concrete example of this statement, consider the non-general expansion $\Phi=\phi_c(w-Z(x^\mu))$. There is nothing wrong with employing such an expression as a solution ansatz, but, since it ignores a great number of degrees of freedom, one must use the five-dimensional Euler-Lagrange equation to determine the behaviour of $Z$. This is what we did in Section~\ref{sec:frozen}, where we found that $Z$ must be a massless plane wave of a fixed frequency. Now, to contrast this method, we substitute the non-general expansion into the original five-dimensional action, integrate out the extra dimension, and obtain the effective four-dimensional action: \begin{align} \mathcal{S} &= \int d^5x \left[ \frac{1}{2} \phi_c'^2(w-Z) \: \partial^\mu Z \partial_\mu Z - \frac{1}{2} \phi_c'^2(w-Z) - V\left(\phi_c(w-Z)\right) \right ] \\ &= \int d^4x \left[ \frac{1}{2} \varepsilon \: \partial^\mu Z \partial_\mu Z - \varepsilon \right] \:.
\end{align} This procedure gives the four-dimensional Euler-Lagrange equation $\partial^\mu\partial_\mu Z=0$, which is not correct, as it is missing the auxiliary constraint that fixes $Z$ to a single frequency. The first method we used is the correct method, as the solution respects the full five-dimensional theory. Reduction to lower dimensions can only proceed if one uses a full, general mode expansion. In light of this argument, there is no sense in using the ICCE to redescribe a five-dimensional theory as a completely equivalent four-dimensional theory. As we have shown, the ICCE is not general, and, in going to a four-dimensional description, one will potentially miss out on degrees of freedom which are pertinent to the low-energy dynamics. Having said this, the ICCE is useful in certain contexts; for example, where one is only interested in expanding a model up to a given order in perturbation theory. This is actually the case for the discussions in Rajaraman~\cite{Rajaraman:1982is}, where modes are quantised around a classical ground state (like the kink), and perturbation theory is used to analyse the quantum excitations. As pointed out in Section~\ref{sec:frozen}, if one works in the regime where $Z$ is small and $Z^2\sim0$, the auxiliary constraint $\partial^\mu Z \partial_\mu Z=0$ is automatically satisfied at this order, and one is allowed a fully dynamical field $Z(x^\mu)$ at the four-dimensional level. Physically, this means that $Z$ is so small that it does not excite the higher-mass modes. As a relevant aside, we shall make some brief comments regarding fundamental branes. Such branes may originate from string theory, and are modelled by an effective action~--- the Nambu-Goto action~--- which treats them as infinitely thin, delta-distribution sources. These branes are assumed to support a proper translation zero mode which couples only through derivative terms to other fields (via the metric).
We can accept this behaviour by understanding that branes modelled by the Nambu-Goto action are flexible, even though they are infinitely thin. Their degree of flexibility is dictated by their tension, which is equivalent to their energy density. In contrast, modelling a brane by a thin domain-wall kink solution yields different effective four-dimensional dynamics; the domain wall does not exhibit a dynamical zero mode, as it is extremely rigid. Our conclusion that the ICCE is not a general expansion may have an impact on previous work that relied on this method. For example, Burnier and Zuleta~\cite{Burnier:2008ke} compared fundamental branes and kinks using the ICCE for two scalar fields (the kink and an additional coupled scalar). The low-energy expansion of the domain-wall model was compared with the low-energy expansion of the Nambu-Goto action. Their use of the ICCE to describe \emph{perturbations} of the kink is well justified, but it is not clear to us that their conclusions would remain unchanged using the more general mode expansion given by Eq.~\eqref{eq:mode-expansion}. Another interesting analysis to revisit is the one in which gravity is included~\cite{Shaposhnikov:2005hc}. Here, the zero mode of the kink mixes with gravitational degrees of freedom. It would be valuable to understand which effect dominates: particle physics modifications to the zero mode due to its quartic self-coupling, or the mixing with gravity. In conclusion, we have established two facts that have been overlooked in the study of domain walls and kinks. First, that the zero mode of translation is almost completely frozen out in the thin-kink limit. The only remnant is a four-dimensional entity which must assume a single frequency, yielding a wavy kink solution. This entity does not manifest as a proper mode in the effective four-dimensional theory; almost all degrees of freedom are frozen out.
Second, that the implicit collective coordinate expansion is not completely general. It should only be used with caution and in certain approximations. \section*{Acknowledgements} We thank Y.~Burnier and K.~Zuleta for useful correspondence. DPG was supported in part by the Puzey Bequest to the University of Melbourne. RRV was supported in part by the Australian Research Council.
\section{Introduction} \label{sec:intro} Certain high-energy astrophysical sources are characterised by luminous and rapid flares of energetic radiation. In particular, these include blazars \citep[e.g.,][]{Aharonian_2007,Albert_2007,Aleksi__2011,Abdo_2011,10.1093/mnras/sts711,Ackermann_2016}, and the Crab pulsar wind nebula \citep{Tavani736,Abdo739,Buehler_2012,10.1111/j.1365-2966.2012.22097.x,Mayer_2013,Striani_2013}. In these extreme astrophysical environments, magnetic fields may dominate even the local rest-mass energy density. Magnetic reconnection is considered a leading explanation for the efficient particle acceleration behind the dramatic gamma-ray flares of blazars \citep{10.1111/j.1745-3933.2009.00635.x,10.1111/j.1365-2966.2010.18140.x,10.1111/j.1365-2966.2012.21721.x,10.1093/mnras/stt167,10.1093/mnras/stv641,10.1093/mnras/stw1832}. Through changes of the magnetic line topology, particles are accelerated in the current sheets, converting magnetic energy into kinetic and thermal energy. In the case of the Crab pulsar wind nebula, the $\gamma$-ray radiation spectral peaks can surpass the classical synchrotron radiation reaction limit ($\sim 160\;{\rm MeV}$), which suggests a very efficient localised dissipation of magnetic energy that allows for rapid particle acceleration \citep{Uzdensky_2011,10.1111/j.1365-2966.2011.18516.x,Arons_2012,10.1111/j.1365-2966.2012.21349.x,10.1093/mnras/sts214,B_hler_2014,Zrake_2016,Zrake_2017,lyutikov_komissarov_sironi_porth_2018}. 
Numerical simulations based on the kinetic particle-in-cell (PIC) algorithm have demonstrated that relativistic reconnection in collisionless plasma is an efficient mechanism of magnetic energy dissipation and particle acceleration \citep{Zenitani_2001,doi:10.1063/1.1644814,Zenitani_2007,Lyubarsky_2008,doi:10.1063/1.3589304,Bessho_2012,Kagan_2013,Sironi_2014,PhysRevLett.113.155005,refId0,Guo_2015,Guo_2016,Werner_2016,Werner_2017,10.1093/mnras/stx2530,10.1093/mnras/sty2702,Petropoulou_2019,Guo_2019,guo2020magnetic}, and that it can produce extreme radiative signatures --- energetic, highly anisotropic and rapidly variable \citep{Cerutti_2012,0004-637X-770-2-147,Cerutti_2014,Kagan_2016,nalewajko2018relativistic,10.1093/mnras/sty2636,10.1093/mnras/staa2346,Comisso_2020,10.1093/mnras/staa1899}. Most of these simulations were initiated from relativistic Harris-type current layers \citep{Kirk_2003}. An alternative class of magnetostatic equilibria known as the `Arnold-Beltrami-Childress' (ABC) magnetic fields \citep{Arnold_1965} has been recently applied as an initial configuration for investigating relativistic magnetic dissipation \citep{PhysRevLett.115.095002}. This configuration involves no kinetically thin current sheets, but is unstable to the so-called coalescence modes that lead to localised interactions of magnetic domains of opposite polarities, emergence of dynamical current layers, instantaneous particle acceleration, and production of rapid flares of high-energy radiation. The overall process has been dubbed \emph{magnetoluminescence} -- a generic term for efficient and fast conversion of magnetic energy into radiation \citep{Blandford_2017}. Numerical simulations of ABC fields have been performed with relativistic magnetohydrodynamics (MHD) and relativistic force-free (FF) algorithms \citep{PhysRevLett.115.095002}. Detailed comparison between 2D and 3D ABC fields in the FF framework has been performed by \cite{Zrake_East_2016}. 
PIC simulations of 2D ABC fields have been reported by \cite{0004-637X-826-2-115} with the focus on the structure of current layers and particle acceleration, by \cite{Yuan_2016} including synchrotron radiation reaction and radiative signatures, and by \cite{nalewajko_yuan_chruslinska_2018} including synchrotron and Inverse Compton (IC) radiation. ABC fields have also been investigated in great detail (including PIC simulations) by \cite{lyutikov_sironi_komissarov_porth_2017,Lyutikov2017,lyutikov_komissarov_sironi_porth_2018} with application to the Crab Nebula flares. The first three-dimensional PIC simulations of ABC fields have been reported in \cite{10.1093/mnras/sty2549}. The previous works have established the following picture. ABC fields simulated in periodic numerical grids are unstable to the coalescence instability provided there exists a state of equal total magnetic helicity and lower total magnetic energy \citep{PhysRevLett.115.095002}. The growth time scale of the linear coalescence instability is a fraction of the light crossing time scale that depends on the mean magnetisation (or equivalently on the typical Alfven velocity) \citep{0004-637X-826-2-115}. The magnetic dissipation efficiency is determined primarily by the global magnetic field topology, and it is restricted in 2D systems due to the existence of additional topological invariants \citep{Zrake_East_2016}. The dissipated magnetic energy is transferred to the particles, resulting in non-thermal high-energy tails of their energy distributions. In most cases these tails can be described as power laws with a well-defined power-law index, but more generally they can be characterised by the non-thermal number and energy fractions \citep{0004-637X-826-2-115}. With increasing initial magnetisation, the non-thermal tails become harder, containing higher number and energy fractions, similar to the results on Harris-layer reconnection \citep{Sironi_2014,PhysRevLett.113.155005,Werner_2016}.
A limitation of the ABC fields in comparison with the Harris layers is that, for a given simulation size, the achievable initial magnetisation is restricted by the minimum particle density required to sustain the volumetric currents. The particle acceleration mechanisms of ABC fields, described in more detail in \cite{0004-637X-826-2-115,Yuan_2016,lyutikov_sironi_komissarov_porth_2017}, show similarities to other numerical approaches to the problem of relativistic magnetic dissipation. During the linear stage of the coalescence instability, kinetically thin current layers form and evolve very dynamically. The few particles that happen to stray into one of those layers are accelerated by direct non-ideal reconnection electric fields ($\bm{E}\cdot\bm{B} \ne 0$, $|\bm{E}| > |\bm{B}|$). This is essentially the \cite{Zenitani_2001} picture of a magnetic X-point, which is also important in large-scale simulations of Harris-layer reconnection in the sense that particles that pass through a magnetic X-point are most likely to eventually reach top energies \citep{Sironi_2014,Guo_2019}. The non-linear stage of the coalescence instability features slowly damped electric oscillations that are gradually converted into particle energy. This can affect essentially all particles, as electric oscillations cross the entire simulation volume multiple times. Particles accelerated during the linear stage now propagate on wide orbits and can interact with electric perturbations at random angles. This is reminiscent of a Fermi process, in particular of the kind envisioned by \cite{2012PhRvL.108m5003H}. With a larger number of magnetic domains, the coalescence proceeds in multiple stages, with the successive current layers increasingly less regular. The system becomes chaotic more quickly and begins to resemble a decaying turbulence of the kind studied by \cite{Comisso_2019}.
As the previous PIC simulations of ABC fields were largely limited to the lowest unstable mode, in this work we present the results of a new series of 2D PIC simulations of ABC fields for different coherence lengths $\lambda_0$ in order to understand how they affect the efficiency of magnetic dissipation and particle acceleration. Although the coalescence instability is rather fast, it is followed by slowly damped non-linear oscillations, hence our simulations are run for at least $25 L/c$ light crossing times for the system size $L$ to allow these oscillations to settle. Our simulations were performed at three different sizes; in addition, we investigated the effects of numerical resolution and local particle anisotropy in order to break the relation between the effective wavenumber and the mean initial magnetisation. We also compare our results with new 3D simulations following the setup described in \cite{10.1093/mnras/sty2549}. In Section \ref{sec:setup} we define the initial configuration of our simulations. Our results are presented in Section \ref{sec:res}, including spatial distributions of magnetic fields (Section \ref{sec:mag}), evolution of the total energy components (Section \ref{sec:evo}), conservation accuracy of the magnetic helicity (Section \ref{sec:heli}), and particle energy distributions (Section \ref{sec:acc}). Discussion is provided in Section \ref{sec:disc}. \section{Simulation setup} \label{sec:setup} We perform a series of PIC simulations using the {\tt Zeltron} code\footnote{\url{http://benoit.cerutti.free.fr/Zeltron/}} \citep{0004-637X-770-2-147} of 2D periodic magnetic equilibria known as ABC fields \citep{PhysRevLett.115.095002}. As opposed to the Harris layers, these initial configurations do not contain kinetically thin current layers. In 2D, there are two ways to implement ABC fields on a periodic grid, which we call diagonal or parallel, referring to the orientation of the separatrices between individual magnetic domains.
The \emph{diagonal} ABC field is defined as: \begin{eqnarray} B_x(x,y) &=& B_0\sin(2\pi y/\lambda_0)\,, \\ B_y(x,y) &=& B_0\cos(2\pi x/\lambda_0)\,, \\ B_z(x,y) &=& B_0\left[\sin(2\pi x/\lambda_0)+\cos(2\pi y/\lambda_0)\right]\,, \end{eqnarray} where $\lambda_0$ is the coherence length. The \emph{parallel} ABC field can be obtained from the diagonal one through rotation by $45^\circ$ and increasing the effective wavenumber by factor $\sqrt{2}$: \begin{eqnarray} B_x(x,y) &=& B_0\left[\sin(\sqrt{2}\pi(x+y)/\lambda_0)+\sin(\sqrt{2}\pi(x-y)/\lambda_0)\right]/\sqrt{2}\,, \\ B_y(x,y) &=& B_0\left[\sin(\sqrt{2}\pi(x-y)/\lambda_0)-\sin(\sqrt{2}\pi(x+y)/\lambda_0)\right]/\sqrt{2}\,, \\ B_z(x,y) &=& B_0\left[\cos(\sqrt{2}\pi(x+y)/\lambda_0)-\cos(\sqrt{2}\pi(x-y)/\lambda_0)\right]\,. \end{eqnarray} With this, both the diagonal and parallel configurations satisfy the Beltrami condition $\nabla\times\bm{B} = -(2\pi/\lambda_0)\bm{B}$. In all cases, the mean squared magnetic field strength is $\left<B^2\right> = 2B_0^2$ and the maximum magnetic field strength is $B_{\rm max} = 2B_0$. These magnetic fields are maintained in an initial equilibrium by volumetric current densities $\bm{j}(\bm{x}) = -(c/2\lambda_0)\bm{B}(\bm{x})$ provided by locally anisotropic particle distribution \citep[for details, see][]{0004-637X-826-2-115,10.1093/mnras/sty2549}. ABC fields are characterised by vanishing divergence of the electromagnetic stress tensor $\partial_iT_{\rm EM}^{ij} = 0$ (equivalent to the vanishing $\bm{j}\times\bm{B}$ force), which implies uniform gas pressure that can be realised with uniform temperature $T$ and uniform gas density $n$. We chose the initial particle energy distribution to be Maxwell-J\"{u}ttner distribution of relativistic temperature $\Theta = kT/mc^2 = 1$, hence the mean particle energy is $\left<\gamma\right> \simeq 3.37$, and the mean particle velocity is $\left<\beta\right> \simeq 0.906$. 
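The Beltrami condition and the quoted field-strength moments can be verified symbolically. A minimal sketch with sympy (the symbol names are ours; this is not part of the simulation code):

```python
import sympy as sp

x, y, B0, lam0 = sp.symbols('x y B_0 lambda_0', positive=True)
k = 2 * sp.pi / lam0

# Diagonal ABC field as defined above
Bx = B0 * sp.sin(k * y)
By = B0 * sp.cos(k * x)
Bz = B0 * (sp.sin(k * x) + sp.cos(k * y))

# curl B for fields independent of z
curl = (sp.diff(Bz, y), -sp.diff(Bz, x), sp.diff(By, x) - sp.diff(Bx, y))

# Beltrami condition: curl B = -(2 pi / lambda_0) B
assert all(sp.simplify(c + k * b) == 0 for c, b in zip(curl, (Bx, By, Bz)))

# Mean squared field strength over one periodicity cell: <B^2> = 2 B_0^2
B2 = Bx**2 + By**2 + Bz**2
mean_B2 = sp.integrate(sp.integrate(B2, (x, 0, lam0)), (y, 0, lam0)) / lam0**2
assert sp.simplify(mean_B2 - 2 * B0**2) == 0
```

The same checks pass for the parallel configuration, which is the diagonal one rotated by $45^\circ$.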
The gas density (including both the electrons and positrons) is given by: \begin{equation} n = \frac{3B_0}{2e \tilde{a}_1 \left<\beta\right> \lambda_0} \end{equation} where $\tilde{a}_1 \le 1/2$ is a constant that normalises the dipole moment of the local particle distribution. We chose $\tilde{a}_1 = 1/4$ as a standard value, but we investigate the effect of reduced local particle anisotropy with lower values of $\tilde{a}_1$ that result in higher particle densities and lower magnetisation values. The initial kinetic energy density is: \begin{equation} u_{\rm kin,ini} = \left<\gamma\right>nm_{\rm e}c^2 \simeq \frac{6\pi\left<\gamma\right>}{\tilde{a}_1\Theta\left<\beta\right>} \left(\frac{\rho_0}{\lambda_0}\right) \left<u_{\rm B,ini}\right>\,, \end{equation} where $\rho_0 = \Theta m_{\rm e} c^2/(eB_0)$ is the nominal gyroradius, and $\left<u_{\rm B,ini}\right> = B_0^2/4\pi$ is the initial mean magnetic energy density. The initial mean hot magnetisation is given by: \begin{equation} \left<\sigma_{\rm ini}\right> = \frac{\left<B^2\right>}{4\pi w} = \frac{\tilde{a}_1\Theta\left<\beta\right>}{3\pi (\left<\gamma\right>+\Theta)} \left(\frac{\lambda_0}{\rho_0}\right)\,, \label{eq_sigma_ini} \end{equation} where $w = (\left<\gamma\right>+\Theta)nm_{\rm e}c^2$ is the relativistic enthalpy density. For $\Theta = 1$, we have $\left<\sigma_{\rm ini}\right> \simeq (4\tilde{a}_1)(\lambda_0/182\rho_0)$. We performed simulations of either diagonal or parallel ABC fields for different wavenumbers $k$ ($k = L/\lambda_0$ for diagonal configuration and $k = L/\sqrt{2}\lambda_0$ for parallel configuration). For instance, a simulation labelled {\tt diag\_k2} is initiated with a diagonal ABC field with $L/\lambda_0 = 2$. In order to verify the scaling of our results, we performed a series of simulations for three sizes of numerical grids: small (s) for $N_x = N_y = 1728$, medium (m) for $N_x = N_y = 3456$, and large (l) for $N_x = N_y = 6912$.
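Equation (\ref{eq_sigma_ini}) and the quoted approximation $\left<\sigma_{\rm ini}\right> \simeq (4\tilde{a}_1)(\lambda_0/182\rho_0)$ can be cross-checked numerically, together with one of the tabulated magnetisation values; a minimal sketch (the helper function is ours):

```python
import math

# Parameters quoted in the text
Theta = 1.0
gamma_mean, beta_mean = 3.37, 0.906    # Maxwell-Juttner moments for Theta = 1
a1 = 0.25                              # standard anisotropy normalisation tilde{a}_1

def sigma_ini(lam0_over_rho0, a1=a1):
    """Initial mean hot magnetisation, Eq. (eq_sigma_ini)."""
    return a1 * Theta * beta_mean / (3 * math.pi * (gamma_mean + Theta)) * lam0_over_rho0

# The quoted approximation: for a1 = 1/4, sigma ~ lambda_0 / (182 rho_0)
print(1 / sigma_ini(1.0))                           # ~182

# Cross-check against Table 1: medium grid para_k2, N_x = 3456, dx = rho_0/2.4
L_over_rho0 = 3456 / 2.4
lam0_over_rho0 = L_over_rho0 / (2 * math.sqrt(2))   # parallel configuration, k = 2
print(round(sigma_ini(lam0_over_rho0), 1))          # ~2.8, matching the tabulated value
```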
For the numerical resolution $\Delta x = \Delta y = L/N_x$, where $L$ is the physical system size, we chose a standard value of $\Delta x = \rho_0/2.4$, but we investigated the effect of increased resolution on the medium numerical grid. The numerical time step was chosen as $\Delta t = 0.99(\Delta x/\sqrt{2}c)$. All of our simulations were performed for at least $25 L/c$ light crossing times. In each case we used 128 macroparticles (including both species) per cell. We also performed two new 3D simulations for the cases {\tt diag\_k2} and {\tt diag\_k4}, following the configuration described in \cite{10.1093/mnras/sty2549}, but extending them to $25 L/c$. In this case we chose the following parameter values: $N_x = N_y = N_z = 1152$, $\Delta x = \Delta y = \Delta z = \rho_0/1.28$, $\tilde{a}_1 = 0.2$, and 16 macroparticles per cell. \section{Results} \label{sec:res} The key parameters of our large simulations are listed in Table \ref{tab_res}, where we report basic results describing global energy transformations that will be discussed in Section \ref{sec:evo}, and particle energy distributions that will be discussed in Section \ref{sec:acc}. \begin{table*} \caption{Global parameters of energy conversion and particle acceleration compared for the 2D and 3D simulations. The initial values denoted with subscript \emph{ini} are measured at $t=0$, and the final values (\emph{fin}) are averaged over $20 \le ct/L \le 25$. The initial mean hot magnetisation $\left<\sigma_{\rm ini}\right>$ is computed from Eq. (\ref{eq_sigma_ini}). The initial magnetic energies $\mathcal{E}_{\rm B,ini}$ are normalised to the total system energy $\mathcal{E}_{\rm tot}$. The magnetic dissipation efficiency is defined as $\epsilon_{\rm diss} = 1-\mathcal{E}_{\rm B,fin}/\mathcal{E}_{\rm B,ini}$. We report the peak value $\tau_{\rm E,peak}$ of the linear growth time scale $\tau_E$ of electric energy, which scales like $\mathcal{E}_{\rm E} \propto \exp(ct/L\tau_E)$.
For the final particle energy distributions, we report: the power law index $p$, the maximum Lorentz factor $\gamma_{\rm max}$, and the non-thermal particle energy fraction $f_E$.} \centering \begin{tabular}{lrllrccclrc} \hline\hline config & $\displaystyle\frac{L}{\lambda_0}$ & $\tilde{a}_1$ & $\displaystyle\frac{\rho_0}{\Delta x}$ & $\left<\sigma_{\rm ini}\right>$ & $\mathcal{E}_{\rm B,ini}$ & $\epsilon_{\rm diss,fin}$ & $\tau_{\rm E,peak}$ & $p$ & $\gamma_{\rm max}$ & $f_E$ \\ \hline \multicolumn{11}{l}{2D small, $N_x = 1728$} \\ para\_k1 & $ \sqrt{2}$ & 1/4 & 2.4 & 2.8 & 0.65 & 0.26 & 0.25 & 3.1 & 450 & 0.18 \\ para\_k2 & $ 2\sqrt{2}$ & 1/4 & 2.4 & 1.4 & 0.48 & 0.52 & 0.17 & 3.75 & 190 & 0.16 \\ para\_k4 & $ 4\sqrt{2}$ & 1/4 & 2.4 & 0.7 & 0.31 & 0.59 & 0.14 & 4.8 & 60 & 0.07 \\ para\_k8 & $ 8\sqrt{2}$ & 1/4 & 2.4 & 0.4 & 0.19 & 0.65 & 0.17 & --- & 30 & 0.02 \\ \hline \multicolumn{11}{l}{2D medium, $N_x = 3456$} \\ para\_k1 & $ \sqrt{2}$ & 1/4 & 2.4 & 5.6 & 0.78 & 0.27 & 0.21 & 2.85 & 870 & 0.31 \\ diag\_k2 & 2 & 1/4 & 2.4 & 4.0 & 0.72 & 0.44 & 0.16 & 2.95 & 620 & 0.34 \\ para\_k2 & $2\sqrt{2}$ & 1/4 & 2.4 & 2.8 & 0.65 & 0.53 & 0.13 & 3.2 & 590 & 0.28 \\ diag\_k4 & 4 & 1/4 & 2.4 & 2.0 & 0.56 & 0.57 & 0.11 & 3.65 & 270 & 0.20 \\ para\_k4 & $4\sqrt{2}$ & 1/4 & 2.4 & 1.4 & 0.48 & 0.59 & 0.09 & 3.8 & 190 & 0.15 \\ diag\_k8 & 8 & 1/4 & 2.4 & 1.0 & 0.39 & 0.60 & 0.08 & 4.2 & 100 & 0.10 \\ para\_k8 & $8\sqrt{2}$ & 1/4 & 2.4 & 0.7 & 0.31 & 0.61 & 0.08 & 4.8 & 60 & 0.06 \vspace{1em} \\ para\_k1 & $ \sqrt{2}$ & 1/8 & 2.4 & 2.8 & 0.64 & 0.26 & 0.27 & 3.35 & 320 & 0.18 \\ para\_k1 & $ \sqrt{2}$ & 1/16 & 2.4 & 1.4 & 0.48 & 0.26 & 0.40 & 4.5 & 80 & 0.07 \\ para\_k1 & $ \sqrt{2}$ & 1/32 & 2.4 & 0.7 & 0.31 & 0.25 & 0.68 & --- & 30 & 0.02 \\ para\_k2 & $ 2\sqrt{2}$ & 1/8 & 2.4 & 1.4 & 0.48 & 0.51 & 0.19 & 4.2 & 150 & 0.13 \\ para\_k2 & $ 2\sqrt{2}$ & 1/16 & 2.4 & 0.7 & 0.31 & 0.48 & 0.34 & --- & 50 & 0.04 \\ para\_k4 & $ 4\sqrt{2}$ & 1/8 & 2.4 & 0.7 & 0.31 & 0.57 & 0.17 & 5.2 & 
60 & 0.05 \\ para\_k4 & $ 4\sqrt{2}$ & 1/16 & 2.4 & 0.4 & 0.19 & 0.54 & 0.46 & --- & 30 & 0.01 \\ para\_k8 & $ 8\sqrt{2}$ & 1/8 & 2.4 & 0.4 & 0.19 & 0.58 & 0.20 & --- & 30 & 0.01 \vspace{1em} \\ para\_k1 & $ \sqrt{2}$ & 1/4 & 4.8 & 2.8 & 0.64 & 0.26 & 0.25 & 3.2 & 410 & 0.18 \\ para\_k1 & $ \sqrt{2}$ & 1/4 & 9.6 & 1.4 & 0.48 & 0.26 & 0.32 & 3.8 & 160 & 0.10 \\ para\_k1 & $ \sqrt{2}$ & 1/4 & 19.2 & 0.7 & 0.31 & 0.27 & 0.44 & 5.8 & 40 & 0.05 \\ para\_k2 & $ 2\sqrt{2}$ & 1/4 & 4.8 & 1.4 & 0.48 & 0.52 & 0.17 & 3.75 & 200 & 0.17 \\ para\_k2 & $ 2\sqrt{2}$ & 1/4 & 9.6 & 0.7 & 0.31 & 0.52 & 0.30 & --- & 50 & 0.10 \\ para\_k4 & $ 4\sqrt{2}$ & 1/4 & 4.8 & 0.7 & 0.31 & 0.58 & 0.13 & 4.75 & 60 & 0.09 \\ para\_k4 & $ 4\sqrt{2}$ & 1/4 & 9.6 & 0.4 & 0.19 & 0.63 & 0.24 & --- & 30 & 0.05 \\ para\_k8 & $ 8\sqrt{2}$ & 1/4 & 4.8 & 0.4 & 0.19 & 0.64 & 0.15 & --- & 30 & 0.03 \\ \hline \multicolumn{11}{l}{2D large, $N_x = 6912$} \\ para\_k1 & $ \sqrt{2}$ & 1/4 & 2.4 & 11.2 & 0.88 & 0.26 & 0.18 & 2.4 & 1490 & 0.56 \\ para\_k2 & $ 2\sqrt{2}$ & 1/4 & 2.4 & 5.6 & 0.78 & 0.53 & 0.10 & 2.95 & 1620 & 0.40 \\ para\_k4 & $ 4\sqrt{2}$ & 1/4 & 2.4 & 2.8 & 0.65 & 0.60 & 0.07 & 3.3 & 510 & 0.26 \\ para\_k8 & $ 8\sqrt{2}$ & 1/4 & 2.4 & 1.4 & 0.48 & 0.61 & 0.05 & 3.85 & 170 & 0.12 \\ \hline \multicolumn{3}{l}{3D, $N_x = 1152$} \\ diag\_k2 & 2 & 1/5 & 1.28 & 3.6 & 0.71 & 0.50 & 0.22 & 3.2 & 180 & 0.25 \\ diag\_k4 & 4 & 1/5 & 1.28 & 1.8 & 0.54 & 0.75 & 0.17 & 4.0 & 110 & 0.10 \\ \hline \hline \end{tabular} \label{tab_res} \end{table*} \subsection{Spatial distribution of magnetic fields} \label{sec:mag} Fig. \ref{fig_Bz_maps} compares the initial ($ct/L = 0$), intermediate ($ct/L \simeq 4$) and final ($ct/L \simeq 25$) configurations of the out-of-plane magnetic field component $B_z$. The initial configurations have the form of periodic grids of $B_z$ minima (blue) and maxima (red). 
The case {\tt diag\_k1} is the only one that represents a stable equilibrium, as it involves only one minimum and one maximum of $B_z$. The case {\tt para\_k1} (investigated in detail in \citealt{0004-637X-826-2-115,Yuan_2016}) begins with two minima and two maxima of $B_z$, by $ct/L \simeq 4$ it is just entering the linear instability stage, and the final state appears very similar to the case {\tt diag\_k1}, although the domains of positive and negative $B_z$ are still slightly perturbed. As we increase $L/\lambda_0$ through the case of {\tt para\_k4}, the intermediate states become more evolved, at further stages of magnetic domain coalescence, while the final states in all cases consist of single positive and negative $B_z$ domains. We notice that these domains become separated by increasingly broad bands of $B_z \simeq 0$. \begin{figure*} \includegraphics[width=\textwidth]{Bz_maps_multi2} \caption{Spatial distributions of the out-of-plane magnetic field component $B_z$ for ABC fields of different initial topologies. Each column of panels compares the initial configuration at $ct/L = 0$ (top) with an intermediate state at $ct/L \simeq 4$ (middle), and with the final state at $ct/L \simeq 25$ (bottom).} \label{fig_Bz_maps} \end{figure*} \subsection{Total energy transformations} \label{sec:evo} The initial configurations investigated here involve various levels of magnetic energy $\mathcal{E}_{\rm B,ini}$ as fractions of the total energy $\mathcal{E}_{\rm tot}$. The initial magnetic energy fraction decreases with increasing $L/\lambda_0$ and increases with the system size. Our simulations probe the range of $\mathcal{E}_{\rm B,ini}/\mathcal{E}_{\rm tot}$ values from 0.19 to 0.88. Related to the initial magnetic energy fraction is the initial mean hot magnetisation $\left<\sigma_{\rm ini}\right>$ (see Eq. \ref{eq_sigma_ini}), which in our simulations takes values from 0.35 to 11.2.
Time evolutions of the magnetic energy fractions are presented in the left panel of Figure \ref{fig_evo_mag}. In all studied cases, the magnetic energy experiences a sudden decrease followed by a slow settling. As the settling is largely complete by $t = 20L/c$, we measure the final magnetic energy fraction $\mathcal{E}_{\rm B,fin}$ as the average over the $20 < ct/L < 25$ period. We define the final magnetic dissipation efficiency as $\epsilon_{\rm diss,fin} = 1 - \mathcal{E}_{\rm B,fin}/\mathcal{E}_{\rm B,ini}$. The right panel of Figure \ref{fig_evo_mag} shows that $\epsilon_{\rm diss,fin}$ is a function of magnetic topology parameter $L/\lambda_0$, almost independent of the system size $L$ (although it is slightly lower for reduced values of $\tilde{a}_1$). For large values of $L/\lambda_0$, magnetic dissipation efficiency appears to saturate at the level of $\epsilon_{\rm diss} \sim 0.6$. We have fitted the large and medium 2D results for the standard values of $\tilde{a}_1$ and $\rho_0/\Delta x$ with a relation $\epsilon_{\rm diss} = \epsilon_0 - \epsilon_2(\lambda_0/L)^2$, finding $\epsilon_0 \simeq 0.62$ and $\epsilon_2 \simeq 0.70$. \begin{figure*} \includegraphics[width=0.497\textwidth]{evo_ene_mag} \includegraphics[width=0.497\textwidth]{table_epsdiss_keff} \caption{\emph{Left panel:} time evolution of the magnetic energy $\mathcal{E}_{\rm B}$ as fraction of the total energy $\mathcal{E}_{\rm tot}$ for the medium (thin solid lines) and large (thick solid lines) simulation sizes. The thick dashed lines indicate two 3D simulations. The line colour indicates the effective wavenumber $L/\lambda_0$, as shown in the right panel. \emph{Right panel:} final magnetic dissipation efficiency $\epsilon_{\rm diss,fin} = 1 - \mathcal{E}_{\rm B,fin} / \mathcal{E}_{\rm B,ini}$ (evaluated at $20 < ct/L < 25$) as function of the effective wavenumber of initial magnetic configuration $L/\lambda_0$. 
The large/medium/small circles indicate new results obtained from large/medium/small simulations, the `+' symbols indicate simulations for non-standard values of $\tilde{a}_1$, the `x' symbols indicate simulations for non-standard values of $\rho_0/\Delta x$, and the stars indicate 3D simulations. The symbol colours indicate the effective wavenumber $L/\lambda_0$. The black dashed line shows a $1-\lambda_0/L$ relation predicted by the relaxation theorem of \cite{PhysRevLett.33.1139} and matching the 3D results, and the magenta dashed line shows a $0.62 - 0.70(\lambda_0/L)^2$ relation fitted to the 2D results.} \label{fig_evo_mag} \end{figure*} Also shown in Figure \ref{fig_evo_mag} are analogous results for two 3D simulations. These results are consistent with a relation $\epsilon_{\rm diss} = 1 - \lambda_0/L$ predicted by the relaxation theorem of \cite{PhysRevLett.33.1139}. The initial sudden decrease of the magnetic energy is mediated by rapid growth of the electric energy. Time evolutions of the electric energy $\mathcal{E}_{\rm E}$ as a fraction of the initial magnetic energy $\mathcal{E}_{\rm B,ini}$ are presented in the left panel of Figure \ref{fig_evo_ele}. In all studied cases we find an episode of rapid exponential growth of the electric energy, an indication of the linear instability known as the coalescence instability \citep{PhysRevLett.115.095002}. We indicate the moments of minimum electric-energy growth time scale $\tau_{\rm E,peak}$ (defined by $\mathcal{E}_{\rm E} \propto \exp[ct/(L\tau_{\rm E})]$). The right panel of Figure \ref{fig_evo_ele} compares the values of $\tau_{\rm E,peak}$, multiplied by $L/\lambda_0$, as a function of the initial mean magnetisation $\left<\sigma_{\rm ini}\right>$.
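For reference, extracting $\tau_{\rm E,peak}$ from a stored $\mathcal{E}_{\rm E}(t)$ series amounts to finding the steepest log-linear slope during the growth phase. The sketch below illustrates the idea in Python; the synthetic data and the sliding-window size are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def min_growth_timescale(t, E_E, window=5):
    """Estimate the minimum growth time scale tau_E,peak (in units of L/c)
    from a time series E_E(ct/L), assuming E_E ~ exp[ct/(L tau_E)] during
    the linear stage: the steepest local slope of log E_E gives 1/tau_E."""
    logE = np.log(np.asarray(E_E, dtype=float))
    t = np.asarray(t, dtype=float)
    best_slope = -np.inf
    for i in range(len(t) - window):
        # local log-linear fit over a short sliding window
        slope = np.polyfit(t[i:i + window], logE[i:i + window], 1)[0]
        best_slope = max(best_slope, slope)
    return 1.0 / best_slope

# Synthetic exponential-growth phase with tau_E = 0.25 (hypothetical value)
t = np.linspace(0.0, 2.0, 201)
E = 1e-6 * np.exp(t / 0.25)
print(round(min_growth_timescale(t, E), 3))  # ~0.25
```

On real simulation output the electric energy is noisy before and saturates after the linear stage, which is why only the steepest window, rather than a global fit, is used.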
Combining our 2D results with the previous simulations for the case {\tt para\_k1} reported in \cite{0004-637X-826-2-115}, the relation between $\tau_{\rm E,peak}$ and $\left<\sigma_{\rm ini}\right>$ for the standard values of $\tilde{a}_1$ and $\rho_0/\Delta x$ has been fitted as: \begin{equation} \tau_{\rm E,peak} \simeq \frac{0.233\pm 0.005}{(L/\lambda_0)\beta_{\rm A,ini}^3}\,, \label{eq_tauE_fit} \end{equation} where $\beta_{\rm A,ini} = [\left<\sigma_{\rm ini}\right>/(1+\left<\sigma_{\rm ini}\right>)]^{1/2}$ is the characteristic value of the initial Alfv\'en velocity. The four 3D simulations (including two new full runs and two shorter runs from \citealt{10.1093/mnras/sty2549}) show longer growth time scales compared with their 2D counterparts, with the cases {\tt para\_k4} being strongly affected by the noise component of the electric field. \begin{figure*} \includegraphics[width=0.497\textwidth]{evo_ene_ele} \includegraphics[width=0.497\textwidth]{table_tauE_keff} \caption{\emph{Left panel:} time evolution of the electric energy $\mathcal{E}_{\rm E}$ as a fraction of the initial magnetic energy $\mathcal{E}_{\rm B,ini}$. The line types are the same as in the left panel of Figure \ref{fig_evo_mag}. Moments of minimum growth time scale are indicated with the filled symbols. \emph{Right panel:} minimum growth time scales for the total electric energy $\tau_{\rm E}$ as a function of the initial mean magnetisation $\left<\sigma_{\rm ini}\right>$. The symbol types are the same as in the right panel of Figure \ref{fig_evo_mag}; in addition, the blue diamonds indicate the {\tt para\_k1} simulations from \cite{0004-637X-826-2-115}, and the original shorter 3D runs from \cite{10.1093/mnras/sty2549} are also indicated. The black dashed line shows a $\beta_{\rm A}^{-3}$ trend (see Eq. \ref{eq_tauE_fit}) fitted to all 2D results. The blue dashed line shows a different trend (see Eq.
\ref{eq_tauE_KN16}) suggested previously by \cite{0004-637X-826-2-115}.} \label{fig_evo_ele} \end{figure*} \subsection{Conservation of total energy and magnetic helicity} \label{sec:heli} Figure \ref{fig_cons} shows the conservation accuracy for the total system energy $\mathcal{E}_{\rm tot}$ and the total magnetic helicity $\mathcal{H} = \int H\,{\rm d}V$ (where $H = \bm{A}\cdot\bm{B}$ with $\bm{A}$ the magnetic vector potential). The conservation accuracy for parameter $X$ is defined as $\delta_X \equiv \max|X(ct<25L)/X(t=0)-1|$. The conservation accuracy of total energy $\delta_\mathcal{E}$ is presented as a function of the modified magnetisation parameter $\sigma_\mathcal{E} \equiv \left<\sigma_{\rm ini}\right> (2.4\Delta x/\rho_0)^{-3/4} (L/2880\rho_0)^{-3/4}$. For $1 < \sigma_\mathcal{E} < 6$ (essentially for $L/\lambda_0 \gtrsim 2\sqrt{2}$), the energy conservation accuracy scales like $\delta_\mathcal{E} \propto \sigma_\mathcal{E}^{-5/2} \propto \left<\sigma_{\rm ini}\right>^{-5/2} (\Delta x/\rho_0)^{15/8 \simeq 2} (L/\rho_0)^{15/8 \simeq 2}$, reaching the value of $\simeq 0.02$ for $\sigma_\mathcal{E} \simeq 1$. For $\sigma_\mathcal{E} > 6$, the energy conservation accuracy is found to be of the order of $\delta_\mathcal{E} \sim 3\times 10^{-4}$. In the 3D cases, energy conservation is found to be worse by a factor of $\simeq 30$ compared with the 2D results for the same value of $\sigma_\mathcal{E}$.
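The accuracy measure $\delta_X \equiv \max|X(ct<25L)/X(t=0)-1|$ defined above is straightforward to evaluate on a stored diagnostic series. A minimal sketch follows; the time series and its drift amplitude are invented for illustration, not taken from the simulation output.

```python
import numpy as np

def conservation_accuracy(X, t, t_max=25.0):
    """delta_X = max |X(t)/X(0) - 1| over t < t_max, with t in units of L/c."""
    X = np.asarray(X, dtype=float)
    mask = np.asarray(t, dtype=float) < t_max
    return float(np.max(np.abs(X[mask] / X[0] - 1.0)))

# Synthetic total-energy series with a 0.3 per cent oscillatory drift
t = np.linspace(0.0, 25.0, 251)
X = 1.0 + 0.003 * np.sin(0.5 * t)
print(conservation_accuracy(X, t))  # close to 0.003
```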
\begin{figure} \includegraphics[width=0.495\columnwidth]{table_cons_ene_sigma} \includegraphics[width=0.495\columnwidth]{table_cons_heli_sigma} \caption{Conservation accuracies of total energy $\delta_\mathcal{E}$ (\emph{left panel}) and total magnetic helicity $\delta_\mathcal{H}$ (\emph{right panel}) as functions of modified magnetisation parameters $\sigma_\mathcal{E}$ and $\sigma_\mathcal{H}$, respectively, chosen to minimise scatter around the suggested trends (dashed lines; $\sigma_\mathcal{E}^{-5/2}$ and $\sigma_\mathcal{H}^{-2}$, respectively), see the main text for details. The symbol types are the same as in the right panel of Figure \ref{fig_evo_mag}.} \label{fig_cons} \end{figure} The conservation accuracy of total magnetic helicity $\delta_\mathcal{H}$ is presented as a function of a different modified magnetisation parameter $\sigma_\mathcal{H} \equiv \left<\sigma_{\rm ini}\right> / (4\tilde{a}_1) \simeq \lambda_0/182\rho_0$ (the latter assuming $\Theta=1$). For $\sigma_\mathcal{H} < 2.5$, the magnetic helicity conservation accuracy scales like $\delta_\mathcal{H} \propto \sigma_\mathcal{H}^{-2} \propto (\lambda_0/\rho_0)^{-2}$, reaching the value of $\simeq 0.1$ for $\sigma_\mathcal{H} \simeq 0.4$. For $\sigma_\mathcal{H} > 2.5$ (essentially for $L/\lambda_0 \lesssim 2$), we find that simulations with reduced values of $\tilde{a}_1$ appear to follow the same trend; however, the large and medium simulations with the standard $\tilde{a}_1$ value show worse conservation, of the order of $\delta_\mathcal{H} \sim 3\times 10^{-3}$. In the 3D cases, magnetic helicity conservation is found to be worse by a factor of $\sim 12$ compared with the 2D results for the same value of $\sigma_\mathcal{H}$.
\subsection{Particle energy distributions} \label{sec:acc} Figure \ref{fig_spec} shows the particle momentum distributions $N(u)$ (closely related to the energy distributions for $u = \sqrt{\gamma^2-1} \gg 1$) for the final states of the medium and large 2D simulations, as well as the 3D simulations (averaged over the time range of $20 < ct/L < 25$). In the non-evolving case {\tt diag\_k1}, the distribution remains the initial Maxwell-J\"{u}ttner distribution. A high-energy excess is evident in all other cases. \begin{figure} \center \includegraphics[width=0.7\columnwidth]{spec_ele} \caption{Momentum distributions $u^2 N(u)$ of electrons and positrons averaged over the time period $20 < ct/L < 25$. The line types are the same as in the left panel of Figure \ref{fig_evo_mag}. } \label{fig_spec} \end{figure} There are several ways to characterise this excess component. In most cases, a power-law section can be clearly identified. Accurate evaluation of the corresponding power-law index $p$ (such that $N(u) \propto u^{-p}$) is in general complicated, as it requires fitting analytical functions that properly represent the high-energy cutoff \citep{Werner_2016}. Here, in order to avoid those complications, we estimate the power-law index using a compensation method: we multiply the measured distribution by $u^p$ for different values of $p$ and select the one that yields the broadest and most balanced plateau section. The accuracy of this method is estimated at $\pm 0.05$. The best values of $p$ estimated for our simulations are reported in Table \ref{tab_res}. No power-law sections could be identified for certain cases with low initial magnetisations $\left<\sigma_{\rm ini}\right> < 1$. The hardest spectrum with $p \simeq 2.4$ has been found for the large simulation {\tt para\_k1}.
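The compensation method described above can be sketched as follows. The flatness criterion used here (relative standard deviation of the compensated distribution over the plateau range) is an assumed stand-in for the visual plateau-balance judgement, and the data are synthetic:

```python
import numpy as np

def compensated_index(u, N, p_grid):
    """Pick the index p that makes u^p * N(u) flattest, using the relative
    standard deviation of the compensated distribution as the balance
    criterion (an assumed proxy for the by-eye plateau judgement)."""
    best_p, best_spread = None, np.inf
    for p in p_grid:
        comp = u**p * N
        spread = np.std(comp) / np.mean(comp)
        if spread < best_spread:
            best_p, best_spread = p, spread
    return best_p

# Synthetic pure power law N(u) ~ u^{-2.4} over an assumed plateau range
u = np.logspace(1.0, 2.0, 50)
N = u**-2.4
print(compensated_index(u, N, np.arange(2.0, 3.01, 0.05)))  # ~2.4
```

In practice the plateau range would first have to be restricted to momenta between the thermal peak and the high-energy cutoff.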
A similar spectrum with $p \simeq 2.45$ (reexamined with the same method) has been obtained in previous simulations for the case {\tt para\_k1} reported in \cite{0004-637X-826-2-115} and characterised by a slightly higher initial magnetisation of $\left<\sigma_{\rm ini}\right> = 12.4$. The left panel of Figure \ref{fig_spec_stat} shows the power-law index $p$ as a function of the initial magnetic energy fraction $\mathcal{E}_{\rm B,ini}/\mathcal{E}_{\rm tot}$. The value of $p$ is strongly anti-correlated with $\mathcal{E}_{\rm B,ini}/\mathcal{E}_{\rm tot}$, independent of the simulation size, with a Pearson correlation coefficient of $\simeq -0.98$. A linear trend has been fitted to the results of 2D simulations with standard values of $\tilde{a}_1$ and $\rho_0/\Delta x$, including the previous {\tt para\_k1} simulations from \cite{0004-637X-826-2-115}: \begin{equation} p \simeq (-3.9 \pm 0.2) \frac{\mathcal{E}_{\rm B,ini}}{\mathcal{E}_{\rm tot}} + (5.8 \pm 0.1) \,. \label{eq_p_fit} \end{equation} Also shown are the results for two 3D simulations, whose particle distributions are slightly steeper than those of 2D simulations with comparable initial magnetic energy fractions. \begin{figure*} \includegraphics[width=0.497\textwidth]{table_plindex_EBini} \includegraphics[width=0.497\textwidth]{table_fe_sigma} \caption{\emph{Left panel:} power-law index $p$ of the momentum distribution $N(u) \propto u^{-p}$ as a function of the initial magnetic energy fraction $\mathcal{E}_{\rm B,ini}/\mathcal{E}_{\rm tot}$. The black dashed line shows a linear trend fitted to all 2D results. \emph{Right panel:} non-thermal energy fraction $f_E$ as a function of a modified magnetisation parameter $\sigma_f$. The dashed lines indicate two trends: $\propto \sigma_f^{3/4}$ (blue) and $\propto \sigma_f^{2}$ (brown).
For both panels, the symbol types are the same as in the right panel of Figure \ref{fig_evo_ele}.} \label{fig_spec_stat} \end{figure*} The high-momentum excess component of the particle distribution can be alternatively characterised by the maximum particle energy reached, $\gamma_{\rm max}$. Here, the value of $\gamma_{\rm max}$ is evaluated at the fixed level of $10^{-3}$ of the $u^2 N(u)$ distribution normalised to peak at unity (cf. the bottom edge of Figure \ref{fig_spec}). The final values of $\gamma_{\rm max}$ for our large simulations are reported in Table \ref{tab_res}. The highest value of $\gamma_{\rm max} \simeq 1620$ has been found for the large simulation {\tt para\_k2}. For the cases where the power-law index $p$ could be evaluated (note that $\gamma_{\rm max}$ can always be evaluated), $\log\gamma_{\rm max}$ is strongly anti-correlated with $p$, with a Pearson correlation coefficient of $\simeq -0.99$. Yet another approach to the high-momentum excess is to fit and subtract a low-momentum Maxwell-J\"{u}ttner component and to calculate the non-thermal fractions of particle number $f_n$ and particle energy $f_E$ contained in the remaining excess. This fitting was performed using the weighted least squares method with the weights proportional to $u^{-2}$. In all cases, the non-thermal number fractions were found to be closely related to the energy fractions as $f_n \simeq f_E/3.5$. The values of non-thermal energy fractions $f_E$ for our simulations are reported in Table \ref{tab_res}. The highest value of $f_E \simeq 56\%$ has been found for the large simulation {\tt diag\_k1}. For the cases where $p$ could be evaluated, $f_E$ is anti-correlated with $p$, with a Pearson correlation coefficient of $\simeq -0.93$. The right panel of Figure \ref{fig_spec_stat} shows the non-thermal energy fraction $f_E$ vs. another modified magnetisation parameter $\sigma_f \equiv \left<\sigma_{\rm ini}\right> (4\tilde{a}_1)^{1/2}$.
We also indicate the $f_E \propto \left<\sigma_{\rm ini}\right>^{3/4}$ trend suggested by \cite{0004-637X-826-2-115} and re-fitted only to the {\tt para\_k1} results (deep blue symbols). We confirm that this trend describes the {\tt para\_k1} results reasonably well; however, it is not followed by the high-$(L/\lambda_0)$ cases that probe lower magnetisation values $\sigma_f < 1$. In the particular case of $L/\lambda_0 = 8\sqrt{2}$ (brown symbols), the values of $f_E$ decrease faster with decreasing $\sigma_f$, roughly like $f_E \propto \sigma_f^{2}$ for $\sigma_f < 1$. For intermediate magnetisation values $1 < \sigma_f < 10$, the values of $f_E$ for $L/\lambda_0 > \sqrt{2}$ are systematically higher compared with the {\tt para\_k1} trend line. The 3D simulations produced $f_E$ values that are consistent with (in the case {\tt diag\_k2}) or somewhat lower than (in the case {\tt diag\_k4}) the 2D results. We use the final non-thermal energy fractions $f_E$ to divide the global energy gain of the particles into the non-thermal and thermal parts: \begin{eqnarray} \Delta\mathcal{E}_{\rm nth} &=& f_E\,\mathcal{E}_{\rm kin,fin}\,, \\ \Delta\mathcal{E}_{\rm th} &=& (1-f_E)\mathcal{E}_{\rm kin,fin} - \mathcal{E}_{\rm kin,ini}\,, \end{eqnarray} where $\mathcal{E}_{\rm kin,ini} = \mathcal{E}_{\rm tot} - \mathcal{E}_{\rm B,ini}$ and $\mathcal{E}_{\rm kin,fin} \simeq \mathcal{E}_{\rm tot} - \mathcal{E}_{\rm B,fin}$, since by $ct = 25L$ the total electric energy that mediates the dissipation of magnetic energy decreases to the level of $\mathcal{E}_{\rm E,fin} < 10^{-2}\mathcal{E}_{\rm tot}$.
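The two equations above translate directly into code; a minimal sketch with hypothetical input values (not taken from Table \ref{tab_res}):

```python
def energy_gain_split(E_tot, E_B_ini, E_B_fin, f_E):
    """Split the particle energy gain into thermal and non-thermal parts:
    dE_nth = f_E * E_kin,fin ;  dE_th = (1 - f_E) * E_kin,fin - E_kin,ini,
    with E_kin,ini = E_tot - E_B,ini and E_kin,fin ~= E_tot - E_B,fin
    (the final electric energy is neglected, E_E,fin < 1e-2 E_tot)."""
    E_kin_ini = E_tot - E_B_ini
    E_kin_fin = E_tot - E_B_fin
    dE_nth = f_E * E_kin_fin
    dE_th = (1.0 - f_E) * E_kin_fin - E_kin_ini
    return dE_th, dE_nth

# Hypothetical numbers: E_B,ini = 0.8 E_tot, eps_diss = 0.6, f_E = 0.4
dE_th, dE_nth = energy_gain_split(1.0, 0.8, 0.8 * (1.0 - 0.6), 0.4)
print(dE_th, dE_nth)  # ~0.208 and ~0.272 (in units of E_tot)
```

Note that, by construction, $\Delta\mathcal{E}_{\rm th} + \Delta\mathcal{E}_{\rm nth}$ equals the dissipated magnetic energy $\epsilon_{\rm diss}\,\mathcal{E}_{\rm B,ini}$.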
The two components of particle energy gain are presented in Figure \ref{fig_deltaEk} as functions of two further modified magnetisation parameters, $\sigma_{\rm th} \equiv \left<\sigma_{\rm ini}\right> (4\tilde{a}_1)^{-1/2} (L/\lambda_0)^{3/4} (2.4\Delta x/\rho_0)^{3/4}$ and $\sigma_{\rm nth} \equiv \left<\sigma_{\rm ini}\right> (4\tilde{a}_1)^{1/2} (L/\lambda_0)^{-1/4} (2.4\Delta x/\rho_0)^{-1/4}$, respectively. We find that the cases of {\tt para\_k1} (deep blue symbols) stand out from the other cases, having significantly lower thermal energy gains, which suggests that they are limited by the magnetic topology. On the other hand, their non-thermal energy gains are comparable to the other cases, but achieved at significantly higher values of $\sigma_{\rm nth}$. Power-law trends can be suggested only for sufficiently high wavenumbers ($L/\lambda_0 \gtrsim 4\sqrt{2}$): $\Delta\mathcal{E}_{\rm th} \propto \sigma_{\rm th}^{1/3}$ and $\Delta\mathcal{E}_{\rm nth} \propto \sigma_{\rm nth}$, respectively. However, in the {\tt diag\_k8} cases (brown symbols), a steeper trend for the non-thermal energy gain, $\Delta\mathcal{E}_{\rm nth} \propto \sigma_{\rm nth}^{5/2}$, is apparent for low magnetisation values $\sigma_{\rm nth} < 0.25$. The highest value of $\Delta\mathcal{E}_{\rm nth} / \mathcal{E}_{\rm tot} \simeq 25\%$ is obtained for our large simulation {\tt diag\_k2}.
\begin{figure} \includegraphics[width=0.495\columnwidth]{table_dEth_sigma} \includegraphics[width=0.495\columnwidth]{table_dEnth_sigma} \caption{Global gain of the particle energy divided into thermal $\Delta\mathcal{E}_{\rm th}$ (\emph{left panel}) and non-thermal $\Delta\mathcal{E}_{\rm nth}$ (\emph{right panel}) components, normalised to the total energy $\mathcal{E}_{\rm tot}$, as functions of modified magnetisation parameters $\sigma_{\rm th}$ and $\sigma_{\rm nth}$, respectively, chosen to minimise scatter around suggested trends (dashed black lines) $\propto\sigma_{\rm th}^{1/3}$ and $\propto\sigma_{\rm nth}$, respectively. The symbol types are the same as in the right panel of Figure \ref{fig_evo_ele}.} \label{fig_deltaEk} \end{figure} \section{Discussion} \label{sec:disc} Our new results extend the previous study of 2D PIC simulations of ABC fields for the {\tt para\_k1} case in the non-radiative regime \citep{0004-637X-826-2-115}, and connect it with a study of 3D PIC simulations for the cases {\tt diag\_k2} and {\tt diag\_k4} \citep{10.1093/mnras/sty2549}. They can also be compared with the FF simulations of ABC fields presented in \cite{Zrake_East_2016}. In particular, the magnetic dissipation efficiency in the FF limit in 2D has been estimated at $\epsilon_{\rm diss} \simeq 70\%$, while our results suggest $\epsilon_{\rm diss} \simeq 62\%$ in the limit of $L/\lambda_0 \gg 1$. It should be noted, however, that in PIC simulations this limit forces us towards lower magnetisation values. 
In \cite{0004-637X-826-2-115}, a relation between the electric energy growth time scale $\tau_{\rm E,peak}$ and the initial characteristic hot magnetisation $\sigma_{\rm hot}$ was suggested in the following form: \begin{equation} \tau_{\rm E,peak} \simeq \frac{0.13}{v_{\rm A}(0.21\sigma_{\rm hot})} \,, \label{eq_tauE_KN16} \end{equation} where $v_{\rm A}(\sigma) \equiv [\sigma/(1+\sigma)]^{1/2}$ was treated as a function in the form of an Alfv\'en velocity with an arbitrarily rescaled argument $\sigma$, and $\sigma_{\rm hot} \equiv \left<\sigma_{\rm ini}\right>/2$ was a characteristic value of hot magnetisation based on $B_0^2$ instead of the mean value $\left<B^2\right>$ used here\footnote{We note that the characteristic values of $\sigma_{\rm hot}$ reported in \cite{0004-637X-826-2-115} were underestimated by a constant factor of $\simeq 1.13$.}. The above relation is shown in the right panel of Figure \ref{fig_evo_ele} with a dashed blue line (cf. Figure 3 of \citealt{0004-637X-826-2-115}). We can see that the previously suggested trend agrees very well with the previous measurements from \cite{0004-637X-826-2-115}, and is very close to the new trend line in the range of $1.5 < \left<\sigma_{\rm ini}\right> < 12.5$. However, the previous trend predicts significantly shorter growth time scales for the low magnetisation values $\left<\sigma_{\rm ini}\right> < 1$ that are probed here with simulations for $L/\lambda_0 \ge 4\sqrt{2}$. Our new scaling described by Eq. (\ref{eq_tauE_fit}) is more natural, without arbitrary scaling parameters. It suggests that in the FF limit, when $\left<\sigma_{\rm ini}\right> \to \infty$ and $\beta_{\rm A,ini} \to 1$, the growth time scale should become $\tau_{\rm E,FF} \simeq 0.233/(L/\lambda_0)$. For $L/\lambda_0 = \sqrt{2}$, this would yield $\tau_{\rm E,FF} \simeq 0.16$, somewhat longer than the $\tau_{\rm E,FF} \simeq 0.13$ indicated by \cite{0004-637X-826-2-115}.
Explaining why $\tau_{\rm E,peak} (L/\lambda_0)$ should scale as $\beta_{\rm A,ini}^{-3}$ requires a theoretical investigation of the linear coalescence instability beyond the FF limit, with proper treatment of magnetic nulls, which is beyond the scope of this work. We can only partially confirm a relation between the non-thermal energy fraction and the initial mean hot magnetisation, $f_E \propto \left<\sigma_{\rm ini}\right>^{3/4}$, originally suggested in \cite{0004-637X-826-2-115}. This relation appears to hold for the {\tt para\_k1} case, including new simulations extending into the $\left<\sigma_{\rm ini}\right> \sim 1$ regime, and possibly also for higher values of $L/\lambda_0$ as long as $\sigma_f > 1$ (see the right panel of Figure \ref{fig_spec_stat}). However, for the cases where a power-law index $p$ can be determined, a simple linear relation holds between $p$ and the initial magnetic energy fraction $\mathcal{E}_{\rm B,ini} / \mathcal{E}_{\rm tot}$ (see Eq. \ref{eq_p_fit}), at least over the studied range of $0.3 < \mathcal{E}_{\rm B,ini} / \mathcal{E}_{\rm tot} < 0.9$ (see the left panel of Figure \ref{fig_spec_stat}). We have introduced several \emph{modified magnetisation} parameters, as combinations of the initial mean magnetisation $\left<\sigma_{\rm ini}\right>$ with other input parameters, in order to describe the scalings of global output parameters. The particular formulae for the modified magnetisations were chosen in order to minimise scatter around the suggested trends, with the exponents of $\Delta x/\rho_0$, $L/\rho_0$, $L/\lambda_0$ and $\tilde{a}_1$ estimated empirically with an accuracy of $\sim \pm 1/4$. The energy conservation accuracy for ABC fields simulated with the {\tt Zeltron} code is found to scale roughly like $\delta_\mathcal{E} \propto \left<\sigma_{\rm ini}\right>^{-5/2} (\Delta x/\rho_0)^2 (L/\rho_0)^2$, and is not sensitive to $\lambda_0$.
This is different from the reference case of a uniform magnetic field, in which we found $\delta_\mathcal{E} \propto \left<\sigma_{\rm ini}\right>^{-1} (\Delta x/\rho_0)^2$, independent of $L$. On the other hand, the magnetic helicity conservation accuracy is found to scale like $\delta_\mathcal{H} \propto (\lambda_0/\rho_0)^{-2}$, but it is not sensitive to $\Delta x/\rho_0$ or $L/\rho_0$. This is in contrast to the force-free simulations of \cite{Zrake_East_2016}, in which $\delta_\mathcal{H} \propto (\Delta x)^{2.8}$. Further investigation is required in order to explain these differences. For the non-thermal energy fraction $f_{\rm E}$, the scaling with the initial mean magnetisation $\left<\sigma_{\rm ini}\right>$ is rather ambiguous. Only in the special case of $L/\lambda_0 = \sqrt{2}$ do we have a sufficient range of $\left<\sigma_{\rm ini}\right>$ values to claim that $f_{\rm E} \propto \left<\sigma_{\rm ini}\right>^{3/4}$; this scaling is improved by an additional dependence on the particle anisotropy level $\tilde{a}_1$. The scalings of the thermal and non-thermal kinetic energy gains, $\Delta\mathcal{E}_{\rm th}$ and $\Delta\mathcal{E}_{\rm nth}$, respectively, can in principle be derived from the scalings of $f_{\rm E}$ and the magnetic dissipation efficiency $\epsilon_{\rm diss}$. The ambiguity of the $f_{\rm E}$ scaling makes it difficult to predict in detail the scalings of $\Delta\mathcal{E}_{\rm th}$ and $\Delta\mathcal{E}_{\rm nth}$. The initial mean hot magnetisation $\left<\sigma_{\rm ini}\right>$ of ABC fields with relativistically warm plasma ($\Theta = 1$) is strongly limited by the simulation size, especially if one would like to resolve numerically all the fundamental length scales, in particular the nominal gyroradius $\rho_0$.
For a given effective wavenumber $L/\lambda_0$, higher values of $\left<\sigma_{\rm ini}\right>$ can only be reached by increasing the system size $L/\rho_0$\footnote{One can achieve a somewhat higher $\left<\sigma_{\rm ini}\right>$ by increasing the local particle anisotropy parameter $\tilde{a}_1$. However, some numerical artefacts are observed for $\tilde{a}_1 \simeq 1/2$.}. It can be expected that larger simulations would show more effective non-thermal particle acceleration, with harder high-energy tails indicated by higher values of the non-thermal energy fractions $f_E$ and lower values of the power-law indices $p$. Eventually, at sufficiently high $\left<\sigma_{\rm ini}\right>$, and with $L/\lambda_0 \ge 2$, it should be possible to achieve particle distributions dominated energetically by the high-energy particles, with $p < 2$, as has been demonstrated in the case of Harris-layer reconnection \citep{Sironi_2014,PhysRevLett.113.155005,Werner_2016,10.1093/mnras/sty452}. What remains unclear, though, is the level of thermal energy gains. Our results show that the case {\tt para\_k1}, characterised by the lowest unstable effective wavenumber $L/\lambda_0 = \sqrt{2}$ and studied in detail by \cite{0004-637X-826-2-115} and \cite{Yuan_2016}, has a limited efficiency of both thermal and non-thermal particle acceleration, which is related to the limited magnetic dissipation efficiency. On the other hand, 2D ABC fields with high $L/\lambda_0$ values, although also limited by topological constraints \citep{Zrake_East_2016}, can be used as a model for kinetic investigations of decaying relativistic magnetised turbulence, an alternative to uncorrelated magnetic fluctuations \citep{PhysRevLett.121.255101,Comisso_2019,Comisso_2020}. Relativistic magnetised turbulence has also been investigated extensively by means of PIC simulations in the driven mode \citep{PhysRevLett.118.055103,10.1093/mnras/stx2883,Zhdankin_2018,PhysRevLett.122.055101,10.1093/mnras/staa284,Wong_2020}.
\vskip 1em These results are based on numerical simulations performed at the supercomputer {\tt Prometheus} located at the Academic Computer Centre `Cyfronet' of the AGH University of Science and Technology in Krakow, Poland (PLGrid grants {\tt plgpic20,ehtsim}); and at the computing cluster {\tt Chuck} located at the Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences in Warsaw, Poland. QC and KN were supported by the Polish National Science Center grant 2015/18/E/ST9/00580. BM acknowledges support from DOE through the LDRD program at LANL and NASA Astrophysics Theory Program. \bibliographystyle{jpp}
\section*{Abstract} I discuss three proposed experiments that could in principle locate the boundary between the classical and quantum worlds, as well as distinguish the Hamiltonian theory presented in the first paper of this series from the spontaneous-collapse theories. \section{Introduction} Ninety years post-quantum-revolution, we still do not know where, or what, is the Infamous Boundary. (This phrase, coined by John Bell, \cite{bell}, was suggested by William Faris for the title of a book of ours that appeared in 1995, \cite{tib}.) Of course, many suggestions have been made on this topic, but most do not offer specific predictions that could be tested in the laboratory. Here I contrast the theory of paper I in a series, \cite{wick}, with the spontaneous-collapse (SC) theories, each of which does make predictions, and discuss some proposed experiments. The first SC theory was presented by G. C. Ghirardi, A. Rimini, and T. Weber in 1986, \cite{grw}. In their proposal, the fundamental paradox of quantum theory referred to as the Measurement Problem---namely, that in measurement scenarios two or more wave packets representing different outcomes separate and move in different directions, with no implied result---was resolved by postulating a mechanism of random, spontaneous collapses. These happen rapidly for a large (``macroscopic'') system but rarely on the atomic level, thus preserving the successes of QM at that scale while explaining the adequacy of classical physics at the larger scale. Variants have been proposed over the years, for instance continuous-collapse (CC), in which a continuous stochastic process substitutes for the jumps in the wavefunction, see e.g., \cite{cc}. (CC replaces Schr{\"o}dinger's\ deterministic equation by a ``stochastic differential equation'' of the ``Brownian motion plus drift'' variety, but for the wavefunction rather than a particle.
The drift drives the wavefunction to lower spatial dispersion, while the ``Brownian'' part somewhat opposes it. The magnitude of the random part is the square-root of the drift. I am familiar with this kind of model from mathematical biology, where it is sometimes useful as a simplified, continuum description for a discrete population, \cite{war}. Schr{\"o}dingerists\ of course reject any interpretation of the wavefunction as an approximation to an underlying discrete system.) The collapse theories can be differentiated from the Hamiltonian theory of paper I by several characteristics. The latter (a) preserves energy exactly (trajectory-by-trajectory); (b) preserves the norm exactly; and (c) is exactly time reversible. (In other words, it has all the familiar features of physicists' theories from decades and centuries past.) The former do not enjoy these properties. Both SC and CC theories have at least two free parameters (governing rate and extent of collapses). The Hamiltonian theory has one (the coupling constant between the linear and nonlinear parts, denoted by ``$w$'' in previous papers) and possibly another (the magnitude of a random part of the wavefunction, see paper II). However, in paper III it was remarked that, as is typical of high-dimensional, nonlinear deterministic theories, the evolution may be chaotic. (But only an instability in measurement situations was demonstrated there.) If so, a parameterized model of a random part may be superfluous (for roulette, no one bothers to make a detailed model of the croupier's hand). There are claimed theorems asserting that any deterministic, nonlinear quantum theory must violate the no-action-at-a-distance rule of relativity (for a review see \cite{cc}), and therefore is unacceptable.
But, as for von Neumann's ``god is throwing dice'' theorem of 1932, \cite{vN}, the argument is not decisive.\footnote{When Einstein was asked about the dice-playing god theorem, he pointed at von Neumann's Axiom II about summation of expected values and asked, ``Why should we believe in that''? Axiom II was unique to linear theories and ignored the impact of the measuring apparatus, see Bell's collected work, \cite{bell}, paper 1, or for the anecdote, \cite{wick}, p. 286.} Assuming von Neumann's axioms for observables, which he derived from linear theory, at the outset renders the logic circular. Schr{\"o}dingerists\ need not agree that $\hbox{tr}\,\rho\, A$ for any self-adjoint operator $A$ represents an observable absent an explicit device description, nor that all observables are of that type. (For example, in a nonlinear Hamiltonian theory the energy is not a von Neumann observable, as it is not quadratic in $\psi$.) The coupled Dirac and Maxwell equations, with the Dirac charge current serving as the source terms for Maxwell's, provide an example of a relativistically-invariant, nonlinear quantum theory\footnote{It can be argued that Dirac's theory was not relativistically invariant, due to the nonunitary matrices implementing Lorentz transformations. The cure may require introducing an indefinite-metric Hilbert space, which spoils the Copenhagenist statistical interpretation of the wavefunction but Schr{\"o}dingerists\ might not object. See paper I and references therein.}, see \cite{barut}. For a suggested example in the present context, see paper I, section 4. Concerning the IB: for collapse models, it is purely a function of scale, while for a Hamiltonian model it is a function of both scale and energy. That provides us with the experimental desideratum making distinguishing tests possible.
\section{An ``ideal'' experiment} Consider a system of size conjectured to be near the IB, and subject to an external potential of the ``double-well'' shape, see Figure \ref{vfig} (reproduced from paper III). Initially the object should be confined (in dispersion) to a narrow band centered at the location of the central, unstable-equilibrium point (``hill'') in the potential. In order to generate cats (if on the quantum side of the boundary), we can utilize one of two scenarios: \begin{quote} (a) Couple the ``macrosystem'' to a microsystem, say one ``spin'' (``qubit''), initially in a superposition of, say, spin-up and spin-down with equal weights. In linear QM, wave packets describing the entangled state should separate, with the interpretations ``spin up and needle went up'' and the same with down replacing up. (b) Cool the system down to its groundstate. If it can be considered as a single quantum particle, it should oscillate in both wells simultaneously. \end{quote} The goal is to observe cats or the absence thereof. The modern definition of ``cat'' is a macroscopic (or at least directly observable) system whose dispersion is larger than its physical size. (See paper I for how these quantities are defined by a wavefunction.) Figure \ref{dispfig}, derived using the small-scale (``9 qubit'') model from paper III, shows the dispersion as a function of the hill-height for the linear (w=0) and nonlinear (w=2.2) cases. The important element to note in this figure is the elbow or ``hockey stick'' shape in the nonlinear case. This results from the energy barrier to forming cats: below the threshold the external potential cannot supply the required energy, while cats become possible above it. (It may be surprising---it was to this author---that in the linear case the dispersion decreases with the height parameter. For the explanation, see the Computational Appendix.)
\begin{figure} \rotatebox{0}{\resizebox{5in}{5in}{\includegraphics{Vplot.pdf}}} \caption{``Double-well" external potential plotted vs. some ``spatial" degree of freedom.}\label{vfig} \end{figure} \begin{figure} \rotatebox{0}{\resizebox{5in}{5in}{\includegraphics{NLQMIV_disperion_figure.pdf}}} \caption{Dispersion vs. height for the linear (red curve) and nonlinear (blue curve) models.}\label{dispfig} \end{figure} Observing the hockey-stick could distinguish the Hamiltonian theory presented in paper I from the spontaneous-collapse theories. If the IB is purely a matter of scale, then when the system size is below the IB, cats can form and persist, while above it a collapse quickly occurs, eliminating one of the cats. Thus we would expect to see two curves resembling the red curve in Figure \ref{dispfig}, one higher than the other (although perhaps the curves would display the reversed trends). \section{Some proposed experiments} Here are three criteria for an experiment that might both locate the IB and distinguish collapse theories from the Hamiltonian theory: \begin{quote} (1) The system should be scalable. (2) The system should be subjected to an adjustable external potential. (3) The readout should make an unambiguous differentiation between cat and no-cat states. \end{quote} These are worth keeping in mind while reading the abbreviated accounts of some plausible experiments, presented next. \subsection{The Marshall {\em et al. } experiment} In the 1980s, L. Di{\'o}si, \cite{diosi}, and R. Penrose, \cite{penrosebook}, independently proposed a theory of collapse of the wavefunction due to gravitation. Roughly, the idea seems to be the following. Consider separating ``two lumps" (copies of some small object) by some distance. This will require a certain cost in (gravitational) energy. In a time equal to Planck's constant divided by this potential energy, one packet is eliminated, due to ``uncertainty" (a concept that Schr{\"o}dingerists\ of course reject). 
This is explained as deriving from a putative ``quantum gravity theory" combined with another postulate, the ``gravitization of quantum mechanics", \cite{penrose}. Whatever the justification, this seems to be another spontaneous-collapse theory, albeit one that does not assume a free parameter for the time before collapse. In 2002 Marshall and colleagues, \cite{marshall}, described an experiment to test the theory: a single photon scattered off a tiny mirror (size: one micron) suspended on a cantilever. Starting with a photon-present-or-photon-absent superposition state entangles the mirror, with the latter's position dispersed ``by about the diameter of an atomic nucleus". Readout would be by fringes observed in a double-arm interferometer, from which a cat-state of the mirror would be inferred if interference reappeared at a multiple of the cantilever oscillation frequency. Mirrors are certainly scalable. But the predicted displacement is far smaller than the system size, making the cat appellation questionable. (Indeed, one could imagine that everything from gnats to planets is naturally dispersed by a femtometer---without calling into question the validity of classical mechanics.) There is no built-in, controllable potential (although the cantilever torsion might provide one). The readout is indirect. \subsection{The quantum-opto-mechanics experiments} The exemplar is the ``micromechanical oscillator", a small beam or bridge clamped at both ends and free to vibrate. Markus Arndt, Markus Aspelmeyer, and Anton Zeilinger wrote in 2009, \cite{Arndt} (see also \cite{ASP}): \begin{quote} The developing field of quantum-opto-mechanics provides ... a unique opportunity to generate superposition states of massive mechanical systems ... one arrives at the canonical situation of Schrödinger's cat involving two macroscopically distinct motional states of a mechanical resonator. \end{quote} The prefix ``opto-" refers to optics; as for the Marshall {\em et al. 
}\ experiment, they envision photons reflected off the beam to make the cat. However, there remains ``the intriguing question whether it will be possible to generate macroscopic displacements that exceed the physical size of the mechanical object". That may soon be possible for ``carbon nanotubes or a silicon nanowire", which might lie on the quantum side of the IB. However, the Viennese physicists remark that the imagined machinery spans the size range \begin{quote} from hundreds of nanometers ... to tens of centimeters in the case of gravitational wave antennae. It is currently a hot research topic how to prepare genuine quantum states of motion of such mechanical devices. \end{quote} Thus with these proposed ``quantum machines" scalability is available, but there remains the displacement issue, and we have to ask for the details of the amplification step and whether it is controllable. \subsection{The Abdi {\em et al. } experiment} In 2016, a German-British collaboration proposed, \cite{abdi}, to conduct an experiment on a ``lithium-decorated monolayer graphene sheet" of diameter one micrometer suspended in a ``controllable, electrostatic double-well potential". The metallic lithium will render the wafer electrically conductive. The authors propose to cool the system to millikelvin temperatures, aiming to attain the ground state. Observation is by way of magnetic coupling to a ``superconducting qubit", with which they hope to explore ``higher-occupation number states" (Copenhagenist language; for Schr{\"o}dingerists, higher-frequency modes). The goal is to test conventional linear QM vs. QM + CC. One micron is smaller than a human cell, but larger than a virus. It may lie on the classical side of the IB. Scalability is an issue. 
\section{Discussion} The experiments summarized above have size and displacement limitations (for a review of the ``macroscopicity" achievable by various other suggested or conducted experiments, see \cite{arndthornberger}), as well as interpretational problems. As the Abdi {\em et al. } experiment approaches most closely the ``ideal" case for purposes of testing the Hamiltonian theory against rivals, I discuss it in more detail here. Leaving aside the scalability issue, the principal difficulty is the indirect observation of dispersion. Presumably the presence or absence of a cat will have to be deduced from computations combining many observed modes (known as ``quantum state reconstruction"). This generates an epistemological dilemma (which is not limited to this particular experiment). Let us suppose an anomaly emerges from the experiment. In this context, this would mean a divergence of some measured quantities, here oscillation modes (those ``higher-occupation number states"), from those predicted, say as a function of the ``anharmonicity parameter" (which I labelled ``height" in previous sections). The question is what to make of it. Does it mean that QM, or QM+CC, is falsified? Let me sketch briefly the steps in the computation Abdi {\em et al. } list in an appendix to their paper: \begin{quote} (1) The quantum (Schr{\"o}dinger's) equation is replaced by a master equation. (2) The equation is expanded in a Dyson series, retaining terms up to second order (Born approximation). (3) Adopting an ```adequate' microscopic model for the system-environment interaction", environmental states are introduced but reduced by ``truncating at a certain number of states which are required in order for our simulations to converge". (4) Assuming the interaction with the environment is small, more terms are dropped (``rotating wave approximation"). (5) Making further assumptions about memory in the environment, the Markov approximation is introduced. 
(6) Imaginary terms are dropped and a time integral extended to infinity. (7) Finally, the Markov model is simulated numerically. \end{quote} The dilemma must now be clear to the reader: is the outcome really anomalous? Or is one of the approximation methods inadequate, or an assumption about an ``environment" incorrect? (Of course, computational ambiguity arises with any multi-body quantum system, since we cannot solve Schr{\"o}dinger's\ equation exactly except for systems with few degrees of freedom, or even simulate effectively with today's supercomputers. As for the ``environment": see next paragraph.) I believe there would be sufficient doubt that no one would abandon their paradigm from such evidence alone. The cure is to introduce some direct readout of ``cat-or-no-cat", say by lowering in a microscope equipped with a weak light source and taking a snapshot of the wafer. I am aware that this intervention would likely heat up the system, but perhaps the picture could be taken at the end of each ``run". I have not discussed ``decoherence" due to a putative ``environment", as I do not accept this theory as a solution of the Measurement Problem. Eliminating macroscopic interference is the job of the apparatus (by cleanly separating wavepackets), not a mysterious ``environment". Nor does replacing a superposition by a mixture supply a definite outcome---that's imposed by the nonlinear terms in the Hamiltonian, which force the system to make a choice, see paper III. If ``decoherence" exists I would regard it as a nuisance of the type that afflicts all experiments, i.e., external noise. I leave suppressing all such environmental perturbations to the skill of the experimentalist. Even if an anomaly in the direction predicted by the Hamiltonian nonlinear theory (e.g., Figure \ref{dispfig}) appeared in the data, it would not ``prove" that theory true. As Karl Popper pointed out in 1935, \cite{popper}, data can falsify theories but never validate them. 
(Thomas Kuhn expressed doubt even about the falsification claim, \cite{kuhn}.) I would accept a failure to observe anything like the ``hockey-stick graph" near the Infamous Boundary as a falsification of nonlinear QM. Even to be relevant to the debate, data must be sufficiently ``clean" to permit unambiguous interpretation, and theory sufficiently rigid not to allow wiggle room for supporters to dismiss an anomaly. I do believe that experimentalists are getting closer to performing an informative experiment. \section{Computational Appendix} The parameterization used for the potential was the following: \begin{eqnarray} \nonumber V(x) &=& A\,(x - \hbox{width})^2\,(x + \hbox{width})^2,\\ A &=& \hbox{height}/\hbox{width}^4. \end{eqnarray} Parameter ``width" was 4.0. Other parameters were as in paper III. Figure \ref{dispfig} displays the peak dispersion over the time interval $[1,10]$ (arbitrary time units); the initial interval $[0,1]$ was ignored because in the nonlinear case the dispersion dropped from that of the initial condition. (``Dispersion" means of the ``center-of-total-spin" and includes the square root, like a standard error rather than a variance, although for Schr{\"o}dingerists\ it is neither.) I also made simulations with a two-part function, $V_1(x)$, of the form (writing $V(\hbox{height},x)$ for the above): \begin{eqnarray} \nonumber V_1(x) &=& V(10,x), \quad \hbox{for $|x| > 4.0$};\\ \nonumber V_1(x) &=& V(\hbox{height},x), \quad \hbox{for $|x| \le 4.0$}. \end{eqnarray} I also tried diminishing the coupling constant of the ``micro" system to the ``macro", $\alpha$ in previous papers, by a factor of ten. Neither change altered Figure \ref{dispfig} substantially. 
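The parameterization above is easy to check numerically. The following is a minimal sketch (the helper names \texttt{V} and \texttt{V1} and the vectorized form are illustrative, not part of the original C program); note that $A = \hbox{height}/\hbox{width}^4$ makes $V(0)$ equal to the hill height and places the wells at $x = \pm\,\hbox{width}$:

```python
import numpy as np

def V(height, x, width=4.0):
    """Double-well potential V(x) = A (x - width)^2 (x + width)^2,
    with A = height / width**4 so that V(0) = height (the hill)
    and V(+/- width) = 0 (the wells)."""
    A = height / width**4
    return A * (x - width) ** 2 * (x + width) ** 2

def V1(height, x, width=4.0):
    """Two-part variant: fixed outer walls (height 10), adjustable hill."""
    return np.where(np.abs(x) > width, V(10.0, x, width), V(height, x, width))

# Hill at x = 0 has the requested height; wells at x = +/- width sit at zero.
assert np.isclose(V(2.2, 0.0), 2.2)
assert V(2.2, 4.0) == 0.0
```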
The explanation for the trend of the dispersion in the linear case is that the double-well potential is not absolutely necessary to form cats: the superposition of the up- and down-states in the ``microsystem", plus the linear coupling to the ``macrosystem", can do the job. The effect of adding the external potential is to partially confine the cats to the wells, actually diminishing the overall dispersion; see Figures \ref{densfig} and \ref{densfig2}. I suspect this is an artifact of the tiny size of the ``device" (8 qubits), and would not be reproduced at larger scales. Simulation used the Tao symplectic solver, described in the Appendix to paper III. Figure \ref{dispfig} required six hours and forty minutes running on a ten-year-old HP Linux box. The program was written in the C language in the style of the old classic, {\em Numerical Recipes in C}; the reader is invited to reproduce (and extend) it on a modern platform with modern programming techniques.
\section{Introduction} \subsection{Background and Motivation} Wireless communications has witnessed several major theoretical advancements in the last few decades, which have been quickly incorporated into communication standards, e.g., multiple input multiple output (MIMO) systems and orthogonal frequency division multiple access (OFDMA)~\cite{BOOKTse}. The baseband receiver design underlying these technologies is based on coherent detection (CD), i.e., the coherent receiver for short, which has been adopted exclusively in nearly all the popular wireless communication standards. The coherent receiver design has {\color{black}stayed} virtually unchanged throughout the evolution of wireless communication systems to date. The recent trend towards a large number of antennas at the transmitter or receiver (such as in massive MIMO and millimeter wave systems~\cite{MagazineRobert,Jeffrey}) provides an incentive to rethink the coherent receiver design for future communication systems, because the provision of accurate channel state information (CSI) becomes challenging in scenarios with a massive number of antennas. In this regard, \emph{power/intensity-detection} (PD) based \emph{non-coherent} receivers have been proposed in~\cite{Goldsmith16} and~\cite{Petar16}. Although, in general, non-coherent receivers suffer a performance loss compared to coherent receivers, their low cost and low power consumption make them attractive for systems with a large number of antennas. It is proved in~\cite{Goldsmith16} that PD-based non-coherent modulation in a massive single input multiple output (SIMO) system achieves the same scaling law as the coherent modulation scheme with an increasing number of antennas. It is also shown in~\cite{Petar16} that the performance of PD-based non-coherent modulation asymptotically approaches that of coherent detection in the high SNR regime. 
These studies show that PD-based non-coherent receivers can offer a performance comparable to coherent receivers in future wireless systems. Another motivation for considering PD-based non-coherent receivers comes from the recent interest in simultaneous wireless information and power transfer (SWIPT)~\cite{Bi15,Huang15,Krikidis_survey,XiaoLu}. In such systems, a user is assumed to be equipped with an energy receiver which is based on non-coherent RF (radio frequency)-to-DC (direct current) conversion, and a conventional coherent (information) receiver. In SWIPT systems, the user employs the energy receiver and the coherent receiver \emph{separately}: (i) In time-switching, the user switches between the two receivers, depending on whether it is in RF energy harvesting (EH) mode or information detection mode, or (ii) in power-splitting, the user splits the received signal into two streams, and then sends one stream to the energy receiver and the other to the coherent receiver. Although the PD-based non-coherent receiver has received more attention in wireless communication systems recently, it has played an important role in optical communications for a long time. For the low-cost wireless infrared communication system~\cite{InfraredMag}, \emph{intensity modulation} (IM) is the most commonly adopted modulation scheme. For IM-based wireless infrared communication, information is carried by the instantaneous power of the carrier, and the receiver uses a photodetector to produce a current proportional to the received instantaneous power directly, i.e., PD-based non-coherent demodulation. The wireless infrared communication channel is usually referred to as the intensity channel or the \emph{non-coherent additive white Gaussian noise (AWGN) channel}, echoing the coherent AWGN channel. The modulation schemes and the capacity of the non-coherent AWGN channel were studied in~\cite{OpticModu} and~\cite{OpticalTIT}, respectively. 
\subsection{Novel Contributions} Motivated by the recent interest in the PD-based non-coherent receiver, we consider a basic point-to-point communication system and revisit the design of the communication receiver. Rather than focusing on an improved design for either the coherent receiver or the non-coherent receiver alone, we consider a receiver with joint coherent and PD-based non-coherent processing. To the best of our knowledge, this is an open problem in the literature, and it is not immediately clear whether joint processing will be better than either coherent or PD-based non-coherent processing alone. In this work, we show that it can in fact significantly improve the achievable rate and also reduce the symbol error rate (SER). The main contributions of the paper are summarized as follows: \begin{enumerate} [1)] \item We propose a novel information receiver architecture for a $K$-antenna receiver, called the \emph{splitting receiver}. The received signal at each antenna is split into two streams by a passive power splitter with a certain \emph{splitting ratio}. One stream is processed by a conventional (coherent) CD circuit, and the other is processed by a (non-coherent) PD circuit; the $2K$ streams of processed signals are then jointly used for information detection. \item As a variant of the splitting receiver, we also propose a simplified receiver, where no power splitters are required: a fixed number of antennas are connected to CD circuits and the remaining antennas are connected to PD circuits. Analytically, the simplified receiver can be treated as a special case of the splitting receiver, where the splitting ratio at each antenna can only take the value $1$ or $0$. \item We show that the splitting receiver (and also the simplified receiver) increases the dimension of the received signal space, since the noise adds linearly to the signal in the coherent receiver part, while the noise adds to the squared amplitude of the signal in the PD-based non-coherent receiver part. 
This results in improved communication performance. \item From an information-theoretic perspective, we model the channel introduced by the splitting receiver as a splitting channel. Assuming a Gaussian input to the splitting channel, in the high signal-to-noise ratio (SNR) regime, we show analytically that: (i) The asymptotic maximum mutual information of the splitting channel is $3/2$ times that of either the coherent AWGN channel or the non-coherent AWGN channel, under the same average received signal power constraint. (ii) For a splitting receiver with a single receiver antenna, the asymptotically optimal power splitting ratio is $1/3$. (iii) For the simplified receiver with a large number of receiver antennas, connecting half the antennas to CD circuits and the other half to PD circuits is the optimal strategy. \item For transmissions based on practical modulations, we analyze the symbol decision region and the SER at the splitting receiver. Considering the high SNR regime, we derive the SER expression for a general modulation scheme. The analytical results show that, compared with the conventional coherent receiver, the splitting receiver achieves an asymptotic SER reduction by a factor of $M-1$ for $M$-PAM (pulse amplitude modulation) and $\sqrt{M} -1$ for $M$-QAM (quadrature amplitude modulation). \end{enumerate} \subsection{Paper Organization and Notation} This paper is organized as follows. Section II presents the system model, the proposed receiver architectures and the splitting channel. Section III analyzes the mutual information of the splitting channel with a Gaussian input. Section IV presents the received signal constellation at the splitting receiver for practical modulation schemes. Section V presents the SER results for practical modulation schemes. Finally, Section VI concludes the paper. \underline{Notation:} $\tilde{\cdot}$ denotes a complex number. 
$(\cdot)^*$ and $\vert \cdot \vert$ denote the conjugate and the absolute-value norm of a complex number, respectively. $\mathrm{Real}\{\cdot\}$ and $\mathrm{Imag}\{\cdot\}$ denote the real part and the imaginary part of a complex number, respectively. $\myprobability{\cdot}$ denotes the probability of an event. $h(\cdot)$, $h(\cdot,\cdot)$ and $h(\cdot \vert \cdot)$ denote the differential entropy, the joint and the conditional differential entropy, respectively. $\mathcal{I}(\cdot;\cdot)$ denotes the mutual information. Random variables and their realizations are denoted by upper- and lower-case letters, respectively. $\mathrm{erfc}(\cdot)$ is the complementary error function, and $Q(x) \triangleq \frac{1}{2} \mathrm{erfc}(\frac{x}{\sqrt{2}})$ is the Q-function. \section{System Model} Consider the communication between a single-antenna transmitter and a $K$-antenna receiver. The average received signal power at each antenna is denoted by $\mypower$. The channel coefficient at the $k$th receiver antenna is denoted by $\tilde{h}_k$. 
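As a quick numerical sanity check on the notation, the Q-function above can be evaluated directly from the complementary error function; a minimal sketch (the function name \texttt{Q} simply mirrors the definition, and the reference values are standard normal tail probabilities):

```python
import math

def Q(x):
    """Standard normal tail probability: Q(x) = (1/2) erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Q(0) = 1/2 by symmetry of the Gaussian; Q(2) is the familiar ~2.275% tail.
assert abs(Q(0.0) - 0.5) < 1e-12
assert abs(Q(2.0) - 0.0227501319) < 1e-9
```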
\begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \usetikzlibrary{arrows} \usetikzlibrary{arrows} \vspace{-0.5cm} \begin{tikzpicture} [scale=0.6] \draw [-latex,rounded corners=10pt] (-4.5,0) -- (-3.15,0) -- (-2.15,1.5) -- (1.7,1.5) node (v1) {}; \draw [-latex,rounded corners=10pt] (-4.5,0) -- (-3.15,0) -- (-2.15,-1.5) -- (1.7,-1.5) node (v2) {}; \draw [-latex,rounded corners=10pt] (-4.5,0) -- (-3.15,0) -- (-2.15,1.5) -- (-1.45,1.5) ; \draw [-latex,rounded corners=10pt] (-4.5,0) -- (-3.15,0) -- (-2.15,-1.5) -- (-1.45,-1.5) ; \node [scale=1.5] at (2,1.5) {$\oplus$}; \node [scale=1.5] at (2,-1.5) {$\oplus$}; \draw [-latex,blue,thick](2.25,1.5) -- (3.2,1.5) -- (3.5,1.5) ; \draw [-latex,red,thick](2.25,-1.5) -- (3.2,-1.5) -- (3.5,-1.5); \draw [-latex,rounded corners=6pt] (-3.65,0.5) rectangle (-2.65,-0.5); \draw [-latex](2,2.2) -- (2,1.75); \draw [-latex](2,-0.65) -- (2,-1.25); \node at (2,2.7) {$\tilde{Z}_k$}; \node at (2,-0.3) {$N_k$}; \node [align=center,font=\footnotesize] at (-3.85,2.45) { RF power\\[-2mm] splitter}; \draw [rounded corners=2pt,fill=white] (-1.45,2) rectangle (0.8,1); \draw [rounded corners=2pt,fill=white] (-1.45,-1) rectangle (0.8,-2); \node [align=center,font=\footnotesize] at (-0.3,1.5) { Down\\[-3mm] conversion}; \node [font=\footnotesize] at (-0.3,-1.5) { Rectifier}; \node at (-3.45,0) {$\bullet$}; \node at (-2.75,1.4) {$\rho$}; \node at (-3,-1.3) {$1-\rho$}; \draw [rounded corners=6pt](-2,3.1) rectangle (3,0.35); \draw [rounded corners=6pt](-2,0.05) rectangle (3,-2.6); \node at (-0.8,2.75) {\underline{CD}}; \node at (-0.8,-0.35) {\underline{PD}}; \node [circle,draw,scale=0.7,font=\Large] (v11) at (-9.35,0.65) {\bf +}; \node [circle,draw,scale=0.7,font=\Large] (v12) at (-9.35,-0.8) {\bf +}; \draw (-16.35,2.2) node (v7) {} -- (-15.6,2.2) node (v4) {}; \draw (v4) -- (-14.9,2.7) {}; \draw (v4) -- (-14.9,1.7) {}; \draw [-latex](-14.9,2.7) -- 
(-14.45,2.7); \draw [-latex](-14.9,1.7) -- (-14.45,1.7); \draw [rounded corners=4pt] (-14.45,3) rectangle (-12.95,2.4); \draw [rounded corners=4pt] (-14.45,2) rectangle (-12.95,1.4); \node at (-13.7,2.7) {CD}; \node at (-13.7,1.7) {PD}; \node [fill,circle,scale=0.5] at (-15.6,2.2) {}; \draw [rounded corners=4pt] (-15.95,2.55) rectangle (-15.25,1.85); \node (v5) at (-11.65,2.7) {$\tilde{Y}_{1,1}$}; \node (v6) at (-11.65,1.7) {$Y_{2,1}$}; \draw [-latex,blue,thick] (-12.95,2.7) -- (v5); \draw [-latex,red,thick] (-12.95,1.7) -- (v6); \draw [-latex,blue,thick](v5) -- (v11); \draw [-latex,red,thick](v6) -- (v12); \draw (-16.35,0.15) -- (-15.6,0.15) node (v4) {}; \draw (v4) -- (-14.9,0.65) {}; \draw (v4) -- (-14.9,-0.35) {}; \draw [-latex](-14.9,0.65) -- (-14.45,0.65); \draw [-latex](-14.9,-0.35) -- (-14.45,-0.35); \draw [rounded corners=4pt] (-14.45,0.95) rectangle (-12.95,0.35); \draw [rounded corners=4pt] (-14.45,-0.05) rectangle (-12.95,-0.65); \node at (-13.7,0.65) {CD}; \node at (-13.7,-0.35) {PD}; \node [fill,circle,scale=0.5] at (-15.6,0.15) {}; \draw [rounded corners=4pt] (-15.95,0.5) rectangle (-15.25,-0.2); \node (v5) at (-11.65,0.65) {$\tilde{Y}_{1,2}$}; \node (v6) at (-11.65,-0.35) {$Y_{2,2}$}; \draw [-latex,blue,thick] (-12.95,0.65) -- (v5); \draw [-latex,red,thick] (-12.95,-0.35) -- (v6); \draw [-latex,blue,thick](v5) -- (v11); \draw [-latex,red,thick](v6) -- (v12); \draw (-16.35,-2.25) -- (-15.6,-2.25) node (v4) {}; \draw (v4) -- (-14.9,-1.75) {}; \draw (v4) -- (-14.9,-2.75) {}; \draw [-latex](-14.9,-1.75) -- (-14.45,-1.75); \draw [-latex](-14.9,-2.75) -- (-14.45,-2.75); \draw [rounded corners=4pt] (-14.45,-1.45) rectangle (-12.95,-2.05); \draw [rounded corners=4pt] (-14.45,-2.45) rectangle (-12.95,-3.05); \node at (-13.7,-1.75) {CD}; \node at (-13.7,-2.75) {PD}; \node [fill,circle,scale=0.5] at (-15.6,-2.25) {}; \draw [rounded corners=4pt] (-15.95,-1.9) rectangle (-15.25,-2.6); \node (v5) at (-11.65,-1.75) {$\tilde{Y}_{1,K}$}; \node (v6) at (-11.65,-2.75) 
{$Y_{2,K}$}; \draw [-latex,blue,thick] (-12.95,-1.75) -- (v5); \draw [-latex,red,thick] (-12.95,-2.75) -- (v6); \draw [-latex,blue,thick](v5)-- (v11); \draw [-latex,red,thick](v6) -- (v12); \draw (-16.35,2.2) -- (-16.35,2.7) -- (-16.65,3.1) -- (-16.05,3.1) -- (-16.35,2.7); \draw (-16.35,0.15) -- (-16.35,0.65) -- (-16.65,1.05) -- (-16.05,1.05) -- (-16.35,0.65); \draw (-16.35,-2.25) -- (-16.35,-1.75) -- (-16.65,-1.35) -- (-16.05,-1.35) -- (-16.35,-1.75); \node at (-16.35,-0.7) {$\vdots$}; \node at (-11.75,-0.9) {$\vdots$}; \node (v8) at (-7.5,0.65) {$\tilde{Y}_1$}; \node (v9) at (-7.5,-0.8) {$Y_2$}; \draw [-latex,ultra thick](v11) -- (v8); \draw [-latex,ultra thick](v12) -- (v9); \node at (4,1.5) {$\tilde{Y}_{1,k}$}; \node at (4,-1.5) {$Y_{2,k}$}; \draw [rounded corners=6pt] (-10.2,2.15) rectangle (-8.6,-1.85); \node at (-9.4,1.85) {MRC}; \node at (-12.0,-3.75) {(a) $K$-antenna splitting receiver architecture.}; \node at (0.2,-3.75) {(b) The $k$th antenna receiver architecture.}; \draw (-4.5,0) -- (-4.5,0.5) -- (-4.8,0.9) -- (-4.2,0.9) -- (-4.5,0.5); \draw (-4.05,1.95) .. controls (-4,1.5) and (-3.75,0.85) .. (-3.25,0.6); \end{tikzpicture} \vspace{-0.5cm} \caption{The proposed splitting receiver architecture.} \label{fig:splitting_receiver} \vspace{-0.5cm} \end{figure} \subsection{Proposed Receiver Architecture} \emph{Splitting receiver:} The proposed splitting receiver architecture is illustrated in Figs.~\ref{fig:splitting_receiver}(a) and~(b). In the first stage, the received signal at each antenna is split into two streams by an ideal \emph{passive RF power splitter}. We assume there is no power loss or noise introduced during the splitting process~\cite{split_circuit,datasheet,Xunzhou13}. {\color{black}One stream is sent to the (conventional) CD circuit and the other to the PD circuit. 
The signals in the CD and PD circuits are first converted to baseband signals and then sampled and digitized by analog-to-digital converters (ADCs) for further processing. Specifically, the rectifier-based PD circuit converts the RF signal into a DC signal with a conversion efficiency $\eta$.} In the second stage, all $2K$ streams of signals from the $K$ antennas are jointly used for information detection.\footnote{Although the CD and PD circuits may have different detection sensitivity levels in practice~\cite{WanchunICC16}, we assume both circuits are able to detect an arbitrarily small power signal, for tractability.} Note that although we focus on the wireless communication application in this paper, the proposed splitting receiver with a single antenna ($K=1$) is also applicable to cable and fibre-optic communication systems. \textit{Simplified receiver:} We also propose a simplified receiver, as a variant of the splitting receiver, where no power splitters are required; it is illustrated in Fig.~\ref{fig:mixed_receiver}. In the simplified receiver, $K_1$ antennas ($1 \leq K_1 < K$) are connected to CD circuits and the remaining antennas are connected to PD circuits. We assume that the connections are determined offline, and hence do not depend on the instantaneous channel coefficients at each antenna. 
\begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \usetikzlibrary{arrows} \usetikzlibrary{arrows} \vspace{-0.5cm} \begin{tikzpicture} [scale=0.6] \node (v10) at (9.85,2.4) {$\tilde{Y}_{1,1}$}; \draw (5.85,2.4) -- (5.85,2.75) -- (5.55,3.15) -- (6.15,3.15) -- (5.85,2.75); \draw [-latex] (5.85,2.4) -- (6.6,2.4); \draw [rounded corners=4pt] (6.6,2.7) rectangle (8.1,2.1); \node at (7.3,2.4) {CD}; \draw [-latex,blue,thick] (8.1,2.4) -- (v10); \node (v11) at (9.85,0.7) {$\tilde{Y}_{1,K_1}$}; \draw (5.85,0.7) -- (5.85,1.05) -- (5.55,1.45) -- (6.15,1.45) -- (5.85,1.05); \draw [-latex] (5.85,0.7) -- (6.6,0.7); \draw [rounded corners=4pt] (6.6,1) rectangle (8.1,0.4); \node at (7.3,0.7) {CD}; \draw [-latex,blue,thick] (8.1,0.7) -- (v11); \node (v12) at (9.85,-0.35) {$Y_{2,K_1 +1}$}; \draw (5.85,-0.35) -- (5.85,0) -- (5.55,0.4) -- (6.15,0.4) -- (5.85,0); \draw [-latex] (5.85,-0.35) -- (6.6,-0.35); \draw [rounded corners=4pt] (6.6,-0.05) rectangle (8.1,-0.65); \node at (7.3,-0.35) {PD}; \draw [-latex,red,thick] (8.1,-0.35) -- (v12); \node (v13) at (9.85,-1.9) {$Y_{2,K}$}; \draw (5.85,-1.9) -- (5.85,-1.55) -- (5.55,-1.1) -- (6.15,-1.1) -- (5.85,-1.55); \draw [-latex] (5.85,-1.9) -- (6.6,-1.9); \draw [rounded corners=4pt] (6.6,-1.6) rectangle (8.1,-2.2); \node at (7.3,-1.9) {PD}; \draw [-latex,red,thick] (8.1,-1.9) -- (v13); \node at (9.8,1.75) {$\vdots$}; \node at (9.8,-1.05) {$\vdots$}; \node [circle,draw,scale=0.7,font=\Large] (v111) at (13,0.7) {\bf +}; \node [circle,draw,scale=0.7,font=\Large] (v222) at (13,-0.35) {\bf +}; \draw [rounded corners=6pt] (12,2.4) rectangle (14,-1.6); \draw [-latex,blue,thick](v10) -- (v111); \draw [-latex,blue,thick](v11) -- (v111); \draw [-latex,red,thick](v12) -- (v222); \draw [-latex,red,thick](v13) -- (v222); \node at (13,1.9) {MRC}; \node (v8) at (15.3,0.7) {$\tilde{Y}_1$}; \node (v9) at (15.3,-0.35) {$Y_2$}; \draw [-latex,ultra thick](v111) 
-- (v8); \draw [-latex,ultra thick](v222) -- (v9); \draw [-latex] (19,1.5) -- (19.35,1.5) -- (20.35,1.5) -- (24.2,1.5) node (v1) {}; \draw [-latex] (19,-1.5) -- (19.35,-1.5) -- (20.35,-1.5) -- (24.2,-1.5) node (v2) {}; \node [scale=1.5] at (24.45,1.5) {$\oplus$}; \node [scale=1.5] at (24.45,-1.5) {$\oplus$}; \draw [-latex,blue,thick](24.7,1.5) -- (25.65,1.5) -- (25.95,1.5) ; \draw [-latex,red,thick](24.7,-1.5) -- (25.65,-1.5) -- (25.95,-1.5); \draw [-latex](24.45,2.2) -- (24.45,1.75); \draw [-latex](24.45,-0.65) -- (24.45,-1.25); \node at (24.45,2.7) {$\tilde{Z}_{K_1}$}; \node at (24.45,-0.3) {$N_{K_1+1}$}; \draw [rounded corners=2pt,fill=white] (21.05,2) rectangle (23.3,1); \draw [rounded corners=2pt,fill=white] (21.05,-1) rectangle (23.3,-2); \node [align=center,font=\footnotesize] at (22.2,1.5) { Down\\[-3mm] conversion}; \node [font=\footnotesize] at (22.2,-1.5) { Rectifier}; \draw [rounded corners=6pt](20.5,3.1) rectangle (25.3,0.35); \draw [rounded corners=6pt](20.5,0.05) rectangle (25.3,-2.6); \node at (21.7,2.75) {\underline{CD}}; \node at (21.7,-0.35) {\underline{PD}}; \draw (19,1.5) -- (19,2.35) -- (18.7,2.75) -- (19.3,2.75) -- (19,2.35); \draw (19,-1.5) -- (19,-0.65) -- (18.7,-0.25) -- (19.3,-0.25) -- (19,-0.65); \node at (26.8,1.5) {$\tilde{Y}_{1,K_1}$}; \node at (27.1,-1.5) {$Y_{2,K_1+1}$}; \node at (9.9,-3.5) {(a) $K$-antenna simplified receiver architecture.}; \node at (24,-3.5) {(b) The $K_1$th and the $(K_1+1)$th antenna receiver architectures.}; \end{tikzpicture} \vspace{-0.5cm} \caption{The proposed simplified receiver architecture.} \label{fig:mixed_receiver} \vspace{-0.5cm} \end{figure} \subsection{Signal Model} In this section, we present the signal model for the splitting receiver. Note that the simplified receiver can be analytically treated as a special case of the splitting receiver with power splitting ratios taking binary values only, i.e., $\rho_k \in \{0,1\}$, for all $k=1,2,..., K$. 
Based on~\cite{Xunzhou13,OpticalTIT}, the output signals from the CD and PD circuits at the $k$th antenna are given by, respectively, \begin{align} \tilde{Y}_{1,k}& = \sqrt{\rho_k \myP} \tilde{h}_k \tilde{X} + \tilde{Z}_k, \label{receive_signal_1}\\ Y_{2,k}&= \eta (1-\rho_k) \vert \tilde{h}_k \vert^2 \myP \vert \tilde{X} \vert^2 + N'_k, \label{receive_signal_2'} \end{align} where $\rho_k \in \left[0,1\right]$ is the power splitting ratio. $\tilde{X}$ is the transmitted signal with normalized variance, and $\tilde{X} \in \mathcal{X}$, where $\mathcal{X}$ denotes the set of all possible transmitted signals. {\color{black}$\tilde{Z}_k$ is the post-processing complex AWGN of the CD circuit, with zero mean and variance $\sigmaone$, which includes both the RF-band-to-baseband conversion noise and the ADC noise. $N'_k$ is the post-processing noise of the PD circuit, which is also assumed to be real Gaussian~\cite{OpticalTIT} and includes both the rectifier noise and the ADC noise.} Note that we only consider the post-processing noises $\tilde{Z}_k$ and $N'_k$, i.e., we ignore pre-processing noise such as the antenna noise, which is almost at the thermal noise level and is much smaller than the post-processing noise~\cite{Xunzhou13}. Without loss of generality, scaling \eqref{receive_signal_2'} by $\eta$, the received signal {\color{black}$Y_{2,k}$} can be rewritten~as \begin{align} Y_{2,k} &= (1-\rho_k) \vert \tilde{h}_k \vert^2 \myP \vert \tilde{X} \vert^2 + {\color{black}N_k} \label{receive_signal_2}, \end{align} where $N_k \triangleq N'_k/\eta$ is the equivalent rectifier conversion AWGN with zero mean and variance $\sigmatwo$. \subsection{Maximal Ratio Combining of Splitting Receiver} To detect the transmitted signal $\tilde{X}$, similar to a conventional SIMO receiver, the optimal method is maximal ratio combining (MRC).
We assume that the receiver has perfect channel state information (CSI), i.e., knowledge of $\tilde{h}_k$. Since the $K$-antenna received signals $\tilde{Y}_{1,k}$ and $Y_{2,k}$, $k=1,2,...,K$, lie in different signal spaces, we use MRC for coherently processed signals (i.e., $\tilde{Y}_{1,k}$) and non-coherent signals (i.e., $Y_{2,k}$) separately. Based on~\eqref{receive_signal_1} and~\eqref{receive_signal_2}, the combined coherently and non-coherently processed signals are given by, respectively, \begin{equation} \label{MRC_1} \begin{aligned} \tilde{Y}_1 &= \left(\sum\limits_{k=1}^{K} \rho_k \vert \tilde{h}_k \vert^2 \right) \sqrt{\myP} \tilde{X} + \sum\limits_{k=1}^{K} \sqrt{\rho_k} \tilde{h}^*_k \tilde{Z}_k, \\ Y_2 &= \left(\sum\limits_{k=1}^{K} (1-\rho_k)^2 \vert \tilde{h}_k \vert^4 \right) \myP \vert \tilde{X} \vert^2 + \sum\limits_{k=1}^{K} (1-\rho_k) \vert \tilde{h}_k \vert^2 N_k. \end{aligned} \end{equation} For convenience of analysis, after linear scaling, \eqref{MRC_1} can be rewritten as \begin{equation} \label{MRC_signal} \begin{aligned} \tilde{Y}_1 &= \sqrt{\Theta_1} \sqrt{\myP} \tilde{X} + \tilde{Z}, \ Y_2 = \sqrt{\Theta_2} \myP \vert \tilde{X} \vert^2 + N, \end{aligned} \end{equation} where \begin{equation} \label{my_theta} \begin{aligned} \Theta_1 &= {\sum\limits_{k=1}^{K} \rho_k \vert \tilde{h}_k \vert^2 } , \ \Theta_2 = {\sum\limits_{k=1}^{K} (1-\rho_k)^2 \vert \tilde{h}_k \vert^4}, \end{aligned} \end{equation} and $\tilde{Z}$ and $N$ follow the same distributions as $\tilde{Z}_k$ and $N_k$, respectively. The two-dimensional signal $\tilde{Y}_1$ and the one-dimensional signal $Y_2$ form a triple $(\tilde{Y}_1, Y_2)$, which is the equivalent received signal of the $K$-antenna splitting receiver. 
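As a concrete sanity check, the MRC combining in \eqref{MRC_1} and the linear scaling to \eqref{MRC_signal} can be sketched as follows (a minimal noiseless Python sketch; the channel gains, splitting ratios, and signal values are illustrative, not from the paper):

```python
import math

def mrc_combine(h, rho, P, X, Z=None, N=None):
    """Combine per-antenna CD/PD outputs with MRC, then scale as in the text."""
    K = len(h)
    Z = Z if Z is not None else [0j] * K      # CD noise samples (zero: noiseless check)
    N = N if N is not None else [0.0] * K     # PD noise samples
    # per-antenna CD and PD outputs
    y1 = [math.sqrt(rho[k] * P) * h[k] * X + Z[k] for k in range(K)]
    y2 = [(1 - rho[k]) * abs(h[k]) ** 2 * P * abs(X) ** 2 + N[k] for k in range(K)]
    # MRC weights: sqrt(rho_k) h_k^* for the CD branch, (1 - rho_k)|h_k|^2 for the PD branch
    Y1 = sum(math.sqrt(rho[k]) * h[k].conjugate() * y1[k] for k in range(K))
    Y2 = sum((1 - rho[k]) * abs(h[k]) ** 2 * y2[k] for k in range(K))
    Theta1 = sum(rho[k] * abs(h[k]) ** 2 for k in range(K))
    Theta2 = sum((1 - rho[k]) ** 2 * abs(h[k]) ** 4 for k in range(K))
    # linear scaling that brings the combined signals into the Theta_1/Theta_2 form
    return Y1 / math.sqrt(Theta1), Y2 / math.sqrt(Theta2), Theta1, Theta2

# noiseless example: the scaled outputs equal sqrt(Theta1 P) X and sqrt(Theta2) P |X|^2
Y1s, Y2s, T1, T2 = mrc_combine([1 + 0j, 0.5j], [0.3, 0.7], 4.0, 1 + 1j)
```

In the noiseless case the scaled outputs reduce exactly to the two terms of the equivalent model, which is what the test below checks.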
\textnormal{It is interesting to see that since the two-dimensional signal $\tilde{Y}_1$ lies on the \emph{in-phase-quadrature} (I-Q) plane and the one-dimensional signal $Y_2$ lies on the \emph{power} (P)-axis, the equivalent received signal $(\tilde{Y}_1, Y_2)$ lies in the three-dimensional I-Q-P space. This is different from the conventional coherent (two-dimensional) and non-coherent (one-dimensional) receiver signal spaces. Thus, the splitting receiver expands the received signal space and fundamentally changes the way in which the signal is processed compared with the conventional receivers.} Considering the noiseless signal, i.e., letting $\tilde{Z} = N = 0$ in \eqref{MRC_signal}, we have $Y_2 = \frac{\sqrt{\Theta_2}}{\Theta_1}\vert\tilde{Y}_1\vert^2$, which is the equation of a paraboloid. From a geometric point of view, defining $\vec{\rho} \triangleq [\rho_1, \rho_2,...\ \rho_K]$, $\vec{1} \triangleq \underbrace{[1, 1, \cdots, 1]}_K$, and $\vec{0} \triangleq \underbrace{[0, 0, \cdots, 0]}_{K}$, the splitting receiver bends the noiseless received signal space into a paraboloid parameterized by $\vec{\rho}$, as illustrated in Fig.~\ref{fig:Gaussian_shape}. When $\vec{\rho} = \vec{1}$, i.e., the non-splitting case, the splitting receiver degrades to the coherent receiver. As the parameter $\sqrt{\Theta_2}/\Theta_1$ increases, i.e., as the elements of $\vec{\rho}$ decrease, the splitting receiver bends the signal plane into a paraboloid that becomes taller and thinner. When $\vec{\rho} = \vec{0}$, the splitting receiver degrades to the PD-based non-coherent receiver. In this paper, the PD-based non-coherent receiver is called the \emph{non-coherent receiver} for short, and we refer to both the $K$-antenna coherent receiver (i.e., $\vec{\rho} = \vec{1}$) and the $K$-antenna non-coherent receiver (i.e., $\vec{\rho} = \vec{0}$) as the \emph{conventional receivers}.
\begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \vspace{-1.3cm} \includegraphics[scale=0.8]{Gaussian_shape} \vspace*{-0.5cm} \caption{\small Illustration of the signal space of the splitting receiver, $\rho = 0.2,\ 0.5,\ 1$.} \label{fig:Gaussian_shape} \vspace*{-0.5cm} \end{figure} \subsection{Splitting Channel} From an information theory perspective, \eqref{MRC_signal} can be rewritten as \begin{equation} \label{receive_signal} \begin{bmatrix} \tilde{Y}_1 \\ Y_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \vert \cdot \vert^2 \end{bmatrix} \begin{bmatrix} \sqrt{\Theta_1} \\ \sqrt[4]{\Theta_2} \end{bmatrix} \sqrt{\myP}\tilde{X} + \begin{bmatrix} \tilde{Z} \\ N \end{bmatrix}, \end{equation} where $\vert \cdot \vert^2$ is the squared magnitude operator. We refer to \eqref{receive_signal} as the \emph{splitting channel}; its input and output, regarded as random variables, are $\sqrt{\myP} \tilde{X}$ and $(\tilde{Y}_1,Y_2)$, respectively. The splitting channel can be treated as a SIMO channel, since the channel has one input $\sqrt{\myP}\tilde{X}$ and two outputs $\tilde{Y}_1$ and $Y_2$. It can also be treated as a degraded (due to the power splitting) SISO channel with output $\tilde{Y}_1$ and side information $Y_2$. \subsection{Performance Metrics} We study the mutual information between the input and output of the splitting channel for an ideal Gaussian input, and study the SER performance for practical modulation schemes. For convenience of analysis, we define the operating SNR as \begin{equation} \snr \triangleq \min\left\lbrace \snrcov, \snrrec \right\rbrace, \end{equation} where \begin{equation} \snrcov \triangleq H_2 \frac{\myP}{\sigmaone},\ \snrrec \triangleq \sqrt{H_4} \frac{\myP}{\sigmatwosqrt},\ H_2 \triangleq \sum\limits_{k=1}^{K} \vert \tilde{h}_k \vert^2,\ H_4 \triangleq \sum\limits_{k=1}^{K} \vert \tilde{h}_k \vert^4.
\end{equation} $\snrcov$ and $\snrrec$ are the SNRs of the conventional receivers, i.e., $\vec{\rho} =\vec{1}$ for the coherent receiver and $\vec{\rho} = \vec{0}$ for the non-coherent receiver, respectively. Specifically, the definition of $\snrrec$ is consistent with \cite{OpticalTIT}. Note that although $\sqrt{H_4} \myP$ and $\sigmatwosqrt$ correspond to the standard deviation (not variance) of the signal $\sqrt{H_4} \myP \vert\tilde{X} \vert^2$ and the noise $N$ at the PD receiver, respectively, $\sqrt{H_4} \myP$ still has the physical meaning of ``power''. Thus, the signal-to-noise ratio is defined as $\sqrt{H_4} \frac{\myP}{\sigmatwosqrt}$ rather than $H_4\frac{\myP^2}{\sigmatwo}$. In the following, we refer to the high SNR regime as $\snr \rightarrow \infty$, which is obtained by letting $\myP\rightarrow \infty$. Our analysis will focus on the splitting receiver, which includes the simplified receiver as a special case. \section{Splitting Channel: Mutual Information} In this section, we study the mutual information of the splitting channel to determine the gain due to the joint coherent and non-coherent processing. We also provide an intuitive explanation of this processing gain. Based on \eqref{receive_signal}, the mutual information between the input and outputs of the splitting channel with the splitting ratio $\vec{\rho}$ is \begin{equation} \label{first_mutual_info} \begin{aligned} &\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2) = h(\tilde{Y}_1,Y_2) - h(\tilde{Y}_1,Y_2 \vert \sqrt{\myP} \tilde{X})\\ &{=} h(\tilde{Y}_1,Y_2) - h(\tilde{Z}, N \vert \sqrt{\myP} \tilde{X}) {=} h(\tilde{Y}_1,Y_2) - h(\tilde{Z}, N) {=} h(\tilde{Y}_1,Y_2) - h(\tilde{Z}) - h(N)\\ &= - \int_{Y_2} \int_{\tilde{Y}_1} f_{\tilde{Y}_1, Y_2} (\tilde{y}_1,y_2) \log_2(f_{\tilde{Y}_1, Y_2} (\tilde{y}_1,y_2)) \mathrm{d}\tilde{y}_1 \mathrm{d}y_2 - \log_2(\pi e \sigmaone) - \frac{1}{2} \log_2(2\pi e \sigmatwo).
\end{aligned} \end{equation} The joint probability density function (pdf) of $(\tilde{Y}_1, Y_2)$ is \begin{equation} f_{\tilde{Y}_1, Y_2} (\tilde{y}_1,y_2) = \int_{\tilde{X}} f_1(\sqrt{\Theta_1 \myP} \tilde{x},\tilde{y}_1) f_2(\sqrt{\Theta_2} \myP \vert \tilde{x} \vert^2,y_2) f_{\tilde{X}} (\tilde{x}) \mathrm{d} \tilde{x}, \end{equation} where $f_{\tilde{X}}(\tilde{x})$ is the pdf of $\tilde{X}$, and $f_1(\sqrt{\Theta_1 \myP} \tilde{x},\cdot)$ and $f_2(\sqrt{\Theta_2} \myP \vert \tilde{x} \vert^2, \cdot)$ are the pdfs of the distributions $\mathcal{CN}(\sqrt{\Theta_1 \myP} \tilde{x},\sigmaone)$ and $\mathcal{N}(\sqrt{\Theta_2}\myP \vert \tilde{x} \vert^2,\sigmatwo)$, respectively. Evaluating the mutual information expression in~\eqref{first_mutual_info} requires five integrals, which is cumbersome; thus, the maximal mutual information, theoretically achieved by the optimal distribution of $\tilde{X}$, cannot be obtained in closed form. \subsection{Mutual Information and Joint Processing Gain} For tractability, in the following analysis, we consider the mutual information with a Gaussian input, i.e., $\tilde{X} \sim \mathcal{CN}(0,1)$, and we have: \begin{enumerate} [1)] \item Letting $\vec{\rho} = \vec{1}$, the splitting channel is degraded to the coherent AWGN channel, and the mutual information is well known to be~\cite{BookInfo} \begin{equation} \label{rho_1} \mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2) = h(\tilde{Y}_1) -h(\tilde{Z}) = \log_2 \left(1 + H_2\frac{\myP}{\sigmaone}\right), \end{equation} which is exactly the capacity of the coherent AWGN channel, i.e., $\mathcal{C}(\vec{\rho} = \vec{1})$. \item Letting $\vec{\rho} = \vec{0}$, the splitting channel is degraded to the conventional intensity channel in free-space optical communications~\cite{OpticalTIT}.
Recall that, in this paper, we refer to the intensity channel as the non-coherent AWGN channel, by analogy with the coherent AWGN channel\footnote{Note that in this paper, the non-coherent channel refers to the intensity channel; it does not refer to a channel without CSI at the transmitter or the receiver.}. The mutual information of the non-coherent AWGN channel is~\cite{OpticalTIT} \begin{align} \mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2) &= h(Y_2) -h(N) = -\int\limits_{-\infty}^{\infty} f_{Y_2}(y_2) \log_2(f_{Y_2}(y_2)) \mathrm{d}y_2 - \frac{1}{2} \log_2(2 \pi e \sigmatwo) \nonumber\\ &\stackrel{(a)}{\geq} \frac{1}{2} \log_2\left(1 + H_4 \frac{\myP^2 e}{2 \pi \sigmatwo}\right),\label{rho_0} \end{align} where $Y_2 = \sqrt{H_4} \myP \vert \tilde{X}\vert^2 + N$ follows an exponentially modified Gaussian distribution~\cite{EMG}: \begin{equation} f_{Y_2}(y_2) = \frac{1}{2 \sqrt{H_4} \myP} \exp\left(\frac{1}{2 \sqrt{H_4} \myP}\left(\frac{\sigmatwo}{\sqrt{H_4} \myP} -2 y_2\right)\right) \mathrm{erfc}\left(\frac{\frac{\sigmatwo}{\sqrt{H_4} \myP} - y_2}{\sqrt{2}\sigmatwosqrt}\right). \end{equation} The inequality $(a)$ follows from~\cite{OpticalTIT}, and \eqref{rho_0} is the asymptotic mutual information in the high SNR regime, which is also the asymptotic capacity (with gap less than $\vert \frac{1}{2} \log_2\left(\frac{e}{2 \pi}\right) \vert$ bits) of the non-coherent AWGN channel, i.e., $\mathcal{C}(\vec{\rho} = \vec{0})$. \end{enumerate} Comparing \eqref{rho_1} and \eqref{rho_0}, it is easy to see that as $\snr \rightarrow \infty$, \emph{the coherent and non-coherent AWGN channels have the same asymptotic capacity}, i.e., $\lim_{\snr \rightarrow \infty} {\mathcal{C}(\vec{\rho}=\vec{1})}/{\mathcal{C}(\vec{\rho}=\vec{0})} = 1$. In the following, we will show that the splitting receiver with $\vec{\rho} \notin \{\vec{0},\vec{1}\}$ provides a gain in the mutual information compared with the conventional receivers. First, we need the following definition.
\begin{definition} \label{def:splitting_gain_MI} The joint processing gain of the splitting receiver is \begin{equation} G \triangleq \frac{\sup\{\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2): \vec{\rho} \in \left[0,1\right]^K\}}{\max\{\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} = \vec{0}},\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} = \vec{1}}\}}, \end{equation} where $\sup\{\cdot\}$ denotes the supremum, and $\left[0,1\right]^K$ is the $K$-fold product of the interval $\left[0,1\right]$. \end{definition} If the joint processing gain $G > 1$, the splitting receiver achieves higher mutual information than the conventional receivers. If $G = 1$, the joint coherent and non-coherent processing is unnecessary, and the splitting receiver should degrade to one of the conventional receivers. Due to the complicated form of \eqref{first_mutual_info}, it is not possible to accurately evaluate the mutual information\footnote{A lower bound and an upper bound of $\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1, Y_2)$ with explicit expressions can be found based on the basic inequalities $\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1, Y_2) >\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1)$, $\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1, Y_2) >\mathcal{I}(\sqrt{\myP}\tilde{X};Y_2)$ and $\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1, Y_2) < \mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1) +\mathcal{I}(\sqrt{\myP}\tilde{X};Y_2)$~\cite{BookInfo}. Since the bounds are loose, we do not pursue them here.} for $\vec{\rho} \in \left[0,1 \right]^K$, or to prove whether $G$ is greater than $1$. Hence, we first use a Monte Carlo based histogram method to simulate the results.
In Fig.~\ref{fig:mutual_new}, considering the $K=1$ case, it is observed that when $\snr$ is reasonably high, e.g., $\myP = 10$, $\sigmaone = 1$ and $\sigmatwosqrt =1$, the joint processing gain $G$ is greater than $1$. Inspired by this, in the following subsection we focus on analyzing the mutual information in \eqref{first_mutual_info} and the joint processing gain in Definition~\ref{def:splitting_gain_MI} in the high SNR regime. \begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \vspace*{-0.7cm} \includegraphics[scale=0.6]{three_curve_plot} \vspace*{-0.5cm} \caption{\small $\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)$ versus $\rho$, $\myP = 10$, $K=1$, $\vert \tilde{h}_1 \vert = 1$. The simulation results are marked with `o's, and are curve fitted by polynomials of degree $3$.} \label{fig:mutual_new} \vspace*{-0.5cm} \end{figure} \subsection{High SNR Analysis} \begin{lemma} \label{theory:high_snr} In the high SNR regime, $\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)$ with $\vec{\rho} \in \left[0,1 \right]^K \backslash\{\vec{0},\vec{1}\}$ is given by \begin{subequations} \begin{alignat}{2} \label{theory1_1} \mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2) &\approx \log_2\left( \frac{\Theta_1 \myP}{\sigmaone}\right) +\frac{1}{2 \log(2)} \exp\left(\frac{\Theta_1 \sigmatwo }{2 \Theta_2 \sigmaone \myP }\right) \Ei \left(\frac{\Theta_1 \sigmatwo}{2 \Theta_2 \sigmaone \myP }\right)\\ \label{theory1_2} &\approx \log_2\left(\frac{\sqrt{2} \myP^{\frac{3}{2}} \sqrt{\Theta_1 \Theta_2}}{\sigmaonesqrt \sigmatwosqrt}\right) - \frac{\gamma}{2 \ln 2}, \end{alignat} \end{subequations} where $\Ei(x) \triangleq \int_{x}^{\infty} \frac{e^{-t}}{t} \mathrm{d} t$ is the exponential integral function, and $\gamma$ is Euler's constant. \end{lemma} \begin{proof} See Appendix A.
\end{proof} {\color{black}From Lemma~1, it is clear that the mutual information of the splitting channel increases linearly with $\log_2(\myP)$ and decreases linearly with $\log_2(\sigmaonesqrt)$ and $\log_2(\sigmatwosqrt)$ in the high SNR regime. Moreover, since the mutual information depends on the power splitting ratio $\vec{\rho}$, which is contained in the term $\Theta_1 \Theta_2$, it is interesting to find the optimal $\vec{\rho}$ that maximizes the mutual information.} Based on Lemma~\ref{theory:high_snr}, the following optimization problem is proposed to obtain the optimal splitting ratio $\vec{\rho}$ in the high SNR regime: \begin{equation} \label{first_P1} \textrm{(P1)} \ \max_{\vec{\rho} \in \left[0,1 \right]^K\backslash\{\vec{0},\vec{1}\}} \Theta_1 \Theta_2 \Leftrightarrow \max_{\vec{\rho} \in \left[0,1 \right]^K\backslash\{\vec{0},\vec{1}\}} \sum\limits_{k=1}^{K} \rho_k \vert \tilde{h}_k \vert^2 \sum\limits_{k=1}^{K} \left(1-\rho_k\right)^2 \vert \tilde{h}_k \vert^4. \end{equation} It can be shown that (P1) is not a convex optimization problem. Thus, the optimal splitting ratio can be obtained by numerical methods. In what follows, we first focus on two special scenarios and then discuss the joint processing gain for a general splitting receiver.\par \subsubsection{Splitting receiver with a single receiver antenna} When $K=1$, $\Theta_1 = \rho_1 \vert \tilde{h}_1 \vert^2$ and $\Theta_2 = (1-\rho_1)^2 \vert \tilde{h}_1 \vert^4$, so $\Theta_1 \Theta_2 \propto \rho_1 (1-\rho_1)^2$. Since $\rho^{\frac{1}{3}} (1-\rho)^{\frac{2}{3}} = \left(\rho(1-\rho)^2\right)^{\frac{1}{3}}$ is a monotone transform of this objective, the optimal power-splitting ratio is obtained by solving the equation $\frac{\partial \rho^{\frac{1}{3}} (1-\rho)^{\frac{2}{3}}}{\partial \rho} = 0.$ It is straightforward to obtain the following results.
\begin{proposition} \label{coro:opti_rho} For the splitting receiver with a single receiver antenna, the optimal splitting ratio in the high SNR regime is \begin{equation} \rho^\star = \frac{1}{3}, \end{equation} and the maximal mutual information is given by \begin{equation} \label{optimal_MI} \begin{aligned} \mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2) \vert_{\rho^\star} \approx \log_2 \left(\frac{2 \sqrt{2}}{3\sqrt{3}} \frac{\vert \tilde{h}_1 \vert^3 \myP^{\frac{3}{2}}}{\sigmaonesqrt \sigmatwosqrt}\right)- \frac{\gamma}{2 \ln 2}. \end{aligned} \end{equation} \end{proposition} \subsubsection{Simplified receiver with a large number of antennas} For the simplified receiver with a large number of antennas, \eqref{first_P1} can be rewritten as \begin{equation} \max_{1 \leq K_1 <K} \sum\limits_{k =1}^{K_1}\vert \tilde{h}_k \vert^2 \sum\limits_{k = K_1+1}^{K} \vert \tilde{h}_k \vert^4. \end{equation} Assuming that $\tilde{h}_k$, $k =1,2,\ldots, K$, are independent and identically distributed (i.i.d.) random variables, i.e., \emph{\color{black}the uncorrelated scenario}, by the law of large numbers, when $K$ is sufficiently large we have \begin{equation} \label{antennas} \sum\limits_{k = 1}^{K_1}\vert \tilde{h}_k \vert^2 \!\!\!\sum\limits_{k = K_1+1}^{K} \!\!\!\! \vert \tilde{h}_k \vert^4 = K_1 K_2 \, {\sum\limits_{k = 1}^{K_1}\vert \tilde{h}_k \vert^2}/{K_1} \!\!\!\sum\limits_{k = K_1+1}^{K} \!\!\!\! \vert \tilde{h}_k \vert^4/{K_2} \stackrel{(a)}{\approx} K_1 K_2 \myexpect{\vert \tilde{h}_k \vert^2}\myexpect{\vert \tilde{h}_k \vert^4}, \end{equation} where $K_2 \triangleq K-K_1$, and $(a)$ holds because both $K_1$ and $K_2$ are sufficiently large. {\color{black}Assuming that $\vert \tilde{h}_k \vert $, $k =1,2,\ldots, K$, are identical to each other, i.e., \emph{the free-space scenario, which is also a fully-spatially-correlated scenario}, we have the same expression as~\eqref{antennas}.} Thus, $K_1 K_2$ is maximized when $K_1 = K_2 = K/2$, and we have the following proposition. \begin{proposition} \label{prop:large_antenna} For the simplified receiver with a large number of antennas, {\color{black}the optimal strategy for the spatially-uncorrelated channel or the fully-spatially-correlated channel (i.e., the free-space scenario)} in the high SNR regime is to connect half of the antennas to the CD circuits and the other half to the PD circuits, and the maximum mutual information is given by \begin{equation} \mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2) \vert_{\vec{\rho}^\star} \approx \log_2\left(\frac{K \myP^{\frac{3}{2}} \sqrt{ \myexpect{\vert \tilde{h}_k \vert^2}\myexpect{\vert \tilde{h}_k \vert^4}}}{\sqrt{2} \sigmaonesqrt \sigmatwosqrt}\right) - \frac{\gamma}{2 \ln 2}. \end{equation} \end{proposition} {\color{black}Note that for the general spatially-correlated scenario, the optimal strategy in the high SNR regime is not immediately clear. We leave this scenario for future study.} \subsubsection{Joint processing gain of splitting receiver with $K$ receiver antennas} We assume that $\vec{\rho} \notin \{\vec{0}, \vec{1}\}$; thus, $\Theta_1 \neq 0$ and $\Theta_2 \neq 0$. Then, based on \eqref{theory1_2} of Lemma~\ref{theory:high_snr}, \eqref{rho_1} and \eqref{rho_0}, we can show that \begin{equation} \lim\limits_{\snr \rightarrow \infty} \frac{\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} \in \left[0,1\right]^K \backslash \{\vec{0},\vec{1}\}}}{\max\{\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} = \vec{0}},\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} = \vec{1}}\}} = \frac{3}{2}.
\end{equation} In other words, the asymptotic gain is the same no matter what value $\vec{\rho}$ takes, as long as $\vec{\rho} \notin \{\vec{0}, \vec{1}\}$. Therefore, any $\vec{\rho}^\star \in \left[0,1\right]^K \backslash \{\vec{0},\vec{1}\}$ is asymptotically optimal, and we have the following result based on Definition~\ref{def:splitting_gain_MI}. \begin{proposition} \label{splitting_gain} In the high SNR regime, the asymptotic joint processing gain for a splitting receiver with $K$ receiver antennas is \begin{equation} G = \lim\limits_{\snr \rightarrow \infty} \frac{\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho}^{\star}}}{\max\{\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} = \vec{0}},\mathcal{I}(\sqrt{\myP}\tilde{X};\tilde{Y}_1,Y_2)\vert_{\vec{\rho} = \vec{1}}\}} = \frac{3}{2}. \end{equation} \end{proposition} \subsection{Explanation of the Joint Processing Gain} The result of Proposition~\ref{splitting_gain} shows that in the high SNR regime, since $G=3/2 > 1$, the splitting receiver provides a processing gain. Note that although the joint processing gain at any given SNR depends on the specific values of the received signal power $\myP$ and the noise variances $\sigmaone$ and $\sigmatwo$, the asymptotic joint processing gain is independent of the noise variances at the CD and PD circuits in the high SNR regime. \emph{This implies that the reason for the performance improvement lies in the joint coherent and non-coherent processing.} This is explained in detail using intuitive and geometric arguments as follows. \emph{Intuitive explanation of the rate improvement:} \textnormal{Since the number of degrees of freedom of a channel is commonly defined as the dimension of the received signal space~\cite{BOOKTse}, the coherent AWGN channel has two degrees of freedom (I-Q plane) while the non-coherent AWGN channel has one degree of freedom (P-axis).
For the splitting channel created by jointly utilizing both the coherent and non-coherent AWGN channels, the received signals are spread into a three-dimensional space, i.e., the I-Q-P space. Thus, the splitting channel can be treated as a channel with three degrees of freedom. Therefore, the splitting channel with a properly designed splitting ratio can take better advantage of the I-Q-P space, and achieve a better channel rate performance compared with either the coherent or non-coherent AWGN channel.} \textnormal{We would like to highlight that a `splitting receiver', which splits the received signal at each antenna into two streams and sends both streams to CD circuits (i.e., two coherent AWGN channels), does not provide any rate improvement. After MRC, it is straightforward to see that the received signal space still lies on the I-Q plane. Thus, the received signal space is the same as for the conventional coherent receiver. For instance, consider a single-antenna receiver for ease of illustration. It can be proved that the best `splitting' strategy is to send the entire signal to the CD circuit with~a smaller noise variance, instead of splitting and sending signals to both CD circuits~\cite{BOOKTse}. Therefore, there is no joint processing gain by using two coherent AWGN channels, i.e., $G = 1$. The same argument holds for a `splitting receiver' which splits the received signal at each antenna into two streams and sends them to two PD circuits (i.e., two non-coherent AWGN channels). } \textnormal{Therefore, the key to the rate improvement is the increased dimension of the received signal space achieved by joint coherent and non-coherent processing, where the coherent channel adds noise linearly to the signal, and the non-coherent channel adds noise to the squared amplitude of the signal.
} \emph{A geometric explanation of the asymptotic gain:} \textnormal{As discussed in Sec.~II.C, a splitting receiver with the splitting ratio $\vec{\rho}$ maps the noiseless received signal space, i.e., the I-Q plane, to a paraboloid in the I-Q-P space with parameter ${\sqrt{\Theta_2}}/{\Theta_1}$, which depends on $\vec{\rho}$. Considering a disk with radius $R$ and center $(0,0)$ in the I-Q plane, the area of the disk is $\pi R^2$, {\color{black}where $R$ is proportional to $\sqrt{\myP}$ in this paper.} After the mapping, the disk is converted into a paraboloid with parameter ${\sqrt{\Theta_2}}/{\Theta_1}$, restricted by the condition that the projection of the paraboloid onto the I-Q plane lies within the disk of radius $\sqrt{\Theta_1}R$. When $R$ is sufficiently large, the area of the paraboloid can be shown to be approximated by $3 \pi \sqrt{\Theta_1 \Theta_2} R^3$ for $\vec{\rho} \notin \{\vec{0}, \vec{1}\}$. It is well known that the optimal constellation design for the I-Q space is equivalent to a sphere-packing problem, i.e., packing two-dimensional spheres (disks) of a certain radius, related to the detection error rate, on the disk on the I-Q plane. The number of spheres that can be packed is proportional to the area of the disk. Thus, the communication rate can be written as $\mathcal{O}\left(\log({\pi R^2})\right)\sim 2\mathcal{O}({\color{black}\log{R}})$. Similarly, for the paraboloid, the number of three-dimensional spheres\footnote{Note that sphere-packing is considered only if $\sigmaone = 2 \sigmatwo$, i.e., a uniform three-dimensional noise sphere, otherwise, it is ellipsoid-packing. Here we use sphere-packing for ease of illustration.} (balls) that can be packed on the surface is proportional to the paraboloid area, and the rate can be written as $\mathcal{O}\left(\log({3 \pi \sqrt{\Theta_1 \Theta_2} R^3})\right)\sim 3\mathcal{O}({\color{black}\log{R}})$.
Therefore, it is straightforward to see that there is a $3/2$-fold rate gain provided by the splitting receiver when $R$ is sufficiently large. To sum up, bending the signal space from a two-dimensional plane to a three-dimensional paraboloid increases the effective area of the signal space, which boosts the communication rate.} \emph{The complexity of the splitting receiver:} {\color{black}Although the splitting receiver is able to provide a performance gain, it is clear that for the information detection in the digital domain, the splitting receiver requires a three-dimensional detection, while the conventional CD/PD receiver only needs a two/one-dimensional detection, respectively. Specifically, when applying the minimum distance detection for practical modulation, the splitting receiver needs to calculate the distance between two signal points in the three-dimensional space, while the conventional CD/PD receiver only needs to calculate the distance in the two/one-dimensional space. Thus, the splitting receiver requires a higher computational complexity to achieve the performance gain. Regarding the circuit complexity, for each antenna branch, the splitting receiver requires two detection circuits, while the conventional CD/PD receiver only needs one detection circuit. On the other hand, the proposed simplified receiver has a lower complexity than the CD receiver and a higher complexity than the PD receiver. Therefore, we should consider both the performance gain and the complexity (and the cost) when adopting a splitting receiver in practical systems.} \subsection{Numerical Results} In the last two subsections, we have shown and explained that the splitting receiver achieves a $3/2$-fold rate gain compared with the non-splitting channels in the high SNR regime. This suggests that a notable performance improvement can be found within a moderate SNR range, which is verified as follows.
{\color{black}Also, we verify the tightness of the asymptotic analytical results presented in Sec.~III.B.} \subsubsection{Single-antenna scenario} We set the channel power gain $\vert\tilde{h}_1 \vert^2 =1$ for simplicity. Fig.~\ref{fig:high_SNR_lower_bound} depicts the mutual information approximation given in \eqref{theory1_1} and also the simulated mutual information with different received signal powers. We see that the approximation and simulation results have the same general trend, and the percentage difference between the approximation and simulation results decreases as $\myP$ increases (e.g., from $\myP = 10$ to $100$). {\color{black}We also see that the optimal splitting ratios are (almost) the same for both the approximation and simulation results. When $\snr$ is sufficiently large, e.g., $20$~dB, $\rho=0.33$ makes the mutual information at least $20\%$ larger than that of the conventional cases (i.e., $\rho = 0$ or $1$), and the joint processing gain is shown to be $G \approx 1.3$. When $\snr=30$~dB (i.e., $\myP=1000$), the approximation is tight, and the joint processing gain with $\rho = 0.33$ is close to $1.5$. Thus, the tightness of the mutual information expressions in Lemma~1 and Proposition~1 (which is obtained by substituting $\rho=1/3$ into Lemma~1) and also the asymptotic joint processing gain given by Proposition~3 are verified.} \begin{figure*}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \minipage{0.47\textwidth} \centering \vspace*{-0.7cm} \includegraphics[width=\linewidth]{high_SNR_lower_bound} \vspace*{-1.3cm} \caption{\small Mutual information versus $\rho$,\! $\sigmaone\! = \sigmatwo\!
=1$.} \label{fig:high_SNR_lower_bound} \endminipage \hspace{0.5cm} \minipage{0.47\textwidth} \centering \vspace*{-0.7cm} \includegraphics[width=\linewidth]{optimal_rho} \vspace*{-1.3cm} \caption{\small Optimal $\rho$ versus $\myP$, $\sigmaone = \sigmatwo = 1$.} \label{fig:high_SNR_opti_rho} \endminipage \vspace*{-0.7cm} \end{figure*} Fig.~\ref{fig:high_SNR_opti_rho} depicts the optimal splitting ratio $\rho$ (obtained by simulation) versus the received signal power $\myP$. {\color{black}It is observed that the optimal splitting ratio approaches $1/3$ quickly as $\myP$ increases, with $\rho=1/3$ for $\myP > 35$. \emph{Thus, $\rho = 1/3$ is a near-optimal choice even at moderate SNRs, and the tightness of the asymptotic optimal splitting ratio in Proposition~1 is verified.}} Fig.~\ref{fig:high_SNR_lower_bound_power} depicts the approximation of the splitting channel mutual information given in \eqref{theory1_1} with $\rho = 1/3$, and the optimal non-splitting channel mutual information, i.e., $\max\{\eqref{rho_1},\eqref{rho_0}\}$. It is observed that the splitting channel mutual information increases much faster with $\myP$ than those of the coherent and non-coherent AWGN channels. When $\snr > 20$~dB (e.g., $\myP>100$, $\sigmaone =1$ and $\sigmatwo=0.1$, or $\myP>1000$, $\sigmaone =1$ and $\sigmatwo=10$), one can clearly see the mutual information improvement due to splitting.
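The approach of the gain toward $3/2$ can also be reproduced directly from the closed forms \eqref{theory1_2} and \eqref{rho_1} (an illustrative evaluation for $K=1$, $\vert\tilde{h}_1\vert=1$, $\rho=1/3$, and unit noise variances):

```python
import math

GAMMA = 0.5772156649015329   # Euler's constant

def ratio(P, theta1=1/3, theta2=4/9):
    # splitting-channel high-SNR approximation over the coherent capacity log2(1 + P)
    i_split = (math.log2(math.sqrt(2) * P ** 1.5 * math.sqrt(theta1 * theta2))
               - GAMMA / (2 * math.log(2)))
    i_coh = math.log2(1 + P)
    return i_split / i_coh

r6, r12 = ratio(1e6), ratio(1e12)   # the ratio creeps toward 3/2 as P grows
```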
\begin{figure*}[t] \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \minipage{0.47\textwidth} \includegraphics[width=\linewidth]{diff_P} \vspace*{-1.3cm} \caption{Mutual information of the splitting channel and the non-splitting channel versus $\myP$, $\sigmaone = 1$.}\label{fig:high_SNR_lower_bound_power} \endminipage \hspace{0.5cm} \minipage{0.47\textwidth} \includegraphics[width=\linewidth]{splitting_gain} \vspace*{-1.3cm} \caption{Joint processing gain versus $\myP$, $\sigmaone = 1$.}\label{fig:splitting gain} \endminipage \end{figure*} Fig.~\ref{fig:splitting gain} depicts the joint processing gain obtained by taking \eqref{theory1_1} into Definition~\ref{def:splitting_gain_MI}. It is observed that the joint processing gain increases with $\myP$ and slowly approaches the constant~$3/2$. The gain at a practically high SNR, e.g., $30$~dB, is notable, around $1.2$ to $1.4$. \subsubsection{Multi-antenna scenario} Fig.~\ref{fig:K_4} depicts the average mutual information over $10^3$ channel realizations using \eqref{theory1_2}, where the channel power gain $\vert \tilde{h}_k \vert^2$ is assumed to follow an exponential distribution with mean $1$, and $\myP = 100$. Three splitting strategies are considered: (i) the numerically searched optimal splitting ratios obtained by solving (P1) for every channel realization (i.e., optimal splitting), (ii) the simplified receiver with the strategy in Proposition~\ref{prop:large_antenna}, and (iii) $\rho_k = 1/3$ for all $k=1,2,...,K$. It is observed that the splitting receiver with the optimal splitting strategy is better than the simplified receiver. On the other hand, the splitting receiver can perform worse than the simplified receiver if a sub-optimal splitting strategy is used, e.g., $\rho = 1/3$, which is optimal for a single-antenna receiver but not necessarily for a multi-antenna receiver.
Fig.~\ref{fig:large_K} depicts the optimal ratio of antennas allocated for coherent processing for the simplified receiver, obtained by simulation using $10^4$ random channel realizations. {\color{black}\emph{It shows that the optimal ratio is within the range of $(0.45,0.55)$ when $K>40$, and the optimal ratio converges to $1/2$ as $K$ increases further, which verifies Proposition~2.}} \begin{figure*}[t] \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \minipage{0.47\textwidth} \vspace*{-0.7cm} \includegraphics[width=\linewidth]{K_4} \vspace*{-1.3cm} \caption{Mutual information of the splitting channel with different splitting strategies, $\sigmaone = \sigmatwo = 1$.}\label{fig:K_4} \endminipage \hspace{0.5cm} \minipage{0.47\textwidth} \vspace*{-0.7cm} \includegraphics[width=\linewidth]{Large_K} \vspace*{-1.3cm} \caption{Average optimal ratio of antennas allocated for coherent processing versus $K$.}\label{fig:large_K} \endminipage \vspace*{-0.7cm} \end{figure*} \section{Splitting Receiver: Practical Modulation} In this section, we consider commonly used modulation schemes and assume that each symbol of the constellation is transmitted with equal probability. Note that, in this section, $x$ and $y$ denote the in-phase and quadrature signals in the CD circuit, respectively, and $z$ denotes the signal in the PD circuit; this differs from the notation in Sec.~III and is adopted for ease of presentation. \subsection{Transmitted Signal Constellation} We consider a general $M$-ary modulation scheme whose transmitted signal constellation $\myomegagen$ is a two-dimensional constellation placed on the I-Q plane. The $i$th symbol is denoted by the tuple $({x}_i,{y}_i)$ on the I-Q plane, $i = 1,2,...,M$.
Specifically: \begin{enumerate} [(i)] \item For the $M$-PAM scheme~\cite{BOOKTse}, which is a one-dimensional modulation scheme on the I-axis, we have ${x}_i= 2 i -1$ for $i=1,2,...,M/2$, ${x}_i= - x_{i-M/2}$ for $i=M/2+1,...,M$, and $y_i = 0$ for all $i$. \item For the $M$-QAM scheme, which is a two-dimensional modulation scheme on the I-Q plane, we have ${x}_i= 2 \left((i-1) \mod{\frac{\sqrt{M}}{2}} \right) + 1$, ${y}_i= 2 \left\lfloor \frac{i-1}{\sqrt{M}/2}\right\rfloor +1$, $i=1,2,...,M/4$, which are the first-quadrant symbols on the I-Q plane. Due to the symmetry property of $M$-QAM, the symbol expressions for the other quadrants are omitted for brevity. \item For the $M$-IM scheme, which is a one-dimensional modulation scheme on the positive I-axis {\color{black}where} the information is carried by the signal power but not the phase, we have $x_i = \sqrt{2(i-1)}$ and $y_i = 0$, $i=1,2,...,M$. \end{enumerate} \subsection{Noiseless Received Signal Constellation} Based on the received signal expression after MRC in~\eqref{MRC_signal}, the average signal powers of the coherently and non-coherently processed signals are $\Theta_1 \myP$ and $\sqrt{\Theta_2} \myP$, respectively. Thus, with such average power constraints, we define the noiseless received signal constellation $\myomegagennew$, in which the $i$th symbol is denoted by the tuple $(\breve{x}_i, \breve{y}_i, \breve{z}_i)$, where $\breve{x}_i = k_1 \sqrt{\Theta_1 \myP} {x}_i$, $\breve{y}_i = k_1 \sqrt{\Theta_1 \myP} {y}_i$, and $\breve{z}_i =k_2 \sqrt{\Theta_2} \myP \left(x^2_i + y^2_i\right)$. Here $k_1$ and $k_2 \triangleq k^2_1$ are the power normalization parameters determined only by the geometric property of a given modulation scheme. Specifically, we have \begin{equation} \label{geo_parameters} k_1 = \left\lbrace \begin{aligned} & \sqrt{\frac{3}{M^2-1}}, & M\text{-PAM},\\ & \sqrt{\frac{3}{2 (M-1)} }, & M\text{-QAM},\\ & \sqrt{\frac{1}{M-1}}, & M\text{-IM}. \end{aligned} \right.
\end{equation} For the single-antenna scenario, Figs.~\ref{fig:PAM_rho_shape} and~\ref{fig:PPM_rho_shape} show that the splitting ratio ${\rho} \in (0,1)$ bends the received signal constellation from the I-axis to a parabola in the I-P plane, for $4$-PAM and $4$-IM, respectively. Fig.~\ref{fig:QAM_shape} shows that the splitting ratio ${\rho} \in (0,1)$ bends the received signal constellation from the I-Q plane to a paraboloid in the I-Q-P space. \begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \vspace{-0.7cm} \hspace{-0.5cm} \subfigure[$4$-PAM on the I-P plane.]{\label{fig:PAM_rho_shape}\includegraphics[scale=0.46]{PAM_rho_shape}} \hspace{-0.6cm} \subfigure[$16$-QAM on the I-Q-P space, $\rho=0.2,\ 0.5,\ 1$.]{\label{fig:QAM_shape}\includegraphics[scale=0.72]{QAM_shape}} \hspace{-0.4cm} \subfigure[$4$-IM on the I-P plane.]{\label{fig:PPM_rho_shape}\includegraphics[scale=0.5]{PPM_rho_shape}} \caption{\small Noiseless received signal constellation with different $\vec{\rho}$, $K=1$, $\vert\tilde{h}_1\vert = 1$, $\myP = 10$.} \vspace{-0.5cm} \end{figure} \subsection{Decision Region} Since all the transmitted symbols are equiprobable, the optimal signal detection method is the maximum likelihood (ML) method~\cite{BOOKTse}. The decision region for the $i$th symbol is defined~as \begin{equation} \label{first_V} \mathcal{V}_i \triangleq \left\lbrace {\bm v} \vert f ({\bm v} \vert i) \geq f ({\bm v} \vert j) , \forall j \neq i, {\bm v} \in \mathbb{R}^3 \right \rbrace, \end{equation} where ${\bm v} \triangleq (x,y,z)$ is the three-dimensional post-processing (noise-added) signal at the splitting receiver, and $f(\cdot \vert \cdot)$ is the conditional pdf. From Sec.
III, since both the CD and PD circuits introduce additive Gaussian noise, the received signal is surrounded by a noise sphere (ellipsoid) in the I-Q-P space, and $f({\bm v}\vert i)$ is thus given by \begin{equation} f({\bm v} \vert i) = \frac{1}{\sigmaone \pi \sqrt{2 \sigmatwo \pi}} \exp\left( - \frac{(x-\breve{x}_i)^2}{\sigmaone} - \frac{(y-\breve{y}_i)^2}{\sigmaone} - \frac{(z-\breve{z}_i)^2}{2\sigmatwo} \right). \end{equation} Therefore, \eqref{first_V} is rewritten as \begin{equation} \label{second_V} \mathcal{V}_i \triangleq \left\lbrace (x,y,z):\ d_i(x,y,z) \leq d_j(x,y,z), \forall j\neq i\right\rbrace, \end{equation} where \begin{equation} \label{first_distance} d_j(x,y,z) \triangleq \frac{\left(x-\breve{x}_j\right)^2}{\sigmaone/2} + \frac{\left(y-\breve{y}_j\right)^2}{\sigmaone/2} + \frac{\left(z-\breve{z}_j\right)^2}{\sigmatwo}. \end{equation} From \eqref{second_V} and \eqref{first_distance}, after simplification, the decision region of the $i$th symbol, $\mathcal{V}_i$, is given by \begin{equation} \label{QAM_region} \mathcal{V}_i= \left\lbrace (x,y,z):\ \frac{\breve{x}_j-\breve{x}_i}{\sigmaone}\ x +\frac{\breve{y}_j-\breve{y}_i}{\sigmaone}\ y +\frac{\breve{z}_j-\breve{z}_i}{2\sigmatwo}\ z \leq \frac{\breve{x}^2_j+\breve{y}^2_j-\breve{x}^2_i-\breve{y}^2_i}{2\sigmaone} + \frac{\breve{z}^2_j-\breve{z}^2_i}{4 \sigmatwo}, \forall j \neq i \right\rbrace, \end{equation} where $\breve{x}_i$, $\breve{y}_i$ and $\breve{z}_i$ are defined in Sec. IV.B above \eqref{geo_parameters}. It is easy to see that $\mathcal{V}_i$ is bounded by planes. The plane implied in \eqref{QAM_region}, which divides the decision regions of the $i$th and $j$th received symbols, is given by \begin{equation} \mathcal{A}_{i-j} \triangleq \left\lbrace (x,y,z):\ \frac{\breve{x}_j-\breve{x}_i}{\sigmaone}\ x +\frac{\breve{y}_j-\breve{y}_i}{\sigmaone}\ y +\frac{\breve{z}_j-\breve{z}_i}{2\sigmatwo}\ z = \frac{\breve{x}^2_j+\breve{y}^2_j-\breve{x}^2_i-\breve{y}^2_i}{2\sigmaone} + \frac{\breve{z}^2_j-\breve{z}^2_i}{4 \sigmatwo} \right\rbrace.
\end{equation} The decision regions for $8$-PAM, $36$-QAM (only for the symbols within the first quadrant of the I-Q-P space) and $4$-IM are illustrated in Figs.~\ref{fig:PAM_region},~\ref{fig:QAM_region}, and~\ref{fig:PPM_region}, respectively. \subsection{Joint Processing Gain in SER} To quantify the reduction in SER achieved by the splitting receiver, we define the joint processing gain in terms of SER as: \begin{definition}[Joint processing gain in SER] \label{def:splitting_gain} Given a certain modulation scheme, the joint processing gain of the splitting receiver is \begin{equation} G \triangleq \frac{\min_{\vec{\rho} \in \{\vec{0}, \vec{1}\}} P_e }{\inf\{P_e: \vec{\rho} \in \left[0,1 \right]^K\} }, \end{equation} where $\inf\{\cdot\}$ denotes the infimum, and $P_e$ is the SER for a given $\vec{\rho}$. \end{definition} The joint processing gain represents the maximum SER reduction provided by the splitting receiver, compared with the better of the two conventional receivers. \section{Splitting Receiver: SER Analysis} In this section, we derive the SER of a splitting receiver for practical modulation schemes with the transmitted signal constellation $\myomegagen$ in the I-Q plane and the received signal constellation $\myomegagennew$ in the I-Q-P space. The SER can be written as \begin{equation} P_{e} = \frac{1}{M}\sum\limits_{i=1}^{M} \left(1 - P_i\right), \end{equation} where $P_i$ is the success probability for the $i$th symbol, which is given by \begin{equation} \label{QAM_SER} P_i = \frac{1}{\sigmaone \pi \sqrt{2 \sigmatwo \pi}} \iiint_{\mathcal{V}_i} \exp\left( -\frac{\left(x-\breve{x}_i\right)^2}{\sigmaone} - \frac{\left(y-\breve{y}_i\right)^2}{\sigmaone} - \frac{\left(z-\breve{z}_i\right)^2}{2\sigmatwo} \right) \mathrm{d}x\mathrm{d}y\mathrm{d}z.
\end{equation} \begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \vspace{-1cm} \subfigure[$8$-PAM decision regions in the I-P plane.]{\label{fig:PAM_region}\includegraphics[scale=0.35]{PAM_region}} \hspace{0.3cm} \subfigure[Illustration of the decision regions for $36$-QAM in the first quadrant, i.e., $x > 0$ and $y>0$.]{\label{fig:QAM_region}\includegraphics[scale=0.37]{QAM_region}} \hspace{0.3cm} \subfigure[$4$-IM decision regions in the I-P plane.]{\label{fig:PPM_region}\includegraphics[scale=0.34]{PPM_region}} \caption{\small Decision regions for $8$-PAM, $36$-QAM and $4$-IM, $\myP = 10$, $\sigmaone =2$, $\sigmatwo =1$.} \label{fig:shapes} \vspace{-0.8cm} \end{figure} Based on Sec.~IV.B, when $\snr \rightarrow \infty$ and $\vec{\rho} \in \left[0,1 \right]^K\backslash\{\vec{0},\vec{1}\}$, the {\color{black}received} symbols $(\breve{x}_i,\breve{y}_i,\breve{z}_i)$ and $(\breve{x}_j,\breve{y}_j,\breve{z}_j)$ belonging to different power tiers, i.e., $\breve{z}_i \neq \breve{z}_j$, are easily distinguished because they are separated by a distance proportional to $\myP$ in the power domain. In contrast, the symbols belonging to the same power tier are only separated by a distance proportional to $\sqrt{\myP}$ on the I-Q plane. Thus, the intra-tier detection error probability dominates the overall SER in the high SNR regime. Therefore, there are two main cases for the SER analysis in the high SNR regime: \begin{enumerate} \item For $\myomegagen$ having symbols that belong to the same tier, as illustrated in Fig.~\ref{fig:general_constellation}(a), such as $M$-PAM, $M$-QAM and $M$-PSK (phase-shift keying), the intra-tier detection error probability is dominant. Moreover, the detection errors caused by the pairs of symbols with the minimum distance on the I-Q plane are dominant (see Fig.~\ref{fig:general_constellation}(a)).
\item For $\myomegagen$ in which every symbol belongs to a different tier, as illustrated in Fig.~\ref{fig:general_constellation}(b), such as $M$-IM, the inter-tier detection error probability is dominant. Moreover, the detection errors caused by the pairs of symbols with the minimum distance along the P-axis (power domain) are dominant. \end{enumerate} Consider a transmitted signal constellation $\myomegagen$ with $W$ pairs of dominant symbols as described above, each pair separated by the minimum distance $d_{\text{min}}$. The approximate SER is calculated as~\cite{BOOKTse} \begin{equation} \label{general_SER} P_e \approx \frac{1}{M} \sum_{i=1}^{W} 2 Q\left( \frac{d_{\text{min}}}{2\sigma}\right), \end{equation} where $\sigma = \sqrt{\sigmaone/2}$ or $\sigmatwosqrt$ for cases 1) and 2), respectively. It is straightforward to see the following: For $M$-PAM, we have $W=1$, i.e., only the pair of lowest-power symbols attains the minimum distance, given by $d_{\text{min}} = 2 \breve{x}_1$. For $M$-QAM, we have $W=2 \sqrt{M}$, as illustrated in Fig.~\ref{fig:general_constellation}(c), and $d_{\text{min}}= 2 \breve{x}_1$. For $M$-IM, we have $W=M-1$, as illustrated in Fig.~\ref{fig:general_constellation}(b), and the minimum distance lies in the power domain, $d_{\text{min}} = \breve{z}_2 -\breve{z}_1$. Then, based on \eqref{general_SER}, we can obtain the following results.
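The dominant-pair approximation \eqref{general_SER}, together with the scheme-specific values of $W$, $d_{\text{min}}$ and $\sigma$, can be evaluated numerically. The following Python sketch does so; the splitting-dependent factors $\Theta_1$ and $\Theta_2$ are passed in as plain arguments, since their definition in \eqref{my_theta} lies outside this section:

```python
import math

def Q(t):
    """Gaussian tail probability Q(t) = P(N(0,1) > t)."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def ser_dominant_pairs(M, scheme, P, sigma1, sigma2, theta1=1.0, theta2=1.0):
    """High-SNR SER approximation P_e ~ (2W/M) Q(d_min / (2 sigma))
    with the scheme-specific W, d_min and sigma of the two cases above."""
    if scheme == "PAM":
        k1 = math.sqrt(3.0 / (M ** 2 - 1))
        x1 = k1 * math.sqrt(theta1 * P)              # lowest-power symbol, x_1 = 1
        W, d_min, sigma = 1, 2 * x1, math.sqrt(sigma1 / 2)
    elif scheme == "QAM":
        k1 = math.sqrt(3.0 / (2 * (M - 1)))
        x1 = k1 * math.sqrt(theta1 * P)
        W, d_min, sigma = 2 * math.sqrt(M), 2 * x1, math.sqrt(sigma1 / 2)
    elif scheme == "IM":
        k2 = 1.0 / (M - 1)                           # k_2 = k_1^2
        # adjacent power tiers: spacing 2 k_2 sqrt(theta2) P in the power domain
        W, d_min, sigma = M - 1, 2 * k2 * math.sqrt(theta2) * P, math.sqrt(sigma2)
    else:
        raise ValueError(scheme)
    return (2 * W / M) * Q(d_min / (2 * sigma))
```

Note that the sum in \eqref{general_SER} collapses to the prefactor $2W/M$, which is what the function returns.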
\begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \vspace{-0.7cm} \usetikzlibrary{arrows} \begin{tikzpicture}[scale = 0.45] \draw [-latex, ultra thick](-4.8,0) -- (5.7,0); \draw [-latex, ultra thick](0,-4.85) -- (0,5.25); \draw [thick,dashed,black!60] (0,0) ellipse (4 and 4); \draw [thick,dashed,black!60] (0,0) ellipse (2.5 and 2.5); \draw [thick,dashed,black!60] (0,0) ellipse (1.5 and 1.5); \node [circle,blue,fill,scale=0.5] at (1.5,2) {}; \node [circle,blue,fill,scale=0.5] at (2.2,-1.15) {}; \node [circle,blue,fill,scale=0.5] at (-2.5,0) {}; \node [circle,blue,fill,scale=0.5] at (1.5,0) {}; \node [circle,blue,fill,scale=0.5] at (0,1.5) {}; \node [circle,blue,fill,scale=0.5] at (-1.35,3.8) {}; \node [circle,blue,fill,scale=0.5] at (-3.45,-2.05) {}; \node [circle,blue,fill,scale=0.5] at (4,0.5) {}; \node [circle,blue,fill,scale=0.5] at (-3.95,0.85) {}; \node [circle,blue,fill,scale=0.5] at (-3,2.65) {}; \node [circle,blue,fill,scale=0.5] at (2.1,-3.4) {}; \node at (0,5.85) {Quadrature}; \node at (7.55,0) {In-phase}; \draw [-latex,<->,thick](-2.9,2.65) -- (-1.3,3.75); \draw [-latex,<->,thick](-3.85,0.85) -- (-3,2.6); \draw [-latex,<->,thick](0.2,1.4) -- (1.35,0.15); \node at (-4.05,2.15) {$d_{\text{min}}$}; \node at (-2.75,3.7) {$d_{\text{min}}$}; \node at (1.65,1.05) {$d_{\text{min}}$}; \draw [-latex, ultra thick](11.15,-1.55) -- (16.55,3.85); \node [circle,blue,fill,scale=0.5] at (11.15,-1.55) {}; \node [circle,blue,fill,scale=0.5] at (12.55,-0.15) {}; \node [circle,blue,fill,scale=0.5] at (14.05,1.35) {}; \node [circle,blue,fill,scale=0.5] at (15.55,2.85) {}; \node at (16.7,4.8) {Power}; \draw [-latex,<->,thick](11.35,-1.9) -- (12.75,-0.5) node (v1) {}; \draw [-latex,<->,thick](v1) -- (14.25,1) node (v2) {}; \draw [-latex,<->,thick](v2) -- (15.75,2.5) node (v3) {}; \node at (12.8,-1.55) {$d_{\text{min}}$}; \node at (14.3,-0.05) {$d_{\text{min}}$}; \node at 
(15.8,1.45) {$d_{\text{min}}$}; \node at (0,-5.7) {(a)}; \node at (14,-5.7) {(b)}; \draw [-latex, ultra thick](20.05,0) -- (28.45,0); \draw [-latex, ultra thick](23.85,-4.5) -- (23.85,5.15); \draw [thick,dashed,black!60] (23.85,0) ellipse (0.707 and 0.707); \draw [thick,dashed,black!60] (23.85,0) ellipse (1.581 and 1.581); \draw [thick,dashed,black!60] (23.85,0) ellipse (2.549 and 2.549); \draw [thick,dashed,black!60] (23.85,0) ellipse ( 2.1213 and 2.1213); \draw [thick,dashed,black!60] (23.85,0) ellipse ( 2.9155 and 2.9155); \draw [thick,dashed,black!60] (23.85,0) ellipse ( 3.5355 and 3.5355); \node [circle,blue,fill,scale=0.4] (v11) at (21.35,0.5) {}; \node [circle,blue,fill,scale=0.4] (v9) at (22.35,0.5) {}; \node [circle,blue,fill,scale=0.4] (v1) at (23.35,0.5) {}; \node [circle,blue,fill,scale=0.4] (v7) at (26.35,0.5) {}; \node [circle,blue,fill,scale=0.4] (v5) at (25.35,0.5) {}; \node [circle,blue,fill,scale=0.4] (v2) at (24.35,0.5) {}; \node [circle,blue,fill,scale=0.4] at (21.35,1.5) {}; \node [circle,blue,fill,scale=0.4] at (22.35,1.5) {}; \node [circle,blue,fill,scale=0.4] (v13) at (23.35,1.5) {}; \node [circle,blue,fill,scale=0.4] at (26.35,1.5) {}; \node [circle,blue,fill,scale=0.4] at (25.35,1.5) {}; \node [circle,blue,fill,scale=0.4] (v21) at (24.35,1.5) {}; \node [circle,blue,fill,scale=0.4] at (21.35,2.5) {}; \node [circle,blue,fill,scale=0.4] at (22.35,2.5) {}; \node [circle,blue,fill,scale=0.4] (v14) at (23.35,2.5) {}; \node [circle,blue,fill,scale=0.4] at (26.35,2.5) {}; \node [circle,blue,fill,scale=0.4] at (25.35,2.5) {}; \node [circle,blue,fill,scale=0.4] (v22) at (24.35,2.5) {}; \node [circle,blue,fill,scale=0.4] at (21.35,-2.5) {}; \node [circle,blue,fill,scale=0.4] at (22.35,-2.5) {}; \node [circle,blue,fill,scale=0.4] (v17) at (23.35,-2.5) {}; \node [circle,blue,fill,scale=0.4] at (26.35,-2.5) {}; \node [circle,blue,fill,scale=0.4] at (25.35,-2.5) {}; \node [circle,blue,fill,scale=0.4] (v18) at (24.35,-2.5) {}; \node 
[circle,blue,fill,scale=0.4] at (21.35,-1.5) {}; \node [circle,blue,fill,scale=0.4] at (22.35,-1.5) {}; \node [circle,blue,fill,scale=0.4] (v15) at (23.35,-1.5) {}; \node [circle,blue,fill,scale=0.4] at (26.35,-1.5) {}; \node [circle,blue,fill,scale=0.4] at (25.35,-1.5) {}; \node [circle,blue,fill,scale=0.4] (v16) at (24.35,-1.5) {}; \node [circle,blue,fill,scale=0.4] (v12) at (21.35,-0.5) {}; \node [circle,blue,fill,scale=0.4] (v10) at (22.35,-0.5) {}; \node [circle,blue,fill,scale=0.4] (v3) at (23.35,-0.5) {}; \node [circle,blue,fill,scale=0.4] (v8) at (26.35,-0.5) {}; \node [circle,blue,fill,scale=0.4] (v6) at (25.35,-0.5) {}; \node [circle,blue,fill,scale=0.4] (v4) at (24.35,-0.5) {}; \node at (23.85,5.65) {Quadrature}; \node at (30.05,0) {In-phase}; \draw [<->,thick](v1) -- (v2); \draw [<->,thick](v3) -- (v1); \draw [<->,thick](v3) -- (v4); \draw [<->,thick](v2) -- (v4); \draw [<->,thick](v5) -- (v6); \draw [<->,thick](v7) -- (v8); \draw [<->,thick](v9) -- (v10); \draw [<->,thick](v11) -- (v12); \draw [<->,thick](v13) -- (v21); \draw [<->,thick](v14) -- (v22); \draw [<->,thick](v15) -- (v16); \draw [<->,thick](v17) -- (v18); \node at (23.95,-5.7) {(c)}; \end{tikzpicture} \vspace{-0.5cm} \caption{(a) and (b) Two transmitted signal constellation maps, plotted on the I-Q plane and the P-axis, respectively, each with $3$ pairs of symbols that dominate the detection error probability. (c) The transmitted signal constellation for $36$-QAM, where the $12$ pairs of symbols that dominate the detection error probability are illustrated.} \label{fig:general_constellation} \vspace{-0.5cm} \end{figure} \subsection{$M$-PAM} \begin{proposition} \label{lemma_opti_rho} For the $M$-PAM scheme and $\vec{\rho} \in \left[0,1 \right]^K\backslash\{\vec{0},\vec{1}\}$, the SER in the high SNR regime is given by \begin{equation} {P}_e \approx \frac{2}{M} Q\left( \frac{\sqrt{2}\breve{x}_1}{\sigmaonesqrt} \right).
\end{equation} \end{proposition} Based on Proposition~\ref{lemma_opti_rho}, we can see that as $\vec{\rho} \rightarrow \vec{1}$ with $\vec{\rho} \neq \vec{1}$, $\breve{x}_1 \approx \sqrt{3 H_2 \myP /(M^2-1)}$, and $P_e \approx \frac{2}{M} Q\left( \sqrt{\frac{6 H_2 \myP}{\sigmaone (M^2-1)}} \right) $, which is smaller than the SER of the $\vec{\rho} = \vec{1}$ case, i.e., $\frac{2(M-1)}{M} Q\left( \sqrt{\frac{6 H_2 \myP}{\sigmaone (M^2-1)}}\right)$, and is also smaller than the SER of the $\vec{\rho} = \vec{0}$ case, in which the SER can be as large as $0.5$. Thus, we have the following proposition: \begin{proposition}\label{asym_splitting_gain} For $M$-PAM, the asymptotic joint processing gain in the high SNR regime~is \begin{equation} G_{\mathrm{PAM}} = \lim\limits_{\snr \rightarrow \infty} \frac{\min\left\lbrace 0.5, \frac{2 (M-1)}{M}Q\left(\sqrt{\frac{6 H_2 \myP}{\sigmaone (M^2-1)}}\right) \right\rbrace}{\frac{2}{M}Q\left(\sqrt{\frac{6 H_2 \myP}{\sigmaone (M^2-1)}}\right)} = M-1. \end{equation} \end{proposition} Note that although the joint processing gain depends on $\myP$, $\sigmaone$ and $\sigmatwo$, the asymptotic joint processing gain in the high SNR regime is independent of the specific noise variances at the CD and PD circuits. \subsection{$M$-QAM} \begin{proposition}\label{QAM_high_SNR} For $M$-QAM and $\vec{\rho} \in \left[0,1 \right]^K\backslash\{\vec{0},\vec{1}\}$, the SER in the high SNR regime is given by \begin{equation} \label{QAM_approx} P_e \approx \frac{4}{\sqrt{M}} Q\left( \frac{\sqrt{2}\breve{x}_1}{\sigmaonesqrt} \right). \end{equation} \end{proposition} Letting $\vec{\rho} \rightarrow \vec{1}$, i.e., $\breve{x}_1 \rightarrow \sqrt{\frac{3 H_2 \myP}{2(M-1)}}$, the approximate SER in Proposition~\ref{QAM_high_SNR} is minimized as $P_e \approx \frac{4}{\sqrt{M}} Q \left(\sqrt{\frac{3 H_2 \myP}{(M-1) \sigmaone}} \right)$, which is smaller than the SER obtained by setting $\vec{\rho} = \vec{0}$ or $\vec{1}$.
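The ratio defining the asymptotic gain in Proposition~\ref{asym_splitting_gain} can be checked numerically, along with the analogous ratio for $M$-QAM (best conventional SER over splitting SER, using the standard square-QAM SER expression). A small sketch, where the argument $t$ stands for the scaled amplitudes inside the $Q$-functions above:

```python
import math

def Q(t):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def gain_pam(M, t):
    """Ratio defining G_PAM at argument t = sqrt(6 H2 P / (sigma1 (M^2-1)))."""
    best_conventional = min(0.5, 2 * (M - 1) / M * Q(t))
    splitting = 2 / M * Q(t)
    return best_conventional / splitting

def gain_qam(M, t):
    """Ratio defining G_QAM at argument t = sqrt(3 H2 P / ((M-1) sigma1))."""
    a = 1 - 1 / math.sqrt(M)
    best_conventional = 4 * a * Q(t) - 4 * a ** 2 * Q(t) ** 2
    splitting = (4 / math.sqrt(M)) * Q(t)
    return best_conventional / splitting
```

For large $t$ the $Q(t)^2$ term is negligible and the ratios settle at $M-1$ and $\sqrt{M}-1$; for example, gain_pam(8, 8.0) evaluates to approximately $7$ and gain_qam(16, 8.0) to approximately $3$.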
Thus, we have the following result: \begin{proposition}\label{Splitting_gain_QAM} For $M$-QAM, the asymptotic joint processing gain in the high SNR regime is \begin{equation} \begin{aligned} G_{\mathrm{QAM}} &= \lim\limits_{\snr \rightarrow \infty}\frac {4 \left(1 -\frac{1}{\sqrt{M}}\right) Q\left( \sqrt{\frac{3 H_2 \myP}{(M-1) \sigmaone}} \right) -4 \left(1 -\frac{1}{\sqrt{M}}\right)^2 Q\left(\sqrt{ \frac{3 H_2 \myP}{(M-1) \sigmaone} }\right)^2 } {\frac{4}{\sqrt{M}} Q\left(\sqrt{\frac{3 H_2 \myP}{(M-1) \sigmaone}}\right) }\\ &= \lim\limits_{\snr \rightarrow \infty} \frac {4 \left(1 -\frac{1}{\sqrt{M}}\right) Q\left( \sqrt{\frac{3 H_2 \myP}{(M-1) \sigmaone}} \right)} {\frac{4}{\sqrt{M}} Q\left(\sqrt{\frac{3 H_2 \myP}{(M-1) \sigmaone}}\right) } = \sqrt{M}-1. \end{aligned} \end{equation} \end{proposition} Therefore, in the high SNR regime, for $M$-PAM and $M$-QAM, there always exists a non-trivial $\vec{\rho}\in \left[0,1 \right]^K \backslash \{\vec{0},\vec{1}\}$ that achieves a lower SER than the conventional receivers, i.e., $\vec{\rho} = \vec{0}$ or $\vec{1}$, regardless of the values of $\sigmaone$ and $\sigmatwo$. \subsection{$M$-IM} \begin{proposition} \label{PPM_asyn_SER} For $M$-IM, the SER in the high SNR regime is given by \begin{equation} P_e \approx \frac{2 (M-1)}{M} Q\left(\frac{\sqrt{\Theta_2}\myP}{(M-1) \sigmatwosqrt}\right). \end{equation} \end{proposition} From Proposition \ref{PPM_asyn_SER}, as $\vec{\rho} \rightarrow \vec{0}$, the minimum approximate SER is obtained as $P_e = \frac{2 (M-1)}{M} Q\left(\frac{\sqrt{H_4} \myP}{(M-1) \sigmatwosqrt}\right)$, which equals the SER of the $\vec{\rho} = \vec{0}$ case. Thus, the splitting receiver cannot improve the SER performance compared with the conventional receivers, and we have the following result: \begin{proposition} \label{PPM} For $M$-IM, the asymptotic joint processing gain in the high SNR regime is equal to one.
\end{proposition} \subsection{Numerical Results} We present the numerical results using $M$-QAM for (i) the splitting receiver with a single receive antenna, assuming $\vert \tilde{h}_1 \vert^2 = 1$, and (ii) the simplified receiver with multiple receive antennas. The SER results for $M$-QAM are plotted based on Monte Carlo simulation with $10^9$ points using the detection rule~\eqref{QAM_region}. The results for $M$-PAM and $M$-IM are omitted due to space limitations. \subsubsection{Splitting receiver with a single receive antenna} Fig.~\ref{fig:QAM_diff_M_P} plots the SER versus the splitting ratio $\rho$ for different $M$ and different $\myP$, where the approximation results are plotted using Proposition~\ref{QAM_high_SNR}. It shows that the SER first decreases and then increases as $\rho$ increases. We can see that the optimal $\rho$ that minimizes the SER increases with $\myP$ and approaches $1$, but decreases as the constellation order $M$ increases. {\color{black}We can also see that when $\snr$ is sufficiently large, e.g., $23$~dB (i.e., $\myP = 200$, and $\sigmaone = \sigmatwo = 1$), the approximation of the SER is very close to the exact SER for values of $\rho$ in the range $(0,\rho^\star)$, where $\rho^\star$ is the optimal splitting ratio. Note that $\rho^\star$ approaches $1$ as $\snr$ increases. This means the mismatch around $\rho =1$ diminishes as $\snr$ increases. \emph{Therefore, the approximation in Proposition~\ref{QAM_high_SNR} is accurate for $\rho \in (0,1)$ when $\snr$ is sufficiently large.} } Fig.~\ref{fig:QAM_splitting_gain} shows the joint processing gain versus $\myP$ using Definition~\ref{def:splitting_gain}. We can see that the joint processing gain increases with $\myP$, approaching $3$ and $5$ for $16$-QAM and $36$-QAM, respectively, when $\myP = 100$, $\sigmaone = 1$ and $\sigmatwo$ is sufficiently small, e.g., $10^{-3}$.
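A reduced-scale version of the Monte-Carlo procedure described above (the plotted results use $10^9$ points) can be sketched as follows. The sketch assumes a single antenna with $\vert\tilde{h}_1\vert^2 = 1$ and takes $\Theta_1 = \rho$ and $\Theta_2 = 1-\rho$; this reading of \eqref{my_theta} is an assumption here, since the exact definition lies outside this section. Detection uses the ML metric \eqref{first_distance}:

```python
import math
import random

def qam_constellation(M):
    """Odd-integer M-QAM grid, e.g. x, y in {-3, -1, 1, 3} for 16-QAM."""
    m = int(round(math.sqrt(M)))
    levels = [2 * i - m + 1 for i in range(m)]
    return [(x, y) for x in levels for y in levels]

def ser_splitting_qam(M, rho, P, sigma1, sigma2, trials=20000, seed=1):
    """Monte-Carlo SER of a single-antenna splitting receiver with |h| = 1.
    Assumption (not restated in this section): Theta1 = rho, Theta2 = 1 - rho."""
    k1 = math.sqrt(3.0 / (2 * (M - 1)))
    k2 = k1 * k1
    pts = qam_constellation(M)
    # noiseless received constellation in the I-Q-P space (Sec. IV.B)
    recv = [(k1 * math.sqrt(rho * P) * x,
             k1 * math.sqrt(rho * P) * y,
             k2 * math.sqrt(1.0 - rho) * P * (x * x + y * y)) for x, y in pts]
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        i = rng.randrange(M)
        xr = recv[i][0] + rng.gauss(0.0, math.sqrt(sigma1 / 2))
        yr = recv[i][1] + rng.gauss(0.0, math.sqrt(sigma1 / 2))
        zr = recv[i][2] + rng.gauss(0.0, math.sqrt(sigma2))
        # ML decision: minimize the distance metric d_j
        j = min(range(M), key=lambda j: (xr - recv[j][0]) ** 2 / (sigma1 / 2)
                                      + (yr - recv[j][1]) ** 2 / (sigma1 / 2)
                                      + (zr - recv[j][2]) ** 2 / sigma2)
        errors += int(j != i)
    return errors / trials
```

With $\myP = 200$ and $\rho$ near its optimum, the estimated SER at this scale should be essentially zero, consistent with the high-SNR behavior discussed above.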
These results approach the asymptotic joint processing gain in Proposition~\ref{Splitting_gain_QAM}. We also see that only half of the joint processing gain is achieved when $\myP = 100$ and $\sigmatwo$ is large, e.g., $\sigmatwo=1$. However, the increasing trend of the joint processing gain in Fig.~\ref{fig:QAM_splitting_gain} suggests that the asymptotic joint processing gain can eventually be achieved when $\myP$ is much larger than $100$. {\color{black}\emph{Therefore, the asymptotic joint processing gain in Proposition~\ref{Splitting_gain_QAM} may not be approached within a typical range of received signal power and noise variance, but half of the joint processing gain is achievable.}} \begin{figure*}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \minipage{0.47\textwidth} \vspace*{-0.7cm} \includegraphics[width=\linewidth]{QAM_diff_M_P} \vspace*{-1.3cm} \caption{\small SER versus $\rho$, $\sigmaone = 1$ and $\sigmatwo = 1$.} \label{fig:QAM_diff_M_P} \endminipage \hspace{0.32cm} \minipage{0.47\textwidth} \vspace*{-0.7cm} \includegraphics[width=\linewidth]{QAM_splitting_gain} \vspace*{-1.3cm} \caption{\small $G_{\text{QAM}}$ versus $\myP$, $\sigmaone = 1$.} \label{fig:QAM_splitting_gain} \endminipage \vspace*{-0.5cm} \end{figure*} \subsubsection{Simplified receiver with multiple receive antennas} For the simplified receiver, we assume that the channel power gain at each antenna is independent and follows an exponential distribution with mean $1$. Fig.~\ref{fig:modu_large_K} plots the optimal number of antennas allocated for coherent processing, i.e., $K^{\star}_1$, versus the total number of antennas for the $36$-QAM scheme, obtained by using $10^3$ random channel realizations. It shows that $K^{\star}_1$ increases with $K$ and approaches $K-1$ in the high SNR regime, e.g., $K^{\star}_1 \approx K-5,\ K-2$ and $K-1$ when $\myP = 2,\ 10$ and $200$, respectively.
This is because the optimal splitting ratio $\vec{\rho} \rightarrow \vec{1}$ but never reaches $\vec{1}$ in the high SNR regime, based on Proposition~\ref{lemma_opti_rho}. In other words, for the simplified receiver (where $\rho_k \in\{0,1\}$), most of the antennas should be connected to the CD circuits and at least one antenna should be connected to a PD circuit to achieve the highest joint processing gain. {\color{black}Note that in practice, the degradation due to ADC quantization noise is usually modeled by the signal-to-quantization-noise ratio (SQNR), approximately given by $6b$~dB, where $b$ is the number of quantization bits~\cite{Xunzhou13}. Here, by assuming $\myP = 2$ and an ADC noise variance of $0.1$ (i.e., less than $\sigmaone$ and $\sigmatwo$), the SQNR equals $13$~dB, which implies $b \approx 2$~bits. Similarly, by assuming $\myP = 200$, the SQNR equals $33$~dB, which implies $b \approx 5$~bits. Therefore, the parameter settings are practical.} \section{Conclusions} In this paper, we have proposed a splitting receiver, which fundamentally changes the way in which the signal is processed. With the same received signal power, the analytical results show that the splitting receiver provides an excellent performance gain in terms of both the mutual information (Gaussian input) and the SER (practical modulation), compared with the conventional coherent and non-coherent receivers. Future research may focus on topics such as the MIMO system with a multi-antenna splitting receiver, and the design of constellations and coding schemes for communication systems with a splitting receiver. Moreover, some practical issues can also be taken into account, such as the effects of the antenna noise, the power splitter losses, and the different receive sensitivity levels at the CD and PD circuits of the splitting receiver.
\begin{figure}[t] \small \renewcommand{\captionlabeldelim}{ } \renewcommand{\captionfont}{\small} \renewcommand{\captionlabelfont}{\small} \centering \vspace*{-0.7cm} \includegraphics[scale=0.6]{modulation_K} \vspace*{-0.5cm} \caption{\small Optimal number of antennas allocated for coherent processing versus $K$ for $16$-QAM, $\sigmaone = \sigmatwo = 1$.} \label{fig:modu_large_K} \vspace*{-0.5cm} \end{figure} \begin{appendices} \renewcommand{\theequation}{\thesection.\arabic{equation}} \numberwithin{equation}{section} \section{Proof of Lemma~1} We assume that $\vec{\rho} \in \left[0,1\right]{\color{black}^K} \backslash \{\vec{0},\vec{1}\}$. Thus, based on \eqref{my_theta}, $\Theta_1 >0$ and $\Theta_2 >0$. \subsection{Proof of \eqref{theory1_1}} Since mutual information is invariant under the scaling of random variables~\cite{Invariance}, a scaled version of the received signal in \eqref{receive_signal} is given by \begin{equation} \label{rewritten_signal} \begin{aligned} \tilde{Y}_1 & = \sqrt{\Theta_1} \tilde{X} + \frac{\tilde{Z}}{\sqrt{\myP}},\\ Y_2 &= \sqrt{\Theta_2} k \sqrt{\myP} \vert \tilde{X} \vert^2 + k \frac{N}{\sqrt{\myP}}, \end{aligned} \end{equation} where $k \triangleq \frac{\sigmaonesqrt}{\sqrt{2}\sigmatwosqrt}$. It is easy to verify that the real and imaginary parts of $\frac{\tilde{Z}}{\sqrt{\myP}}$ and the scaled noise $k \frac{{\color{black}N}}{\sqrt{\myP}}$ are mutually independent and each follows the distribution $\mathcal{N}(0,\frac{\sigmaone}{2 \myP})$. We define two random variables as \begin{equation} \label{new_X1_X2} \tilde{X}_1 \triangleq \sqrt{\Theta_1} \tilde{X} \text{, and } X_2 \triangleq \sqrt{\Theta_2}k \sqrt{\myP} \vert \tilde{X} \vert^2.
\end{equation} Because of the Markov chain $\sqrt{\myP}{\color{black}\tilde{X}} \rightarrow (\tilde{X}_1,X_2) \rightarrow (\tilde{Y}_1, Y_2)$ and the smooth and uniquely invertible map from $\sqrt{\myP}\tilde{X}$ to $(\tilde{X}_1,X_2)$, we have \begin{equation} \mathcal{I}(\sqrt{\myP} X; \tilde{Y}_1, Y_2) = \mathcal{I}(\tilde{X}_1,X_2; \tilde{Y}_1, Y_2). \end{equation} Before analyzing $ \mathcal{I}(\tilde{X}_1,X_2; \tilde{Y}_1, Y_2)$, we first define a new coordinate system, called the paraboloid-normal (PN) coordinate system, which is based on a paraboloid $\mathcal{U}$. The paraboloid $\mathcal{U}$ is defined by the equation \begin{equation} c_P =k \sqrt{\myP} \frac{\sqrt{\Theta_2}}{\Theta_1} (c^2_I+c^2_Q), \end{equation} where $c_I$, $c_Q$ and $c_P$ are the three axes of the Cartesian coordinate system of the I-Q-P space. After the change of coordinates, the point $(c_1,c_2,c_3)$ is represented as $(\tilde{a},l)$ in the PN coordinate system, where $\tilde{a}$ is the nearest point on the paraboloid $\mathcal{U}$ to the point $(c_1,c_2,c_3)$, and $\vert l\vert$ is the distance between them. In other words, the point $(c_1,c_2,c_3)$ lies on the normal line of the paraboloid at the point $\tilde{a}$. The sign of $l$ is positive when the point $(c_1,c_2,c_3)$ is above the paraboloid, and negative otherwise.
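The nearest-point mapping that defines the PN coordinates can be computed numerically. Below is a minimal Python sketch for the rotationally symmetric paraboloid $c_P = a\,(c_I^2+c_Q^2)$, where the shape parameter $a$ stands for $k\sqrt{\myP}\sqrt{\Theta_2}/\Theta_1$; radial symmetry reduces the search for $\tilde{a}$ to a one-dimensional root-finding problem:

```python
import math

def pn_coordinates(c1, c2, c3, a):
    """Map a Cartesian point of the I-Q-P space to PN coordinates (a_tilde, l)
    relative to the paraboloid c_P = a*(c_I^2 + c_Q^2).  Minimal sketch."""
    r0 = math.hypot(c1, c2)

    def g(r):
        # stationarity of the squared distance along the radial section:
        # d/dr [ (r - r0)^2 + (a r^2 - c3)^2 ] = 0, up to a factor of 2
        return 2 * a * a * r ** 3 + (1 - 2 * a * c3) * r - r0

    lo, hi = 0.0, max(r0, math.sqrt(abs(c3) / a)) + 1.0
    while g(hi) < 0:              # enlarge the bracket if needed
        hi *= 2.0
    for _ in range(80):           # bisection for the nearest radius
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    th = math.atan2(c2, c1)
    a_tilde = (r * math.cos(th), r * math.sin(th), a * r * r)
    dist = math.dist((c1, c2, c3), a_tilde)
    # l is positive above the paraboloid and negative below it
    l = dist if c3 > a * r0 * r0 else -dist
    return a_tilde, l
```

A point lying on the paraboloid maps to $l=0$, and a point on the P-axis above the apex maps to a positive $l$.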
Based on the property of mutual information invariance under a change of coordinates~\cite{Invariance}, representing the Cartesian coordinate based random variables $(\tilde{X}_1,X_2)$ and $(\tilde{Y}_1, Y_2)$ under the PN coordinate system as $(\tilde{A}_X, L_X)$ and $(\tilde{A}_X + \tilde{A}_{\tilde{Z},N},L_{\tilde{Z},N})$, respectively, gives \begin{equation} \mathcal{I}(\tilde{X}_1,X_2; \tilde{Y}_1, Y_2) = \mathcal{I}(\tilde{A}_X, L_X; \tilde{A}_X + \tilde{A}_{\tilde{Z},N},L_{\tilde{Z},N}), \end{equation} where the noise-related random variables $\tilde{A}_{\tilde{Z},N}$ and $L_{\tilde{Z},N}$, which are generated by $\tilde{Z}$ and $N$, are correlated with the random variable $\tilde{A}_X$. Since $X_2= k\sqrt{\myP}\frac{\sqrt{\Theta_2}}{\Theta_1} \vert \tilde{X}_1 \vert^2$, the pair $(\tilde{X}_1,X_2)$ lies on the paraboloid $\mathcal{U}$, i.e., $L_X$ is identically zero; hence $\tilde{A}_{X}$ can be represented by $(\tilde{X}_1,X_2)$ for brevity. Thus, we have \begin{equation} \label{MI_coordinates} \begin{aligned} \mathcal{I}(\tilde{X}_1,X_2; \tilde{Y}_1, Y_2) &= \mathcal{I}(\tilde{A}_X; \tilde{A}_X + \tilde{A}_{\tilde{Z},N},L_{\tilde{Z},N})\\ &= h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N},L_{\tilde{Z},N}) - h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N},L_{\tilde{Z},N} \vert \tilde{A}_X)\\ &= h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}) + h(L_{\tilde{Z},N} \vert \tilde{A}_X + \tilde{A}_{\tilde{Z},N})\\ &-\left( h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}\vert \tilde{A}_X) + h(L_{\tilde{Z},N} \vert \tilde{A}_X, \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \right). \end{aligned} \end{equation} {\color{black}Since the expectations $\myexpect{\tilde{Z}}=(0,0)$ and $\myexpect{N}=0$, and the variances $\mathrm{Var}(\frac{\tilde{Z}}{\sqrt{\myP}}) \rightarrow 0$ and $\mathrm{Var}(k \frac{N}{\sqrt{\myP}}) \rightarrow 0$ as $\myP \rightarrow \infty$, it is easy to see that the noise variable $\tilde{A}_{\tilde{Z},N}$ converges in probability towards $(0,0)$.
Thus, $\tilde{A}_X + \tilde{A}_{\tilde{Z},N}$ converges in probability towards $\tilde{A}_X$. Furthermore, since convergence in probability implies convergence in distribution and the entropy function $h(\cdot)$ is continuous and defined based on the probability distribution of the input random variable~\cite{BookInfo}, we have $h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \rightarrow h(\tilde{A}_X)$ as $\myP \rightarrow \infty$. Similarly, we have the convergence of the random variable, i.e., $\left(L_{\tilde{Z},N} , \tilde{A}_X + \tilde{A}_{\tilde{Z},N}\right) \rightarrow \left(L_{\tilde{Z},N} , \tilde{A}_X\right)$, hence the convergence of entropy, i.e., $h(L_{\tilde{Z},N} , \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \rightarrow h(L_{\tilde{Z},N} , \tilde{A}_X)$. Therefore, the conditional entropy $ h(L_{\tilde{Z},N} \vert \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \triangleq h(L_{\tilde{Z},N} , \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) - h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}) $ converges to $h(L_{\tilde{Z},N} \vert \tilde{A}_X) \triangleq h(L_{\tilde{Z},N} , \tilde{A}_X) - h(\tilde{A}_X)$, i.e., $h(L_{\tilde{Z},N} \vert \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \rightarrow h(L_{\tilde{Z},N} \vert \tilde{A}_X)$. Similarly, we have $h(L_{\tilde{Z},N} \vert \tilde{A}_X, \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \rightarrow h(L_{\tilde{Z},N} \vert \tilde{A}_X, \tilde{A}_X)$. Together with the fact that $h(L_{\tilde{Z},N} \vert \tilde{A}_X)= h(L_{\tilde{Z},N} \vert \tilde{A}_X, \tilde{A}_X)$~\cite{BookInfo}, we have $h(L_{\tilde{Z},N} \vert \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) - h(L_{\tilde{Z},N} \vert \tilde{A}_X, \tilde{A}_X + \tilde{A}_{\tilde{Z},N}) \rightarrow 0$ as $\myP \rightarrow \infty$. } Thus, the mutual information in \eqref{MI_coordinates} can {\color{black}asymptotically} be rewritten as \begin{equation} \label{MI_asymptotic_appen} \mathcal{I}(\tilde{X}_1,X_2; \tilde{Y}_1, Y_2) = h(\tilde{A}_X) - h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}\vert \tilde{A}_X). 
\end{equation} Then we calculate $h(\tilde{A}_X)$ and $h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}\vert \tilde{A}_X)$ as follows. \subsubsection{$h(\tilde{A}_X)$} Since the probability contained in a differential area is invariant under a change of variables, we have \begin{equation} \label{first_derri_eq} \vert f_{\tilde{X}}(\tilde{x}) \mathrm{d} S \vert = \vert f_{\tilde{A}_{X}}(\tilde{a}) \mathrm{d} \Sigma \vert , \end{equation} where $\mathrm{d} S = \mathrm{d} u \mathrm{d} v$, $u = \mathrm{Real}\{\tilde{x}\}$, $v = \mathrm{Imag}\{\tilde{x}\}$, $\tilde{a}$ is the PN coordinate system representation of the point $(\tilde{x}_1,x_2)$, $\mathrm{d} \Sigma$ is the differential area on the paraboloid $\mathcal{U}$, $f_{\tilde{A}_{X}}(\tilde{a})$ and $f_{\tilde{X}}(\tilde{x})$ are the pdfs of $\tilde{A}_{X}$ and $\tilde{X}$, respectively, and \begin{equation} \label{normal_pdf} f_{\tilde{X}}(\tilde{x}) = \frac{1}{\pi} \exp\left( - \vert \tilde{x} \vert^2\right). \end{equation} Let ${\bf r} = (\mathrm{Real}\{\tilde{x}_1\},\mathrm{Imag}\{\tilde{x}_1\}, x_2)$ be a point on the paraboloid $\mathcal{U}$; then, based on \eqref{new_X1_X2}, we have \begin{equation} \label{my_Sigma} \begin{aligned} \frac{\partial {\bf r}}{\partial u} &= (\sqrt{\Theta_1 },0,2 k\sqrt{\myP}\sqrt{\Theta_2} u),\ \frac{\partial {\bf r}}{\partial v} = (0,\sqrt{\Theta_1 },2 k\sqrt{\myP} \sqrt{\Theta_2} v),\\ \mathrm{d} \Sigma &= \left\vert \frac{\partial {\bf r}}{\partial u} \times \frac{\partial {\bf r}}{\partial v} \right\vert \mathrm{d} u \mathrm{d}v =\Theta_1 \sqrt{4 \frac{k^2 \myP \Theta_2}{\Theta^2_1} \vert \tilde{x}_1\vert^2 + 1} \ \mathrm{d} u \mathrm{d}v, \end{aligned} \end{equation} where $\times$ is the cross product operator.
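As a quick sanity check of \eqref{my_Sigma}, the surface-element factor can be evaluated both directly from the tangent vectors and from the closed form. The sketch below uses arbitrary illustrative values for $\Theta_1$, $\Theta_2$, $k$ and $\myP$ (they are assumptions, not values from the paper) and confirms the two agree.

```python
import math

def cross(p, q):
    """Cross product of two 3-vectors."""
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def dSigma_numeric(u, v, theta1, theta2, k, P):
    """|dr/du x dr/dv| computed directly from the tangent vectors of r(u, v)."""
    s = math.sqrt(theta1)
    c = k * math.sqrt(P * theta2)
    ru = (s, 0.0, 2 * c * u)     # dr/du
    rv = (0.0, s, 2 * c * v)     # dr/dv
    n = cross(ru, rv)
    return math.sqrt(sum(x * x for x in n))

def dSigma_closed(u, v, theta1, theta2, k, P):
    """Closed form Theta1*sqrt(4 k^2 P Theta2 |x1|^2 / Theta1^2 + 1),
    with |x1|^2 = Theta1*(u^2 + v^2)."""
    x1sq = theta1 * (u * u + v * v)
    return theta1 * math.sqrt(4 * k**2 * P * theta2 * x1sq / theta1**2 + 1)
```

The two expressions coincide identically, since the cross product of the tangent vectors is $(-2csu, -2csv, s^2)$ with $s=\sqrt{\Theta_1}$ and $c=k\sqrt{\myP\Theta_2}$.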
Substituting \eqref{my_Sigma}, \eqref{normal_pdf} and $\tilde{x} = \frac{\tilde{x}_1}{\sqrt{\Theta_1}}$ into \eqref{first_derri_eq}, after simplification, we have \begin{equation} \label{f_Y1_Y2} f_{\tilde{A}_{X}}(\tilde{a}) = f_{\tilde{A}_{X}}(\tilde{x}_1,x_2) = \frac{1}{\pi \Theta_1 \sqrt{4 \frac{k^2 \myP \Theta_2}{\Theta^2_1} \vert \tilde{x}_1\vert^2 + 1}} \exp\left( - \frac{\vert \tilde{x}_1 \vert^2}{\Theta_1 }\right). \end{equation} The differential entropy of $\tilde{A}_{X}$ is derived as \begin{equation} \label{joint_entropy} \begin{aligned} h(\tilde{A}_{X}) &= \iint - f_{\tilde{A}_{X}}(\tilde{a}) \log_2\left(f_{\tilde{A}_{X}}(\tilde{a})\right) \mathrm{d}\Sigma. \end{aligned} \end{equation} Substituting \eqref{f_Y1_Y2} and \eqref{my_Sigma} into \eqref{joint_entropy}, we have \begin{equation} \label{highSNR_entropy_2} \begin{aligned} h(\tilde{A}_{X}) &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} - \frac{\exp\left( - \frac{\vert \tilde{x}_1 \vert^2}{\Theta_1 }\right)}{\pi} \log_2 \left( \frac{\exp\left( - \frac{\vert \tilde{x}_1 \vert^2}{\Theta_1 }\right)}{\pi \Theta_1 \sqrt{4 \frac{k^2 \myP\Theta_2}{\Theta^2_1} \vert \tilde{x}_1\vert^2 + 1}} \right)\mathrm{d}u \mathrm{d}v\\ &\stackrel{(a)}{=} 2 \pi \int_{0}^{\infty} - \frac{\exp\left( - \frac{r^2}{\Theta_1 }\right)}{\pi \Theta_1 } \log_2 \left( \frac{\exp\left( - \frac{r^2}{\Theta_1 }\right)}{\pi \Theta_1 \sqrt{4 \frac{k^2 \myP \Theta_2}{\Theta^2_1} r^2 + 1}} \right)r \mathrm{d} r\\ &= {\log_2 (\pi e \Theta_1)}+\frac{1}{2 \log(2)} \exp\left(\frac{\Theta_1 }{4 k^2 \myP \Theta_2}\right) \Ei \left(\frac{\Theta_1 }{4 k^2 \myP \Theta_2}\right), \end{aligned} \end{equation} where $(a)$ follows from the transformation to polar coordinates.
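The closed form in \eqref{highSNR_entropy_2} can be verified numerically. The sketch below uses illustrative parameter values (assumptions, not values from the lemma) and interprets $\Ei(x)$ as $\int_x^\infty e^{-t}/t\,\mathrm{d}t$, consistent with the power series expansion invoked in the proof of \eqref{theory1_2}; it compares the closed form against a midpoint-rule evaluation of the polar integral in step $(a)$.

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei(x, terms=80):
    """Exponential integral int_x^inf e^{-t}/t dt via its power series
    (accurate for small/moderate x > 0)."""
    s = -EULER_GAMMA - math.log(x)
    xn, fact = 1.0, 1.0
    for n in range(1, terms + 1):
        xn *= x
        fact *= n
        s += (-1.0) ** (n + 1) * xn / (n * fact)
    return s

def h_closed(theta1, theta2, k, P):
    """Closed-form h(A_X): log2(pi e Theta1) + e^x Ei(x)/(2 ln 2)."""
    x = theta1 / (4 * k**2 * P * theta2)
    return math.log2(math.pi * math.e * theta1) + math.exp(x) * Ei(x) / (2 * math.log(2))

def h_numeric(theta1, theta2, k, P, n=100000, rmax=10.0):
    """Midpoint-rule evaluation of the polar integral in step (a)."""
    dr = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        f = math.exp(-r * r / theta1) / (math.pi * theta1)
        inner = f / math.sqrt(4 * k**2 * P * theta2 * r * r / theta1**2 + 1)
        total += -2 * math.pi * f * math.log2(inner) * r * dr
    return total
```

For the assumed values the two evaluations agree to well below $10^{-3}$ bits, which supports the algebra in step $(a)$.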
\subsubsection{Asymptotic $h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N}\vert \tilde{A}_X)$} For a given value of $\tilde{A}_X$, based on the definition of the PN coordinate system, the random variable $\tilde{A}_X + \tilde{A}_{\tilde{Z},N}$ can be treated as the projection of the three-dimensional circularly symmetric Gaussian noise, i.e., $(\frac{\tilde{Z}}{\sqrt{\myP}},k \frac{N}{\sqrt{\myP}})$ shifted by $\tilde{A}_X$, onto the paraboloid~$\mathcal{U}$ along its normal vectors. As $\myP \rightarrow \infty$, $\tilde{A}_X +\tilde{A}_{\tilde{Z},N}$ converges in probability toward $\tilde{A}_X$; thus, for a given value of $\tilde{A}_X$, the effective range of the random variable $\tilde{A}_{X}+\tilde{A}_{\tilde{Z},N}$ on the paraboloid $\mathcal{U}$ is very small and close to the tangent plane of $\mathcal{U}$ at the point $\tilde{A}_{X}$. Therefore, the random variable $\tilde{A}_X +\tilde{A}_{\tilde{Z},N}$ converges in probability toward the random variable generated by projecting the three-dimensional circularly symmetric Gaussian noise onto the tangent plane of $\mathcal{U}$ at the point $\tilde{A}_{X}$ along the normal vector at $\tilde{A}_{X}$, which is the well-known two-dimensional {\color{black}complex} Gaussian random variable with variance $\sigmaone/\myP$. Therefore, given $\tilde{A}_X$, the entropy $h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N})$ approaches $\log_2 (\pi e \sigmaone/\myP)$, which does not depend on $\tilde{A}_X$. Thus, as $\myP \rightarrow \infty$, the asymptotic conditional entropy is \begin{equation} \label{part2} h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N} \vert \tilde{A}_X ) = \mathbb{E}_{\tilde{A}_X}\left[ h(\tilde{A}_X + \tilde{A}_{\tilde{Z},N} \vert \tilde{A}_X =\tilde{a}_X) \right] \approx \log_2 \left(\frac{\pi e \sigmaone}{\myP}\right).
\end{equation} \subsubsection{Asymptotic $\mathcal{I}(\sqrt{\myP} X; \tilde{Y}_1, Y_2)$} Substituting \eqref{highSNR_entropy_2} and \eqref{part2} into \eqref{MI_asymptotic_appen} yields \eqref{theory1_1}. \subsection{Proof of \eqref{theory1_2}} Based on the power series expansion of the exponential integral function \cite{Handbook} \begin{equation} \Ei(x) = - \gamma - \ln x - \sum_{n=1}^{\infty} \frac{(-x)^n}{n\ n!},\ x >0, \end{equation} where $\gamma \approx 0.5772$ is Euler's constant, for sufficiently large $\myP$ we have \begin{equation} \label{asymp_expEi} \exp\left(\frac{\Theta_1 \sigmatwo}{2 \sigmaone \myP \Theta_2}\right) \Ei\left(\frac{\Theta_1 \sigmatwo}{2 \sigmaone \myP \Theta_2}\right) \approx - \gamma + \ln \left(\frac{2 \sigmaone \myP \Theta_2}{\Theta_1 \sigmatwo}\right). \end{equation} Substituting \eqref{asymp_expEi} into \eqref{theory1_1}, \eqref{theory1_2} is obtained. \end{appendices} \ifCLASSOPTIONcaptionsoff \fi \bibliographystyle{IEEEtran}
\section{Introduction} The use cases of 5G communications are classified into three broad categories: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC) and massive machine-type communication (mMTC). URLLC is necessary for mission-critical applications such as unmanned aerial vehicle (UAV) communication and process automation\cite{basnayaka2021agej,sharma2020communication,perera2020age}. The packet size should be extremely small in URLLC to facilitate low-latency transmission. The Shannon-Hartley capacity theorem is no longer applicable in this regime, since the law of large numbers does not hold. However, using finite block length information theory, we can derive the achievable data rate under short packet communication as a function of the signal-to-noise ratio (SNR), the block length and the decoding error probability\cite{polyanskiy2010channel}. In addition, for these mission-critical applications, the freshness of the information is of high importance, along with the URLLC requirements. This has prompted growing interest in the age of information (AoI), a performance metric that quantifies the freshness of the information. \par On the other hand, simultaneous wireless information and power transfer (SWIPT) is an emerging technology for future wireless communication systems. In general, practical SWIPT receivers for energy harvesting (EH) and information decoding have been established through the use of power splitting (PS) and time switching (TS) techniques. Since EH shares the resources allocated to information transmission, it influences the performance of wireless communication. However, minimal research has been conducted to analyse SWIPT-enabled relays using finite block length information theory and AoI. \par This paper presents a wireless relay system with SWIPT for future mission-critical URLLC-enabled applications. To the best of the authors' knowledge, no prior study has analysed AoI in a SWIPT and URLLC enabled wireless relay network.
In this work, a two-way wireless relay system employs a nonlinear PS model for energy harvesting, and short packet communication is employed to address the trade-off between reliability and latency. We derive an approximation for the average AoI (AAoI) of the proposed relay scheme under the finite blocklength constraint. In contrast to prior work on SWIPT, we examine the AoI performance of the proposed SWIPT system using threshold-based nonlinear EH. Furthermore, we examine the effect of various factors, including block length and packet size, on the weighted sum AAoI. \section{System Model} \begin{figure}[!htbp] \includegraphics[width=0.485\textwidth,keepaspectratio]{Presentation1.pdf} \caption{ In the considered system model, sources $A$ $(S_A)$ and $B$ $(S_B)$ exchange status updates with each other with the help of a single relay $(R)$; the sources send updates to $R$ during the first transmission time slot $T_{1}$ while $R$ is harvesting energy, and then $R$ exchanges the updates received from each source using the harvested energy during the second transmission time slot $T_{2}$.} \label{sytmmod} \end{figure}This paper considers a two-way cooperative status update system where two source nodes, source $A$ $(S_A)$ and source $B$ $(S_B)$, exchange status updates with each other as timely as possible with the help of a single relay $(R)$. $R$ adopts the decode-and-forward relaying protocol. Specifically, $S_{i},i\in \left ( A, B \right )$ transmits status updates which can be generated at the beginning of any time slot. The sources in this system are regarded as energy providers, and the relay is equipped with an energy harvesting device and is capable of conducting data forwarding and energy harvesting simultaneously. In this paper, we assume $R$ adopts the dynamic power splitting technique \cite{perera2017simultaneous}.
As shown in \figurename{ \ref{sytmmod}}, we consider a two-time slot transmission scheme in which $S_{i}$ sends updates to $R$ while the relay harvests energy during the first transmission time slot, and then $R$ exchanges the updates received from $S_{i}$ using the harvested energy during the second transmission time slot. Specifically, $S_{i},i\in \left ( A, B \right )$ transmits status updates which can be generated at the beginning of any time cycle, following the generate-at-will update generation model \cite{ceran2019average}. Under this policy, the relay uses the energy harvested within the first transmission time slot ($T_{1}$) for the transmission in the second transmission time slot ($T_{2}$) without waiting. If the harvested energy is less than the minimum energy required for transmission, the relay does not transmit the received updates, and the updates received from both sources are discarded at the relay. $H_{ij}$ represents the channel coefficient of the channel between node $i$ and node $j$, where $i,j\in \left \{ A,B,R \right \}$ and $i\neq j$. The small scale channel gain is $g_{\textit{i}\textit{j}}=\left | h_{\textit{i}\textit{j}}\right |^2$, where $h_{\textit{i}\textit{j}} \sim \mathcal{C}\mathcal{N}(0,1)$ is the Rayleigh fading channel coefficient. The probability density function (PDF) of the small scale channel gain is given by $ f_{g_{\textit{i}\textit{j}}}(z)=e^{-z},z\geq 0$. The large scale channel gain $\alpha_{ij}$ is given by $-10\log_{10}(\alpha_{\textit{i}\textit{j}} )= 20 \log_{10}(d_{\textit{i}\textit{j}})+20\log_{10}(\frac{4\pi f_{c}}{c})$, where $f_{c}$ and $d_{\textit{i}\textit{j}}$ are the carrier frequency and the distance between node $i$ and node $j$, respectively, and $c$ is the speed of light in free space. Thus, the channel coefficient is written as $H_{ij}=\sqrt{\alpha_{ij}g_{ij}}$ and the channel gain is written as $ G_{ij}=\alpha_{ij}g_{ij}$. Uplink transmissions between the sources and the relay are performed in orthogonal channels.
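For concreteness, the channel-gain model above can be sketched as follows; the carrier frequency and distance in the usage are assumed example values, and `rng` supplies the Rayleigh small-scale fading.

```python
import math
import random

C_LIGHT = 3.0e8  # speed of light in free space (m/s)

def large_scale_gain(d, fc):
    """alpha_ij from -10 log10(alpha) = 20 log10(d) + 20 log10(4*pi*fc/c),
    i.e. free-space path loss; equals (c / (4*pi*fc*d))^2."""
    path_loss_db = 20 * math.log10(d) + 20 * math.log10(4 * math.pi * fc / C_LIGHT)
    return 10 ** (-path_loss_db / 10)

def channel_gain(d, fc, rng):
    """G_ij = alpha_ij * g_ij, with g_ij = |h_ij|^2 ~ Exp(1) for Rayleigh fading."""
    g = rng.expovariate(1.0)  # small-scale power gain
    return large_scale_gain(d, fc) * g
```

Note that doubling the distance quarters the large-scale gain, as expected from the $20\log_{10}(d)$ term.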
\section{Block error analysis under finite block-length } Our first objective is to study the block error probability at each destination (opposite source) in this system. The block error probability at the destination is derived using different mathematical approximation techniques in this section. \par To calculate the block error probability at each node, it is necessary to derive the SNR at each node. The received SNR at the relay from each source node $S_{{i}'},{i}' \in \left \{ A,B \right \}$ is given by \begin{equation} \gamma^{{i}'}_{R}=\frac{(1-\rho) P_{{i}'}G_{{i}'R}}{\sigma_{R} ^{2} }, \label{snratrfrs} \end{equation} where $\rho $ is the power splitting factor, the noise power at the relay is denoted as $\sigma^{2}_R$ and the transmit power at $S_{{i}'}$ is given by $P_{{i}'} $. Then, the energy harvested by the relay from each node is given by $ E^{{i}'}_{R}= \rho \eta P_{{i}'}G_{{i}'R}T_{1}$, where $\eta$ is the energy conversion efficiency and $T_{1}$ is the transmission time of the first transmission slot, calculated as $T_{1}=n^{{i}'}_{R}T_{s}$, where $n^{{i}'}_{R}$ is the allocated block length for the transmission between ${i}'$ and $R$ and $T_{s}$ is the symbol duration. Then, the total energy harvested by the relay within the first transmission slot is given by \begin{equation} E_{R}=\sum_{{i}' = \left \{ A,B \right \}}^{} E^{{i}'}_{R}=\rho \eta T_{1}\left ( P_{A}G_{AR} +P_{B}G_{BR}\right ). \label{havestpower} \end{equation} The available energy harvested by the relay for transmission is given by \begin{equation} E_{R}^{T}= \begin{cases} E_{max} & E_{R}\geq E_{max} \\ E_{R} & E_{max}> E_{R}> E_{min} \\ 0 & \text{otherwise}, \end{cases} \label{avaitransp} \end{equation}where $E_{max}$ is the maximum energy limit that the relay can harvest and $E_{min}$ is the minimum energy required for transmission.
$E_{min}$ can be calculated as $E_{min}=P_{min}T_{2}$, where $P_{min}$ is the minimum power required for transmission, while $T_{2}$ is the time of the second transmission slot, calculated as $ T_{2}=\sum_{i = \left \{ A,B \right \}}^{} n^{R}_{i}T_{s}$, where $n^{R}_{i}$ is the allocated block length for the transmission between $R$ and $i$. Then, the transmit power of the relay is calculated as $P_{R}=\frac{E_{R}^{T}}{T_{2}}$. During the second time slot, the received SNR at each $S_{i} \in\left \{ A,B \right \}$ is given by $ \gamma_{i}=\frac{ P_{R}G_{Ri}}{\sigma_{i} ^{2} }$. Then, using \eqref{havestpower} and \eqref{avaitransp}, the SNR at the destination is expressed as \begin{equation} \gamma _{i}= \begin{cases} \frac{E_{max}\alpha_{Ri}g_{Ri}}{T_{2}\sigma_{i} ^{2} }, & E_{R}> E_{max} \\ \frac{E_{R}\alpha_{Ri}g_{Ri}}{T_{2}\sigma_{i}^{2} }, & E_{max}\geq E_{R}\geq E_{min} \\ 0 ,& \text{otherwise}. \end{cases} \end{equation} An outage happens when the relay or the opposite source is unable to decode the received message successfully. Hence, the overall transmission success probability $\varphi_{i} $ at each source node $i$ can be calculated as \begin{equation} \varphi_{i}= 1-\left ( \varepsilon^{{i}'}_{R}+\left (1-\varepsilon^{{i}'}_{R} \right ) \varepsilon^{R}_{i} \right ), \label{sucessprob} \end{equation} where $i\neq {i}' , i,{i}' \in \left \{ A,B \right \}$ and $\varepsilon^{t}_{j}$ is the decoding error probability at receiving node $j\in \left ( i,R \right )$ for a block received from node $t\in \left ( {i}',R \right )$. Due to the static nature of the communication channels, it is assumed that the fading coefficients stay constant over the duration of each transmission block.
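The threshold-based harvesting model and the resulting destination SNR can be summarized in a short sketch (illustrative only; all numerical values in the usage are assumed):

```python
def harvested_energy(rho, eta, T1, PA, GAR, PB, GBR):
    """Total energy E_R harvested at the relay during the first slot."""
    return rho * eta * T1 * (PA * GAR + PB * GBR)

def available_energy(E_R, E_min, E_max):
    """Energy E_R^T usable for the relay transmission under the
    threshold-based nonlinear EH model: clipped at E_max, zero below E_min."""
    if E_R >= E_max:
        return E_max
    if E_R >= E_min:
        return E_R
    return 0.0

def destination_snr(E_R, E_min, E_max, alpha_Ri, g_Ri, T2, sigma2_i):
    """Received SNR gamma_i at source i during the second slot."""
    return available_energy(E_R, E_min, E_max) * alpha_Ri * g_Ri / (T2 * sigma2_i)
```

The clipping at $E_{max}$ and the cutoff below $E_{min}$ reproduce the three branches of the piecewise SNR expression above.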
Following Polyanskiy's results on short packet communication \cite{polyanskiy2010channel} and assuming that the receiver has perfect channel state information, the expectation of the block error probability at the receiving node for a given block length $n^{t}_{j}$ can be written as \begin{equation} \varepsilon^{t}_{j}=\mathbb{E}\left [ Q\left (\frac{n^{t}_{j}C(\gamma^{t}_{j})-k^{t}_{j}}{\sqrt{n^{t}_{j}V(\gamma^{t}_{j})}} \right ) \right ], \end{equation} where $\mathbb{E}\left [ . \right ]$ is the expectation operator, $Q(x)=\frac{1}{\sqrt{2\pi }}\int_{x}^{\infty }e^{-\frac{t^{2}}{2}}dt$ and $V(\gamma^{t}_{j})$ is the channel dispersion, which can be written as $V(\gamma^{t}_{j} )=\frac{{\log _{2}}^{2}e}{2}(1-\frac{1}{(1+\gamma^{t}_{j} )^2})$. The variable $C(\gamma^{t}_{j})$ denotes the channel capacity of a complex AWGN channel and is given by $C(\gamma^{t}_{j})=\log _{2}(1+\gamma^{t}_{j})$. The number of information bits per block is denoted by $k^{t}_{j}$. Moreover, under Rayleigh fading channel conditions, $\varepsilon^{t}_{j} $ can be formulated as \begin{equation} \varepsilon^{t}_{j} = \int_{0}^{\infty }f_{\gamma^{t}_{j}} (z)Q\left (\frac{n^{t}_{j}C(\gamma^{t}_{j})-k^{t}_{j}}{\sqrt{n^{t}_{j}V(\gamma^{t}_{j})}} \right)dz, \label{e11} \end{equation} where $f_{\gamma^{t}_{j}}(z)$ denotes the PDF of the received SNR ($\gamma^{t}_{j}$) at the receiving node $j$. Due to the complexity of the Q-function, it is challenging to get a closed-form expression for the overall decoding error probability.
Thus, using the approximation technique given in \cite{makki2014finite} and \cite{gu2017ultra}, \eqref{e11} can be approximated as $ \varepsilon^{t}_{j} \approx \int_{0}^{\infty }f_{\gamma^{t}_{j}} (z)\Theta^{t}_{j} (z) dz$, where $\Theta^{t}_{j}(z)$ denotes the linear approximation of $Q\left ( \frac{n^{t}_{j}C(\gamma^{t}_{j})-k^{t}_{j}}{\sqrt{n^{t}_{j}V(\gamma^{t}_{j}})} \right)$, which can be expressed as in \cite{gu2017ultra} \begin{equation} \Theta^{t}_{j} (z)=\left \{ \begin{matrix} 1,& \gamma^{t}_{j}\leq \phi^{t}_{j}, & \\ \frac{1}{2}-\beta^{t}_{j} \sqrt{n^{t}_{j}}(\gamma^{t}_{j}-\psi^{t}_{j}), & \phi^{t}_{j}< \gamma^{t}_{j} <\delta^{t}_{j}, & \\ 0,& \gamma^{t}_{j} \geq \delta^{t}_{j},& \end{matrix} \right. \label{qfap} \end{equation} where $\beta^{t}_{j} =\frac{1}{2\pi \sqrt{2^{\frac{2k^{t}_{j}}{n^{t}_{j}}}-1}},\psi^{t}_{j}=2^{\frac{k^{t}_{j}}{n^{t}_{j}}}-1,\phi^{t}_{j}=\psi^{t}_{j}-\frac{1}{2\beta^{t}_{j}\sqrt{n^{t}_{j}}}$ and $\delta^{t}_{j}=\psi^{t}_{j}+\frac{1}{2\beta^{t}_{j}\sqrt{n^{t}_{j}}}$. By using the above linear approximation, $\varepsilon^{t}_{j}$ can be expressed as \begin{equation} \varepsilon^{t}_{j} \approx \beta^{t}_{j}\sqrt{n^{t}_{j}}\int_{\phi^{t}_{j}}^{\delta^{t}_{j}}F_{\gamma^{t}_{j}}\left ( z \right )dz, \label{aprxberwcdf} \end{equation} where $F_{\gamma^{t} _{j}}\left ( z \right )$ denotes the CDF of the received SNR ($\gamma^{t}_{j}$) at receiving node $j$. To calculate the success probability at each source $i$ using \eqref{sucessprob}, it is necessary to calculate $\varepsilon^{{i}'}_R$ and $\varepsilon^{R}_{i}$.
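The step from the PDF form $\int f\,\Theta$ to the CDF form \eqref{aprxberwcdf} follows by integration by parts, using $\Theta(\phi)=1$, $\Theta(\delta)=0$ and $\Theta'=-\beta\sqrt{n}$ on $(\phi,\delta)$. The sketch below checks this identity numerically for an exponentially distributed SNR; the mean `gbar` and the pair $(n,k)$ are assumed example values.

```python
import math

def linearization_params(n, k):
    """beta, psi, phi, delta of the piecewise-linear Q-function surrogate."""
    beta = 1.0 / (2 * math.pi * math.sqrt(2 ** (2 * k / n) - 1))
    psi = 2 ** (k / n) - 1
    half = 1.0 / (2 * beta * math.sqrt(n))
    return beta, psi, psi - half, psi + half

def theta(z, n, beta, psi, phi, delta):
    """Piecewise-linear approximation Theta(z) of the Q-function term."""
    if z <= phi:
        return 1.0
    if z >= delta:
        return 0.0
    return 0.5 - beta * math.sqrt(n) * (z - psi)

def eps_pdf_form(n, k, gbar, pts=200000, zmax=20.0):
    """eps ~= int_0^inf f(z) Theta(z) dz, with f the Exp(mean gbar) pdf."""
    beta, psi, phi, delta = linearization_params(n, k)
    dz = zmax / pts
    return sum(math.exp(-(i + 0.5) * dz / gbar) / gbar
               * theta((i + 0.5) * dz, n, beta, psi, phi, delta) * dz
               for i in range(pts))

def eps_cdf_form(n, k, gbar, pts=20000):
    """eps ~= beta*sqrt(n) * int_phi^delta F(z) dz (integration by parts)."""
    beta, psi, phi, delta = linearization_params(n, k)
    lo = max(phi, 0.0)
    dz = (delta - lo) / pts
    acc = sum((1 - math.exp(-(lo + (i + 0.5) * dz) / gbar)) * dz
              for i in range(pts))
    return beta * math.sqrt(n) * acc
```

For the assumed parameters (where $\phi>0$, so the boundary term cancels exactly) the two forms agree to numerical precision.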
Using (\ref{qfap}) and (\ref{aprxberwcdf}), the block error probabilities at $R$ and at each source can be calculated as follows: \begin{equation} \varepsilon^{{i}'}_{R} \approx \beta^{{i}'}_{R}\sqrt{n^{{i}'}_{R}}\int_{\phi^{{i}'}_{R}}^{\delta^{{i}'}_{R}}F_{\gamma^{{i}'}_{R}}\left ( z \right )dz, \label{aprxberwcdf_R} \end{equation} \begin{equation} \varepsilon^{R}_{i} \approx \beta^{R}_{i}\sqrt{n^{R}_{i}}\int_{\phi^{R}_{i}}^{\delta^{R}_{i}}F_{\gamma^{R}_{i}}\left ( z \right )dz. \end{equation} \begin{lemma} An approximation for the block error probability at the relay is derived as \begin{multline} \varepsilon^{{i}'}_{R} \approx 1-\left (\frac{(1-\rho) P_{{i}'}\alpha _{{i}'R}\beta^{{i}'}_{R}\sqrt{n^{{i}'}_{R}}}{\sigma_{R} ^{2}} \right ) \\ \left ( e^{-\frac{\phi^{{i}'}_{R}\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}}-e^{-\frac{\delta^{{i}'}_{R}\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}} \right ). \label{policy1errsr} \end{multline} \begin{proof} The PDF of the SNR at the relay from each source can be calculated using \eqref{snratrfrs} as $ f_{\gamma^{{i}'}_{R}}{}\left ( x \right )=\frac{\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}e^{-\frac{x\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}}$. Then, the CDF can be calculated as \begin{equation} \begin{aligned} F_{\gamma^{{i}'}_{R}}\left ( z \right )&= 1-e^{-\frac{z\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}}. \label{cdfsnratrvt} \end{aligned} \end{equation} Then, the result can be proved by substituting \eqref{cdfsnratrvt} into \eqref{aprxberwcdf_R}.
\end{proof} \end{lemma} \begin{lemma} The block error probability at the opposite receiving node can be derived as follows: \begin{equation} \varepsilon^{R}_{i}\approx \beta^{R}_{i}\sqrt{n^{R}_{i}}\left ( \left ( \frac{\delta ^{R}_{i}-\phi^{R}_{i}}{2} \right )\sum_{v=1}^{V}\frac{\pi }{V}\sqrt{1-\phi_{v}^2}F_{\gamma^{R}_{i}}\left ( q \right )+R_{V} \right ), \label{policy1er} \end{equation} where $\phi_{v}=\cos \left ( \frac{2v-1}{2V} \pi \right )$, $q=\left ( \frac{\delta^{R}_{i}-\phi^{R}_{i}}{2} \right )\phi_{v}+\left ( \frac{\delta^{R}_{i}+\phi^{R}_{i}}{2} \right )$, $V$ is the complexity-accuracy trade-off factor, and $R_{V}$ denotes the error term, which can be ignored for sufficiently large values of $V$. \begin{proof} To calculate the error probability at $S_{i}$, it is necessary to derive the CDF of the SNR. Using \eqref{avaitransp}, $F_{\gamma^{R}_{i}}\left ( z \right )$ is evaluated as \begin{equation} \begin{aligned} F_{\gamma^{R}_{i}}\left ( z \right )= &\mathbb{P}_{r}\left ( \gamma^{R}_{i}< z \right )= 1- \mathbb{P}_{r}\left \{ E_{R} \geq E_{min} \cap \gamma^{R}_{i}> z\right \}\\ F_{\gamma^{R}_{i}}\left ( z \right )=& 1- \underbrace{\mathbb{P}_{r} \left \{ E_{min}\leq E_{R}\leq E_{max}\cap\gamma^{R}_{i}> z\right \}}_{L_{1}} \\ & - \underbrace{\mathbb{P}_{r} \left \{ E_{R}\geq E_{max}\cap\gamma^{R}_{i}> z\right \}}_{L_{2}}.
\end{aligned} \label{apen1} \end{equation} Then, substituting $E_{R}=\rho \eta T_{1}\left ( P_{A}\alpha _{AR}g_{AR} +P_{B}\alpha_{BR}g_{BR}\right )$ into \eqref{apen1}, $L_{1}$ is evaluated as \begin{equation} \begin{aligned} L_{1}= &\mathbb{P}_{r}\left \{ \Omega_{1} < I< \Omega _{2} \, \cap \, Ig_{Ri}> \Omega _{3} \right \}, \end{aligned} \end{equation} or \begin{equation} L_{ 1}= \begin{cases} 0, & g_{Ri}< \frac{\Omega _{3}}{ \Omega_{2}}, \\ \mathbb{P}_{r}\left \{ \frac{\Omega_{3}}{g_{Ri}}< I\leqslant \Omega _{2}\right \},&\frac{\Omega _{3}}{\Omega _{2}} <g_{Ri}<\frac{\Omega _{3}}{\Omega _{1}}, \\ \mathbb{P}_{r}\left \{ \Omega_{1}< I\leqslant \Omega _{2}\right \},& g_{Ri}> \frac{\Omega _{3}}{\Omega _{1}}, \end{cases} \end{equation} where $\Omega _{1}=\frac{E_{min}}{\rho \eta T_{1}}$ , $\Omega _{2}=\frac{E_{max}}{\rho \eta T_{1}}$, $\Omega _{3}=\frac{z\sigma _{i}^{2}T_{2}}{\rho \eta T_{1}\alpha _{Ri}}$ and $ I=\sum_{i=\left \{ A,B \right \}}^{} P_{i}\alpha _{iR}g_{iR}$. To calculate $L_{1}$, we need the PDF and CDF of $I$ and $g_{Ri}$. The PDF of $I$ can be obtained by treating it as the sum of two independent random variables, $I=\mu_{1}+\mu_{2 }$, where $\mu_{1}\sim \mathrm{exp}(\frac{1}{P_{A}\alpha_{AR}})$ and $\mu_{2} \sim \mathrm{exp}(\frac{1}{P_{B}\alpha_{BR}})$.
Using the convolution of random variables, the PDF and CDF of $I$ can be calculated as follows: \begin{equation} \begin{aligned} f_{I}\left ( z \right ) & =\int_{-\infty }^{\infty } f_{\mu_{1}}(x)f_{\mu_{2}}(z-x)dx,\\ & = \int_{0}^{z}\frac{1}{P_{A}\alpha_{AR}}e^{-\frac{1}{P_{A}\alpha_{AR}}x}\frac{1}{P_{B}\alpha_{BR}}e^{-\frac{1}{P_{B}\alpha_{BR}}(z-x)}dx,\\ & = \frac{1}{P_{A}\alpha_{AR}P_{B}\alpha_{BR}}e^{-\frac{1}{P_{B}\alpha_{BR}}z}\int_{0}^{z}e^{(\frac{1}{P_{B}\alpha_{BR}}-\frac{1}{P_{A}\alpha_{AR}})x}dx,\\ f_{I}\left ( z \right ) & = \begin{cases} \frac{1}{P_{A}\alpha_{AR}-P_{B}\alpha_{BR}}(e^{-\frac{1}{P_{A}\alpha_{AR}}z}-e^{-\frac{1}{P_{B}\alpha_{BR}}z}), \\ \qquad \textrm{if} \quad \frac{1}{P_{A}\alpha_{AR}}\neq \frac{1}{P_{B}\alpha_{BR}},\\ \frac{1}{(P\alpha)^2}ze^{-\frac{1}{P\alpha}z}, \quad \textrm{if}\ \frac{1}{P_{A}\alpha_{AR}}= \frac{1}{P_{B}\alpha_{BR}}=\frac{1}{P\alpha}, \end{cases} \end{aligned} \end{equation} where $f_{(\cdot)}$ denotes the PDF of the corresponding random variable.
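The two-exponential PDF of $I$ can be cross-checked by Monte Carlo simulation. In the sketch below the means (standing in for $P_{A}\alpha_{AR}$ and $P_{B}\alpha_{BR}$) and the evaluation point are assumed example values; the empirical CDF of simulated sums is compared with the numerically integrated PDF.

```python
import math
import random

def f_I(z, mA, mB):
    """pdf of I = mu1 + mu2, mu1 ~ Exp(mean mA), mu2 ~ Exp(mean mB), mA != mB."""
    return (math.exp(-z / mA) - math.exp(-z / mB)) / (mA - mB)

def cdf_I_numeric(z, mA, mB, pts=20000):
    """Midpoint-rule integral of f_I over [0, z]."""
    dz = z / pts
    return sum(f_I((i + 0.5) * dz, mA, mB) * dz for i in range(pts))

# Monte Carlo cross-check with assumed means
rng = random.Random(7)
mA, mB = 2.0, 0.5
samples = [rng.expovariate(1 / mA) + rng.expovariate(1 / mB)
           for _ in range(200000)]
empirical = sum(s <= 3.0 for s in samples) / len(samples)
```

With 200{,}000 seeded samples the empirical CDF matches the integral of $f_I$ to within Monte Carlo accuracy, and the PDF integrates to one over a long interval.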
Then, the CDF of $I$ can be calculated as \begin{equation} \begin{aligned} F_{I}(z)&=\mathbb{P}_{r}(I\leq z)=\int_{0}^{z} f_{I}(t) dt, \\ F_{I}(z)&=\begin{cases} 1+\frac{P_{B}\alpha_{BR}}{P_{A} \alpha_{AR}-P_{B}\alpha_{BR}}e^{-\frac{z}{P_{B}\alpha_{BR}}} - \frac{P_{A}\alpha_{AR}}{P_{A}\alpha_{AR}-P_{B}\alpha_{BR}}e^{-\frac{z}{P_{A}\alpha_{AR}}}, \\ \qquad \textrm{if}\ P_{A}\alpha_{AR}\neq P_{B}\alpha_{BR}, \\ 1-e^{-\frac{1}{P\alpha}z}\left ( 1+\frac{1}{P\alpha}z \right ), \quad \textrm{if}\ P_{A}\alpha_{AR}= P_{B}\alpha_{BR}=P\alpha. \end{cases} \end{aligned} \end{equation} Further, we derive the approximation for $L_{1}$ as follows: \begin{equation} \begin{aligned} L_{1} &=\int_{\frac{\Omega_{3}}{\Omega_{2}}}^{\frac{\Omega_{3}}{\Omega_{1}}}\left ( f_{g_{Ri}}\left ( x \right )\int_{\frac{\Omega_{3}}{x}}^{\Omega _{2}} f_{I}\left (y \right )dy\right )dx\\ & \: \: \: +\int_{\Omega_{1}}^{\Omega_{2}}f_{I}\left ( x \right )dx\int_{\frac{\Omega _{3}}{\Omega _{1}}}^{\infty }f_{g_{Ri}}\left ( x \right )dx,\\ L_{1} =& L_{3}+ \left ( F_{I}\left ( \Omega_{2} \right ) -F_{I}\left ( \Omega_{1} \right )\right )\left ( 1-F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega_{1}} \right ) \right ), \end{aligned} \end{equation} where \begin{equation} \begin{aligned} L_{3}=&\int_{\frac{\Omega _{3}}{\Omega _{2}}}^{\frac{\Omega _{3}}{\Omega _{1}}}\left ( f_{g_{Ri}}\left ( x \right )\int_{\frac{\Omega _{3}}{x}}^{\Omega_{2}} f_{I}\left (y \right )dy\right )dx, \\ =&\int_{\frac{\Omega _{3}}{\Omega_{2}}}^{\frac{\Omega _{3}}{\Omega _{1}}} f_{g_{Ri}}\left ( x \right )\left [ F_{I}\left ( \Omega _{2} \right )-F_{I}\left ( \frac{\Omega _{3}}{x} \right ) \right ]dx,\\ L_{3}=& F_{I}\left ( \Omega _{2} \right )\left [ F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{1}}\right ) -F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{2}}\right )\right ]-L_{4}, \end{aligned} \end{equation} where $L_{4}$ is defined as \begin{equation} \begin{aligned} L_{4}=&\int_{\frac{\Omega _{3}}{\Omega _{2}}}^{\frac{\Omega_{3}}{\Omega_{1}}}f_{g_{Ri}}\left ( x \right )F_{I}\left ( \frac{\Omega_{3}}{x} \right )dx. \label{erl4} \end{aligned} \end{equation} Using the Gaussian-Chebyshev quadrature (GCQ) method \cite{abramowitz1988handbook}, \eqref{erl4} can be approximated as follows: \begin{equation} L_{4}\approx \frac{\frac{\Omega_{3}}{\Omega _{1}}-\frac{\Omega _{3}}{\Omega _{2}}}{2}\sum_{m=1}^{M}\frac{\pi }{M}\sqrt{1-\phi _{m}^2}f_{g_{Ri}}\left ( z_{1} \right )F_{I}\left ( \frac{\Omega_{3}}{z_{1}} \right )+R_{M}, \end{equation} where $\phi _{m}=\cos \left ( \frac{2m-1}{2M}\pi \right )$, $z_{1}=\frac{\frac{\Omega _{3}}{\Omega _{1}}-\frac{\Omega _{3}}{\Omega _{2}}}{2}\phi _{m}+\frac{\frac{\Omega _{3}}{\Omega _{1}}+\frac{\Omega _{3}}{\Omega _{2}}}{2}$, $M$ is the complexity-accuracy trade-off factor, and $R_{M}$ is the error term that can be ignored at sufficiently high $M$ values. Finally, the expression for $L_{1}$ can be approximated as shown in \eqref{longl1}. \begin{figure*}[!t] \normalsize \begin{multline} L_{1}\approx F_{I}\left ( \Omega_{2} \right )\left [ F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{1}}\right ) -F_{g_{Ri}}\left ( \frac{\Omega_{3}}{\Omega _{2}}\right )\right ]-\frac{\frac{\Omega _{3}}{\Omega_{1}}-\frac{\Omega_{3}}{\Omega _{2}}}{2}\sum_{m=1}^{M}\frac{\pi }{M}\sqrt{1-\phi _{m}^2}f_{g_{Ri}}\left ( z_{1} \right )F_{I}\left ( \frac{\Omega_{3}}{z_{1}} \right )-R_{M}+ \\ \left ( F_{I}\left ( \Omega_{2} \right ) - F_{I}\left ( \Omega_{1} \right )\right )\left ( 1-F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega_{1}} \right ) \right ).
\label{longl1} \end{multline} \begin{multline} F_{\gamma^{R}_{i}}\left ( z \right ) \approx 1- F_{I}\left ( \Omega _{2} \right )\left [ F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{1}}\right ) -F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{2}}\right )\right ]+\frac{\frac{\Omega _{3}}{\Omega_{1}}-\frac{\Omega_{3}}{\Omega _{2}}}{2}\sum_{m=1}^{M}\frac{\pi }{M}\sqrt{1-\phi _{m}^2}f_{g_{Ri}}\left ( z_{1} \right )F_{I}\left ( \frac{\Omega_{3}}{z_{1}} \right )+R_{M}- \\ \left ( F_{I}\left ( \Omega_{2} \right ) - F_{I}\left ( \Omega_{1} \right )\right ) \left ( 1-F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega_{1}} \right ) \right ) -\left ( 1-F_{I} \left (\Omega _{2} \right )\right ) \left (1-F_{g_{Ri}} \left ( \Omega_{4} \right ) \right ). \label{snrapatdes} \end{multline} \hrulefill \vspace*{4pt} \end{figure*} Similarly, $L_{2}$ is calculated as \begin{equation} \begin{aligned} L_{2}&= \mathbb{P}_{r} \left \{ I> \Omega_{2}\cap g_{Ri}> \Omega _{4}\right \}\\ &=\left ( 1-F_{I} \left ( \Omega _{2} \right )\right )\left (1-F_{g_{Ri}} \left ( \Omega _{4} \right ) \right ), \label{l1eq} \end{aligned} \end{equation} where $\Omega _{4}= \frac{z\sigma_{i} ^{2}T_{2}}{E_{max}\alpha _{Ri}}$. Then, the CDF of the SNR at each destination, $F_{\gamma^{R}_{i}}\left ( z \right ) $, can be obtained by substituting \eqref{longl1} and \eqref{l1eq} into \eqref{apen1}, as given in \eqref{snrapatdes}. Then, the result can be proved by substituting \eqref{snrapatdes} into \eqref{aprxberwcdf} and applying the GCQ method for the integration of the CDF function. \end{proof} \end{lemma} Finally, substituting \eqref{policy1errsr} and \eqref{policy1er} into \eqref{sucessprob}, the overall transmission success probability can be calculated. \section{Age of Information Analysis} This section estimates the AAoI of the two-way relay system. This system adopts the generate-at-will update generation model \cite{ceran2019average}.
Hence, $S_{A}$ and $S_{B}$ generate new status updates every transmission cycle to keep the information at the corresponding destinations as fresh as possible. The generated updates are then transmitted to the opposite sources through the relay. If $g(t)$ is the generation time of the freshest update received at the opposite source up to time $t$, the AoI is defined as the random process $ \Delta \left ( t \right )=t-g(t).$ \begin{figure}[!htb] \includegraphics[width=0.5\textwidth]{figure4.pdf} \caption{ Evolution of the AoI $\Delta(t)$ with time: each source generates updates at time stamps $\mathit{g_{1},g_{2},...,g_{n-1}}$ and the opposite source receives these updates at time stamps $\mathit{g_{2},g_{3},...,g_{n}}$; $\Delta(t)$ is the AoI at the opposite source (destination).} \label{f2a} \end{figure} As illustrated in \figurename {\ref{f2a}}, it is assumed that the measurement of the AoI starts at $t=0$ and that the AoI at the opposite source (destination) is initialised to $\Delta (0)=\Delta _{0}$. Each source generates updates at time stamps $\mathit{g_{1},g_{2},...,g_{n-1}}$ and the opposite source receives these updates at time stamps $\mathit{g_{2},g_{3},...,g_{n}}$. Update $i$ is transmitted from the source at time stamp $t = g_{i}$ and, if successful, is delivered to the opposite source at time stamp $g_{i+1}= g_{i}+T$, where $T=T_{1} +T_{2}$ is the total time allocated to one transmission cycle. Therefore, if the update packet is delivered successfully, the AoI at the opposite source at time $g_{i+1}$ is $\Delta (g_{i+1})=T$. The AoI increases linearly until the next update is successfully delivered to the opposite source; for example, when a packet fails to be decoded at time $g_{3}$, $\Delta(t)$ simply continues to increase linearly. For the considered time period $T_{c}$, the time-average AoI can be computed from the area under $\Delta(t)$.
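The graphical argument can be verified with a short sketch: for an arbitrary (hypothetical) list of inter-departure times, the area under the saw-tooth $\Delta(t)$ reproduces the renewal-type average $T+\sum_k X_k^{2}/(2\sum_k X_k)$ exactly.

```python
# Each delivered update takes a constant time T; X_k is the time between
# consecutive successful deliveries (a multiple of T when packets fail).
T = 1.0
X = [T * k for k in (1, 2, 1, 3, 2, 1, 4)]  # hypothetical inter-departure times

# Between two deliveries separated by X_k, the age rises linearly from T to
# T + X_k, so that segment contributes T*X_k + X_k**2 / 2 to the area.
area = sum(T * x + x ** 2 / 2 for x in X)
horizon = sum(X)
time_avg_age = area / horizon

# Renewal-reward form with sample moments: E[X^2] / (2 E[X]) + T.
EX = horizon / len(X)
EX2 = sum(x ** 2 for x in X) / len(X)
renewal_avg = EX2 / (2 * EX) + T

assert abs(time_avg_age - renewal_avg) < 1e-12
```

The identity holds path-by-path whenever the horizon ends at a delivery instant, which is why the graphical and moment-based computations of the AAoI agree.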
Accordingly, the time-average age is $\Delta_{{T_{c}}}= \frac{1}{T_{c}} \int_{0}^{T_{c}} \Delta(t)\,dt.$ As in \cite{kosta2017age}, the time-average age $\Delta_{{T_{c}}}$ tends to the ensemble-average age as $T_{c}\rightarrow \infty $, which can be expressed as \begin{equation} \Delta^{AAOI}=\mathbb{E}\left [ \Delta \right ]=\lim_{t\rightarrow \infty }\mathbb{E}\left [ \Delta \left ( t \right )\right ]=\lim_{T_{c}\rightarrow \infty }\Delta_{{T_{c}}}. \label{aaoie} \end{equation} Applying graphical methods to the saw-tooth age waveform in \figurename {\ref{f2a}} and using [\citeonline{basnayaka2021agej}, eq.~8], we can calculate the AAoI at each $S_{i},i\in \left \{ A,B \right \}$ as follows: \begin{equation} \Delta _{i}^{AAOI} =\frac{E[X_{i}^{2}]}{2E[X_{i}]} +T, \label{aaoif} \end{equation} where $X_{i}$ denotes the inter-departure time between two consecutive successfully received status updates at $S_{i}$. It is assumed that the end-to-end delay of each successfully received update is constant and given by $E[Y_{i}]=T_{1}+T_{2}=T$. The inter-departure time $X_{i}$ is $T$ times a geometric random variable, with mean $E[X_{i}] = \frac{T}{\varphi_{i}}$ and second moment $E\left [ X_{i}^2 \right ]=\frac{T^2\left ( 2-\varphi_{i} \right )}{\varphi_{i}^2}$. \begin{lemma} For the two-way relay network, the AAoI at each source is given by \begin{equation} \Delta _{i}^{AAOI}= \frac{T}{2}+\frac{T}{\varphi_{i}}. \end{equation} \begin{proof} The result follows by substituting $E[X_{i}] = \frac{T}{\varphi_{i}}$ and $E\left [ X_{i}^2 \right ]=\frac{T^2\left ( 2-\varphi_{i} \right )}{\varphi_{i}^2}$ into (\ref{aaoif}). \end{proof} \end{lemma} The expected weighted sum AAoI of the two-way relay system is then $ \Delta _{Sum}^{AAoI}=\sum_{i\in\left \{ A,B \right \}}\omega _{i} \Delta _{i}^{AAOI}$, where $\omega _{i}$ is the weighting coefficient at $S_{i}$.
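The substitution in the lemma can be mirrored numerically: the sketch below (with a hypothetical success probability $\varphi$ and slot time $T$) computes the moments of the geometric inter-departure count directly from its pmf, checks them against the stated closed forms, and confirms $\Delta^{AAOI} = T/2 + T/\varphi$.

```python
T, phi = 2.0, 0.6  # hypothetical cycle time and success probability

# Moments of K ~ Geometric(phi), K in {1, 2, ...}, summed from the pmf
# (the truncated tail beyond k = 400 is numerically zero for phi = 0.6).
EK  = sum(k * phi * (1 - phi) ** (k - 1) for k in range(1, 400))
EK2 = sum(k ** 2 * phi * (1 - phi) ** (k - 1) for k in range(1, 400))
assert abs(EK - 1 / phi) < 1e-9                 # E[K]  = 1/phi
assert abs(EK2 - (2 - phi) / phi ** 2) < 1e-9   # E[K^2] = (2-phi)/phi^2

# Inter-departure time X = T * K, then the AAoI formula (aaoif).
EX, EX2 = T * EK, T ** 2 * EK2
aaoi = EX2 / (2 * EX) + T
assert abs(aaoi - (T / 2 + T / phi)) < 1e-9     # lemma's closed form
```

The algebra is exact: $E[X^2]/(2E[X]) = T(2-\varphi)/(2\varphi) = T/\varphi - T/2$, so adding the constant delay $T$ gives $T/2 + T/\varphi$.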
\section{Simulation results and discussions} In this section, we present the analytical and numerical simulation results. Unless otherwise stated, the simulation parameters are set as: $d_{AR}$ $=$ \SI{30}{\metre}, $d_{BR}$ $=$ \SI{30}{\metre}, $f_{c}$ $=$ \SI{900}{\mega \hertz}, speed of light $c$ $=$ \SI{3e8}{\metre\per\second}, ${P}_{A}$ $=$ 1 W, ${P}_{B}$ $=$ 1 W, $T_{s}$ $=$ \SI{20}{\micro \second}, $n^{A}_{R},n^{B}_{R}$ $=$ 200 bits, $n^{R}_{A},n^{R}_{B}$ $=$ 200 bits, $k^{A}_{R},k^{B}_{R}$ $=$ 32 bits, noise power ($\sigma_{R}^2,\sigma_{A}^2,\sigma_{B}^2$) $=$ $-100$ dBm, $E_{max}$ $=$ 0.001 J, $P_{min}$ $=$ 0.0001 mW, $\rho $ $=$ 0.5, $\omega _{A}$ $=$ 0.5, $\omega _{B}$ $=$ 0.5 and energy conversion efficiency $\eta$ $=$ 0.9. \begin{figure}[!htbp] \includegraphics[width= \linewidth ,height=0.85\linewidth]{simfig1.eps} \caption{ Weighted sum AAoI as a function of transmission power.} \label{sim1} \end{figure} \par Fig. \ref{sim1} plots the weighted sum AAoI as a function of transmission power for different source--relay distances. The weighted sum AAoI decreases dramatically as the transmit power at the sources increases, since a higher transmit power reduces the error probability at the relay node and increases the amount of energy harvested by the relay. For large transmission power levels, however, the AAoI saturates, since the number of erroneous packets affecting the AAoI becomes negligible. Moreover, when the distance between the relay and the sources is short, the AAoI is low; as the distance increases, the AAoI grows due to the lower SNR. The numerically simulated AAoI coincides well with the approximated results, especially at moderate SNR values, since the linear approximation applied in \eqref{qfap} is tight in that regime \cite{basnayaka2021age}.
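The linearised Q-function referenced above can be reproduced in a few lines. The sketch below (illustrative $n$ and $k$, matching no particular figure) builds $\Theta(z)$ from $\beta = 1/(2\pi\sqrt{2^{2k/n}-1})$ and $\psi = 2^{k/n}-1$ as in \eqref{qfap}, and checks it against the exact finite-blocklength expression at the midpoint $\psi$, where both equal $1/2$.

```python
import math

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

n, k = 200, 32  # illustrative block length and information bits
beta = 1 / (2 * math.pi * math.sqrt(2 ** (2 * k / n) - 1))
psi = 2 ** (k / n) - 1
phi_lo = psi - 1 / (2 * beta * math.sqrt(n))
delta_hi = psi + 1 / (2 * beta * math.sqrt(n))

def theta(g):
    """Piecewise-linear approximation of Q((n*C(g) - k) / sqrt(n*V(g)))."""
    if g <= phi_lo:
        return 1.0
    if g >= delta_hi:
        return 0.0
    return 0.5 - beta * math.sqrt(n) * (g - psi)

def q_exact(g):
    C = math.log2(1 + g)
    V = (math.log2(math.e) ** 2 / 2) * (1 - 1 / (1 + g) ** 2)
    return Q((n * C - k) / math.sqrt(n * V))

# Both curves pass through 1/2 at g = psi = 2^{k/n} - 1.
assert abs(theta(psi) - 0.5) < 1e-9
assert abs(q_exact(psi) - 0.5) < 1e-6
assert theta(phi_lo) == 1.0 and theta(delta_hi) == 0.0
```

The approximation clips to 1 below $\phi$ and to 0 above $\delta$, which is what makes the CDF-based integral \eqref{aprxberwcdf} finite-ranged and tractable.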
\begin{figure}[!htbp] \includegraphics[width= \linewidth ,height=0.8\linewidth]{simfig2.eps} \caption{ Weighted sum AAoI as a function of block length.} \label{sim2} \end{figure} Next, in Fig. \ref{sim2}, we plot the weighted sum AAoI versus block length. When the transmission power is high, the AAoI increases with block length, since the number of erroneous packets is already low under high-SNR conditions and a longer block only increases the transmission time. In low-SNR scenarios, however, a small block length inflates the AAoI due to the high block error probability, and increasing the block length towards its optimal value decreases the AAoI owing to the reduced error probability. Beyond the optimal value, increasing the block length increases the AAoI again, because the added transmission time outweighs the further decrease in block error probability. This result shows that a short block length does not always maintain information freshness. \begin{figure}[!htbp] \includegraphics[width= \linewidth ,height=0.8\linewidth]{simfig3.eps} \caption{ Weighted sum AAoI as a function of update size.} \label{sim3} \end{figure} In Fig. \ref{sim3}, we present the weighted sum AAoI versus update size. If the transmission power is low, the AAoI increases with the packet size under a fixed block length, since a larger packet increases the overall block error probability. In high-SNR scenarios, however, the packet size has little effect on the AAoI since the block error probability is low. \begin{figure}[!htbp] \centering \includegraphics[width=0.48\textwidth,height=0.8\linewidth]{simfig4.eps} \caption{Weighted sum AAoI as a function of $P_{min}$.} \label{sim7} \end{figure} \figurename{ \ref{sim7}} illustrates the weighted sum AAoI as a function of $P_{min}$. The weighted sum AAoI of the system is monotonically increasing in $P_{min}$, since higher $P_{min}$ thresholds increase update loss at the relay.
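The non-monotonic block-length behaviour discussed above can be reproduced with a toy model (hypothetical fixed SNR and symbol time; the energy-harvesting stage and asymmetric hops are ignored for brevity): the error probability comes from the normal approximation, both hops must succeed, $T = 2nT_s$, and the AAoI is $T/2 + T/\varphi$.

```python
import math

def aaoi(n, snr=0.15, k=32, Ts=1.0):
    """Toy AAoI versus block length n for two equal fixed-SNR hops
    (assumed simplification of the relay system, not the full model)."""
    C = math.log2(1 + snr)
    V = (math.log2(math.e) ** 2 / 2) * (1 - 1 / (1 + snr) ** 2)
    # Normal-approximation block error probability for n uses of the channel.
    eps = 0.5 * math.erfc((n * C - k) / math.sqrt(n * V) / math.sqrt(2))
    phi = (1 - eps) ** 2      # both hops must decode successfully
    T = 2 * n * Ts            # two equal transmission slots per cycle
    return T / 2 + T / phi

# Interior optimum: too-short blocks fail too often, too-long blocks
# waste transmission time, matching the discussion of Fig. (sim2).
assert aaoi(200) < aaoi(170)
assert aaoi(200) < aaoi(400)
```

At this low assumed SNR the AAoI is minimised at an intermediate block length, whereas sweeping a high SNR (e.g. `snr=10`) makes the error term negligible and the AAoI grows essentially linearly in $n$.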
\section{Conclusions} This work developed a model to estimate the AAoI in a two-way relay equipped with SWIPT that operates under ultra-reliable and low latency constraints. We derived an approximation for AAoI at each source using linear approximation techniques. The impacts of various parameters was studied,i.e., including block length, packet size, transmission power and noise level. Then, the numerical analysis to evaluate and validate the derived results. We observed that packet size does not affect freshness when SNR is high. Short packet communication retains an improved AoI performance in low SNR scenarios. This paper concludes that the short block length communications does not always assist in maintaining freshness in SWIPT--enabled communications systems, even though it always assists in maintaining a low latency. \section{Acknowledgement} This work is funded by the CEU-Cooperativa de Ensino Universit\'{a}rio, Portugal. \bibliographystyle{IEEEtran} \section{introduction} The use cases of 5G communications are classified into three broad categories: enhanced Mobile BroadBand, URLLC and mMTC. URLLC is necessary for mission-critical applications such as unmanned aerial vehicle (UAV) communication and process automation\cite{basnayaka2021agej,sharma2020communication,perera2020age}. The packet size should be extremely small in URLLC to facilitate low-latency transmission. The Shannon-Hartley Capacity theorem is no longer relevant in this scenario since the law of large numbers is invalid. However, using finite block length information theory, we can derive the achievable data rate under short packet communication as a function of the SNR, the block length and the decoding error probability\cite{polyanskiy2010channel}. In addition, for these mission-critical applications, the freshness of the information is of high importance, along with the URLLC requirements. 
This has prompted growing interest in the age of information (AoI), a performance metric that quantifies the freshness of the information. \par On the other hand, SWIPT is an emerging technology for future wireless communication systems. In general, practical SWIPT receivers for energy harvesting (EH) and information decoding have been established through the use of power splitting (PS) and time switching (TS) techniques. Since EH shares the resources allocated to information transmission, this has an influence on the performance of wireless communication. However, minimal research has been conducted to analyse SWIPT-enabled relays using finite block length information theory and AoI. \par This paper presents a wireless relay system with SWIPT for future mission-critical URLLC-enabled applications. To the best of the authors' knowledge, no prior study has analysed AoI in a SWIPT and URLLC enabled wireless relay network. In this work, a two-way wireless relay system employs a nonlinear PS model for energy harvesting and short packet communication is employed to address the trade-off between reliability and latency. We derived an approximation for the average AoI (AAoI) of the proposed relay scheme under the finite blocklength constraint. In comparison to prior work on SWIPT, we examine the AoI performance of the proposed SWIPT system using threshold-based nonlinear EH. Furthermore, we examine the effect of various factors, including block length and packet size, on the weighed sum AAoI. 
\section{System Model} \begin{figure}[!htbp] \includegraphics[width=0.485\textwidth,keepaspectratio]{Presentation1.pdf} \caption{ In the considered system model, sources $A$ $(S_A)$ and $B$ $(S_B)$ exchange status updates with each other with the help of a single relay $(R)$; sources send updates to the $R$ during the first transmission time slot $T_{1}$ while the $R$ is harvesting energy and then the $R$ exchanges updates received from each source using harvested energy during the second transmission time slot $T_{2}$.} \label{sytmmod} \end{figure}This paper considers a two-way cooperative status update system where two source nodes, source $A$ $(S_A)$ and source $B$ $(S_B)$ exchange status updates with each other as timely as possible with the help of a single relay $(R)$. $R$ adopts the decode-and-forward relaying protocol. Specifically, $S_{i},i\in \left ( A, B \right )$ transmits status updates which can be generated at the beginning of any time slot. The sources in this system are regarded as energy providers and the relay is equipped with an energy harvesting device and is capable of conducting data forwarding and harvesting energy simultaneously. In this paper, we assume $R$ adopts the dynamic power splitting technique \cite{perera2017simultaneous}. As shown in \figurename{ \ref{sytmmod}}, we consider a two-time slot transmission scheme in which $S_{i}$ sends updates to the $R$ while harvesting energy during the first transmission time slot and then the $R$ exchange updates received from $S_{i}$ using harvested energy during the second transmission time slot. Specifically, $S_{i},i\in \left ( A, B \right )$ transmits status updates which can be generated at the beginning of any time circle following generate-at-will updates generation model \cite{ceran2019average}. Under this policy, the relay uses energy harvested within the first transmission time slot ($T_{1}$) for the transmission in the second transmission time slot ($T_{2}$) without waiting. 
Suppose the harvested energy is less than the minimum required energy for the transmission. In that case, the relay does not transmit received updates and updates received from both sources are destroyed at the relay. $H_{ij}$ represents the channel coefficient of the channel between node $i$ to node $j$ where $i,j\in \left \{ A,B,R \right \}$ and $i\neq j$. The small scale channel gain is $g_{\textit{i}\textit{j}}=\left | h_{\textit{i}\textit{j}}\right |^2$, where $h_{\textit{i}\textit{j}} \sim \mathcal{C}\mathcal{N}(0,1)$ is the Rayleigh fading channel coefficient. The probability density function (PDF) of the small scale channel gain is defined as $ f_{g_{\textit{i}\textit{j}}}(z)=e^{-z},z\geq 0$. The large scale channel gain $\alpha_{ij}$ is given by $-10\log_{10}(\alpha_{\textit{i}\textit{j}} )= 20 \log_{10}(d_{\textit{i}\textit{j}})+20\log_{10}(\frac{4\pi f_{c}}{c})$, where $f_{c}, d_{\textit{i}\textit{j}}$ are the carrier frequency and distance between node $i$ and node $j$ respectively. $c$ is the speed of light in the space. Thus, channel coefficient is written as $H_{ij}=\sqrt{\alpha_{ij}g_{ij}}$ and channel gain is written as $ G_{ij}=\alpha_{ij}g_{ij}$. Up-link transmission between the source and the relay are performed in an orthogonal channel. \section{Block error analysis under finite block-length } Our first objective is to study block error probability at each destination (opposite source) in this system. The block error probability at the destination has been derived using different mathematical approximation techniques in this section. \par To calculate block error probability at each node, it is necessary to derive the SNR at each node. 
The received SNR at the relay from each source node $S_{{i}'},{i}' \in \left \{ A,B \right \}$ is given by, \begin{equation} \gamma^{{i}'}_{R}=\frac{(1-\rho) P_{{i}'}G_{{i}'R}}{\sigma_{R} ^{2} }, \label{snratrfrs} \end{equation} where $\rho $ is the power spitting factor, noise power at the relay is denoted as $\sigma^{2}_R$ and transmit power at the $S_{{i}'}$ given by $P_{{i}'} $. Then, the energy harvested by the relay from the each node is given by $ E^{{i}'}_{R}= \rho \eta P_{{i}'}G_{{i}'R}T_{1}$, where $\eta$\ is the energy conversion efficiency and $T_{1}$ is the transmission time of the first transmission slot and it can be calculated as $T_{1}=n^{{i}'}_{R}T_{s}$, where $n^{{i}'}_{R}$ is the allocated block length for the transmission between ${i}'$ and $R$ and $T_{s}$ is the symbol duration. Then, the total energy harvested by the relay within the first transmission slot is given by \begin{equation} E_{R}=\sum_{{i}' = \left \{ A,B \right \}}^{} E^{{i}'}_{R}=\rho \eta T_{1}\left ( P_{A}G_{AR} +P_{B}G_{BR}\right ). \label{havestpower} \end{equation} The available energy harvested by the relay for transmission is given by \begin{equation} E_{R}^{T}= \begin{cases} E_{max} & E_{R}\geq E_{max} \\ E_{R} & E_{max}> E_{R}> E_{min} \\ 0 & \text{otherwise}, \end{cases} \label{avaitransp} \end{equation}where $E_{max}$ is the maximum energy limit that the relay can harvest and $E_{min}$ is the minimum energy required for transmission. $E_{min}$ can be calculated as $E_{min}=P_{min}T_{2}$, where $P_{min}$ is the minimum required power for the transmission while $T_{2}$ is the time of the second transmission slot and $T_{2}$ is calculated as $ T_{2}=\sum_{i = \left \{ A,B \right \}}^{} n^{R}_{i}T_{s}$, where $n^{R}_{i}$ is the allocated block length for the transmission between $R$ and $i$. Then, the transmit power of relay is calculated as $P_{R}=\frac{E_{T}^{R}}{T_{2}}$. 
During the second time slot, the received SNR at each $S_{i} \in\left \{ A,B \right \}$ is given by $ \gamma_{i}=\frac{ P_{R}G_{Ri}}{\sigma_{i} ^{2} }$ Then, using \eqref{havestpower} and \eqref{avaitransp} SNR at the destination is expressed as \begin{equation} \gamma _{i}= \begin{cases} \frac{E_{max}\alpha_{Ri}g_{Ri}}{T_{2}\sigma_{i} ^{2} }, & E_{R}> E_{max} \\ \frac{E_{R}\alpha_{Ri}g_{Ri}}{T_{2}\sigma_{i}^{2} }, & E_{max}\geq E_{R}\geq E_{min} \\ 0 ,& \text{otherwise}. \end{cases} \end{equation} An outage happens when the relay or opposite source are unable to decode the received message successfully. Hence, the system overall transmission success probability $\varphi_{i} $ at each source node $i$ can be calculated as \begin{equation} \varphi_{i}= 1-\left ( \varepsilon^{{i}'}_{R}+\left (1-\varepsilon^{{i}'}_{R} \right ) \varepsilon^{R}_{i} \right ), \label{sucessprob} \end{equation} where $i\neq {i}' , i,{i}' \in \left \{ A,B \right \}$ and $\varepsilon^{t}_{j}$ is the decoding error probability at receiving node $j\in \left ( i,R \right )$ for block received from node $t\in \left ( {i}',R \right )$. Due to the static nature of the communication channels, it is assumed that the fading coefficients stay constant over the duration of each transmission block. Following Polyanskiy's results on short packet communication \cite{polyanskiy2010channel} and assuming that the receiver has the perfect channel state information, the expectation of the block error probability at the receiving node for a given block length $n^{t}_{j}$ can be written as \begin{equation} \varepsilon^{t}_{j}=\mathbb{E}\left [ Q\left (\frac{n^{t}_{j}C(\gamma^{t}_{j})-k^{t}_{j}}{\sqrt{n^{t}_{j}V(\gamma^{t}_{j})}} \right ) \right ], \end{equation} where $\mathbb{E}\left [ . 
\right ]$ is the expectation operator, $Q(x)=\frac{1}{\sqrt{2\pi }}\int_{x}^{\infty }e^{-\frac{t_{2}}{2}}dt$ and $V(\gamma^{t}_{j})$ is the channel dispersion, which can be written $V(\gamma^{t}_{j} )=\frac{{\log _{2}}^{2}e}{2}(1-\frac{1}{(1+\gamma^{t}_{j} )^2})$. The variable $C(\gamma^{t}_{j})$ denotes the channel capacity of a complex AWGN channel and it is given by $C(\gamma^{t}_{j})=\log _{2}(1+\gamma^{t}_{j})$. The number of bits per block represents by $k^{t}_{j}$. Moreover, under the Rayleigh fading channel conditions, $\varepsilon^{t}_{j} $ can be formulated as \begin{equation} \varepsilon^{t}_{j} = \int_{0}^{\infty }f_{\gamma^{t}_{j}} (z)Q\left (\frac{n^{t}_{j}C(\gamma^{t}_{j})-k^{t}_{j}}{\sqrt{n^{t}_{j}V(\gamma^{t}_{j})}} \right)dz, \label{e11} \end{equation} where $f_{\gamma^{t}_{j}}(z)$ denotes the PDF of the received SNR ($\gamma^{t}_{j}$) at the receiving node $j$. Due to the complexity of the Q-function, it is challenging to get a closed-form expression for the overall decoding error probability. Thus, using the approximation technique given in \cite{makki2014finite} and \cite{gu2017ultra}, \eqref{e11} can be approximated as $ \varepsilon^{t}_{j} \approx \int_{0}^{\infty }f_{\gamma^{t}_{j}} (z)\Theta^{t}_{j} (z) dz$, where $\Theta^{t}_{j}(z)$ denotes the linear approximation of $Q\left ( \frac{n^{t}_{j}C(\gamma^{t}_{j})-k^{t}_{j}}{\sqrt{n^{t}_{j}V(\gamma^{t}_{j}})} \right)$, this can be expressed as in \cite{gu2017ultra} \begin{equation} \Theta^{t}_{j} (z)=\left \{ \begin{matrix} 1,& \gamma^{t}_{j}\leq \phi^{t}_{j}, & \\ \frac{1}{2}-\beta^{t}_{j} \sqrt{n^{t}_{j}}(\gamma^{t}_{j}-\psi^{t}_{j}), & \phi^{t}_{j}< \gamma^{t}_{j} <\delta^{t}_{j}, & \\ 0,& \gamma^{t}_{j} \geq \delta^{t}_{j},& \end{matrix} \right. 
\label{qfap} \end{equation} where $\beta^{t}_{j} =\frac{1}{2\pi \sqrt{2^{\frac{2k^{t}_{j}}{n^{t}_{j}}}-1}},\psi_{j}=2^{\frac{k^{t}_{j}}{n^{t}_{j}}}-1,\phi^{t}_{j}=\psi^{t}_{j}-\frac{1}{2\beta^{t}_{j}\sqrt{n^{t}_{j}}}$ and $\delta^{t}_{j}=\psi^{t}_{j}+\frac{1}{2.\beta^{t}_{j}\sqrt{n^{t}_{j}}}$. By using above linear approximation $\varepsilon^{t}_{j}$ can be expressed as \begin{equation} \varepsilon^{t}_{j} \approx \beta^{t}_{j}\sqrt{n^{t}_{j}}\int_{\phi^{t}_{j}}^{\delta^{t}_{j}}F_{\gamma^{t}_{j}}\left ( z \right )dz, \label{aprxberwcdf} \end{equation} where $F_{\gamma^{t} _{j}}\left ( z \right )$ denotes the CDF of the received SNR ($\gamma^{t}_{j}$) at receiving node $j$. To calculate success probability at each source $i$ using \eqref{sucessprob}, it is necessary to calculate $\varepsilon^{{i}'}_R$ and $\varepsilon^{R}_{i}$. Using (\ref{qfap}) and (\ref{aprxberwcdf}) block error probabilities at the $R$ and each source can be calculated as follows, \begin{equation} \varepsilon^{{i}'}_{R} \approx \beta^{{i}'}_{R}\sqrt{n^{{i}'}_{R}}\int_{\phi^{{i}'}_{R}}^{\delta^{{i}'}_{R}}F_{\gamma^{{i}'}_{R}}\left ( z \right )dz, \label{aprxberwcdf_R} \end{equation} \begin{equation} \varepsilon^{R}_{i} \approx \beta^{R}_{i}\sqrt{n^{R}_{i}}\int_{\phi^{R}_{i}}^{\delta^{R}_{i}}F_{\gamma^{R}_{i}}\left ( z \right )dz. \end{equation} \begin{lemma} An approximation for the block error probability at the relay is derived as \begin{multline} \varepsilon^{{i}'}_{R} \approx 1-\left (\frac{(1-\rho) P_{{i}'}\alpha _{{i}'R}\beta^{{i}'}_{R}\sqrt{n^{{i}'}_{R}}}{\sigma_{R} ^{2}} \right ) \\ \left ( e^{-\frac{\phi^{{i}'}_{R}\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}}-e^{-\frac{\delta^{{i}'}_{R}\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}} \right ). 
\label{policy1errsr} \end{multline} \begin{proof} The PDF of SNR at relay from each source can be calculated using \eqref{snratrfrs} as $ f_{\gamma^{{i}'}_{R}}{}\left ( x \right )=\frac{\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}e^{-\frac{x\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}}$ Then, the CDF can be calculated as \begin{equation} \begin{aligned} F_{\gamma^{{i}'}_{R}}\left ( z \right )&= 1-e^{-\frac{z\sigma_{R} ^{2}}{(1-\rho) P_{{i}'}\alpha _{{i}'R}}}. \label{cdfsnratrvt} \end{aligned} \end{equation} Then, the result can be proved by substituting \eqref{cdfsnratrvt} to $\eqref{aprxberwcdf_R}$. \end{proof} \end{lemma} \begin{lemma} Block error probability at the opposite receiving node can be derived as follows: \begin{equation} \varepsilon^{R}_{i}\approx \beta^{R}_{i}\sqrt{n^{R}_{i}}\left ( \left ( \frac{\delta ^{R}_{i}+\phi^{R}_{i}}{2} \right )\sum_{v=1}^{V}\frac{\pi }{V}\sqrt{1-\phi_{v}^2}F_{\gamma^{R}_{i}}\left ( q \right )+R_{V} \right ) \label{policy1er} \end{equation} where $\phi_{v}=\cos \left ( \frac{2v-1}{2v} \pi \right )$, $q=\left ( \frac{\delta^{R}_{i}-\phi^{R}_{i}}{2} \right )\phi_{v}+\left ( \frac{\delta^{R}_{i}+\phi^{R}_{i}}{2} \right )$, $V$ is the complexity-accuracy trade-off factor, while $R_{V}$ denotes the error term, which is ignored at substantially larger values of $V$. \begin{proof} To calculate error probability at $S_{i}$, it is necessary to derived the CDF of SNR. Using \eqref{avaitransp}, $F_{\gamma^{R}_{i}}\left ( z \right )$ is evaluated as \begin{equation} \begin{aligned} F_{\gamma^{R}_{i}}\left ( z \right )= &\mathbb{P}_{r}\left ( \gamma^{R}_{i}< z \right )= 1- \mathbb{P}_{r}\left \{ E_{R} \geq E_{min} \cap \gamma^{R}_{i}> z\right \}\\ F_{\gamma^{R}_{i}}\left ( z \right )=& 1- \underbrace{\mathbb{P}_{r} \left \{ E_{min}\leq E_{R}\leq E_{max}\cap\gamma^{R}_{i}> z\right \}}_{L_{1}} \\ & - \underbrace{\mathbb{P}_{r} \left \{ E_{R}\geq E_{max}\cap\gamma^{R}_{i}> z\right \}}_{L_{2}}. 
\end{aligned} \label{apen1} \end{equation} Then, substituting $E_{R}=\rho \eta T_{1}\left ( P_{A}\alpha _{AR}g_{AR} +P_{B}\alpha_{BR}g_{BR}\right )$ into \eqref{apen1}, $L_{1}$ is evaluated as \begin{equation} \begin{aligned} L_{1}= &\mathbb{P}_{r}\left \{ \Omega_{1} < I< \Omega _{2} \, \cap \, Ig_{Ri}> \Omega _{3} \right \}, \end{aligned} \end{equation} or \begin{equation} L_{ 1}= \begin{cases} 0, & g_{Ri}< \frac{\Omega _{3}}{ \Omega_{2}}, \\ \mathbb{P}_{r}\left \{ \frac{\Omega_{3}}{g_{Ri}}< I\leqslant \Omega _{2}\right \},&\frac{\Omega _{3}}{\Omega _{2}} <g_{Ri}<\frac{\Omega _{3}}{\Omega _{1}}, \\ \mathbb{P}_{r}\left \{ \Omega_{1}< I\leqslant \Omega _{2}\right \},& g_{Ri}> \frac{\Omega _{3}}{\Omega _{1}}, \end{cases} \end{equation} where $\Omega _{1}=\frac{E_{min}}{\rho \eta T_{1}}$ , $\Omega _{2}=\frac{E_{max}}{\rho \eta T_{1}}$, $\Omega _{3}=\frac{z\sigma _{i}^{2}T_{2}}{\rho \eta T_{1}\alpha _{Ri}}$ and $ I=\sum_{i=\left \{ A,B \right \}}^{} P_{i}\alpha _{iR}g_{iR}$. To calculate $L_{1}$ it is necessary to get PDF and CDF of $I$ and $g_{Ri}$. Then, to calculate the PDF of $I$, it is considered as summation of two independent random variable as $I=\mu_{1}+\mu_{2 }$, where $\mu_{1}\sim \mathrm{exp}(\frac{1}{P_{A}\alpha_{AR}})$ and $\mu_{2} \sim \mathrm{exp}(\frac{1}{P_{B}\alpha_{BR}})$. 
Using the concepts of convolution of random variables, PDF and CDF of $I$ can be calculated as follows: \begin{equation} \begin{aligned} f_{I}\left ( z \right ) & =\int_{-\infty }^{\infty } f_{\mu_{1}}(x)f_{\mu_{2}}(z-x)dx,\\ & = \int_{0}^{z}\frac{1}{P_{A}\alpha_{AR}}e^{-\frac{1}{P_{A}\alpha_{AR}}x}\frac{1}{P_{B}\alpha_{BR}}e^{-\frac{1}{P_{B}\alpha_{BR}}(z-x)}dx,\\ & = \frac{1}{P_{A}\alpha_{AR}P_{B}\alpha_{BR}}e^{-\frac{1}{P_{B}\alpha_{BR}}z}\int_{0}^{z}e^{(\frac{1}{P_{B}\alpha_{BR}}-\frac{1}{P_{A}\alpha_{AR}})x}dx,\\ f_{I}\left ( z \right ) & = \begin{cases} \frac{1}{P_ {A}\alpha_{AR}-P_{B}\alpha_{BR}}(e^{-\frac{1}{P_{A}\alpha_{AR}}z}-e^{-\frac{1}{P_{B}\alpha_{BR}}z}), \\ \qquad \textrm{if} \quad \frac{1}{P_{A}\alpha_{AR}}\neq \frac{1}{P_{B}\alpha_{BR}},\\ \frac{1}{(P\alpha)^2}ze^ {-\frac{1}{P\alpha}z}, \quad \textrm{if}\, \: \frac{1}{P_{A}\alpha_{AR}}= \frac{1}{P_{B}\alpha_{BR}}=\frac{1}{P\alpha}, \end{cases}. \end{aligned} \end{equation} where $f$ denotes the PDF function of a random variable. 
Then, the CDF of $I$ can be calculated as \begin{equation} \begin{aligned} F_{I}(z)&=P(Z\leq z)=\int_{0}^{z} f(t) dt, \\ F_{I}(z)&=\begin{cases} 1+\frac{P_{B}\alpha_{BR}}{P_A \alpha_{AR}-P_{B}\alpha_{BR}}e^{-\frac{z}{P_{B}\alpha_{BR}}}\\ \quad - \frac{P_{A}\alpha_{AR}}{P_{A}\alpha_{AR}-P_{B}\alpha_{BR}}e^{-\frac{z}{P_{A}\alpha_{AR}}}, & \\ ,\textrm{if}\, P_{A}\alpha_{AR}\neq P_{B}\alpha_{BR} ,& \\ 1-e^{-\frac{1}{P\alpha}z}\left ( 1+\frac{1}{P\alpha}z \right ) \textrm{if}\, \: \frac{1}{P_{A}\alpha_{AR}}= \frac{1}{P_{B}\alpha_{BR}}=\frac{1}{P\alpha}, \end{cases} \end{aligned} \end{equation} Further, we derive the approximation for $L_{1}$ as follows, \begin{equation} \begin{aligned} L_{1} &=\int_{\frac{\Omega_{3}}{\Omega_{2}}}^{\frac{\Omega_{3}}{\Omega_{1}}}\left ( f_{g_{Ri}}\left ( x \right )\int_{\frac{\Omega_{3}}{x}}^{\Omega _{2}} f_{I}\left (y \right )dy\right )dx\\ & \: \: \: +\int_{\Omega_{1}}^{\Omega_{2}}f_{I}\left ( x \right )dx\int_{\frac{\Omega _{3}}{\Omega _{1}}}^{\infty }f_{g_{Ri}}\left ( x \right )dx,\\ L_{1} =& L_{3}+ \left ( F_{I}\left ( \Omega_{2} \right ) -F_{I}\left ( \Omega_{1} \right )\right )\left ( 1-F_{Ri}\left ( \frac{\Omega _{3}}{\Omega_{1}} \right ) \right ), \end{aligned} \end{equation} where \begin{equation} \begin{aligned} L_{3}=&\int_{\frac{\Omega _{3}}{\Omega _{2}}}^{\frac{\Omega _{3}}{\Omega _{1}}}\left ( f_{g_{Ri}}\left ( x \right )\int_{\frac{\Omega _{3}}{x}}^{\Omega_{2}} f_{I}\left (y \right )dy\right )dx, \\ =&\int_{\frac{\Omega _{3}}{\Omega_{2}}}^{\frac{\Omega _{3}}{\Omega _{1}}} f_{g_{Ri}}\left ( x \right )\left [ F_{I}\left ( \Omega _{2} \right )-F_{I}\left ( \frac{\Omega _{3}}{x} \right ) \right ]dx,\\ L_{3}=& F_{I}\left ( \Omega _{2} \right )\left [ F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{1}}\right ) -F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{2}}\right )\right ]-L_{4}, \end{aligned} \end{equation} where $L_{4}$ is defined as \begin{equation} \begin{aligned} L_{4}=&\int_{\frac{\Omega _{3}}{\Omega _{2}}}^{\frac{\Omega 
_{3}}{\Omega_{1}}}f_{g_{Ri}}\left ( x \right )F_{I}\left ( \frac{\Omega_{3}}{x} \right )dx, \label{erl4} \end{aligned} \end{equation} Using Gaussian-Chebyshev-Quadrature (GCQ) method \cite{abramowitz1988handbook}, \eqref{erl4} can be approximated as follows, \begin{equation} L_{4}\approx \frac{\frac{\Omega_{3}}{\Omega _{1}}+\frac{\Omega _{3}}{\Omega _{2}}}{2}\sum_{m=1}^{M}\frac{\pi }{M}\sqrt{1-\phi _{m}^2}f_{g_{Ri}}\left ( z_{1} \right )F_{I}\left ( \frac{\Omega_{3}}{z_{1}} \right )+R_{M}, \end{equation} where $\phi _{m}=\cos \left ( \frac{2m-1}{2M}\pi \right )$, $z_{1}=\frac{\frac{\Omega _{3}}{\Omega _{1}}-\frac{\Omega _{3}}{\Omega _{2}}}{2}\phi _{m}+\frac{\frac{\Omega _{3}}{\Omega _{1}}+\frac{\Omega _{3}}{\Omega _{2}}}{2}$, $M$ is the complexity-accuracy trade-off factor, and $R_{M}$ is the error term that can be ignored at sufficiently high $M$ values. Finally, expression for $L_{1}$ can be approximated as shown in \eqref{longl1}. \begin{figure*}[!t] \normalsize \begin{multline} L_{1}\approx F_{I}\left ( \Omega_{2} \right )\left [ F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{1}}\right ) -F_{g_{Ri}}\left ( \frac{\Omega_{3}}{\Omega _{2}}\right )\right ]-\frac{\frac{\Omega _{3}}{\Omega_{1}}+\frac{\Omega_{3}}{\Omega _{2}}}{2}\sum_{m=1}^{M}\sqrt{1-\phi _{m}^2}f_{g_{Ri}}\left ( z_{1} \right )F_{I}\left ( \frac{\Omega_{3}}{z_{1}} \right )+R_{M}+ \\ \left ( F_{I}\left ( \Omega_{2} \right ) - F_{I}\left ( \Omega_{1} \right )\right )\left ( 1-F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega_{1}} \right ) \right ). 
\label{longl1} \end{multline} \begin{multline} F_{\gamma^{R}_{i}}\left ( z \right ) \approx 1- F_{I}\left ( \Omega _{2} \right )\left [ F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{1}}\right ) -F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega _{2}}\right )\right ]-\frac{\frac{\Omega _{3}}{\Omega_{1}}+\frac{\Omega_{3}}{\Omega _{2}}}{2}\sum_{m=1}^{M}\sqrt{1-\phi _{m}^2}f_{g_{Ri}}\left ( z_{1} \right )F_{I}\left ( \frac{\Omega_{3}}{z_{1}} \right )+R_{M}+ \\ \left ( F_{I}\left ( \Omega_{2} \right ) - F_{I}\left ( \Omega_{1} \right )\right ) \left ( 1-F_{g_{Ri}}\left ( \frac{\Omega _{3}}{\Omega_{1}} \right ) \right ) -\left ( 1-F_{I} \left (\Omega _{2} \right )\right ) \left (1-F_{g_{Ri}} \left ( \Omega_{4} \right ) \right ). \label{snrapatdes} \end{multline} \hrulefill \vspace*{4pt} \end{figure*} Similarly, $L_{2}$ is calculated as \begin{equation} \begin{aligned} L_{2}&= \mathbb{P}_{r} \left \{ I> \Omega_{2}\cap g_{Ri}> \Omega _{4}\right \}\\ &=\left ( 1-F_{I} \left ( \Omega _{2} \right )\right )\left (1-F_{g_{Ri}} \left ( \Omega _{4} \right ) \right ), \label{l1eq} \end{aligned} \end{equation} where $\Omega _{4}= \frac{z\sigma_{i} ^{2}T_{2}}{E_{max}\alpha _{Ri}}$. Then, CDF of SNR at each destination ($F_{\gamma^{R}_{i}}\left ( z \right ) $) can be obtained by substituting \eqref{longl1} and \eqref{l1eq} in \eqref{apen1} as in \eqref{snrapatdes}. Then, the result can be proved by substituting \eqref{snrapatdes} to \eqref{aprxberwcdf} and then applying the GCQ method for the integration of the CDF function. \end{proof} \end{lemma} Finally, substituting \eqref{policy1errsr} and \eqref{policy1er} into \eqref{sucessprob} the overall transmission success probability can can be calculated. \section{Age of Information Analysis} This section estimates the AAoI of the two-way relay system. This system adopts the generate-at-will update generation model \cite{ceran2019average}. 
Hence, $S_{A}$ and $S_{B}$ generate new status updates every transmission cycle to keep the information at the corresponding destinations as fresh as possible. Then, generated updates are transmitted to their opposite sources using a relay system. If the generation time of the freshest update received at opposite source time stamp $t$ is $g(t)$, then AoI can be defined as a random process as $ \Delta \left ( t \right )=t-g(t).$ \begin{figure}[!htb] \includegraphics[width=0.5\textwidth]{figure4.pdf} \caption{ Evolution of AoI $\Delta(t)$ with the time: Each source generate updates at time stamps $\mathit{g_{1},g_{2},...,g_{n-1}}$ and the opposite source receive these updates at time stamps $\mathit{g_{2},g_{3},...,g_{n}}$; $\Delta(t)$ is the AoI at the opposite source (destination).} \label{f2a} \end{figure} As illustrated in the \figurename {\ref{f2a}}, it is assumed that at $t=0$ the measurements of the AoI starts and the AoI at the opposite source (destination) is set to $\Delta (0)=\Delta _{0}$. Each source generates updates at time stamps $\mathit{g_{1},g_{2},...,g_{n-1}}$ and the opposite source receive these updates at time stamps $\mathit{g_{2},g_{3},...,g_{n}}$. As illustrated in \figurename {\ref{f2a}}, data update $i$ is transmitted from the source at time stamp $t = g_{i}$ and it is successfully delivered to its opposite source at time stamp $g_{i+1}= g_{i}+T$ where $T$ is total time allocated for a one transmission circle and $T=T_{1} +T_{2}$. Therefore, if update packet delivered successfully, at the time $g_{i+1}$, the AoI at the opposite source is estimated as $\Delta (g_{i+1})=T$. We assume that AoI increases linearly until the next update is successfully delivered to the opposite source. As an example, one packet fails to be decoded at time $g_{3}$, hence, $\Delta(t)$ continues to increase linearly. For the considered time period $T_{c}$, time average AoI can be computed using the area under $\Delta(t)$. 
The time-average age is computed as $\Delta_{{T_{c}}}= \frac{1}{T_{c}} \int_{0}^{T_{c}} \Delta(t)dt.$ As in \cite{kosta2017age}, the time-average age $\Delta_{{T_{c}}}$ tends to the ensemble-average age as $T_{c}\rightarrow \infty $, i.e., \begin{equation} \Delta^{AAOI}=\mathbb{E}\left [ \Delta \right ]=\lim_{t\rightarrow \infty }\mathbb{E}\left [ \Delta \left ( t \right )\right ]=\lim_{T_{c}\rightarrow \infty }\Delta_{{T_{c}}}. \label{aaoie} \end{equation} Applying graphical methods to the saw-tooth age waveform in \figurename {\ref{f2a}} and using [\citeonline{basnayaka2021agej}, eq.~8], we can calculate the AAoI at each $S_{i},i\in \left \{ A,B \right \}$ as follows: \begin{equation} \Delta _{i}^{AAOI} =\frac{E[X_{i}^{2}]}{2E[X_{i}]} +T, \label{aaoif} \end{equation} where $X_{i}$ denotes the inter-departure time between two consecutive successfully received status updates at $S_{i}$. We assume that the end-to-end delay of each successfully received update is constant, given by $E[Y_{i}]=T_{1}+T_{2}=T$. The inter-departure time $X_{i}$ is $T$ times a geometric random variable, with mean $E[X_{i}] = \frac{T}{\varphi_{i}}$ and second moment $E\left [ X_{i}^2 \right ]=\frac{T^2\left ( 2-\varphi_{i} \right )}{\varphi_{i}^2}$. \begin{lemma} For the two-way relay network, the AAoI at each source is given by \begin{equation} \Delta _{i}^{AAOI}= \frac{T}{2}+\frac{T}{\varphi_{i}}. \end{equation} \begin{proof} The result follows by substituting $E[X_{i}] = \frac{T}{\varphi_{i}}$ and $E\left [ X_{i}^2 \right ]=\frac{T^2\left ( 2-\varphi_{i} \right )}{\varphi_{i}^2}$ into (\ref{aaoif}). \end{proof} \end{lemma} The expected weighted sum AAoI of the two-way relay system can then be calculated as $ \Delta _{Sum}^{AAoI}=\sum_{i=\left \{ A,B \right \}}^{}\omega _{i} \Delta _{i}^{AAOI}$, where $\omega _{i}$ is the weighting coefficient at $S_{i}$.
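The lemma can be cross-checked by simulating the sawtooth process of \figurename{\ref{f2a}} directly. The sketch below is our own illustration (not part of the analysis): each cycle lasts $T$, a delivery succeeds with probability $\varphi$, and on success the AoI resets to $T$.

```python
import random

def simulate_aaoi(phi, T, cycles=200_000, seed=1):
    """Time-average AoI of the sawtooth process: integrate Delta(t) over
    `cycles` transmission cycles of length T and divide by the elapsed time."""
    rng = random.Random(seed)
    area, age = 0.0, T                     # start just after a successful delivery
    for _ in range(cycles):
        area += age * T + T * T / 2.0      # Delta grows linearly by T this cycle
        age = T if rng.random() < phi else age + T
    return area / (cycles * T)

phi, T = 0.8, 1.0
closed_form = T / 2.0 + T / phi            # lemma: T/2 + T/phi
estimate = simulate_aaoi(phi, T)           # should be close to closed_form
```

The agreement between the two confirms the geometric-moment computation $E[X_i^2]/(2E[X_i])+T = T/2 + T/\varphi_i$.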
\section{Simulation results and discussions} In this section, we present the analytical and numerical simulation results. Unless otherwise stated, the simulation parameters are set as: $d_{AR}$ $=$ \SI{30}{\metre}, $d_{BR}$ $=$ \SI{30}{\metre}, $f_{c}$ $=$ \SI{900}{\mega \hertz}, speed of light $=$ \SI{3e8}{\metre\per\second}, ${P}_{A}$ $=$ 1 W, ${P}_{B}$ $=$ 1 W, $T_{s}$ $=$ \SI{20}{\micro \second}, $n^{A}_{R},n^{B}_{R}$ $=$ 200 bits, $n^{R}_{A},n^{R}_{B}$ $=$ 200 bits, $k^{A}_{R},k^{B}_{R}$ $=$ 32 bits, noise power ($\sigma_{R}^2,\sigma_{A}^2,\sigma_{B}^2$) $=$ -100 dBm, $E_{max}$ $=$ 0.001 J, $P_{min}$ $=$ 0.0001 mW, $\rho $ $=$ 0.5, $\omega _{A}$ $=$ 0.5, $\omega _{B}$ $=$ 0.5 and energy conversion efficiency $\eta$ $=$ 0.9. \begin{figure}[!htbp] \includegraphics[width= \linewidth ,height=0.85\linewidth]{simfig1.eps} \caption{ Weighted sum AAoI as a function of transmission power.} \label{sim1} \end{figure} \par Fig. \ref{sim1} shows the weighted sum AAoI as a function of transmission power for different source--relay distances. The weighted sum AAoI decreases dramatically as the transmit power at the sources increases, since increasing the transmit power reduces the error probability at the relay node and increases the amount of energy harvested by the relay. However, for large transmission power levels, the AAoI saturates, since the number of erroneous packets that impact the AAoI becomes negligible. On the other hand, when the distance between the relay and the sources is short, the AAoI is low, and as the distance increases, the AAoI increases due to the lower SNR. The numerically simulated AAoI coincides well with the approximated results, especially for moderate SNR values, since the linear approximation applied in \eqref{qfap} is tight in that regime \cite{basnayaka2021age}.
\begin{figure}[!htbp] \includegraphics[width= \linewidth ,height=0.8\linewidth]{simfig2.eps} \caption{ Weighted sum AAoI as a function of block length.} \label{sim2} \end{figure} Next, in Fig. \ref{sim2}, we plot the weighted sum AAoI versus block length. When the transmission power is high, the AAoI increases with the block length, as the number of erroneous packets is very low under high-SNR conditions and increasing the block length only increases the transmission time. However, in low-SNR scenarios, a small block length increases the AAoI due to the high block error probability, and increasing the block length towards its optimal value decreases the AAoI owing to the decrease in error probability. On the other hand, increasing the block length beyond its optimal value increases the AAoI, because the impact of the transmission time on the AAoI outweighs the decrease in the block error probability. This result shows that a short block length does not always maintain information freshness. \begin{figure}[!htbp] \includegraphics[width= \linewidth ,height=0.8\linewidth]{simfig3.eps} \caption{ Weighted sum AAoI as a function of update size.} \label{sim3} \end{figure} In Fig. \ref{sim3}, we present the weighted sum AAoI versus update size. If the transmission power is low, the AAoI increases with the packet size under a fixed block length, since a larger packet increases the overall block error probability. However, in high-SNR scenarios, the packet size has little effect on the AAoI since the block error probability is low. \begin{figure}[!htbp] \centering \includegraphics[width=0.48\textwidth,height=0.8\linewidth]{simfig4.eps} \caption{Weighted sum AAoI as a function of $P_{min}$.} \label{sim7} \end{figure} \figurename{\ref{sim7}} illustrates the weighted sum AAoI as a function of $P_{min}$. The weighted sum AAoI of the system is monotonically increasing in $P_{min}$, since high $P_{min}$ threshold values increase update loss at the relay.
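The interior optimum of the block length can be reproduced with a stylized single-link model (our own sketch, not the paper's exact system model): we use the standard normal approximation for the block error rate of an AWGN link, take the cycle time proportional to $n$, and take the success probability as $(1-\varepsilon)^{2}$ to account for the two hops; the values of $k$ and the SNR below are illustrative placeholders.

```python
import math

def q_func(x):
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def block_error(n, k, snr):
    """Normal approximation to the block error probability for k information
    bits in n channel uses over an AWGN link at the given linear SNR."""
    cap = math.log2(1.0 + snr)                                      # capacity
    disp = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2  # dispersion
    return q_func(math.sqrt(n / disp) * (cap - k / n))

def aaoi(n, k=100, snr=0.5):
    """Stylized AAoI: cycle time ~ n, success probability (1 - eps)^2."""
    eps = block_error(n, k, snr)
    return n * (0.5 + 1.0 / (1.0 - eps) ** 2)

# At low SNR the optimum lies strictly inside the search range:
n_opt = min(range(150, 501, 10), key=aaoi)
```

Short blocks suffer from a high error rate, long blocks from a long cycle time, so the AAoI is minimized at an intermediate $n$, consistent with the discussion of Fig. \ref{sim2}.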
\section{Conclusions} This work developed a model to estimate the AAoI in a two-way relay equipped with SWIPT that operates under ultra-reliable and low-latency constraints. We derived an approximation for the AAoI at each source using linear approximation techniques. The impacts of various parameters, including block length, packet size, transmission power and noise level, were studied. Numerical analysis was then carried out to evaluate and validate the derived results. We observed that packet size does not affect freshness when the SNR is high, while short packet communication retains improved AoI performance in low-SNR scenarios. This paper concludes that short block length communication does not always assist in maintaining freshness in SWIPT-enabled communication systems, even though it always assists in maintaining low latency. \section{Acknowledgement} This work is funded by the CEU-Cooperativa de Ensino Universit\'{a}rio, Portugal. \bibliographystyle{IEEEtran}
\section{Introduction} \subsection{Super-Brownian motion} Let $\psi$ be a function of the form $$\psi(\lambda)=-\alpha\lambda+\beta\lambda^2+\int_0^\infty \Big(e^{-\lambda y}-1+\lambda y\Big)n(\d y),\quad \lambda\ge 0,$$ where $\alpha\in\R$, $\beta \ge 0$ and $n$ is a $\sigma$-finite measure satisfying $$ \int_0^\infty (y^2\wedge y) n(\d y)<\infty. $$ $\psi$ is called a branching mechanism. We will always assume that $\lim_{\lambda\to\infty}\psi(\lambda)=\infty$. Let $\{B_t, t\geq 0; \mbox{P}_x\}$ be a standard Brownian motion starting from $x\in\R$, and let $\mE_x$ be the corresponding expectation. We write $\mP=\mP_{0}$ and $\mE=\mE_0$. In this paper we will consider a super-Brownian motion $X$ on $\R$ with branching mechanism $\psi$. Let $\mathcal{B}^+(\R)$ (resp. $\mathcal{B}^+_b(\R)$) be the space of non-negative (resp. bounded non-negative) Borel functions on $\R$, and let ${\cal M}_F(\R)$ be the space of finite measures on $\R$, equipped with the topology of weak convergence. A super-Brownian motion $X=\{X_t,t\geq 0\}$ with branching mechanism $\psi$ is a Markov process taking values in ${\cal M}_F(\R)$. For any $\mu \in \mathcal{M}_F(\R)$, we denote the law of $X$ with initial configuration $\mu$ by $\P_\mu$, and the corresponding expectation by $\E_\mu$. We write $\P=\P_{\delta_0}$ and $\E=\E_{\delta_0}$. As usual, we use the notation $\langle f,\mu\rangle:=\int_{\R} f(x)\mu(dx)$ and $\|\mu\|:=\langle 1,\mu\rangle$. Then for all $f\in \mathcal{B}^+_b(\R)$ and $\mu \in \mathcal{M}_F(\R)$, \begin{equation}\label{V} -\log \E_\mu\left(e^{-\langle f,X_t\rangle}\right)= \langle V_f(t, \cdot),\mu\rangle, \qquad t\geq 0, \end{equation} where $V_f(t, x)$ is the unique positive solution to the equation \begin{equation}\label{eqt-u} V_f(t, x)+\mE_x\int_0^t\psi(V_f(t-s, B_s))\d s=\mE_x f(B_t),\qquad t\geq 0. \end{equation} The existence of such superprocesses is well-known, see, for instance, \cite{Dawson}, \cite{E.B.} or \cite{Li11}. 
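Equation \eqref{eqt-u} is the mild (Duhamel) form of a semilinear heat equation. Writing $P_tf(x)=\mE_xf(B_t)$ for the heat semigroup, a sketch of the correspondence (standard, stated here only for orientation) is:

```latex
% \eqref{eqt-u} reads V_f(t,\cdot) = P_t f - \int_0^t P_{t-s}\,\psi(V_f(s,\cdot))\,\d s,
% and differentiating in t recovers the F-KPP-type equation
\frac{\partial}{\partial t}V_f(t,x)
   = \frac{1}{2}\,\frac{\partial^2}{\partial x^2}V_f(t,x) - \psi\big(V_f(t,x)\big),
   \qquad V_f(0,x) = f(x).
```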
It is well known that $\|X_t\|$ is a continuous state branching process with branching mechanism $\psi$ and that $$\P(\lim_{t\to\infty}\|X_t\|=0)=e^{-\lambda^*},$$ where $\lambda^*\in[0,\infty)$ is the largest root of the equation $\psi(\lambda)=0$. It is known that $\lambda^*>0$ if and only if $\alpha=-\psi'(0+)>0$. $X$ is called a supercritical (critical, subcritical) super-Brownian motion if $\alpha > 0$ ($= 0,< 0$). In this paper, we only deal with the supercritical case, that is, we assume $\alpha > 0$. Let $M_t$ be the supremum of the support of $X_t$. More precisely, we define the rightmost point $M(\mu)$ of $\mu\in\cM_{F}(\R)$ by $M(\mu):=\sup\{x: \mu(x,\infty)>0\}$. Here we use the convention that $\sup\emptyset=-\infty.$ Then $M_t$ is simply $M(X_t)$. Recently, in \cite{SRZ}, we studied the asymptotic behavior of $M_t$ under the following two assumptions: \begin{itemize} \item[] {\bf{(H1)}} There exists $\gamma>0$ such that \begin{equation*}\label{cond-log} \int_1^\infty y(\log y)^{2+\gamma}n(\d y)<\infty. \end{equation*} \item[]{\bf (H2)} There exist $\vartheta\in(0,1]$ and $a>0,b>0$ such that \begin{equation*}\label{H2} \psi(\lambda)\ge -a\lambda+b\lambda^{1+\vartheta}, \quad \lambda >0. \end{equation*} \end{itemize} It is clear that if $\beta>0$ or $n(\d y)\ge y^{-1-\vartheta}\,\d y$, then (H2) holds. Condition (H2) implies that the following Grey condition holds: \begin{equation}\label{Grey} \int^\infty \frac{1}{\psi(\lambda)}\ d\lambda<\infty. \end{equation} It is well known that under the above Grey condition, $\lim_{t\to\infty}\P_{\mu}(\|X_t\|=0)=e^{-\lambda^*\|\mu\|}.$ Denote $\mathcal{S}:=\{\forall t\ge 0, \|X_t\|>0\}$. It is clear that $\P(\mathcal{S})\in(0,1)$. Define, for $t\ge0$, $$ D_t:=\langle (\sqrt{2\alpha}t-\cdot)e^{-\sqrt{2\alpha}(\sqrt{2\alpha}t-\cdot)},X_t \rangle. 
$$ It has been proven in \cite{KLSR} that $\{D_t, t\geq 0\}$ is a martingale, which is called the derivative martingale of the super-Brownian motion $X_t$, and that $D_t$ has an almost sure non-negative limit $D_\infty$ as $t\to\infty$. Assumption (H2) also implies that \begin{eqnarray}\label{con:psi} \int^\infty\frac{1}{\sqrt{\int_{\lambda^*}^\xi \psi(u)\,du}}\,\d \xi<\infty. \end{eqnarray} Under (H1) and \eqref{con:psi}, $D_\infty$ is non-degenerate and \begin{equation}\label{limit-as} \frac{M_t}{t}\to \sqrt{2\alpha},\quad \P\mbox{-a.s. on } \mathcal{S}, \end{equation} see \cite[Theorem 2.4 and Corollary 3.2 ]{KLSR}. For any $f\in \mathcal{B}^+(\R)$, put \begin{equation}\label{def:u} u_f(t,x):=-\log \E \left( e^{-\int_\R f(y-x) X_t(dy)}; M_t\le x\right), \end{equation} Note that $u_f$ only depends on the value of $f$ on $(-\infty,0]$. Let $\mathcal{H}$ be the space of all the nonnegative bounded functions $f$ on $(-\infty,0]$ satisfying \begin{equation}\label{initial-cond1'} \int_0^\infty y e^{\sqrt{2\alpha}y}f(-y)\,dy<\infty. \end{equation} It has been proved in \cite[Theorem 1.3]{SRZ} that under (H1)-(H2), for any $f\in \mathcal{H}$, we have that \begin{equation}\label{M-w} \lim_{t\to\infty}u_f(t,m(t)+x)=w_f(x), \end{equation} where \begin{equation}\label{def-m_t} m_t=\sqrt{2\alpha}t-\frac{3}{2\sqrt{2\alpha}}\log t, \end{equation} and $w_f$ is a traveling wave solution of the F-KPP equation, that is, a solution of $$ \frac{1}{2}w_{xx}+\sqrt{2\alpha}w_x-\psi(w)=0. $$ Moreover, $w_f$ is given by $w_f(x)=-\log \E\left[\exp\{-\tilde{C}(f)D_\infty e^{-\sqrt{2\alpha}x}\}\right]$, with $$\tl{C}(f):=\lim_{r\to\infty}\sqrt{\frac{2}{\pi}}\int_0^\infty u_{f}(r,\sqrt{2\alpha}r+y)ye^{\sqrt{2\alpha}y}\,dy\in(0,\infty).$$ In the remainder of this paper, we write $u(t,x)$ and $w(x)$ for $u_f(t,x)$ and $w_f(x)$ respectively when $f\equiv 0$. 
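The factor $ye^{\sqrt{2\alpha}y}$ appearing in $\tilde{C}(f)$ and in the integrability condition \eqref{initial-cond1'} reflects the decay rate of traveling waves at the critical speed. A heuristic sketch (not a proof): linearizing the traveling wave equation at $w=0$, where $\psi(w)\approx \psi'(0+)w=-\alpha w$, gives

```latex
\frac{1}{2}w_{xx} + \sqrt{2\alpha}\,w_x + \alpha w = 0,
% whose characteristic equation \tfrac{1}{2}\mu^{2} + \sqrt{2\alpha}\,\mu + \alpha = 0
% has the double root \mu = -\sqrt{2\alpha}, so the linearized solutions decay like
w(x) \approx (a + bx)\,e^{-\sqrt{2\alpha}\,x}, \qquad x\to\infty.
```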
\subsection{Main results} In \cite[Theorem 1.2]{SRZ}, we proved the following upper large deviation results for $M_t$ under conditions (H1)-(H2): \begin{itemize} \item[(1)]For $\delta>1$, $$\lim_{t\to\infty}\sqrt{t}e^{\alpha(\delta^2-1)t}\P(M_t>\sqrt{2\alpha}\delta t)\in(0,\infty);$$ \item[(2)] $$\lim_{t\to\infty}\frac{t^{3/2}}{\frac{3}{2\sqrt{2\alpha}}\log t}\P(M_t>\sqrt{2\alpha}t)\in(0,\infty).$$ \end{itemize} However, using the methods in \cite{SRZ}, we could not get the asymptotic behavior of the lower large deviation probability $\P(M_t\le \sqrt{2\alpha}\delta t|\mathcal{S})$ for $\delta<1$. The purpose of this paper is to study the asymptotic behavior of this lower large deviation probability. To accomplish this, we use the skeleton decomposition of super-Brownian motion and adapt some ideas from \cite{CHM} used in the study of lower deviations of the maximum of branching Brownian motion. For branching Brownian motion, the asymptotic behavior of the maximal position, also denoted by $M_t$, of the particles alive at time $t$ has been intensively studied. To simplify notation, we consider a standard binary branching Brownian motion in $\R$, i.e., the lifetime of a particle is an exponential random variable with parameter 1 and, when it dies, it gives birth to $2$ children at the position of its death. Bramson proved in \cite{Bramson78} that $P(M_t-m(t)\le x)\to 1-w(x)$ as $t\to\infty$, where $m(t)=\sqrt{2}t-\frac{3}{2\sqrt{2}}\log t$ and $w(x)$ is a traveling wave solution. For the large deviations of $M_t$, \cite{Chauvin88,Chauvin} studied the convergence rate of $P(M_t>\sqrt{2}\delta t)$ for $\delta \ge 1$. Recently, Derrida and Shi \cite{DS1, DS} studied the lower large deviations of $M_t$, i.e., the asymptotic behavior of $\frac1{t}\log P(M_t\le \sqrt{2}\delta t)$ for $\delta <1$, and found that the rate function has a phase transition at $1-\sqrt{2}$.
In \cite{CHM}, Chen, He and Mallein studied the limiting property of $P(M_t\le \sqrt{2}\delta t)$ for $\delta<1.$ For more results on extremal processes of branching Brownian motions, we refer our readers to \cite{ABBS, ABK}. To maximize the possibility of $M_t\le \sqrt{2}\delta t$ for $\delta<1$, a good strategy is to make the first branching time $\tau$ as large as possible. It was shown in \cite{CHM} that, conditioned on $\{M_t\le \sqrt{2}\delta t\}$, $\tau \approx \frac{1-\delta}{\sqrt{2}} t\pm O(1) \sqrt{t}$ when $\delta \in (1-\sqrt{2}, 1)$; $\tau \approx t-O(1) \sqrt{t}$ when $\delta=1-\sqrt{2}$ and $\tau \approx t-O(1) $ when $\delta<1-\sqrt{2}$. The asymptotic behaviors of $P(M_t\le \sqrt{2}\delta t)$ are different in these 3 different cases. The intuition above also works for super-Brownian motion, but we need to use the first branching time of the skeleton process, which is a branching Brownian motion. Put $$ q:=\psi'(\lambda^*)>0, \qquad \rho:=\sqrt{1+\frac{\psi'(\lambda^*)}{\alpha}}=\sqrt{1+\frac{q}{\alpha}}. $$ We also use $\tau$ to denote the first branching time of the skeleton process of super-Brownian motion. We will prove that, conditioned on $\{M_t\le \sqrt{2\alpha}\delta t, \mathcal{S}\}$, as $t\to\infty$, $ \tau\in [\frac{1-\delta}{\rho} t- (\log t)\sqrt{t}, \frac{1-\delta}{\rho} t+ (\log t)\sqrt{t}]$ when $ \delta\in (1-\rho,1)$; $\tau \in t-\sqrt{t}\left[t^{-1/4}, \log t\right]$ when $ \delta=1-\rho$ and $\tau\in [t-O(1),t] $ when $\delta<1-\rho$. The asymptotic behavior of $\P(M_t\le \sqrt{2\alpha}\delta t|\mathcal{S})$ exhibits a phase transition at $\delta=1-\rho$. Now we state our main results. \begin{theorem}\label{thm-case1} Assume that (H1) and (H2) hold. 
If $\delta\in(1-\rho, 1)$, then for any $f\in\mathcal{H}$, \begin{align*} &\lim_{t\to\infty} e^{2\alpha(\rho-1)(1-\delta)t}t^{-3(\rho-1)/2} \E\left( e^{-\int_\R f(y-\sqrt{2\alpha}\delta t)X_t(dy)}; M_t\le \sqrt{2\alpha}\delta t|\mathcal{S}\right)\\ =&\frac{\lambda^*}{e^{\lambda^*}-1}\frac{a_\delta^{3(\rho-1)/2}}{\sqrt{2\alpha}\rho} \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z, \end{align*} where $a_\delta=1-\frac{1-\delta}{\rho}$ and $$ A(\lambda)=\frac{1}{\lambda^* }\psi(\lambda)+\psi'(\lambda^*) \left(1-\frac{\lambda}{\lambda^*}\right) \ge0,\quad \lambda\geq 0. $$ \end{theorem} \begin{theorem}\label{them:case2} Assume that (H1) and (H2) hold. Then for any $f \in\mathcal{H}$, \begin{align*} &\lim_{t\to\infty}t^{-3(\rho-1)/4}e^{(q+\alpha(\rho-1)^2)t}\E\left( e^{-\int_\R f(y-\sqrt{2\alpha}(1-\rho) t)X_t(dy)}; M_t\le \sqrt{2\alpha}(1-\rho) t|\mathcal{S}\right)\\ =&\frac{\lambda^*}{e^{\lambda^*}-1}\frac{1}{\sqrt{2\pi }} \int^{\infty}_{ 0}s^{3(\rho-1)/2}e^{-\alpha\rho^2 s^2}\d s\int^\infty_{-\infty} e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z. \end{align*} \end{theorem} \begin{theorem}\label{them:case3} Assume that (H1) and (H2) hold. If $\delta<1-\rho$, then for any $f \in\mathcal{B}_b^+(\R)$, \begin{align*} &\lim_{t\to\infty}\sqrt{t}e^{(q+\alpha\delta^2)t}\E\left( e^{-\int_\R f(y-\sqrt{2\alpha}\delta t)X_t(dy)}; M_t\le\sqrt{2\alpha}\delta t|\mathcal{S}\right)\\ =&\frac{\lambda^*}{e^{\lambda^*}-1}\left[\frac{1}{2\sqrt{\pi\alpha}|\delta|} +\frac{1}{\sqrt{2\pi}}\int_0^\infty e^{(q-\alpha\delta^2)s}\,\d s\int_{\R}e^{\sqrt{2\alpha}\delta z}{G}_f(s,z)\,\d z\right], \end{align*} where \begin{equation}\label{e:defofG} G_f(t,x):=\frac{1}{\lambda^* }\Big[\psi(u_f(t,x))-\psi(\lambda^*+u^*_f(t,x))\Big]+qv_f(t,x), \end{equation} with $v_f,u^*_f$ being defined in \eqref{def:v} and \eqref{def:ustar} below. 
\end{theorem} The reason that we assume $f\in\mathcal{H}$ in Theorems \ref{thm-case1} and \ref{them:case2} is that \eqref{M-w} plays an important role in the proofs of Lemmas \ref{lem-case1} and \ref{lem-case2}. Lemma \ref{lem-case1} is used in the proof of Theorem \ref{thm-case1} and Lemma \ref{lem-case2} is used in the proof of Theorem \ref{them:case2}. Let ${\cal C}_c(\R)$ (resp. ${\cal C}_c^+(\R)$) be the space of all (resp. all nonnegative) continuous functions with compact support. Let $\cM_{R}(\R)$ be the space of all Radon measures on $\R$ equipped with the vague topology, see \cite[p.111]{Kallenberg}. Recall that, for random measures $\mu_t,\mu\in \cM_{R}(\R)$, $\mu_t$ converges in distribution to $\mu$ if and only if $\langle f,\mu_t\rangle$ converges in distribution to $\langle f,\mu\rangle $ for any $f\in{\cal C}_c^+(\R)$. See \cite[p.119]{Kallenberg} for more details. As a consequence of Theorems \ref{thm-case1}-\ref{them:case3}, we have the following corollary. \begin{corollary} Assume that (H1) and (H2) hold. Conditioned on $\{M_t\le \sqrt{2\alpha}\delta t,\mathcal{S}\}$, $X_t-\sqrt{2\alpha}\delta t$ converges in distribution to a random measure $\Xi_\delta$. Moreover, for any $f\in C_c^+(\R)$, if $\delta \in[1-\rho,1)$, \begin{align}\label{lap-xi} \E\left( e^{-\int_\R f(y)\Xi_\delta(dy)}\right)=\frac{ \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z}{ \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w(z))\d z}; \end{align} and if $\delta<1-\rho$, $$\E\left( e^{-\int_\R f(y)\Xi_\delta(dy)}\right)=\frac{\frac{1}{\sqrt{2\alpha}|\delta|} +\int_0^\infty e^{(q-\alpha\delta^2)s}\,\d s\int_{\R}e^{\sqrt{2\alpha}\delta z}{G}_f(s,z)\,\d z}{\frac{1}{\sqrt{2\alpha}|\delta|} +\int_0^\infty e^{(q-\alpha\delta^2)s}\,\d s\int_{\R}e^{\sqrt{2\alpha}\delta z}{G}(s,z)\,\d z},$$ where $G_f$ is defined in \eqref{e:defofG} and $G(t,x):=G_0(t,x)$. \end{corollary} {\bf Proof:} First consider the case $\delta \in[1-\rho,1)$.
For any $f\in\mathcal{H}$ and $\theta>0$, by Theorems \ref{thm-case1}-\ref{them:case2}, $$\lim_{t\to\infty}\E\left( e^{-\theta\int_\R f(y-\sqrt{2\alpha}\delta t)X_t(dy)}|M_t\le \sqrt{2\alpha}\delta t,\mathcal{S}\right)=\frac{ \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w_{\theta f}(z))\d z}{ \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w(z))\d z}.$$ It has been proved in \cite[Lemma 3.3]{SRZ} that $\lim_{\theta\to0}\tilde{C}(\theta f)=\tilde{C}(0)$, which implies that $w_{\theta f}(x)\to w(x).$ Note that $A(\lambda)$ is decreasing on $(0,\lambda^*)$ and $0\le w_{\theta f}(z)\le \lambda^*$. Thus using the monotone convergence theorem we get that $$\lim_{\theta\to0}\frac{ \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w_{\theta f}(z))\d z}{ \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w(z))\d z}=1.$$ Thus, conditioned on $\{M_t\le \sqrt{2\alpha}\delta t,\mathcal{S}\}$, $\int_\R f(y-\sqrt{2\alpha}\delta t)X_t(dy)$ converges in distribution for any $f\in \mathcal{C}_c^+(\R)$, which implies that $X_t-\sqrt{2\alpha}\delta t$ converges in distribution to a random measure $\Xi_\delta$ with Laplace transform given by \eqref{lap-xi}. Similarly, using Theorem \ref{them:case3}, we can get the result for $\delta<1-\rho$. \hfill$\Box$ \medskip Throughout this paper we use $C$ to denote a positive constant whose value may change from one appearance to another. For any two positive functions $f$ and $g$ on $[0,\infty)$, $f\sim g$ as $s\to \infty$ means that $\lim_{s\to\infty} \frac{f(s)}{g(s)}=1.$ \section{Preliminaries} \subsection{Skeleton decomposition}\label{skeleton} Denote by $\P^*_{\mu}$ the law of $X$ with initial configuration $\mu$ conditioned on extinction. It is well known that $(X,\P^*)$ is a super-Brownian motion with branching mechanism $\psi^*(\lambda)=\psi(\lambda+\lambda^*)$. Note that $(\psi^*)'(0+)=\psi'(\lambda^*)=q>0$. So $(X,\P^*)$ is subcritical. 
Let $\mathbb{D}([0,\infty), \mathcal{M}_F({\R}))$ be the space of all the right continuous functions $w:[0,\infty)\to \mathcal{M}_F({\R})$, and $\mathbb{D}_0^+$ be the space of right continuous functions from $(0,\infty)$ to $ \mathcal{M}_F({\R})$ having zero as a trap. It has been proved in \cite{E.B2} that there is a family of measures $\{\N^*_x,x\in\R\}$ on $\mathbb{D}_0^+$ associated with the probability measures $\{\P_{\delta_x}^*:x\in\R\}$ such that \begin{equation}\label{N-measure} \int_{\mathbb{D}_0^+} \left(1-e^{-\langle f,w_t\rangle}\right)\N^*_{x}(dw)=-\log \P_{\delta_x}^*\left(e^{-\langle f,X_t\rangle}\right), \end{equation} for all $f\in\mathcal{B}_b^+(\R)$ and $t>0$. The branching property of $X$ implies that, under $\P_{\delta_x}^*$, $X_t$ is an infinitely divisible measure, so \eqref{N-measure} is a L\'evy-Khinchine formula in which $\N^*_x$ plays the role of the L\'evy measure. By the spatial homogeneity of Brownian motion, one can check that $$\P_{\delta_x}^*\left(e^{-\langle f,X_t\rangle}\right)=\P_{\delta_0}^*\left(e^{-\int f(x+y) X_t(\d y)}\right),\quad \N^*_{x}\left(1-e^{-\langle f,w_t\rangle}\right)=\N^*_{0}\left(1-e^{-\int f(x+y) w_t(\d y)}\right).$$ It was shown in \cite{BKS} that the skeleton of the super-Brownian motion $X_t$ is a branching Brownian motion $Z_t$ with branching rate $q=\psi'(\lambda^*)$ and an offspring distribution $\{p_n:n\ge 2\}$ whose generating function $\varphi$ satisfies $$q(\varphi(s)-s)=\frac{1}{\lambda^*}\psi(\lambda^*(1-s)).$$ We label the particles in $Z$ using the classical Ulam-Harris notation. Let $\mathcal{T}$ be the set of all the particles. We write $\varnothing$ for the root. For each particle $u\in\mathcal{T}$, we write $b_u$ and $\sigma_u$ for its birth and death time respectively, $N_u$ for the number of its offspring, and $\{z_u(r):r\in[b_u,\sigma_u]\}$ for its spatial trajectory. $v\preccurlyeq u$ means that $v$ is an ancestor of $u$.
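For the prototypical quadratic branching mechanism, the skeleton can be identified explicitly; the following worked example (included only for illustration) also makes the constant $\rho$ concrete. Take $\psi(\lambda)=-\alpha\lambda+\beta\lambda^{2}$ with $\alpha,\beta>0$, so that $\lambda^{*}=\alpha/\beta$ and $q=\psi'(\lambda^{*})=\alpha$, whence $\rho=\sqrt{1+q/\alpha}=\sqrt{2}$. The generating function relation becomes

```latex
q\big(\varphi(s)-s\big)
  = \frac{1}{\lambda^{*}}\,\psi\big(\lambda^{*}(1-s)\big)
  = \alpha(1-s)\big[(1-s)-1\big]
  = -\alpha\,s(1-s),
\qquad\text{so}\qquad
\varphi(s) = s - s(1-s) = s^{2}.
```

Thus in this case the skeleton $Z$ is a binary branching Brownian motion with branching rate $\alpha$, and the phase transition at $\delta=1-\rho=1-\sqrt{2}$ matches the one found for branching Brownian motion in \cite{DS1, DS, CHM}.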
Now we introduce the three kinds of immigration along the skeleton $Z$ as follows. \begin{enumerate} \item {\bf Continuous immigration:} The process $I^{\N^*}$ is defined by $$ I^{\N^*}_t:=\sum_{u\in{\cal T}}\sum_{(r_j,w_j)\in \mathcal{D}_{1,u}}{\bf 1}_{r_j<t} w_j(t-r_j), $$ where, given $Z$, independently for each $u\in{\cal T}$, $\mathcal{D}_{1,u}:=\{(r_j,w_j):j\ge 1\}$ are the atoms of a Poisson point process on $(b_u, \sigma_u]\times \mathbb{D}_0^+$ with rate $2\beta{\rm d} r\times{\rm d}\N^*_{z_u(r)}$. \item {\bf Discontinuous immigration:} The process $I^{\P^*}$ is defined by $$ I^{\P^*}_t:=\sum_{u\in{\cal T}}\sum_{(r_j,w_j)\in \mathcal{D}_{2,u}} {\bf 1}_{r_j<t} w_j(t-r_j), $$ where, given $Z$, independently for each $u\in{\cal T}$, $\mathcal{D}_{2,u}:=\{(r_j,w_j):j\ge 1\}$ are the atoms of a Poisson point process on $(b_u, \sigma_u]\times \mathbb{D}([0,\infty), \mathcal{M}_F({\R}))$ with rate ${\rm d}r\times\int_{y\in(0,\infty)}ye^{-\lambda^* y}n({\rm d}y){\rm d} \P^*_{y\delta_{z_u(r)}}$. \item {\bf Branching point biased immigration:} The process $I^{\eta}$ is defined by \begin{equation*} I^{\eta}_t:=\sum_{u\in{\cal T}} \mathbf{1}_{\sigma_u\leq t}X^{(3,u)}_{t-\sigma_u}\ , \end{equation*} where, given $Z$, independently for each $u\in{\cal T}$, $X^{(3, u)}_{\cdot}$ is an independent copy of the canonical process $X$ issued at time ${\sigma_u}$ with law $\mathbb{P}^*_{Y_u\delta_{z_u({\sigma_u})}}$, where, given that $u$ has $n\ (\geq 2)$ offspring, $Y_u$ is an independent random variable with distribution $\eta_n({\rm d}y)$, and \begin{equation*} \eta_n(\d y)=\frac{1}{p_n\lambda^* q}\left\{\beta(\lambda^*)^2 \delta_0(\d y)\mathbf{1}_{\{n=2\}}+(\lambda^*)^n\frac{y^n}{n!}e^{-\lambda^*y}n(\d y)\right\}.
\end{equation*} \end{enumerate} Now we define another $\mathcal{M}_F(\mathbb{R})$-valued process $I=\{I_t : t\geq 0\}$ by \begin{equation}\label{I=sum} I:=I^{\N^*}+I^{\P^*}+I^{\eta}\ , \end{equation} where $I^{\N^*}=\{I^{\N^*}_t: t\geq0\}$, $I^{\P^*}=\{I^{\P^*}_t: t\geq0\}$ and $I^{\eta}=\{I^{\eta}_t: t\geq0\}$, conditioned on $Z$, are independent of each other. For any integer-valued measure $\nu$, we denote by $\bP_{\nu}$ the law of $(Z, I)$ when the initial configuration of $Z$ is $\nu$. We write $\bP$ for $\bP_{\delta_0}$. For any $\mu \in \mathcal{M}_F(\mathbb{R})$, let $Z$ be a branching Brownian motion with $Z_0$ being a Poisson random measure with intensity measure $\lambda^*\mu$ and $I$ is the immigration process along $Z$. Let $\widetilde{X}$ be an independent copy of $X$ under $\P^*_\mu$, also independent of $I$. Then we define a measure-valued process $\Lambda=\{\Lambda_t: t\geq0\}$ by \begin{equation}\label{12} \Lambda=\widetilde{X}+I. \end{equation} We denote the law of $\Lambda$ by ${\bf Q}_\mu$. In particular, under $\bQ_{\delta_0}$, $Z_0=N\delta_0$, where $N$ is a Poisson random variable with parameter $\lambda^*$. We write $\bQ$ for $\bQ_{\delta_0}$. In the rest of the paper, we use ${\bf E}$, $\mathbb{E}^*$ and $\mE_{\bQ}$ to denote the expectations with respect to ${\bf P}$, $\mathbb{P}^*$ and ${\bf Q}$, respectively. The following result is proved in \cite{BKS}. \begin{proposition}\label{p:skeleton} For any $\mu \in\mathcal{M}_F(\mathbb{R}^d)$, the process $(\Lambda, \bQ_\mu)$ is Markovian and has the same law as $(X, \P_\mu)$. \end{proposition} Recall that $M_t$ is the supremum of the support of $X_t$. Denote the supremum of $\Lambda_t, I_t, Z_t$, and $\tilde{X}_t$ by $M_t^\Lambda, M_t^I, M_t^Z$, and $ M_t^{\tilde{X}}$, respectively. 
By \eqref{V}, for any $f\in \mathcal{B}^+(\R)$, $$ V_f(t,x) =-\log \E_{\delta_x}\left(e^{-\int_\R f(y) X_t(dy)}\right), \quad x\in\R.$$ By the space homogeneity of $X$, we have \begin{equation}\label{V(-x)} V_f(t,-x)=-\log \E\left(e^{-\int_\R f(y-x) X_t(dy)}\right),\quad x\in\R. \end{equation} Setting $f_\theta:=f+\theta{\bf 1}_{(0,\infty)}$, we get \begin{equation}\label{rel-V-u} u_f(t,x)=\lim_{\theta\to\infty} V_{f_\theta}(t,-x),\quad x\in\R. \end{equation} For any $f\in \mathcal{B}^+(\R)$, put \begin{align} v_f(t,x):= &\bE\left(e^{-\int_\R f(y-x) I_t(dy)}; M_t^{I}\le x\right),\label{def:v}\\ u_f^*(t,x):=&- \log \E^*\left(e^{-\int_\R f(y-x) X_t(dy)}; M_t\le x\right). \label{def:ustar} \end{align} For $f\equiv 0$, we write $v(t,x)$ and $u^*(t,x)$ for $v_f(t,x)$ and $u^*_f(t,x)$, respectively. The relation among $u_f$, $u^*_f$ and $v_f$ is given by the following lemma. \begin{lemma}\label{fact5} For any $f\in \mathcal{B}^+(\R)$, $t\ge 0$ and $x\in\R$, $$u_f(t,x)=u^*_f(t,x)+\lambda^*(1-v_f(t,x)).$$ \end{lemma} {\bf Proof:} Recall that under $\bQ$, $Z_0=N\delta_0$, where $N$ is Poisson distributed with parameter $\lambda^*$. By the definition of $\Lambda$, we get that, for any $t\geq 0, x\in\R$, \begin{align*} e^{-u_f(t,x)}&=\E \left( e^{-\int_\R f(y-x) X_t(dy)}; M_t\le x\right)=\mE_\bQ \left( e^{-\int_\R f(y-x) \Lambda_t(dy)}; M_t^\Lambda\le x\right)\\ &=\mE_\bQ \left( e^{-\int_\R f(y-x) \tilde{X}_t(dy)}; M_t^{\tilde{X}}\le x\right)\mE_\bQ \left( e^{-\int_\R f(y-x) I_t(dy)}; M_t^I\le x\right)\\ &=\E^*\left(e^{-\int_\R f(y-x) X_t(dy)}; M_t\le x\right) \mE_{\bQ}\left(\left[\bE\left(e^{-\int_\R f(y-x) I_t(dy)}; M_t^I\le x\right)\right]^N\right)\\ &=e^{-u^*_f(t,x)}e^{\lambda^*(v_f(t,x)-1)}. \end{align*} Thus $u_f(t,x)=u^*_f(t,x)+\lambda^*(1-v_f(t,x)).$ \hfill$\Box$ \medskip Now we give some basic relations among $M_t^Z$, $M_t^\Lambda$, $M_t^I$ and $M_t$.
\begin{lemma}\label{fact1} Under $\bQ$, given $\Lambda_t$, $Z_t$ is a Poisson random measure with intensity $\lambda^*\Lambda_t$, which implies that $M_t^Z\le M_t^\Lambda$, $\bQ$-a.s. \end{lemma} {\bf Proof:} We refer the readers to the display above \cite[(3.14)]{BKS} for a proof. \hfill$\Box$ \begin{lemma}\label{fact2} Under $\bP$, $M_t^Z\le M_t^I$, a.s. \end{lemma} {\bf Proof:} First we claim that $\bQ(M_t^Z\le M_t^I)=1$. In fact, for any $x$, by Lemma \ref{fact1}, we have \begin{align*} 0=\bQ(M_t^Z>x\ge M_t^\Lambda)&=\bQ(M_t^Z>x, M_t^I\le x,M_t^{\tilde{X}}\le x) \\& =\bQ(M_t^Z>x, M_t^I\le x) \bQ(M_t^{\tilde{X}}\le x). \end{align*} Using the fact that $\bQ(M_t^{\tilde{X}}\le x)>0$, we get $\bQ(M_t^Z>x, M_t^I\le x)=0$. Since $x$ is arbitrary, the claim is true. Recall that under $\bQ$, $Z_0=N\delta_0$, where $N$ is Poisson distributed with parameter $\lambda^*$. Thus $$0=\bQ(M_t^Z> M_t^I)\ge \bQ[M_t^Z>M_t^I|N=1]\bQ(N=1)=\bP(M_t^Z> M_t^I)e^{-\lambda^*},$$ which implies that $\bP(M_t^Z>M_t^I)=0$. \hfill$\Box$ \medskip The following lemma implies that, to prove our main results, we only need to study the limit behavior of $v_f(t,\sqrt{2\alpha}\delta t).$ \begin{lemma}\label{fact4} For any $f\in \mathcal{B}^+(\R)$ and $\delta<1$, \begin{equation}\label{equalent} \lim_{t\to\infty} \frac{\E\left(e^{-\int_\R f(y-\sqrt{2\alpha}\delta t) X_t(dy)}; M_t\le \sqrt{2\alpha}\delta t|\mathcal{S}\right)}{v_f(t,\sqrt{2\alpha}\delta t)}=\frac{\lambda^*}{e^{\lambda^*}-1}. \end{equation} \end{lemma} {\bf Proof:} We also use $\mathcal{S}$ to denote the survival of $\Lambda$. It is clear that, under $\bQ$, $\mathcal{S}\subset \{N\ge 1\}$ and $\bQ(\mathcal{S})=\bQ(N\ge1)=1-e^{-\lambda^*}$. It follows that $\mathcal{S}=\{N\ge 1\}$, $\bQ$-a.s. 
Then, by Proposition \ref{p:skeleton}, \begin{align}\label{e:1} &\E\left(e^{-\int_\R f(y-x) X_t(dy)}; M_t\le x|\mathcal{S}\right)=\mE_\bQ \left( e^{-\int_\R f(y-x) \Lambda_t(dy)}; M_t^\Lambda\le x|N\ge 1\right)\nonumber\\ =&\mE_\bQ \left( e^{-\int_\R f(y-x) \tilde{X}_t(dy)}; M_t^{\tilde{X}}\le x\right)\mE_\bQ \left( e^{-\int_\R f(y-x) I_t(dy)}; M_t^I\le x|N\ge 1\right)\nonumber\\ =&e^{-u^*_f(t,x)}\mE_\bQ(v_f(t,x)^N|N\ge 1) =e^{-u^*_f(t,x)}\frac{e^{\lambda^*v_f(t,x)}-1}{e^{\lambda^*}-1 }. \end{align} Since $(X,\P^*)$ is subcritical, we have, for any $\delta$, $$e^{-u^*_f(t,\sqrt{2\alpha}\delta t)}\ge \P^*(\|X_t\|=0)\to 1,\quad t\to\infty,$$ which implies that $e^{-u^*_f(t,\sqrt{2\alpha}\delta t)}\to1$, as $t\to\infty$. By \eqref{limit-as}, we have for any $\delta<1$, $$ \E\left(e^{-\int_\R f(y-\sqrt{2\alpha}\delta t) X_t(dy)}; M_t\le \sqrt{2\alpha}\delta t|\mathcal{S}\right)\le \P(M_t\le \sqrt{2\alpha} \delta t|\mathcal{S})\to0. $$ Thus by \eqref{e:1}, $v_f(t,\sqrt{2\alpha}\delta t)\to0 $ for any $\delta<1$. The desired result follows immediately. \hfill $\Box$ \medskip To study the behavior of $v_f(t,\sqrt{2\alpha}\delta t)$ as $t\to\infty$, the following decomposition of $v_f$ plays a fundamental role. 
\begin{proposition}\label{exp:v} For any $f\in \mathcal{B}^+(\R)$, $t>0$ and $x\in\R$, \begin{equation}\label{decom-v} v_f(t,x)=U_{1,f}(t,x)+U_{2,f}(t,x), \end{equation} where \begin{align}\label{eq-v2} U_{1,f}(t,x)=&\mE \Big[e^{-\int_0^t \psi'(\lambda^*+u^*_f(t-r,x-B_r))\,dr},B_t\le x\Big],\\ U_{2,f}(t,x)=& \mE\int_0^t e^{-\int_0^{s} \psi'(\lambda^*+u^*_f(t-r,x-B_{r}))\,dr}\hat{G}_f(t-s,x-B_{s})\,\d s,\label{eq-v2'} \end{align} with $\hat{G}_f(t,x)$ being defined by \begin{align*} \hat{G}_f(t,x)&=\frac{1}{\lambda^*}\left[\beta(\lambda^*)^2 v_f(t,x)^2+\int_0^\infty \left(e^{\lambda^* v_f(t,x)y}-1-\lambda^* v_f(t,x)y\right)e^{-(\lambda^*+u^*_f(t,x))y}\,n(\d y)\right]\\ &=\frac{1}{\lambda^*}\Big[\psi(u_f(t,x))-\psi(\lambda^*+u^*_f(t,x))+\psi'(\lambda^*+u^*_f(t,x))\lambda^*v_f(t,x)\Big]. \end{align*} \end{proposition} {\bf Proof:} Let $\tau$ be the first splitting time of $Z$, that is $\tau=\sigma_\varnothing$. By considering the cases $\tau>t$ and $\tau\le t$ separately, we get \begin{align}\label{v=U1+U2} &v_f(t,x)=\bE\left( e^{-\int_\R f(y-x)I_t(dy)}; M^I_t\le x\right)\nonumber\\ &=\displaystyle \bE\left( e^{-\int_\R f(y-x)I_t(dy)}; M^I_t\le x,\tau>t\right) +\bE\left( e^{-\int_\R f(y-x)I_t(dy)}; M^I_t\le x,\tau\le t\right)\nonumber\\ &=:\displaystyle U_{1,f}(t,x)+U_{2,f}(t,x).\end{align} By Lemma \ref{fact2}, $U_{1,f}(t,x)=\bE\left( e^{-\int_\R f(y-x)I_t(dy)}; M^I_t\le x, M_t^Z\le x,\tau>t\right)$. By the decomposition of $I$ in \eqref{I=sum}, on the event $\{\tau>t\}$, we have that $I_t=I_t^{\N^*}+I_t^{\P^*}$. 
Thus using \cite[Lemma 3]{BKS}, we have on the event $\{\tau>t\}$, for any $x\in\R$, \begin{align*} &\bE\left( e^{-\int_\R f(y-x)I_t(dy)}; M^I_t\le x|\mathcal{F}_t^Z\right) =\lim_{\theta\to\infty}\bE\left(e^{-\int_\R [f(y-x)+\theta {\bf 1}_{(0,\infty)}(y-x)]I_t(dy)}|\mathcal{F}_t^Z\right)\\ =&\exp\left\{-\int_0^t \langle\phi(u^*_f(t-s, x-\cdot)),Z_s\rangle\,\d s\right\}, \end{align*} where $\{\mathcal{F}_t^Z, t\geq 0\}$ is the natural filtration of $Z$ and \begin{equation}\label{phi} \phi(\lambda):=\psi'(\lambda+\lambda^*)-\psi'(\lambda^*)=2\beta\lambda+\int_0^\infty (1-e^{-\lambda x})xe^{-\lambda^*x}n(dx). \end{equation} Note that, on the event $\{\tau>t\}$, $Z_s=\delta_{z_{\varnothing}(s)}$ and $\{z_{\varnothing}(s),s\le t\}\overset{d}{=}\{B_s,s\le t\}$. Thus \begin{align}\label{U1-expansion} U_{1,f}(t,x)&=e^{-qt}\mE\Big[\exp\Big\{-\int_0^t \phi(u^*_f(t-r,x-B_r))\,dr\Big\}; B_t\le x\Big]\nonumber\\ &=\mE\Big[\exp\Big\{-\int_0^t \psi'(\lambda^*+u^*_f(t-r,x-B_r))\,dr\Big\};B_t\le x\Big]. \end{align} On the event $\{\tau\le t\}$, the immigration process $I$ has the following expression: \begin{align}\label{dec:I} I_t& =\sum_{(r_j,w_j)\in\mathcal{D}_{1,\varnothing}} w_j(t-r_j)+\sum_{(r_j,w_j)\in\mathcal{D}_{2,\varnothing}} w_j(t-r_j)+ X^{(3,\varnothing)}_{t-\tau} +\sum_{i=1}^{N_\varnothing}I^i_{t-\tau}\nonumber\\ &=: \mathcal{J}_{1,t}+\mathcal{J}_{2,t}+\mathcal{J}_{3,t}+\mathcal{J}_{4,t}, \end{align} where, given $Z_{\tau}$, $I^i,i=1,\cdots, N_\varnothing$, are i.i.d copies of $I$ under $\bP_{z_\varnothing(\tau)}$. 
Since, given $\mathcal{F}_t^Z$, the $\mathcal{J}_{i,t}$, $i=1,2,3,4$, are independent, we have \begin{align}\label{2.5} U_{2,f}(t,x)&=\bE\left[\bE\left( e^{-\int_\R f(y-x)I_t(dy)}; M^I_t\le x | \mathcal{F}_t^Z\right);\tau\le t\right]\nonumber\\ &=\bE\left[H_{1,t}H_{2,t}H_{3,t}H_{4,t};\tau\le t\right], \end{align} where $$H_{i,t}=\bE( e^{-\int_\R f(y-x)\mathcal{J}_{i,t}(dy)}; \mathcal{J}_{i,t}(x,\infty)=0 | \mathcal{F}_t^Z),\quad i=1,2,3,4.$$ Put $f_\theta=f+\theta{\bf 1}_{(0,\infty)}.$ By the bounded convergence theorem, we have \begin{equation}\label{H} H_{i,t}=\lim_{\theta\to\infty} \bE\left( e^{-\int_\R f_\theta(y-x)\mathcal{J}_{i,t}(dy)}| \mathcal{F}_t^Z\right). \end{equation} By the definition of $\mathcal{D}_{1,\varnothing}$ and \eqref{H}, we have that, on the event $\{\tau\le t\}$, \begin{align} H_{1,t}=\lim_{\theta\to\infty} \exp\left\{-2\beta\int_0^\tau\int_{\mathbb{D}_0^+}\Big(1-e^{-\int_\R f_\theta(y-x) w_{t-r}(dy)}\Big)\N^*_{z_{\varnothing}(r)}(dw)\,dr\right\}. \end{align} Using \eqref{N-measure}, we get that \begin{align*} &\lim_{\theta\to\infty}\int_{\mathbb{D}_0^+}\left(1-e^{-\int_\R f_\theta(y-x) w_{t-r}(dy)}\right)\N^*_{z}(dw) =\lim_{\theta\to\infty}-\log \E^*_{\delta_z}\left[e^{-\int_\R f_\theta(y-x) X_{t-r}(dy)}\right]\\ &=-\log \E^*_{\delta_z}\left[e^{-\int_\R f(y-x) X_{t-r}(dy)};M_{t-r}\le x\right]=u^*_f(t-r,x-z). \end{align*} Thus we have that \begin{align}\label{J1} H_{1,t}=\exp\left\{-2\beta\int_0^\tau u^*_f(t-r,x-z_{\varnothing}(r))\,dr\right\}. \end{align} For $H_{2,t}$, on the event $\{\tau\le t\}$, we have that \begin{align} H_{2,t}=\lim_{\theta\to\infty}\exp\left\{-\int_0^\tau \int_0^\infty ye^{-\lambda^* y}n(\d y)\E^*_{y\delta_{z_\varnothing(r)}}\Big(1-e^{-\int_\R f_\theta(y-x) X_{t-r}(dy)}\Big)\,dr\right\}.
\end{align} It follows from the branching property of $X$ that \begin{align*} &\lim_{\theta\to\infty}\P^*_{y\delta_{z}}(e^{-\int_\R f_\theta(y-x) X_{t-r}(dy)}) =\lim_{\theta\to\infty}\left[\P^*_{\delta_{z}}(e^{-\int_\R f_\theta(y-x) X_{t-r}(dy)})\right]^y=e^{-u^*_f(t-r,x-z)y},\end{align*} which implies that \begin{align}\label{J2} H_{2,t}=\exp\left\{-\int_0^\tau \int_0^\infty y[1-e^{-u^*_f(t-r,x-z_\varnothing(r))y}]e^{-\lambda^* y}n(\d y)\,dr\right\}. \end{align} By the definition of $X^{(3,\varnothing)}$, on the event $\{\tau\le t\}$, we have that \begin{align}\label{J3} H_{3,t} =&\lim_{\theta\to \infty}\bE\left(\P^*_{Y_\varnothing\delta_{y}}\left(e^{-\int_\R f_\theta(y-x)X_{t-s}(dy)}\right)|\mathcal{F}_t^Z\right)|_{s=\tau,y=z_\varnothing(\tau)}\nonumber\\ =&\bE\left(e^{-u^*_f(t-\tau,x-z_\varnothing(\tau))Y_\varnothing}|\mathcal{F}_t^Z\right)\nonumber\\ =&\frac{1}{p_{N_\varnothing}\lambda^* q}\left(\beta(\lambda^*)^2{\bf 1}_{N_\varnothing=2}+\int_0^\infty\frac{(\lambda^*y)^{N_\varnothing}}{N_\varnothing !}e^{-u^*_f(t-\tau,x-z_\varnothing(\tau))y}e^{-\lambda^* y}\,n(\d y)\right). \end{align} It follows from the branching property that on the event $\{\tau\le t\}$, \begin{align}\label{J4} H_{4,t} =\left[\bP_{\delta_{z_\varnothing(\tau)}}\Big(e^{-\int_\R f(y-x)X_{t-s}(dy)}; M_{t-s}^I\le x\Big)\right]^{N_{\varnothing}}_{s=\tau}=v_f(t-\tau,x-z_\varnothing(\tau))^{N_{\varnothing}}. 
\end{align} Note that \begin{align}\label{2.3} &\sum_{n=2}^\infty p_n\frac{1}{p_{n}\lambda^* q}\left(\beta(\lambda^*)^2{\bf 1}_{n=2}+\int_0^\infty\frac{(\lambda^*y)^{n}}{n!}e^{-u^*_f(t-\tau,x-z_\varnothing(\tau))y}e^{-\lambda^* y}\,n(\d y)\right)v_f(t-\tau,x-z_\varnothing(\tau))^{n}\nonumber\\ &=\frac{1}{\lambda^* q}\Big[\beta(\lambda^*)^2v_f(t-\tau,x-z_\varnothing(\tau))^2\nonumber\\ &\qquad +\int_0^\infty\left(e^{\lambda^*v_f(t-\tau,x-z_\varnothing(\tau))y}-1-\lambda^*v_f(t-\tau,x-z_\varnothing(\tau))y\right) e^{-(\lambda^*+u^*_f(t-\tau,x-z_\varnothing(\tau)))y}n(\d y)\Big]\nonumber\\ &=q^{-1}\hat{G}_f(t-\tau,x-z_\varnothing(\tau)). \end{align} Recall the definition of $\phi$ in \eqref{phi}. Combining \eqref{2.5}-\eqref{2.3}, we get that \begin{align*} U_{2,f}(t,x)&=q^{-1}\bP\left(\exp\left\{-\int_0^{\tau} \phi( u^*_f(t-r,x-z_\varnothing(r)))\,dr\right\}\hat{G}_f(t-\tau,x-z_\varnothing(\tau)),\tau\le t\right)\nonumber\\ &=\mE\int_0^t\exp\left\{-\int_0^{s}\left(q+ \phi( u^*_f(t-r,x-B_{r}))\right)\,dr\right\}\hat{G}_f(t-s,x-B_{s})\,\d s. \end{align*} Note that $q+\phi(\lambda)=\psi'(\lambda^*+\lambda)$. The proof is now complete. \hfill$\Box$ \medskip Note that $\frac{e^x-1-x}{x^2}=\sum_{k=2}^\infty \frac{x^{k-2}}{k!}$ is increasing in $x$ on $(0,\infty)$. So $e^{\lambda^*v_f(t,x)y}-1-\lambda^*v_f(t,x)y\le v_f(t,x)^2(e^{\lambda^*y}-1-\lambda^*y)$, which implies that \begin{align}\label{est-hatG} &\hat{G}_f(t,x)\le \frac{1}{\lambda^*}\left[\beta(\lambda^*)^2+\int_0^\infty(e^{\lambda^*y}-1-\lambda^*y)e^{-\lambda^*y}n(\d y)\right]v_f(t,x)^2 \nonumber\\ &=\left(\psi'(\lambda^*)-\psi(\lambda^*)/\lambda^*\right)v_f(t,x)^2=qv_f(t,x)^2 \le qv(t,x). \end{align} Here in the last inequality, we use the fact that $v_f(t,x)\le v(t,x)$. \subsection{Some useful estimates} In this subsection we give some useful estimates for $u^*_f(t,x)$ and $v_f(t,x)$.
Recall that $q=\psi'(\lambda^*)$ and $\rho=\sqrt{1+q/\alpha}.$ \begin{lemma}\label{lemma:u*} \begin{description} \item [{\bf (1)}] For any $f\in\mathcal{B}^+(\R)$ and $t>0, x\in\R$, $$u^*_f(t,x)\le k(t):=-\log \P^*(X_t=0),$$ and $t\mapsto e^{qt}k(t)$ is decreasing on $(0, \infty)$. \item [{\bf (2)}] If (H2) holds, then there exists a positive constant $c_2$ such that \begin{equation}\label{est:k} k(t)\le \left[\frac{c_2} {e^{c_2\vartheta t}-1}\right]^{1/\vartheta}, \quad t>0, \end{equation} and for any $f\in\mathcal{B}_b^+(\R)$, there exists a positive constant $c_3$ such that \begin{align}\label{est:u*} u^*_f(t,x)\le c_3(1+x^{-2/\vartheta})e^{(a+\alpha)t}, \quad t, x>0. \end{align} \end{description} \end{lemma} {\bf Proof:} Since $\E^*\left(e^{-\int_\R f(y-x)X_t(dy)}; M_t\le x\right)\ge \P^*(X_t=0)$ for any $t>0, x\in\R$, we have $u^*_f(t,x)\le k(t)$. By the branching and Markov properties, we get that $$\P^*(\|X_t\|=0)=\E^*\left(\P^*_{X_{t-s}}(\|X_{s}\|=0)\right)=\E^*\left(e^{-k(s)\|X_{t-s}\|}\right).$$ Put $u^*_\theta(t):=-\log\E^*\left(e^{-\theta\|X_t\|}\right)$. Then $k(t)=u^*_{k(s)}(t-s).$ Under $\P^*$, $\|X_t\|$ is a continuous state branching process with branching mechanism $\psi(\lambda^*+\lambda)$. Then according to \cite[Theorem 10.1]{Kyprianou}, we have \begin{equation}\label{deriv-k} k'(t)=-\psi\left(\lambda^*+u^*_{k(s)}(t-s)\right)=-\psi(\lambda^*+k(t)). \end{equation} Since $\psi(\lambda^*)=0$ and $\psi'$ is increasing on $(0,\infty)$, $\psi(\lambda^*+\lambda)\ge \psi'(\lambda^*)\lambda=q\lambda.$ Thus $k'(t)\le -q k(t)$. Using this, one can check that $(e^{qt}k(t))'\le 0$. The proof of (1) is complete. Assume that (H2) holds. Then there exists $c_2>0$ such that $\psi(\lambda^*+\lambda)\ge c_2(\lambda+\lambda^{1+\vartheta})$. Now \eqref{est:k} follows immediately from \eqref{deriv-k}.
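As a quick numerical illustration of part (1) (a toy example with an assumed branching mechanism $\psi(\lambda)=\lambda^2-\lambda$, so that $\lambda^*=1$ and $q=\psi'(1)=1$; this is not the general mechanism of the paper and is not part of the proof), one can integrate the ODE \eqref{deriv-k} by an explicit Euler scheme and observe that $t\mapsto e^{qt}k(t)$ is indeed decreasing:

```python
import math

# Toy branching mechanism psi(lam) = lam^2 - lam: lam* = 1 (psi(1) = 0), q = psi'(1) = 1.
# Then k'(t) = -psi(1 + k(t)) = -(k + k^2).  We integrate by explicit Euler
# and record e^{q t} k(t), which should be nonincreasing in t.
q = 1.0

def psi_shifted(k):
    # psi(lam* + k) for the toy mechanism: (1+k)^2 - (1+k) = k + k^2
    return k + k * k

k, t, dt = 10.0, 0.0, 1e-4   # a large k(0) mimics k(t) ~ -log P*(X_t = 0) for small t
vals = []
for _ in range(100000):      # integrate up to t = 10
    vals.append(math.exp(q * t) * k)
    k += -psi_shifted(k) * dt
    t += dt

# e^{qt} k(t) is nonincreasing along the whole trajectory
assert all(a >= b for a, b in zip(vals, vals[1:]))
```

The exact solution here is $k(t)/(1+k(t))=Ce^{-t}$ with $C=k(0)/(1+k(0))$, for which $e^{t}k(t)=C/(1-Ce^{-t})$ is visibly decreasing, matching the numerical check.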
Since $u_f^*(t,x)\le u_f(t,x)$, it suffices to show that \eqref{est:u*} is true for $u_f(t,x).$ By \cite[Lemma 2.3(2)]{SRZ}, we have that $$V_{f_1+f_2}(t,x)\le V_{f_1}(t,x)+V_{f_2}(t,x).$$ By \eqref{rel-V-u}, $$u_f(t,x)=\lim_{\theta\to\infty} V_{f_\theta}(t,-x)\le V_f(t,-x)+\lim_{\theta\to\infty}V_{\theta{\bf 1}_{(0,\infty)}}(t,-x) =V_f(t,-x)+u(t,x),$$ where $f_\theta=f+\theta{\bf 1}_{(0,\infty)}$. By \eqref{V(-x)} and Jensen's inequality, we have that $$V_f(t,-x)=-\log \E\left(e^{-\int_\R f(y-x) X_t(dy)}\right)\le \E\left(\int_\R f(y-x) X_t(dy)\right)=e^{\alpha t}\mE(f(B_t-x))\le e^{\alpha t}\|f\|.$$ By \cite[Lemma 4.2 and 4.3]{SRZ} (with $A$ being replaced by $x$, and $x$ there replaced by 0), we get that there exists a positive constant $C$ such that $$ u(t,x)\le C(1+x^{-2/\vartheta})e^{at},\quad t,x>0 . $$ Combining the two displays above, we get that $$u_f(t,x)\le e^{\alpha t}\|f\|+ C(1+x^{-2/\vartheta})e^{at}\le (C+\|f\|)(1+x^{-2/\vartheta})e^{(a+\alpha)t}.$$ Now \eqref{est:u*} follows immediately. \hfill$\Box$ \medskip \begin{lemma}\label{lemma2} Assume that (H1) and (H2) hold. For any $A>0$ and $\epsilon>0$, $$\int_0^A \phi(k(s))s^\epsilon\,\d s<\infty.$$ \end{lemma} {\bf Proof:} Note that, by \eqref{deriv-k}, $$k'(s)=-\psi(k(s)+\lambda^*), \quad k''(s)=-\psi'(k(s)+\lambda^*)k'(s). $$ Thus, using \eqref{phi}, we have $$0\le \phi(k(s))=\psi'(k(s)+\lambda^*)-q=\frac{k''(s)}{-k'(s)}-q\le \frac{k''(s)}{-k'(s)}.$$ It follows that \begin{align}\label{3.10.1} \int_0^A \phi(k(s))s^{\epsilon}\,\d s&\le \int_0^A \frac{k''(s)}{-k'(s)}s^{\epsilon}\,\d s=\int_0^A s^{\epsilon}\,\d (-\log (-k'(s)))\nonumber\\ &=-\log(-k'(A))A^{\epsilon}+\lim_{s\to0} s^{\epsilon}\log (-k'(s))+\epsilon\int_0^A \log (-k'(s))s^{\epsilon-1}\,\d s. \end{align} Note that for $\lambda>0$, $\psi''(\lambda+\lambda^*)$ exists and is decreasing. 
By Taylor's expansion, since $\psi(\lambda^*)=0$, we have that $$\psi(\lambda+\lambda^*)\le\psi'(\lambda^*)\lambda+\psi''(\lambda^*)\lambda^2,\quad \lambda>0.$$ By \eqref{est:k}, we have that $k(s)\le Cs^{-1/\vartheta}$. Thus we get that \begin{align*} -k'(s)=\psi(k(s)+\lambda^*)&\le \psi'(\lambda^*)k(s)+\psi''(\lambda^*)k(s)^2\\ &\le C(s^{-1/\vartheta}+s^{-2/\vartheta})\le C s^{-2/\vartheta},\quad s\in[0,A]. \end{align*} Now the desired result follows immediately from \eqref{3.10.1}. \hfill$\Box$ \medskip Now we give some upper estimates of $v(t,x)$. \begin{lemma}\label{lemma:key} \begin{description} \item [{\bf(1)}]For any $t>0$, \begin{equation}\label{domi-B'} v(t,x)\le {\rm P}(B_t\le x), \qquad x\in \R, \end{equation} and \begin{equation}\label{domi-B} v(t,x)\le {\rm P}(B_t\le x) \le \frac{\sqrt{t}}{\sqrt{2\pi}|x|} e^{-\frac{x^2}{2t}}, \qquad x<0. \end{equation} \item [{\bf (2)}] There exist $t_0>1$ and $c>0$ such that for any $t>t_0$, \begin{align}\label{est:v2} v(t,\sqrt{2\alpha} \theta t-\sqrt{t})&\le {\bP}({M}^Z_t\le \sqrt{2\alpha} \theta t-\sqrt{t})\nonumber\\ & \le c t\left\{\begin{array}{ll} e^{-(q+\alpha \theta^2)t},&\hbox{$ \theta<1-\rho$};\\ e^{-2\alpha(\rho-1)(1- \theta)t},&\hbox{$1-\rho\le \theta<1$}. \end{array}\right. \end{align} \end{description} \end{lemma} {\bf Proof:} (1) By Proposition \ref{exp:v}, we have \begin{align}\label{eq-v2''} v(t,x)=&\mE \Big[e^{-\int_0^t \psi'(\lambda^*+u^*(t-r,x-B_r))\,dr},B_t\le x\Big]\nonumber\\ &+\mE\int_0^t e^{-\int_0^{s} \psi'(\lambda^*+u^*(t-r,x-B_{r}))\,dr}\hat{G}(t-s,x-B_{s})\,\d s\nonumber\\ =&\mE_x \Big[e^{-\int_0^t \psi'(\lambda^*+u^*(t-r,B_r))\,dr},B_t\ge 0\Big]\nonumber\\ &+\mE_x\int_0^t e^{-\int_0^{s} \psi'(\lambda^*+u^*(t-r,B_{r}))\,dr}\hat{G}(t-s,B_{s})\,\d s, \end{align} where $\hat{G}$ is the $\hat{G}_f$ defined in Proposition \ref{exp:v} with $f\equiv 0$. 
Thus, by the Feynman-Kac formula, we have \begin{align*} v(t,x)&=\mP_x \Big[B_t\ge 0\Big]+\mE_x \int_0^t\left[\hat{G}(t-s,B_{s})-\psi'(\lambda^*+u^*(s,B_{t-s}))v(s,B_{t-s})\right]\d s\\ &=\mP_x \Big[B_t\ge 0\Big]+\frac{1}{\lambda^*}\mE_x \int_0^t\left[\psi(u(s, B_{t-s}))-\psi(\lambda^*+u^*(s,B_{t-s}))\right]\d s. \end{align*} Note that $u(s,z)\le \lambda^*+u^*(s,z)$, $\psi$ is negative on $(0,\lambda^*)$ and increasing on $(\lambda^*,\infty)$. Thus $\psi(u(s, B_{t-s}))-\psi(\lambda^*+u^*(s,B_{t-s}))\le 0.$ Therefore we have that $$v(t,x)\le \mP_x \Big[B_t\ge 0\Big]=\mP \Big[B_t\le x\Big],\qquad x\in\R.$$ For $x<0$, \begin{align}\label{est:bm} \mP \Big[B_t\le x\Big]&=\mP \Big[B_1\ge |x|t^{-1/2}\Big]=\frac{1}{\sqrt{2\pi}}\int_{|x|t^{-1/2}}^\infty e^{-y^2/2}\d y\nonumber\\ &\le \frac{1}{\sqrt{2\pi}}\int_{|x|t^{-1/2}}^\infty \frac{y}{|x|t^{-1/2}}e^{-y^2/2}\d y\le \frac{\sqrt{t}}{\sqrt{2\pi}|x|} e^{-x^2/(2t)}. \end{align} Thus \eqref{domi-B} follows. (2) We claim that there exists $t_0>0$ such that for any $t>t_0$ and $z$, \begin{align}\label{upper} {\bP}({M}^Z_t\le z)\le (2qt+1)\sup_{0\le s\le t}e^{-qs}\mP\left(B_{1}\le ( z-\sqrt{2\alpha}(t-s)+\sqrt{t})/\sqrt{s}\right). \end{align} It is shown in \cite{DS} (see the discussion below \cite[Lemma 3]{DS}) that the claim is true when $p_2=1$ and $q=1$. Using similar arguments we see that it is also true for the general case. We omit the proof here. Put $a(t):=\sqrt{2\alpha}(1- \theta)t.$ By \eqref{upper}, for $t>t_0$, $${\bP}({M}^Z_t\le \sqrt{2\alpha} \theta t-\sqrt{t})\le (2qt+1)\sup_{0\le s\le t}e^{-qs}\mP(B_{1}\le (\sqrt{2\alpha}s-a(t))/\sqrt{s}).$$ Note that by \eqref{est:bm}, $\mP(B_1\le -y)\le \frac{1}{\sqrt{2\pi}} y^{-1}e^{-y^2/2}$ for all $y>0$.
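As an aside, this standard Mills-ratio tail estimate can be checked numerically (a small stand-alone script, not part of the proof; the exact tail $\mP(B_1\le -y)$ is computed via the complementary error function):

```python
import math

# Check the Gaussian tail bound used in (est:bm):
#   P(B_1 <= -y) <= (2*pi)^{-1/2} * y^{-1} * exp(-y^2/2),   y > 0.
def gauss_tail(y):
    # exact: P(B_1 <= -y) = P(B_1 >= y) = 0.5 * erfc(y / sqrt(2))
    return 0.5 * math.erfc(y / math.sqrt(2.0))

def tail_bound(y):
    return math.exp(-y * y / 2.0) / (math.sqrt(2.0 * math.pi) * y)

for y in (0.5, 1.0, 2.0, 3.0, 5.0):
    assert gauss_tail(y) <= tail_bound(y)
```

The bound holds for every $y>0$, though it is only sharp for large $y$.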
Thus, if $\sqrt{2\alpha}s<a(t)$, we have \begin{align}\label{6.1} e^{-qs}\mP\left(B_{1}\le (\sqrt{2\alpha}s-a(t))/\sqrt{s}\right) &\le \frac{\sqrt{s}}{\sqrt{2\pi}}\frac{1}{{a(t)-\sqrt{2\alpha}s}}e^{-qs} e^{-(\sqrt{2\alpha}s-a(t))^2/2s}\nonumber\\ &=\frac{\sqrt{s}}{\sqrt{2\pi}}\frac{1}{{a(t)-\sqrt{2\alpha}s}}e^{\sqrt{2\alpha}a(t)}e^{-\alpha\rho^2s-\frac{a(t)^2}{2s}} . \end{align} It is clear that \begin{equation}\label{easy-domi-below} \alpha\rho^2s+\frac{a(t)^2}{2s}\ge \sqrt{2\alpha}\rho a(t), \end{equation} and that the left-hand side of \eqref{easy-domi-below} is decreasing in $s$ on $(0,\frac{a(t)}{\sqrt{2\alpha}\rho})$. We now prove the desired result in four cases. (i) If $a(t)>\sqrt{2\alpha}\rho t$ (that is, $ \theta<1-\rho$), then $\sqrt{2\alpha}s<a(t)$ for $s\in[0,t]$ and thus by \eqref{6.1} we have that \begin{align*} &\sup_{0\le s\le t}e^{-qs}\mP\left(B_{1}\le (\sqrt{2\alpha}s-a(t))/\sqrt{s}\right) \le \frac{\sqrt{t}}{\sqrt{2\pi}}\frac{1}{{a(t)-\sqrt{2\alpha}t}}e^{\sqrt{2\alpha}a(t)}e^{-\alpha\rho^2t-\frac{a(t)^2}{2t}}\\ &= \frac{\sqrt{t}}{\sqrt{2\pi}}\frac{1}{{a(t)-\sqrt{2\alpha}t}}e^{-(\alpha\rho^2+\alpha(1- \theta)^2-2\alpha(1- \theta))t} \le\frac{1}{\sqrt{2\pi}}\frac{1}{\sqrt{2\alpha}(\rho-1)\sqrt{t}}e^{-(q+\alpha \theta^2)t}. \end{align*} (ii) If $\sqrt{2\alpha}\frac{\rho+1}{2} t \le a(t)\le\sqrt{2\alpha}\rho t$ (that is, $1-\rho\le \theta\le (1-\rho)/2$), then $\sqrt{2\alpha}s<a(t)$ for $s\in[0,t]$, and thus by \eqref{6.1} and \eqref{easy-domi-below} we have that \begin{align*} \sup_{0\le s\le t}e^{-qs}\mP\left(B_{1}\le (\sqrt{2\alpha}s-a(t))/\sqrt{s}\right) &\le\frac{1}{\sqrt{2\pi}}\frac{2}{\sqrt{2\alpha}(\rho-1)\sqrt{t}}e^{-\sqrt{2\alpha}(\rho-1)a(t)}\\ &=\frac{1}{\sqrt{2\pi}}\frac{2}{\sqrt{2\alpha}(\rho-1)\sqrt{t}}e^{-2\alpha(\rho-1)(1- \theta)t}.
\end{align*} (iii) If $1<a(t)<\sqrt{2\alpha}\frac{\rho+1}{2}t$ (that is, $(1-\rho)/2< \theta<1-\frac{1}{\sqrt{2\alpha}t}$), then \begin{align*} &\sup_{0\le s\le t}e^{-qs}\mP\left(B_{1}\le (\sqrt{2\alpha}s-a(t))/\sqrt{s}\right)\\ \le & \sup_{0\le s\le \frac{2}{\sqrt{2\alpha}(\rho+1)}a(t)}e^{-qs}\mP\left(B_{1}\le (\sqrt{2\alpha}s-a(t))/\sqrt{s}\right)+e^{-q\frac{2}{\sqrt{2\alpha}(\rho+1)}a(t)}\\ \le& \sup_{0\le s\le \frac{2}{\sqrt{2\alpha}(\rho+1)}a(t)}\frac{1}{\sqrt{2\pi}}\frac{\sqrt{s}}{a(t)-\sqrt{2\alpha}s}e^{-\sqrt{2\alpha}(\rho-1)a(t)}+e^{-\sqrt{2\alpha}(\rho-1)a(t)}\\ \le & \sqrt{\frac{1}{\sqrt{2\alpha}\pi(\rho+1)}}\frac{1}{\frac{\rho-1}{(\rho+1)}\sqrt{a(t)}}e^{-\sqrt{2\alpha}(\rho-1)a(t)}+e^{-\sqrt{2\alpha}(\rho-1)a(t)}\\ \le & \left(\sqrt{\frac{\rho+1}{\sqrt{2\alpha}\pi}}\frac{1}{\rho-1}+1\right)e^{-2\alpha(\rho-1)(1- \theta)t}. \end{align*} Here in the second inequality we used \eqref{6.1}, \eqref{easy-domi-below} and the fact that $$ q\frac{2}{\sqrt{2\alpha}(\rho+1)}=\frac{2\alpha(\rho^2-1)}{\sqrt{2\alpha}(\rho+1)}=\sqrt{2\alpha}(\rho-1). $$ (iv) Finally, if $0< a(t)\le 1$ (that is, $1-\frac{1}{\sqrt{2\alpha}t}\le \theta<1$), then $${\bP}\left({M}^Z_t\le \sqrt{2\alpha}\theta t-\sqrt{t}\right)\le 1\le e^{\sqrt{2\alpha}(\rho-1)}e^{-\sqrt{2\alpha}(\rho-1)a(t)}=e^{\sqrt{2\alpha}(\rho-1)}e^{-2\alpha(\rho-1)(1- \theta)t}.$$ The proof is now complete. \hfill$\Box$ \medskip Recall that $m_t=\sqrt{2\alpha}t-\frac{3}{2\sqrt{2\alpha}}\log t$. The next lemma gives another estimate of $v(t,z)$. The proof will be given in the Appendix. \begin{lemma}\label{lem_upper-w} For any $\epsilon\in(0,\sqrt{2\alpha}(\rho-1))$, there exist $c_\epsilon>1$ and $T_\epsilon\ge 1$ such that $$ v(t,m_t-z)\le \bP\left(M_t^Z\le m_t-z\right) \le c_\epsilon e^{-\sqrt{2\alpha}(\rho-1)z} e^{\epsilon z}, \quad t\ge T_\epsilon, z>0. $$ \end{lemma} \section{Proofs of the main results} Put $\zeta_f(t,x):=\psi'(\lambda^*+u^*_f(t,x))$. It is clear that $\zeta_f(t,x)\ge \psi'(\lambda^*)=q$.
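The exponent bookkeeping in this section repeatedly uses $q=\alpha(\rho^2-1)$, equivalently $\rho=\sqrt{1+q/\alpha}$. As a stand-alone numerical sanity check (illustrative only, not part of the proofs), the completing-the-square identity $q+\alpha\delta^2-2\alpha(\rho-1)(1-\delta)=\alpha(\rho-1+\delta)^2$ can be verified at random parameter values:

```python
import random

random.seed(0)
# Check q + alpha*delta^2 - 2*alpha*(rho-1)*(1-delta) == alpha*(rho-1+delta)^2
# with q = alpha*(rho^2 - 1), for random alpha > 0, rho > 1, delta in (1-rho, 1).
for _ in range(1000):
    alpha = random.uniform(0.1, 5.0)
    rho = random.uniform(1.01, 3.0)
    q = alpha * (rho * rho - 1.0)
    delta = random.uniform(1.0 - rho, 1.0)
    lhs = q + alpha * delta ** 2 - 2.0 * alpha * (rho - 1.0) * (1.0 - delta)
    rhs = alpha * (rho - 1.0 + delta) ** 2
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

The identity follows by expanding $q=\alpha(\rho-1)(\rho+1)$ and collecting the square in $\rho-1+\delta$.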
\begin{lemma}\label{lemma1} For any $f\in\mathcal{B}^+(\R)$, \begin{align*} U_{1,f}(t,\sqrt{2\alpha}\delta t) \le \left\{ \begin{array}{ll} e^{-qt}, & \hbox{$\delta\ge 0$;} \\ \frac{1}{2\sqrt{\pi\alpha}|\delta|} t^{-1/2}e^{-(q+\alpha\delta^2)t}, & \hbox{$\delta<0$.} \end{array} \right. \end{align*} \end{lemma} {\bf Proof:} Since $\zeta_f (t,x)\ge \psi'(\lambda^*)=q$, by \eqref{eq-v2}, we have that $$U_{1,f}(t,\sqrt{2\alpha}\delta t)=\mE\left(e^{-\int_0^t \zeta_f(t-r,\sqrt{2\alpha}\delta t-B_r)\,\d r};B_t \le \sqrt{2\alpha}\delta t\right)\le e^{-qt}\mP \Big[B_t\le \sqrt{2\alpha}\delta t\Big].$$ Thus, the desired result follows easily from \eqref{est:bm} with $x=\sqrt{2\alpha}\delta t$. \hfill$\Box$ \medskip Note that by the change of variables $s\to t-s$, we have $$U_{2,f}(t, x)=\mE \int_0^t e^{-\int_s^t \zeta_f(r,x-B_{t-r})\,dr}\hat{G}_f(s,x-B_{t-s})\,\d s.$$ \subsection{Proof of Theorem \ref{thm-case1}: $\delta\in(1-\rho,1)$} It follows from Lemma \ref{fact4} that, to prove Theorem \ref{thm-case1}, we only need to consider the limiting behavior of $v_f(t,\sqrt{2\alpha}\delta t)$. Note that \begin{align}\label{2.7} q+\alpha\delta^2- 2\alpha(\rho-1)(1-\delta)=\alpha(\rho-1+\delta)^2, \end{align} and \begin{align}\label{2.8} 2\alpha(\rho-1)(1-\delta)\le 2\alpha(\rho-1)<\alpha(\rho^2-1)=q,\quad \delta\in[0,1). \end{align} It follows from Lemma \ref{lemma1} that for any $\delta\in(1-\rho,1)$, $$\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}} U_{1,f}(t,\sqrt{2\alpha}\delta t)=0.
$$ Thus, by the decomposition \eqref{decom-v}, to prove the desired result, it suffices to show that $$\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}U_{2,f}(t, \sqrt{2\alpha}\delta t) =\frac{a_\delta^{3(\rho-1)/2}}{\sqrt{2\alpha}\rho} \int_{-\infty}^\infty e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z,$$ where $a_\delta=1-\frac{1-\delta}{\rho}$ and $A(\lambda)=\frac{1}{\lambda^* }\psi(\lambda)+\psi'(\lambda^*)(1-\lambda/\lambda^*).$ The result above follows from Lemmas \ref{lem-case1} and \ref{lem-case1-2} below. In Lemma \ref{lem-case1-2}, we will show that for $\delta\in(1-\rho, 1)$, $$\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}\bP\Big(M_t^I\le \sqrt{2\alpha}\delta t,\tau\notin \Big[\frac{1-\delta}{\rho} t-(\log t)\sqrt{t}, \frac{1-\delta}{\rho} t+(\log t)\sqrt{t}\Big] \Big)\to 0.$$ Thus, on the event $\left\{M_t^I\le \sqrt{2\alpha}\delta t\right\}$, with large probability, the first branching time of the skeleton happens in the interval $\left[\frac{1-\delta}{\rho} t-(\log t)\sqrt{t}, \frac{1-\delta}{\rho} t+(\log t)\sqrt{t}\right].$ \medskip \begin{lemma}\label{lem-case1} Let $\delta\in(1-\rho, 1)$ and $\mathcal{I}_{t}=[a_\delta t-(\log t)\sqrt{t}, a_\delta t+(\log t)\sqrt{t}]\cap [0,t]$. Then for any $f\in\mathcal{H}$, \begin{align*} &\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}} {\rm E} \int_{\mathcal{I}_t} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&\frac{a_\delta^{3(\rho-1)/2}}{\sqrt{2\alpha}\rho} \int^\infty_{-\infty}e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z. \end{align*} \end{lemma} {\bf Proof:} In this proof, we always assume that $t\ge1$ is large enough such that $a_\delta t/2\le a_\delta t-(\log t)\sqrt{t}\le a_\delta t+(\log t)\sqrt{t}\le (1+a_\delta )t/2$. 
Since $\psi'$ is increasing and $\psi''$ is decreasing, it follows that, for any $\lambda\ge0$ $$q=\psi'(\lambda^*)\le \psi'(\lambda^*+\lambda)\le q+\psi''(\lambda^*)\lambda.$$ Thus we have, for any $s\in \mathcal{I}_t$, \begin{align*} q(t-s)\le \int_s^{t}\zeta_f (r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr&\le q(t-s)+\psi''(\lambda^*)\int_s^t u^*_f(r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr\nonumber\\ &\le q(t-s)+\psi''(\lambda^*)t k(a_\delta t-(\log t)\sqrt{t}). \end{align*} Here the last inequality follows from Lemma \ref{lemma:u*}(1) and the fact that the function $k$ is decreasing. By Lemma \ref{lemma:u*}(1), $\sup_{t>1}e^{qt}k(t)<\infty$, which implies that $tk(a_\delta t-(\log t)\sqrt{t})\to0$ as $t\to\infty$. Thus as $t\to\infty$, \begin{align}\label{3.3.1} \mE \int_{\mathcal{I}_t} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s \sim \int_{\mathcal{I}_t}e^{-q(t-s)} \mE[\hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})]\,\d s. \end{align} By the change of variables $s=s(u):=a_\delta t+u\sqrt{t}$, we get that \begin{align}\label{1.5} &\int_{\mathcal{I}_t}e^{-q(t-s)} \mE[ \hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})]\,\d s\nonumber\\ =& \int_{\mathcal{I}_t} e^{-q(t-s)} \mE[\hat{G}_f(s,m_s+(\sqrt{2\alpha}\delta t-m_s-B_{t-s}))]\,\d s\nonumber\\ =& \int_{\mathcal{I}_t}e^{-q(t-s)} \d s\int_{\R}\frac{1}{\sqrt{2\pi}(t-s)}e^{-\frac{(z+m_s-\sqrt{2\alpha}\delta t)^2}{2(t-s)}}\hat{G}_f(s,m_s +z)\,\d z\nonumber\\ =& \sqrt{t}\int^{\log t}_{-\log t}\frac{e^{-q(1-a_\delta )t} e^{q\sqrt{t}u}}{\sqrt{2\pi(t-s(u))}}du \int^\infty_{-\infty}e^{-\frac{(m_{s(u)}+z-\sqrt{2\alpha}\delta t)^2}{2(t-s(u))}}\hat{G}_f(s(u),m_{s(u)}+z)\,\d z. 
\end{align} For $u\in(-\log t ,\log t )$, we have that \begin{align*} &(m_{s(u)}+z-\sqrt{2\alpha}\delta t)^2 =\left(\sqrt{2\alpha}(a_\delta-\delta) t+\sqrt{2\alpha}u\sqrt{t}-\frac{3}{2\sqrt{2\alpha}}\log (a_\delta t+u\sqrt{t})+z\right)^2 \\ &=2\alpha(a_\delta-\delta)^2 t^2+2\alpha u^2t+4\alpha(a_\delta-\delta)u t\sqrt{t}-3(a_\delta-\delta)t \log (a_\delta t)\\ &\quad+2\sqrt{2\alpha}(a_\delta-\delta) zt+R_1(t,u,z), \end{align*} where $R_1(t,u,z)=\left(-\frac{3}{2\sqrt{2\alpha}}\log (a_\delta t+u\sqrt{t})+z\right)^2-3u\sqrt{t}\log (a_\delta t+u\sqrt{t})+2\sqrt{2\alpha}u\sqrt{t}z- 3(a_\delta-\delta)t \log (1+u/(a_\delta \sqrt{t}))$. Using this one can check that for $|u|\le \log t $, \begin{align*} R_1(t,u,z)\ge -3(\log t) ^2\sqrt{t}-2\sqrt{2\alpha}(\log t)\sqrt{t}|z| -\frac{3(a_\delta-\delta)}{a_\delta} \sqrt{t}\log t. \end{align*} Using the Taylor expansion of $(1-x)^{-1}$, we obtain that \begin{align*} &\frac{1}{2(t-s(u))} =\frac{1}{2(1-a_\delta)t}\frac{1}{1-u/[(1-a_\delta)\sqrt{t}]}\\ &=\frac{1}{2(1-a_\delta)t}\left(1+\frac{u}{(1-a_\delta)\sqrt{t}}+\frac{u^2}{(1-a_\delta)^2 t}+R_2(t,u)\right), \end{align*} where $$ |R_2(t,u)|=\left|\sum_{n=3}^\infty\left[\frac{u}{(1-a_\delta)\sqrt{t}}\right]^n \right|\le \sum_{n=3}^\infty\left[\frac{\log t}{(1-a_\delta)\sqrt{t}}\right]^n \le \frac{2}{(1-a_\delta)^3}(\log t )^3t^{-3/2}, $$ here we used the fact that $\log t/[(1-a_\delta)\sqrt{t}]\le 1/2,$ and for $0\le x\le 1/2$, $\sum_{n=3}^\infty x^n=\frac{x^3}{1-x}\le 2x^3$. 
Using the above estimates, we get that for $u\in(-\log t ,\log t )$, \begin{align}\label{3.3.2} &\frac{(m_{s(u)}+z-\sqrt{2\alpha}\delta t)^2}{2(t-s(u))}\nonumber\\ =&\frac{\alpha(a_\delta-\delta)^2}{1-a_\delta}\left(t+\frac{u}{(1-a_\delta)}\sqrt{t}+\frac{u^2}{(1-a_\delta)^2 }\right)-\frac{3(a_\delta-\delta)}{2(1-a_\delta)} \log t\nonumber\\ &+\frac{4\alpha(a_\delta-\delta)u }{2(1-a_\delta)}\left(\sqrt{t}+\frac{u}{(1-a_\delta)}\right)+\frac{2\alpha u^2-3(a_\delta-\delta) \log (a_\delta)+2\sqrt{2\alpha}(a_\delta-\delta) z}{2(1-a_\delta)}\nonumber\\ &+R_3(t,u,z)\nonumber\\ =&\frac{\alpha(\rho-1)^2(1-\delta)}{\rho}t+qu\sqrt{t}-\frac{3(\rho-1)}{2} \log t+\frac{\alpha\rho^3}{1-\delta}u^2+\sqrt{2\alpha}(\rho-1)z\nonumber\\ &-\frac{3}{2}(\rho-1)\log(a_\delta)+R_3(t,u,z), \end{align} where $\lim_{t\to\infty}R_3(t,u,z)=0$ and there exists a positive function $r(\cdot)$ with $\lim_{t\to\infty}r(t)=0$ such that for any $u\in(-\log t ,\log t )$, \begin{align}\label{1.6} -R_3(t,u,z)\le r(t)(1+|z|). \end{align} For any $\epsilon>0$, choose $t_\epsilon$ such that $r(t)\le \epsilon$ for any $t>t_\epsilon$. Noticing that $q(1-a_\delta)+\frac{\alpha(\rho-1)^2(1-\delta)}{\rho}=2\alpha(\rho-1)(1-\delta),$ by \eqref{3.3.1}, \eqref{1.5} and \eqref{3.3.2}, we get that \begin{align}\label{1.7} &\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}} \mE \int_{{\cal I}_t} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\nonumber\\ =&a_\delta^{3(\rho-1)/2}\lim_{t\to\infty}\int^{\log t }_{-\log t }\frac{\sqrt{t}}{\sqrt{2\pi(t-s(u))}}e^{-\frac{\alpha\rho^3}{1-\delta}u^2}du \int^\infty_{-\infty}e^{-\sqrt{2\alpha}(\rho-1)z}e^{-R_3(t,u,z)}\hat{G}_f(s(u),m_{s(u)}+z)\d z.
\end{align} It follows from \eqref{M-w} that $$\lim_{t\to\infty}\E\left( e^{-\int_\R f(y-m_t-z)X_t(dy)}; M_t\le m_t+z|\mathcal{S}\right)=\frac{e^{-w_f(z)}-e^{-\lambda^*}}{1-e^{-\lambda^*}}.$$ Thus by \eqref{e:1}, we get that $$\lim_{t\to\infty} v_f(t,m_t+z)=1-\frac{w_f(z)}{\lambda^*}=:\tilde{w}_f(z), $$ where we used the fact that $0\le 1-e^{-u^*(t,x)}\le \P^*(X_t\neq 0)\to 0.$ It follows that \begin{align}\label{2.4} &\lim_{t\to\infty}\hat{G}_f(a_\delta t+u\sqrt{t},m_{a_\delta t+u\sqrt{t}}+z)=\lim_{t\to\infty}\hat{G}_f(t,m_t+z)\nonumber\\ &= \frac{1}{\lambda^*}\psi(\lambda^*(1-\tilde{w}_f(z)))+q\tilde{w}_f(z)=A(w_f(z)). \end{align} Thus, as $t\to\infty$, the limit of the integrand in \eqref{1.7} is $$ \frac{a_\delta^{3(\rho-1)/2}}{\sqrt{2\pi(1-a_\delta)}}e^{-\frac{\alpha\rho^3}{1-\delta}u^2-\sqrt{2\alpha}(\rho-1)z}A(w_f(z)). $$ By \eqref{1.6}, \eqref{est-hatG} and Lemma \ref{lem_upper-w}, we have that, for $\eta$ small enough, there exist $T_\eta>1$ and $c_\eta>0$ such that for $t>T_\eta+t_\epsilon$, the integrand in \eqref{1.7} is smaller than $$ q\frac{a_\delta^{3(\rho-1)/2}}{\sqrt{\pi(1-a_\delta)}} e^{-\frac{\alpha\rho^3}{1-\delta}u^2-\sqrt{2\alpha}(\rho-1)z}e^{\epsilon(1+|z|)}\times \left\{ \begin{array}{ll} c_\eta^2 e^{2[\sqrt{2\alpha}(\rho-1)-\eta]z}, & \hbox{$z<0$;} \\ 1, & \hbox{$z>0$,} \end{array} \right. $$ which is integrable over $\R\times \R$ if we choose $\epsilon<\sqrt{2\alpha}(\rho-1)$ and $-2\eta+\sqrt{2\alpha}(\rho-1)-\epsilon>0$.
Thus using the dominated convergence theorem in \eqref{1.7}, we have that \begin{align*} &\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}} \mE \int_{{\cal I}_t} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&\frac{a_\delta^{3(\rho-1)/2}}{\sqrt{2\pi(1-a_\delta)}} \int^{\infty}_{-\infty}e^{-\frac{\alpha\rho^3}{1-\delta}u^2}du\int^\infty_{-\infty}e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z\\ =&\frac{a_\delta^{3(\rho-1)/2}}{\sqrt{2\alpha}\rho} \int^\infty_{-\infty}e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z. \end{align*} \hfill$\Box$ \begin{lemma}\label{lem-case1-2} For $\delta\in(1-\rho, 1)$, it holds that for any $f\in\mathcal{B}^+(\R)$, $$\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}{\rm E} \int_{[0,t]\setminus \mathcal{I}_{t}} e^{-\int_s^{t}\zeta_f(r,\sqrt{2\alpha}\delta t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s =0.$$ \end{lemma} {\bf Proof:} Since $\zeta_f(t,x)\ge q$, using \eqref{est-hatG} and the fact that $v_f(t,x)\le v(t,x)$, we only need to show that \begin{equation}\label{limit-0} \lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}\mE \int_{[0,t]\setminus {\cal I}_{t}} e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s =0. \end{equation} Note that \begin{align*} [0,t]\setminus {\cal I}_t\subset &[0,\epsilon t]\cup \left([(a_\delta-\epsilon)t, a_\delta t-(\log t)\sqrt{t}]\cup [a_\delta t+(\log t)\sqrt{t}, (a_\delta+\epsilon)t]\right)\\ &\cup\left([\epsilon t,(a_\delta-\epsilon)t]\cup[(a_\delta+\epsilon)t,t]\right). \end{align*} The proof of \eqref{limit-0} is accomplished in the following three lemmas by handling the integral over $[0,\epsilon t]$, $[(a_\delta-\epsilon)t, a_\delta t-(\log t)\sqrt{t}]\cup [a_\delta t+(\log t)\sqrt{t}, (a_\delta+\epsilon)t]$ and $[\epsilon t,(a_\delta-\epsilon)t]\cup[(a_\delta+\epsilon)t,t]$ separately. \hfill$\Box$ \begin{lemma} Let $\delta\in(1-\rho, 1)$.
For $\epsilon>0$ small enough, $$ \lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}{\rm E}\int_{0}^{\epsilon t} e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s=0. $$ \end{lemma} {\bf Proof:} By \eqref{domi-B'}, we have that $$v(s,\sqrt{2\alpha}\delta t-B_{t-s})\le \mP_{B_{t-s}}(B_s\le \sqrt{2\alpha}\delta t)=\mP[B_t\le \sqrt{2\alpha}\delta t|\sigma(B_r: r\le t-s)]. $$ Thus it follows that \begin{align}\label{3.5.1} \mE\left(v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\right)&\le \mE\left(v(s,\sqrt{2\alpha}\delta t-B_{t-s})\right)\le \mP(B_t\le \sqrt{2\alpha}\delta t). \end{align} Hence, for any $\epsilon>0$, \begin{align}\label{2.11} &\mE\int_{0}^{\epsilon t} e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s \le q^{-1}e^{q\epsilon t}e^{-qt}\mP(B_t\le \sqrt{2\alpha}\delta t)\nonumber\\ &\le q^{-1}e^{q\epsilon t}\times \left\{\begin{array}{ll} e^{-qt}, & \hbox{$\delta\ge 0$;} \\ \frac{1}{2\sqrt{\pi\alpha}|\delta|} t^{-1/2}e^{-(q+\alpha\delta^2)t}, & \hbox{$\delta<0$,} \end{array} \right. \end{align} where in the last inequality we used \eqref{est:bm}. Using \eqref{2.7} and \eqref{2.8}, we can choose $\epsilon$ small enough so that $$ 2\alpha(\rho-1)(1-\delta)+q\epsilon < \left\{\begin{array}{ll} q, & \hbox{$\delta\ge 0$;} \\ q+\alpha\delta^2, & \hbox{$\delta\in(1-\rho,0)$,} \end{array} \right. $$ which implies the desired result. \hfill$\Box$ \begin{lemma}\label{lem:case1-3} Let $\delta\in(1-\rho, 1)$. For $\epsilon>0$ small enough, \begin{align*} \lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}{\rm E}\left(\int_{(a_\delta-\epsilon)t}^{a_\delta t- (\log t)\sqrt{t}}+\int_{a_\delta t+(\log t)\sqrt{t}}^{(a_\delta+\epsilon)t}\right) e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s=0. \end{align*} \end{lemma} {\bf Proof:} Put $S_t:=(a_\delta-\epsilon,a_\delta-(\log t)/\sqrt{t})\cup(a_\delta+(\log t)/\sqrt{t},a_\delta+\epsilon)$. Recall the definition of $m_t$ given by \eqref{def-m_t}.
By the change of variables $s=rt$, applying Lemma \ref{lem_upper-w} for $z>0$ and the fact $v\le 1$ for $z\le 0$, we get that, for $\eta$ small enough, there exists $c_\eta\ge 1$ such that for $t$ large enough, \begin{align*} &\mE\left(\int_{(a_\delta-\epsilon)t}^{a_\delta t-(\log t)\sqrt{t}}+\int_{a_\delta t+(\log t)\sqrt{t}}^{(a_\delta+\epsilon)t}\right) e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&\mE\left(\int_{(a_\delta-\epsilon)t}^{a_\delta t-(\log t)\sqrt{t}}+\int_{a_\delta t+(\log t)\sqrt{t}}^{(a_\delta+\epsilon)t}\right) e^{-q(t-s)}v^2\left(s, m(s)-(m(s)-\sqrt{2\alpha}\delta t+B_{t-s})\right)\,\d s\\ \le& c_\eta^2 t\int_{S_t} e^{-q(1-r)t} \mE\left[ e^{-2(\sqrt{2\alpha}(\rho-1)-\eta)(m(rt)-\sqrt{2\alpha}\delta t+B_{(1-r)t})}\wedge1\right]\,\d r. \end{align*} We claim that for any $b_1>b_2>0$, \begin{equation}\label{domi-e-B1} \mE\left(e^{-b_1(b_2+B_1)}\wedge1\right)\le \frac{1}{\sqrt{2\pi}}\left(\frac{1}{b_1-b_2}+\frac{1}{b_2}\right)e^{-b_2^2/2}. \end{equation} Indeed, the left-hand side of \eqref{domi-e-B1} can be written as \begin{align*} \mE\left(e^{-b_1(b_2+B_1)};B_1+b_2>0\right)+\mP(B_1+b_2\le 0). \end{align*} By \eqref{est:bm}, we have that \begin{equation*} \mP(B_1+b_2\le 0)=\mP(B_1>b_2)\le \frac{1}{\sqrt{2\pi}}\frac{1}{b_2}e^{-b_2^2/2}. \end{equation*} By the Girsanov theorem, we have \begin{align*} &\mE\left(e^{-b_1(b_2+B_1)};B_1+b_2>0\right)=e^{-b_1b_2} e^{b_1^2/2}\mP(B_1-b_1+b_2>0)\\ \le& \frac{1}{\sqrt{2\pi}}\frac{1}{b_1-b_2}e^{-b_1b_2} e^{b_1^2/2}e^{-(b_1-b_2)^2/2} = \frac{1}{\sqrt{2\pi}}\frac{1}{b_1-b_2}e^{-b_2^2/2}. \end{align*} Now \eqref{domi-e-B1} follows immediately.
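As a sanity check, \eqref{domi-e-B1} can also be tested numerically: for $b_1>b_2>0$ the same Girsanov computation yields the closed form $\mE\left(e^{-b_1(b_2+B_1)}\wedge 1\right)=e^{-b_1b_2+b_1^2/2}\,\mP(B_1>b_1-b_2)+\mP(B_1>b_2)$, which can be compared with the right-hand side of \eqref{domi-e-B1}. A minimal sketch (the helper names are ours):

```python
import math

def normal_tail(x):
    # P(B_1 > x) for a standard Gaussian B_1, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def truncated_moment(b1, b2):
    # Closed form of E[e^{-b1(b2+B_1)} /\ 1], splitting at the event {B_1 + b2 > 0}:
    # E[e^{-b1(b2+B_1)}; B_1 > -b2] + P(B_1 <= -b2)
    return math.exp(-b1 * b2 + 0.5 * b1 ** 2) * normal_tail(b1 - b2) + normal_tail(b2)

def gaussian_bound(b1, b2):
    # Right-hand side of (domi-e-B1)
    return (1.0 / math.sqrt(2.0 * math.pi)) \
        * (1.0 / (b1 - b2) + 1.0 / b2) * math.exp(-b2 ** 2 / 2.0)
```

For moderately large $b_1>b_2$ the two sides differ by only a few percent, reflecting the sharpness of the Gaussian tail estimate \eqref{est:bm}.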
We will use \eqref{domi-e-B1} with $b_1=2(\sqrt{2\alpha}(\rho-1)-\eta)\sqrt{(1-r)t}$ and $b_2=\frac{m(rt)-\sqrt{2\alpha}\delta t}{\sqrt{(1-r)t}}.$ For $\epsilon\in \left(0, \frac{a_\delta-\delta}{2\rho-1}\wedge(1-a_\delta)\right)$, we have for any $r\in S_t\subset(a_\delta-\epsilon,a_\delta+\epsilon)$, $$ \frac{\sqrt{2\alpha}(a_\delta+\epsilon-\delta) }{\sqrt{(1-a_\delta-\epsilon)}}\sqrt{t}\ge b_2 \ge \frac{\sqrt{2\alpha}(a_\delta-\epsilon-\delta) }{\sqrt{(1-a_\delta+\epsilon)}}\sqrt{t} -\frac{3}{2\sqrt{2\alpha}\sqrt{1-a_\delta-\epsilon}}\frac{\log t}{\sqrt{t}}, $$ and \begin{align*} b_1-b_2&\ge 2(\sqrt{2\alpha}(\rho-1)-\eta)\sqrt{(1-a_\delta-\epsilon)t} -\frac{\sqrt{2\alpha}(a_\delta+\epsilon-\delta) }{\sqrt{(1-a_\delta-\epsilon)}}\sqrt{t}\\ &=\frac{\sqrt{2\alpha}}{\sqrt{1-a_\delta-\epsilon}}\left[2(\rho-1)(1-a_\delta)-(a_\delta-\delta)-(2\rho-1)\epsilon-\frac{2\eta}{\sqrt{2\alpha}}(1-a_\delta-\epsilon)\right]\sqrt{t}\\ &\ge \frac{\sqrt{2\alpha}}{\sqrt{1-a_\delta-\epsilon}} \left[a_\delta-\delta-(2\rho-1)\epsilon-\frac{2\eta}{\sqrt{2\alpha}}\right]\sqrt{t}, \end{align*} where in the final inequality, we used $(\rho-1)(1-a_\delta)=(a_\delta-\delta)$. So if we choose $\eta\in \left(0, \sqrt{2\alpha}[a_\delta-\delta-(2\rho-1)\epsilon]/2\right)$, then for $t$ large enough, $b_1>b_2>0$. Thus, using \eqref{domi-e-B1}, we have that, for $t$ large enough and $r\in S_t$, \begin{align}\label{2.12} &\mE\left[ e^{-2(\sqrt{2\alpha}(\rho-1)-\eta)(m(rt)-\sqrt{2\alpha}\delta t+B_{(1-r)t})}\wedge1\right] \le Ct^{-1/2} e^{-\frac{(m(rt)-\sqrt{2\alpha}\delta t)^2}{2(1-r)t}}\nonumber\\ \le & C t^{-1/2}t^{\frac{3(1-\delta)}{2(1-a_\delta-\epsilon)}} e^{-\frac{\alpha(r-\delta)^2}{(1-r)}t}.
\end{align} Here in the last inequality we used the following facts: $r\le a_\delta+\epsilon<1$ and $$e^{-\frac{(m(rt)-\sqrt{2\alpha}\delta t)^2}{2(1-r)t}}\leq (rt)^{\frac{3(r-\delta)}{2(1-r)}}e^{-\frac{\alpha(r-\delta)^2}{(1-r)}t}\le t^{\frac{3(1-\delta)}{2(1-a_\delta-\epsilon)}}e^{-\frac{\alpha(r-\delta)^2}{(1-r)}t}.$$ By Lemma \ref{fact7}, we have that, for $r\in S_t$, $$ q(1-r)+\frac{\alpha(r-\delta)^2}{(1-r)}\ge 2\alpha(\rho-1)(1-\delta)+\alpha \rho^2(a_\delta-r)^2\ge 2\alpha(\rho-1)(1-\delta)+\alpha \rho^2\frac{(\log t )^2}{t}. $$ Thus, there exists $\theta>0$ such that \begin{align*} &\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}\mE\left(\int_{(a_\delta-\epsilon)t}^{a_\delta t-(\log t)\sqrt{t}}+\int_{a_\delta t+(\log t)\sqrt{t}}^{(a_\delta+\epsilon)t}\right) e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ \le &C t^\theta e^{-\alpha\rho^2(\log t)^2}\to 0, \quad \mbox{ as } t\to\infty. \end{align*} \hfill$\Box$ \begin{lemma}\label{lem:case1-4} Let $\delta\in(1-\rho, 1)$. For $\epsilon>0$ small enough, \begin{align*} \limsup_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}} {\rm E}\left(\int_{\epsilon t}^{(a_\delta-\epsilon)t}+\int_{(a_\delta+\epsilon)t}^{t}\right) e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s=0. \end{align*} \end{lemma} {\bf Proof:} Set $\mathcal{I}=(\epsilon,a_\delta-\epsilon)\cup (a_\delta+\epsilon,1)$.
By the change of variables $r=s/t$, we get that \begin{align*} &\mE\left(\int_{\epsilon t}^{(a_\delta-\epsilon)t}+\int_{(a_\delta+\epsilon)t}^{t}\right) e^{-q(t-s)}v^2(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&t\mE\int_{\mathcal{I}} e^{-q(1-r)t}v^2(rt,\sqrt{2\alpha}\delta t-B_{t-rt})\,\d r\\ =&t\mE\int_{\mathcal{I}} e^{-q(1-r)t}\, \d r \int_{\R}\frac{1}{\sqrt{2\pi(1-r)t}}e^{-\frac{(z-\sqrt{2\alpha}\delta t)^2}{2(1-r)t}} v^2(rt,z)\,\d z\\ =&\sqrt{2\alpha}t^2\int_{\mathcal{I}} \,\d r\int_{\R}\frac{r}{\sqrt{2\pi(1-r)t}} e^{-q(1-r)t}e^{-\frac{(\sqrt{2\alpha}\theta rt-\sqrt{rt}-\sqrt{2\alpha}\delta t )^2}{2(1-r)t}}v^2(rt,\sqrt{2\alpha}\theta rt-\sqrt{rt})\,\d \theta\\ =&\frac{\sqrt{\alpha}}{\sqrt{\pi}}t^{3/2} \int_{\mathcal{I}}\frac{r\,\d r}{\sqrt{1-r}}\left(\int_{-\infty}^{1-\rho}+\int_{1-\rho}^1+\int_{1}^\infty\right) e^{-\frac{\alpha\left(\theta r-\frac{\sqrt{r}}{\sqrt{2\alpha t}}-\delta \right)^2t}{(1-r)}-q(1-r)t}v^2(rt,\sqrt{2\alpha}\theta rt-\sqrt{rt})\,\d\theta\\ =:&\frac{\sqrt{\alpha}}{\sqrt{\pi}}(I_1(t)+I_2(t)+I_3(t)). \end{align*} For $I_1(t)$, by Lemma \ref{lemma:key}(2) with $t$ replaced by $rt$, we have that for $\epsilon t>t_0$ and $\theta<1-\rho$, $$v(rt,\sqrt{2\alpha}\theta rt-\sqrt{rt})\le c rt e^{-\alpha \theta^2 rt}e^{-qrt}.$$ Then by the change of variables $\theta\to -\theta$ in $I_1(t)$, we get that for $t>t_0/\epsilon$, \begin{align*} I_1(t)&\le c^2 t^{7/2}\int_{\mathcal{I}} \frac{r^3\,\d r}{\sqrt{1-r}}\int^{\infty}_{\rho-1} \exp\Big\{-\Big[q(1+r)+\frac{\alpha\left(\theta r+\frac{\sqrt{r}}{\sqrt{2\alpha t}}+\delta \right)^2}{(1-r)}+2\alpha \theta^2 r\Big]t\Big\}\,\d\theta\\ &\le c^2 t^{7/2} e^{-q(1+\epsilon)t}e^{-\alpha\delta^2t}\int_{\mathcal{I}}\frac{r^3\,\d r}{\sqrt{1-r}} \int^{\infty}_{-\infty} e^{-2\alpha \theta^2 rt}\,\d\theta\\ &= c^2 t^{7/2} e^{-q(1+\epsilon)t}e^{-\alpha\delta^2t}\int_{\mathcal{I}}\sqrt{\frac{\pi}{2\alpha rt}}\frac{r^3}{\sqrt{1-r}}\,\d r \le Ct^3 e^{-q\epsilon t} e^{-(q+\alpha\delta^2)t}.
\end{align*} Since $q+\alpha \delta^2>2\alpha(\rho-1)(1-\delta)$, it holds that $$\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}I_1(t)=0.$$ For $I_2(t)$, by Lemma \ref{lemma:key}(2) and the change of variables $\theta-\frac{1}{\sqrt{2\alpha r t}}\to \theta$, we get that for $\epsilon t>t_0$, $I_2(t)$ is less than or equal to \begin{align*} &c^2t^{7/2} \int_{\mathcal{I}} \frac{r^3}{\sqrt{1-r}}\,\d r\int_{1-\rho}^{1} \exp\Big\{-\Big[q(1-r)+\frac{\alpha\left(\theta r-\frac{\sqrt{r}}{\sqrt{2\alpha t}}-\delta \right)^2}{(1-r)}+4\alpha (\rho-1)(1-\theta)r\Big]t\Big\}\,\d\theta\\ = &c^2t^{7/2}\int_{\mathcal{I}} \frac{r^3}{\sqrt{1-r}}\,\d r\int_{1-\rho-\frac{1}{\sqrt{2\alpha r t}}}^{1-\frac{1}{\sqrt{2\alpha r t}}} e^{-\left[q(1-r)+\frac{\alpha(\theta r-\delta )^2}{(1-r)}+4\alpha (\rho-1)(1-\theta)r\right]t}e^{2\sqrt{2\alpha}(\rho-1)\sqrt{rt}}\,\d\theta\\ \le& Ct^{7/2}e^{2\sqrt{2\alpha}(\rho-1)\sqrt{t}} e^{-\inf_{r\in\mathcal{I},\theta<1}H(\theta,r) t}, \end{align*} where $H(\theta,r):=q(1-r)+\frac{\alpha(\theta r-\delta )^2}{(1-r)}+4\alpha (\rho-1)(1-\theta)r$. We claim that \begin{align}\label{2.10} &\inf_{r\in\mathcal{I},\theta<1}H(\theta,r) > 2\alpha(\rho-1)(1-\delta). \end{align} Then it follows that $$\lim_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}I_2(t)=0.$$ Now we prove \eqref{2.10}. Note that \begin{align*} H(\theta,r)=&\frac{\alpha r^2}{1-r}\left(\theta-\frac{\delta+2(\rho-1)(1-r)}{r}\right)^2-\alpha (\rho-1)(3\rho-1)(1-r)+4\alpha(\rho-1)(1-\delta). 
\end{align*} For $r^*:=\frac{\delta+2(\rho-1)}{2\rho-1}\le r<1$ (that is $\frac{\delta+2(\rho-1)(1-r)}{r}\le 1$) and $\theta<1$, $$H(\theta,r)\ge -\alpha (\rho-1)(3\rho-1)(1-r^*)+4\alpha(\rho-1)(1-\delta) =2\alpha(\rho-1)(1-\delta)+\frac{\alpha(\rho-1)^2(1-\delta)}{2\rho-1} .$$ For $r\in [0,r^*]\cap \mathcal{I}$ and $\theta<1$, since $\frac{\delta+2(\rho-1)(1-r)}{r}\ge 1$, we have that \begin{align*} H(\theta,r)&\ge H(1,r)=q(1-r)+\frac{\alpha(r-\delta )^2}{(1-r)}\\ &\ge 2\alpha(\rho-1)(1-\delta)+\alpha\rho^2(a_\delta-r)^2\\ &\ge 2\alpha(\rho-1)(1-\delta)+\alpha\rho^2\epsilon^2, \end{align*} where in the second inequality we used Lemma \ref{fact7}. Thus \eqref{2.10} is valid. Finally, we deal with $I_3(t)$. Since $v(t,x)\le 1$, we have \begin{align}\label{3.7.1} I_3(t)&\le t^{3/2} \int_{\mathcal{I}}\frac{r\,\d r}{\sqrt{1-r}}\int_1^\infty e^{-\frac{\alpha\left(\theta r-\frac{\sqrt{r}}{\sqrt{2\alpha t}}-\delta \right)^2t}{(1-r)}-q(1-r)t}\,\d\theta\nonumber\\ &= \frac{1}{\sqrt{2\alpha}}t\int_{\mathcal{I}}\,dr\int_{\frac{\sqrt{2\alpha t}(r-\delta)-\sqrt{r}}{\sqrt{1-r}}}^\infty e^{-q(1-r)t} e^{-z^2/2}\,\d z \nonumber\\ &\le \frac{\sqrt{\pi}}{\sqrt{\alpha}}t\int_{\mathcal{I}} e^{-q(1-r)t} \mP\left(B_1\ge \frac{\sqrt{2\alpha t}(r-\delta)-1}{\sqrt{1-r}}\right)\,\d r. \end{align} If $r\le \delta+\frac{2}{\sqrt{2\alpha t}}$, then \begin{align}\label{2.9} &e^{-q(1-r)t} \mP\left(B_1\ge \frac{\sqrt{2\alpha t}(r-\delta)-1}{\sqrt{1-r}}\right) \le e^{-q(1-r)t}\le e^{-q(1-\delta)t}e^{\frac{2q}{\sqrt{2\alpha }}\sqrt{t}}\nonumber\\ = &e^{-2\alpha(\rho-1)(1-\delta)t}e^{-\alpha(\rho-1)^2(1-\delta)t}e^{\frac{2q}{\sqrt{2\alpha }}\sqrt{t}}. 
\end{align} If $\delta+\frac{2}{\sqrt{2\alpha t}}< r<1$, then $\frac{\sqrt{2\alpha t}(r-\delta)-1}{\sqrt{1-r}}>1$, and thus by \eqref{est:bm}, \begin{align}\label{2.9-b} e^{-q(1-r)t} \mP\left(B_1\ge \frac{\sqrt{2\alpha t}(r-\delta)-1}{\sqrt{1-r}}\right) &\le \frac{1}{\sqrt{2\pi}}\frac{\sqrt{1-r}}{\sqrt{2\alpha t}(r-\delta)-1}e^{-q(1-r)t}e^{-\frac{(\sqrt{2\alpha t}(r-\delta)-1)^2}{2(1-r)}}\nonumber\\ &\le e^{-q(1-r)t}e^{-\frac{\alpha\left(r-\delta-\frac{1}{\sqrt{2\alpha t}} \right)^2t}{(1-r)}}. \end{align} It follows from Lemma \ref{fact7} that for $r\in\mathcal{I}$, \begin{align*} &q(1-r)+\frac{\alpha(r-\delta -\frac{1}{\sqrt{2\alpha t}})^2}{(1-r)}\nonumber\\ \ge&2\alpha(\rho-1)\left(1-\delta-\frac{1}{\sqrt{2\alpha t}}\right)+\alpha\rho^2\left(a_\delta-r+\frac{1}{\rho\sqrt{2\alpha t}}\right)^2\nonumber\\ \ge&2 \alpha(\rho-1)(1-\delta)+\alpha\rho^2\left(\epsilon-\frac{1}{\sqrt{2\alpha t}\rho}\right)^2-\sqrt{2\alpha}(\rho-1)t^{-1/2}. \end{align*} Then we continue the estimates in \eqref{2.9-b} to get that, if $\delta+\frac{2}{\sqrt{2\alpha t}}< r<1$, then \begin{equation}\label{2.9-b'} e^{-q(1-r)t} \mP\left(B_1\ge \frac{\sqrt{2\alpha t}(r-\delta)-1}{\sqrt{1-r}}\right)\le e^{-2\alpha(\rho-1)(1-\delta)t}e^{-\alpha\rho^2\left(\epsilon-\frac{1}{\sqrt{2\alpha t}\rho}\right)^2t+\sqrt{2\alpha}(\rho-1)\sqrt{t}}. \end{equation} Combining \eqref{3.7.1}, \eqref{2.9} and \eqref{2.9-b'}, we get $$ \limsup_{t\to\infty}\frac{e^{2\alpha(\rho-1)(1-\delta)t}}{t^{3(\rho-1)/2}}I_3(t)=0. $$ The proof is now complete. \hfill$\Box$ \subsection{Proof of Theorem \ref{them:case2}: $\delta=1-\rho$} It follows from Lemma \ref{fact4} that, to prove Theorem \ref{them:case2}, we only need to consider the limiting property of $v_f(t,\sqrt{2\alpha}\delta t)$. 
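Before proceeding, we record a numerical sanity check of the two elementary quadratic facts used repeatedly in the previous subsection: the lower bound of Lemma \ref{fact7} and the completed-square form of $H(\theta,r)$ in the proof of \eqref{2.10}. The sketch below assumes the relations $q=\alpha(\rho^2-1)$ and $a_\delta=(\delta+\rho-1)/\rho$ (equivalently, $(\rho-1)(1-a_\delta)=a_\delta-\delta$); these are consistent with the identities used above, but the reader should take them as our reading of the notation:

```python
# Assumed conventions for this sketch only:
#   q = alpha * (rho^2 - 1),  a_delta = (delta + rho - 1) / rho.

def fact7_lhs(r, delta, alpha, rho):
    q = alpha * (rho ** 2 - 1.0)
    return q * (1.0 - r) + alpha * (r - delta) ** 2 / (1.0 - r)

def fact7_exact(r, delta, alpha, rho):
    # Exact rewriting of the left-hand side around r = a_delta
    a_d = (delta + rho - 1.0) / rho
    return (2.0 * alpha * (rho - 1.0) * (1.0 - delta)
            + alpha * rho ** 2 * (a_d - r) ** 2 / (1.0 - r))

def fact7_bound(r, delta, alpha, rho):
    # Lower bound as in Lemma fact7 (drop the factor 1/(1-r) >= 1)
    a_d = (delta + rho - 1.0) / rho
    return (2.0 * alpha * (rho - 1.0) * (1.0 - delta)
            + alpha * rho ** 2 * (a_d - r) ** 2)

def H_direct(theta, r, delta, alpha, rho):
    q = alpha * (rho ** 2 - 1.0)
    return (q * (1.0 - r) + alpha * (theta * r - delta) ** 2 / (1.0 - r)
            + 4.0 * alpha * (rho - 1.0) * (1.0 - theta) * r)

def H_completed(theta, r, delta, alpha, rho):
    # Completed-square form used in the proof of (2.10)
    return (alpha * r ** 2 / (1.0 - r)
            * (theta - (delta + 2.0 * (rho - 1.0) * (1.0 - r)) / r) ** 2
            - alpha * (rho - 1.0) * (3.0 * rho - 1.0) * (1.0 - r)
            + 4.0 * alpha * (rho - 1.0) * (1.0 - delta))
```

With these conventions, Lemma \ref{fact7} is in fact an identity up to the factor $1/(1-r)\ge 1$ multiplying the quadratic term, which is exactly how it is applied above.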
It follows from Lemma \ref{lemma1} that for $\delta=1-\rho<0$, $$\lim_{t\to\infty}t^{-3(\rho-1)/4}e^{(q+\alpha(1-\rho)^2)t}U_{1,f}(t,\sqrt{2\alpha}(1-\rho) t)=0.$$ Thus, by the decomposition \eqref{decom-v}, to prove the desired result, it suffices to show that \begin{align*} &\lim_{t\to\infty}t^{-3(\rho-1)/4}e^{(q+\alpha(\rho-1)^2)t}U_{2,f}(t, \sqrt{2\alpha}(1-\rho) t) \\ =&\frac{1}{\sqrt{2\pi}}\int^{\infty}_{0}s^{3(\rho-1)/2}e^{-\alpha\rho^2 s^2}\d s\int^\infty_{-\infty} e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z. \end{align*} The display above follows from Lemmas \ref{lem-case2} and \ref{lem-case2-2} below. In Lemma \ref{lem-case2-2}, we will show that $$ t^{-3(\rho-1)/4}e^{(q+\alpha(1-\rho)^2) t}\bP\Big(M_t^I\le \sqrt{2\alpha}(1-\rho) t, \tau\notin \Big[t-(\log t)\sqrt{t},t-t^{1/4}\Big] \Big)\to 0. $$ Thus, on the event $\left\{M_t^I\le \sqrt{2\alpha}(1-\rho) t\right\}$, with large probability, the first branching time of the skeleton should happen in the interval $\Big[t-(\log t)\sqrt{t},t-t^{1/4}\Big].$ \medskip \begin{lemma}\label{lem-case2} It holds that for any $f\in\mathcal{H}$, \begin{align*} &\lim_{t\to\infty}t^{-3(\rho-1)/4}e^{(q+\alpha(1-\rho)^2) t} {\rm E} \int_{t^{1/4}}^{(\log t)\sqrt{t}} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}(1-\rho) t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s\\ =&\frac{1}{\sqrt{2\pi}}\int^{\infty}_{0}s^{3(\rho-1)/2}e^{-\alpha\rho^2 s^2}\d s\int^\infty_{-\infty} e^{-\sqrt{2\alpha}(\rho-1)z}A(w_f(z))\d z. \end{align*} \end{lemma} {\bf Proof:} In this proof, we always assume that $t\ge1$ is large enough such that $\log t\le \sqrt{t}$.
Using an argument similar to that in the first paragraph of the proof of Lemma \ref{lem-case1}, we get that, as $t\to\infty$, \begin{align}\label{3.8.1} &\mE \int_{t^{1/4}}^{(\log t)\sqrt{t}} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}(1-\rho) t-B_{t-r})\,dr}\hat{G}_f\left(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s}\right)\,\d s\nonumber\\ \sim&\mE \int_{t^{1/4}}^{(\log t)\sqrt{t}} e^{-q(t-s)}\hat{G}_f\left(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s}\right)\,\d s\nonumber\\ =&\sqrt{t}\int^{\log t}_{t^{-1/4}}\frac{e^{-q(t-u\sqrt{t})}}{\sqrt{2\pi(t-u\sqrt{t})}}\d u\int_{\R}e^{-\frac{\left(m_{u\sqrt{t}}+z+\sqrt{2\alpha}(\rho-1) t\right)^2}{2(t-u\sqrt{t})}}\hat{G}_f\left(u\sqrt{t},m_{u\sqrt{t}}+z\right)\d z. \end{align} For $u\in(t^{-1/4},\log t )$, we have that \begin{align*} (m_{u\sqrt{t}}+z+\sqrt{2\alpha}(\rho-1) t)^2 =&\left(\sqrt{2\alpha}(\rho-1) t+\sqrt{2\alpha}u\sqrt{t}-\frac{3}{2\sqrt{2\alpha}}\log (u\sqrt{t})+z\right)^2 \\ =&2\alpha(\rho-1)^2 t^2+2\alpha u^2t+4\alpha(\rho-1)u t\sqrt{t}-3(\rho-1)t \log (u\sqrt{t})\\ &+2\sqrt{2\alpha}(\rho-1) zt+R_4(t,u,z), \end{align*} where \begin{align*} R_4(t,u,z)&=\left(-\frac{3}{2\sqrt{2\alpha}}\log (u\sqrt{t})+z\right)^2-3u\sqrt{t}\log (u\sqrt{t})+2\sqrt{2\alpha}u\sqrt{t}z\\ &\ge -3(\log t) ^2\sqrt{t}-2\sqrt{2\alpha}(\log t)\sqrt{t}|z|. \end{align*} Using the Taylor expansion of $(1-x)^{-1}$, we obtain that, for $u\in(t^{-1/4},\log t )$, \begin{align*} \frac{1}{2(t-u\sqrt{t})} & =\frac{1}{2t}\frac{1}{1-u/\sqrt{t}}=\frac{1}{2t}\left(1+\frac{u}{\sqrt{t}}+\frac{u^2}{ t}+R_5(t,u)\right), \end{align*} where $ |R_5(t,u)| \le 2(\log t )^3t^{-3/2}. $ Thus \begin{align*} \frac{(m_{u\sqrt{t}}+z+\sqrt{2\alpha}(\rho-1) t)^2}{2(t-u\sqrt{t})} =&\alpha(\rho-1)^2t+qu\sqrt{t}-\frac{3(\rho-1)}{4} \log t\\ &+\alpha\rho^2u^2+\sqrt{2\alpha}(\rho-1)z-\frac{3}{2}(\rho-1)\log(u)+R_6(t,u,z). 
\end{align*} Here $\lim_{t\to\infty}R_6(t,u,z)=0$ and there is a positive function $r^*(\cdot)$ with $\lim_{t\to\infty}r^*(t)=0$ such that $-R_6(t,u,z)\le r^*(t)(1+|z|)$ for all $u\in(t^{-1/4},\log t )$. Now, using \eqref{3.8.1}, we get that \begin{align*} &\lim_{t\to\infty}t^{-3(\rho-1)/4}e^{(q+\alpha(1-\rho)^2)t} \mE \int_{t^{1/4}}^{(\log t)\sqrt{t}} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}(1-\rho) t-B_{t-r})\,dr}\hat{G}_f\left(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s}\right)\,\d s\\ =&\lim_{t\to\infty}\int^{\log t}_{t^{-1/4}}\frac{\sqrt{t}}{\sqrt{2\pi(t-u\sqrt{t})}} u^{3(\rho-1)/2}e^{-\alpha\rho^2 u^2}\d u\int_{\R}e^{-\sqrt{2\alpha}(\rho-1)z} e^{-R_6(t,u,z)}\hat{G}_f\left(u\sqrt{t},m_{u\sqrt{t}}+z\right)\d z. \end{align*} Using arguments similar to those in the proof of Lemma \ref{lem-case1}, the desired result follows from the dominated convergence theorem. \hfill$\Box$ \begin{lemma}\label{lem-case2-2} It holds that for any $f\in\mathcal{B}^+(\R)$, \begin{align*} &\lim_{t\to\infty}t^{-3(\rho-1)/4}e^{(q+\alpha(1-\rho)^2) t} {\rm E} \int_{[0,t]\setminus (t^{1/4},(\log t)\sqrt{t})} e^{-\int_s^t \zeta_f(r,\sqrt{2\alpha}(1-\rho) t-B_{t-r})\,dr}\hat{G}_f(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s\\ &=0. \end{align*} \end{lemma} {\bf Proof:} We only need to show that \begin{align*} \lim_{t\to\infty}\frac{e^{(q+\alpha(1-\rho)^2) t}}{t^{3(\rho-1)/4}} \mE \int_{(0,t)\setminus (t^{1/4},(\log t)\sqrt{t})} e^{-q(t-s)}v^2(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s=0. \end{align*} We prove the above result in three steps.
{\it Step 1}: By \eqref{3.5.1}, we have that $$\mE\left(v^2(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\right)\le \mP\left(B_t\le \sqrt{2\alpha}(1-\rho) t\right)\le \frac{1}{2\sqrt{\pi\alpha}(\rho-1)\sqrt{t}}e^{-\alpha(\rho-1)^2t}.$$ Thus, for any $T>0$, \begin{align}\label{3.4} & \frac{e^{(q+\alpha(\rho-1)^2) t}}{t^{3(\rho-1)/4}}\mE\int_0^T e^{-q(t-s)}v^2(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s\nonumber\\ \le &\int_0^T e^{qs}\,\d s\frac{1}{2\sqrt{\pi\alpha}(\rho-1)\sqrt{t}} \frac{1}{t^{3(\rho-1)/4}}\to 0, \quad\mbox{ as } t\to\infty. \end{align} {\it Step 2:} Using arguments similar to those in the proofs of Lemmas \ref{lem:case1-3} and \ref{lem:case1-4}, we get that, $$\frac{e^{(q+\alpha(\rho-1)^2)t}}{t^{3(\rho-1)/4}}\mE\int_{\sqrt{t}\log t}^{t} e^{-q(t-s)}v^2(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s\to 0,\quad \mbox{ as }t\to\infty.$$ {\it Step 3:} Note that there exists $T_0$ such that $m_s>0$ for all $s>T_0$. Using Lemma \ref{lem_upper-w}, we get that, for $\eta$ small enough, there exist $c_\eta>1$ and $T_\eta>1$ such that for $T>T_\eta+T_0$, \begin{align}\label{3.9.2} &\mE \int_{T}^{t^{1/4}}e^{-q(t-s)}v^2(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s\nonumber\\ =&\mE\int_{T}^{t^{1/4}} e^{-q(t-s)}v^2(s, m(s)-(m(s)+\sqrt{2\alpha}(\rho-1) t+B_{t-s}))\,\d s\nonumber\\ \le &c_\eta^2 \int_{T}^{t^{1/4}} e^{-q(t-s)} \mE[ e^{-2(\sqrt{2\alpha}(\rho-1)-\eta)(m(s)+\sqrt{2\alpha}(\rho-1) t+B_{t-s})}\wedge1]\,\d s. \end{align} Similar to \eqref{2.12}, we have that, for $T<s<t^{1/4}$, \begin{align}\label{3.9.1} &\mE[ e^{-2(\sqrt{2\alpha}(\rho-1)-\eta)(m(s)+\sqrt{2\alpha}(\rho-1) t+B_{t-s})}\wedge1]\nonumber\\ \le& Ct^{-1/2}e^{-\frac{(m(s)+\sqrt{2\alpha}(\rho-1) t)^2}{2(t-s)}}\nonumber\\ \le& Ct^{-1/2}t^{\frac{3(\rho-1)}{8}}e^{-\alpha(\rho-1)^2 t-qs} \end{align} with $C$ being a positive constant. 
Here in the last inequality, we used the fact that \begin{align*} \frac{(m(s)+\sqrt{2\alpha}(\rho-1) t)^2}{2(t-s)}&= \frac{(\sqrt{2\alpha}\rho s-\frac{3}{2\sqrt{2\alpha}}\log s+\sqrt{2\alpha}(\rho-1) (t-s))^2}{2(t-s)}\nonumber\\ &\ge \alpha(\rho-1)^2(t-s)+\sqrt{2\alpha}(\rho-1) \left(\sqrt{2\alpha}\rho s-\frac{3}{2\sqrt{2\alpha}}\log s\right)\nonumber\\ &=\alpha(\rho-1)^2 t+qs-\frac{3}{2}(\rho-1)\log s. \end{align*} Putting \eqref{3.9.1} back into \eqref{3.9.2}, we get that \begin{align*} &\frac{e^{(q+\alpha(\rho-1)^2) t}}{t^{3(\rho-1)/4}}\mE\int_T^{t^{1/4}} e^{-q(t-s)}v^2(s,\sqrt{2\alpha}(1-\rho) t-B_{t-s})\,\d s \le Ct^{-1/4}t^{\frac{-3(\rho-1)}{8}}\to 0,\quad \mbox{as } t\to\infty. \end{align*} Now the proof is complete. \hfill$\Box$ \subsection{Proof of Theorem \ref{them:case3}: $\delta<1-\rho$} By \eqref{eq-v2''} we have that \begin{align}\label{eq-v} v_f(t,x)&=e^{-qt}\mP\Big(B_t\le x\Big)+\mE \int_0^t e^{-q(t-s)}G_f(s,x-B_{t-s})\,\d s, \end{align} where \begin{align}\label{def:G} G_f(t,x):&=\hat{G}_f(t,x)-\phi(u^*_f(t,x))v_f(t,x)\\ &=\frac{1}{\lambda^*}\Big[\psi(\lambda^*+u^*_f(t,x)-\lambda^*v_f(t,x))-\psi(\lambda^*+u^*_f(t,x))\Big]+qv_f(t,x)\nonumber \end{align} with $\phi(\lambda)=\psi'(\lambda+\lambda^*)-q$ being defined by \eqref{phi}. \medskip It follows from Lemma \ref{fact4} that, to prove Theorem \ref{them:case3}, we only need to consider the limiting property of $v_f(t,\sqrt{2\alpha}\delta t)$. Using L'Hospital's rule, one has that \begin{equation}\label{tail-normal} \lim_{x\to\infty}\frac{\mP(B_1>x)}{x^{-1}e^{-x^2/2}}=\frac{1}{\sqrt{2\pi}}\lim_{x\to\infty}\frac{\int_x^\infty e^{-y^2/2}\,\d y}{x^{-1}e^{-x^2/2}}=\frac{1}{\sqrt{2\pi}}. \end{equation} It follows that \begin{align} \lim_{t\to\infty}\sqrt{t}e^{(q+\alpha\delta^2)t} e^{-qt}\mP\Big(B_t\le \sqrt{2\alpha}\delta t\Big)=&\lim_{t\to\infty}\sqrt{t}e^{(q+\alpha\delta^2)t} e^{-qt}\mP\Big(B_1\ge \sqrt{2\alpha}|\delta| \sqrt{t}\Big) =\frac{1}{2\sqrt{\pi\alpha}|\delta|}.
\end{align} Hence, by \eqref{eq-v}, to prove the desired result, we only need to prove that \begin{align*} &\lim_{t\to\infty}\sqrt{t}e^{(q+\alpha\delta^2)t}\mE\int_0^{t} e^{-q(t-s)} G_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&\frac{1}{\sqrt{2\pi}}\int_0^\infty e^{(q-\alpha\delta^2)s}\,\d s\int_{\R}e^{\sqrt{2\alpha}\delta z}{G}_f(s,z)\,\d z, \end{align*} which will follow from Lemmas \ref{int-upto-(t-a)} and \ref{int-(t-a,t)} below. In Lemma \ref{int-(t-a,t)}, we will show that, for any $T>0$, $$\sqrt{t}e^{(q+\alpha\delta^2)t}\bP\Big(M_t^I\le \sqrt{2\alpha}\delta t,\tau\in [0,t-T] \Big)\to 0.$$ Thus, on the event $\left\{M_t^I\le \sqrt{2\alpha}\delta t\right\}$, with large probability, the first branching of the skeleton happens in the interval $[t-T,t]. $ \medskip \begin{lemma}\label{int-upto-(t-a)} If $\delta<1-\rho$, then for any $f\in\mathcal{B}_b^+(\R)$ and any $T>0$, it holds that \begin{align*} &\lim_{t\to\infty}\sqrt{t}e^{(q+\alpha\delta^2)t}{\rm E}\int_0^{t-T} e^{-q(t-s)}{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&\frac{1}{\sqrt{2\pi}}\int_0^\infty e^{(q-\alpha\delta^2)s}\,\d s\int_{\R}e^{\sqrt{2\alpha}\delta z}{G}_f(s,z)\,\d z. \end{align*} \end{lemma} {\bf Proof:} Note that \begin{align*} &\sqrt{t}e^{(q+\alpha\delta^2)t} \mE\int_0^{t-T} e^{-q(t-s)}{G}_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s\\ =&\int_0^{t-T} \frac{\sqrt{t}}{\sqrt{2\pi(t-s)}}e^{(q-\alpha\delta^2)s}\,\d s\int_{\R}e^{\sqrt{2\alpha}\delta z}e^{-\frac{(z-\sqrt{2\alpha}\delta s)^2}{2(t-s)}}{G}_f(s,z)\,\d z. \end{align*} The absolute value of the integrand above is less than $\frac{1}{\sqrt{2\pi}}\sqrt{1+s/T}e^{(q-\alpha\delta^2)s}e^{\sqrt{2\alpha}\delta z}|{G}_f(s,z)|$, thus by the dominated convergence theorem, it suffices to show that \begin{align}\label{4.1} \int_{0}^{\infty} \sqrt{s+T}e^{(q-\alpha\delta^2)s}\,\d s\int_\R e^{\sqrt{2\alpha}\delta z}|{G}_f(s,z)|\,\d z<\infty. 
\end{align} By \eqref{def:G}, \eqref{est-hatG} and the fact that $v_f(t,x)\le v(t,x)$, we have that \begin{equation}\label{est:G} |G_f(s,z)|\le \phi(u^*_f(s,z))v_f(s,z)+ \hat{G}_f(s,z)\le \phi(u^*_f(s,z))v(s,z)+qv(s,z)^2. \end{equation} We will prove \eqref{4.1} in two steps. Recall that $k(t)=-\log \P^*(\|X_t\|=0)$. {\it Step 1}: First we consider the integral over $s\in(0,A)$, where $A>0$ is a constant. Since $\phi$ is increasing, by Lemma \ref{lemma:u*}(1), $\phi(u^*_f(s,z))\le \phi(k(s))$. By Lemma \ref{lemma:key}(1), $v(s,z)\le\mP(B_s\le z)=\mP(B_1\le z/\sqrt{s})$. Thus we have for $0<s<A$, \begin{align*} &\int_{-\infty}^{0}e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z \le \phi(k(s))\int_{-\infty}^{0}e^{\sqrt{2\alpha}\delta z}\mP(B_1\le z/\sqrt{s})\d z\\ =&\sqrt{s}\phi(k(s))\int_0^{\infty}e^{\sqrt{2\alpha}|\delta| \sqrt{s}z}\mP(B_1\ge z)\d z \le\sqrt{s}\phi(k(s))\int_0^{\infty}e^{\sqrt{2\alpha}|\delta| \sqrt{A}z}\mP(B_1\ge z)\d z. \end{align*} Since $\mP(B_1\ge z)\sim \frac{1}{\sqrt{2\pi}}z^{-1}e^{-z^2/2}$ as $z\to\infty$, we have $\int_0^{\infty}e^{\sqrt{2\alpha}|\delta| \sqrt{A}z}\mP(B_1\ge z)\d z <\infty$. Thus \begin{align}\label{3.11.2} \int_{-\infty}^{0}e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z \le C\sqrt{s}\phi(k(s)). \end{align} For any $\epsilon>0$, since $v(s,z)\le 1$, we have \begin{align}\label{3.11.3} \int_{0}^{s^\epsilon}e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z \le s^\epsilon\phi(k(s)). \end{align} By \eqref{3.11.2}, \eqref{3.11.3} and Lemma \ref{lemma2}, for any $\epsilon>0$, \begin{align}\label{3.11.4} \int_0^A \sqrt{s+T}e^{(q-\alpha\delta^2)s} \int_{-\infty}^{s^\epsilon}e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z\d s<\infty. \end{align} Since $\phi'(\lambda)=\psi''(\lambda^*+\lambda)$ is decreasing and $\phi(0)=0$, we have \begin{equation}\label{domi-phi} \phi(\lambda)\le \phi'(0)\lambda.
\end{equation} Thus, by \eqref{est:u*}, $$\phi(u^*_f(s,z))\le \phi'(0)u^*_f(s,z)\le C (1+z^{-2/\vartheta})e^{(a+\alpha)s},\quad z>0.$$ Since $v(s,z)\le 1$, we have for $0<s<A$, \begin{align*} &\int_{s^\epsilon}^\infty e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z\le Ce^{(a+\alpha)s} \int_{s^\epsilon}^\infty e^{-\sqrt{2\alpha}|\delta| z} (1+z^{-2/\vartheta})\d z\\ &\le Ce^{(a+\alpha)A} \Big[\int_{s^\epsilon}^{A^\epsilon } (1+z^{-2/\vartheta})\d z+ \int_{A^\epsilon}^\infty e^{-\sqrt{2\alpha}|\delta| z} (1+z^{-2/\vartheta})\d z\Big] \le C(1+s^{\epsilon(1-2/\vartheta)}). \end{align*} Now we choose $\epsilon$ small enough such that $\epsilon(2/\vartheta-1)<1$. Thus \begin{align}\label{3.11.5} \int_0^A \sqrt{s+T}e^{(q-\alpha\delta^2)s} \int_{s^\epsilon}^\infty e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z\d s<\infty. \end{align} Combining \eqref{3.11.4} and \eqref{3.11.5}, we obtain that \begin{align}\label{3.11.6} \int_0^A \sqrt{s+T}e^{(q-\alpha\delta^2)s} \int_{-\infty}^\infty e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z\d s<\infty. \end{align} {\it Step 2}: By Lemma \ref{lemma:u*}(1), $\sup_{s>A}e^{qs}k(s)=e^{qA}k(A)<\infty$. Hence we have for $s>A$, $$\phi(u^*_f(s,z))\le \phi'(0)u^*_f(s,z)\le \phi'(0)k(s)\le \phi'(0)e^{qA}k(A)e^{-qs}.$$ Thus we get that, for $s>A$, \begin{align}\label{3.11.7} &\int_\R e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z\le C e^{-qs}\int_\R e^{\sqrt{2\alpha}\delta z}v(s,z)\,\d z\nonumber\\ = &C\sqrt{2\alpha} s e^{-qs} e^{-\sqrt{2\alpha}\delta \sqrt{s}}\int_\R e^{2\alpha \delta s \theta }v(s,\sqrt{2\alpha}\theta s-\sqrt{s})\,\d \theta. \end{align} We will divide the above integral into three parts: $\int_{1}^\infty+\int_{1-\rho}^1+\int_{-\infty}^{1-\rho}$. We deal with them one by one. 
Using Lemma \ref{lemma:key}(2), we have that for $A>t_0$ and $s>A,$ $$\int_1^\infty e^{2\alpha \delta s \theta }v(s,\sqrt{2\alpha}\theta s-\sqrt{s})\,\d \theta \le \int_1^\infty e^{-2\alpha |\delta| s\theta}\,\d \theta =\frac{1}{2\alpha|\delta|s}e^{-{2\alpha}|\delta| s},$$ \begin{align*} \int_{1-\rho}^1 e^{2\alpha \delta s \theta}v(s,\sqrt{2\alpha}\theta s-\sqrt{s})\,\d \theta &\le cs \int_{1-\rho}^1 e^{2\alpha \delta s \theta }e^{-2\alpha(\rho-1)(1-\theta)s}\,\d \theta\\ &\le cs \rho e^{-2\alpha(\rho-1)(\rho+\delta)s}, \end{align*} and \begin{align*} &\int_{-\infty}^{1-\rho} e^{2\alpha \delta s \theta }v(s,\sqrt{2\alpha}\theta s-\sqrt{s})\,\d \theta \le cs \int_{-\infty}^{1-\rho} e^{2\alpha \delta s \theta }e^{-(q+\alpha \theta^2)s}\,\d \theta\\ =&cs e^{(-q+\alpha\delta^2)s}\int_{-\infty}^{1-\rho} e^{-\alpha s(\theta-\delta)^2}\d \theta \le Cs^{1/2}e^{(-q+\alpha\delta^2)s}. \end{align*} For $\delta<1-\rho$, one can check that $$2\alpha\delta\le -2\alpha (\rho-1)(\rho+\delta)\le -q+{\alpha}\delta^2.$$ Thus for $s>A$, \begin{align}\label{3.11.8} \int_{-\infty}^{\infty} e^{2\alpha \delta s \theta }v(s,\sqrt{2\alpha}\theta s-\sqrt{s})\,\d \theta \le Cs e^{(-q+\alpha\delta^2)s}. \end{align} It follows from \eqref{3.11.7} and \eqref{3.11.8} that \begin{align*} & \int_{A}^{\infty} \sqrt{s+T}e^{(q-\alpha\delta^2)s}\,\d s\int_\R e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z\\ \le &C \int_{A}^{\infty} \sqrt{s+T}s^{2}e^{-qs}e^{-\sqrt{2\alpha}\delta\sqrt{s}}\,\d s <\infty. \end{align*} Combining the two steps above, we get \begin{align*} \int_{0}^{\infty} \sqrt{s+T}e^{(q-\alpha\delta^2)s}\,\d s\int_\R e^{\sqrt{2\alpha}\delta z}\phi(u^*_f(s,z))v(s,z)\,\d z & <\infty. \end{align*} Similarly, one can prove that \begin{align*} \int_{0}^{\infty} \sqrt{s+T}e^{(q-\alpha\delta^2)s}\,\d s\int_\R e^{\sqrt{2\alpha}\delta z}v(s,z)^2\,\d z & <\infty. \end{align*} Hence \eqref{4.1} holds and the desired result follows immediately.
\hfill$\Box$ \begin{lemma}\label{int-(t-a,t)} If $\delta<1-\rho$, then for any $f\in\mathcal{B}_b^+(\R)$ and $T>0$, $$ \lim_{t\to\infty}\sqrt{t}e^{(q+\alpha\delta^2)t}{\rm E}\int_{t-T}^t e^{-q(t-s)}G_f(s,\sqrt{2\alpha}\delta t-B_{t-s})\,\d s=0. $$ \end{lemma} {\bf Proof:} Note that \begin{align*} &\mE\int_{t-T}^t e^{-q(t-s)}|G_f(s,\sqrt{2\alpha}\delta t-B_{t-s})|\,\d s=\int_{0}^T e^{-qs}\mE|G_f(t-s,\sqrt{2\alpha}\delta t-B_{s})|\,\d s\\ =&\int_{0}^T e^{-qs}\mE[|G_f(t-s,\sqrt{2\alpha}\delta t-B_{s})|;B_s<-(\epsilon t-\sqrt{t})]\,\d s\\ &+ \int_{0}^T e^{-qs}\mE[|G_f(t-s,\sqrt{2\alpha}\delta t-B_{s})|;B_s\ge-(\epsilon t-\sqrt{t})]\,\d s, \end{align*} where $\epsilon<1-\rho-\delta$ is a small constant. By \eqref{domi-phi} and Lemma \ref{lemma:u*}(1), $\sup_{t>1}\phi(u^*_f(t,x))\le \phi'(0)\sup_{t>1}u^*_f(t,x)\le \phi'(0)k(1)<\infty.$ Since $v(t,x)\le 1$, we have $\sup_{t>1}\sup_{x}|G_f(t,x)|<+\infty$. Hence we have, for $t>1$ large enough, and $s\in(0,T)$, \begin{align*} &\mE\left[|G_f(t-s,\sqrt{2\alpha}\delta t-B_{s})|;B_s\le -(\epsilon t-\sqrt{t})\right]\le C\mP\left(B_s\ge(\epsilon t-\sqrt{t})\right)\\ \le& C\frac{\sqrt{s}}{\epsilon t-\sqrt{t}}e^{-(\epsilon t-\sqrt{t})^2/(2s)}\le C\frac{\sqrt{T}}{\epsilon t-\sqrt{t}}e^{-(\epsilon t-\sqrt{t})^2/(2T)}, \end{align*} where in the second inequality, we used \eqref{est:bm}. Thus for any $\epsilon>0$, as $t\to\infty$, $$\sqrt{t}e^{(q+\alpha\delta^2)t}\int_{0}^T e^{-qs}\mE\left[|G_f(t-s,\sqrt{2\alpha}\delta t-B_{s})|;B_s<-(\epsilon t-\sqrt{t})\right]\,\d s\to 0.$$ Note that if $B_s\ge-(\epsilon t-\sqrt{t})$, then $$\sqrt{2\alpha}\delta t-B_s\le \sqrt{2\alpha}(\delta+\epsilon) t-\sqrt{t}\le \sqrt{2\alpha}(\delta+\epsilon) (t-s)-\sqrt{t-s}.$$ Using Lemma \ref{lemma:key}(2) with $\theta=\delta+\epsilon<1-\rho$, for $t>t_0+T$ and $s\in(0,T)$, $$ v(t-s,\sqrt{2\alpha}\delta t-B_s)\le v(t-s,\sqrt{2\alpha}(\delta+\epsilon)(t-s)-\sqrt{t-s} )\le c te^{-q(t-s)}e^{-\alpha(\delta+\epsilon)^2(t-s)}. 
$$ By Lemma \ref{lemma:u*}(1), we have that for $t\ge t_0+T$ and $s\in(0,T)$, \begin{align*} &\phi(u^*_f(t-s,\sqrt{2\alpha}\delta t+z))\le \phi'(0)u^*_f(t-s,\sqrt{2\alpha}\delta t+z)\\ \le&\phi'(0) k(t-s)\le \phi'(0)e^{qt_0}k(t_0)e^{-q(t-s)}. \end{align*} Thus, by \eqref{est:G}, we get that, if $B_s\ge-(\epsilon t-\sqrt{t})$, \begin{align} &|G_f(t-s,\sqrt{2\alpha}\delta t-B_s)|\le Ct^2e^{-2q(t-s)}e^{-\alpha(\delta+\epsilon)^2(t-s)}\nonumber\\ \le &Ce^{2qs}e^{\alpha(\delta+\epsilon)^2 s}t^2e^{-2qt}e^{-\alpha\delta^2 t}e^{-2\alpha\delta\epsilon t}. \end{align} It follows that, as $t\to\infty$, \begin{align} &\sqrt{t}e^{(q+\alpha\delta^2)t}\int_{0}^T e^{-qs}\mE[|G_f(t-s,\sqrt{2\alpha}\delta t-B_{s})|;B_s\ge-(\epsilon t-\sqrt{t})]\,\d s\nonumber\\ \le& Ct^{5/2} e^{-(q+2\alpha\delta\epsilon)t}\int_0^T e^{qs}e^{\alpha(\delta+\epsilon)^2 s}\d s\le Ct^{5/2} e^{-(q+2\alpha\delta\epsilon)t}\to 0, \end{align} if we choose $\epsilon$ small enough such that $q+2\alpha\delta\epsilon>0$. The proof is now complete. \hfill$\Box$
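Finally, the Gaussian tail asymptotic \eqref{tail-normal}, which fixes the constant in this last case, can be confirmed numerically; a minimal sketch (the function names are ours):

```python
import math

def normal_tail(x):
    # P(B_1 > x), computed via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_ratio(x):
    # P(B_1 > x) / (x^{-1} e^{-x^2/2}); by (tail-normal) this tends to 1/sqrt(2*pi)
    return normal_tail(x) * x * math.exp(0.5 * x ** 2)
```

The convergence is of order $x^{-2}$, in line with the standard expansion $\mP(B_1>x)=\frac{e^{-x^2/2}}{\sqrt{2\pi}\,x}\left(1-x^{-2}+O(x^{-4})\right)$.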
\section{Introduction} The Caldwell plot \cite{Cald} of $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ presented at the Desy Workshop in November 1997 surprised the community. The results appeared to indicate that we have reached a region in $x$ and $Q^{2}$ where pQCD was no longer valid. DGLAP evolution leads us to expect that $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ at fixed $Q^{2}$ would be a monotonically increasing function of $\frac{1}{x}$, whereas a superficial glance at the data suggests that the logarithmic derivative of $F_{2}$ deviates from the expected pQCD behaviour, and has a turnover in the region of $2 \leq Q^{2} \leq 4$ GeV$^{2}$ (see fig.1 where the ZEUS data and the GRV'94 predictions are shown). Opinions were voiced that the phenomenon was connected with the transition from ``hard'' to ``soft'' interactions. Others \cite{AHM} felt that the ``turnover'' in $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ may be an indication of saturation of the parton distributions. \begin{figure}[p] \hspace*{0.4cm}\psfig{figure=slopef2exp.eps,width=10cm,height=2.5in} \caption{ ZEUS data and GRV'94 predictions for $F_{2}$ slope \label{fig:fig.1}} \end{figure} \par Amongst the problems that one faces in attempting to comprehend the data is the fact that due to kinematic constraints the data is sparse, and each point shown pertains to a different pair of values of $x$ and $Q^{2}$. We miss the luxury of having measurements at several different values of $x$ for fixed values of $Q^{2}$, which would allow one to deduce the detailed behaviour of $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$. Bartels et al \cite{BCF} had previously suggested that the logarithmic derivative of the structure function $F_{2}$ should be sensitive to screening effects.
\section{QCD and $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$} The DGLAP evolution equations \cite{DGLAP} imply the relation \begin{equation} \frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}\;= \; \frac{2 \alpha_{s}}{9 \pi} xG^{DGLAP}(x,Q^{2}) \end{equation} where $xG^{DGLAP}(x,Q^{2})$ denotes the distribution of gluons in the proton. At present HERA energies $xG(x,Q^{2})$ grows rapidly with increasing $\frac{1}{x}$, i.e. with energy. From unitarity constraints we know that this growth must taper off, and at some value of $x$ ($=x_{cr}$), $xG(x,Q^{2})$ must become saturated, and perturbative QCD will no longer be valid. \par To illustrate the effects that we can expect in the saturated case, we turn to the colour dipole picture of DIS \cite{AHM}. Here the $\gamma^{*}$ fluctuates into a $q\bar{q}$ pair, which then scatters on the proton over a time scale that is short compared to that of the fluctuations. We have \begin{equation} \frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})} \; \sim \; Q^{2} \sigma_{q\bar{q}}(\Delta r_{\bot} \sim \frac{1}{Q}) \end{equation} where $\sigma_{q\bar{q}}$ denotes the cross section for the $q\bar{q}$ pair to interact with the proton, and $\Delta r_{\bot}$ the distance between the q and $\bar{q}$. When the distribution of gluons in the proton is normal, pQCD is applicable, and the relation given in eq.(1) holds. However, when the gluons are densely packed (i.e. saturated), one reaches the unitarity limit and the colour dipole cross section can be assumed to be geometric, i.e. $\sigma_{q\bar{q}} \sim \pi R_{p}^{2} $. \par In the saturated case we then expect \begin{equation} \frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})} \; \sim \; Q^{2} \pi R_{p}^{2} \end{equation} i.e. the logarithmic slope should grow linearly with $Q^{2}$. 
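The qualitative contrast between eqs.(1) and (3) can be sketched with a toy gluon density $xG\sim x^{-\lambda}$. The snippet below is our illustration only, not part of the original analysis: the coupling, the exponent $\lambda$, the normalisation and the proton radius are all arbitrary placeholder values.

```python
import math

ALPHA_S = 0.25   # toy fixed strong coupling (placeholder value)
R_P = 5.0        # toy proton radius in GeV^-1 (placeholder value)

def slope_pqcd(x, lam=0.3, norm=1.0):
    """Toy version of eq.(1): dF2/dlnQ2 = (2 alpha_s / 9 pi) * xG,
    with a schematic gluon density xG ~ norm * x**(-lam)."""
    xg = norm * x ** (-lam)
    return 2.0 * ALPHA_S / (9.0 * math.pi) * xg

def slope_saturated(q2):
    """Toy version of eq.(3): in the saturated regime the slope grows
    linearly with Q^2, dF2/dlnQ2 ~ Q^2 * pi * R_p^2 (arbitrary norm)."""
    return q2 * math.pi * R_P ** 2
```

In the pQCD regime the slope rises monotonically with $\frac{1}{x}$ at fixed $Q^{2}$, while the saturated slope depends on $Q^{2}$ alone; this is precisely the qualitative distinction the Caldwell plot probes.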
\section{Results} \par We show that the Caldwell plot is in agreement with the pQCD expectations, once screening corrections (SC), which become more important as one goes to lower values of $x$ and $Q^{2}$, are included. To provide a check of our calculations, we compare with the results one derives using the ALLM'97 parametrization \cite{ALLM}, which we use as a ``pseudo data base''. This parametrization is based on a Regge-type approach formulated so as to be compatible with pQCD and the DGLAP evolution equations. \begin{figure}[p] \hspace*{0.5cm}\psfig{figure=slope_many.ps,width=10cm,height=3.in} \caption{ The $F_{2}$ slope (a) in our QCD calculation incorporating SC, and (b) in the ALLM'97 parametrization. \label {fig:fig.2}} \end{figure} \par Following the method suggested by Levin and Ryskin \cite{LR} and Mueller \cite{M1}, we calculate the SC pertaining to $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ for both the quark and gluon sectors. In fig.2 we show our results, as well as those of ALLM, compared with the experimental results. \begin{figure}[t] \psfig{figure=df2dlnq2.ps,width=12cm,height=3.5in} \caption{ $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$. In addition to the ALLM band we show a typical data point with its error. \label{fig:fig3.}} \end{figure} \begin{figure}[t] \psfig{figure=fixedx.ps,width=12cm,height=3.in} \caption{ $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ at fixed x. \label{fig:fig.4}} \end{figure} \par In figs.3 and 4 we display our calculations for the logarithmic derivative of $F_{2}$ after SC have been incorporated, as well as the ALLM results: in fig.3 for fixed values of $Q^{2}$ and varying values of $x$, and in fig.4 for fixed $x$ and varying values of $Q^{2}$. In fig.4 we also compare with the experimental results. 
We note that $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ at fixed $Q^{2}$, both in our calculations and in the ``pseudo data'' (ALLM), remains a $\bf{monotonic}$ increasing function of $\frac{1}{x}$. From fig.4 we note that for fixed $x$, $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ decreases as $Q^{2}$ becomes smaller. The decrease becomes stronger as we go to lower values of $x$. This phenomenon, which is due to SC, adds to the confusion in interpreting the Caldwell plot. \section{Conclusions} $\;\;\;\;\;$ 1) We have obtained a good description of $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ for x $ \leq $ 0.1. 2) Our results suggest that there is a smooth transition between the ``soft'' and ``hard'' processes. 3) SC are essential for describing the Caldwell plot even at present HERA energies, where we are far below the saturation region (as $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})} < F_{2}(x,Q^{2}$)). In the saturation region ($x \leq x_{cr}$) we expect $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ = $F_{2}(x,Q^{2}$). 4) At fixed $x$ and/or fixed $Q^{2}$, SC do not change the qualitative behaviour of $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$; they only produce a smaller value of the slope. 5) The apparent turnover of $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ is an illusion, created by the experimental limitation of measuring the logarithmic derivative of $F_{2}$ at particular correlated values of $Q^{2}$ and $x$. 6) Direct experimental evidence supporting our hypothesis was presented recently by Max Klein at the LP99 conference \cite{MK}. He concluded ``H1 see no departure from the rising behaviour of $\frac{\partial F_{2}(x,Q^{2})}{\partial \ln(Q^{2}/Q_{0}^{2})}$ as a function of increasing $\frac{1}{x}$ for $Q^{2} \geq 3 \;\; GeV^{2}$'' (this is the lowest value of $Q^{2}$ for which H1 presented data). 
The detailed calculations and results on which this talk was based appear in \cite{GLM1} and \cite{GLM2}. \section{Acknowledgements} I would like to thank my friends and colleagues Genya Levin and Uri Maor for an enjoyable and fruitful collaboration. I am also indebted to John Dainton for drawing my attention to the new H1 data. This research was supported in part by the Israel Science Foundation, founded by the Israel Academy of Science and Humanities. \section*{References}
\section{Introduction} The Hubbard model is one of the generic models used to describe strongly correlated electron systems~\cite{Hubbard}. This model has played an important role in the study of magnetism and superconductivity. However, in spite of its simplicity, it is difficult to solve this model exactly, except in one dimension or in some special cases. On the other hand, extended versions of the Hubbard model have also been studied. The on-site repulsion of the Hubbard model arises from the matrix elements of the Coulomb interaction corresponding to the on-site Wannier states, while other matrix elements are neglected. Therefore, it is worth considering the effects of these neglected terms as site-off-diagonal interactions~\cite{Campbell-G-L}. For these generalized models, exact results for ferromagnetism and superconducting states have been discussed~\cite{Simon-A,Strack-V1993,Strack-V1994,Arrachea-A,Boer-K-S,Boer-S,Montorsi-C,Kollar-S-V}. In addition to those, a different type of exact ground state, the so-called ``bond N\'eel'' (BN) state~\cite{Itoh-N-M,Nakamura-I_2001,Nakamura-O-I}, has been discussed for a one-dimensional system, using the projection operator method~\cite{Majumder-G,Affleck-K-L-T} for multicomponent systems~\cite{Itoh}. The BN state is regarded as a N\'eel ordered state of bond-located spins. Furthermore, the concept of the BN state in one dimension was extended to higher-dimensional systems by introducing plaquette states in corner-sharing lattices such as the Kagom\'e lattice~\cite{Nakamura-I_2004}. In this paper, we extend this argument for the Kagom\'e Hubbard model at $1/3$-filling to several fillings and give numerical verification based on exact diagonalization and density-matrix renormalization group (DMRG)~\cite{white92} techniques. We also calculate the entanglement entropy (EE) exactly. This paper is organized as follows: In Sec.~\ref{sec:method}, we review the method to construct Hamiltonians with exact ground states in multicomponent systems. 
In Sec.~\ref{sec:1D}, we review the application of this method to the one-dimensional model discussed in Ref.~\cite{Itoh-N-M}. In Sec.~\ref{sec:2D}, we apply the analysis to the Kagom\'e lattice. In addition to the exact result at $1/3$-filling obtained in Ref.~\cite{Nakamura-I_2004}, we also discuss the results at $2/3$-filling and at half-filling. The exact ground states are numerically confirmed using the exact diagonalization and DMRG methods. In Sec.~\ref{sec:EE}, we calculate the entanglement entropy. Finally, we give a summary and discussion of the results. \begin{figure}[h] \input{fig1.tex} \caption{Examples of lattice structures on which generalized Hubbard models with exact plaquette-ordered ground states can be constructed: (a) the one-dimensional chain and (b) the Kagom\'e lattice. The blue and the red plaquettes denote those belonging to the groups $\mathcal{A}$ and $\mathcal{B}$, respectively.}\label{fig:lattices} \end{figure} \section{Construction of the Hamiltonian}\label{sec:method} The method to construct a Hamiltonian with an exact ground state is as follows~\cite{Itoh}. First, we consider a Hamiltonian given by a sum of products of projection operators \begin{equation} {\cal H}=\sum_{\alpha} h_{\alpha},\quad h_{\alpha}=\sum_{\mu,\nu}\lambda_{\mu\nu} R^{(\mu)}_{\alpha\uparrow}R^{(\nu)}_{\alpha\downarrow}, \quad \lambda_{\mu\nu}\geq 0, \label{Ham} \end{equation} where $\alpha$ denotes the position of one of the unit plaquettes that cover the lattice. $R^{(\mu)}_{\alpha\sigma}$ is an operator whose expectation value is positive semidefinite, $\braket{R^{(\mu)}_{\alpha\sigma}}\geq 0$. This condition is realized if $R^{(\mu)}_{\alpha\sigma}$ is given by a product of an operator and its Hermitian conjugate. Then the expectation value of the Hamiltonian is also positive semidefinite, $\langle {\cal H}\rangle\geq 0$. 
Next, we introduce a trial wave function given by a direct product of up and down spin sectors, \begin{equation} |\Psi(\mathcal{A},\mathcal{B})\rangle=|\Phi_{\uparrow}(\mathcal{A})\rangle \otimes|\Phi_{\downarrow}(\mathcal{B})\rangle,\label{state} \end{equation} where $\mathcal{A}$ and $\mathcal{B}$ denote two groups of plaquettes that cover the lattice, satisfying $\mathcal{A}\cup\mathcal{B}=\{\mbox{all lattice sites}\}$. We require that the projection operators satisfy the following conditions, \begin{equation} R^{(\mu)}_{\alpha\uparrow}|\Phi_{\uparrow}(\mathcal{A})\rangle =R^{(\mu)}_{\beta\downarrow}|\Phi_{\downarrow}(\mathcal{B})\rangle=0, \label{method.5} \end{equation} where $\alpha\in\mathcal{A}$ and $\beta\in\mathcal{B}$. Therefore, even if we have \begin{equation} R^{(\mu)}_{\beta\uparrow} |\Phi_{\uparrow}(\mathcal{A})\rangle \neq 0,\quad R^{(\mu)}_{\alpha\downarrow} |\Phi_{\downarrow}(\mathcal{B})\rangle\neq 0, \end{equation} the eigenvalue of the Hamiltonian for $|\Psi(\mathcal{A},\mathcal{B})\rangle$ is always zero. Then the lower and the upper bounds of the energy coincide, so that $|\Psi(\mathcal{A},\mathcal{B})\rangle$ turns out to be one of the exact ground states of this system. The above conditions can be satisfied in corner-sharing lattices with a bipartite structure. The simplest example is the one-dimensional (1D) lattice, where the unit plaquette is one bond. In two dimensions (2D), the Kagom\'e lattice can be covered alternately by triangles of two colors, as illustrated in Fig.~\ref{fig:lattices}. These states can be regarded as N\'eel ordering on the dual lattice (i.e. the honeycomb lattice for the Kagom\'e lattice). In three dimensions, the pyrochlore lattice satisfies these conditions. If the system has time-reversal symmetry, its ground state has a two-fold degeneracy. 
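The logic of this construction is easy to check numerically in a toy setting: build operators $R_\mu$ that annihilate a chosen vector, sum $R_\mu^\dagger R_\mu$, and verify that the vector is a zero-energy ground state of a positive-semidefinite Hamiltonian. The sketch below is our illustration (random dense matrices on a 4-dimensional space), not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# a target "trial state" on a toy 4-dimensional Hilbert space
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)

# operators R_mu that annihilate psi: arbitrary matrices composed with
# the projector onto the orthogonal complement of psi
P = np.eye(4) - np.outer(psi, psi)
R = [rng.normal(size=(4, 4)) @ P for _ in range(3)]

# H = sum_mu R_mu^dag R_mu is positive semidefinite and kills psi,
# so psi is an exact ground state with eigenvalue zero
H = sum(r.T @ r for r in R)
evals = np.linalg.eigvalsh(H)
```

Exactly as in the text, the lower bound $\langle{\cal H}\rangle\geq 0$ and the explicit zero eigenvector together pin down the ground-state energy.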
\section{1D model}\label{sec:1D} We consider the 1D generalized Hubbard model at half-filling and zero magnetic field, given by ${\cal H}=\sum_{i\sigma}h_{i,i+1,\sigma}$ with the local bond Hamiltonian, \begin{align} \lefteqn{h_{ij\sigma}=-t\,T_{ij\sigma} +\frac{U}{2z} (n_{i\sigma}n_{i\bar{\sigma}}+n_{j\sigma}n_{j\bar{\sigma}}) }\nonumber\\ & +V_{\parallel}n_{i\sigma}n_{j\sigma}+V_{\perp}n_{i\sigma}n_{j\bar{\sigma}} \nonumber\\ & +XT_{ij\sigma}(n_{i\bar{\sigma}}+n_{j\bar{\sigma}}) +\frac{W}{2}\sum_{\sigma'}T_{ij\sigma}T_{ij{\sigma}'}, \label{local_bond_Ham} \end{align} where $\bar{\sigma}$ is the opposite spin of $\sigma$, $z=1$ for the present 1D case, and periodic boundary conditions are assumed. We have defined the hopping and the density operators as $T_{ij\sigma}\equiv c_{i\sigma}^{\dag}c_{j\sigma}^{}+\mbox{H.c.}$, $n_{i\sigma}\equiv c_{i\sigma}^{\dag}c_{i\sigma}^{}$. Note that the bond-bond interaction ($W$) term can be rewritten as \begin{equation} -2W(\bm{S}_i\cdot\bm{S}_{j}+\bm{\eta}_i\cdot\bm{\eta}_{j} -{\textstyle\frac{1}{4}}),\label{eqn:W-term} \end{equation} where $\bm{S}_i$ and $\bm{\eta}_i$ are the spin and the pseudo spin operators, respectively. The components of the pseudo spin operator are defined by \begin{equation} \eta_i^{+}\equiv(-1)^i c_{i\uparrow}^{\dag}c_{i\downarrow}^{\dag},\ \ \eta_i^{-}\equiv(-1)^i c_{i\downarrow}c_{i\uparrow},\ \ \eta_i^{z}\equiv\frac{1}{2}(n_{i\uparrow}+n_{i\downarrow}-1). \label{eqn:eta-pairing} \end{equation} Now, we introduce the bonding and the anti-bonding operators, \begin{equation} A_{ij\sigma}^{\dag} ={\textstyle\frac{1}{\sqrt{2}}} (c_{i\sigma}^{\dag}+c_{j\sigma}^{\dag}),\quad B_{ij\sigma}^{\dag} ={\textstyle\frac{1}{\sqrt{2}}} (c_{i\sigma}^{\dag}-c_{j\sigma}^{\dag}). \end{equation} The two-electron state on a bond is given by $B_{ij\sigma}^{\dag}A_{ij\sigma}^{\dag} =c_{i\sigma}^{\dag}c_{j\sigma}^{\dag}$. 
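These algebraic relations can be verified numerically. The sketch below (ours, not from the paper) represents the two fermion modes of a single bond, for one spin species, as $4\times4$ Jordan-Wigner matrices, and checks the bonding/anti-bonding algebra together with the two-electron identity $B^{\dag}A^{\dag}=c_i^{\dag}c_j^{\dag}$.

```python
import numpy as np

# Jordan-Wigner representation of two fermion modes (one bond, one spin)
a = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation operator
Z = np.diag([1., -1.])               # parity string
I = np.eye(2)

c1 = np.kron(a, I)        # c_i
c2 = np.kron(Z, a)        # c_j, with the JW string on site i

# bonding and anti-bonding combinations (matrices are real, so .T
# doubles as the Hermitian conjugate)
A = (c1 + c2) / np.sqrt(2.0)
B = (c1 - c2) / np.sqrt(2.0)

def anti(x, y):
    """Anticommutator {x, y}."""
    return x @ y + y @ x

vac = np.kron([1.0, 0.0], [1.0, 0.0])   # fermionic vacuum |0>
two_electron = B.T @ A.T @ vac          # B^dag A^dag |0>
```

The assertions below confirm $\{A,A^{\dag}\}=\{B,B^{\dag}\}=1$, $\{A,B^{\dag}\}=0$, and that $B^{\dag}A^{\dag}|0\rangle$ coincides with $c_i^{\dag}c_j^{\dag}|0\rangle$.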
These operators on the same bond satisfy the anticommutation relations: \begin{displaymath} \{A_{ij\sigma},A_{ij\sigma'}^{\dag}\} =\{B_{ij\sigma},B_{ij\sigma'}^{\dag}\}=\delta_{\sigma\sigma'},\quad \mbox{otherwise}=0. \end{displaymath} The density operators for the bond operators are given as \begin{align} n_{A\sigma}\equiv&A_{ij\sigma}^{\dag}A_{ij\sigma} =\frac{1}{2}(n_{i\sigma}+n_{j\sigma}+T_{ij\sigma}),\\ n_{B\sigma}\equiv&B_{ij\sigma}^{\dag}B_{ij\sigma} =\frac{1}{2}(n_{i\sigma}+n_{j\sigma}-T_{ij\sigma}). \end{align} Since we restrict our attention only to the two neighboring sites $i,j$, we drop these indices from the operators defined above. As a trial state, we consider the following wave function, \begin{equation} \ket{\Psi_{\sigma}} \equiv A_{12\sigma}^{\dag} A_{23\bar{\sigma}}^{\dag}\cdots A_{L-1,L\sigma}^{\dag} A_{L,1\bar{\sigma}}^{\dag} |0\rangle, \end{equation} where $|0\rangle$ denotes the vacuum and $L$ is the number of sites. This state is regarded as a N\'eel ordering of the bond-located spins, so we call it the bond N\'eel (BN) state. There is a two-fold degeneracy, given by $|\Psi_{\uparrow}\rangle$ and $|\Psi_{\downarrow}\rangle$. In order to construct a model with the exact ground state, the local Hamiltonian $h_{ij}=\sum_{\sigma}h_{ij\sigma}$ should be decomposed by the projection operators $1-n_{A\sigma}$ and $n_{B\sigma}$ in the following form, \begin{align} h_{ij}&-\varepsilon_0 =\lambda_{\bar{A}\bar{A}}(1-n_{A\uparrow})(1-n_{A\downarrow}) +\lambda_{BB}n_{B\uparrow}n_{B\downarrow}\nonumber\\ &+\lambda_{\bar{A}B}\{(1-n_{A\uparrow})n_{B\downarrow} +n_{B\uparrow}(1-n_{A\downarrow})\}, \label{decomposed_Hamiltonian_1D} \end{align} where $\varepsilon_0$ is the ground-state energy per bond. According to the argument given in Sec.~\ref{sec:method}, for the BN ground state, the parameters should be chosen as \begin{equation} \lambda_{\bar{A}\bar{A}},\quad \lambda_{\bar{A}B},\quad\lambda_{BB} \geq 0. 
\label{condition_of_coefficients} \end{equation} Comparing Eqs.~(\ref{local_bond_Ham}) and (\ref{decomposed_Hamiltonian_1D}) (see Appendix~\ref{details}), the relations among the parameters are obtained as \begin{equation} V_{\perp}=\frac{U}{2},\quad V_{\parallel}=W,\quad X=t-W. \end{equation} The coefficients in Eq.~(\ref{decomposed_Hamiltonian_1D}) are identified as follows, \begin{align} \lambda_{\bar{A}\bar{A}} =&\frac{U}{2}-W+2t,\label{l_AA1}\\ \lambda_{\bar{A}B} =&-\frac{U}{2}+W,\label{l_AB1}\\ \lambda_{BB} =&\frac{U}{2}+3W-2t,\label{l_BB1}\\ \varepsilon_0 =&\frac{U}{2}. \end{align} From Eqs.~(\ref{condition_of_coefficients}), (\ref{l_AA1}), (\ref{l_AB1}) and (\ref{l_BB1}), we obtain the parameter space of the exact BN ground state as shown in Fig.~\ref{phase_diagrams_1D}. Note that the BN state appears only in the $t>0$ region. \begin{figure}[t] \begin{center} \input{fig2.tex} \caption{Phase diagram of the generalized Hubbard chain (\ref{local_bond_Ham}) in the $U/2t$-$W/t$ parameter space with $t>0$~\cite{Itoh-N-M,Nakamura-O-I}. The parameters are set as $X=t-W$, $V_{\parallel}=W$ and $V_{\perp}=U/2$. The shaded regions labeled by BN, FM and PS denote bond-N\'eel, ferromagnetic and phase-separated states, respectively.} \label{phase_diagrams_1D} \end{center} \end{figure} The properties of the BN state can be investigated by the matrix-product method. According to Ref.~\cite{Itoh-N-M}, both the charge-charge and the spin-spin correlation functions vanish except between nearest sites, which indicates the existence of the charge and the spin gaps. On the other hand, the bond-located spin correlation exhibits long-range order. We can also calculate the elementary excitation spectrum using the matrix-product method as a variational approach~\cite{Nakamura-O-I}. In the present one-dimensional model at half-filling, we can discuss not only the BN state but also the ferromagnetic (FM) and the phase-separated (PS) states. 
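The positivity conditions above are simple to scan numerically. The helper below is our sketch (function names and sample values are ours): it evaluates the three coefficients for $t>0$ and tests whether a point $(U,W)$ lies in the exact-BN region.

```python
def bn_coefficients(U, W, t):
    """Projection coefficients (lambda_AbarAbar, lambda_AbarB, lambda_BB)
    of the decomposed 1D bond Hamiltonian, for t > 0."""
    return (U / 2 - W + 2 * t,      # lambda_AbarAbar
            -U / 2 + W,             # lambda_AbarB
            U / 2 + 3 * W - 2 * t)  # lambda_BB

def in_bn_region(U, W, t):
    """The BN state is an exact ground state iff all coefficients are >= 0."""
    return all(c >= 0 for c in bn_coefficients(U, W, t))
```

For example, at $U/t=2$ the BN wedge requires $W/t\geq U/2t=1$ (from $\lambda_{\bar{A}B}\geq0$) and $W/t\leq U/2t+2=3$ (from $\lambda_{\bar{A}\bar{A}}\geq0$).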
The last term of Eq.~(\ref{decomposed_Hamiltonian_1D}) stabilizes the fully polarized FM state for $\lambda_{\bar{A}B}<0$. Similarly, the PS state, in which the system separates into a domain of doubly occupied sites and a vacuum, is stabilized when $\lambda_{\bar{A}\bar{A}}+\lambda_{BB}<0$, neglecting the surface energy. As shown in Fig.~\ref{phase_diagrams_1D}, the FM and the PS states appear in the $U/2t$-$W/t$ parameter space symmetrically in the positive- and in the negative-$U$ regions, respectively. This is consistent with the fact that the $W$ term is the ferromagnetic exchange interaction of the spins and the pseudo spins (\ref{eqn:W-term}), and the PS state is regarded as the FM state in the pseudo-spin space. The condition $W/t\geq 1/2$ for the FM and the PS phases is not obtained in the present argument. To obtain this condition, we need to introduce three types of $R$ operators~\cite{Nakamura-O-I}. The phase boundary between the BN and the FM states, $\lambda_{\bar{A}B}=0$, corresponds to the SU(2) symmetry of the spins, $V_{\parallel}=V_{\bot}$, so that the ground state is highly degenerate. The system undergoes a first-order phase transition at this level-crossing point. When $W/t=1$ ($X=0$), the system has particle-hole symmetry. At $(U/2t,W/t)=(-1,1)$, the system has the SU(2) symmetry in the pseudo-spin space, so that the BN, the PS and the $\eta$-pairing states are degenerate. The other lines which separate shaded and non-shaded regions in Fig.~\ref{phase_diagrams_1D} do not necessarily correspond to phase boundaries. \begin{figure} \begin{center} \includegraphics[width=7cm]{figDMRG_1D.eps} \caption{(a) Ground-state energy per site as a function of $W/t$ for the 1D model at $U/t=2$, obtained by the exact diagonalization and DMRG with finite-$L$ chains under periodic boundary conditions. Inset: enlarged figure around the lower level crossing. (b) Finite-size scaling analysis of the level-crossing points for the upper bound of the BN phase. 
(c) Ground-state energy per site as a function of $W/t$ for the 1D model at $U/t=2$ in the thermodynamic limit, obtained using DMRG data for $L=48$-$240$ chains under open boundary conditions.} \label{figDMRG_1D} \end{center} \end{figure} Therefore, to confirm the BN and FM states and to explore the phase boundaries, we calculate the ground-state energy by the numerical methods. In Fig.~\ref{figDMRG_1D}(a) the numerical results of the ground-state energy at $U=2t$ are plotted as a function of $W/t$, where periodic boundary conditions are applied. We obtain numerically the BN ground-state energy $\varepsilon_0=U/2\equiv\varepsilon_0({\rm BN})$ for $W \ge U/2$ and the FM ground-state energy $\varepsilon_0=2W\equiv\varepsilon_0({\rm FM})$ for $W \le U/2$. Thus, the BN-FM phase boundary coincides with the analytical result $W=U/2$. The ground-state energy deviates from $\varepsilon_0=\varepsilon_0({\rm BN})$ at some larger $W/t$ ($\equiv W_c/t$), which corresponds to the upper bound of the BN phase and is detected as a level crossing in the present finite-$L$ calculations. As seen in Fig.~\ref{figDMRG_1D}(a), the level-crossing point depends on the system length because the BN state is overstabilized in smaller-$L$ systems under periodic boundary conditions. Accordingly, the level-crossing point is shifted to lower $W/t$ as the system length $L$ increases. We perform a finite-size scaling of the level-crossing point using $L=26$-$50$ periodic systems in Fig.~\ref{figDMRG_1D}(b). Although the data points oscillate and a precise fit is not easy, a least-squares linear fit gives $W_c/t=6.37$ in the thermodynamic limit. This suggests that the upper bound of the BN phase extends to $W/t=6.37$, considerably beyond the analytical value $W/t=3$ in Fig.~\ref{phase_diagrams_1D}. The above overstabilization of the BN state can be avoided if we apply open boundary conditions. 
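The finite-size extrapolation used here is an ordinary linear least-squares fit of the crossing points against $1/L$. The sketch below illustrates the procedure on synthetic data placed exactly on a line with intercept $6.37$; the numbers are placeholders, not the paper's actual crossing points.

```python
import numpy as np

# synthetic finite-size level-crossing points W_c(L)/t, placed exactly on
# W_c(L) = W_c(inf) + b/L with W_c(inf) = 6.37 (placeholder data)
L_vals = np.array([26., 30., 34., 38., 42., 46., 50.])
Wc_vals = 6.37 + 9.5 / L_vals

# linear least-squares fit in the variable 1/L; the intercept is the
# thermodynamic-limit estimate W_c(inf)
slope, intercept = np.polyfit(1.0 / L_vals, Wc_vals, 1)
Wc_inf = intercept
```

With real data the oscillations mentioned in the text would scatter the points around the fitted line, so the intercept carries a fitting uncertainty.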
It enables us to pick up the {\it real} ground state and to calculate the energy more reliably for a given $L$. The ground-state energy extrapolated to the thermodynamic limit, using $L=48$-$240$ open systems, is plotted in Fig.~\ref{figDMRG_1D}(c). We find that the ground-state energy begins to deviate from $\varepsilon_0=\varepsilon_0({\rm BN})$ at $W/t=6.40$. This value agrees very well with that obtained with the periodic systems ($W/t=6.37$). \section{Kagom\'e lattice}\label{sec:2D} We consider the generalized Hubbard model on the Kagom\'e lattice at $1/3$-filling and zero magnetic field. In order to obtain an exact ground state, we need to include three-site terms (the $X'$ and $W'$ terms). The Hamiltonian is given by ${\cal H}=\sum_{\langle ijk\rangle\sigma}h_{ijk\sigma}$, where the summation $\langle ijk\rangle$ is taken over each unit trimer as shown in Fig.~\ref{fig:lattices}, \begin{align} \lefteqn{h_{ijk\sigma} =h_{ij\sigma}+h_{jk\sigma}+h_{ki\sigma}} \nonumber\\ & +W'(T_{ij\sigma}T_{jk\bar{\sigma}}+T_{jk\sigma}T_{ki\bar{\sigma}} +T_{ki\sigma}T_{ij\bar{\sigma}})\nonumber\\ & +X'(T_{ij\sigma}n_{k\bar{\sigma}}+T_{jk\sigma}n_{i\bar{\sigma}} +T_{ki\sigma}n_{j\bar{\sigma}}), \label{local_trim_Ham} \end{align} where $h_{ij\sigma}$ is the local bond Hamiltonian (\ref{local_bond_Ham}) with $z=2$, and $\bar{\sigma}$ denotes the opposite spin of $\sigma$. Now we define the following one-electron plaquette operators (see Fig.~\ref{fig:threeB}), \begin{align} A_{ijk\sigma}^{\dag} \equiv&{\textstyle\frac{1}{\sqrt{3}}} (c_{i\sigma}^{\dag}+c_{j\sigma}^{\dag}+c_{k\sigma}^{\dag}),\\ B_{ijk\sigma}^{\dag} \equiv&{\textstyle\frac{1}{\sqrt{3}}} (c_{i\sigma}^{\dag}+\omega c_{j\sigma}^{\dag} +\omega^2 c_{k\sigma}^{\dag}),\\ C_{ijk\sigma}^{\dag} \equiv&{\textstyle\frac{1}{\sqrt{3}}} (c_{i\sigma}^{\dag}+\omega^2c_{j\sigma}^{\dag} +\omega c_{k\sigma}^{\dag}), \end{align} where $\omega=\e^{\i 2\pi/3}$. 
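The algebra of these plaquette operators can again be verified with a small Jordan-Wigner representation, now on the three modes of one triangle (one spin species). The check below is our sketch: it confirms the canonical anticommutators and the completeness relation $n_A+n_B+n_C=N_{ijk}$, which follows from the unitarity of the discrete-Fourier coefficients $(1,\omega,\omega^2)/\sqrt{3}$.

```python
from functools import reduce
import numpy as np

a = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation
Z = np.diag([1., -1.])               # parity string
I = np.eye(2)
kron = lambda *ms: reduce(np.kron, ms)

# Jordan-Wigner fermions c_i, c_j, c_k on one triangle (8-dim space)
c = [kron(a, I, I), kron(Z, a, I), kron(Z, Z, a)]
w = np.exp(2j * np.pi / 3)
dag = lambda m: m.conj().T

# plaquette creation operators A^dag, B^dag, C^dag
Adag = (dag(c[0]) + dag(c[1]) + dag(c[2])) / np.sqrt(3)
Bdag = (dag(c[0]) + w * dag(c[1]) + w**2 * dag(c[2])) / np.sqrt(3)
Cdag = (dag(c[0]) + w**2 * dag(c[1]) + w * dag(c[2])) / np.sqrt(3)

def anti(x, y):
    """Anticommutator {x, y}."""
    return x @ y + y @ x

# plaquette occupations and the total density on the triangle
nA, nB, nC = (X @ dag(X) for X in (Adag, Bdag, Cdag))
N = sum(dag(ci) @ ci for ci in c)
```

Because the three coefficient vectors form an orthonormal basis, cross anticommutators such as $\{A,B^{\dag}\}$ vanish and the occupations resolve the total density.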
These operators on the same plaquette satisfy the anticommutation relations: \begin{displaymath} \{A_{ijk\sigma},A_{ijk\sigma'}^{\dag}\} =\{B_{ijk\sigma},B_{ijk\sigma'}^{\dag}\} =\{C_{ijk\sigma},C_{ijk\sigma'}^{\dag}\} =\delta_{\sigma\sigma'}, \end{displaymath} and all other anticommutators vanish. Note that $A_{ijk\sigma}^{\dag}|0\rangle$, $B_{ijk\sigma}^{\dag}|0\rangle$, and $C_{ijk\sigma}^{\dag}|0\rangle$ are chosen as eigenstates of the density, hopping, and current operators, \begin{align} N_{ijk\sigma}\equiv&n_{i\sigma}+n_{j\sigma}+n_{k\sigma},\\ T_{ijk\sigma}\equiv&T_{ij\sigma}+T_{jk\sigma}+T_{ki\sigma},\\ J_{ijk\sigma}\equiv&J_{ij\sigma}+J_{jk\sigma}+J_{ki\sigma},\\ J_{ij\sigma}\equiv& \i(c_{i\sigma}^{\dag}c_{j\sigma}^{\mathstrut}-\mbox{H.c.}). \end{align} The density operators in terms of the plaquette operators are \begin{align} n_{A\sigma}=&\frac{1}{3} \left(N_{ijk\sigma}+T_{ijk\sigma}\right),\\ n_{B\sigma}=&\frac{1}{6} (2N_{ijk\sigma}-T_{ijk\sigma}-\sqrt{3}J_{ijk\sigma}),\\ n_{C\sigma}=&\frac{1}{6} (2N_{ijk\sigma}-T_{ijk\sigma}+\sqrt{3}J_{ijk\sigma}). \end{align} Since we restrict our attention only to the three sites $i,j,k$ in a triangle, we drop these indices from the operators defined above. \begin{figure}[t] \input{fig4.tex} \caption{Three bases for the unit trimer of the Kagom\'e lattice.} \label{fig:threeB} \end{figure} \subsection{Plaquette-N\'eel state at $1/3$-filling} Using these relations, the Hamiltonian with the exact ground state is given in terms of the plaquette operators. We consider the following plaquette state at $1/3$-filling, \begin{equation} \ket{\Psi_{\sigma}}\equiv \prod_{\braket{ijk}\in\bigtriangleup} A_{ijk\sigma}^{\dag} \prod_{\braket{i'j'k'}\in\bigtriangledown} A_{i'j'k'\bar{\sigma}}^{\dag}\ket{0}, \label{Kagome_13_state} \end{equation} where $\braket{ijk}$ ($\braket{i'j'k'}$) runs over all up (down) triangles of the Kagom\'e lattice. As an extension of the BN state, we call this state the ``plaquette N\'eel'' (PN) state. 
In order to make (\ref{Kagome_13_state}) the ground state, the local Hamiltonian for this state is constructed as \begin{align} \lefteqn{ h_{ijk}-\varepsilon_0= \lambda_{\bar{A}\bar{A}} (1-n_{A\uparrow}) (1-n_{A\downarrow})} \label{decomposed_Hamiltonian_2D}\\ & +\lambda_{BB}n_{B\uparrow}n_{B\downarrow} +\lambda_{CC}n_{C\uparrow}n_{C\downarrow}\nonumber\\ &+\lambda_{\bar{A}B}\left\{ (1-n_{A\uparrow})n_{B\downarrow}+ n_{B\uparrow}(1-n_{A\downarrow})\right\}\nonumber\\ &+\lambda_{\bar{A}C}\left\{ (1-n_{A\uparrow})n_{C\downarrow}+ n_{C\uparrow}(1-n_{A\downarrow})\right\}\nonumber\\ &+\lambda_{BC} \left\{ n_{B\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{B\downarrow} \right\} \nonumber\\ =&\lambda_{\bar{A}\bar{A}} \label{decomposed_Hamiltonian_2D.2}\\ &+\sum_{\sigma} \left\{ -\lambda_{\bar{A}\bar{A}}n_{A\sigma} +\lambda_{\bar{A}B}n_{B\sigma} +\lambda_{\bar{A}C}n_{C\sigma} \right\}\nonumber\\ &+\lambda_{\bar{A}\bar{A}} n_{A\uparrow}n_{A\downarrow} +\lambda_{BB}n_{B\uparrow}n_{B\downarrow} +\lambda_{CC}n_{C\uparrow}n_{C\downarrow}\nonumber\\ &-\lambda_{\bar{A}B} (n_{A\uparrow}n_{B\downarrow}+n_{B\uparrow}n_{A\downarrow}) \nonumber\\ & -\lambda_{\bar{A}C} (n_{A\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{A\downarrow}) \nonumber\\ & +\lambda_{BC} (n_{B\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{B\downarrow}), \nonumber \end{align} with positive $\lambda_{\mu\nu}$. Here we consider the case that $\lambda_{BB}=\lambda_{CC}$ and $\lambda_{\bar{A}B}=\lambda_{\bar{A}C}$, assuming the time-reversal symmetry of the Hamiltonian. 
Then we have \begin{align} \lefteqn{ h_{ijk}-\varepsilon_0 =\frac{1}{3}(\lambda_{\bar{A}\bar{A}}+\lambda_{\bar{A}B}) h_{t}} \label{decomposed_Hamiltonian_2D.3}\\ & +\frac{1}{9}\left(\lambda_{\bar{A}\bar{A}} -4\lambda_{\bar{A}B}+4\lambda_{BB}\right) (2h_{U}+h_{V_{\perp}})\nonumber\\ & +\frac{1}{9}\left(\lambda_{\bar{A}\bar{A}} +2\lambda_{\bar{A}B}+\lambda_{BB}\right) (h_{V_{\parallel}}+h_{W}+h_{W'})\nonumber\\ & +\frac{1}{9}\left(\lambda_{\bar{A}\bar{A}} -\lambda_{\bar{A}B}-2\lambda_{BB}\right)(h_{X}+h_{X'})\nonumber\\ & +\frac{1}{9}\left(-4\lambda_{\bar{A}\bar{A}}+4\lambda_{\bar{A}B} -\lambda_{BB}\right)\sum_{\sigma}N_{ijk\sigma} +\lambda_{\bar{A}\bar{A}},\nonumber \end{align} where $h_t$, $h_U$, $\cdots$, $h_{X'}$ are defined in Appendix~\ref{details}. For $1/3$-filling, the total density and the number of triangles $N_{\rm tr}$ are related by $\sum_{\braket{ijk},\sigma}N_{ijk\sigma}=2N_{\rm tr}$, and the number of lattice sites is $L=3N_{\rm tr}/2$, so that the ground-state energy per site is identified as \begin{equation} \varepsilon_0=\frac{1}{9} \left(\lambda_{\bar{A}\bar{A}}+8\lambda_{\bar{A}B}-2\lambda_{BB}\right). \end{equation} The coefficients of the projection operators are related to the parameters as \begin{align} \left[\begin{array}{c} \lambda_{\bar{A}\bar{A}}\\ \lambda_{\bar{A}B}\\ \lambda_{BB} \end{array}\right] =&\left[\begin{array}{ccc}1&4&4\\ -1&2&-1\\ 1&1&-2 \end{array}\right] \left[\begin{array}{c}U/2\\W\\X\end{array}\right]. \label{matrix_relation_PN} \end{align} Using the condition of the hopping in Eq.~(\ref{decomposed_Hamiltonian_2D.3}), we have \begin{align} \lambda_{\bar{A}\bar{A}}=&\frac{U}{2}-4W+4t,\\ \lambda_{\bar{A}B}=&-\frac{U}{2}+4W-t,\\ \lambda_{BB}=&\frac{U}{2}+5W-2t. 
\end{align} Since all these coefficients should be positive, the condition for the exact PN ground state is given as follows, \begin{gather} W\leq\frac{U}{8}+t,\quad W\geq\frac{U}{8}+\frac{t}{4},\quad W\geq -\frac{U}{10}+\frac{2t}{5},\nonumber\\ V_{\perp}=\frac{U}{2},\quad V_{\parallel}=W=W',\quad X=X'=t-2W,\nonumber \end{gather} and the ground-state energy per site is \begin{equation} \varepsilon_0=\frac{1}{3}(U-4W). \label{PNenergy_n13} \end{equation} The phase diagram for the exact PN state is bounded by the three lines given by $\lambda_{\bar{A}\bar{A}}>0$, $\lambda_{\bar{A}B}>0$, and $\lambda_{BB}>0$, as shown in Fig.~\ref{fig:PNpd}(a). \begin{figure}[t] \input{fig5.tex} \caption{Phase diagrams of the generalized Hubbard model on the Kagom\'e lattice, in the $U/|t|$-$W/|t|$ parameter space for (a) $t>0$ at $1/3$-filling~\cite{Nakamura-I_2004} and (b) $t<0$ at $2/3$-filling, respectively. The shaded regions labeled by PN denote the plaquette N\'eel state.} \label{fig:PNpd} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm]{figDMRG_kagome_n13.eps} \caption{Ground-state energy per site as a function of $W/t$ for the $1/3$-filling Kagom\'e model with $U/t=2$ and $t>0$, obtained by the numerical methods using (a) anisotropic and (b) isotropic clusters (see Appendix B). The energy of the PN state [Eq.~(\ref{PNenergy_n13})] is subtracted. Insets: enlarged figures around the lower level crossing. } \label{figDMRG_kagome_n13} \end{center} \end{figure} In Fig.~\ref{figDMRG_kagome_n13}, the numerical results of the ground-state energy for the $1/3$-filling Kagom\'e model at $U/t=2$ with $t>0$ are plotted as a function of $W/t$. The energy of the PN state $\varepsilon_0=(U-4W)/3\equiv\varepsilon_0({\rm PN})$ is subtracted, so that a region with $\varepsilon_0-\varepsilon_0({\rm PN})=0$ corresponds to the PN phase. 
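As a consistency check (ours, not in the paper), the matrix relation between the projection coefficients and $(U/2, W, X)$, combined with the substitution $X=t-2W$, can be verified to reproduce the closed-form coefficients quoted above.

```python
import numpy as np

def pn_coefficients(U, W, t):
    """Coefficients (lambda_AA, lambda_AB, lambda_BB) of the 1/3-filling
    PN decomposition, from the matrix relation with X = t - 2W."""
    M = np.array([[1., 4., 4.],
                  [-1., 2., -1.],
                  [1., 1., -2.]])
    X = t - 2.0 * W
    return M @ np.array([U / 2.0, W, X])

# illustrative parameter point (values arbitrary)
U, W, t = 2.0, 1.0, 1.0
l_aa, l_ab, l_bb = pn_coefficients(U, W, t)
```

The same matrix, with different hopping conditions, also underlies the $2/3$-filling coefficients derived below.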
As seen in Figs.~\ref{figDMRG_kagome_n13}(a) and (b), a robust range with $\varepsilon_0-\varepsilon_0({\rm PN})=0$ exists for all clusters used. We find a deviation from $\varepsilon_0-\varepsilon_0({\rm PN})=0$ below a level-crossing point $W/t=0.5$, which is independent of the cluster shape and size [see insets of Figs.~\ref{figDMRG_kagome_n13}(a) and (b)]. This $W/t$ value agrees perfectly with the analytical result for the lower bound of the PN phase, given by $\lambda_{\bar{A}B}=0$. In contrast, the level-crossing point related to the upper bound depends on the cluster. As in the 1D BN case, the PN state is presumably overstabilized in small clusters. However, the data are not sufficient to perform a finite-size scaling analysis, and this is left for future work. \subsection{Plaquette-N\'eel state at $2/3$-filling} We consider the following plaquette N\'eel state at $2/3$-filling, given by \begin{equation} \ket{\Psi_{\sigma}}\equiv \prod_{\braket{ijk}\in\bigtriangleup} C_{ijk\sigma}^{\dag} B_{ijk\sigma}^{\dag} \prod_{\braket{i'j'k'}\in\bigtriangledown} C_{i'j'k'\bar{\sigma}}^{\dag} B_{i'j'k'\bar{\sigma}}^{\dag} \ket{0}, \label{Kagome_23_state} \end{equation} where $\braket{ijk}$ ($\braket{i'j'k'}$) runs over all up (down) triangles of the Kagom\'e lattice, and \begin{equation} C_{ijk\sigma}^{\dag}B_{ijk\sigma}^{\dag} ={\textstyle\frac{\i}{\sqrt{3}}} (c_{i\sigma}^{\dag}c_{j\sigma}^{\dag} +c_{j\sigma}^{\dag}c_{k\sigma}^{\dag} +c_{k\sigma}^{\dag}c_{i\sigma}^{\dag}). 
\end{equation} The Hamiltonian for this state is constructed as \begin{align} \lefteqn{ h_{ijk}-\varepsilon_0= \lambda_{AA} n_{A\uparrow} n_{A\downarrow}}\\ & +\lambda_{\bar{B}\bar{B}}(1- n_{B\uparrow})(1- n_{B\downarrow}) +\lambda_{\bar{C}\bar{C}}(1- n_{C\uparrow})(1- n_{C\downarrow})\nonumber\\ &+\lambda_{A\bar{B}}\left\{ n_{A\uparrow}(1-n_{B\downarrow})+ (1-n_{B\uparrow})n_{A\downarrow}\right\}\nonumber\\ &+\lambda_{A\bar{C}}\left\{ n_{A\uparrow}(1-n_{C\downarrow})+ (1-n_{C\uparrow})n_{A\downarrow}\right\}\nonumber\\ &+\lambda_{\bar{B}\bar{C}} \left\{ (1- n_{B\uparrow})(1- n_{C\downarrow}) +(1- n_{C\uparrow})(1- n_{B\downarrow}) \right\}\nonumber\\ =&\lambda_{\bar{B}\bar{B}}+\lambda_{\bar{C}\bar{C}} +2\lambda_{\bar{B}\bar{C}}\\ & +\sum_{\sigma} \{ (\lambda_{A\bar{B}}+\lambda_{A\bar{C}})n_{A\sigma} -(\lambda_{\bar{B}\bar{B}}+\lambda_{\bar{B}\bar{C}})n_{B\sigma}\nonumber\\ & -(\lambda_{\bar{C}\bar{C}}+\lambda_{\bar{B}\bar{C}})n_{C\sigma} \}\nonumber\\ &+\lambda_{AA} n_{A\uparrow}n_{A\downarrow} +\lambda_{\bar{B}\bar{B}}n_{B\uparrow}n_{B\downarrow} +\lambda_{\bar{C}\bar{C}}n_{C\uparrow}n_{C\downarrow}\nonumber\nonumber\\ &-\lambda_{A\bar{B}} (n_{A\uparrow}n_{B\downarrow}+n_{B\uparrow}n_{A\downarrow})\nonumber\\ & -\lambda_{A\bar{C}} (n_{A\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{A\downarrow})\nonumber\\ & +\lambda_{\bar{B}\bar{C}} (n_{B\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{B\downarrow}). 
\nonumber \end{align} For $\lambda_{\bar{B}\bar{B}}=\lambda_{\bar{B}\bar{C}} =\lambda_{\bar{C}\bar{C}}$, and $\lambda_{A\bar{B}}=\lambda_{A\bar{C}}$, assuming the time-reversal symmetry of the Hamiltonian, we have \begin{align} \lefteqn{ h_{ijk}-\varepsilon_0 =\frac{2}{3}(-\lambda_{A\bar{B}}-\lambda_{\bar{B}\bar{B}}) h_{t}} \label{decomposed_Hamiltonian_2D.5}\\ & +\frac{1}{9}\left(\lambda_{AA}-4\lambda_{A\bar{B}} +4\lambda_{\bar{B}\bar{B}}\right)(2h_{U}+h_{V_{\perp}})\nonumber\\ & +\frac{1}{9}\left(\lambda_{AA}+2\lambda_{A\bar{B}} +\lambda_{\bar{B}\bar{B}}\right) (h_{V_{\parallel}}+h_{W}+h_{W'})\nonumber\\ & +\frac{1}{9}\left(\lambda_{AA}-\lambda_{A\bar{B}} -2\lambda_{\bar{B}\bar{B}}\right)(h_{X}+h_{X'})\nonumber\\ & +\frac{1}{9}\left(- \lambda_{AA}+4\lambda_{A\bar{B}} -13\lambda_{\bar{B}\bar{B}}\right) \sum_{\sigma}N_{ijk\sigma} +4\lambda_{\bar{B}\bar{B}}.\nonumber \end{align} For $2/3$-filling, the density operator and the number of the triangles $N_{\rm tr}$ is related as $\sum_{\braket{ijk},\sigma}N_{ijk\sigma}=4N_{\rm tr}$, and the number of lattice sites is $L=3N_{\rm tr}/2$, so that the ground-state energy per site is identified as \begin{equation} \varepsilon_0= \frac{8}{27}(\lambda_{AA}-4\lambda_{A\bar{B}}+4\lambda_{\bar{B}\bar{B}}). \end{equation} Since the relation between $(\lambda_{AA}, \lambda_{A\bar{B}}, \lambda_{\bar{B}\bar{B}})$ and $(U, W, X)$ is given by the same matrix as that of (\ref{matrix_relation_PN}), we identify the coefficients of the projection operators, using the condition for the hopping in Eq.~(\ref{decomposed_Hamiltonian_2D.5}), as \begin{align} \lambda_{AA}=&\frac{U}{2}+8W+2t,\\ \lambda_{A\bar{B}}=&-\frac{U}{2}+W-\frac{t}{2},\\ \lambda_{\bar{B}\bar{B}}=&\frac{U}{2}-W-t. 
\end{align} Thus the condition of the exact PN ground state is given as follows, \begin{gather} W\geq-\frac{U}{16}-\frac{t}{4},\quad W\geq \frac{U}{2}+\frac{t}{2},\quad W\leq \frac{U}{2}-t,\nonumber\\ V_{\perp}=\frac{U}{2},\quad V_{\parallel}=W=W',\quad X=X'=\frac{t}{2}+W. \end{gather} The ground state energy per site is \begin{equation} \varepsilon_0=\frac{4}{3}U. \end{equation} The phase diagram for the exact plaquette N\'eel state is surrounded by three boundaries given by $\lambda_{AA}>0$, $\lambda_{A\bar{B}}>0$, and $\lambda_{\bar{B}\bar{B}}>0$, as shown in Fig.~\ref{fig:PNpd}(b). \begin{figure} \begin{center} \includegraphics[width=7cm]{figDMRG_kagome_n23.eps} \caption{Ground-state energy per site as a function of $W/|t|$ for the $2/3$-filling Kagom\'e model with $U/|t|=2$ and $t<0$, obtained by the numerical methods using (a) anisotropic and (b) isotropic clusters (see Appendix B). Insets: similar figures for a wider range of $W/|t|$. } \label{figDMRG_kagome_n23} \end{center} \end{figure} The numerical results of the ground-state energy for the $2/3$-filling Kagom\'e model at $U/|t|=2$ with $t<0$ are plotted as a function of $W/|t|$ in Fig.~\ref{figDMRG_kagome_n23}. We find that the system has the PN state energy $\varepsilon_0({\rm PN})=4U/3$ in a wide range of $W/|t|$. The energy deviation from $\varepsilon_0=\varepsilon_0({\rm PN})$, indicating a transition to another phase, is clearly seen. Although the level crossing is not very sharp, we can approximately estimate the transition point $W/|t|\sim0.4$ for all clusters used. This value is close to but slightly smaller than the analytical result for the lower bound of the PN phase, $W/|t|=U/(2|t|)-1/2=0.5$, given by $\lambda_{A\bar{B}}=0$. Let us now turn to the upper bound of the PN phase, which is more puzzling. In contrast to the case of the $1/3$-filling Kagom\'e lattice with $t>0$, the PN state seems to remain the ground state up to $W/|t|=100$ in the present calculations with periodic clusters.
To resolve this issue, further calculations are required. \subsection{Ferromagnetism at $1/2$-filling} We consider a ferromagnetic (FM) state at half-filling where each triangle is occupied by three particles with the same spin, \begin{equation} \ket{\Psi_{\sigma}} \equiv \prod_{\braket{ijk}\in\bigtriangleup} C_{ijk\sigma}^{\dag}B_{ijk\sigma}^{\dag}A_{ijk\sigma}^{\dag}\ket{0}. \end{equation} where \begin{equation} C_{ijk\sigma}^{\dag}B_{ijk\sigma}^{\dag}A_{ijk\sigma}^{\dag} =\i c_{i\sigma}^{\dag}c_{j\sigma}^{\dag}c_{k\sigma}^{\dag}. \end{equation} The Hamiltonian for this state is constructed as \begin{align} \lefteqn{ h_{ijk}-\varepsilon_0= \lambda_{\bar{A}\bar{A}} (1-n_{A\uparrow}) (1-n_{A\downarrow})}\\ &+\lambda_{\bar{B}\bar{B}}(1- n_{B\uparrow})(1- n_{B\downarrow})\nonumber\\ & +\lambda_{\bar{C}\bar{C}}(1- n_{C\uparrow})(1- n_{C\downarrow})\nonumber\\ &+\lambda_{\bar{A}\bar{B}}\left\{ (1-n_{A\uparrow})(1-n_{B\downarrow})+ (1-n_{B\uparrow})(1-n_{A\downarrow})\right\}\nonumber\\ &+\lambda_{\bar{A}\bar{C}}\left\{ (1-n_{A\uparrow})(1-n_{C\downarrow})+ (1-n_{C\uparrow})(1-n_{A\downarrow})\right\}\nonumber\\ &+\lambda_{\bar{B}\bar{C}} \left\{ (1-n_{B\uparrow})(1-n_{C\downarrow}) +(1-n_{C\uparrow})(1-n_{B\downarrow}) \right\}\nonumber\\ =& \lambda_{\bar{A}\bar{A}} +\lambda_{\bar{B}\bar{B}} +\lambda_{\bar{C}\bar{C}} +2(\lambda_{\bar{A}\bar{B}}+\lambda_{\bar{B}\bar{C}} +\lambda_{\bar{A}\bar{C}})\\ &+\sum_{\sigma} \{ -(\lambda_{\bar{A}\bar{A}}+\lambda_{\bar{A}\bar{B}} +\lambda_{\bar{A}\bar{C}})n_{A\sigma}\nonumber\\ & -(\lambda_{\bar{B}\bar{B}}+\lambda_{\bar{A}\bar{B}} +\lambda_{\bar{B}\bar{C}})n_{B\sigma}\nonumber\\ & -(\lambda_{\bar{C}\bar{C}}+\lambda_{\bar{A}\bar{C}} +\lambda_{\bar{B}\bar{C}})n_{C\sigma} \}\nonumber\\ &+\lambda_{\bar{A}\bar{A}} n_{A\uparrow}n_{A\downarrow} +\lambda_{\bar{B}\bar{B}}n_{B\uparrow}n_{B\downarrow} +\lambda_{\bar{C}\bar{C}}n_{C\uparrow}n_{C\downarrow}\nonumber\\ &+\lambda_{\bar{A}\bar{B}} 
(n_{A\uparrow}n_{B\downarrow}+n_{B\uparrow}n_{A\downarrow})\nonumber\\ & +\lambda_{\bar{A}\bar{C}} (n_{A\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{A\downarrow})\nonumber\\ & +\lambda_{\bar{B}\bar{C}} (n_{B\uparrow}n_{C\downarrow}+n_{C\uparrow}n_{B\downarrow}). \nonumber \end{align} Under time-reversal symmetry, where $\lambda_{\bar{B}\bar{B}}=\lambda_{\bar{B}\bar{C}}=\lambda_{\bar{C}\bar{C}}$ and $\lambda_{\bar{A}\bar{B}}=\lambda_{\bar{A}\bar{C}}$, we have \begin{align} \lefteqn{ h_{ijk}-\varepsilon_0 =\frac{1}{3}(\lambda_{\bar{A}\bar{A}}+\lambda_{\bar{A}\bar{B}} -2\lambda_{\bar{B}\bar{B}})h_{t}}\label{Ham_ferro.3}\\ & +\frac{1}{9}(\lambda_{\bar{A}\bar{A}}+4\lambda_{\bar{A}\bar{B}} +4\lambda_{\bar{B}\bar{B}})(2h_{U}+h_{V_{\perp}})\nonumber\\ & +\frac{1}{9}(\lambda_{\bar{A}\bar{A}}-2\lambda_{\bar{A}\bar{B}} +\lambda_{\bar{B}\bar{B}})(h_{V_{\parallel}}+h_{W}+h_{W'})\nonumber\\ & +\frac{1}{9}\left(\lambda_{\bar{A}\bar{A}}+\lambda_{\bar{A}\bar{B}} -2\lambda_{\bar{B}\bar{B}}\right)(h_{X}+h_{X'})\nonumber\\ & -\frac{1}{9}\left(4\lambda_{\bar{A}\bar{A}}+10\lambda_{\bar{A}\bar{B}} +13\lambda_{\bar{B}\bar{B}}\right)\sum_{\sigma}N_{ijk\sigma}\nonumber\\ &+\lambda_{\bar{A}\bar{A}}+4\lambda_{\bar{A}\bar{B}} +4\lambda_{\bar{B}\bar{B}}. \nonumber \end{align} At half-filling, the density operator and the number of triangles $N_{\rm tr}$ are related as $\sum_{\braket{ijk},\sigma}N_{ijk\sigma}=3N_{\rm tr}$, and the number of lattice sites is $L=3N_{\rm tr}/2$, so that the ground-state energy per site is identified as \begin{equation} \varepsilon_0= \frac{2}{9}(\lambda_{\bar{A}\bar{A}}-2\lambda_{\bar{A}\bar{B}} +\lambda_{\bar{B}\bar{B}}). \end{equation} The parameters are related as \begin{align} \left[\begin{array}{c} \lambda_{\bar{A}\bar{A}}\\ \lambda_{\bar{A}\bar{B}}\\ \lambda_{\bar{B}\bar{B}} \end{array}\right] =&\left[\begin{array}{ccc}1&4&4\\ 1&-2&1\\ 1&1&-2 \end{array}\right] \left[\begin{array}{c}U/2\\W\\X\end{array}\right].
\end{align} Then we have \begin{align} \lambda_{\bar{A}\bar{A}}=&\frac{U}{2}+4W+\frac{4}{3}t,\\ \lambda_{\bar{A}\bar{B}}=&\frac{U}{2}-2W+\frac{t}{3},\\ \lambda_{\bar{B}\bar{B}}=&\frac{U}{2}+W-\frac{2}{3}t. \end{align} Thus the condition of the exact ferromagnetic ground state is given as follows, \begin{gather} W\geq-\frac{U}{8}-\frac{t}{3},\quad W\leq \frac{U}{4}+\frac{t}{6},\quad W\geq-\frac{U}{2}+\frac{2}{3}t,\nonumber\\ V_{\perp}=\frac{U}{2},\quad V_{\parallel}=W=W',\quad X=X'=\frac{t}{3}. \end{gather} The ground state energy per site is \begin{equation} \varepsilon_0=2W. \end{equation} This is consistent with the fact that in the fully ferromagnetic state, only the $V_{\parallel}$ term contributes to the energy. The condition on the hopping in Eq.~(\ref{Ham_ferro.3}) means that $t$ may take both positive and negative values. As shown in Fig.~\ref{fig:FMpd}(a), for positive $t$ the exact ferromagnetic ground state is surrounded by three lines, while for negative $t$ [Fig.~\ref{fig:FMpd}(b)] only two lines remain. \begin{figure}[t] \input{fig8.tex} \caption{ Phase diagrams of the generalized Hubbard model on the Kagom\'e lattice, in the $U/|t|$-$W/|t|$ parameter space with $V_{\parallel}=W=W'$, $V_{\perp}=U/2$, $X=X'=t/3$. (a) and (b) correspond to the case of $t>0$ and $t<0$, respectively. The shaded regions labeled by FM denote the exact ferromagnetic ground state. }\label{fig:FMpd} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm]{figDMRG_kagome_n12.eps} \caption{Numerical results of the ground-state energy per site as a function of $W/|t|$ for the $1/2$-filling Kagom\'e model at $U/|t|=3$ with (a) $t>0$ and (b) $t<0$, where the numbers of spin-up and spin-down electrons are kept as $N_\uparrow=N_\downarrow$ or $N_\uparrow=N_\downarrow+1$. The energy of the FM state $\varepsilon_0({\rm FM})=2W$ is subtracted.
} \label{figDMRG_kagome_n12} \end{center} \end{figure} In Fig.~\ref{figDMRG_kagome_n12} the numerical results of the ground-state energy for the $1/2$-filling Kagom\'e model at $U/|t|=3$ are plotted as a function of $W/|t|$, where the numbers of spin-up and spin-down electrons are kept as close as possible, namely, $N_\uparrow-N_\downarrow=0$ and $|N_\uparrow-N_\downarrow|=1$ for even- and odd-site clusters, respectively. Since the ground-state energy of the FM state $\varepsilon_0=2W\equiv\varepsilon_0({\rm FM})$ is subtracted in Fig.~\ref{figDMRG_kagome_n12}, the FM phase is indicated by a region with a positive value of the numerical energy $\varepsilon_0-\varepsilon_0({\rm FM})>0$. The finite-size effect seems to be much smaller than that in the PN state. For both positive and negative $t$, the FM phase appears at $-1 \lesssim W/|t|\lesssim 1$, though the region for $t<0$ may be slightly narrower than that for $t>0$. The FM phase appears to be more extended than the analytical result shown in Fig.~\ref{fig:FMpd}. \section{Entanglement entropy} \label{sec:EE} In this section we consider the entanglement entropy (EE)~\cite{Horodecki-H-H-H} of the system discussed above. When we divide the normalized wave function of the system into two regions A and B as \begin{equation} \ket{\Psi}=\sum_{nm}\Lambda_{nm} \ket{\Psi_n^{\rm A}}\otimes\ket{\Psi_m^{\rm B}}, \end{equation} the EE is given by \begin{equation} S^{\rm A} =-\mathrm{Tr}_{\rm A} \left[ \hat{\rho}_{\rm A}\log\hat{\rho}_{\rm A}\right], \end{equation} with the reduced density matrix \begin{equation} \hat{\rho}_{\rm A}=\sum_{nm}(\Lambda \Lambda^T)_{nm} \ket{\Psi^{\rm A}_n}\bra{\Psi^{\rm A}_m}, \label{RDM} \end{equation} where $\Lambda^T$ is the transpose of $\Lambda$.
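The construction above translates directly into a short numerical routine; the following sketch (using NumPy, with a function name of our choosing) evaluates $S^{\rm A}$ from a given coefficient matrix $\Lambda$:

```python
import numpy as np

def entanglement_entropy(Lam):
    """S^A = -Tr[rho_A log rho_A] with rho_A = Lambda Lambda^T,
    computed from the eigenvalues of Lambda Lambda^T."""
    lam = np.linalg.eigvalsh(Lam @ Lam.T)
    lam = lam[lam > 1e-12]            # drop numerically zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))
```

Since the nonzero eigenvalues of $\Lambda\Lambda^T$ and $\Lambda^T\Lambda$ coincide, the result does not depend on which of the two regions is traced out.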
For the BN state in 1D, $\ket{\Psi_n^{\rm A}}$ and $\ket{\Psi_n^{\rm B}}$ (see Fig.~\ref{fig:dividing}(a)) are given as \begin{align} \ket{\Psi_1^{\rm A}} =&X_{\rm A}^{\dag} c_{i\sigma}^\dag \ket{0}_{\rm A},\\ \ket{\Psi_2^{\rm A}} =&X_{\rm A}^{\dag}\ket{0}_{\rm A},\\ \ket{\Psi_1^{\rm B}} =&c_{j\sigma}^\dag X_{\rm B}^{\dag} \ket{0}_{\rm B},\\ \ket{\Psi_2^{\rm B}} =&X_{\rm B}^{\dag} \ket{0}_{\rm B}, \end{align} where $X_{\rm A}^{\dag}$ and $X_{\rm B}^{\dag}$ denote normalized operators that create the common parts of A and B regions, respectively. Then we get \begin{equation} \Lambda=\frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right] ,\quad \Lambda \Lambda^T= \frac{1}{2} \left[ \begin{array}{cc} 1 & 0\\ 0 & 1 \end{array} \right]. \end{equation} The EE is easily obtained by using the eigenvalues $\lambda_i$ of the matrix $\Lambda\Lambda^T$ as \begin{equation} S^{\rm A}=-\sum_{i}\lambda_i\log\lambda_i=\log 2. \end{equation} This result is for an open-boundary system where the two regions are cut at one bond. Therefore, the EE for the periodic-boundary system is $S^{\rm A}=\log 4$. These results can also be obtained by using the matrix product representation of the wave function~\cite{Itoh-N-M,Nakamura-O-I}. \begin{figure}[t] \input{fig10.tex} \caption{Patterns to cut the systems into two regions A and B to calculate the entanglement entropy (EE) for (a) the 1D chain and (b) the Kagom\'e lattice, respectively.}\label{fig:dividing} \end{figure} For the PN state in the Kagom\'e lattice with $1/3$-filling, we consider, for simplicity, a case in which the two regions A and B are connected via a single triangle, as shown in Fig.~\ref{fig:dividing}(b).
Then $\ket{\Psi_n^{\rm A}}$ and $\ket{\Psi_n^{\rm B}}$ are given as \begin{align} \ket{\Psi_1^{\rm A}} =&X_{\rm A}^{\dag} c_{i\sigma}^\dag \ket{0}_{\rm A},\\ \ket{\Psi_2^{\rm A}} =&X_{\rm A}^{\dag}\ket{0}_{\rm A},\\ \ket{\Psi_1^{\rm B}} =&c_{j\sigma}^\dag X_{\rm B}^{\dag} \ket{0}_{\rm B},\\ \ket{\Psi_2^{\rm B}} =&c_{k\sigma}^\dag X_{\rm B}^{\dag} \ket{0}_{\rm B},\\ \ket{\Psi_3^{\rm B}} =&X_{\rm B}^{\dag} \ket{0}_{\rm B}. \end{align} In this case, we get the following matrices: \begin{equation} \Lambda=\frac{1}{\sqrt{3}} \left[ \begin{array}{ccc} 1 & 1& 0 \\ 0 & 0 & 1 \end{array} \right] ,\quad \Lambda \Lambda^T= \frac{1}{3} \left[ \begin{array}{cc} 2 & 0\\ 0 & 1 \end{array} \right]. \end{equation} If we cut the triangle the opposite way, we should consider the situation A$\leftrightarrow$B. In this case the matrix in Eq.~(\ref{RDM}) becomes \begin{equation} \Lambda^T \Lambda= \frac{1}{3} \left[ \begin{array}{ccc} 1 & 1& 0 \\ 1 & 1& 0 \\ 0 & 0 & 1 \end{array} \right]. \end{equation} The eigenvalues of the matrix $\Lambda^T \Lambda$ are \begin{equation} \lambda_i=\left\{\frac{1}{3},\frac{2}{3},0\right\}. \end{equation} Thus the value of the EE does not depend on the way the triangle is cut, so we get the EE in the general case as \begin{equation} S^{\rm A} =N_{\bigtriangleup}\underbrace{[\log3-(2/3)\log2]}_{s_0}, \end{equation} where $s_0=0.636514168\cdots$ and $N_{\bigtriangleup}$ denotes the number of triangles along the cutting lines. This means that the EE obeys the area law. The EE for the PN state at $2/3$-filling takes the same value as that at $1/3$-filling, via the particle-hole transformation. For the FM state at $1/2$-filling, the EE becomes zero. \begin{figure} \begin{center} \includegraphics[width=7cm]{figDMRG_EE.eps} \caption{(a) Isotropic periodic and (b) torus clusters of the Kagom\'e lattice. The bold (cutting) lines are examples of the system division.
} \label{fig:DMRG_EE} \end{center} \end{figure} The value of the EE can easily be verified numerically using the DMRG method. For the BN state in 1D, the EE is $S^{\rm A}=1.38629437\approx\log 4$, which does not depend on the lengths of regions A and B in a periodic chain. For the PN state in the $1/3$-filled Kagom\'e lattice, some examples of the cutting lines are shown in Fig.~\ref{fig:DMRG_EE}. We obtain $S^{\rm A}=3.182570841\approx5s_0$ and $S^{\rm A}=5.092113346\approx8s_0$ for the periodic cluster in Fig.~\ref{fig:DMRG_EE}(a); $S^{\rm A}=4.455599178\approx7s_0$ for the torus cluster in Fig.~\ref{fig:DMRG_EE}(b). Thus, we have confirmed that the EE is proportional to the number of triangles on the cutting lines, i.e., $S^{\rm A}=N_{\bigtriangleup}s_0$. \section{Summary and discussion} \label{sec:summary} In summary, we have discussed exact ground states of the generalized Hubbard model based on the projection operator method in multicomponent systems. The Hamiltonian with the exact ground state can be obtained when the lattice has a bipartite structure in terms of corner-sharing unit plaquettes. We have applied this method to the 1D chain and the Kagom\'e lattice, and obtained parameter regions of the exact ground states for several fillings. We have also calculated the entanglement entropy (EE). In addition, we have performed numerical calculations based on exact diagonalization and the density-matrix renormalization group, and confirmed the results. In the 1D chain, the exact ground state is the bond N\'eel (BN) state, in which the system has N\'eel order on the bonds~\cite{Itoh-N-M,Nakamura-O-I}. This corresponds to the staggered dimer states in the spin-$1/2$ two-leg ladder model with four-spin exchanges~\cite{Kolezhuk-M}. We have numerically confirmed the existence of the exact BN ground state. The BN phase may extend beyond the region given by the analytical argument.
The ferromagnetic (FM) and BN phase boundary agrees perfectly between the analytical and numerical results. In the Kagom\'e lattice, we have discussed the exact plaquette N\'eel (PN) state at 1/3-filling~\cite{Nakamura-I_2004}, and also the PN state at 2/3-filling as well as the FM state at half-filling. According to the numerical calculations, each exact state seems to be stabilized in a wider region than that suggested by the analytical result. However, further calculations are required to corroborate this. For the EE, we have confirmed perfect agreement between the analytical and the numerical calculations. In addition to the PN state, we may introduce other exact plaquette ground states. For example, the following state \begin{equation} \ket{\Psi_{\sigma}}\equiv \prod_{\braket{ijk}\in\bigtriangleup} B_{ijk\sigma}^{\dag} \prod_{\braket{i'j'k'}\in\bigtriangledown} C_{i'j'k'\bar{\sigma}}^{\dag}\ket{0}, \label{Kagome_topo_state} \end{equation} appears to be a ``topological state'', since it is a local spin-current state with time-reversal symmetry~\cite{Kane-M}. In order to stabilize this state, we have to extend our model Hamiltonian to include the current terms $J_{ijk\sigma}$. \section{Acknowledgments} M.~N. acknowledges the Visiting Researcher's Program of the Institute for Solid State Physics, the University of Tokyo, and the Max-Planck-Institut f\"ur Physik komplexer Systeme, Dresden, where this work was initiated. M.~N. is supported by JSPS KAKENHI Grant Number 17K05580. S.~N. acknowledges support from the SFB 1143 of the Deutsche Forschungsgemeinschaft. S.~N. would like to thank U. Nitzsche for technical assistance.
\subsection{Data Collection and Filtering} \label{data-collect} We collected \texttt{FiSCU}\xspace from various online study guides such as \texttt{shmoop},\footnote{\url{https://www.shmoop.com/study-guides/literature}} \texttt{SparkNotes},\footnote{\url{https://www.sparknotes.com/lit/}} \texttt{CliffsNotes},\footnote{\url{https://www.cliffsnotes.com/literature}} and \texttt{LitCharts}.\footnote{\url{https://www.litcharts.com}} These sources contain educational material to help students study for their literature classes. These study guides include summaries of various literary pieces as well as descriptions of characters that appear in them. These literature summaries and character descriptions were written by literary experts, typically teachers, and are of high pedagogical quality. We used \texttt{Scrapy},\footnote{\url{https://scrapy.org/}} a free and open-source web-crawling framework, to crawl these study guides. Our initial crawl resulted in a set of $1,774$ literature summaries and $25,525$ character descriptions. These included all characters mentioned in the literary pieces. However, not all characters, especially those that played a minor role in the literary piece, appeared in the corresponding literature summaries. Since our task involves making inferences about characters from the literature summaries, we filtered out characters that do not appear in the summaries, or whose names or descriptions have very little overlap with the literature summaries. This was done to mitigate the reference divergence issue~\cite{kryscinski-etal-2019-neural, maynez-etal-2020-faithfulness} and ensure that the literature summary has enough information about the character to generate the description.
For this, we define the ``information overlap'' between two pieces of text $\mathcal{A}$ and $\mathcal{B}$, $IO(\mathcal{B}||\mathcal{A})$, as the ratio of the length of the longest overlapping word sub-sequence between $\mathcal{A}$ and $\mathcal{B}$, over the length of $\mathcal{A}$.\footnote{Technically this is the same as Rouge-L precision} Note that this information overlap measure is not symmetric and intuitively measures how much information about $\mathcal{A}$ is present in $\mathcal{B}$. We used the information overlap measure to filter our dataset as follows. If the information overlap of the literature summary with the character name, $IO($literature summary $||$ character name$)$, is less than $0.6$, then we consider that the character is not prominently mentioned in the literature summary and we remove that character from our dataset. Similarly, if the information overlap between the character description and the literature summary, $IO($literature summary $||$ character description$)$, is less than $0.2$, then we consider the character description generation less feasible and we remove that data point from our dataset.\footnote{These thresholds were chosen by experimenting with different values and manually analyzing the quality of (a subset of) the data.} \begin{table}[t] \footnotesize \centering \input{figures/data_stat} \caption{Statistics of the \texttt{FiSCU}\xspace dataset.} \label{tab:stat} \end{table} However, during these filtering steps, we did not want to remove the most important characters of the narrative. The online study guides list characters in decreasing order of their importance in the literary piece. For example, narrators, protagonists, antagonists, etc., are always described first. Leveraging this ordering, we always retained the top $3$ characters of the literary piece in our dataset. After the filtering process, our final dataset consists of $1,708$ literature summaries and $9,499$ character descriptions in total. 
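The information-overlap measure defined above reduces to a standard longest-common-subsequence dynamic program over word tokens. A minimal sketch (whitespace tokenization is our simplifying assumption):

```python
def lcs_length(xs, ys):
    """Length of the longest common (not necessarily contiguous) subsequence."""
    m, n = len(xs), len(ys)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if xs[i - 1] == ys[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def info_overlap(b, a):
    """IO(B||A): length of the longest overlapping word subsequence of A and B,
    divided by the length of A."""
    a_toks, b_toks = a.split(), b.split()
    return lcs_length(a_toks, b_toks) / len(a_toks) if a_toks else 0.0
```

Under the filtering scheme above, a data point is kept only if IO(summary || character name) is at least 0.6 and IO(summary || character description) is at least 0.2, or if the character is among the top-3 listed characters.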
This set was split into train ($80\%$), test ($10\%$), and validation ($10\%$) sets. The data splits were created to avoid any data leakage -- each literary piece and all of its character descriptions were consistently part of only one of the train, test and validation sets. Table~\ref{tab:stat} shows the statistics of the final dataset. The dataset also contains the full text of the books for $2,052$ of the character descriptions. \subsection{Dataset Reproducibility} \label{data_reproducibility} \texttt{FiSCU}\xspace is drawn from various study guides on the web. While we do not have the rights to directly redistribute this dataset, to allow other researchers to replicate the \texttt{FiSCU}\xspace dataset and compare to our work, we provide a simple script that will allow others to recreate \texttt{FiSCU}\xspace from a particular time-stamped version of these study guides on the \textit{Wayback Machine}, a time-stamped digital archive of the web. Our script ensures that others will be able to recreate the same train, test and validation splits. \section{\texttt{FiSCU}\xspace Task Definitions} \label{sec:task-def} We introduce two new tasks on the \texttt{FiSCU}\xspace dataset: \begin{itemize}[noitemsep,nolistsep] \item \textit{Character Identification} \item \textit{Character Description Generation} \end{itemize} \subsection{Character Identification} The \textit{Character Identification} task requires models to identify the character in an anonymized character description. Given a summary $S$, a candidate list of characters that appear in the literature summary $C=\{c_1, c_2, ..., c_k\}$, and an anonymized character description $D_{masked}^{c*}$, the goal in this task is to identify the name of the character $c^*$ described in the anonymized character description. We anonymize character descriptions by masking out all mentions of the character $c^*$ in the original description $D^{c*}$.
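The masking step can be sketched as follows. Note that the actual pipeline also uses a coreference system to find mentions, so plain name matching here is a simplification, and the mask token is our choice:

```python
import re

def anonymize_description(description, character_name, mask="[MASK]"):
    """Replace mentions of the character's full name or first name
    in the description with a mask token."""
    first_name = character_name.split()[0]
    # try the full name first so "John Smith" is not left as "[MASK] Smith"
    pattern = re.compile(
        r"\b({}|{})\b".format(re.escape(character_name), re.escape(first_name)),
        flags=re.IGNORECASE,
    )
    return pattern.sub(mask, description)
```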
\subsection{Character Description Generation} The \textit{character description generation} task tests the ability of NLP models to critically analyze the narrative from the perspective of characters and generate coherent and insightful character descriptions. Formally, given a literature summary, $S$, and a character name, $c$, the goal in this task is to generate the character's description, $D^{c}$. Generating the character description necessitates understanding and analyzing all salient information about the character in the literature summary. \subsection{Human Assessment of \texttt{FiSCU}\xspace} \label{data-validation} In order to verify the tractability of these two tasks and to assess the quality of the collected \texttt{FiSCU}\xspace dataset, we conducted a set of human evaluations on Amazon Mechanical Turk. We ran our human assessment on the full test set of \texttt{FiSCU}\xspace. \vspace{0.2cm} \noindent{\bf Assessing the Character Identification task:} In the first human assessment, we showed annotators the literature summaries, anonymized character descriptions, and a list of character names (plus one randomly sampled character from the literary piece). The descriptions were anonymized by replacing all mentions of the corresponding character names with blanks.\footnote{\label{anonymizing_footnote}We identified mentions of a character in the summary by using a coreference system \cite{joshi2019coref,spanbert} as well as by matching the first name or the full name of the character.} For each anonymized character description, we asked 3 judges to identify which character it is describing by choosing from the list of choices. The judges also had the option of saying that they are unable to identify the character given the literature summary and the anonymized character description.
\begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{figures/human_assess.pdf} \caption{Human assessment of the feasibility of the character description generation task. } \label{fig:human-data-assess} \end{figure} \vspace{0.2cm} \noindent{\bf Assessing the Character Description Generation task:} In the second human assessment, the judges were shown the same summary along with the original de-anonymized character descriptions. For each character description, 3 judges were asked to evaluate the quality of the description by answering the following two questions: \begin{enumerate}[wide, noitemsep, nolistsep, labelwidth=!, labelindent=0pt] \item {\bf Fact coverage:} Specify how much of the information about the specific character in the corresponding ``character description'' is present in the summary (either explicitly or implicitly). Answer choices included: a) \textit{almost all of the information}, b) \textit{most of the information}, c) \textit{some of the information}, d) \textit{little or none of the information}, and e) \textit{character does not appear in the summary at all}. \item {\bf Task difficulty:} Given the summary, how easy is it to write the character description on a Likert scale of 0-4 (0 being too difficult, 4 being too easy)? If in the previous question the judges found that some of the information in the character description was not present in the summary, they were asked to disregard it while answering this question. In other words, they only need to consider the information in the character description which is explicitly or implicitly mentioned in the summary. \end{enumerate} \begin{figure}[t!] \centering \includegraphics[width=0.95\columnwidth]{figures/identification_task.pdf} \caption{Approaches for \textit{Character Identification}.} \label{fig:identification} \end{figure} We recruited $200$ crowd-workers who were located in the US, UK, or CA, and had a $98\%$ approval rate for at least $5,000$ previous annotations.
We collected each annotation from $3$ workers and used a majority vote in our assessments. In the Appendix~\ref{appendix-annotations}, we describe several steps we took to alleviate limitations of using crowd-sourcing and ensure high-quality annotations. Screenshots of our AMT experiments are provided in the Appendix. For the first assessment on identifying characters, the human accuracy was $91.80\%$ (Fleiss' Kappa \cite{landis1977measurement} $\kappa = 0.79$), indicating the feasibility of the task. For the second assessment of fact coverage and task difficulty, we summarize the result in Fig.~\ref{fig:human-data-assess}. The top chart (`Fact Coverage') shows that around 75\% of the literature summaries contain a reasonable amount of information about the character represented in the corresponding character description. The bottom chart (`Task Difficulty') shows that more than 90\% of the time, the human judges considered the task of writing the character descriptions from the literature summaries not too difficult.\footnote{There is a natural label bias in the annotations: most of the responses fell into a few categories. In this case, standard inter-annotator agreement statistics are not reliable (the well-known paradoxes of kappa \cite{feinstein1990high}). Thus, we simply report a pairwise agreement (i.e., how often do two judges agree on the answer for the same question) of 0.71 and 0.64 for `fact coverage' and `task difficulty', respectively.} These results verify the feasibility of understanding and drawing reasonable inferences about characters in the literature summaries from the \texttt{FiSCU}\xspace dataset. Next, we describe models and establish baseline performances on the two proposed tasks.
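For reference, the pairwise agreement reported in the footnote above can be computed as follows (a minimal sketch; variable names are ours):

```python
from itertools import combinations

def pairwise_agreement(annotations):
    """annotations: one tuple of judge labels per item.
    Returns the fraction of judge pairs, pooled over all items,
    that chose the same answer for the same question."""
    agree, total = 0, 0
    for labels in annotations:
        for a, b in combinations(labels, 2):
            agree += int(a == b)
            total += 1
    return agree / total
```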
\section{Character Identification} We present two approaches to address this task: (1) solving it as a multiple-choice classification problem, and (2) using a generative classifier that generates, instead of identifying, the character name, as shown in Fig.~\ref{fig:identification}. In the multiple-choice approach, we use the standard setup introduced in BERT~\cite{devlin-etal-2019-bert} where the text from $c_i$, $D_{masked}^{c*}$ and $S$ (with custom prefix tokens) are concatenated as input, and the \texttt{[CLS]} token is projected to a final logit. We apply a Softmax function to the logits to obtain the scores for each $c_i$. For practical reasons, we limit the number of choices to 4 during training (using the earliest window of choices which includes the correct one). During inference, we can generate the logits for all the answer choices since they are independent before the final Softmax. To establish a baseline performance, we experiment with finetuning RoBERTa~\cite{liu2019roberta} and ALBERT~\cite{Lan2020ALBERT:}, which have been shown to perform well in several classification tasks. However, both these models cannot process inputs longer than 512 tokens, and the concatenated inputs are generally much longer. We therefore also tried Longformer~\cite{Beltagy2020LongformerTL}, a BERT-like model with an attention mechanism designed to scale linearly with sequence length, thus allowing the model to encode longer documents. However, despite trying various hyperparameters, Longformer was not able to match these scores in our experiments. \begin{table}[t] \scriptsize \centering \input{figures/discriminative_res} \caption{Accuracy for the \textit{Character Identification} task. The `partial' description setup used a truncated description ($50$ words) to allow including more of the summary.
} \label{tab:disc-res} \end{table} Our second approach, a generative classifier, is inspired by~\citet{JMLR:v21:20-074}, who studied transfer learning by converting NLP problems into a text-to-text format. The generative classifier addresses the character identification problem by directly generating the character name $\hat{c}$, given all character names (answer choices), the masked character description, and the summary (see Fig.~\ref{fig:identification}). During inference, we compute the model's probability of each of the answer choices, and output the one with the highest probability. We use this procedure to train several strong baselines built on top of the following pre-trained transformer-based models: BART~\cite{lewis2019bart}, T5~\cite{JMLR:v21:20-074}, and Longformer~\cite{Beltagy2020LongformerTL}. \vspace{0.15cm} \noindent\textbf{Implementation Details.}\space\space The RoBERTa and ALBERT multiple-choice classifiers were trained for 6 epochs with an initial learning rate of 1e-5 (Adam optimizer) and batch size 16. The generative classifier using BART was trained for 5 epochs with an initial learning rate of 5e-6 and batch size 8. We used the Transformers library~\cite{Wolf2019HuggingFacesTS} for training. The T5 model was trained for 12 epochs on a TPU using the default parameters from the T5 repository (learning rate 1e-3 with AdaFactor, batch size 8).\footnote{\url{https://github.com/google-research/text-to-text-transfer-transformer}} We truncate the summaries (and descriptions) to satisfy model-specific maximum input lengths. \vspace{0.2cm} \noindent\textbf{Results.}\space\space Table~\ref{tab:disc-res} shows the accuracies of the different baselines. The highest accuracy is achieved by ALBERT-XXL ($83.33\%$), followed by T5-11B ($80.16\%$). Although both ALBERT and T5 were given partial character descriptions, their specific pre-training loss and larger number of parameters (for T5-11B) lead to superior performance over the other baselines.
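The inference step of the generative classifier amounts to scoring each candidate name by the probability the model assigns to it and returning the argmax. A minimal sketch, assuming the per-token log-probabilities have already been obtained by force-decoding each choice with the seq2seq model (helper names are ours):

```python
def score_choice(token_logprobs):
    """Log-probability of one candidate name = sum of its per-token log-probs
    under the model (length-normalised variants are also common; the plain
    sum is used here for simplicity)."""
    return sum(token_logprobs)

def pick_character(choice_logprobs):
    """`choice_logprobs` maps each candidate character name to the list of
    per-token log-probs obtained when the decoder is forced to emit that name.
    Returns the highest-scoring candidate."""
    scores = {c: score_choice(lp) for c, lp in choice_logprobs.items()}
    return max(scores, key=scores.get)
```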
We observe that there is still a significant difference between the human performance ($91.80\%$) and the best model performance ($83.33\%$) on the character identification task, warranting future work in this direction. \begin{table}[t] \scriptsize \centering \input{figures/results} \caption{Automatic evaluation results for \textit{Character Description Generation}. BART-L achieved the best BLEU and ROUGE scores while Longformer performed best on BERTScore. } \label{tab:gen-results} \end{table} \section{Character Description Generation} \label{generative} We present several strong baselines for generating character descriptions by fine-tuning pre-trained transformer-based language models (LMs)~\cite{Vaswani:17}. We study two types of models: (1) a standard left-to-right LM, namely GPT2-L \cite{radford2019language}, which is trained with the LM objective of predicting the next word; and (2) two encoder-decoder models, namely BART\footnote{We use bart-large-xsum as initial weights, as our task can benefit from its summarization capability.} \cite{lewis2019bart} and Longformer~\cite{Beltagy2020LongformerTL}\footnote{\url{https://github.com/allenai/longformer}. We initialize the parameters of Longformer with the same pre-trained BART.}, which initialize the state of the Transformer by reading the input, and learn to generate the output. One of the challenges of the proposed task is the length of the summaries, which might exceed the maximum allowable input length of most existing pre-trained models. To overcome this, we either: (1) simply truncate the literature summary at the end, or (2) only keep sentences from the literature summary that have a mention of the character of interest. For the latter, we use a coreference resolution model, SpanBERT~\cite{joshi2019coref,spanbert}, to identify character mentions within a summary. This results in a modified dataset of character-specific literature summaries paired with character descriptions.
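The character-specific filtering in option (2) can be illustrated with a simple stand-in: where we resolve mentions with a SpanBERT coreference model, the sketch below only matches the character's name (and optional aliases) literally, so it misses pronominal mentions:

```python
import re

def character_sentences(summary, character, aliases=()):
    """Keep only the sentences of `summary` that mention the character.

    Stand-in for coreference-based mention detection: matches the name or
    any alias as a literal string, after a naive sentence split.
    """
    names = [character, *aliases]
    pattern = re.compile("|".join(re.escape(n) for n in names))
    sentences = re.split(r"(?<=[.!?])\s+", summary)
    return " ".join(s for s in sentences if pattern.search(s))
```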
In addition to these two approaches, we also fine-tune Longformer~\cite{Beltagy2020LongformerTL} with the original full-length literature summaries. Longformer leverages an efficient encoding mechanism to avoid quadratic memory growth and has previously been explored for NLU tasks (encoder-only). We integrate this approach into the pre-trained encoder-decoder BART model to encode inputs longer than its maximum token limit. All the models take \texttt{[name] $c$ [sum] $S$ [desc]} as input and generate the character description $D^c$ as output. \vspace{0.05cm} \noindent\textbf{Experiment with Full Literary Pieces.} \space\space We also run an experiment on a subset of our data with accompanying full-text of the literary pieces. Since it is infeasible to use the full texts as input given the memory constraints of current models, we coarsely select spans of the full-text beginning $50$ tokens before, and ending $50$ tokens after, each occurrence of the character's name. We use a Longformer model where the input is simply the concatenation of the selected spans. Due to the small size of this subset, we perform a $5$-fold cross validation starting from a pre-trained model fine-tuned on summary-description pairs.\footnote{Pre-training data do not contain instances of this subset.} \vspace{0.05cm} \noindent\textbf{Implementation Details.}\space\space We use the Transformers library~\cite{Wolf2019HuggingFacesTS}. Each baseline was trained for $5$ epochs with an effective batch size of $8$ and an initial learning rate of 5e-6. We use a maximum input length of $1024$ for GPT2, and $2048$ for BART\footnote{BART originally accepts inputs of maximum 1024 BPE-tokens. We extend this to 2048 by adjusting its positional embeddings.} and the variant of Longformer with truncated input. For the experiment with original books, we use $16,384$, which is the maximum allowable input length for Longformer. During inference, we use beam search decoding with $5$ beams.
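The span-selection step for full texts can be sketched as follows; merging overlapping windows into one contiguous span is an assumption of this sketch, as the text above does not specify how nearby occurrences are handled:

```python
def select_spans(tokens, name, window=50):
    """Collect tokens within `window` positions of each occurrence of the
    character's name, merging overlapping windows into contiguous spans
    (an illustrative reading of the span-selection step)."""
    hits = [i for i, t in enumerate(tokens) if t == name]
    spans = []
    for i in hits:
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        if spans and lo <= spans[-1][1]:   # overlaps the previous window: merge
            spans[-1][1] = hi
        else:
            spans.append([lo, hi])
    return [tokens[lo:hi] for lo, hi in spans]
```

The concatenation of the returned spans would then form the Longformer input.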
\begin{figure} \centering \includegraphics[width=1.0\columnwidth]{figures/fine-grained-plot-3_new} \caption{Breakdown results for BART-L on subsets with annotated fact coverage as all/most/some/little. Results for other baselines are provided in the Appendix.} \label{tab:gen-results-fine} \end{figure} \subsection{Automatic Evaluation} Following previous work, we use several standard, widely used automatic evaluation metrics. We use \textbf{BLEU-4}~\cite{papineni2002bleu}, which measures \textit{n}-gram overlap up to $n=4$, as well as \textbf{ROUGE-\textit{n}} ($n{=}1,2$) and \textbf{ROUGE-L} F-1 scores~\cite{lin2004rouge}.\footnote{Note that we did not include perplexity as it is not comparable across LM-based and encoder-decoder models.} However, recent works~\cite{novikova2017we, wang-etal-2018-metrics} have raised concerns about the use of these metrics, as they fail to capture paraphrases and conceptual information. To overcome these issues, we additionally include a model-based metric, \textbf{BERTScore}~\cite{bert-score}, which measures the cosine similarity between contextualized embeddings of the gold and generated outputs.\footnote{We use the code at \url{https://github.com/Tiiiger/bert_score}.} The results of the automatic evaluation are presented in Table~\ref{tab:gen-results}. According to the table, BART-L consistently achieves the best performance across BLEU and ROUGE scores. However, Longformer achieves a slightly better BERTScore. Both BART and Longformer outperform GPT2 in general. This can be partly because BART and Longformer can handle longer context, and are initially pre-trained on a combination of books and Wikipedia data and further fine-tuned on summarization tasks, while GPT2 is pre-trained on WebText only.\footnote{While these models could have had access to the original book text, they did not have access to the character descriptions (our outputs) during pre-training.
So, this information should not principally change any of our empirical conclusions.} Models perform relatively better in the length-truncation setup than in the coreference-truncation setup. We posit that this is because many of the key points about major characters are likely to appear early in the book summary (favoring length truncation). Also, there might be errors introduced by the coreference resolution model itself. \begin{table}[t] \scriptsize \centering \input{figures/full_book_results} \caption{Automatic evaluation results for models using full-text of books vs. literature summaries.} \label{tab:full-book} \end{table} In order to gain better insight into the models' performance with respect to varying levels of task feasibility, in Fig.~\ref{tab:gen-results-fine} we additionally report the breakdown of the results for BART-L on separate subsets with ``almost all'', ``most'', ``some'', and ``little or none'' of the information about the character (refer to \textit{Fact Coverage} in \S\ref{data-validation}). As expected, we observe a consistent decline in performance with lower fact coverage. Results for the other baselines are reported in Table~\ref{tab:gen-results-fine-rest} of the Appendix. In Table~\ref{tab:full-book}, we compare the models when using selected spans from the original literary piece as input vs. literature summaries as input. We observe a decline in performance when using the full text. This reveals that even though the literary pieces contain all the character information, this information is scattered, which makes it harder for the model to identify important facts about the character. Using full texts also requires encoders that are better at understanding dialog, first-person narratives, and the different writing styles of the authors. We invite the community to consider this challenging but important problem.
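For illustration, a unigram ROUGE-1 F1 score can be computed as below; the reported numbers use the standard metric implementations, so this minimal re-implementation is only indicative:

```python
from collections import Counter

def rouge1_f1(reference, hypothesis):
    """ROUGE-1 F1: harmonic mean of unigram precision and recall, with
    clipped counts, between a gold and a generated description."""
    ref = Counter(reference.lower().split())
    hyp = Counter(hypothesis.lower().split())
    overlap = sum((ref & hyp).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```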
\begin{figure}[t] \centering \includegraphics[width=0.90\columnwidth]{figures/human-eval-2.pdf} \caption{Human evaluation of generated character descriptions. While the descriptions are grammatically correct and logically coherent, they often misrepresent or miss important details about the character.} \label{fig:human-desc-eval} \end{figure} \subsection{Human Evaluation} \label{sec:human_eval} To better evaluate the quality of the generated character descriptions, we conduct a human evaluation on Amazon Mechanical Turk of $100$ test pairs of literature summaries and character descriptions generated by the BART-L model.\footnote{Here we are evaluating $4$ character descriptions per summary, for a total of $25$ literature summaries.} Given a literature summary and multiple generated character descriptions (shown one by one), the workers were asked to rate each generated description on a Likert scale of $1-5$ ($1$ being the worst, and $5$ being the best) according to the following criteria: (1) \textbf{Grammatical correctness} to indicate if the generated description is grammatically correct, (2) \textbf{Logical correctness} to indicate whether the generated description is logically meaningful and coherent, (3) \textbf{Faithfulness} of the generated description with respect to the given summary (a faithful character description will not mention facts which are irrelevant to the character and/or not stated in the summary), (4) \textbf{Centrality} to evaluate whether the description captures important details and key facts about the character, and finally (5) the \textbf{Overall score} considering all the four criteria listed above. We provide a screenshot of the experiment in Fig.~\ref{fig:human-assess-2} of the Appendix. Fig.~\ref{fig:human-desc-eval} presents the results of this human evaluation. We observe that the generated descriptions show a reasonable level of grammatical ($4.43$) and logical correctness ($3.94$).
However, they lag behind when it comes to faithfulness ($3.11$) and centrality ($3.10$). We also report the distribution of ratings in Table~\ref{tab:human-eval-dist}. These results indicate that solving this task requires designing better models of character-centric analysis of narrative. \begin{table}[t] \footnotesize \centering \input{figures/human-eval-dist} \caption{Percentage of different ratings from human evaluation of generated descriptions (1=worst, 5=best).} \label{tab:human-eval-dist} \end{table} \begin{table}[t] \footnotesize \centering \input{figures/analysis_res} \caption{Error Analysis: proportion of generated descriptions with different error types.} \label{tab:analysis} \end{table} \subsection{Qualitative Analysis} \label{sec:analysis} \input{6-analysis} \section{Introduction} \label{sec:intro} \input{1-intro} \section{Background} \label{sec:background} \input{2-background} \section{The LiSCU Dataset} \label{sec:data} \input{3-data} \input{4-model} \section{Conclusion} \label{sec:conclusion} \input{7-conclusion} \section*{Acknowledgments} This work was supported in part by ETH Grant (ETH-19 21-1) and NSF grant IIS2047232. We would also like to thank Jena D. Hwang for helping with designing the AMT task. \section*{Broader Impacts and Ethics Statement} \label{sec:bias} \noindent{\bf Bias in Narrative Texts:} \texttt{FiSCU}\xspace is based on novels which often reflect societal norms and biases of their times. Such a dataset can be used to understand societal bias as well as to design Natural Language Understanding models that can be more aware of and possibly even avoid such biases. With this motivation, we analyzed the issue of gender bias in \texttt{FiSCU}\xspace. First, we inferred the gender of the characters in our dataset using the pronouns used to refer to them. We could not infer the gender of some of the characters because of errors in the coreference system or lack of enough mentions, and we filtered them out for this analysis.
We found that there are significantly more male characters than female characters in our dataset. Specifically, $66\%$ of the characters are male. This suggests that systems that do not account for this bias might end up having more training data (and hence yield better performance) on descriptions of male characters than of female characters. Second, we also investigated the scope of gender bias in the summaries. We computed the average number of mentions of male and female characters (in the summaries). We found that, on average, male and female characters are mentioned $32.1$ and $31.7$ times, respectively. This indicates that even though there are fewer female characters in the literary pieces of our dataset, the ones that are present play a significant role in the development of the narrative. Possibly because of their importance in the narrative, they are mentioned as many times as male characters in the summary (which describes the main developments and not all details from the literary piece). Third, we investigated whether the literary experts who composed the descriptions were biased in their analysis. For this, we computed the lengths of the descriptions of various characters. We found that there is no significant difference between male and female characters in this aspect. Specifically, the average number of tokens in the description of a male character was $203$, and that of a female character was $200$. Also, the average number of sentences in the description of a male character was $9.4$ and that of a female character was $9.3$. This also aligns with our observation in the previous experiment, where we found that female characters, though fewer, play important roles in the narrative, and so their descriptions are not any shorter than descriptions of male characters. Overall, this analysis suggests that descriptions are not biased in their treatment of male and female characters.
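The mention statistics above rest on a majority-pronoun gender heuristic. A minimal sketch of that pipeline (data layout and helper names are ours):

```python
from collections import Counter

MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}

def infer_gender(pronouns):
    """Majority-pronoun heuristic; returns None when there is no clear
    signal (such characters were excluded from the analysis)."""
    counts = Counter(p.lower() for p in pronouns)
    m = sum(counts[p] for p in MALE)
    f = sum(counts[p] for p in FEMALE)
    if m == f:
        return None
    return "male" if m > f else "female"

def mean_mentions(characters):
    """Average number of summary mentions per inferred gender.

    `characters` maps a character to (pronoun list, mention count)."""
    totals, counts = Counter(), Counter()
    for pronouns, mentions in characters.values():
        g = infer_gender(pronouns)
        if g is None:
            continue  # gender could not be inferred: drop from the analysis
        totals[g] += mentions
        counts[g] += 1
    return {g: totals[g] / counts[g] for g in counts}
```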
In any language generation setting, such as ours, there is the possibility of (potentially harmful) social biases being introduced through the training data. As we did not specifically control or regularize our model to remove the possibility of such biases, we urge downstream users to undertake the necessary quality-assurance testing to evaluate the extent to which such biases might be present and impacting their trained system, and to make modifications to their model and procedures accordingly. \noindent{\bf Human participation in our study:} We conducted 2 human evaluations on Amazon Mechanical Turk. To ensure the annotators were fairly compensated, we did several rounds of test runs and estimated the average time to finish one HIT. Workers were paid \$12/hr based on the HIT timings. We did not ask for any personal, sensitive, or identifying information from the annotators. \section{Collecting Annotations from Crowd Workers} \label{appendix-annotations} To alleviate the limitations of crowd-sourcing and ensure high quality of annotations, we took several steps. First, we conducted a pilot annotation exercise where we (the authors) assessed the feasibility of the proposed task on a subset (250 instances) of the data. This pilot annotation helped us set up the task on AMT in a way that would make it feasible for Turkers (e.g., by asking clear, concise questions). Second, we designed our setup to avoid annotator fatigue by asking them to read the summary context once and answer questions about all characters in that summary. Third, we ran a few experiments on AMT (before annotating the entire set) where we also included a ``comment'' section for Turkers to allow them to bring up issues or ambiguities in our setup. We then manually analyzed the results and modified the tasks based on the comments. Finally, after annotating the entire set, we computed inter-annotator agreement as a way to ensure trust in the annotation quality.
We found reasonable agreement between annotators, as reported in Footnote 10 of the paper. We would also like to mention that we received several comments from the annotators that they found the task very interesting and enjoyable. \begin{table*}[t] \scriptsize \centering \input{figures/results_fine_shortened} \caption{Breakdown results on subsets of the test set with annotated fact coverage as all/most/some/little.} \label{tab:gen-results-fine-rest} \end{table*} \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \scriptsize \centering \input{figures/example3_w_sum} \caption{Qualitative example 1 for the generated descriptions. Words in \textcolor{red}{red} correspond to hallucinated or missing content, words in \textcolor{MengGreen}{green} correspond to faithful information. } \label{tab:errore_examples1} \end{table*} \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \scriptsize \centering \input{figures/example4_w_sum} \caption{Qualitative example 2 for the generated descriptions. Words in \textcolor{red}{red} correspond to hallucinated or missing content, and words in \textcolor{MengGreen}{green} correspond to faithful information. } \label{tab:errore_examples2} \end{table*} \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \scriptsize \centering \input{figures/example2_w_sum} \caption{Qualitative example 3 for the generated descriptions. Words in \textcolor{red}{red} correspond to hallucinated or missing content, words in \textcolor{MengGreen}{green} correspond to faithful information, and \underline{underline} corresponds to generic repetitive content. } \label{tab:errore_examples3} \end{table*} \newpage \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \scriptsize \centering \input{figures/example1_w_sum} \caption{Qualitative example 4 for the generated descriptions. Words in \textcolor{MengGreen}{green} correspond to faithful information, and \underline{underline} corresponds to generic repetitive content.
} \label{tab:errore_examples4} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{figures/amt_1.jpg} \caption{An illustration of human assessment on AMT.} \label{fig:human-assess-1} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{figures/amt_2.png} \caption{An illustration of human evaluation for generated character description.} \label{fig:human-assess-2} \end{figure*}
\ifdefined \section{Introduction} \else \section*{Introduction} \fi \label{sec:INTR} Tolerance against motion is desirable in magnetic resonance imaging (MRI). This includes brain MRI, where a significant prevalence of motion-induced image degradation has been documented~\citep{Andre15} and high-resolution imaging quality may be compromised by head motion~\citep{Budde14}. Rigid-body MRI motion correction~\citep{Zaitsev15,Godenschweger16} can be tackled via prospective or retrospective techniques. Prospective techniques~\citep{Maclaren13} compare advantageously in terms of spin-history, dephasing confounders, or k-space density guarantees. In particular, optical tracking systems have been proposed for head motion estimation, with corrections showing impressive accuracy and latency~\citep{Schulz12}. However, prospective methods require additional hardware and/or scanner modifications, often involving intrusive markers attached to the subject. In addition, satisfactory corrections may not always be possible due to unpredictability or complexity of motion, or due to requirements to maximize sampling efficiency. Retrospective techniques may facilitate scanning or improve prospective results~\citep{Aksoy12}, particularly for 3D encodings, where spin-history is less of a problem, and when using non-linear reconstruction paradigms~\citep{Adcock14} to deal with non-homogeneous sampling density after motion. Motion compensation is strongly dependent on motion estimation from the measured information. Some methods have proposed the use of navigators, where surrogate motion-sensitive information is interleaved with the main acquisition and correction is applied either prospectively or retrospectively~\citep{Tisdall16,Johnson16,Gallichan17}. Due to variability in time requirements for different MRI sequences, application of a given navigator is usually limited to a specific sequence type.
Furthermore, particular care has to be taken to prevent spin-history or saturation effects and, sometimes, scanning efficiency may be compromised. Alternatively, sequences can be constructed with relative resilience to motion or, similarly, sampling schemes can be designed to function as implicit navigators. This is the case for spiral and radial trajectories~\citep{Bammer07,Anderson13,Pipe14}, where temporally distributed low-resolution information is used for motion estimation, with retrospective corrections usually grounded on an intermediate reconstruction of fully formed images for each motion state, often involving non-linear methods. Finally, other approaches have explored the redundancy of the information sensed by parallel MRI to detect and discard localized inconsistencies in k-space measurements~\citep{Bydder02}, usually requiring prior image models to limit noise amplification and improve inconsistency detection~\citep{Samsonov10}. Building on models of MRI acquisition in the presence of motion~\citep{Batchelor05,Bammer07}, some methods have proposed formulations for motion estimation from the k-space that do not require navigators~\citep{Odille08,Loktyushin13,Cordero-Grande16}. Our previous work~\citep{Cordero-Grande16} introduced a data-driven reconstruction method for retrospective multi-shot rigid-body motion correction or \emph{aligned reconstruction}, taking advantage of the encoding redundancy in the measured data. The simulations performed showed that the ability to solve the aligned reconstruction problem is strongly sensitive to the k-space encoding order, which suggested that opportunities exist to maximize the sensitivity to motion by appropriate sampling order designs.
Consequently, in this paper we introduce the Distributed and Incoherent Sample Orders for Reconstruction Deblurring using Encoding Redundancy (DISORDER) framework as a flexible way to correct for head motion on a variety of spatio-temporal scales and imaging contrasts by optimizing the \emph{sample orders} for k-space coverage. In addition, we propose some technical refinements to the aligned reconstruction formulation and extend the simulation domain. The technique is implemented on a $3\,\mbox{T}$ scanner and tested on controlled motion scans and pediatric examinations including magnetization-prepared rapid acquisition gradient echo (MP-RAGE), fast spin echo (FSE), fluid attenuated inversion recovery (FLAIR), spoiled gradient echo (SPGR), and balanced steady-state free precession (bSSFP) sequences. A \textsmaller{\textsc{MATLAB}} implementation to reproduce the experiments is made available at \url{https://github.com/mriphysics/DISORDER/releases/tag/1.1.0}. \ifdefined \section{Theory} \else \section*{Theory} \fi \label{sec:THEO} \ifdefined \subsection{Aligned reconstruction} \else \subsection*{Aligned reconstruction} \fi \label{sec:ALSE} Assuming whitened measurement noise~\citep{Pruessmann01}, the aligned reconstruction for parallel volumetric imaging can be formulated as: \begin{equation} \label{ec:GEFO} (\hat{\mathbf{x}},\hat{\boldsymbol{\theta}})=\displaystyle\argmin_{\mathbf{x},\boldsymbol{\theta}}r_{\mathbf{x},{\boldsymbol{\theta}}}=\displaystyle\argmin_{\mathbf{x},\boldsymbol{\theta}}\|\mathbf{A}\boldsymbol{\mathcal{F}}\mathbf{S}\mathbf{T}_{\boldsymbol{\theta}}\mathbf{x}-\mathbf{y}\|_2^2, \end{equation} where $\mathbf{x}$ is the image to be reconstructed, $\boldsymbol{\theta}$ are the motion parameters, $r$ is the loss function, $\mathbf{y}$ is the measured k-space data, $\mathbf{T}$ is a set of rigid motion transformations, $\mathbf{S}$ are the coil sensitivities, $\boldsymbol{\mathcal{F}}$ is the discrete Fourier transform (DFT), and $\mathbf{A}$ is a sampling
mask. We are interested in reconstructing a 3D image of size $V=V_1V_2V_3$, with $V_d$ the number of voxels along dimension $d$, from $\displaystyle N=C\sum_{m=1}^{M}E_m$ samples of a discretized k-space grid of size $K$ acquired with a $C$-element coil array. $E_m$ denotes the number of samples within \emph{segment} $m$ and $M$ is the number of segments in the sequence, with each segment associated with a specific motion state. Detailed information about the terms in Eq.~\eqref{ec:GEFO} can be found in~\cite{Cordero-Grande16}. Here we provide a brief description of their structure: \begin{itemize} \item $\mathbf{y}$ is an $N\times 1$ vector. \item $\mathbf{A}$ is an $N\times KMC$ block matrix comprising submatrices of size $E_m\times K$ whose entries take the value $1$ if the sample $e$ of the segment $m$ corresponds to the k-space location indexed by $k$ and $0$ otherwise. \item $\boldsymbol{\mathcal{F}}$ is a $KMC\times VMC$ block diagonal matrix comprising submatrices of size $K\times V$ representing 3D DFTs with applied k-space sampling. \item $\mathbf{S}$ is a $VMC\times VM$ block matrix comprising diagonal submatrices of size $V\times V$ whose diagonal elements correspond to the spatial sensitivity of the coil $c$. \item $\mathbf{T}$ is a $VM\times V$ block matrix comprising unitary~\citep{Unser95} submatrices of size $V\times V$ corresponding to the 3D rigid transformation modeling the motion state $m$ by three translations and three Euler rotation angles encoded in the parameter vector $\boldsymbol{\theta}_m$. \item $\mathbf{x}$ is a $V\times 1$ vector. \end{itemize} Eq.~\eqref{ec:GEFO} is a separable nonlinear least squares problem~\citep{Gan18,Herring18}.
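To make the structure of the forward operator $\mathbf{A}\boldsymbol{\mathcal{F}}\mathbf{S}\mathbf{T}_{\boldsymbol{\theta}}$ in Eq.~\eqref{ec:GEFO} concrete, it can be sketched for a toy 2D, single-slice, translation-only case; integer shifts stand in for the full rigid transform, and this is an illustration, not the released \textsmaller{\textsc{MATLAB}} implementation:

```python
import numpy as np

def forward(x, sens, shifts, masks):
    """Toy 2D version of y = A F S T_theta x.

    x:      (V1, V2) image
    sens:   (C, V1, V2) coil sensitivities (S)
    shifts: one (dy, dx) integer translation per segment (T_theta, translation only)
    masks:  one boolean (V1, V2) k-space mask per segment (A)
    Returns all measured samples stacked into one vector, as in y.
    """
    y = []
    for (dy, dx), mask in zip(shifts, masks):
        xt = np.roll(x, (dy, dx), axis=(0, 1))   # T_theta: motion state of this segment
        for s in sens:                           # S: weight by each coil sensitivity
            k = np.fft.fft2(s * xt)              # F: 2D DFT
            y.append(k[mask])                    # A: keep only this segment's samples
    return np.concatenate(y)
```

Each segment sees the object in its own motion state before being Fourier-encoded and sampled, which is precisely the coupling that the aligned reconstruction exploits.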
We confront it by iteratively addressing the subproblems: \begin{equation} \label{ec:GEFD} \begin{split} \hat{\mathbf{x}}^{(i+1)}=&\displaystyle\argmin_{\mathbf{x}}\|\mathbf{A}\boldsymbol{\mathcal{F}}\mathbf{S}\mathbf{T}_{\hat{\boldsymbol{\theta}}^{(i)}}\mathbf{x}-\mathbf{y}\|_2^2\\ \hat{\boldsymbol{\theta}}^{(i+1)}=&\displaystyle\argmin_{\boldsymbol{\theta}}\|\mathbf{A}\boldsymbol{\mathcal{F}}\mathbf{S}\mathbf{T}_{\boldsymbol{\theta}}\hat{\mathbf{x}}^{(i+1)}-\mathbf{y}\|_2^2. \end{split} \end{equation} The first subproblem, reconstructing the image $\mathbf{x}$ in the presence of rigid motion~\citep{Batchelor05}, can be solved by conjugate gradient (CG)~\citep{Pruessmann01}. As for the second, the solution must null the gradient of the objective function with respect to the motion parameters~\citep{Cordero-Grande16}, which is tackled by a Levenberg-Marquardt (LM) algorithm using a simplified Jacobian~\citep{Ruano91}. A natural initialization is the zero-motion condition $\hat{\boldsymbol{\theta}}^{(0)}=\mathbf{0}$, so the first step corresponds to a standard sensitivity encoding (SENSE) reconstruction. Later in this paper, we describe how to temporally arrange the k-space samples into segments to improve the convergence of the aligned reconstruction. \ifdefined \subsection{DISORDER sampling} \else \subsection*{DISORDER sampling} \fi \label{sec:SATE} We focus on Cartesian 3D k-space grids with uniform sampling, as sketched in Fig.~\ref{fig:VOEN}. Fig.~\ref{fig:VOEN}a shows $K_1=4$ collected samples after the first readout or profile in the $k_1$ direction. Fig.~\ref{fig:VOEN}b shows the first segment, in this example corresponding to the full acquisition of the $k_1$-$k_2$ plane. Fig.~\ref{fig:VOEN}c shows that segments can be used to define an ordered partition of the $k_1$-$k_2$-$k_3$ grid. Due to its short duration, we assume negligible motion during the readout and focus on the phase encode (PE) plane in Fig.~\ref{fig:VOEN}d.
We define $E^{\text{PE}}_m$ as the number of profiles per segment, so $E^{\text{PE}}_m=E_m/K_1$, and hereinafter we adopt the replacement $E_m\leftarrow E^{\text{PE}}_m$. \ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig01/fig01}\put(0,0){\textbf{a)}}\end{overpic} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig01/fig02}\put(0,0){\textbf{b)}}\end{overpic} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig01/fig03}\put(0,0){\textbf{c)}}\end{overpic} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig01/fig04}\put(0,0){\textbf{d)}}\end{overpic} \end{center}\vspace{-5mm} \input{caption1} \end{figure} \fi By modifying the PE gradients before each readout it is possible to design different encoding or view orders. These can be defined as a temporally ordered set of profiles \ifdefined \newline \fi $p\in\mathcal{P}=\{(k^{1,1}_2,k^{1,1}_3),\ldots,(k^{1,E_1}_2,k^{1,E_1}_3),\ldots,(k^{M,E_M}_2,k^{M,E_M}_3)\}$ with cardinality $\displaystyle P=|\mathcal{P}|=\sum_{m}E_m$. Fig.~\ref{fig:SAOR}a shows the first segment of a commonly used {\ber Sequential} ordering scheme. In this case, due to the partition definition, a segment includes two consecutive $k_3$ planes. Fig.~\ref{fig:SAOR}b introduces the {\ber Checkered} traversal. First, a rectangular tiling of the PE plane is built using tiles of size $U_2\times U_3$ such that $U_2U_3=M$. Second, a spectral lexicographic order for the profiles within a tile $\mathcal{K}_U$ is defined by $\mathcal{K}_U\to\mathcal{M}_U=\{1,\ldots,M\}$, which can be extended to different tiles by translation. Third, interleaved segments are defined such that the same profile $m_u\in\mathcal{M}_U$ is used $\forall e\in\mathcal{E}=\{1,\ldots,E\}$, with $\mathcal{E}$ a temporally ordered set of tiles.
Finally, the profile sequence to traverse each tile is defined by mapping from the set of profiles within a tile to a temporally ordered set of segments $\mathcal{M}_U\to\mathcal{M}_{T}=\{1,\ldots,M\}$ using an electrostatic repulsion criterion with periodic boundary conditions. Hence, a \emph{distributed} temporal coverage is guaranteed both for the whole spectrum and within each tile. This strategy aids the aligned reconstruction conditioning by reducing the chances for large uncovered spectral areas due to head rotation. Fig.~\ref{fig:SAOR}c presents the {\ber Random-checkered} modification. Tiles are built as in the {\ber Checkered} approach but segments are constructed by a random permutation of the tile elements drawn independently for each tile, so we have $m_{u_e}$. This guarantees a distributed coverage in probability and introduces some \emph{incoherence} among profiles within and across segments. Fig.~\ref{fig:SAOR}d shows a {\ber Random} view order where $\mathcal{P}$ is a random permutation of the profiles comprising the sequence. Note that, considering a free definition of segments, the {\ber Sequential} and {\ber Random} schemes are particular cases of the {\ber Random-checkered} traversal with tiling sizes of $1\times 1$ and $K_2\times K_3$, respectively. \ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig02/fig01}\put(8,0){\textbf{a)}}\end{overpic} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig02/fig02}\put(8,0){\textbf{b)}}\end{overpic} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig02/fig03}\put(8,0){\textbf{c)}}\end{overpic} \begin{overpic}[width=0.2495\textwidth,draft=false]{Fig02/fig04}\put(8,0){\textbf{d)}}\end{overpic} \end{center}\vspace{-5mm} \input{caption2} \end{figure} \fi View orders should preserve the contrast and the trajectory consistency. We establish the following differentiation: \begin{itemize} \item \textbf{Non steady-state sequences} (MP-RAGE, FSE, FLAIR).
They acquire a fraction of the k-space or \emph{shot} after each radiofrequency (RF) preparation. Thus, they induce a natural sampling partition where segments are in correspondence with shots. In addition, magnetic properties are not invariant across the samples of a shot. Typically, middle samples within each shot cover the central area of the spectrum~\citep{Busse08}, which our orders can fulfill by jumping from tile to tile in a {\ber Zig-zag} manner. An example is shown in Fig.~\ref{fig:ECORA} using an elliptical sampling area. The {\ber Checkered} traversal produces regular segment patterns (first column), with neighboring colors maximally separated within the tile, whereas the {\ber Random-checkered} traversal produces non-regular patterns. Tiling orders generate smooth color transitions across the spectrum for all presented traversals (second column), which translates into smooth magnetic properties of the profiles. \item \textbf{Steady-state sequences} (SPGR, bSSFP). They produce a temporally stable magnetization after reaching the steady-state, usually facilitated by some preparatory dummy profiles, so the contrast becomes independent of the encoding order. For estimates attempted at the segment level, temporal resolvability of motion will increase with larger tiling sizes $U_2U_3$. However, large jumps in the spectrum may induce inconsistencies due to eddy currents, especially for low repetition times, so a trade-off may be required. If the profiles are covered by the application of $M$ spectral \emph{sweeps}, analogously to~\cite{Tsao05}, eddy currents can be minimized by an {\ber Alternating zig-zag} tiling order where the traversal polarity is reversed for consecutive sweeps. This is illustrated in Fig.~\ref{fig:ECORB}. The segment structure (first column) matches that of Fig.~\ref{fig:ECORA} except for some minor differences in the {\ber Sequential} case due to smooth magnetization requirements in shot-based sequences~\citep{Busse08}.
The {\ber Sequential} scheme guarantees a smooth passage through k-space (third and fourth columns). Although quicker k-space sweeps of our traversals imply larger $\dd k_2$ and $\dd k_3$ steps, these remain substantially lower than for the {\ber Random} order, which should limit the impact of eddy currents. Finally, the {\ber Alternating zig-zag} suppresses the undesirable spikes in the fourth column of Fig.~\ref{fig:ECORA}. \end{itemize} \ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig01-01}\put(0,0){\textbf{a)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig01-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig01-03}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig01-04}\end{overpic}\\\vspace{2mm} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig02-01}\put(0,0){\textbf{b)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig02-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig02-03}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig02-04}\end{overpic}\\\vspace{2mm} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig03-01}\put(0,0){\textbf{c)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig03-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig03-03}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig03/fig03-04}\end{overpic} \end{center}\vspace{-5mm} \input{caption3} \end{figure} \fi \ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig01-01}\put(0,0){\textbf{a)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig01-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig01-03}\end{overpic} 
\begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig01-04}\end{overpic}\\\vspace{2mm} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig02-01}\put(0,0){\textbf{b)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig02-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig02-03}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig02-04}\end{overpic}\\\vspace{2mm} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig03-01}\put(0,0){\textbf{c)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig03-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig03-03}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig03-04}\end{overpic}\\\vspace{2mm} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig04-01}\put(0,0){\textbf{d)}}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig04-02}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig04-03}\end{overpic} \begin{overpic}[width=0.246\textwidth,draft=false]{Fig04/fig04-04}\end{overpic} \end{center}\vspace{-5mm} \input{caption4} \end{figure} \fi \ifdefined \subsection{Aligned reconstruction refinements} \else \subsection*{Aligned reconstruction refinements} \fi \label{sec:REAS} We propose a series of refinements for improved and more efficient aligned reconstructions: \begin{itemize} \item \textbf{Spatial multiresolution}. The spatial and spectral grids for both subproblems in Eq.~\eqref{ec:GEFD} can be refined according to a given multiresolution pyramid as commonly used in image registration~\citep{Unser93}. In contrast to sequential sampling, the proposed orders allow for motion estimates from samples at coarse scales (area enclosed in cyan in Fig.~\ref{fig:SAOR}) to be completely exploited when reconstructing at fine scales. 
This is useful for quick aligned reconstructions as adequate motion estimates are often possible at coarse scales. \item \textbf{Temporal multiresolution}. Motion estimation can also be attempted at intra-shot (or intra-sweep) levels (for instance using the samples enclosed within the yellow areas for the $4$ intra-segment subdivisions in Fig.~\ref{fig:SAOR}). However, estimates using low spatial harmonics localize the structures at coarse scales only and, conversely, motion estimation using high spatial harmonics alone is limited by lower SNR and prone to local optima. These limitations can be alleviated using hierarchical estimation refinements by temporally subdividing the samples considered by the motion states within a shot in a coarse-to-fine manner. \item \textbf{Coil compression}. The two subproblems in Eq.~\eqref{ec:GEFD} can operate on a reduced number of virtual channels~\citep{Buehrer07}. \item \textbf{Motion compression}. The reconstruction subproblem complexity grows with the number of motion states, which can be reduced by motion compression or binning. Estimated motion parameter traces are approximated by piecewise constant functions obtained by truncating their Haar wavelet decompositions with threshold $\boldsymbol{\tau}$. In this way, the original motion states are packed into effective states by grouping contiguous states with similar motion parameters into an effective motion parameter vector $\tilde{\boldsymbol{\theta}}$. Thus, the reconstruction complexity is driven by the underlying motion complexity. \item \textbf{Robustness}. Accurate intra-shot corrections may be infeasible, for instance due to temporary inconsistencies in the magnetization. Denoting the real motion parameters by $\boldsymbol{\theta}^{\ast}$, we can ideally characterize the loss $r_{\mathbf{x},\boldsymbol{\theta}^{\ast}}$ using the sampling noise properties.
Sampling noise follows a circularly symmetric complex Gaussian additive stationary distribution and, after whitening, it is independent across channels, so the losses per profile $r[m,e_m]=\sum_{c,k_1}r[m,e_m,c,k_1]$ should ideally follow a $\chi^2$ distribution. To account for the sensitivity of the residuals to the underlying signal, we use trimmed statistics on a logarithmic scale $r_b[m]=P^{\mathbb{E}_m}_{c(b)}\log(r[m,e_m])$ with $b\in\{1,\ldots,B\}$ indexing the $100c(b)\,\%$ centile $P_c$ of the loss distribution through k-space $\mathbb{E}_m$. As we are concerned with anomalously high residuals, robust estimates of the scale and mean of the statistic distribution across segments $\mathbb{M}$ are obtained respectively by $\sigma_b=\sqrt{2}(P^\mathbb{M}_{c_{\text{U}}}r_b[m]-P^{\mathbb{M}}_{c_{\text{L}}}r_b[m])/(\erfc^{-1}(2c_{\text{U}})-\erfc^{-1}(2c_{\text{L}}))$ and $\mu_b=P^{\mathbb{M}}_{(c_{\text{U}}+c_{\text{L}})/2}r_b[m]+\sqrt{2}\sigma_b\erfc^{-1}(c_{\text{U}}+c_{\text{L}})$, choosing $c_{\text{U}}=0.25$ and $c_{\text{L}}=0.125$. Using these estimates, the statistics are normalized and averaged into $\overline{r}[m]=\sum_b(r_b[m]-\mu_b)/(B\sigma_b)$, and segments are weighted in the reconstruction by a matrix $\mathbf{W}$ with entries $w[m]=\min(M\erfc(\overline{r}[m]/\sqrt{2})/(2\tau_w),1)$, with $\tau_w$ an acceptance threshold corrected for multiple comparisons. \item \textbf{Regularization}. If outlier segment rejection is activated or the reconstruction is applied to accelerated scans, some form of regularization may be advisable.
This is considered by reformulating the reconstruction as: \begin{equation} \label{ec:RESH} \hat{\mathbf{x}}^{(i+1)}=\displaystyle\argmin_{\mathbf{x}}\|\mathbf{W}^{1/2}(\mathbf{A}\boldsymbol{\mathcal{F}}\mathbf{S}\mathbf{T}_{\tilde{\boldsymbol{\theta}}^{(i)}}\mathbf{x}-\mathbf{y})\|_2^2+2\lambda\|\boldsymbol{\mathcal{S}}\mathbf{x}\|_1, \end{equation} where $\lambda$ controls the degree of regularization and $\boldsymbol{\mathcal{S}}$ corresponds to a shearlet decomposition, which provides nearly optimal approximation rates for piecewise smooth functions with discontinuities on a piecewise smooth surface~\citep{Kutyniok12}. We resort to an iteratively reweighted least squares (IRWLS) solver, able to produce high quality solutions in a few iterations \citep{Voronin17}, with $\lambda$ adaptively updated according to~\cite{Li18} using a normalized Rayleigh-quotient trace estimator~\citep{Avron11}. \end{itemize} \ifdefined \section{Methods} \else \section*{Methods} \fi \label{sec:METH} \ifdefined \subsection{Synthetic experiments} \else \subsection*{Synthetic experiments} \fi \label{sec:VADE} Our contributions are validated using a synthetic dataset built from a $T_2$ neonatal brain axial ground truth (GT) image without perceptible motion artifacts. This corresponds to a multi-slice TSE sequence acquired on a $3\,\mbox{T}$ \textsmaller{\textsc{Philips Achieva TX}} (same scanner as for in-vivo tests) using a $C=32$-element neonatal head coil array, $0.8\times 0.8\,\mbox{mm}$ in-plane resolution, $1.6\,\mbox{mm}$ slice thickness, echo time $T_{\text{E}}=145\,\mbox{ms}$, repetition time $T_{\text{R}}=12\,\mbox{s}$, and flip angle $\alpha=90^{\circ}$. Coil sensitivities were estimated from a separate reference scan~\citep{Allison13}. We use a 2D dataset and no regularization or outlier rejection for a concise presentation of results.
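The motion-compression refinement described above (piecewise constant approximation of motion traces by Haar truncation, followed by binning into effective states) can be sketched as follows. This is a minimal single-parameter illustration assuming a dyadic trace length and an averaging Haar convention; the helper names (`haar_compress_trace`, `bin_states`) are hypothetical and the actual threshold scaling and multi-parameter handling are not specified here.

```python
import numpy as np

def haar_compress_trace(trace, tau):
    """Approximate a motion parameter trace by a piecewise constant function:
    Haar-transform the trace, zero the detail coefficients with magnitude
    below tau, and invert the transform."""
    x = np.asarray(trace, dtype=float)
    n = x.size
    assert n & (n - 1) == 0, "dyadic length assumed for simplicity"
    # Forward averaging Haar transform, storing per-level thresholded details.
    details = []
    a = x.copy()
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / 2.0
        a = (a[0::2] + a[1::2]) / 2.0
        details.append(np.where(np.abs(d) >= tau, d, 0.0))  # truncation
    # Inverse transform from the coarsest approximation.
    for d in reversed(details):
        up = np.empty(2 * a.size)
        up[0::2] = a + d
        up[1::2] = a - d
        a = up
    return a

def bin_states(trace):
    """Group contiguous identical values into effective motion states,
    returning their values and starting indices."""
    vals, starts = [trace[0]], [0]
    for i in range(1, len(trace)):
        if trace[i] != vals[-1]:
            vals.append(trace[i]); starts.append(i)
    return vals, starts
```

For instance, a noisy trace with one genuine jump, `[1, 1.1, 0.9, 1, 4, 4.2, 3.8, 4]`, compresses under a moderate threshold to two effective states, so the reconstruction only pays for two motion transforms instead of eight.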
We assume that the simulated 2D k-space corresponds to the $k_2$-$k_3$ PE plane of 3D scans and expect the driving conclusions to extend to 3D, where estimates should in fact be easier thanks to the fully sampled readout direction. Simulations were conducted to compare the conventional sequential order to the various proposed schemes as well as to characterize their performance. The forward model in the presence of rigid motion is applied to the GT to generate synthetically motion corrupted data. Synthesized measurements are corrupted with noise levels corresponding to a mean SNR of $30\,$dB for reconstructions in the absence of motion or acceleration. Different degrees of motion are generated by drawing independent motion states uniformly at random on an interval of rotations $[-\theta/2,\theta/2]$ around the field of view (FOV) center. Satisfactory convergence in the presence of noise can be ascertained, on the assumption of an identifiable global optimum basin, by $r_{\hat{\mathbf{x}},\hat{\boldsymbol{\theta}}}\leq r_{\hat{\mathbf{x}},\boldsymbol{\theta}^{\ast}}$. In this case, the error in the motion parameters $\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^{\ast}$ is attributed to the uncertainty from the measurement noise and not to partial convergence. Note that we can generally achieve a lower loss for the joint problem ($r_{\hat{\mathbf{x}},\hat{\boldsymbol{\theta}}}$) than with knowledge of the motion parameters ($r_{\hat{\mathbf{x}},\boldsymbol{\theta}^{\ast}}$) due to the larger complexity of the former. Reconstructions are terminated when $r_{\hat{\mathbf{x}},\hat{\boldsymbol{\theta}}}\leq r_{\hat{\mathbf{x}},\boldsymbol{\theta}^{\ast}}$ and the abscissa scale of the convergence plots was chosen so that iterations translate directly into computational cost.
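The per-state rigid-motion forward model used to synthesize corrupted data can be illustrated with a minimal translation-only sketch: each phase-encode row is synthesized from the object displaced by the motion state active while that row was acquired, and a translation $\mathbf{s}$ maps to the linear phase $e^{-2\pi i\,\mathbf{k}\cdot\mathbf{s}}$ in k-space. The function name and row-wise segment assignment are hypothetical simplifications; the rotations actually used in the simulations additionally rotate the sampled k-space coordinates, which this sketch does not attempt.

```python
import numpy as np

def corrupt_kspace(gt, seg_of_row, shifts):
    """Translation-only rigid-motion forward model: row r of the measured
    spectrum comes from the object shifted by shifts[seg_of_row[r]]."""
    n1, n2 = gt.shape
    k1 = np.fft.fftfreq(n1)[:, None]   # cycles/pixel along the PE rows
    k2 = np.fft.fftfreq(n2)[None, :]
    F = np.fft.fft2(gt)                # motion-free spectrum
    y = np.empty_like(F)
    for r in range(n1):
        s = shifts[seg_of_row[r]]
        # shift theorem: translation by s multiplies the spectrum by a phase
        y[r] = F[r] * np.exp(-2j * np.pi * (k1[r, 0] * s[0] + k2[0] * s[1]))
    return y
```

With all shifts zero the output equals the motion-free spectrum, and dividing each row by its known phase recovers it exactly, which is the consistency the aligned reconstruction exploits when the motion parameters are estimated correctly.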
\ifdefined \subsection{In-vivo experiments} \else \subsection*{In-vivo experiments} \fi \label{sec:MATE} In-vivo experiments include the main families of volumetric sequences for brain MRI (see Table~\ref{tab:SEQU}). We have performed a controlled motion experiment on a consenting adult volunteer and applied the method to replace sedation in pediatric subjects scanned after written informed parental consent for an epilepsy study. Imaging is performed using a $32$-channel adult head coil. Data is acquired in the inferior-superior (IS) $k_1$, anterior-posterior (AP) $k_2$ and left-right (LR) $k_3$ orientation using our scanner implementation of the {\ber Random-checkered} traversal. In this way, the potentially strongest rotations, those in the sagittal plane, are captured by the $k_1$-$k_2$ coordinates, which may increase the resolvability of intra-shot motion. In addition, IS readouts make it easy to downweight additional motion sources within the FOV when estimating motion, by restricting the loss function to the superior part of the FOV ($2/3$ in our implementation). Finally, this orientation facilitates non-selective RF excitation pulses for shorter $T_{\text{R}}$. \ifdefined \input{table1} \fi In the controlled motion experiment, the volunteer was first scanned without deliberate motion, and then asked to perform extreme and continuous motion for the entire scan, which was repeated three times. Reconstructed volumes are jointly registered together for error comparisons. The pediatric cohort includes $26$ subjects ranging from $3$ to $19$ years old (mean$\pm$std of $12\pm 5$ years), typically with one MP-RAGE, TSE and FLAIR, two SPGRs and three bSSFPs acquired per subject, for an approximate total of $208$ tested volumes across all participants. The strongest artifacts in our data generally arise from motion, so for each modality the reported case has been chosen as the most artifacted after reconstruction without motion correction.
\ifdefined \subsection{Implementation details} \else \subsection*{Implementation details} \fi \label{sec:IMDE} In the in-vivo experiments sensitivities are compressed into a number of channels corresponding to a $10\%$ SNR loss. The number of resolution levels is defined as $L=\displaystyle\left\lfloor\log_2(4\,\mbox{mm}/\Delta_{\mathbf{y}}^{\text{min}})\right\rfloor+1$, with $\lfloor\cdot\rfloor$ denoting the largest integer less than or equal to its argument and $\Delta_{\mathbf{y}}^{\text{min}}$ the minimum of the voxel sizes along the different directions. As we use $2\times$ subsampling ratios, we operate at a minimum resolution of $4\,\mbox{mm}$. In the first iteration at level $l$, a soft-masked~\citep{Fuderer04} full CG reconstruction is run until the loss reduction saturates. Then, the method quickly alternates between reconstruction and motion correction using one CG and one LM iteration with heuristically updated damping and line search. We activate a flag for provisional convergence of the parameters of a given motion state when the maximum update is smaller than a threshold $\boldsymbol{\tau}_{\Delta\boldsymbol{\theta}}=\{0.05,0.02^{\circ}\mbox{mm}^{-1}\}\Delta_{\mathbf{y}^l}^{\text{max}}$, with the same values used for motion compression. This saves computations by considering motion updates only for non-converged parameters. However, this flag is reset to $0$ whenever $i=n(n-1)/2+1$ ($n\in\mathbb{N}_{>0}$) to account for the impact of the updated reconstructions on the motion parameter estimates. Joint convergence is declared when provisional convergence is reached for all motion states. Then, the method runs a full CG reconstruction with the consolidated motion parameters.
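The two scheduling rules above, the level count $L$ and the increasingly spaced flag-reset iterations $i=n(n-1)/2+1$, can be sketched as follows (hypothetical helper names, a minimal illustration rather than the actual implementation):

```python
import math

def num_levels(voxel_sizes_mm):
    # L = floor(log2(4 mm / min voxel size)) + 1: with 2x subsampling per
    # level, the coarsest scale operates at (at most) 4 mm resolution.
    return math.floor(math.log2(4.0 / min(voxel_sizes_mm))) + 1

def reset_iterations(i_max):
    # Provisional-convergence flags are reset whenever i = n(n-1)/2 + 1,
    # n = 1, 2, ..., i.e. at increasingly spaced joint iterations.
    out, n = [], 1
    while n * (n - 1) // 2 + 1 <= i_max:
        out.append(n * (n - 1) // 2 + 1)
        n += 1
    return out
```

For the synthetic dataset's $0.8\times 0.8\times 1.6\,\mbox{mm}$ voxels this gives $L=3$ levels, and within the first $11$ joint iterations the flags are reset at iterations $1$, $2$, $4$, $7$ and $11$, so early iterations revisit all motion states frequently while later ones mostly refine the laggards.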
If regularized outlier rejection reconstructions are activated, artifacted segments are rejected at levels $l$ such that $\Delta_{\mathbf{y}^l}^{\text{max}}\leq 2\,\mbox{mm}$ by densely sampling within $[c(1),c(B)]=0.5+(0.35/\max(\Delta_{\mathbf{y}^l}^{\text{max}},1))[-1,1]$ and using $\tau_w=0.05$. If regularization is applied, shearlets are designed based on~\cite{Kutyniok16} and a final reconstruction is launched with $3$ CG iterations within $2$ updates of the IRWLS-induced cost function and $\lambda$. Reconstructions are performed on an $8(16)\times$ \textsmaller{\textsc{Intel(R) Core(TM) i7-5960X}} $3.00\,\mbox{GHz}$ CPU, $64\,\mbox{GB}$ RAM, \textsmaller{\textsc{GeForce GTX TITAN X}} GPU. For further implementation details, readers can refer to the source code. \ifdefined \section{Results} \else \section*{Results} \fi \label{sec:RESU} \ifdefined \subsection{Validation} \else \subsection*{Validation} \fi In Fig.~\ref{fig:VACO} we compare different simulated reconstruction scenarios, showing the losses when iterating the method, $r_{\hat{\mathbf{x}}^{(i+1)},\hat{\boldsymbol{\theta}}^{(i)}}$, as solid colored lines, with joint iterations represented by markers. Losses in the convergence plots are normalized to the minimum of the reference levels $r_{\hat{\mathbf{x}},\boldsymbol{\theta}^{\ast}}$, which are shown as dashed lines that strongly overlap for the different alternatives. Fig.~\ref{fig:VARE} includes reconstructions with and without motion correction for different reconstruction scenarios and provides absolute value error maps and mean SNR for the compared cases.
\ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.99\textwidth,draft=false]{Fig05/fig01}\put(0,0){\textbf{a)}}\end{overpic}\vspace{2mm} \begin{overpic}[width=0.99\textwidth,draft=false]{Fig05/fig02}\put(0,0){\textbf{b)}}\end{overpic}\vspace{2mm} \begin{overpic}[width=0.99\textwidth,draft=false]{Fig05/fig03}\put(0,0){\textbf{c)}}\end{overpic} \end{center} \input{caption5} \end{figure} \fi \ifdefined \begin{figure}[!htb] \begin{overpic}[width=0.997\textwidth,draft=false]{Fig06/fig01}\end{overpic}\vspace{2mm}\\ \begin{overpic}[width=\textwidth,draft=false]{Fig06/fig02}\put(0,-4){\textbf{a)}}\footnotesize{ \put(9.2593,-4){\makebox[0pt]{\Centerstack{No motion\\$\mbox{SNR}=29.99\,\mbox{dB}$}}} \put(27.7778,-4){\makebox[0pt]{\Centerstack{Uncorrected\\{\ber Sequential}\\$\mbox{SNR}=12.72\,\mbox{dB}$}}} \put(46.2963,-4){\makebox[0pt]{\Centerstack{Corrected\\{\ber Sequential}\\$\mbox{SNR}=13.37\,\mbox{dB}$}}} \put(64.8148,-4){\makebox[0pt]{\Centerstack{Uncorrected\\{\ber Random-checkered}\\$\mbox{SNR}=14.51\,\mbox{dB}$}}} \put(83.3333,-4){\makebox[0pt]{\Centerstack{Corrected\\{\ber Random-checkered}\\$\mbox{SNR}=28.40\,\mbox{dB}$}}} }\end{overpic}\vspace{14mm} \begin{overpic}[width=0.997\textwidth,draft=false]{Fig06/fig03}\end{overpic}\vspace{2mm}\\ \begin{overpic}[width=\textwidth,draft=false]{Fig06/fig04}\put(0,-4){\textbf{b)}}\footnotesize{ \put(7.8182,-4){\makebox[0pt]{\Centerstack{$R=1\times 1$\\No motion\\$\mbox{SNR}=29.98\,\mbox{dB}$}}} \put(23.4545,-4){\makebox[0pt]{\Centerstack{$R=1\times 1$\\Known motion\\$\mbox{SNR}=28.42\,\mbox{dB}$}}} \put(39.0909,-4){\makebox[0pt]{\Centerstack{$R=1\times 1$\\Estimated motion\\$\mbox{SNR}=28.40\,\mbox{dB}$}}} \put(54.7273,-4){\makebox[0pt]{\Centerstack{$R=2\times 2$\\No motion\\$\mbox{SNR}=20.78\,\mbox{dB}$}}} \put(70.3636,-4){\makebox[0pt]{\Centerstack{$R=2\times 2$\\Known motion\\$\mbox{SNR}=16.42\,\mbox{dB}$}}} \put(86,-4){\makebox[0pt]{\Centerstack{$R=2\times 2$\\Estimated 
motion\\$\mbox{SNR}=15.02\,\mbox{dB}$}}} }\end{overpic}\vspace{10mm} \input{caption6} \end{figure} \fi \ifdefined \subsubsection{Encoding orders} \else \subsubsection*{Encoding orders} \fi Fig.~\ref{fig:VACO}a compares the {\ber Sequential}, {\ber Checkered}, {\ber Random-checkered} and {\ber Random} traversals. Global convergence is achieved for all considered $M$ and $\theta$ when using any of the {\ber Checkered}, {\ber Random-checkered} or {\ber Random} traversals. In contrast, when using the {\ber Sequential} order, the method converges to a local optimum or fails to converge within the prescribed iterations except for $\theta\in\{2^{\circ},5^{\circ}\}$ / $M=4$. The loss at the first iteration $r_{\hat{\mathbf{x}}^{(1)},\hat{\boldsymbol{\theta}}^{(0)}}$ is always larger when using non-sequential traversals. This increased inconsistency in the measurement domain reflects the sensitivity of the aligned reconstruction to motion degradation. Fig.~\ref{fig:VARE}a shows reconstructions and errors with and without motion correction for the {\ber Sequential} and {\ber Random-checkered} traversals together with GT motion-free reconstructions. Motion corrected reconstructions using the {\ber Random-checkered} data appear similar to the GT despite the strong blurring in uncorrected reconstructions. This is confirmed by the lack of perceptible structure in the residuals and a moderate noise amplification. In contrast, corrections using the {\ber Sequential} traversal provide only a modest visual benefit. \ifdefined \subsubsection{Multiresolution} \else \subsubsection*{Multiresolution} \fi Fig.~\ref{fig:VACO}b compares the {\ber Checkered}, {\ber Random-checkered} and {\ber Random} traversals when using a single scale for joint motion estimation and reconstruction ($L=1$) and when first approximating the motion solution at half the acquired resolution to initialize the joint problem at full resolution ($L=2$).
The {\ber Sequential} traversal was excluded because, as discussed when introducing the multiresolution strategy, it has no opportunity to improve from the poor relative performance shown in Fig.~\ref{fig:VACO}a by exploiting multiresolution. Plots also include $r_{\hat{\mathbf{x}},\boldsymbol{\theta}^{\ast}}$ at the coarse scale. Global convergence is achieved for all traversals at all considered configurations except at $M=256$ / $\theta=20^{\circ}$. However, the multiresolution strategy ($L=2$) achieves global convergence in fewer iterations or provides a solution with lower residuals ($M=256$ / $\theta=20^{\circ}$). For moderate levels of motion, convergence is generally quick. For instance, it takes approximately $10$ joint iterations $i$ when using the {\ber Random-checkered} traversal in a case where random excursions of up to $\theta=10^{\circ}$ are imposed in every one of the $M=256$ segments, probably a more challenging scenario than expected in practice. \ifdefined \subsubsection{Acceleration} \else \subsubsection*{Acceleration} \fi Fig.~\ref{fig:VACO}c tests the ability of the {\ber Checkered}, {\ber Random-checkered} and {\ber Random} traversals (using $L=2$ scales) to operate in uniformly accelerated regimes as given by different acceleration factors $R$. We observe convergence to the global solution in all tested scenarios apart from the {\ber Random} traversal at $R=2\times 2$ / $M=16$ / $\theta=20^{\circ}$. Considering all conducted simulations, the {\ber Random-checkered} traversal generally provides the quickest solutions. Fig.~\ref{fig:VARE}b shows an example of reconstructions and errors in the absence of motion, with known motion and with estimated motion at $R=1\times 1$ and $R=2\times 2$. The SNR figures for $R=1\times 1$ and known motion show a degradation of $1.56\,\text{dB}$ with respect to the reference due to noise amplification from non-uniform effective k-space sampling after motion.
No further degradation is introduced by motion estimation errors, as approximately the same SNR figures are obtained for known and estimated motion. $R=2\times 2$ acceleration in the absence of motion introduces a degradation of $9.20\,\text{dB}$ with respect to the reference, which stems from the reduced number of samples and the g-factor~\citep{Pruessmann99}. The presence of motion adds a further degradation quantified as $4.36\,\text{dB}$, thus stronger than in the non-accelerated case. Therefore, without regularization, the limiting reconstruction quality in the presence of motion decreases with larger distances between neighboring k-space points. Finally, accelerating the scan also has an impact on the uncertainty of motion estimates, as we observe a degradation of $1.40\,\text{dB}$ from known to estimated motion, although the errors show no perceptible structure. \ifdefined \subsection{Redundancy for motion tolerance} \else \subsection*{Redundancy for motion tolerance} \fi \label{sec:RERE} Fig.~\ref{fig:READ} compares reconstructions without correction, with inter-shot corrections and when activating intra-shot corrections in the presence of extreme motion during in-vivo data acquisition. Intra-shot corrections are triggered by subsequent temporal binary subdivisions of the sampled information within each shot until $16$ motion states are estimated per shot. We show reconstructions without deliberate motion (GT reconstructions), and reconstructions and absolute differences with respect to the GT using $Q=1$, $Q=2$ and $Q=3$ repeats under extreme motion (Extreme motion reconstructions / errors, $Q=\{1,2,3\}$). Results for $Q=1$ and $Q=2$ correspond to the first repeats, with no remarkable differences observed when choosing any other combination. Reconstructions are provided without regularization or outlier rejection.
Results without deliberate motion show that inter- and intra-shot corrections do not reduce the reconstruction quality, demonstrating that generalized reconstructions can be safely applied in the absence of motion. Degradation is noticeable for uncorrected reconstructions in the presence of motion for all values of $Q$, although with less coherent ghosts as $Q$ increases due to incoherent blurring by {\ber Random-checkered} motion averaging. Inter-shot corrections increase the reconstruction quality in all cases, with more finely resolved cortical structures as $Q$ increases but with noticeably inferior quality compared to the case without deliberate motion. Residual degradation is only partially accounted for when using intra-shot corrections on a single repeat, but can be more satisfactorily addressed with $Q=2$ and even more with $Q=3$. Namely, the level of deblurring in the fourth and sixth columns of Fig.~\ref{fig:READ}c makes the corresponding reconstructions visually comparable to those of the first column despite the extreme and continuous motion (estimated excursions up to $25^{\circ}$). Thus, we can reason that strong tolerance is achieved for $R=2\times 2$ and $Q=2$, so that $Q=1$ with acceleration $R=\sqrt{2}\times\sqrt{2}$\footnote{Both alternatives involve the same scanning time but the latter generates a lower g-factor. The former was used in this experiment because it was more convenient for our scanner implementation of the traversals.} may be adequate for motion tolerance in practice, which has been used to guide the acceleration in the pediatric cohort (see Table~\ref{tab:SEQU}). However, in contrast to computation times of $2\,\mbox{min}$ (non-deliberate motion) and $11\,\mbox{min}$ (extreme motion, $Q=3$) for inter-shot corrections, the corresponding intra-shot corrections required $21\,\mbox{min}$ and $20\,\mbox{h}\,36\,\mbox{min}$. Thus, despite being technically feasible, intra-shot corrections may have limited applicability due to computational costs.
The increase of computational cost with the complexity of motion is due to the larger number of iterations needed for convergence and to the proposed motion compression strategy, with $13/30$ binned inter-shot motion states without deliberate motion and $30/30$ with extreme motion ($Q=1$), with proportional savings in the reconstruction steps in the former. \ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig01-01}\put(1,2){\textcolor{white}{\textbf{a)}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig02-01}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig03-01}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig04-01}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig05-01}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig06-01}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig07-01}\end{overpic}\\ \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig01-02}\put(1,2){\textcolor{white}{\textbf{b)}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig02-02}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig03-02}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig04-02}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig05-02}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig06-02}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig07-02}\end{overpic}\\ \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig01-03}\put(1,2){\textcolor{white}{\textbf{c)}}}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{GT\\reconstructions}}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig02-03}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{Extreme motion\\reconstructions\\$Q=1$}}}}\end{overpic}
\begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig03-03}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{Extreme motion\\errors\\$Q=1$}}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig04-03}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{Extreme motion\\reconstructions\\$Q=2$}}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig05-03}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{Extreme motion\\errors\\$Q=2$}}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig06-03}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{Extreme motion\\reconstructions\\$Q=3$}}}}\end{overpic} \begin{overpic}[width=0.138\textwidth,draft=false]{Fig07/fig07-03}\footnotesize{\put(42.85,-26){\makebox[0pt]{\Centerstack{Extreme motion\\errors\\$Q=3$}}}}\end{overpic} \\\vspace{6mm} \end{center} \input{caption7} \end{figure} \fi \ifdefined \subsection{Non-compliant subjects} \else \subsection*{Non-compliant subjects} \fi \label{sec:INVI} Fig.~\ref{fig:REIN} shows worst-case reconstructions without motion correction, with motion correction alone and with motion correction and the regularized outlier segment rejection. Results are shown for the main structural brain MRI modalities: MP-RAGE, TSE, FLAIR, SPGR and bSSFP. In all sequences we observe a substantial improvement when activating motion-corrected reconstructions alone, with better delineated cortical structures. However, subtle artifacts are still present, either in the form of ghosts or of coloured noise. Fig.~\ref{fig:REIN}c shows that quality can be further improved by rejecting the least consistent segments and performing a regularized reconstruction. In some sequences discarding the artifacted segments seems to reduce residual artifacts from uncorrected fast motion (see for instance fine details in SPGR) while in others it seems to mainly improve the magnetization consistency (see TSE contrast).
Across the cohort we have observed that motion artifact levels always decrease when compensating for motion, with no remarkable differences when activating the corrections in the absence of artifacts. This is in line with the quantitative population metrics obtained for the less favourable sequential sampling~\citep{Cordero-Grande16} or for multi-slice scans~\citep{Cordero-Grande18}. The worst-case results of Fig.~\ref{fig:REIN} have been judged satisfactory by the practitioners and researchers involved in the project. Therefore, the proposed methodology delivers reliable examinations for unsedated pediatric subjects who struggle to comply with the MRI motion requirements. In this experiment motion estimates were performed at half the acquired resolution, with joint motion estimation and reconstruction always taking less time than the final reconstructions at full resolution. Computation times range between $5\,\mbox{min}$ for the least artifacted and $40\,\mbox{min}$ for the most artifacted volumes in our cohort. Estimated motion traces and outlier segments are reported in \textsl{Supporting Information} Fig.~S1.
\ifdefined \begin{figure}[!htb] \begin{center} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig01-01}\put(1,2){\textcolor{white}{\textbf{a)}}}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig02-01}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig03-01}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig04-01}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig05-01}\end{overpic}\\ \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig01-02}\put(1,2){\textcolor{white}{\textbf{b)}}}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig02-02}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig03-02}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig04-02}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig05-02}\end{overpic}\\ \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig01-03}\put(1,2){\textcolor{white}{\textbf{c)}}}\footnotesize{\put(37.5,-9){\makebox[0pt]{\Centerstack{MP-RAGE}}}}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig02-03}\footnotesize{\put(37.5,-9){\makebox[0pt]{\Centerstack{TSE}}}}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig03-03}\footnotesize{\put(37.5,-9){\makebox[0pt]{\Centerstack{FLAIR}}}}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig04-03}\footnotesize{\put(37.5,-9){\makebox[0pt]{\Centerstack{SPGR}}}}\end{overpic} \begin{overpic}[width=0.195\textwidth,draft=false]{Fig08/fig05-03}\footnotesize{\put(37.5,-9){\makebox[0pt]{\Centerstack{bSSFP}}}}\end{overpic}\\ \vspace{0mm} \end{center} \input{caption8} \end{figure} \fi \ifdefined \section{Discussion} \else \section*{Discussion} \fi \label{sec:DISC} We have presented DISORDER, a retrospective framework for motion-tolerant structural 3D k-space encoded brain imaging that combines optimized view orders with an improved
aligned reconstruction. The proposed distributed and incoherent orders increase the motion sensitivity of the information sampled within a given time window, which, provided a certain degree of redundancy, enables the resolvability of motion in the reconstruction. Our simulations have shown that reordering the k-space traversals significantly boosts the ability to estimate the head pose and suppress motion artifacts. Tolerance to motion has been demonstrated in-vivo in a controlled experiment involving extreme and continuous motion throughout the examination, as well as for the main families of sequences used for structural brain imaging, by presenting reconstruction results on the most challenging datasets from a pediatric cohort of $26$ subjects. Although DISORDER is robust enough in its current form to be of practical interest for reliable structural brain MR examinations in non-compliant cohorts, with plans in our center to use it to progressively replace unnecessary sedation or anesthesia in pediatric and neonatal populations~\citep{Barton18}, it is not free from limitations. First, data consistency may be affected by additional degrading factors. These include inaccuracies in sensitivities but also water-fat shifts, eddy currents or flow artifacts. In practice, applying fat suppression when possible, designing the tiles for adequate trade-offs between eddy currents and motion resolvability in bSSFP sequences, and adequate planning and scanning procedures are usually sufficient to address these issues. In contrast, correction of non-rigid motion components would require an extension of the formulation. Although analogous methodologies~\citep{Loktyushin15} have shown potential in this context, a robust and efficient extension to non-rigid motion models will probably require a careful computational design.
This may be particularly the case for high-resolution applications, where both rigid and non-rigid motion become more important and challenging to correct~\citep{Budde14}. Moreover, coarse-scale motion at ultra-high field may require additional corrections of higher-order effects. Finally, in this manuscript we have restricted ourselves to uniform sampling, with further work required to generalize the incoherent and distributed orders and to characterize motion correction and resolution retrieval when using variable densities. In the in-vivo experiments of Fig.~\ref{fig:REIN} we have shown that inter-shot corrections can be sufficient in practical brain imaging scenarios requiring motion tolerance. Our underlying assumption is that the subject remains approximately still for a significant portion of the acquisition. In this case, inter-shot corrections are enough to reconcile the brain pose amongst the stable periods, and data rejection can be applied to the transitions, again, provided that sampling is redundant enough. However, intra-shot corrections may become more important in challenging situations, as illustrated in Fig.~\ref{fig:READ}. Despite its computational limitations, our method is able to provide stable intra-shot estimates in the absence of motion while offering some motion correction potential. Although a prior model for the temporal evolution of motion may aid in certain applications, in general, limitations arising from available computational resources and SNR per motion state are likely to complicate intra-shot tractability. The situation may perhaps be different if using supervised learning strategies to inform the exploration of the motion parameter space. These may help to improve the spatio-temporal resolvability of motion by aiding the intra-shot corrections to find better motion solutions.
Training may also help to enlarge the motion capture range at a given level of redundancy, or to decrease the required level of redundancy for a given motion capture range. Although direct learning of motion-corrected reconstructions could also be attempted, it is likely that, in many circumstances, better results will be obtained when concatenating learned reconstructions with model-based strategies, as recently suggested in~\cite{Haskell19}. Further integration of both approaches could be tackled, for instance, by incorporating the motion operator into the model-based learning framework in~\cite{Aggarwal19}, which may be effective in dealing with the residual penalties from g-factor amplification due to motion (see Fig.~\ref{fig:VARE}b). Therefore, future work will explore the opportunities for extending the ranges of motion resilience by supervised learning. \ifdefined \section{Conclusion} \else \section*{Conclusion} \fi \label{sec:CONC} We have proposed a simple modification of standard 3D Cartesian sequences for structural brain imaging, involving only a distributed and incoherent reordering of the sampled profiles, for high quality imaging in the presence of motion. Improved convergence has been demonstrated when using a separable nonlinear least squares formulation for joint motion estimation and reconstruction. Feasibility and conditions for inter- and intra-shot corrections have been characterized by simulations and in-vivo reconstructions under extreme motion. The DISORDER method has been successfully applied to replace sedation in a pediatric population scanned using common clinical examination protocols by combining inter-shot corrections with regularized outlier segment rejection reconstructions. Future work will focus on applying DISORDER to other cohorts and on strengthening its performance by integrating motion learning strategies.
\ifdefined \section{Supporting information} \else \section*{Supporting information} \fi \label{sec:SUMA} Fig.~S1a collects the estimated motion traces for the cases in \ifdefined\external Fig.~8. \else Fig.~\ref{fig:REIN}. \fi Although no temporal regularization is used, all traces show periods of stability, which suggests accurate estimates, at least in these periods. In Fig.~S1b the opacity of the traces is driven by the segment weights from the proposed outlier detection method. We observe that outliers generally correspond to the main motion transients, in agreement with the higher likelihood of intra-sweep degradation in these periods. \ifdefined \begin{figure*}[!htb] \begin{center} \hspace{8mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig01-01}\put(-16,28){\makebox[0pt]{\Centerstack{MP-RAGE}}}\end{overpic} \hspace{2mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig01-02}\end{overpic}\\ \hspace{8mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig02-01}\put(-16,28){\makebox[0pt]{\Centerstack{TSE}}}\end{overpic} \hspace{2mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig02-02}\end{overpic}\\ \hspace{8mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig03-01}\put(-16,28){\makebox[0pt]{\Centerstack{FLAIR}}}\end{overpic} \hspace{2mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig03-02}\end{overpic}\\ \hspace{8mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig04-01}\put(-16,28){\makebox[0pt]{\Centerstack{SPGR}}}\end{overpic} \hspace{2mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig04-02}\end{overpic}\\ \hspace{8mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig05-01}\put(-16,28){\makebox[0pt]{\Centerstack{bSSFP}}}\put(1,-6){\textbf{a)}}\end{overpic} \hspace{2mm}\begin{overpic}[width=0.41\textwidth,draft=false]{Fig09/fig05-02}\put(1,-6){\textbf{b)}}\end{overpic} \end{center} \vspace{1mm} \input{caption9} \end{figure*} \section*{Acknowledgments} \ifdefined This work received funding
from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013/ERC, grant agreement no. [319456], dHCP project). The research was supported by the Wellcome/EPSRC Centre for Medical Engineering at King's College London [WT 203148/Z/16/Z]; the Medical Research Council [MR/K006355/1]; and the National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust and King's College London. Jonathan O'Muircheartaigh is supported by a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society [206675/Z/17/Z]. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. \fi The authors acknowledge the Department of Perinatal Imaging \& Health at King's College London.
\section{Introduction} Network data science has traditionally focused on studies capturing two-way interactions or connections between pairs of vertices or agents in networks. In this context, the problems of interest have been to identify heterogeneous and power law vertex degree distributions (e.g., determine if the networks are scale-free) as well as dense subgraphs and cliques, and efficiently detect and isolate community structures~\citep{newman2003structure,barabasi1999emergence,watts1998collective}. It has by now become apparent that many aspects of relational organization, functionality and the evolving structure of a complex network can only be understood through higher-order subgraph (motif) interactions involving more than two vertices~\citep{milo2002network,shen2002network,mangan2003structure,honey2007network,alon2007network,porter2009communities,benson2016higher,yaverouglu2014revealing}. Certain subgraphs in networks function as fundamental units of control and regulation of network communities and dynamics: for example, network motifs are crucial regulators in brain networks~\citep{sporns2004motifs,park2013structural,battiston2017multilayer}, transcriptional regulatory networks~\citep{mangan2003structure}, food webs \citep{paulau2015motif,li2017inhomogoenous}, social networks~\citep{girvan2002community,snijders2001statistical} and air traffic networks~\citep{rosvall2014memory,benson2016higher}. Traditionally, statistical and algorithmic work on network motifs has been concerned with discovering and counting the frequency of over-expressed subgraphs (which are usually determined in comparison with some statistical null model) in various real world networks~\citep{alon2007network,klusowski2018counting}. Indeed, frequency distributions or spectra of motifs have been shown to provide useful information about the regulatory and dynamic organization of networks obtained from disparate sources. 
Network motifs have also recently been used to perform learning tasks such as community detection~\citep{benson2016higher,li2017inhomogoenous,tsourakakis2017scalable}. A parallel line of work has focused on identifying communities in hypergraphs and was reported in~\citet{zhou2006learning,angelini2015spectral,kim2017community,ghoshdastidar2017consistency,chien2018community}. Unfortunately, existing random graph models with community structures based on Erd\"os-R\'enyi random graphs \citep{er60}, such as the Stochastic Block Models~\citep{hll83,sn97,bc09,cwa12,rqf12, cdp11,rcy11,qr13,j15,lei2015consistency,decelle2011asymptotic,hajek2016achieving,abbe2015community,gao2017achieving}, their degree-corrected versions~\citep{kn11,zlz12}, and other extensions fail to produce graphs with strong local clustering, i.e., with over-abundant triangles and other relevant higher-order structures. To investigate community structures in terms of particular subgraphs and determine under which conditions they can be recovered or detected, one needs to consider more versatile community structure models. To address the aforementioned problem, a number of more realistic network models with some of the desired motif structures have been proposed in the literature; however, most such models are not mathematically tractable in general or in the context of community detection due to dependencies among the edges~\citep{bollobas2011sparse}. Notable exceptions include the mathematically tractable random graph model with local clustering and dependences among edges proposed in~\cite{bollobas2011sparse}. There, the authors constructed random graphs by superimposing small subgraphs and edges, thereby introducing dependencies among subsets of vertices. More specifically, they constructed an inhomogeneous random hypergraph with conditionally independent hyperedges, and then replaced each hyperedge by a complete graph over the same set of vertices. 
A similar model, termed the Subgraph Generation Model (SUGM), was proposed in~\cite{chandrasekhar2014tractable,chandrasekhar2016network}. More recently,~\citet{hajek2018recovering} analyzed a variation of the preferential attachment model with community structure and proposed a message passing algorithm to recover the communities. In parallel, a geometric block model that uses Euclidean latent space geometric graphs instead of the usual Erd\"os-R\'enyi graphs for the mixture components was introduced in~\citet{galhotra2017geometric,galhotra2018connectivity}. Although all these models capture some aspects of real-life networks and introduce controlled dependencies among the edges in the graphs, they fail to provide a general approach for combining dependent motif structures and analytical techniques that highlight whether communities should be identified through pairwise or higher-order interactions. Our contributions are two-fold. First, we propose a new Superimposed Stochastic Block Model (SupSBM), a random graph model for networks with community structure obtained by generalizing the framework of~\cite{chandrasekhar2014tractable} and~\cite{bollobas2011sparse} to account for communities akin to the classical SBM. SupSBM captures the most relevant aspects of higher-order organization of the datasets under consideration, e.g., it incorporates triangles and other motifs, but couples them through edges that may be viewed as noise in the motif-based graphs. The community structure of interest may be present either at a higher-order structural level only, or both at the level of higher-order structures and edges. Drawing parallels with the classical SBM, which is a mixture of Erd\"os-R\'enyi graphs, SupSBM may be viewed as a mixture of superimposed inhomogeneous random graphs generated according to the process described in~\cite{chandrasekhar2014tractable} and~\cite{bollobas2011sparse}.
Second, we derive theoretical performance guarantees for higher-order spectral clustering methods~\citep{benson2016higher,tsourakakis2017scalable} applied to the SupSBM. The main difference between our analysis and previous lines of work on spectral algorithms for the SBM \citep{rcy11,lei2015consistency, gao2017achieving, chin2015stochastic, vu13} and the hypergraph SBM \citep{ghoshdastidar2017consistency, kim2017community,chien2018community} is that the elements of the analogues of adjacency matrices in our analysis are dependent. We derive several non-asymptotic upper bounds on the spectral norms of such generalized adjacency matrices; these results are of independent interest in other areas of network analysis. For this purpose, we express the spectral norms as sums of polynomial functions of independent random variables. The terms in the sums are dependent; however, any given term depends only on a small fraction of the other terms. We exploit this behavior to carefully control the effects of such dependence on the functions of interest. We use recent results on polynomial functions of independent random variables~\citep{boucheron2013concentration,kim2000concentration,janson2004deletion}, typical bounded differences inequalities~\citep{warnke2016method} and Chernoff-style concentration inequalities under limited dependence~\citep{warnke2017upper} to complete our analysis. In addition, we derive a number of corollaries implying performance guarantees for higher-order spectral clustering under the classical stochastic block model and the hypergraph stochastic block model. The analysis of the non-uniform hypergraph SBM also reveals interesting results regarding the benefits of using ordinary versus higher-order spectral clustering methods on random hypergraphs. The remainder of the article is organized as follows. Section~\ref{sec:super} defines superimposed random graph models and then develops the Superimposed Stochastic Block Model (SupSBM).
Section~\ref{sec:analysis} presents a non-asymptotic analysis of the misclustering rate of higher-order spectral clustering under the SupSBM. Some real world network examples are discussed in Section~\ref{sec:data}. The Appendix contains proofs of all the theorems and many auxiliary lemmas used in the derivations. \section{Superimposed random graph and block models} \label{sec:super} We start our analysis by defining what we refer to as an \emph{inhomogeneous superimposed random graph model,} which is based on the random graph models described in~\citet{bollobas2011sparse,chandrasekhar2014tractable}. We then proceed to introduce a natural extension of the stochastic block model in which the community components are superimposed random graphs. Our main focus is on models that superimpose edges and triangles, as these are prevalent motifs in real social and biological networks \citep{alon2007network,benson2016higher,li2017inhomogoenous,laniado2016gender}. However, as discussed in subsequent sections, the superimposed SBM can be easily extended to include other superimposed graph structures. Formally, the proposed random graph model, denoted by $G_s(n,P^e,\mathbb{P}^t)$, is a superimposition of a classical dyadic (edge-based) random graph $G_e(n,P^e)$ and a triadic (triangle-based) random graph $G_t(n, \mathbb{P}^t)$. In this setting, $n$ denotes the number of vertices in the graph, $P^e$ denotes an $n \times n$ matrix whose $(i,j)$th entry equals the probability of an edge in $G_e$ between the vertices $i$ and $j$, and $\mathbb{P}^t$ denotes a $3$-way ($3$rd order) $n \times n \times n$ tensor whose $(i,j,k)$th element equals the probability of a triangle involving the vertices $(i,j,k)$ in $G_t$. A random graph from the model $G_s(n,P^e,\mathbb{P}^t)$ is generated as follows. One starts with $n$ unconnected vertices. 
The $G_t(n, \mathbb{P}^t)$ graph is generated by creating triangles ($3$-hyperedges) for each of the $\dbinom{n}{3}$ $3$-tuples of vertices $(i,j,k)$ according to the outcomes of \emph{independent} Bernoulli random variables $T_{ijk}$ with parameter $p^t_{ijk}=(\mathbb{P}^t)_{ijk}$. The hyperedges are consequently viewed as triangles, which results in a loss of their generative identity. Note that this process may lead to multi-edges between pairs of vertices $i$ and $j$ if these are involved in more than one triangle. The multi-edges in the graph $G_t$ are collapsed into single edges. Each pair of vertices $(i,j)$ remains part of all its constituent triangles, as before the merging procedure. Next, the graph $G_e(n,P^e)$ is generated by placing edges between the $\dbinom{n}{2}$ pairs of vertices $(i,j)$ according to the outcomes of independent Bernoulli random variables $E_{ij}$ with parameter $p^e_{ij}=(P^e)_{ij}$. Note that this is simply the usual inhomogeneous random graph model~\citep{bollobas2007phase}, which may be viewed as a generalization of the Erd\"os-R\'enyi model in which the probabilities of individual edges are allowed to be nonuniform. The two independently generated graphs are then superimposed to arrive at $G_s(n,P^e,\mathbb{P}^t)$. The graph generation process is depicted by an example in Figure~\ref{superimposed}. Observe that the superimposed graph is allowed to contain multi-edges (or, more precisely, exactly two edges) between two vertices if and only if those vertices are involved both in at least one triangle in $G_t$ and in an edge in $G_e$. A practical justification for this choice of a multi-edge model comes from the fact that pair-wise and triple-wise affinities often provide complementary information\footnote{For example,~\cite{laniado2016gender} studied gender patterns in dyadic and triadic ties in an online social network and found different degrees of gender homophily in different types of ties.
Hence, instead of duplicating evidence from the same source, we retain two parallel edges in the graph only if they reinforce the information provided by each other.}. Clearly, the resulting graph $G_s$ has dependencies among its edges and strong local clustering properties for properly chosen tensors $\mathbb{P}^t$ due to the increased presence of triangles. Furthermore, we would like to point out that this inhomogeneous superimposed random graph model differs in a number of important ways from the non-uniform hypergraph random graph models on which the non-uniform hypergraph SBM, analyzed by~\cite{ghoshdastidar2017consistency,chien2018community} and others, is based. First, our model captures networks in which we cannot differentiate between an ``ordinary'' edge and a hyperedge, as hyperedges simply appear as higher-order structures in the graph. In contrast, the non-uniform hypergraph SBM is a model for networks in which different types of hyperedges are distinguishable during the observation process and labelled. Hence, a major technical difficulty of analyzing methods under the SupSBM is to deal with edge dependencies, which are not present in the non-uniform hypergraph SBM. Second, we collapse all multi-edges generated in the hyperedge generation process into single edges, which is more realistic for observable network interactions. We do, however, allow for double edges if there is complementary evidence of both dyadic and triadic ties.
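As a concrete illustration, the two-stage generation of $G_s(n,P^e,\mathbb{P}^t)$ described above can be sketched in a few lines. The function name and the functional representation of $P^e$ and $\mathbb{P}^t$ below are ours, chosen purely for illustration:

```python
import itertools
import random

def sample_superimposed_graph(n, pe, pt, seed=0):
    """Sample from G_s(n, P^e, P^t): pe(i, j) and pt(i, j, k) return the
    entries of the matrix P^e and the tensor P^t, respectively."""
    rng = random.Random(seed)

    # G_t: independent Bernoulli(p^t_{ijk}) triangle hyperedges over all
    # 3-tuples of vertices ...
    triangles = [t for t in itertools.combinations(range(n), 3)
                 if rng.random() < pt(*t)]
    # ... after which multi-edges are collapsed into single edges.
    t_edges = {frozenset(p) for t in triangles
               for p in itertools.combinations(t, 2)}

    # G_e: independent Bernoulli(p^e_{ij}) dyadic edges.
    e_edges = {frozenset(p) for p in itertools.combinations(range(n), 2)
               if rng.random() < pe(*p)}

    # Superimposition: a pair carries a double edge iff it appears both in
    # a triangle of G_t and as an edge of G_e.
    multiplicity = {e: (e in t_edges) + (e in e_edges)
                    for e in t_edges | e_edges}
    return triangles, t_edges, e_edges, multiplicity
```

In the homogeneous case, `pe = lambda i, j: p_e` and `pt = lambda i, j, k: p_t` recover the Erd\"os-R\'enyi-type special case mentioned next.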
\begin{figure}[h] \centering \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=\linewidth]{triadgraph} \end{subfigure}% \begin{subfigure}{0.05\textwidth} \centering \includegraphics[width=\linewidth]{Slide2.jpg} \end{subfigure}% \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=\linewidth]{triadcollapsed} \end{subfigure}% \begin{subfigure}{0.05\textwidth} \centering \includegraphics[width=\linewidth]{Slide1.jpg} \end{subfigure}% \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=\linewidth]{dyadgraph} \end{subfigure}% \begin{subfigure}{0.05\textwidth} \centering \includegraphics[width=\linewidth]{Slide3.jpg} \end{subfigure}% \begin{subfigure}{0.16\textwidth} \centering \includegraphics[width=\linewidth]{fullgraph} \end{subfigure} \begin{center} (a) \hspace{70pt} (b) \hspace{70pt} (c) \hspace{70pt} (d) \end{center} \caption{(a) A realization of the graph $G_t$ with $n=7$ vertices, before multi-edge collapsing; (b) the collapsed graph $G_t$; (c) the dyadic graph $G_e$, and (d) the superimposed graph $G_s$.} \label{superimposed} \end{figure} In the simplest incarnation of the model, one may choose $(P^e)_{ij}=p^e$ for all $i,j$ and $(\mathbb{P}^t)_{ijk}=p^t$ for all $i,j,k$. In this case, the graph $G_e$ is a classical Erd\"os-R\'enyi dyadic random graph, while $G_t$ before multi-edges collapsing may be thought of as a generalization of Erd\"os-R\'enyi graphs to the triadic setting. We describe next the superimposed stochastic block model based on $G_s$ graphs. \subsection{Superimposed stochastic block models} Our superimposed stochastic block model (SupSBM) is based on the inhomogeneous superimposed random graph framework defined in the previous section. We consider two types of SupSBMs. 
In the first case, ``community signals'' are present both in the higher-order structures and the dyadic edges, while in the second case, the ``community signals'' are present only in the higher-order structures but not in the dyadic edges. Drawing a parallel with the classical SBM, where intra- and inter-community edges are generated via Erd\"os-R\'enyi graphs, both the intra- and inter-community edges in SupSBM are generated by superimposed random graph models ($G_s$) as defined in the previous section. We formally define a graph with $n$ vertices and $k$ communities generated from a SupSBM as follows. Each vertex of the graph is assigned a community label vector of length $k$, which takes the value of $1$ at the position corresponding to its community and $0$ at all other positions. To organize the labels, we keep track of an $n \times k$ community assignment matrix $C$ whose $i$th row $C_i$ is the community label vector for the $i$th vertex. Given the community assignments for all the vertices in the graph, the triangle hyperedge indicators $T_{ijk}$ involving three distinct vertices $i$, $j$, $k$ are (conditionally) independent, and they follow a Bernoulli distribution with a parameter that depends only on the community assignments, i.e., \[ P(T_{ijk}=1| C_{ip} =1, C_{jq}=1, C_{kl}=1) = \pi^{t}_{pql}, \quad p,q,l \in \{1,\ldots ,k\}, \] where $\pi^{t}$ is a $3$-way $k \times k \times k$ tensor of parameters. The triangle hyperedges naturally reduce to a triangle, and as before, multi-edges are collapsed to form the graph $G_{t}$. An edge between two vertices $i$ and $j$ is generated independently of other edges and hyperedges following a Bernoulli distribution with a parameter that also depends on the community assignments, so that the edge indicator variable $E_{ij}$ satisfies \[ P(E_{ij}=1| C_{ip} =1, C_{jq}=1) = \pi^{e}_{pq}, \quad p,q \in \{1,\ldots ,k\}, \] where $\pi^{e}$ is a $k \times k$ matrix of model parameters. 
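The mapping from block parameters to per-tuple probabilities can be written compactly; the helper below is our own illustrative sketch, with community labels stored as integers rather than as the indicator rows of $C$:

```python
import numpy as np

def block_probabilities(labels, pi_t, pi_e):
    """Expand SupSBM block parameters into per-tuple probabilities.

    labels : length-n integer array, labels[i] in {0, ..., k-1}
    pi_t   : k x k x k tensor with P(T_ijk = 1 | communities p, q, l)
    pi_e   : k x k matrix with P(E_ij  = 1 | communities p, q)
    Returns the n x n x n tensor P^t and the n x n matrix P^e.
    """
    Pt = pi_t[np.ix_(labels, labels, labels)]  # (P^t)_{ijk} = pi^t_{C_i C_j C_k}
    Pe = pi_e[np.ix_(labels, labels)]          # (P^e)_{ij}  = pi^e_{C_i C_j}
    return Pt, Pe

# Balanced two-block example with an a/n, b/n parameterization.
n, k = 6, 2
a_t, b_t, a_e, b_e = 3.0, 1.0, 2.0, 0.5
pi_t = np.full((k, k, k), b_t / n)
for p in range(k):
    pi_t[p, p, p] = a_t / n            # all three vertices in one community
pi_e = np.full((k, k), b_e / n)
np.fill_diagonal(pi_e, a_e / n)
labels = np.repeat(np.arange(k), n // k)   # balanced communities
Pt, Pe = block_probabilities(labels, pi_t, pi_e)
```

The expanded $\mathbb{P}^t$ and $P^e$ can then be fed to any sampler for the superimposed random graph model of the previous section.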
For the case that the community structure is present only in the higher-order structures and not at the level of dyadic edges, this parameter equals $p^e$ irrespective of the communities that the vertices $i$ and $j$ belong to. The desired graph is obtained by superimposing $G_t$ and $G_e$ following the process described in the previous section. The above-described model contains a large number of unknown parameters, and to enable a more tractable analysis, we proceed as follows. We define the stochastic block model on $3$-hyperedges in the following manner: \[ P(T_{ijk}=1| C_{i}, C_{j}, C_{k}) = \begin{cases} \frac{a_t}{n}, & \quad \text{ if } C_{i} = C_{j} = C_{k} \\ \frac{b_t}{n}, & \quad \text{ otherwise} \end{cases}, \] so that the probability of a triangle hyperedge equals $\frac{a_t}{n}$ if the three vertices involved are in the same community, and $\frac{b_t}{n}$ if at least one of the vertices is in a different community than the other two. The dyadic edges are generated according to the following rule: the probability of an edge is $\frac{a_e}{n}$ if both endpoints belong to the same community and $\frac{b_e}{n}$ if they belong to different communities. Another simplification consists of assuming that all communities are of the same size, leading to balanced $n$-vertex $k$-block SupSBMs, $G_s(n,k,C,a_e,b_e,a_t,b_t)$, in which all the $k$ communities have $\frac{n}{k}$ vertices and the matrix $C$ is an $n \times k$ community assignment matrix. \section{Analysis of higher-order spectral clustering} \label{sec:analysis} Spectral clustering methods for hypergraphs, also known as higher-order spectral clustering methods, have been studied in a number of recent papers~\citep{zhou2006learning,benson2016higher,tsourakakis2017scalable,li2017inhomogoenous}. In particular,~\cite{benson2016higher} introduced a method that creates a ``motif adjacency matrix'' for each motif structure of interest.
In a motif adjacency matrix, the $(i,j)$th element represents the number of motifs that include the vertices $i$ and $j$. Spectral clustering is applied to the motif adjacency matrix in a standard form in order to find communities of motifs. While there are many variants of spectral clustering that may be applied to the motif adjacency matrices, throughout our analysis, we investigate only one algorithm, which computes the $k$ eigenvectors corresponding to the $k$ eigenvalues of largest absolute value of the motif adjacency matrix. The algorithm subsequently performs a $(1+\epsilon)$-approximate, $\epsilon>0$, $k$-means clustering~\citep{kumar2004simple,lei2015consistency} on the rows of the resultant $n \times k$ matrix of eigenvectors. Furthermore, we only consider two motif adjacency matrices, involving edges and triangles. The primary goal of our analysis is to describe how to detect the community structures of the SupSBM from observed triangle patterns using spectral clustering. We will consider both versions of SupSBM, namely, one with community structure present only at the triangle level, and another with community structure present both at the triangle and edge levels. In what follows, we first prove a number of concentration results for certain motif adjacency matrices under the more general inhomogeneous superimposed random graph model. Subsequently, we specialize our analysis to the SupSBMs. \subsection{Higher-order spectral clustering and superimposed random graphs} Let $G \sim G_s(n,P^e,\mathbb{P}^t)$ be a graph generated from the inhomogeneous superimposed edge-triangle random graph model. We introduce two matrices, $A_E$ and $A_T$: $(A_E)_{ij}$ represents the number of observed edges between the vertices $i$ and $j$, while $(A_T)_{ij}$ represents the number of observed triangles that include both $i$ and $j$ as vertices.
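For a simple undirected graph with adjacency matrix $A$, the triangle motif adjacency matrix can be computed as the entrywise product of $A$ and $A^2$, since $(A^2)_{ij}$ counts the common neighbors of $i$ and $j$. A minimal sketch of the clustering pipeline follows; plain Lloyd iterations with farthest-point initialization stand in for the $(1+\epsilon)$-approximate $k$-means, and the multigraph bookkeeping behind $A_E$ and $A_T$ is omitted:

```python
import numpy as np

def triangle_motif_adjacency(A):
    """(A_T)_{ij} = A_{ij} * (A^2)_{ij}: the number of triangles of a
    simple undirected graph that contain both i and j."""
    return A * (A @ A)

def higher_order_spectral_clustering(A_T, k, n_iter=50):
    """Embed rows via the k eigenvectors whose eigenvalues are largest in
    absolute value, then cluster the rows with Lloyd's k-means."""
    vals, vecs = np.linalg.eigh(A_T)
    top = np.argsort(-np.abs(vals))[:k]
    X = vecs[:, top]
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

On a graph consisting of two disjoint $5$-cliques, for instance, the two leading eigenvectors are supported on the blocks and the sketch recovers the two cliques exactly.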
Note that these matrices are not the motif adjacency matrices of $G_e$ and $G_t$, since there are edges in $G_t$ that contribute to $A_E$ and triangles from $G_e$ that contribute to $A_T$. There may be many ``incidentally generated'' or imposed triangles~\citep{chandrasekhar2014tractable} that arise due to superimposition and also contribute to $A_T$. The different scenarios are depicted in Figure~\ref{incidental}. For our analysis, we also introduce the following two matrices: \begin{itemize} \tightlist \item $A_{E^2}$: the adjacency matrix of edges in $G_e$; here, $(A_{E^2})_{ij} = E_{ij}.$ \item $A_{T^2}$: the adjacency matrix of triangle motifs in $G_t$; here, $(A_{T^2})_{ij} = \sum_{k} T_{ijk}.$ \end{itemize} \begin{figure}[h] \centering \vspace{-15pt} \includegraphics[width=0.95\linewidth]{imposed_all.jpg} \vspace{-40pt} \begin{center} (a) \hspace{80pt} (b) \hspace{80pt} (c) \hspace{80pt} (d) \end{center} \caption{Imposed triangles generated through the superimposition of edges and triangles: (a) $E^3$, (b) $T^3$, (c) $T^2E$, and (d) $TE^2$.} \label{incidental} \end{figure} As noted in~\cite{chandrasekhar2014tractable}, there are four classes of matrices needed to describe the model, namely \begin{itemize} \tightlist \item[(a)] $A_{E^3}$: the motif adjacency matrix of all triangles formed by random edges from $G_e$. The generative random variable for these triangles reads as: \[ E^3_{ijk} = E_{ij}E_{jk}E_{ik}, \] and $(A_{E^3})_{ij} = \sum_{k} E_{ij}E_{jk}E_{ik}$. \item[(b)] $A_{T^3}$: the motif adjacency matrix of all triangles formed by three intersecting triangles from $G_t$. The generative random variable for these triangles reads as: \[ T^3_{ijk}=(1-T_{ijk}) 1(\sum_{k_1 \neq k}T_{ijk_1}>0)1(\sum_{k_2 \neq i}T_{jkk_2}>0)1(\sum_{k_3\neq j}T_{ikk_3}>0), \] and $(A_{T^3})_{ij} = \sum_{k} T^3_{ijk}$. \item[(c)] $A_{T^2E}$: the motif adjacency matrix of all triangles formed by two triangles from $G_t$ and one edge from $G_e$.
The generative random variable for these triangles reads as: \begin{align*} T^2E_{ijk} = (1-T_{ijk}) &1(\sum_{k_1 \neq k}T_{ijk_1}>0 \, \cap \, E_{ij}=0) \\ &1(\sum_{k_2\neq i}T_{jkk_2}>0 \, \cap \, E_{jk}=0) 1(\sum_{k_3\neq j}T_{ikk_3}=0 \, \cap \, E_{ik}=1), \end{align*} and $(A_{T^2E})_{ij} = \sum_{k} T^2E_{ijk}$. \item[(d)] $A_{TE^2}$: the motif adjacency matrix of all triangles formed by one triangle from $G_t$ and two edges from $G_e$. The generative random variable for these triangles reads as: \begin{align*} TE^2_{ijk} = (1-T_{ijk}) &1(\sum_{k_1 \neq k}T_{ijk_1}>0 \,\cap \, E_{ij}=0)\\ &1(\sum_{k_2\neq i}T_{jkk_2}=0 \, \cap \, E_{jk}=1) 1(\sum_{k_3\neq j}T_{ikk_3}=0 \,\cap \, E_{ik}=1), \end{align*} and $(A_{TE^2})_{ij} = \sum_{k} TE^2_{ijk}$. \end{itemize} Note that except for case (a), an imposed triangle involving the vertices $(i,j,k)$ arises only if there is no model-generated triangle involving $(i,j,k)$ already present. Hence, the definitions of each of the random variables $T^3$, $T^2E$ and $TE^2$ include $(1-T_{ijk})$ as a factor that encodes this constraint. For case (a), since we allow a multiedge between two vertices that are both involved in a triangle hyperedge and an edge, it is possible to have an imposed triangle in addition to a model-generated triangle on the same triple of vertices. With these definitions, the triangle adjacency matrix reads as $$A_{T}=A_{T^2}+A_{E^3}+A_{T^2E}+A_{T^3}+ A_{TE^2},$$ capturing both model-generated and imposed triangles. Obviously, we only observe the matrices $A_E$ and $A_T$ and not their specific constituents, as in real networks we do not have labels describing how an interaction is formed. Hence, even though the community structure is most explicitly described by $A_{T^2}$, we need to analyze how this matrix reflects on $A_T$ and what the properties of the latter matrix are, based on $A_{T^2}$.
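To make the distinction concrete, consider a tiny hand-built instance (the vertices and edges below are our own choices, purely for illustration): one triangle hyperedge plus two extra dyadic edges create an imposed $TE^2$ triangle, so the observed count $(A_T)_{01}$ exceeds the model-generated count $(A_{T^2})_{01}$.

```python
import numpy as np

n = 4
hyper_triangles = [(0, 1, 2)]   # generated by G_t
extra_edges = [(0, 3), (1, 3)]  # generated by G_e

# superimpose: each triangle hyperedge contributes its three edges
A = np.zeros((n, n), dtype=int)
for i, j, k in hyper_triangles:
    for a, b in ((i, j), (j, k), (i, k)):
        A[a, b] = A[b, a] = 1
for a, b in extra_edges:
    A[a, b] = A[b, a] = 1

# observed triangle counts per pair: common neighbours when the pair is an edge
A_T = (A @ A) * A

# model-generated triangles only
A_T2 = np.zeros((n, n), dtype=int)
for i, j, k in hyper_triangles:
    for a, b in ((i, j), (j, k), (i, k)):
        A_T2[a, b] += 1
        A_T2[b, a] += 1

# pair (0,1) lies in the generated triangle (0,1,2) and in the imposed
# TE^2 triangle (0,1,3), built from edge 01 (from G_t) and edges 03, 13 (from G_e)
print(A_T[0, 1], A_T2[0, 1])  # -> 2 1
```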
Then, the expectation of $A_{T}$ equals \[ E[A_{T}] = E[A_{T^2}]+E[A_{E^3}]+E[A_{T^2E}]+E[A_{T^3}]+ E[A_{TE^2}], \] where the expectations are applied component-wise. \subsubsection{Notation and asymptotic properties of superimposed graphs} We start with some notation. Let \[p^e_{\max}=\max_{i,j} p^e_{ij} \quad \text{ and } \quad p^t_{\max}=\max_{i,j,k} p^t_{ijk}, \] denote the maximum probability of edge inclusion in $G_e$ and of triangle hyperedge inclusion in $G_t$, respectively. As noted by~\cite{chandrasekhar2014tractable}, in the superimposed random graph framework, the generative probabilities summarized in $P^t$ and $P^e$ must satisfy certain conditions in order to ensure that the imposed triangles do not significantly outnumber the generated triangles. Accordingly, we impose the following asymptotic growth conditions on $p^t_{\max}$ and $p^e_{\max}$: \begin{equation} c_1\frac{\log n}{n} \leq p^e_{\max}<c_2\frac{n^{2/5-\epsilon}}{n}, \label{pemax} \end{equation} \begin{equation} c_1\frac{(\log n)^8}{n^2}<p^t_{\max}<c_2\frac{n^{2/5-\epsilon}}{n^2}, \label{ptmax} \end{equation} and $p^t_{\max} > c_3 p^e_{\max} \frac{\log n}{n}$ for some $\epsilon>0$ and constants $c_1$, $c_2$, and $c_3$ independent of $n$. A typical example is the following pair of growth rates: $p^e_{\max}=O(\frac{\log n}{n})$ and $p^t_{\max}=O(\frac{n^{1/4}}{n^2})$. Note that the asymptotic growth bounds are required only for the analysis of superimposed random graphs under the SupSBM. We do not require these relations to hold for results regarding regular SBMs or 3-uniform hypergraph SBMs. Hence, we will not make any assumptions on the asymptotic growth bounds for $p^t_{\max}$ and $p^e_{\max}$ until Theorem~\ref{AT}. The following five results, summarized in Theorems \ref{ATT} to \ref{ATEE}, provide non-asymptotic error bounds that hold in more general settings, as described in the statements of the respective theorems.
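As a quick check (our own verification, not part of the original analysis), the example rates $p^e_{\max}=O(\frac{\log n}{n})$ and $p^t_{\max}=O(\frac{n^{1/4}}{n^2})$ indeed satisfy all three conditions:
\[
n^{1/4}\le n^{2/5-\epsilon}\ \text{ for any } 0<\epsilon\le \tfrac{3}{20},
\qquad
p^e_{\max}\,\frac{\log n}{n}=O\!\left(\frac{(\log n)^2}{n^2}\right)
\ll \frac{n^{1/4}}{n^2},
\]
while the lower bounds in (\ref{pemax}) and (\ref{ptmax}) hold trivially for $n$ large enough.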
Note that we make repeated use of the symbols $c$ and $r$ to represent different generic constants as needed in the proofs, in order to avoid notational clutter. It is well known that the spectral norm $\|A_{E^2}-E[A_{E^2}]\|_2$ is bounded by $c_1\sqrt{\Delta}$ with probability at least $1-n^{-r}$ \citep{lei2015consistency,gao2015achieving,chin2015stochastic}, where $\Delta = \max\{np^e_{\max},c\log n\}$. The following theorems establish similar upper bounds for the other component matrices involved in our analysis, as well as a bound on $\|A_{T}-E[A_{T}]\|_2$. The proofs of all theorems are relegated to the Appendix. \subsubsection{Concentration bounds for $\mathbf{A_{T^2}}$} \begin{thm} Let $G_t(n,\mathbb{P}^t)$ be a $3$-uniform hypergraph in which each possible 3-hyperedge is generated according to a Bernoulli random variable $T_{ijk}$ with parameter $p^t_{ijk},$ independently of all other 3-hyperedges. Let $A_{T^2}$, as before, stand for the triangle-motif adjacency matrix. Furthermore, let $\Delta_{t} = \max \{n^2 p^t_{\max},c \log n\}$. Then, for some constant $r>0$, there exists a constant $c_1(c,r)>0$ such that with probability at least $1-n^{-r}$, one has \[ \|A_{T^2}-E[A_{T^2}]\|_2 \leq c_1\sqrt{\Delta_{t}}. \] \label{ATT} \end{thm} Note that in the above bound, $\Delta_{t}$ may be interpreted as an approximation of the maximum expected ``triangle degree'' of vertices in $G_t$. Drawing a parallel with adjacency matrices of graphs, one may define the ``degree'' of a row of an arbitrary matrix as the sum of the elements in that row. Then, $\Delta_t$ is an upper bound on the degree of a row in the matrix $A_{T^2},$ much like $\Delta$ is an upper bound for the degrees of the rows in $A_{E^2}$. The above result for triangle-motif adjacency matrices is hence an analogue of a similar result for standard adjacency matrices described in~\citet{lei2015consistency,gao2015achieving,chin2015stochastic}.
The arguments used to prove the result in the cited papers are based on an $\epsilon$-net analysis of random regular graphs laid out in~\cite{friedman1989second,feige2005spectral}. We extend these arguments to the case of triangle hyperedges; due to the independence of the random variables corresponding to the hyperedges involved in all sums of interest, we do not require new concentration inequalities to establish the claim. This is not the case for the results to follow. \subsubsection{Concentration bounds for $\mathbf{A_{E^3}}$} Next, we derive an upper bound for the spectral norm of the matrix $A_{E^3}-E[A_{E^3}]$. Note that the elements of the matrix $A_{E^3}$ are dependent, and consequently the sums of the random variables used in the $\epsilon$-net approach include dependent variables. Hence, the $\epsilon$-net approach cannot be applied directly, and several substantial modifications of the proofs are needed. However, each element of $A_{E^3}$ is a low-degree polynomial of the generic independent random variables $E_{ij}$. Therefore, we show that in all the sums of dependent random variables of our interest, the dependencies between the random variables are limited, and the number of co-dependent random variables is significantly smaller than the total number of variables in the sum. In what follows, we build upon recent advances in concentration inequalities for functions of independent random variables~\citep{warnke2016method} and sums of dependent random variables \citep{warnke2017upper} to derive concentration bounds for $A_{E^3}$. Let $\tau_{\max}=\max\{n(p^e_{\max})^2, c\log n\}$, $\Delta_{E^3}=\max\{n^2(p^e_{\max})^3, c(\log n)^2\}$ and $D_{E^3} = np^e_{\max}\tau_{\max}^2=\max \{n^3 (p^e_{\max})^5, c\,np^e_{\max}(\log n)^2\}$, and assume $np^e_{\max} \geq c \log n$. We have the following result.
\begin{thm} Let $G_e(n,P^e)$ be an inhomogeneous random graph in which each edge is independently generated by a Bernoulli random variable $E_{ij}$ with parameter $p^e_{ij}, \, i,j=1,\ldots,n$. Let $D_{E^3} = \max \{n^3 (p^e_{\max})^5, cnp^e_{\max}(\log n)^2\}$ for some constant $c$. Then, for some constant $r>0$, there exists a constant $c_2(c,r)>0$ such that with probability at least $1-n^{-r}$, \[ \|A_{E^3} - E[A_{E^3}]\|_2 \leq c_2 \sqrt{D_{E^3}}. \] \label{ATE} \end{thm} The proof of the result is based on Theorem 1.3 of~\cite{warnke2016method} and Theorem 9 of~\cite{warnke2017upper}. We still resort to the use of $\epsilon$-nets, but also take into account the particular dependencies between the random variables. The key observations are that two triangles generated by edges from $G_e$ are dependent if they share an edge (see Figure \ref{dependence}(a)), and that, with high probability, each triangle shares an edge with at most $2\tau_{\max}$ other triangles. The random variables whose sums we are interested in bounding represent such triangles. Note that our result is a stronger bound than the one actually needed for obtaining an upper bound on $\|A_{T} - E[A_{T}]\|_2$ under the asymptotic growth conditions of interest. Indeed, Proposition \ref{prop1} stated in the Appendix automatically gives an upper bound of the form $O(\Delta_{E^3})$ for $\|A_{E^3} - E[A_{E^3}]\|_2$. While this loose bound would have been sufficient, we resorted to a more careful analysis using the $\epsilon$-net approach and the results of Theorem 1.3 of~\cite{warnke2016method} to arrive at a significantly improved bound $O(\sqrt{D_{E^3}})$. We also note that, based on the results derived for $A_{E^2}$ and $A_{T^2}$, the bound for the case when the elements of $A_{E^3}$ are mutually independent should read as $O(\sqrt{\Delta_{E^3}})$. The current bound is worse than this bound by a factor of $\sqrt{\Delta}$, and it is not immediately clear how the latter bound can be improved further.
\begin{figure}[h] \centering \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{good1.jpg} \end{subfigure}% \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{good2.jpg} \end{subfigure}% \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{good3.jpg} \end{subfigure} \begin{center} (a) \hspace{100pt} (b) \hspace{100pt} (c) \end{center} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{good4.jpg} \end{subfigure}% \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{good5.jpg} \end{subfigure}% \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{good6.jpg} \end{subfigure} \begin{center} (d) \hspace{100pt} (e) \hspace{100pt} (f) \end{center} \caption{Dependence among the random variables of incidental triangles that include vertex $i$: (a) $E^3$, (b) $T^3$, (c) $T^2E$ of type 1, (d) $T^2E$ of type 2, (e) $TE^2$ of type 1, and (f) $TE^2$ of type 2.} \label{dependence} \end{figure} \subsubsection{Concentration bounds for other relevant matrices} For the next three results, we use the following property of the spectral norm of a square symmetric matrix. For any $n \times n$ square symmetric matrix $X$, define the spectral norm of $X$ as $\|X\|_2 = \sigma_{\max}(X)$, the largest singular value of $X$, the 1-norm as $\|X\|_1 = \max_{j}\sum_{i}|X_{ij}|$, and the $\infty$-norm as $\|X\|_{\infty}=\max_{i}\sum_{j}|X_{ij}|$. Now assume $X$ is an $n \times n$ symmetric matrix whose elements are non-negative random variables, so that the entries of its expectation, $E[X]$, are also non-negative. Then, \begin{align} \|X-E[X]\|_2 & \leq \sqrt{\|X-E[X]\|_1 \|X-E[X]\|_{\infty}} = \|X-E[X]\|_1 \nonumber\\ & = \max_{i}\sum_{j}|X_{ij}-E[X]_{ij}| \leq \max_{i}\sum_{j}X_{ij} + \max_{i}\sum_{j}E[X]_{ij}, \label{twonorm} \end{align} where the first inequality is Corollary 2.3.2 in~\cite{golub2012matrix}, and the first equality follows since $X-E[X]$ is a symmetric matrix by assumption, so that its $1$-norm and $\infty$-norm coincide.
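The chain of inequalities in (\ref{twonorm}) is easy to confirm numerically on a small symmetric instance (a sanity check of ours, with an arbitrary constant matrix standing in for $E[X]$):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.random((6, 6))
X = (B + B.T) / 2            # symmetric matrix with non-negative entries
EX = np.full_like(X, 0.5)    # stand-in for the (entry-wise) expectation E[X]

D = X - EX
spectral = np.linalg.norm(D, 2)         # largest singular value
one_norm = np.abs(D).sum(axis=0).max()  # maximum column sum
inf_norm = np.abs(D).sum(axis=1).max()  # maximum row sum

# for a symmetric matrix the 1- and infinity-norms coincide, and the
# spectral norm is dominated by their geometric mean
assert np.isclose(one_norm, inf_norm)
assert spectral <= np.sqrt(one_norm * inf_norm) + 1e-12
```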
Note that the first term in the final sum is the \emph{degree} of row $i$ of the matrix $X$. Hence, a high-probability bound on the maximum degree will allow us to upper bound this quantity. The second term equals the maximum expected degree of $X$, which is a deterministic quantity. For the next three theorems in this section we assume $n^2p^t_{\max} > c_0(\log n)^2$ and $np^e_{\max} > c_0' \log n$. \begin{thm} Let $G\sim G_s(n,P^e,\mathbb{P}^t)$ be a graph generated by the superimposed random graph model. Let $$\Delta_{T^3} = \max \{n^5(p^{t}_{\max})^3,c(\log n)^4\},$$ where $c$ is some constant. Then, there exists a constant $c' > 0$, such that with probability at least $1-o(1)$, one has \[ \|A_{T^3}- E[A_{T^3}]\|_2 \leq c'\, \Delta_{T^3}. \] \label{ATTT} \end{thm} \begin{thm} Let $G\sim G_s(n,P^e,\mathbb{P}^t)$ be a graph generated by the superimposed random graph model. Let $$\Delta_{T^2E} = \max \{n^4(p^{t}_{\max})^2p^e_{\max},c(\log n)^4\},$$ where $c$ is some constant. Then, there exists a constant $c' > 0$, such that with probability at least $1-o(1)$, one has \[ \|A_{T^2E}- E[A_{T^2E}]\|_2 \leq c'\, \Delta_{T^2E}. \] \label{ATTE} \end{thm} \begin{thm} Let $G\sim G_s(n,P^e,\mathbb{P}^t)$ be a graph generated by the superimposed random graph model. Let $$\Delta_{TE^2} = \max \{n^3p^{t}_{\max}(p^e_{\max})^2,c (\log n)^3\},$$ where $c$ is some constant. Then, there exists a constant $c' > 0$, such that with probability at least $1-o(1)$, one has \[ \|A_{TE^2}- E[A_{TE^2}]\|_2 \leq c'\, \Delta_{TE^2}. \] \label{ATEE} \end{thm} The proofs of all three above results follow a similar outline. In each case, the degree of a row $i$ is a sum of \textit{dependent} triangle-indicator random variables for triangles that include vertex $i$. However, in each case we carefully enumerate the events that lead to two such incidental triangle indicator random variables being dependent, and show that the number of such cases is limited with high probability.
This allows us to apply Theorem 9 of~\cite{warnke2017upper} in an iterative manner and obtain concentration results for the respective sums. While we relegate the technically involved rigorous proofs to the Appendix, we graphically illustrate all the events that lead to the dependencies that we need to consider in Figure \ref{dependence}. For the family of random variables $\{(T^3)_{ijk}, \, j,k \in \{1,\ldots,n\}\}$, two variables are dependent if and only if one of the triangles from $G_t$ covering $(T^3)_{ijk}$ has an edge $ij'$ or $ik'$ and consequently also covers $(T^3)_{ij'k'}$ (see Figure~\ref{dependence}(b)). For the family of random variables $\{(T^2E)_{ijk}, \, j,k \in \{1,\ldots,n\}\}$, two variables are dependent if either one of the edges $ij$ or $ik$ is covered by a triangle from $G_t$ and the same triangle edge-intersects $(T^2E)_{ij'k'}$ (see Figure~\ref{dependence}(c)), or if one of $ij$ or $ik$ is an edge in $G_e$ and is also an edge in $(T^2E)_{ij'k'}$ (see Figure~\ref{dependence}(d)). Finally, for the family of random variables $\{(TE^2)_{ijk}, \, j,k \in \{1,\ldots,n\}\}$, two variables are dependent if either one of the edges $ij$ or $ik$ is covered by a triangle from $G_t$ and the same triangle edge-intersects $(TE^2)_{ij'k'}$ (see Figure~\ref{dependence}(e)), or if one of the two edges $ij$ and $ik$ is generated by $G_e$ and is also an edge covered by $(TE^2)_{ij'k'}$ (see Figure~\ref{dependence}(f)). We also note that the bounds in the previous three results can be improved by applying the $\epsilon$-net approach for dependent random variables as used in Theorem~\ref{ATE}. However, the above stated upper bounds suffice to obtain the desired concentration bound for $A_T$, as summarized in the following subsection.
\subsubsection{Concentration bound for $A_{T}$} In the next theorem we combine the previous results to arrive at a concentration bound for the matrix $A_T$ under the assumptions made on $p^e_{\max}$ and $p^t_{\max}$ in (\ref{pemax}) and (\ref{ptmax}). \begin{thm} Let $A_{T}$ denote the triangle-motif adjacency matrix of a random graph $G$ generated by the inhomogeneous superimposed random graph model $G_s(n,P^e,\mathbb{P}^t)$. If $\Delta_t > c (\log n)^8$ for some constant $c$, and the assumptions (\ref{pemax}) and (\ref{ptmax}) on $p^e_{\max}$ and $p^t_{\max}$ hold, then with probability at least $1-o(1)$, one has \[ \|A_{T}-E[A_{T}]\|_2 \leq c' \sqrt{\Delta_{t}}, \] \label{AT} where $c'$ is a constant independent of $n$. \end{thm} We note the similarity of the upper bound in this concentration inequality with that obtained for $A_{T^2}$ in Theorem \ref{ATT}. The above result then tells us that the effect of the incidental triangles on the concentration of $A_T$ is limited, and that the rate in the upper bound is predominantly determined by the rate for $A_{T^2}$. This suggests that while the superimposition process induces dependencies between the edges in $G_s$ through the presence of triangles from $G_t$, the model, under suitable sparsity conditions, is still mathematically tractable. The influence of the incidental triangles can be analyzed and controlled. Next, we turn our attention to analyzing random graphs generated by SupSBMs, and focus in particular on quantifying the misclustering error rate under a higher-order spectral clustering algorithm. \subsection{Higher-order spectral clustering under the SupSBM} Let $G \sim G_s(C,n,k,a_e,b_e,a_t,b_t)$ be a graph generated by the balanced $n$-vertex, $k$-block SupSBM with a community assignment matrix $C$ as defined before. Let $\hat{C}$ denote the $n \times k$ matrix of eigenvectors corresponding to the $k$ largest absolute-value eigenvalues of the triangle motif adjacency matrix $A_{T}$.
To obtain the community assignments for the vertices, a $(1+\epsilon)$-approximate $k$-means clustering, with $\epsilon>0$, is performed on the rows of $\hat{C}$~\citep{kumar2004simple,lei2015consistency}. We define the misclustering error rate $R$ as follows. Let $\bar{e}$ and $\hat{e}$ denote the vectors containing the true and estimated community labels of all the vertices in $V$. Then we define \[ R = \inf_{\Pi} \frac{1}{n} \sum_{i=1}^{n} 1( \bar{e}_i \neq \Pi(\hat{e}_i)), \] where the infimum is taken over all permutations $\Pi(\cdot)$ of the community labels. To bound the misclustering rate $R$, one needs to relate it to the difference between the estimated and the true eigenvectors. For this purpose, one can use the well-known Davis-Kahan Theorem~\citep{dk70,stewart}, which characterizes the influence of perturbations on the spectrum of a matrix. For a symmetric matrix $X$, let $\lambda_{\min}(X)$ stand for its smallest in absolute value non-zero eigenvalue. Since $\hat{C}$ is an $n \times k$ matrix of eigenvectors, it has orthonormal columns, and hence we have the following bound \begin{equation} R \leq \frac{1}{n}\frac{n}{k}8 (2+\epsilon)\|\hat{C}-C(C^TC)^{-1/2}\mathcal{O}\|_F^2 \leq 64(2+\epsilon)\frac{\|A_T-E[A_T]\|_2^2}{(\lambda_{\min}(E[A_T]))^2} , \label{misclus} \end{equation} where $\mathcal{O}$ is an arbitrary orthogonal matrix~\citep{lei2015consistency} and the last inequality arises from the Davis-Kahan Theorem. Next, we derive a lower bound on $\lambda_{\min}(E[A_T])$. We start by computing the expectations of the motif adjacency matrices $A_{E^2}$, $A_{T^2}$, and $A_{E^3}$ under the SupSBM. In all three cases, these expectations are of the form $C((g-h)I_k + h1_k1_k^T)C^T$, where as before $C$ denotes the community assignment matrix, $I_k$ is the $k$-dimensional identity matrix, $1_k$ is the $k$-dimensional vector of all $1$s, and $g$ and $h$ are functions of the parameters $n,k,a_e,b_e,a_t,b_t$.
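The spectral structure of matrices of this form is easy to verify numerically. The following sketch (block sizes and the values of $g$ and $h$ are arbitrary choices of ours) checks the eigenvalue formulas for a small balanced instance:

```python
import numpy as np

n, k, g, h = 6, 2, 3.0, 1.0  # toy balanced instance: two blocks of size 3
C = np.zeros((n, k))
C[:3, 0] = 1
C[3:, 1] = 1

M = C @ ((g - h) * np.eye(k) + h * np.ones((k, k))) @ C.T
vals = np.sort(np.linalg.eigvalsh(M))

# non-zero spectrum: (n/k)(g-h) + n*h (once, eigenvector 1_n) and
# (n/k)(g-h) (k-1 times); the remaining n-k eigenvalues vanish
assert np.allclose(vals[-1], n / k * (g - h) + n * h)  # 12.0
assert np.allclose(vals[-2], n / k * (g - h))          # 6.0
assert np.allclose(vals[:-2], 0.0)
```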
For matrices of the form $C((g-h)I_k + h1_k1_k^T)C^T$, with $g>h>0$, $1_n$ is an eigenvector corresponding to the eigenvalue $\frac{n}{k}(g-h)+nh,$ and the remaining non-zero eigenvalues are of the form $\frac{n}{k}(g-h),$ where the values of $g$ and $h$ differ for the different matrices~\citep{rcy11}. Since $nh>0$, the smallest non-zero eigenvalue equals $\frac{n}{k}(g-h)$. We start by analyzing $A_{E^2}$. Clearly, \begin{align*} E[A_{E^2}]= C\left(\frac{(a_e-b_e)}{n}I_k + \frac{b_e}{n}1_k1_k^T\right)C^T, \end{align*} so that $\lambda_{\min}(E[A_{E^2}])=\frac{a_e-b_e}{k}.$ Next, we note that the expected value of $A_{T^2}$ satisfies $E[A_{T^2}]_{ij} = \sum_{k \neq i,j} p^t_{ijk}$. When $C_i = C_j$, i.e., when the vertices $i$ and $j$ are in the same community, then \[ E[A_{T^2}]_{ij} =\left(\frac{n}{k}-2\right) \frac{a_t}{n} + (k-1)\frac{n}{k}\frac{b_t}{n}, \] while when $C_i \neq C_j,$ \[ E[A_{T^2}]_{ij} = (n-2) \frac{b_t}{n}. \] The difference between the two above quantities equals \[ \left(\frac{n}{k}-2\right) \frac{a_t}{n} + (k-1)\frac{n}{k}\frac{b_t}{n}-(n-2) \frac{b_t}{n} = \left(\frac{n}{k}-2\right) \frac{a_t-b_t}{n}. \] Hence, \[ E[A_{T^2}] = C \left( \left(\frac{n}{k}-2\right) \frac{a_t-b_t}{n}I_k + (n-2) \frac{b_t}{n} 1_k1_k^T \right)C^T. \] Consequently, \begin{equation} \lambda_{\min}(E[A_{T^2}]) = \frac{n}{k} \left(\frac{n}{k}-2\right) \frac{a_t-b_t}{n} = \left(\frac{n}{k}-2\right) \frac{a_t-b_t}{k}. \label{lambdaPTT} \end{equation} To determine $E[A_{E^3}]$, we first note that \[ E[A_{E^3}]_{ij} = \sum_{k \neq i,j} p^e_{ij}p^e_{jk}p^e_{ik} = p^e_{ij} \sum_{k \neq i,j} p^e_{jk}p^e_{ik}. \] When $C_i =C_j$, \[ E[A_{E^3}]_{ij} = \frac{a_e}{n} \left\{ \left(\frac{n}{k}-2\right) \frac{a^2_e}{n^2} + (k-1)\frac{n}{k} \frac{b^2_e}{n^2}\right\}, \] while when $C_i \neq C_j$, \[ E[A_{E^3}]_{ij} = \frac{b_e}{n}\left\{ 2\left(\frac{n}{k}-1\right) \frac{a_eb_e}{n^2} + (k-2)\frac{n}{k} \frac{b_e^2}{n^2}\right\}.
\] The difference between the above two quantities equals \begin{align*} \frac{b_e^2(a_e-b_e)}{n^2} + \frac{\left(a_e^2+a_eb_e-2b_e^2\right)(a_e-b_e)}{kn^2} - 2\frac{a_e(a_e+b_e)(a_e-b_e)}{n^3}. \end{align*} Hence, \begin{align*} E[A_{E^3}] = C \Big( & \Big(\frac{b_e^2(a_e-b_e)}{n^2} + \frac{\left(a_e^2+a_eb_e-2b_e^2\right)(a_e-b_e)}{kn^2} - 2\frac{a_e(a_e+b_e)(a_e-b_e)}{n^3}\Big)I_k \\ & \quad + \frac{b_e}{n} \Big( 2\Big(\frac{n}{k}-1\Big) \frac{a_eb_e}{n^2} + (k-2)\frac{n}{k} \frac{b_e^2}{n^2}\Big) 1_k1_k^T\Big)C^T. \end{align*} Consequently, the smallest non-zero eigenvalue equals \begin{align} \lambda_{\min}(E[A_{E^3}]) & = \frac{(kb_e^2 + a_e^2+a_eb_e-2b_e^2)(a_e-b_e)}{k^2n} - 2\frac{a_e(a_e+b_e)(a_e-b_e)}{kn^2}. \label{lambdaPTE} \end{align} A special case of the SBM that is widely analyzed in the literature is the balanced SBM, in which $2n$ vertices are partitioned into two blocks. In our setting, balancing implies that the SupSBM model of interest has parameters $G_s(C,2n,2,2a_e,2b_e,2a_t,2b_t)$, and it results in $\lambda_{\min}(E[A_{T^2}])=(n-2)(a_t-b_t)$ and $\lambda_{\min}(E[A_{E^3}])=\frac{a_e(a_e+b_e)(a_e-b_e)}{n} - 2\frac{a_e(a_e+b_e)(a_e-b_e)}{n^2}$. We are now ready to state the main result of the paper. \begin{thm} Let $G \sim G_s(C,n,k,a_e,b_e,a_t,b_t)$ be a graph generated from the balanced $k$-block SupSBM. If $\Delta_t>c(\log n)^6,$ then with probability at least $1-o(1)$, the misclustering rate of community detection using the higher-order spectral clustering method satisfies \[ R_T \leq \frac{64(2+\epsilon)c_6\Delta_t}{((\frac{n}{k}-2)(a_t-b_t))^2}. \] \label{RT} \end{thm} Under the assumed growth rate on $p^t_{\max}$ we have $n^2p^t_{\max}>c(\log n)^6$, and hence we can replace $\Delta_t$ by $n^2p^t_{\max}$. Under the $k$-block balanced SupSBM model, $p^t_{\max} =\frac{a_t}{n}$. Hence, ignoring constants, we can rewrite the upper bound as \[ R_T \lesssim \frac{k^2a_t}{n(a_t-b_t)^2}.
\] As an example, if we assume $a_t = m_t n^{1/4}$ and $b_t = s_t n^{1/4}$ for constants $m_t$ and $s_t$, then the result implies that it is possible to detect the communities consistently as long as $(m_t-s_t)^2 = \omega (\frac{k^2}{n^{5/4}})$. For the special case of a $2n$-vertex, $2$-block SupSBM $G_s(C,2n,2,2a_e,2b_e,2a_t,2b_t)$, we can further simplify this upper bound to \begin{equation} R \lesssim \frac{a_t}{n(a_t-b_t)^2}. \label{Rsup} \end{equation} For comparison, we note that the result in~\cite{lei2015consistency} for the misclustering rate of spectral clustering with classical adjacency matrices under the SBM reads as $R_E \lesssim \frac{a_e}{(a_e-b_e)^2}$. \subsection{Uniform and non-uniform hypergraph SBMs} In what follows, we analyze the performance of higher-order spectral clustering under the uniform and non-uniform hypergraph SBMs \citep{ghoshdastidar2017consistency,chien2018community}. The balanced $n$-vertex $k$-block \emph{3-uniform hypergraph} SBM $G_t(C,n,k,a_t,b_t)$ is defined in the following way. All the $k$ communities have an equal number of vertices $s=\frac{n}{k}$, and the probability of forming a triangle hyperedge equals $\frac{a_t}{n}$ if all three vertices belong to the same community, while it equals $\frac{b_t}{n}$ otherwise, i.e., when the three vertices do not all belong to the same community. Non-uniform hypergraphs involve two types of hyperedges, edges and triangles, that need to be described separately. Note that this model differs from the superimposed random graph framework used throughout the paper, as the observations are in the form of two-way and three-way interactions between entities. Hence, we have a way to differentiate between an edge and a triangle hyperedge.
The $n$-vertex $k$-block balanced non-uniform hypergraph SBM $G_H(C,n,k,a_e,b_e,a_t,b_t)$ is defined in the same way as a SupSBM, except that we do not replace the generated triangle hyperedges with three ordinary edges and we do not collapse multiedges. If we assume our observed graph is generated from a uniform hypergraph SBM on triangle hyperedges, then spectral clustering of the motif adjacency matrix is equivalent to spectral clustering based on $A_{T^2}$ only. Let $\hat{C}^{(T^2)}$ be the matrix of eigenvectors corresponding to the $k$ largest in absolute value eigenvalues of the matrix $A_{T^2}$. Then, using the bound for $A_{T^2}$ in Theorem~\ref{ATT} from Section 3.1.2, we arrive at the following result. \begin{thm} Let $G_t$ be a triangle hypergraph generated from the $k$-block uniform triangle hypergraph SBM with parameters $C,n,k,a_t,b_t$. Then, with probability at least $1-n^{-c}$, the misclustering rate of the community assignments obtained using the higher-order spectral clustering algorithm applied to the triangle motif adjacency matrix satisfies \[ R_{T^2} \leq 64(2+\epsilon)\frac{\|A_{T^2}-E[A_{T^2}]\|_2^2}{(\lambda_{\min}(E[A_{T^2}]))^2} \leq \frac{64(2+\epsilon)c_1\Delta_t}{((\frac{n}{k}-2)(a_t-b_t))^2}. \] \label{RTT} \end{thm} We can simplify this upper bound under the assumption of a $2n$-vertex $2$-block triangle hypergraph SBM $G_t(C,2n,2,2a_t,2b_t)$ to $R_{T^2} \lesssim \frac{a_t}{n(a_t-b_t)^2}.$ Note that this error-rate bound is smaller by a factor of $n$ than the corresponding result for spectral clustering of the ordinary edge adjacency matrix in \cite{lei2015consistency}, provided that the parameters $a_e, b_e$ and $a_t,b_t$ are comparable. Alternatively, the misclustering error rate using the triangle motif adjacency matrix for a graph generated from a triangle hypergraph SBM is better than the corresponding rate from an edge-based adjacency matrix of a graph generated from the SBM as long as $a_t \gtrsim \frac{a_e}{n}$.
The above observation has important implications for non-uniform hypergraph SBMs. To describe why this is the case, assume that we are given a non-uniform hypergraph generated from the $2n$-vertex $2$-block balanced non-uniform hypergraph SBM $G_H(C,2n,2,2a_e,2b_e,2a_t,2b_t).$ The question of interest is: Given $a_e,b_e,a_t,b_t$, with $a_e \asymp b_e$ and $a_t\asymp b_t$, should one use the edge-based adjacency matrix, the triangle-based adjacency matrix, or a combination thereof? Let \begin{equation} a_t \asymp \frac{a_e}{\delta}, \quad a_t-b_t=m\frac{a_e-b_e}{\delta}, \quad a_e \asymp b_e, \label{NUHsetup} \end{equation} so that asymptotically, the probabilities $a_e$ and $b_e$ are $\delta$ times the probabilities $a_t$ and $b_t$, while the difference $a_e-b_e$ is $\frac{\delta}{m}$ times the difference $a_t-b_t$. Clearly, $\delta$ captures the asymptotic difference between the densities of triangle hyperedges and dyadic edges, while $m$ captures the difference in the ``communal'' qualities of these two types of hyperedges. Note that the notation for asymptotic equivalence ignores all constants. \begin{thm} Let $G \sim G_{H}(C,2n,2,2a_e,2b_e,2a_t,2b_t)$ be a graph generated from the non-uniform hypergraph SBM. Assume the relationships between the probabilities $a_e,b_e,a_t,b_t$ are as in Equation~(\ref{NUHsetup}). Then, spectral clustering based on the triangle adjacency matrix has a lower error rate than spectral clustering based on the edge adjacency matrix if $\frac{\delta}{m^2n} \lesssim 1,$ and a higher error rate if $\frac{\delta}{m^2n} \gtrsim 1$. \label{tradeoff} \end{thm} Note that even though in practice we do not observe the quantities $m$ and $\delta$, it is possible to estimate them reliably and efficiently. To estimate $\delta$, we only need to look at the ratio of the densities of the hyperedges. The expected degree density of triangle hyperedges is $O(na_t)$, while that of edges is $O(a_e)$.
This implies that $$\delta \asymp \frac{a_e}{a_t}=n\frac{\text{expected edge degree}}{\text{expected triangle degree}}.$$ Hence, $\hat{\delta}=n\frac{\text{average edge degree}}{\text{average triangle degree}}$ is a ``good'' estimator of $\delta$. To obtain an estimate of $m$, we first cluster the vertices using spectral clustering on edges and triangles separately, and then compute the respective probability parameters for intra- and inter-cluster connections. Then, we may use $\hat{m} =\frac{\hat{\delta}(\hat{a}_t-\hat{b}_t)}{\hat{a}_e-\hat{b}_e}$ as an estimate of $m$. The above results also allow us to bound the error rate of spectral clustering of a weighted motif adjacency matrix under the non-uniform hypergraph SBM. Let $A_{W}=A_{E^2} + w A_{T^2}$ be the weighted sum of the adjacency matrices of edges and triangle hyperedges with known relative weight $w>0$. Clearly, $E[A_{W}] = E[A_{E^2}] + w E[A_{T^2}]$, and the smallest non-zero eigenvalue of $E[A_{W}]$ is $\lambda_{\min}(E[A_{W}]) = (a_e-b_e) + w(n-2)(a_t-b_t)$. Then, with probability at least $1-o(1)$ we have \[ \|A_W - E[A_W]\|_2 \leq \| A_{E^2}-E[A_{E^2}]\|_2 + w \|A_{T^2}-E[A_{T^2}]\|_2 \lesssim \sqrt{\Delta} + w \sqrt{\Delta_{t}}, \] and the error rate is upper bounded as \[ R_{W} \lesssim \left(\frac{ \sqrt{\Delta} + w \sqrt{\Delta_t}}{ ( a_e-b_e) + w n(a_t-b_t)}\right)^2 \asymp \left(\frac{ \sqrt{a_e} + w \sqrt{na_t}}{ ( a_e-b_e) + w n(a_t-b_t)}\right)^2. \] When the asymptotic relationships of Equation (\ref{NUHsetup}) hold, we can further simplify this expression to \begin{equation} R_{W} \lesssim \left(\frac{1+\sqrt{\frac{n}{\delta}}w}{1+\frac{mn}{\delta}w}\right)^2\frac{a_e}{(a_e-b_e)^2}.
\label{weightnonunif} \end{equation} While Theorem \ref{tradeoff} suggests that, depending on the values of $\delta, m, n$, either the edge-based or the triangle-based adjacency matrix has a lower error rate, in practice it might be beneficial for numerical stability to use a weighted average of both of them. The result in Equation~(\ref{weightnonunif}) provides a bound for any weighted sum of these two hyperedge adjacency matrices. \subsection{The classical SBM} For the case of a classical SBM, we are only presented with $G_e$ but not $G_t$. In this case, $A_E$ is the adjacency matrix of the graph $G_e$, which we denoted by $A_{E^2}$. The matrix $A_T$ is the triangle motif adjacency matrix constructed from the triangles that arise due to $G_e$, which we denoted by $A_{E^3}$. The smallest non-zero eigenvalue $\lambda_{\min}$ of $E[A_{E^2}]$ equals $\frac{n}{k}\frac{a_e-b_e}{n} = \frac{a_e-b_e}{k}$. In Equation~(\ref{lambdaPTE}) we described the smallest non-zero eigenvalue of $E[A_{E^3}]$. Let $\hat{C}^{(E^2)}$ and $\hat{C}^{(E^3)}$ denote the $n \times k$ matrices of eigenvectors corresponding to the $k$ largest in absolute value eigenvalues of the classical edge-based adjacency matrix and the triangle-adjacency matrix, respectively. Using the bound for $\|A_{E^3}-E[A_{E^3}]\|_2$ from Theorem~\ref{ATE}, Section 3.1.3, and the Davis-Kahan Theorem, we have the following result. \begin{thm} Let $G_e$ be a dyadic graph generated from the $k$-block balanced SBM with parameters $C,n,k,a_e,b_e$. Then, with probability at least $1-n^{-c}$, the misclustering rate of the community assignments obtained by higher-order spectral clustering satisfies \[ R_{E^3} \leq 64(2+\epsilon)\frac{\|A_{E^3}-E[A_{E^3}]\|_2^2}{(\lambda_{\min}(E[A_{E^3}]))^2} \lesssim \frac{64(2+\epsilon)k^4n^2D_{E^3}}{(kb_e^2 + a_e^2+a_eb_e-2b_e^2)^2(a_e-b_e)^2}. \] \label{REEE} \end{thm} For the case $k=2$, which is a widely analyzed setting in the SBM literature, one can simplify the bound above.
First, note that in this case $\lambda_{\min}(E[A_{E^3}])$ simplifies to $\frac{a_e(a_e-b_e)(a_e+b_e)}{n} - O(\frac{1}{n^2})$, while $D_{E^3}=\max\{\frac{a_e^5}{n^2},c(\log n)^3 \}$. Furthermore, since $a_e \asymp b_e$, we have $a_e \asymp a_e + b_e$. Thus, \begin{equation} R_{E^3} \lesssim \frac{n^2\max\{(a_e^5/n^2), (\log n)^3\}}{a_e^2(a_e-b_e)^2(a_e+b_e)^2} \asymp \frac{\max\{a_e, (n^2/a_e^4)(\log n)^3\}}{(a_e-b_e)^2}. \label{RSBM} \end{equation} We conclude this section by evaluating the performance of spectral clustering on the weighted sum of the two motif adjacency matrices, the edge-based matrix $A_{E^2}$ and the triangle-based matrix $A_{E^3}$. For this purpose, let $A_{W}=A_{E^2} + w A_{E^3}$ be the weighted sum of motif adjacency matrices, where $w>0$ is a known weight. Clearly, $E[A_{W}] = E[A_{E^2}] + w E[A_{E^3}]$. Then, from the results of Theorem~\ref{ATE} and Theorem 5.2 of~\cite{lei2015consistency}, we have that with probability at least $1-o(1)$, it holds that \[ \|A_W - E[A_W]\|_2 \leq \| A_{E^2}-E[A_{E^2}]\|_2 + w \|A_{E^3}-E[A_{E^3}]\|_2 \lesssim \sqrt{\Delta} + w \sqrt{D_{E^3}}. \] The smallest non-zero eigenvalue of $E[A_W]$ can be computed as \[ \lambda_{\min}(E[A_W])=( a_e-b_e) + w\frac{a_e(a_e+b_e)(a_e-b_e)}{n} - O (1/n^2) . \] From Equation~(\ref{misclus}), we have \[ R_{W} \lesssim \left(\frac{ \sqrt{\Delta} + w \sqrt{D_{E^3}}}{ (a_e-b_e) + w \frac{a_e(a_e+b_e)(a_e-b_e)}{n}}\right)^2. \] \subsection{Remarks} Let us start by comparing the upper bound $R_{E^3}$ for higher-order spectral clustering under the SBM obtained in Equation (\ref{RSBM}) with the corresponding upper bound for spectral clustering based on the edge-based adjacency matrix $A_{E^2}$, which reads as $R \lesssim \frac{a_e}{(a_e-b_e)^2}$ \citep{lei2015consistency}.
The bound based on the triangle motif adjacency matrix $A_{E^3}$ is essentially equal to the bound based on the edge adjacency matrix as long as $a_e \gtrsim n^{2/5 + \epsilon}$, or equivalently, as long as $p^e_{\max} \gtrsim \frac{n^{2/5 + \epsilon}}{n}$. However, when $a_e$ grows slower than this rate, the performance guarantees for spectral clustering based on the motif adjacency matrix are worse than the corresponding bound based on the edge adjacency matrix. This result is intuitively justified, as we expect very few triangles in a sparse dyadic graph: the presence of a triangle is then a random phenomenon rather than an indicator of community structure. Hence, using triangles for community detection could lead to unwanted errors unless the graph is dense. For $a_e \gtrsim n^{2/5 + \epsilon}$, we say that the graph is ``triangle-dense'', and in this case one can use the triangle-adjacency matrix for community detection. When this condition is not satisfied, we either need to perform some form of regularization or completely dispose of the triangle-based adjacency matrix. However, as previously observed, real-world networks contain more triangles and higher-order structures, and consequently have a higher level of local clustering, than one would expect from the SBM. Hence, the SupSBM is a more appropriate model for networks with community structures. The upper bound on the misclustering rate in Equation~(\ref{Rsup}) suggests that spectral clustering based on higher-order structures can consistently detect communities under the SupSBM. In fact, if $a_e \asymp a_t$, then the corresponding upper bound is smaller by a factor of $n$ compared to that of spectral clustering under the standard SBM. This suggests that even though spectral clustering based on higher-order structures may not be appropriate for the SBM, it offers improved performance for the SupSBM.
Note that, as we focus on spectral clustering based on higher-order structures, we did not analyze edge-adjacency matrix based spectral clustering under the SupSBM. We hence cannot compare the misclustering rate of spectral clustering on the triangle-adjacency matrix with that of the edge-adjacency matrix under the SupSBM, as the latter does not follow directly from existing results, e.g.,~\cite{lei2015consistency} (due to the fact that the observed edge-adjacency matrix has edges generated by triangles from $G_t$ in addition to edges from $G_e$). Nevertheless, our analysis of the non-uniform hypergraph SBM, especially Theorem~\ref{tradeoff}, describes the error rate tradeoff between spectral clustering with edge-based and triangle-based adjacency matrices. \section{Experiments on Real Data} \label{sec:data} We test the effectiveness of spectral clustering using a weighted sum of adjacency and Laplacian matrices for higher-order structures on three benchmark network datasets. In particular, we choose to work with a uniformly weighted edge-triangle adjacency matrix, $A_W=A_E+A_T$, where $A_E$ and $A_T$ are the observed edge and triangle adjacency matrices defined earlier. The normalized Laplacian matrix is obtained as $L_W=D_W^{-1/2}A_WD_W^{-1/2}$, where $D_W$ is a diagonal matrix such that $(D_W)_{ii}=\sum_{j}(A_{W})_{ij}$. We compare the performance of various known spectral clustering methods based on edge-based matrices, namely those using adjacency matrices (spA), normalized Laplacian matrices (spL), and regularized normalized Laplacian matrices (rspL)~\citep{sarkar2015role,chin2015stochastic,qr13}, with their weighted higher-order structure counterparts, hospA, hospL and horspL, respectively. In all six instances of spectral clustering, the eigenvectors are row-normalized before applying the $k$-means algorithm. Table \ref{tab:polblogs} summarizes the performance of the methods.
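As an illustration of this pipeline, the following sketch builds the uniformly weighted matrix $A_W$ and the matrix $L_W$ from an observed edge adjacency matrix. Two simplifications are ours, not part of the method as evaluated in the experiments: the triangle motif counts are computed from the edge adjacency matrix itself, and a sign split of the second eigenvector stands in for row-normalized $k$-means with $k=2$.

```python
import numpy as np

def triangle_motif_adjacency(A):
    """(A_T)_{ij} = number of triangles containing edge ij: the count of
    common neighbours of i and j, masked to existing edges."""
    return (A @ A) * A

def higher_order_spectral_partition(A, w=1.0):
    """Sketch of hospL-style clustering: spectral bipartition using the
    normalized matrix L_W = D_W^{-1/2} A_W D_W^{-1/2}, A_W = A_E + w*A_T."""
    A_W = A + w * triangle_motif_adjacency(A)
    d = A_W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
    L_W = d_inv_sqrt[:, None] * A_W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_W)        # eigenvalues in ascending order
    fiedler = vecs[:, -2]                   # second-largest eigenvector
    # its sign pattern bipartitions the graph (stand-in for k-means, k = 2)
    return (fiedler > 0).astype(int)
```

On a toy graph made of two $5$-cliques joined by a single bridge edge, the bridge edge lies in no triangle, so the triangle weighting further separates the two blocks and the sign split recovers them.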
\textbf{Political blogs data.} The political blogs dataset \citep{adamic05}, collected during the 2004 US presidential election, comprises $1490$ political blogs with hyperlinks between them, giving rise to directed edges. This benchmark dataset has been analyzed by a number of authors~\citep{kn11,amini13,qr13,joseph2016impact,j15,gao2017achieving,pc16} in order to test community detection algorithms. Following previous approaches, we first convert directed edges into undirected edges by assigning an edge between two vertices if there is an edge between them in either direction, and consider the largest connected component of the resulting graph, which contains $1222$ vertices. The ground truth community assignment used for comparisons splits the graph into two groups, liberal and conservative, according to the political leanings of the blogs. We note that hospA and horspL are competitive with the corresponding edge-based methods spA and rspL, respectively. However, for spectral clustering based on the normalized Laplacian matrix, the edge-based method spL completely fails to detect the community structure, due to well-documented reasons described in~\cite{qr13,j15,joseph2016impact,gao2017achieving}. On the other hand, hospL succeeds in splitting the graph into two communities with only $59$ misclustered vertices. \begin{table} \protect\caption{The number of misclustered vertices for various spectral community detection algorithms that use different forms of weighted higher-order matrices.
Performance is evaluated based on a known ground truth model.} \centering \begin{tabular}{ccccccc} \hline Dataset & spA & hospA & spL & hospL & rspL & horspL \tabularnewline \hline Political blogs & 63 & 71 & 588 & 59 & 64 & 64 \tabularnewline Karate club & 0 & 0 & 1 & 0 & 0 & 0 \tabularnewline Dolphins & 2 & 2 & 2 & 1 & 2 & 1 \tabularnewline \hline \end{tabular} \label{tab:polblogs} \end{table} \textbf{Karate club data.} Zachary's karate club dataset \citep{zachary77} is another frequently used benchmark for network community detection \citep{ng04,bc09,j15}. The network describes friendship patterns of $34$ members of a karate club, and the ground truth splits club members into two subgroups. The method spL misclusters one vertex, while all other methods recover the communities in an error-free manner. \textbf{Dolphin social network data.} This dataset describes an undirected social network involving $62$ dolphins in Doubtful Sound, New Zealand, curated by~\citet{dolphin2}. Over the course of the study, the group split into two due to the departure of a ``well connected'' dolphin. These two subgroups are used as the ground truth. From Table~\ref{tab:polblogs}, one can see that only hospL and horspL miscluster one dolphin, while all the remaining methods miscluster two dolphins. \section{Conclusion and future directions} We proposed and analyzed the superimposed stochastic block model, a mathematically tractable random graph model with community structure that produces networks with properties similar to those observed in real networks. In particular, it can generate sparse networks with short average path length (small-world), strong local clustering, and community structure. To produce the strong local clustering property, the model allows for dependencies among the edges, while remaining mathematically suitable for the analysis of algorithms.
While not pursued here, a degree correction to the model, similar to that of the degree-corrected stochastic block model, is expected to produce networks with highly heterogeneous degree distributions (power law), hub nodes, and core-periphery structure, while simultaneously retaining the aforementioned properties. We hope to extend the model in that direction in future work. We have also analyzed the performance of the higher-order spectral clustering algorithm under the proposed SupSBM. This analysis showed that it is possible to mathematically analyze community detection algorithms under the SupSBM, and that the method can detect community structure consistently for graphs generated from the SupSBM. In the future, we hope to determine minimax rates of error of community detection under the SupSBM and obtain algorithms that achieve those rates. \section*{Appendix A} In the Appendices we use $r$ and $c$ to represent generic constants whose values may differ between results. \subsubsection*{Proof of Theorem \ref{ATT}} \begin{proof} We follow and extend the arguments in the proofs of similar results for standard adjacency matrices in~\citet{lei2015consistency,gao2017achieving}, and \citet{chin2015stochastic}, to the case of triangle-motif adjacency matrices. The arguments in all of the above mentioned papers rely on the use of $\epsilon$-nets on random regular graphs~\citep{friedman1989second,feige2005spectral}. Let $S$ denote the unit ball in $n$-dimensional Euclidean space. An $\epsilon$-net of $S$ is defined as follows: \[ \mathcal{N}=\{x=(x_1,\ldots,x_n) \in S: \, \forall i, \, \epsilon\sqrt{n}x_i \in \mathbb{Z}\}, \] where $\mathbb{Z}$ denotes the set of integers. Hence, $\mathcal{N}$ is a set of grid points with spacing $\frac{1}{\epsilon \sqrt{n}}$ spanning all directions within the unit ball. For our analysis we only use $\epsilon=1/2$-nets and henceforth use $\mathcal{N}$ to denote such nets.
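The net just defined is finite, and for small $n$ it can be enumerated directly; the following toy check (our own illustration, not part of the proof) verifies numerically the bound $\|W\|_2 \leq 4 \sup_{x,y \in \mathcal{N}} |x^TWy|$ of Lemma 2.1 of \cite{lei2015consistency} for random symmetric matrices.

```python
import itertools
import numpy as np

def half_net(n):
    """Enumerate the 1/2-net: grid points x in the unit ball whose
    coordinates satisfy (1/2)*sqrt(n)*x_i in Z (feasible for small n only)."""
    step = 2.0 / np.sqrt(n)              # grid spacing 1/(eps*sqrt(n)), eps = 1/2
    kmax = int(np.floor(1.0 / step))     # |x_i| <= 1 bounds the integer range
    pts = [step * np.array(ks, dtype=float)
           for ks in itertools.product(range(-kmax, kmax + 1), repeat=n)]
    return np.array([x for x in pts if x @ x <= 1.0 + 1e-12])

def net_bound_holds(W):
    """Check ||W||_2 <= 4 * sup_{x,y in N} |x^T W y| for a given matrix W."""
    net = half_net(W.shape[0])
    return np.linalg.norm(W, 2) <= 4 * np.abs(net @ W @ net.T).max() + 1e-9
```

The enumeration grows exponentially in $n$, which is why the proof works with the net abstractly rather than computationally.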
Next, we recall Lemma 2.1 of~\cite{lei2015consistency}, which established that for any $W \in \mathbb{R}^{n\times n}$, one has $\|W\|_2 \leq 4 \sup_{x,y \in \mathcal{N}} |x^TWy|$. Hence, a constant-approximation upper bound for $\|A_{T^2}-E[A_{T^2}]\|_2$ may be found by optimizing $|x^T(A_{T^2}-E[A_{T^2}]) y|$ over all possible pairs $(x,y) \in \mathcal{N}$. In addition, note that \begin{equation} x^T(A_{T^2}-E[A_{T^2}]) y =\sum_{i,j} x_iy_j(A_{T^2}-E[A_{T^2}])_{ij}=\sum_{i,j}\sum_{k \neq i,j}x_iy_j(T_{ijk}-E[T_{ijk}]). \end{equation} We now divide the pairs $(x_i,y_j)$ into two sets, the set of \textit{light pairs} $L$ and the set of \textit{heavy pairs} $H$, according to \begin{align*} L & =\{(i,j): |x_iy_j| \leq \frac{\sqrt{\Delta_t}}{n}\}, \\ H & =\{(i,j): |x_iy_j| > \frac{\sqrt{\Delta_t}}{n}\}, \end{align*} where $\Delta_t$ is as defined in the statement of the theorem. We bound the term $x^T(A_{T^2}-E[A_{T^2}]) y$ separately for the light and heavy pairs, as summarized in the following two lemmas. \begin{lem} (\textit{Light pairs}) For some constant $r_1>0$, there exists a constant $c_2(r_1)>0$, such that with probability at least $1-\exp(-r_1n)$, \[ \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in L} \sum_{k} x_{i}y_{j}(T_{ijk}-E[T_{ijk}])| < c_2(r_1)\sqrt{\Delta_t}. \] \label{lightpairs} \end{lem} Whenever clear from the context, we suppress the dependence of the constants on other terms (e.g., $c_2(r_1)=c_2$). To obtain a similar bound for the heavy pairs, write $w_{ijk}=T_{ijk}-E[T_{ijk}]$, $a_{ijk}=T_{ijk}$, and $p_{ijk}=E[T_{ijk}]$, and first note that \begin{equation} \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in H} \sum_{k} x_{i}y_{j}w_{ijk}| \leq \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in H} \sum_{k} x_{i}y_{j}a_{ijk}| + \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in H} \sum_{k} x_{i}y_{j}p_{ijk}|.
\label{heavy pairs} \end{equation} The second term can be easily bounded as follows: \begin{align*} |\sum_{(i,j) \in H} \sum_{k} x_{i}y_{j}p_{ijk}| & \leq \sum_{(i,j) \in H} \sum_{k} \frac{x_{i}^2y_{j}^2}{|x_{i}y_{j}|}p_{ijk} \\ & \leq \frac{n}{\sqrt{\Delta_t}}\sum_k \max_{i,j,k} (p_{ijk}) \sum_{i,j} x_i^2 y_j^2 \\ & \leq \frac{n}{\sqrt{\Delta_t}} \frac{\Delta_t}{n} \leq \sqrt{\Delta_t}. \end{align*} A bound for the first term is established in the next lemma. \begin{lem} For some constant $r_2>0$, there exists a constant $c_3(r_2)>0$ such that with probability at least $1-n^{-r_2}$, $\sup_{x,y \in \mathcal{N}}|\sum_{(i,j) \in H} \sum_{k} x_{i}y_{j}T_{ijk}| \leq c_3 \sqrt{\Delta_t}$. \label{heavypairs} \end{lem} Combining the results for the light and heavy pairs, we find that with probability at least $1-n^{-r}$, \[ \|A_{T^2}-E[A_{T^2}]\|_2 \leq 4 \sup_{x,y \in \mathcal{N}}|x^T(A_{T^2}-E[A_{T^2}])y| \leq c_1 \sqrt{\Delta_t}. \] This completes the proof of Theorem \ref{ATT}. \end{proof} \subsubsection*{Proof of Theorem \ref{ATE}} \begin{proof} As before, we create an $\epsilon$-net $\mathcal{N}$ for the unit ball and separately analyze the light and heavy pairs. In this setting, the pairs are defined according to $$L =\{(i,j): |x_iy_j| \leq \frac{\sqrt{D_{E^3}}}{n\tau_{\max}}\}$$ and $$H =\{(i,j): |x_iy_j| > \frac{\sqrt{D_{E^3}}}{n\tau_{\max}}\},$$ with $D_{E^3}$ as defined in the statement of the theorem. For the light pairs, we can prove the following result. \begin{lem} (\textit{Light pairs}) For some constant $r_1>0$, there exists a constant $c_3(r_1)>0$ such that with probability at least $1-n^{-r_1}$, \[ \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in L} x_{i}y_{j}(A_{E^3} -E[A_{E^3}])_{ij}| < c_3\sqrt{D_{E^3}}. \] \label{TElight} \end{lem} To bound the contribution of the heavy pairs, we once again divide the sum into two terms. First, let $W_{E^3} = (A_{E^3} -E[A_{E^3}])$ and note that $\max_{i,j} (E[A_{E^3}])_{ij} \leq n(p^e_{\max})^3$.
Then, \begin{equation} \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in H} x_{i}y_{j}(W_{E^3})_{ij}| \leq \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in H} x_{i}y_{j}(A_{E^3})_{ij}| + \sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in H} x_{i}y_{j}(E[A_{E^3}])_{ij}|. \label{heavy pairs TE} \end{equation} The second term can be bounded as follows: \begin{align*} |\sum_{(i,j) \in H} x_{i}y_{j}(E[A_{E^3}])_{ij}| & \leq \sum_{(i,j) \in H} \frac{x_{i}^2y_{j}^2}{|x_{i}y_{j}|}(E[A_{E^3}])_{ij} \\ & \leq \frac{n\tau_{\max}}{\sqrt{D_{E^3}}}\max_{ij} (E[A_{E^3}])_{ij}\sum_{(i,j)} x_i^2 y_j^2 \\ & \leq \frac{n\tau_{\max}}{\sqrt{D_{E^3}}}n(p^e_{\max})^3 \leq \frac{D_{E^3}}{\sqrt{D_{E^3}}} = \sqrt{D_{E^3}}, \end{align*} where the penultimate inequality follows since if $\tau_{\max}=n(p^e_{\max})^2$, then $$n\tau_{\max}n(p^e_{\max})^3 = n^3(p^e_{\max})^5 \leq D_{E^3}.$$ In addition, if $\tau_{\max}=\log n$, then $n(p^e_{\max})^2 \leq \log n$. Consequently, $$n\tau_{\max}n(p^e_{\max})^3 = n(p^e_{\max})n(p^e_{\max})^2 \log n \leq n(p^e_{\max}) (\log n)^2 \leq D_{E^3}.$$ For the first term in Equation (\ref{heavy pairs TE}) we have the following result. \begin{lem} For some constant $c>0$, there exists a constant $c_1(c)>0$ such that with probability at least $1-2n^{-c}$, $\sup_{x,y \in \mathcal{N}}|\sum_{(i,j) \in H} x_{i}y_{j}(A_{E^3})_{ij}| \leq c_1 \sqrt{D_{E^3}}$. \label{TEheavy1} \end{lem} Combining the results for the light and heavy pairs, we obtain \[ \|A_{E^3}-E[A_{E^3}]\|_2 \leq 4 \sup_{x,y \in \mathcal{N}}|x^T(A_{E^3}-E[A_{E^3}])y| \leq c_1 \sqrt{D_{E^3}}. \] \end{proof} \subsubsection*{Proof of Theorem \ref{ATTT}} \begin{proof} The proof of this result and those of Theorems \ref{ATTE} and \ref{ATEE} will repeatedly use Theorem 9 of \cite{warnke2017upper}, which we reproduce here for ease of reference. \begin{prop} [Theorem 9 of \cite{warnke2017upper}] Let $(Y_{i}),\, i \in \mathcal{I}$, be a collection of non-negative random variables with $\sum_{i \in \mathcal{I}} E(Y_{i}) \leq \mu$.
Assume that $\sim$ is a symmetric relation on $\mathcal{I}$ such that each $Y_{i}$ with $i \in \mathcal{I}$ is independent of $\{Y_{j}:\, j \in \mathcal{I}, \, j \nsim i \}$. Let $Z_C = \max \sum_{i \in \mathcal{J}} Y_{i}$, where the maximum is taken over all sets $\mathcal{J} \subset \mathcal{I}$ such that $\max_{j \in \mathcal{J}} \sum_{i \in \mathcal{J}, i \sim j} Y_{i} \leq C$. Then for all $C, t > 0$ we have \begin{align*} P(Z_C \geq \mu + t ) \leq \min \Big\{ \exp \left( -\frac{t^2}{2C(\mu +t/3)}\right), \left(1+\frac{t}{2\mu}\right)^{-t/2C} \Big \}. \end{align*} \label{prop1} \end{prop} For any vertex $i$, define the degree of $i$ in the matrix $A_{T^3}$ according to \[ (d_{T^3})_i = \sum_{j}\sum_{k} T^3_{ijk}. \] The expectation of the degree may be bounded as \begin{align*} E[(d_{T^3})_i] & =E[\sum_{j} \sum_{k}(1-T_{ijk}) 1(\sum_{k_1 \neq k}T_{ijk_1}>0)1(\sum_{k_2 \neq k}T_{jkk_2}>0)1(\sum_{k_3\neq k}T_{ikk_3}>0)]\\ & \leq \sum_{j} \sum_{k} P(\sum_{k_1 \neq k}T_{ijk_1}>0)P(\sum_{k_2 \neq k}T_{jkk_2}>0)P(\sum_{k_3\neq k}T_{ikk_3}>0) \\ & \leq \sum_{j} \sum_{k} (np^{t}_{\max})^3 \leq n^5(p^{t}_{\max})^3 \leq \Delta_{T^3}, \end{align*} where the second inequality follows from the union bound, since \[ P(\sum_{k_1 \neq k}T_{ijk_1}>0) \leq P(\cup_{k_1 \neq k}\{T_{ijk_1}=1\}) \leq \sum_{k_1 \neq k} P(T_{ijk_1}=1) \leq np^t_{\max}. \] Let $I_{i}=\{(T^3)_{ijk}, j=\{1,\ldots,n\},k=\{1,\ldots,n\}\}$ denote the set of all triangles incident to vertex $i$ and generated incidentally by three other triangles in $G_t$. Observe that the set $\{T^{3}\}$ comprises elements that are indicator random variables indexed by $\theta=\{i,j,k\}$, corresponding to incidentally generated triangles.
Consequently, two random variables in the family $(T^3)_{\theta}$, when restricted to the set $I_i$, are dependent if and only if one of the triangles creating the edges $ik$ or $ij$ of $(T^3)_{ijk}$ includes $j'$ or $k'$ as a vertex and is consequently part of $(T^3)_{ij'k'}$ (see Figure~\ref{dependence}(b)). We refer to an event corresponding to the above-described scenario as $TC$, and note that this event also accounts for the case when we have $T^3_{ij'k}$ ``sharing'' the edge $ik$ with $T^3_{ijk}$. In summary, the set $I_i$ contains $O(n^2)$ dependent random variables, with $(d_{T^3})_i$ denoting the sum of all the random variables in the set $I_i$. However, in what follows we show that the dependence is ``limited'' in the sense that we can limit the number of other variables dependent on one random variable in the set $I_i$ to $O(n^4(p^t_{\max})^3)$ with high probability. We characterize the event $TC$ through an associated indicator random variable. We show that the number of incidentally generated triangles $T^3_{ij'k'}$ that give rise to $TC$ events is bounded, provided that certain ``good events'' occur with high probability. For this purpose, let $T_{ij'k}$ be a triangle in $G_t$ leading to the creation of an incidental triangle $T^3_{ijk}$ (see Figure~\ref{dependence}(b)). To create the incidental triangle $T^3_{ij'k'}$, we also require the existence of at least two triangles from $G_t$ with edges $j'k'$ and $ik'$. We capture this event through its indicator variable \[ V_{j'k'} = (1-T_{ij'k'})T_{ij'k}1(\sum_{k''\neq i}T_{j'k'k''}>0)1(\sum_{k'''}T_{ik'k'''}>0). \] Observe that one can think of $V_{j'k'}$ as the random variable $T^3_{ij'k'}$ given that $T^3_{ijk}=1$ (see Figure~\ref{dependence}(b)). Consequently, for any $T^3_{ijk} \in I_i$, the number of incidentally generated triangles $T^3_{ij'k'}$ that also belong to the set $I_i$ \textit{and} contribute to the occurrence of the event $TC$ is at most $2 \sum_{j'}(\sum_{k'} V_{j'k'})$.
Next, we define a ``good event'' as $\Gamma = \Gamma_1 \cap \Gamma_2$, where $\Gamma_1$ and $\Gamma_2$ are two events that for any $i,j,k$ may be described as follows: \begin{align*} \Gamma_1 & = \{\text{For an edge } ij\text{ there are at most } V_{\max}= \max \{n^3(p^t_{\max})^2,(\log n)^2\} \text{ vertices } k' \\ & \quad \text{ such that the edges } ik' \text{ and } jk' \text{ are introduced by triangles from } G_t\}, \\ \Gamma_2 & = \{ \text{The number of triangles in } G_t \text{ sharing an edge } ij \text{ is at most } 3W_{\max}=3\max\{np^t_{\max}, \log n \} \}. \end{align*} Hence, the event $\Gamma_2$ essentially asserts that there are at most $6W_{\max}$ choices for the value of $j'$, and given a $j'$, the event $\Gamma_1$ asserts that there are at most $V_{\max}$ choices for a $k'$. Consequently, under the ``good event'' $\Gamma$, the number of triangles in $I_i$ on which a triangle $T^3_{ijk}$ depends is $2\sum_{j'}\sum_{k'}V_{j'k'} \leq 6V_{\max}W_{\max}$. Next, define a set $J \subset I_i$ as follows: \[ J=\{ \theta \in I_i : \, \max_{\theta_1 \in J} |\{\theta_2 \in J:\theta_1 \text{ and } \theta_2 \text{ are dependent }\}| \leq 6V_{\max}W_{\max} \}. \] In a nutshell, the sets $J$ are collections of $\theta$'s such that no $T^3_{\theta_1}$ is dependent on more than $6V_{\max}W_{\max}$ other $T^3_{\theta_2}$'s. Let $\theta_1 \sim \theta_2$ denote that the two underlying random variables indexed by $\theta_1$ and $\theta_2$ are dependent. Then we have \[ \max_{\theta_2 \in J} \sum_{\theta_1 \in J: \theta_1 \sim \theta_2} T^3_{\theta_1} \leq 2\sum_{j'}\sum_{k'}V_{j'k'}\leq 6V_{\max}W_{\max}, \quad E [ \sum_{\theta \in I_i}T^3_{\theta}] = E[(d_{T^3})_i] \leq \Delta_{T^3}.
\] Applying Proposition \ref{prop1} with $t=\mu=\Delta_{T^3}$ leads to \begin{align*} P(\max_{J} \sum_{\theta \in J}T^3_{\theta} \geq 2\Delta_{T^3}) & \leq \min \bigg \{\exp \left( -\frac{\Delta_{T^3}^2}{12V_{\max}W_{\max}(\Delta_{T^3} + \Delta_{T^3}/3)}\right), \left(1+\frac{\Delta_{T^3}}{2\Delta_{T^3}}\right)^{-\frac{\Delta_{T^3}}{12V_{\max}W_{\max}}} \bigg \}\\ & = \min \bigg \{\exp \left(- \frac{\Delta_{T^3}}{16V_{\max}W_{\max}}\right),\left(\frac{3}{2}\right)^{-\Delta_{T^3}/(12V_{\max}W_{\max})} \bigg \} \\ & \leq \exp(-c' \log n) \leq n^{-c'}. \end{align*} The last inequality may be established through the following argument. If $W_{\max} =np^t_{\max}$, then $np^t_{\max} \geq \log n$, which implies $$n^3(p^t_{\max})^2 \geq n^3 (\frac{\log n}{n})^2 = n (\log n)^2.$$ Then, $V_{\max}=n^3(p^t_{\max})^2$, and consequently $$\frac{\Delta_{T^3}}{V_{\max}W_{\max}} = \max \{n, \frac{(\log n)^4}{n^4(p^t_{\max})^3} \} \geq n.$$ On the other hand, if $W_{\max} =\log n$, then $np^t_{\max} < \log n$. Now, either $V_{\max}=(\log n)^2$, in which case $W_{\max}V_{\max}=(\log n)^3$ and $\frac{\Delta_{T^3}}{V_{\max}W_{\max}} \geq \log n$. Or, $V_{\max}=n^3(p^t_{\max})^2$, and consequently $V_{\max}W_{\max}=n^3(p^t_{\max})^2 \log n$. Then $$\frac{\Delta_{T^3}}{V_{\max}W_{\max}} = \max \{\frac{n^2p^t_{\max}}{\log n}, \frac{(\log n)^4}{n^3(p^t_{\max})^2 \log n} \}\geq \log n,$$ since $n^2p^t_{\max} > (\log n)^2$ by assumption. Recall that the event $TC$ describes the only setting in which two random variables in the set $I_i$ are dependent on each other; under the good event $\Gamma$, we have $I_i=\operatornamewithlimits{arg \,max}_J |J|$ and consequently, $\max_{J} \sum_{\alpha \in J}T^3_{\alpha}=(d_{T^3})_i$. Next, we need to show that the probability of the ``bad event'' (i.e., the complement of the good event) is exponentially small. For that, we note \[ P(\Gamma^C) = P(\Gamma_1^C \cup \Gamma_2^C) \leq P(\Gamma_1^C)+P(\Gamma_2^C).
\] The last term $P(\Gamma_2^C)$ can be easily bounded using Bernstein's inequality as follows. Let $W_{ij}=\sum_{k}T_{ijk}$. Then $W_{ij}$ counts the number of triangles in $G_t$ sharing an edge $ij$. The event $\Gamma_2$ asserts that the number of triangles in $G_t$ sharing an edge is at most $3W_{\max}=3\max\{np^t_{\max}, \log n \}$. From Bernstein's inequality and the union bound we consequently have \begin{align*} P(\Gamma_2^C) & \leq n^2 P (W_{ij} > 3W_{\max}) \\ & \leq n^2 \exp(-\frac{9W^2_{\max} }{2\sum_{k}p^t_{ijk}(1-p^t_{ijk})+\frac{6}{3}W_{\max}}) \\ & \leq n^2 \exp(-\frac{9W^2_{\max}}{2W_{\max}+2W_{\max}}) \\ &\leq n^2 \exp(-\frac{9}{4}W_{\max}) \leq \exp(-\frac{1}{4}\log n) \leq n^{-c''}. \end{align*} \begin{figure}[h] \centering \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Gamma1.jpg} \end{subfigure}% \hspace{40pt} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Gamma2.jpg} \end{subfigure} \begin{center} (a) \hspace{200pt} (b) \end{center} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Gamma3.jpg} \end{subfigure}% \hspace{40pt} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Gamma4.jpg} \end{subfigure} \begin{center} (c) \hspace{200pt} (d) \end{center} \caption{Second-order dependencies that need to be taken into account in the concentration inequalities for ``good events'': (a) $\Gamma_1$ for $T^3$, (b) $\Gamma_1$ for $T^2E$, (c) $\Gamma_3$ for $T^2E$, and (d) $\Gamma_3$ for $TE^2$.} \label{secdep} \end{figure} We now turn our attention to the event $\Gamma_1$. Fix a $j'$. The sum $\sum_{k'}V_{j'k'}$ includes dependent random variables; two random variables in the sum, say $V_{j'k'}$ and $V_{j'k''}$, are dependent if and only if there is a triangle, say $T_{ik'k''}$, in $G_t$ that has both $ik'$ and $ik''$ as edges (Figure~\ref{secdep}(a)). However, the number of triangles in $G_t$ sharing an edge $ij'$ can be bounded by referring to the event $\Gamma_2$.
First, we define $I_{ij'}$ to be the collection of all $V_{j'k'}$ with fixed $i$ and $j'$. Then, we define the sets $J$ as subsets of $I_{ij'}$ such that no $V_{j'k'}$ is dependent on more than $W_{\max}$ other $V_{j'k'}$s. To apply Proposition \ref{prop1} to $\sum_{k'} V_{j'k'}$, we first observe that one may upper bound the relevant expectation as \[ E[\sum_{k'}(1-T_{ij'k'})1(\sum_{k''\neq i}T_{j'k'k''}>0)1(\sum_{k'''}T_{ik'k'''}>0)] \leq n^3(p^t_{\max})^2 \leq V_{\max}, \] and $\max_{\beta \in J} \sum_{\alpha \in J: \alpha \sim \beta} V_{j'k'} \leq W_{\max}.$ Then, with $t=\mu=V_{\max}$, \begin{align*} P(\sum_{k'}V_{j'k'} \geq 2V_{\max}) & \leq \min \bigg \{\exp \left( -\frac{V_{\max}^2}{2W_{\max}(V_{\max} + V_{\max}/3)}\right), \left(1+\frac{V_{\max}}{2V_{\max}}\right)^{-V_{\max}/2W_{\max}} \bigg \}\\ & = \min \bigg \{\exp \left(- \frac{V_{\max}}{ \frac{8}{3}W_{\max}}\right),\left(\frac{3}{2}\right)^{-V_{\max}/(2W_{\max})} \bigg \} \\ & \leq \exp(-c''' \log n) \leq n^{-c'''}. \end{align*} The last inequality holds due to the following argument. If $W_{\max} =np^t_{\max}$, then $p^t_{\max} \geq \frac{\log n}{n}$, and consequently, $n^2p^t_{\max} \geq n \log n$. Then $\frac{V_{\max}}{W_{\max}} \geq n^2p^t_{\max}>\log n$. If $W_{\max} =\log n$, then $\frac{V_{\max}}{W_{\max}} \geq \log n,$ since $V_{\max} \geq (\log n)^2$. Since there are at most $n$ choices for $j'$, for any $i$, the union bound leads to $$P(\Gamma_1^C)\leq nP(\sum_{k'}V_{j'k'} \geq 2V_{\max}) \leq n^{-c''''}.$$ Combining the results, we have \[ P((d_{T^3})_i \geq 2 \Delta_{T^3}) \leq n^{-c'} +n^{-c''} +n^{-c''''}. \] Invoking the union bound, now over all $i$, we can show that $\max_i (d_{T^3})_i \leq c_1 \Delta_{T^3}$ with probability at least $1-n^{-c}$ for some constant $c>0$. By Equation (\ref{twonorm}), the claimed result holds. \end{proof} \subsubsection*{Proof of Theorem \ref{ATTE}} \begin{proof} Triangles of type $T^2E$ are generated by two triangles from $G_t$ and one edge from $G_e$.
Without loss of generality, we may assume that in $T^2E_{ijk}$, the sides $ij$ and $jk$ are generated by triangles from $G_t$ and that the side $ik$ is generated by an edge from $G_e$. Then, we have \begin{align*} E[(d_{T^2E})_{i}] & =E[\sum_{j} \sum_{k} T^2E_{ijk}]\\ & \leq \sum_{j} \sum_{k} P(\sum_{k_1 \neq k}T_{ijk_1}>0)P(\sum_{k_2 \neq k}T_{jkk_2}>0)P(E_{ik}=1) \\ & \leq \sum_{j} \sum_{k} (np^{t}_{\max})^2(p^e_{\max}) \leq n^4(p^{t}_{\max})^2p^e_{\max} \leq \Delta_{T^2E}. \end{align*} Let $I_i=\{(T^2E)_{ijk}, j=\{1,\ldots,n\},k=\{1,\ldots,n\}\}$ denote the set of all incidentally generated triangles of type $T^2E$ that include the vertex $i$. The set $\{T^2E_{\theta}\}$, indexed by $\theta=\{i,j,k\}$, represents a family of indicator variables corresponding to incidentally generated triangles of type $T^2E$. Two random variables in the family $(T^2E)_{\theta}$ restricted to the set $I_i$ may be dependent in two scenarios. One possibility is that one of the sides $ij$ or $ik$ is an edge from $G_e$ and serves as an edge for $(T^2E)_{ijj'}$ or $(T^2E)_{ikj'},$ for some $j'$ (see Figure \ref{dependence}(c)). The other possibility is that one of the sides $ij$ or $ik$ is created by a triangle from $G_t$ and the same triangle is involved in creating $(T^2E)_{ij'k'}$ for some $j'$ and $k'$ (see Figure \ref{dependence}(d)). We refer to these two events as $TC_1$ and $TC_2$, respectively. We now need to derive a high-probability bound on $(d_{T^2E})_i$, which equals the sum of the random variables $T^2E_{ijk}$. We proceed as in the proof of the previous theorem and describe ``good events'' which, with high probability, limit the number of random variables that any given random variable in the sum depends on. For this purpose, we characterize the events $TC_1$ and $TC_2$ using indicator variables. First, define the random variables \[Q_{j'}=(1-T_{ijj'})1(\sum_{k'}T_{jj'k'}>0)1(\sum_{k''}T_{ij'k''}>0).
\] Then, for any $T^2E_{ijk}$, the number of other incidentally generated triangles in $I_i$ creating the event $TC_1$ is at most $\sum_{j'} Q_{j'}$ (Figure~\ref{dependence}(c)). With regards to the event $TC_2$, define the following random variable \[U_{j'k'}=(1-T_{ij'k'})T_{ijj'}1(\sum_{k''\neq i}T_{j'k'k''}>0)1(E_{ik'}=1). \] Then, for any $T^2E_{ijk}$, the number of additional incidental triangles in $I_i$ that contribute to the event $TC_2$ is at most $2\sum_{j'} \sum_{k'}U_{j'k'}$ (Figure~\ref{dependence}(d)). As before, define a ``good event'' as $\Gamma = \Gamma_1 \cap \Gamma_2 \cap \Gamma_3$, where for any $i,j,k$, $\Gamma_1$, $\Gamma_2$ and $\Gamma_3$ are defined as: \begin{align*} \Gamma_1 & = \{\text{For an edge } ij\text{ there are at most } V_{\max}= \max \{n^3(p^t_{\max})^2,(\log n)^2\} \text{ vertices } k', \\ & \quad \text{ such that edges } ik' \text{ and } jk' \text{ are generated by triangles from } G_t\}, \\ \Gamma_2 & = \{ \text{The number of triangles in } G_t \text{ incident to an edge } ij \text{ is at most } 3W_{\max}=3\max\{np^t_{\max}, \log n \} \},\\ \Gamma_3 & = \{\text{For an edge } ij\text{ there are at most } U_{\max}= \max \{n^2p^t_{\max}p^e_{\max},(\log n)^2\} \text{ vertices } k', \\ & \quad \text{ such that the edge } ik' \text{ arises from } G_e \text{ and edge } jk' \text{ arises from a triangle in } G_t\}. \end{align*} Note that the second event $\Gamma_2$ is the same as the event $\Gamma_2$ described in the proof of Theorem~\ref{ATTT}. As in the previous setting, the events above are defined in a way that ensures that the ``good events'' happen with high probability. We note that under the good event $\Gamma$, one has \[ \sum_{j'}Q_{j'} \leq V_{\max}, \quad 2\sum_{j'}\sum_{k'} U_{j'k'} \leq 6W_{\max}U_{\max}.
\] Hence, these two bounds limit the number of occurrences of the events $TC_1$ and $TC_2$, respectively, and consequently limit the number of random variables that a random variable in the sum $(d_{T^2E})_i$ is dependent on. We once again apply Proposition \ref{prop1} to $(d_{T^2E})_i$ under the good event $\Gamma$, and then separately upper bound $P(\Gamma^C)$. For this purpose, define a set $J \subset I_i$ as follows: \[ J=\{ \theta \in I_i : \, \max_{\theta_1 \in J} |\{\theta_2 \in J:\theta_1 \text{ and } \theta_2 \text{ are dependent }\}| \leq C \}, \] where $C$ may be found from \begin{align*} \max_{\beta \in J} \sum_{\theta \in J: \theta \sim \beta} T^2E_{\theta} & \leq \sum_{j'}Q_{j'} + 2\sum_{j'}\sum_{k'} U_{j'k'} \\ & \leq V_{\max} + 6W_{\max}U_{\max} \\ &\leq 7\max \{n^3(p^t_{\max})^2, n^2p^t_{\max}p^e_{\max}\log n, (\log n )^3 \} =C. \end{align*} Then, $E[\sum_{\theta \in I_i}T^2E_{\theta}] \leq \Delta_{T^2E}$, and with $t=\mu=\Delta_{T^2E}$, \begin{align*} P( \max_{J}\sum_{\theta\in J}T^2E_{\theta} \geq 2\Delta_{T^2E}) & \leq \min \bigg \{\exp \left( -\frac{\Delta_{T^2E}^2}{2C(\Delta_{T^2E} + \Delta_{T^2E}/3)}\right), \left(1+\frac{\Delta_{T^2E}}{2\Delta_{T^2E}}\right)^{-\Delta_{T^2E}/(2C)} \bigg \}\\ & = \min \bigg \{\exp \left(- \frac{\Delta_{T^2E}}{ \frac{8}{3}C}\right),\left(\frac{3}{2}\right)^{-\Delta_{T^2E}/(2C)} \bigg \} \\ & \leq \exp(-c' \log n) \leq n^{-c'}, \end{align*} where the last inequality holds due to the following argument. If $C =n^3(p^t_{\max})^2$, then $\frac{\Delta_{T^2E}}{C} \geq np^e_{\max}$ which, by assumption, is greater than $ \log n$. If $C =n^2p^t_{\max}p^e_{\max} \log n$, then $\frac{\Delta_{T^2E}}{C} \geq \frac{n^2p^t_{\max}}{\log n}$ which, by assumption, is greater than $\log n$. Finally, if $C=(\log n )^3,$ then $\frac{\Delta_{T^2E}}{C} \geq \log n$. In our previous proofs, we already established upper bounds for $P(\Gamma_1^C)$ and $P(\Gamma_2^C)$. To complete the proof of the claimed result, we only need to determine an upper bound on $P(\Gamma_3^C)$.
Using the previously introduced variables $U_{j'k'}$, the event $\Gamma_3$ occurs if $ \sum_{k'}U_{j'k'} \leq U_{\max}$ for every $j'$. Note that the sum $\sum_{k'}U_{j'k'}$ includes dependent random variables. An upper bound on the expectation of this sum reads as \[ E(\sum_{k'}U_{j'k'}) \leq E[ \sum_{k'}1(\sum_{k''\neq i}T_{j'k'k''}>0)1(E_{ik'}=1)] \leq n^2p^t_{\max}p^e_{\max} \leq U_{\max}. \] We also introduce the set $J$ that restricts the number of $U$-variables that another $U$-variable in the sum $ \sum_{k'}U_{j'k'}$ is dependent on: \[ J=\Big\{ \theta : \, \max_{\theta_1 \in J} \big|\{\theta_2 \in J : \theta_1 \text{ and } \theta_2 \text{ are dependent}\}\big| \leq W_{\max} \Big\}. \] Fix $i$ and $j'$ and define $I_{ij'}$ to be the collection of all random variables $U_{j'k'}$ that contribute to the event $TC_2$. Two random variables in the sum $\sum_{k'}U_{j'k'}$, say $U_{j'k'}$ and $U_{j'k''}$, are dependent (conditioned on $T^2E_{ijk}$) if and only if the triangle $T_{j'k'k''}$ from $G_t$ generates an edge for both the incidental triangles characterized by $U_{j'k'}$ and $U_{j'k''}$ (see Figure~\ref{secdep}(c)). The event $\Gamma_2$ essentially limits the frequency of such triangles $T_{j'k'k''}$: under the good event, the largest set $J$ as defined above is equal to the set $I_{ij'}$.
We therefore have $\max_{\beta \in J} \sum_{\alpha \in J: \alpha \sim \beta} U_{\alpha} \leq W_{\max}$, and for $t=\mu=U_{\max}$, \begin{align*} P(\max_J \sum_{\theta \in J}U_{\theta}\geq 2U_{\max}) & \leq \min \bigg \{\exp \left( -\frac{U_{\max}^2}{2W_{\max}(U_{\max} + U_{\max}/3)}\right), \left(1+\frac{U_{\max}}{2U_{\max}}\right)^{-U_{\max}/(2W_{\max})} \bigg \}\\ & = \min \bigg \{\exp \left(- \frac{U_{\max}}{ \frac{8}{3}W_{\max}}\right),\left(\tfrac{3}{2}\right)^{-U_{\max}/(2W_{\max})} \bigg \} \\ & \leq \exp(-c''' \log n) \leq n^{-c'''}, \end{align*} where the last inequality follows since if $W_{\max} =np^t_{\max}$, then $\frac{U_{\max}}{W_{\max}} \geq np^e_{\max}$ which, by assumption, is greater than $c_2 \log n$; and, if $W_{\max} =\log n$, then $\frac{U_{\max}}{W_{\max}} \geq \log n$. Combining the previous results we obtain \[ P((d_{T^2E})_i \geq 2 \Delta_{T^2E}) \leq P(\max_{J}\sum_{\theta \in J}T^2E_{\theta}\geq 2\Delta_{T^2E}) + P(\Gamma^C) \leq n^{-c'} +n^{-c''} +n^{-4c'''}+ n^{-c''}. \] Applying the union bound over all indices $i$, we obtain $\max_i (d_{T^2E})_i \leq c_1 \Delta_{T^2E}$ with probability at least $1-n^{-c''}$. Then, from Equation~(\ref{twonorm}), we arrive at the result claimed in the theorem. \end{proof} \subsubsection*{Proof of Theorem \ref{ATEE}} \begin{proof} For incidental triangles of type $TE^2$, the generating class is one triangle from $G_t$ and two edges from $G_e$. Consequently, we have \begin{align*} E[(d_{TE^2})_{i}] & =E[\sum_{j} \sum_{k} TE^2_{ijk}]\\ & \leq \sum_{j} \sum_{k} P(\sum_{k_1 \neq k}T_{ijk_1}>0)P(E_{jk}=1)P(E_{ik}=1) \\ & \leq \sum_{j} \sum_{k} np_{\max}^{t}(p^e_{\max})^2 \leq n^3p^{t}_{\max}(p^e_{\max})^2 \leq \Delta_{TE^2}. \end{align*} Next, let $I_i=\{(TE^2)_{ijk},\, j\in\{1,\ldots,n\},\,k\in\{1,\ldots,n\}\}$ denote the set of all incidentally generated triangles of type $TE^2$ including a vertex $i$.
Then, $(TE^2)_{\theta}$ indexed by $\theta=\{i,j,k\}$ is a family of indicator variables with each variable corresponding to an incidentally generated triangle of type $TE^2$. Two different random variables in the family $(TE^2)_{\theta}$ restricted to the set $I_i$ may be dependent in two ways. First, one of the edges $ij$ or $ik$ of the incidental triangle characterized by $TE^2_{ijk}$ may be an edge from $G_e$ and be an edge in the incidental triangle characterized by $(TE^2)_{ijk'}$ for some $k'$ (see Figure~\ref{dependence}(e)). Second, one of the edges $ij$ or $ik$ may have been created by a triangle from $G_t$, with the same triangle being involved in creating the incidental triangle characterized by $(TE^2)_{ij'k'}$ for some $j'$ and $k'$ (see Figure~\ref{dependence}(f)). Note that the second possibility also includes the case when the triangles characterized by $(TE^2)_{ijk}$ and $(TE^2)_{ijk'}$ share an edge $ij$ which is created by a triangle from $G_t$. We refer to these two events as $TC_1$ and $TC_2$, respectively. With regard to the event $TC_1$, define the following random variable \[K_{k'}=(1-T_{ijk'})1(\sum_{k''\neq i}T_{jk'k''}>0)1(E_{ik'}=1). \] Conditioned on $TE^2_{ijk}$, each $K_{k'}$ characterizes an incidentally generated triangle in $I_i$ and contributes to the event $TC_1$; for simplicity, we let $I_K$ stand for the set of all such variables $K_{k'}$. Then, for any $TE^2_{ijk}$, the number of additional incidentally generated triangles in $I_i$ contributing to the event $TC_1$ is at most $2 \sum_{k'}K_{k'}$ (Figure~\ref{dependence}(e)). With regard to the event $TC_2$, define the random variable \[S_{j'k'}=(1-T_{ij'k'})T_{ijj'}E_{ik'}E_{j'k'}. \] Conditioned on $TE^2_{ijk}$, each $S_{j'k'}$ characterizes an incidentally generated triangle in $I_i$ and leads to the event $TC_2$; for simplicity, we let $I_S$ stand for the set of all such variables $S_{j'k'}$.
Then, for any $TE^2_{ijk}$, the number of additional incidentally generated triangles in $I_i$ contributing to the event $TC_2$ is at most $\sum_{j'}\sum_{k'}S_{j'k'}$ (Figure~\ref{dependence}(f)). Define a ``good event" as $\Gamma = \Gamma_2 \cap \Gamma_3 \cap \Gamma_4$, where for any $i,j,k$, $\Gamma_2$, $\Gamma_3$ and $\Gamma_4$ may be described as follows: \begin{align*} \Gamma_2 & = \{ \text{The number of triangles in } G_t \text{ incident to an edge } ij \text{ is at most } 3W_{\max}=3\max\{np^t_{\max}, \log n \} \},\\ \Gamma_3 & = \{\text{For a side } ij\text{ there are at most } U_{\max}= \max \{n^2p^t_{\max}p^e_{\max},(\log n)^2\} \text{ vertices } k', \\ & \quad \text{ such that the side } ik' \text{ is an edge from } G_e \text{ and the side } jk' \text{ belongs to a triangle from } G_t\}, \\ \Gamma_4 & = \{\text{Two vertices } \{i,j\} \text{ have at most } 4\tau_{\max}= 4\max \{n(p^e_{\max})^2,\log n\} \text{ common neighbors } \{k'\}\}. \end{align*} We again apply Proposition \ref{prop1} to $(d_{TE^2})_i$ under the good event $\Gamma$ to obtain an upper bound on $P(\Gamma^C)$. Under the event $\Gamma_3$, it holds that $2\sum_{k'}K_{k'} \leq 2U_{\max},$ which in turn implies that the number of $TE^2_{\alpha}$ in $I_i$ that depend on $TE^2_{ijk}$ according to the event $TC_1$ is limited to $2U_{\max}$. Furthermore, under the events $\Gamma_2$ and $\Gamma_4$, we have $ \sum_{j'}\sum_{k'}S_{j'k'} \leq 12\tau_{\max}W_{\max},$ which implies that the number of $TE^2_{\alpha}$ in $I_i$ that depend on $TE^2_{ijk}$ according to the event $TC_2$ is limited to $12\tau_{\max}W_{\max}$. Now, define a set $J \subset I_i$ as follows: \[ J=\Big\{ \theta : \, \max_{\theta_1 \in J} \big|\{\theta_2 \in J : \theta_1 \text{ and } \theta_2 \text{ are dependent}\}\big| \leq C \Big\}, \] where $C$ may be found according to \[\max_{\beta \in J}\sum_{\theta \in J: \theta \sim \beta} TE^2_{\theta} \leq 2U_{\max} + 12\tau_{\max}W_{\max} \leq 14 \max \{n^2p^t_{\max}p^e_{\max},np^t_{\max}\log n, n(p^e_{\max})^2 \log n, (\log n )^2 \} = C.
\] Then, for $t=\mu=\Delta_{TE^2}$, \begin{align*} P(\max_J \sum_{\theta \in J}TE^2_{\theta}\geq 2\Delta_{TE^2}) & \leq \min \bigg \{\exp \left( -\frac{\Delta_{TE^2}^2}{2C(\Delta_{TE^2} + \Delta_{TE^2}/3)}\right), \left(1+\frac{\Delta_{TE^2}}{2\Delta_{TE^2}}\right)^{-2\Delta_{TE^2}/2C} \bigg \}\\ & = \min \bigg \{\exp \left(- \frac{\Delta_{TE^2}}{ \frac{8}{3}C}\right),\left(\tfrac{3}{2}\right)^{-\Delta_{TE^2}/C} \bigg \} \\ & \leq \exp(-c' \log n) \leq n^{-c'}, \end{align*} where the last inequality follows since if $C =n^2p^t_{\max}p^e_{\max}$, then $\frac{\Delta_{TE^2}}{C} \geq np^e_{\max},$ which, by assumption, is greater than $c_2 \log n$; if $C=np^t_{\max}\log n$, then $\frac{\Delta_{TE^2}}{C} \geq \frac{(np^e_{\max})^2}{\log n} \geq \log n$; and if $C=n(p^e_{\max})^2\log n$, then $\frac{\Delta_{TE^2}}{C} \geq \frac{n^2p^t_{\max}}{\log n} \geq \log n$. Finally, if $C=(\log n )^2,$ then $\frac{\Delta_{TE^2}}{C} \geq \log n$. We bounded the probability $P(\Gamma_3^C)$ in the previous proof, while a bound on $P(\Gamma_4^C)$ is given in Lemma~\ref{TElight}. Combining the expressions for all previously evaluated bounds, we obtain \[ P((d_{TE^2})_i \geq 2 \Delta_{TE^2}) \leq P(\max_J \sum_{\theta \in J}TE^2_{\theta} \geq 2\Delta_{TE^2}) + P(\Gamma^C) \leq n^{-c'} +n^{-c''} +n^{-4c'''}+ n^{-c''}. \] Taking the union bound over all $i$, we can show that $\max_i (d_{TE^2})_i \leq c_1 \Delta_{TE^2}$ holds with probability at least $1-n^{-c''}$. The claimed result then follows from Equation~(\ref{twonorm}).
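The expectation bound at the start of this proof counts, for each pair $(j,k)$, whether the side $ij$ can be covered by a triangle of $G_t$ while $jk$ and $ik$ are edges of $G_e$. A brute-force sketch of this count (our own illustrative helper; the paper works with indicator variables rather than explicit loops):

```python
import itertools

def te2_degree(i, tri_edges, ge_edges, n):
    # Count ordered pairs (j, k) forming an incidental triangle of type
    # TE^2 at vertex i: side ij is covered by a triangle of G_t, while
    # sides jk and ik are edges of G_e.
    def e(a, b):  # canonical (sorted) edge key
        return (min(a, b), max(a, b))
    count = 0
    for j, k in itertools.permutations(range(n), 2):
        if i in (j, k):
            continue
        if e(i, j) in tri_edges and e(j, k) in ge_edges and e(i, k) in ge_edges:
            count += 1
    return count

# Tiny deterministic example: one triangle-covered side (0,1) and
# G_e edges (1,2), (0,2) yield exactly one TE^2 triangle at vertex 0.
print(te2_degree(0, {(0, 1)}, {(1, 2), (0, 2)}, 4))  # -> 1
```

Summing the corresponding probabilities over the $O(n^2)$ pairs $(j,k)$ is what produces the $n^3p^t_{\max}(p^e_{\max})^2$ bound above.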
\end{proof} \subsubsection*{Proof of Theorem \ref{AT}} \begin{proof} We note that under the given assumptions on $p^e_{\max}$ and $p^t_{\max}$ we have the following results: \[ \sqrt{D_{E^3}} = \max \{n^{3/2} (p^e_{\max})^{5/2},\,n^{1/2}(p^e_{\max})^{1/2}(\log n)^{3/2}\} \leq \max \{n^{-\frac{5}{2}\epsilon},\sqrt{\Delta_t}\} = \sqrt{\Delta_{t}}, \] \begin{align*} \Delta_{T^3} = \max \{n^{5} (p^t_{\max})^{3},(\log n)^{4}\} &\leq \max \{\sqrt{\Delta_t}n^4(p^t_{\max})^{5/2},\sqrt{\Delta_t}\} \\ &\leq \max \{\sqrt{\Delta_t}n^{-\frac{5}{2}\epsilon},\sqrt{\Delta_t}\} = \sqrt{\Delta_{t}}, \end{align*} \begin{align*} \Delta_{T^2E} = \max \{n^4 (p^t_{\max})^2p^e_{\max},(\log n)^4\} &\leq \max \{\sqrt{\Delta_t}n^3(p^t_{\max})^{3/2}p^e_{\max},\sqrt{\Delta_t}\} \\ & \leq \max \{\sqrt{\Delta_t}n^{-\frac{5}{2}\epsilon},\sqrt{\Delta_t}\} = \sqrt{\Delta_{t}}, \end{align*} \begin{align*} \Delta_{TE^2} = \max \{n^3 (p^t_{\max})(p^e_{\max})^2,(\log n)^{4}\} &\leq \max \{\sqrt{\Delta_t}n^2(p^t_{\max})^{1/2}(p^e_{\max})^2,\sqrt{\Delta_t}\} \\ & \leq \max \{\sqrt{\Delta_t}n^{-\frac{5}{2}\epsilon},\sqrt{\Delta_t}\} = \sqrt{\Delta_{t}}. \end{align*} Consequently, \begin{align*} \|A_T - E[A_T]\|_2 & \leq \|A_{T^2}-E[A_{T^2}]\|_2 + \|A_{E^3}-E[A_{E^3}]\|_2 + \|A_{T^3}-E[A_{T^3}]\|_2\\ & \quad \quad + \|A_{T^2E}-E[A_{T^2E}]\|_2 + \|A_{TE^2}-E[A_{TE^2}]\|_2 \\ & \leq c(\sqrt{\Delta_{t}} + \sqrt{D_{E^3}} + \Delta_{T^3} + \Delta_{T^2E} + \Delta_{TE^2})\\ & \leq \tilde{c} \sqrt{\Delta_{t}}, \end{align*} where $c$ is the maximum of all constants used for bounding the individual matrix terms, and $\tilde{c}$ is another constant that may be easily computed from the previous inequalities. \end{proof} \subsubsection*{Proof of Theorem \ref{RT}} \begin{proof} First, note that $E[A_T]=E[A_{T^2}]+E[A_{E^3}]+E[A_{T^2E}]+E[A_{T^3}]+E[A_{TE^2}],$ and all matrices in the sum under the SupSBM model may be written in the form $C((g-h)I_k + h1_k1_k^T)C^T$.
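For balanced communities, matrices of this form (writing the rank-one coefficient as $h$) have smallest nonzero eigenvalue $\frac{n}{k}(g-h)$, which the argument below relies on. This can be sanity-checked numerically; a minimal sketch with illustrative values of $g$ and $h$, not taken from the paper:

```python
import numpy as np

# Sketch: for balanced membership C (n nodes, k equal blocks) and
# B = (g-h)I + h 11^T, the nonzero eigenvalues of P = C B C^T are
# (n/k)(g-h) with multiplicity k-1, and (n/k)(g+(k-1)h).
n, k, g, h = 6, 3, 0.9, 0.2
C = np.kron(np.eye(k), np.ones((n // k, 1)))   # balanced membership matrix
B = (g - h) * np.eye(k) + h * np.ones((k, k))
P = C @ B @ C.T
eig = np.linalg.eigvalsh(P)
nonzero = np.sort(eig[np.abs(eig) > 1e-9])
print(nonzero[0], (n / k) * (g - h))           # smallest nonzero eigenvalue
```

The rank of $P$ is $k$, so "smallest eigenvalue" here refers to the smallest eigenvalue restricted to the $k$-dimensional community subspace.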
Consequently, $E[A_T]$ can also be written in the form $C((g-h)I_k + h1_k1_k^T)C^T$. Then, we have $\lambda_{\min}(E[A_T])=\frac{n}{k}(g-h)$ for some $g$ and $h$. Now note that the $(g-h)$ term in $E[A_T]$ is the sum of the corresponding $(g-h)$ terms in the component matrices, all of which are positive due to the community structure of the SupSBM. Hence, the $(g-h)$ term of $E[A_T]$ is greater than the $(g-h)$ term of $E[A_{T^2}]$, so that $\lambda_{\min}(E[A_T]) \geq \lambda_{\min}(E[A_{T^2}])$. This implies that we can replace $\lambda_{\min}(E[A_T])$ with $\lambda_{\min}(E[A_{T^2}])$ in the upper bound of Equation~(\ref{misclus}). We have already computed $\lambda_{\min}(E[A_{T^2}])$ in Equation~(\ref{lambdaPTT}) and the numerator of Equation~(\ref{misclus}) has been upper bounded in Theorem~\ref{AT}. Combining these bounds, we arrive at the claimed result. \end{proof} \subsubsection*{Proof of Theorem \ref{RTT}} \begin{proof} The first inequality is a result of Equation~(\ref{misclus}), which relates the misclustering rate $R_{T^2}$ with $\|A_{T^2}-E[A_{T^2}]\|_2$ and $\lambda_{\min}(E[A_{T^2}])$ through the Davis-Kahan Theorem. The second inequality is obtained by replacing the numerator with the bound from Theorem~\ref{ATT} and the denominator with the result computed in Equation~(\ref{lambdaPTT}). \end{proof} \subsubsection*{Proof of Theorem \ref{tradeoff}} \begin{proof} We have the following asymptotic relationship between the two error rates: \begin{align*} \frac{a_t}{n(a_t-b_t)^2} \asymp \frac{a_e/\delta}{\frac{nm^2(a_e-b_e)^2}{\delta^2}} \asymp \frac{\delta}{m^2n}\frac{a_e}{(a_e-b_e)^2}. \end{align*} Hence, the error rate obtained by using the information about edges is $\frac{\delta}{m^2n}$ times that of using triangles. Consequently, the error rate is lower for triangle hyperedges if $\frac{\delta}{m^2n} \lesssim 1$ and higher otherwise.
\end{proof} \subsubsection*{Proof of Theorem \ref{REEE}} \begin{proof} The first inequality follows from Equation~(\ref{misclus}) that relates the misclustering rate with $\|A_{E^3}-E[A_{E^3}]\|_2$ and $\lambda_{\min}(E[A_{E^3}])$ through the Davis-Kahan Theorem. The second inequality is obtained by replacing the numerator with the bound from Theorem~\ref{ATE} and the denominator with the result in Equation (\ref{lambdaPTE}). \end{proof} \section*{Appendix B} \subsubsection*{Proof of Lemma \ref{lightpairs}.} \begin{proof} Define $u_{ij}=x_iy_j1((i,j)\in L) +x_jy_i1((j,i)\in L)$ for all $i,j=1,\dots,n$. Then, \[ \sum_{(i,j) \in L} \sum_{k \neq (i,j)} x_{i}y_{j}(T_{ijk}-E[T_{ijk}])=\sum_{i<j} \sum_{k \neq (i,j)} (T_{ijk}-p_{ijk})u_{ij}. \] Note that each term in the above sum is a zero-mean random variable bounded in absolute value, $|(T_{ijk}-p_{ijk})u_{ij}|\leq 2\sqrt{\Delta_t}/n$. By applying Bernstein's inequality we have \begin{align*} P \left(|\sum_{i<j} \sum_{k \neq (i,j)} (T_{ijk}-p_{ijk})u_{ij}| \geq c_2\sqrt{\Delta_t}\right) & \leq 2 \exp \left(-\frac{\frac{1}{2}c_2^2\Delta_t}{\sum\limits_{i<j} \sum\limits_{k \neq (i,j)} p_{ijk}(1-p_{ijk})u_{ij}^2 + \frac{1}{3}2\frac{\sqrt{\Delta_t}}{n}c_2\sqrt{\Delta_t}}\right)\\ & \leq 2 \exp \left(-\frac{\frac{1}{2}c_2^2\Delta_t}{\max_{i,j} (\sum_{k\neq (i,j)} p_{ijk}) \sum u_{ij}^2 + \frac{2}{3}c_2\frac{\Delta_t}{n}}\right) \\ & \leq 2 \exp \left(-\frac{\frac{1}{2}c_2^2\Delta_t}{ \frac{\Delta_t}{n}(2 + \frac{2c_2}{3})}\right) \\ & \leq 2 \exp(-\frac{c_2^2}{4+\frac{4c_2}{3}}n), \end{align*} where the third inequality follows as a consequence of two observations.
First, since $\Delta_t \geq n^2 \max_{i,j,k} p_{ijk}$, we have $$\max_{i,j} (\sum_{k\neq (i,j)} p_{ijk}) \leq n \max_{i,j,k} p_{ijk} \leq \frac{\Delta_t}{n}.$$ Second, $$\sum_{i,j}u_{ij}^2 \leq 2 \sum_{i,j} (x_i^2y_j^2 ) \leq 2 \|x\|_2^2\|y\|_2^2 \leq 2.$$ From Lemma 5 in~\cite{vershynin2010introduction} regarding the covering number of a sphere, we have $|\mathcal{N}| \leq \exp (n \log 5)$. Hence, taking the union bound over all possible $x$ and $y$ we obtain \[ P\left(\sup_{x,y \in \mathcal{N}} |\sum_{(i,j) \in L} \sum_{k \neq (i,j)} x_{i}y_{j}(T_{ijk}-E[T_{ijk}])| \geq c_2\sqrt{\Delta_t}\right) \leq \exp \left( \left (-\frac{c_2^2}{4+\frac{4c_2}{3}}+ \log 5 \right)n \right). \] The claimed result now follows by selecting a sufficiently large constant $c_2$ and setting $r_1=\frac{c_2^2}{4+\frac{4c_2}{3}}- \log 5>0$. \end{proof} \subsubsection*{Proof of Lemma \ref{heavypairs}.} \begin{proof} We first address the subset of heavy pairs $H_1= \{(i,j) \in H: x_i >0, y_j>0\}$. The other cases may be analyzed similarly. Define the following two families of sets: \[ I_1=\{\frac{2^{-1}}{\sqrt{n}} \leq x_i \leq \frac{1}{\sqrt{n}} \}, \quad I_s=\{\frac{2^{s-1}}{2\sqrt{n}} < x_i \leq \frac{2^{s}}{2\sqrt{n}}\}, \,s=2,3,\ldots, \lceil \log_2 2\sqrt{n} \rceil, \] \[ J_1=\{\frac{2^{-1}}{\sqrt{n}} \leq y_i \leq \frac{1}{\sqrt{n}} \}, \quad J_t=\{\frac{2^{t-1}}{2\sqrt{n}} < y_i \leq \frac{2^{t}}{2\sqrt{n}}\}, \, t=2,3,\ldots, \lceil \log_2 2\sqrt{n} \rceil.
\] Next, for two arbitrary sets $I$ and $J$ of vertices, also define \[ e(I,J)=\begin{cases} \sum_{i \in I} \sum_{j \in J} \sum_{k \neq (i,j)} T_{ijk} & I \cap J =\emptyset, \\ \sum_{(i,j) \in I \times J \backslash (I \cap J)^2} \sum_{k \neq (i,j)} T_{ijk} + \sum_{(i,j) \in (I \cap J)^2, i<j} \sum_{k \neq (i,j)} T_{ijk} & I \cap J \neq \emptyset, \end{cases} \] \[ \mu(I,J)=E[e(I,J)], \quad \bar{\mu}=|I||J| n\max_{i,j,k} p_{ijk} \leq |I||J| \frac{\Delta_t}{n}. \] Finally, let $\bar{\mu}_{st}=\bar{\mu}(I_s,J_t)$, $\lambda_{st} =e(I_s,J_t)/\bar{\mu}_{st}$, $\alpha_s =|I_s|2^{2s}/n$, $\beta_t=|J_t|2^{2t}/n$, and $\sigma_{st}=\lambda_{st}\sqrt{\Delta_t}2^{-(s+t)}$. We have the following two results establishing relationships between the previously introduced entities. \begin{lem} Let $d_{t,i} = \sum_{j} \sum_{k \neq i,j} T_{ijk}$ denote the triangle-degree of vertex $i$. Then, for all $i$, and a constant $r_3>0$, there exists a constant $c_4(r_3)>0 $ such that $d_{t,i} \leq c_4\Delta_t$ with probability at least $1-n^{-r_3}$. \label{bounded degree} \end{lem} \begin{lem} For a constant $r_4>0$, there exist constants $c_5(r_4), c_6(r_4)>1$ such that for any pair of vertex sets $I,J \subseteq \{1,\ldots,n\}$ such that $|I|\leq |J|$, with probability at least $1-2n^{-r_4}$, at least one of the following statements holds: \vspace{5pt} (a) $ \frac{e(I,J)}{\bar{\mu} (I,J)} \leq e\, c_5,$ (b) $ e(I,J) \log \frac{e(I,J)}{\bar{\mu} (I,J)} \leq c_6\, |J| \, \log \frac{n}{|J|}.$ \label{twostatement} \end{lem} Now, we use the result of the two previous lemmas to complete the proof of the claimed result for the heavy pairs. We note \begin{align*} \sum_{(i,j) \in H_1} x_i y_j \sum_{k \neq (i,j)} T_{ijk} \leq 2 \sum_{(s,t): 2^{(s+t)}\geq \sqrt{\Delta_t}} e(I_s,J_t)\frac{2^s}{2\sqrt{n}}\frac{2^t}{2\sqrt{n}} \leq \frac{\sqrt{\Delta_t}}{2} \sum_{(s,t): 2^{(s+t)}\geq \sqrt{\Delta_t}} \alpha_s \beta_t \sigma_{st}.
\end{align*} We would like to bound the right-hand-side of the inequality by a constant multiple of $\sqrt{\Delta_t}$. To this end, first note the following two facts: \[ \sum_{s}\alpha_s \leq 16\sum_i x_i^2 \leq 16, \quad \sum_{t}\beta_{t} \leq 16. \] Following the approach of~\cite{lei2015consistency} and~\cite{chin2015stochastic}, we split the set of pairs $C: \{(s,t): 2^{(s+t)}\geq \sqrt{\Delta_t}, |I_s| \leq |J_t| \}$ into six parts and show that the contribution of each part is bounded. \begin{itemize} \item $C_1: \{(s,t) \in C, \sigma_{st} \leq 1\}$: \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_1 \} \leq \sum_{s,t}\alpha_s \beta_t \leq 256. \] \item $C_2: \{(s,t) \in C\backslash C_1, \lambda_{st} \leq e\, c_5\}$: \\ Since \[ \sigma_{st}=\lambda_{st}\sqrt{\Delta_t}2^{-(s+t)} \leq \lambda_{st} \leq e\, c_5, \] consequently \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_2 \} \leq e \, c_5 \sum_{s,t}\alpha_s \beta_t \leq 256\, e\, c_5. \] \item $C_3: \{(s,t) \in C\backslash (C_1 \cup C_2), 2^{s-t} \geq \sqrt{\Delta_t}\}$:\\ By Lemma~\ref{bounded degree}, $e(I_s,J_t)\leq c_4 |I_s|\Delta_t$. Hence, $$\lambda_{st} =e(I_s,J_t)/\bar{\mu}_{st} \leq c_4\frac{|I_s|\Delta_t}{|I_s||J_t|\Delta_t/n} \leq c_4\frac{n}{|J_t|},$$ and consequently, $$\sigma_{st} \leq c_4\sqrt{\Delta_t}2^{-(s+t)}\frac{n}{|J_t|} \leq c_42^{-2t}\frac{n}{|J_t|},$$ for $(s,t) \in C_3$. Then, \begin{align*} \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_3 \} & \leq \sum_s \alpha_s \sum_t \beta_t c_4\, 2^{-2t}\frac{n}{|J_t|} \\ & \leq \sum_s \alpha_s \sum_t 2^{2t}\frac{|J_t|}{n} c_4\, 2^{-2t}\frac{n}{|J_t|} \leq c_4\, \sum_s \alpha_s \leq 16\, c_4.
\end{align*} \item $C_4: \{(s,t) \in C\backslash (C_1 \cup C_2 \cup C_3), \log \lambda_{st} > \frac{1}{4}[2t \log 2 + \log (1/\beta_t)] \}$:\\ From part (b) of Lemma \ref{twostatement}, we have \[ \lambda_{st} \log \lambda_{st} \frac{|I_s| |J_t| \Delta_t}{n} \leq \frac{e(I_s,J_t)}{\bar{\mu}(I_s,J_t)} \log \frac{e(I_s,J_t)}{\bar{\mu}(I_s,J_t)} \bar{\mu}(I_s,J_t) \leq c_6\, |J_t| \log \frac{n}{|J_t|}, \] which is equivalent to \[ \sigma_{st} \alpha_s \leq c_6 \frac{1}{\log \lambda_{st} }\frac{2^{s-t}}{\sqrt{\Delta_t}}\{2t \log 2 + \log (1/\beta_t)\} \leq 4\,c_6 \frac{2^{s-t}}{\sqrt{\Delta_t}}. \] Then, \begin{align*} \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_4 \} & = \sum_t \beta_t \sum_s \sigma_{st} \alpha_s 1\{(s,t) \in C_4 \} \\ & \leq 4\, c_6 \sum_t \beta_t \sum_s \frac{2^{s-t}}{\sqrt{\Delta_t}} 1\{(s,t) \in C_4 \} \leq 8\, c_6 \sum_t \beta_t \leq 128\, c_6. \end{align*} \item $C_5: \{(s,t) \in C\backslash (C_1 \cup C_2 \cup C_3 \cup C_4), 2t \log 2 \geq \log (1/\beta_t) \}$:\\ First, note that since $(s,t) \notin C_4$, we have $\log \lambda_{st} \leq \frac{1}{4}[2t \log 2 + \log (1/\beta_t)] \leq t \log 2$ and hence $\lambda_{st} \leq 2^t$. Next, $\sigma_{st} =\lambda_{st} \sqrt{\Delta_t}2^{-(s+t)} \leq 2^{-s}\sqrt{\Delta_t},$ and hence $\sigma_{st}\alpha_s \leq c_6 \frac{2^{s-t}}{\sqrt{\Delta_t}}\, 4t \log 2$. Therefore, \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_5 \} \leq \sum_t \beta_t \sum_s c_6 \frac{2^{s-t}}{\sqrt{\Delta_t}} 4t \log 2 \leq 2\, c_6 \log 2 \sum_t \beta_t \leq 32\, c_6. \] \item $C_6: \{(s,t) \in C\backslash (C_1 \cup C_2 \cup C_3 \cup C_4 \cup C_5) \}$:\\ Since $2t\log 2 < \log (1/\beta_t)$, we have $\log \lambda_{st} \leq t \log 2 \leq \log (1/\beta_t) /2$. This observation, along with the fact $\lambda_{st} \geq 1$, implies that $\lambda_{st} \leq 1/\beta_t$.
As a result, \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_6 \} \leq \sum_s \alpha_s \sum_t 2^{-(s+t)}\sqrt{\Delta_t}\, 1\{(s,t) \in C_6 \} \leq 2\sum_s \alpha_s \leq 32. \] \end{itemize} In a similar fashion, the set of pairs $C: \{(s,t): 2^{(s+t)}\geq \sqrt{\Delta_t}, |I_s| > |J_t| \}$ is split into six categories in order to bound $\sum_{(s,t)} \alpha_s \beta_t \sigma_{st}$. The derivations are omitted. Collecting all the previously obtained terms, we arrive at the claimed result for heavy pairs: for some constant $r_2>0$, there exists a constant $c_3(r_2)>0$ such that with probability at least $1-2n^{-r_2}$, one has \[ \sum_{(i,j) \in H} \sum_{k} x_{i}y_{j}T_{ijk} \leq c_3 \sqrt{\Delta_t}. \] \end{proof} \subsubsection*{Proof of Lemma \ref{TElight}} \begin{proof} As before, define $u_{ij}=x_iy_j1((i,j)\in L) +x_jy_i1((j,i)\in L)$ for all $i,j=1,\dots,n$. Then, \[ \sum_{(i,j) \in L} x_{i}y_{j}(A_{E^3}- E[A_{E^3}])_{ij}=\sum_{i<j} \sum_{k \neq (i,j)} (E_{ij}E_{jk}E_{ik} - p^e_{ij}p^e_{jk}p^e_{ik})u_{ij}. \] To analyze the above sum, we use the \emph{typical bounded differences inequality} of Theorem 1.3 in~\cite{warnke2016method}. For this purpose, define $$f(E) = \sum_{i<j} \sum_{k \neq (i,j)} E_{ij}E_{jk}E_{ik} u_{ij}.$$ Clearly, $f$ is a low-order polynomial of independent random variables. In particular, since $E_{ij}$ are independent Bernoulli random variables with parameters $p^e_{ij}$, we have $$E[f(E)]=\sum_{i<j} \sum_{k \neq (i,j)}p^e_{ij}p^e_{jk}p^e_{ik}u_{ij}.$$ Let $\tau_{ij}$ be the number of common neighbors of the vertices $i$ and $j$, i.e., $\tau_{ij}=\sum_{k \neq i,j}E_{ik}E_{jk}$. Then $\tau_{ij}$ is a sum of $n-2$ independent Bernoulli random variables with parameters $p^e_{ik}p^e_{jk} \leq (p^e_{\max})^2$.
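The common-neighbor counts $\tau_{ij}$ are exactly the off-diagonal entries of the squared adjacency matrix, which makes the good-event condition below easy to check numerically. A minimal sketch with illustrative parameters (not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.05
U = rng.random((n, n)) < p
A = np.triu(U, 1).astype(int)
A = A + A.T                           # symmetric adjacency, zero diagonal

tau = A @ A                           # tau_ij = # common neighbors of i, j
np.fill_diagonal(tau, 0)
tau_max = max(n * p * p, np.log(n))   # as in the definition of tau_max
print(tau.max(), 4 * tau_max)         # good event: max_ij tau_ij <= 4 tau_max
```

Since the diagonal of $A$ is zero, the matrix product automatically excludes $k \in \{i,j\}$ from the count.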
Next, define a ``good set" $\Gamma$ under which the contribution of any single random variable to the function is limited with high probability as follows: \[ \Gamma=\{(E_{ij}): \max_{ij} \tau_{ij} \leq 4\tau_{\max}\}, \] asserting that every pair of vertices $i,j$ has at most $4\tau_{\max}$ common neighbors. From Bernstein's inequality we have \begin{align*} P(E \notin \Gamma) & \leq n^2 P (\tau_{ij} > 4\tau_{\max}) \\ & \leq n^2 P \left(\sum_{k \neq {i,j}}(E_{ik}E_{jk} -p^e_{ik}p^e_{jk}) > 3\tau_{\max}\right) \\ & \leq n^2 \exp \left(-\frac{9\tau^2_{\max} }{2\sum_{k}p^e_{ik}p^e_{jk}(1-p^e_{ik}p^e_{jk})+\frac{3}{3}\tau_{\max}}\right) \\ & \leq n^2 \exp(-\frac{9\tau^2_{\max}}{2\tau_{\max}+\tau_{\max}}) \\ &\leq n^2 \exp(-\frac{9}{3}\tau_{\max}) \leq \exp(-\log n), \end{align*} where the last inequality holds since $\tau_{\max}\geq \log n$ by definition. Observe that the good event $\Gamma$ does not depend on the particular values of $x,y$. Next, we determine the typical Lipschitz (TL) condition for the function $f(E)$. Changing one element in the sequence $(E_{11},E_{12},\ldots,E_{nn})$ (say, $E_{ij}$) from $1$ to $0$ may have two different types of effects on $f(E)$. The effect may be ``large" on the term $(A_{E^3})_{ij}u_{ij}= \sum_{k \neq (i,j)} E_{ij}E_{jk}E_{ik} u_{ij}$, which is upper bounded by $\sum_{k \neq (i,j)} E_{jk}E_{ik} u_{ij}$. Or, the effects may be ``small" for the terms of the form $(A_{E^3})_{ik}u_{ik}$ and $(A_{E^3})_{jk}u_{jk}$ (i.e., the terms that involve the entries of the matrix $A_{E^3}$ which represent neighboring edges of $(i,j)$). These ``small'' effects are upper bounded by $u_{ik}$ and $u_{jk}$, respectively. Under the good event $\Gamma$, we have $\sum_{k \neq i,j} E_{ik}E_{jk} \leq 4\tau_{\max}$, and consequently, the ``large'' effect of changing $E_{ij}$ from 1 to 0 is bounded by $4\tau_{\max}u_{ij}$.
Moreover, under the good event $\Gamma$, there are at most $4\tau_{\max}$ ``small'' effects, since for a ``small'' effect to occur, both $E_{ik}$ and $E_{jk}$ must equal 1, i.e., $k$ must be a common neighbor of $i$ and $j$. However, an additional complication is that the effects contribute differently to the bound, depending upon which common neighbor $k$ we are looking at. To mitigate this problem, we first lump the ``small'' effects together into $\sum_{k:\, E_{ik}E_{jk}=1}u_{ik}$. Combining the ``large'' and ``small'' effects, under the good event, $$c_{ij}=4\tau_{\max}u_{ij}+\sum_{k:\, E_{ik}E_{jk}=1,\, E\in \Gamma}u_{ik}$$ emerges as an upper bound on the total effect of changing one $E_{ij}$ in $f(E)$. For the case that the bad event occurs instead, an upper bound on the effect of the change is $d_{ij}=2nu_{ij}+\sum_{k:\, E_{ik}E_{jk}=1}u_{ik}$. Now, let $\gamma_{ij}=\frac{1}{n},$ for all $i,j$. Then $e_{ij}=o(c_{ij})$, and \[ C=\max_{ij}c_{ij}=4\tau_{\max}\frac{\sqrt{D_{E^3}}}{n\tau_{\max}} + 4\tau_{\max} \frac{\sqrt{D_{E^3}}}{n\tau_{\max}} = 8\tau_{\max}\frac{\sqrt{D_{E^3}}}{n\tau_{\max}} =8\frac{\sqrt{D_{E^3}}}{n}. \] Next, we need to compute an upper bound on $\sum_{ij}c_{ij}^2$. This can be done as follows \begin{align*} c_{ij}^2 & \leq 2(16\tau^2_{\max}u^2_{ij} + (\sum_{k:\, E_{ik}E_{jk}=1,\, E \in \Gamma}u_{ik})^2), \quad \quad \quad \quad \text{(since $(a+b)^2 \leq 2(a^2+b^2)$)} \\ & \leq 2(16\tau^2_{\max}u^2_{ij} + (\sum_{k:\, E_{ik}E_{jk}=1,\, E \in \Gamma} 1) (\sum_{k:\, E_{ik}E_{jk}=1,\, E \in \Gamma} u^2_{ik})) \quad \quad \text{(due to the Cauchy-Schwarz inequality)}\\ & \leq 2(16\tau^2_{\max}u^2_{ij} + 4\tau_{\max} (\sum_{k:\, E_{ik}E_{jk}=1,\, E \in \Gamma} u^2_{ik})). \end{align*} Within the sum of $c_{ij}^2$ over all $i,j$, each term $u_{ik}^2$ appears at most $4\tau_{\max}$ times under the good event. This implies that $\sum_{ij}c^2_{ij} \leq 64 \tau^2_{\max} \sum_{ij}u_{ij}^2$.
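The large/small effect decomposition above can be verified directly on small graphs: flipping one edge $ab$ changes $f$ by exactly the common-neighbor term through $u_{ab}$ plus the two "small" terms per common neighbor. A brute-force sketch (our own illustrative setup, with constant weights standing in for $u_{ij}$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 0.2
W = rng.random((n, n)) < p
A = np.triu(W, 1).astype(int); A = A + A.T
u = np.full((n, n), 1.0 / n)          # stand-in for the weights u_ij

def f(M, u):
    # f(E) = sum_{i<j} E_ij (sum_k E_ik E_jk) u_ij, evaluated brute-force.
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if M[i, j]:
                total += int(M[i] @ M[j]) * u[i, j]
    return total

# Flip one edge and compare the change in f with the effect bound:
# common-neighbor count through u_ab ("large") plus u_ak + u_bk for
# each common neighbor k ("small").
a, b = 0, 1
B = A.copy(); B[a, b] = B[b, a] = 1 - B[a, b]
common = int(A[a] @ A[b])
change = abs(f(B, u) - f(A, u))
bound = common * u[a, b] + sum(u[a, k] + u[b, k]
                               for k in range(n) if A[a, k] and A[b, k])
print(change <= bound + 1e-12)  # -> True
```

Every triple affected by the flip must contain both $a$ and $b$, which is why the bound only involves their common neighbors.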
In the notations of Theorem 1.3 of~\cite{warnke2016method}, define $\gamma_{ij} =\frac{1}{n}$ for all $ij$ and the event $\mathcal{B}(\Gamma,\{\gamma_{ij}\})$, such that $\mathcal{B}^C \subset \Gamma$ and \[ P(\mathcal{B}^{C}) \leq P(E \notin \Gamma) \sum_{i,j}\gamma_{ij}^{-1}= n^3 P(E \notin \Gamma) \leq \exp(-(c'-3)\log n). \] Then using Theorem 1.3 of~\cite{warnke2016method}, we have \begin{align*} P(\{{f(E)-E[f(E)] \geq \sqrt{D_{E^3}}\}} \, \cap \,\mathcal{B}^{C}) & \leq \exp(- \frac{D_{E^3}}{2 \sum_{ij}p^e_{ij}(1-p^e_{ij})(c_{ij} + e_{ij})^2 + 2C\sqrt{D_{E^3}}/3}) \\ & \leq \exp(- \frac{D_{E^3}}{2 \sum_{ij}p^e_{\max}(64\tau^2_{\max})u^2_{ij} + \frac{16}{3}\frac{\sqrt{D_{E^3}}}{n}\sqrt{D_{E^3}}}) \\ & \leq \exp(- \frac{D_{E^3}}{256\,p^e_{\max}\tau_{\max}^2 + \frac{16}{3}\frac{D_{E^3}}{n}}) \\ & \leq \exp(- \frac{D_{E^3}}{(256 + 16/3)\frac{D_{E^3}}{n}}) \leq \exp(-cn), \end{align*} where the third inequality uses $\sum_{ij}u_{ij}^2 \leq 2$ and the penultimate inequality follows since $D_{E^3}=np^e_{\max}\tau_{\max}^2$. Clearly, the event $\{E \notin \Gamma\}$ does not depend on the choice of the vectors $x,y$. Hence, taking the supremum over all $x$ and $y$, we have \[ P\left(\sup_{x,y \in \mathcal{N}} \Big|\sum_{(i,j) \in L} x_iy_j(A_{E^3}-E[A_{E^3}])_{ij}\Big|\geq \sqrt{D_{E^3}}\right) \leq \exp(-(c-\log 5)n) + \exp(-(c'-3)\log n). \] This completes the proof. \end{proof} \subsubsection*{Proof of Lemma \ref{TEheavy1}} \begin{proof} As before, we first focus on the subset of heavy pairs, $H_1=\{(i,j) \in H: x_i >0, y_j>0\}$; the other two cases follow similarly. The vertex sets $I_1,\ldots,I_{\lceil\log_2 2\sqrt{n}\rceil}, J_1, \ldots, J_{\lceil\log_2 2\sqrt{n} \rceil}$ are defined as before.
In addition, we write \[ e(I,J)=\begin{cases} \sum_{i \in I} \sum_{j \in J}(A_{E^3})_{ij} & I \cap J =\emptyset \\ \sum_{(i,j) \in I \times J \backslash (I \cap J)^2} (A_{E^3})_{ij} + \sum_{(i,j) \in (I \cap J)^2, i<j} (A_{E^3})_{ij} & I \cap J \neq \emptyset \end{cases}, \] \[ \mu(I,J)=E[e(I,J)], \quad \bar{\mu}=|I||J| n(p^e_{\max})^3, \] \[ \bar{\mu}_{st}=\bar{\mu}(I_s,J_t), \quad \lambda_{st} =e(I_s,J_t)/\bar{\mu}_{st}, \quad \alpha_s =|I_s|2^{2s}/n, \quad \beta_t=|J_t|2^{2t}/n, \text{ and } \sigma_{st}=\lambda_{st}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{-(s+t)}.\] The degree of row $i$ of the matrix $A_{E^3}$ is $(d_{E^3})_i=\sum_{j} (A_{E^3})_{ij}=\sum_{j} \sum_{k}E_{ij}E_{jk}E_{ik}$. Hence, $(d_{E^3})_i$ counts the number of triangles incident to the vertex $i$. Let $\Delta_{E^3} = np^e_{\max}\tau_{\max} = \max \{ n^2 (p^e_{\max})^3, np^e_{\max} \log n\}$. Then $\Delta_{E^3}$ may be vaguely interpreted as the (approximate) maximum expected row degree of the matrix $E[A_{E^3}]$. The next lemma bounds the degrees of the rows of the matrix $A_{E^3}$ with high probability. \begin{lem} If $np^e_{\max} > \log n$, then for a constant $r_3>0$, there exists a constant $c_4(r_3)>0 $ such that the ``degree" of row $i$, $(d_{E^3})_i \leq c_4\Delta_{E^3}$ with probability at least $1-n^{-r_3}$ for all $i$. \label{bounded degree TE} \end{lem} \begin{lem} For a constant $c>0$, there exist constants $c_2(c), c_3(c)>1$ such that with probability at least $1-2n^{-c}$ and for any vertex sets $I,J \subseteq [n]$ and $|I|\leq |J|$ one of the following two statements is true: \vspace{5pt} (a) $\frac{e(I,J)}{\bar{\mu} (I,J)} \leq ec_2,$ (b) $ e(I,J) \log \frac{e(I,J)}{\bar{\mu} (I,J)} \leq c_3 n(p^e_{\max})^2 |J| \log \frac{n}{|J|}.$ \label{two statement TE} \end{lem} We use the result of the two previous lemmas to establish the proof for heavy pairs.
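The overlap-aware edge count $e(I,J)$ defined above can be written out directly; a minimal sketch where `edge(i, j)` stands for any symmetric count such as $(A_{E^3})_{ij}$ (our own illustrative helper, not the paper's code):

```python
def e_count(I, J, edge):
    # e(I, J) as defined above: sum edge(i, j) over I x J, except that
    # pairs lying inside (I \cap J)^2 are counted only once, with i < j.
    inter = set(I) & set(J)
    total = 0
    for i in I:
        for j in J:
            if i == j:
                continue
            if i in inter and j in inter:
                if i < j:
                    total += edge(i, j)
            else:
                total += edge(i, j)
    return total

# Overlapping sets: the pair (1, 2), with both endpoints in the
# intersection, is counted once rather than twice.
print(e_count([1, 2], [2, 3], lambda i, j: 1))  # -> 3
print(e_count([1, 2], [1, 2], lambda i, j: 1))  # -> 1
```

The case split avoids double-counting the unordered pairs inside $I \cap J$, which keeps $e(I,J)$ comparable to its mean $\bar{\mu}(I,J)$.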
In this setting, note that \begin{align*} \sum_{(i,j) \in H_1} x_i y_j (A_{E^3})_{ij} & \leq 2 \sum_{(s,t): 2^{(s+t)}\geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}} e(I_s,J_t)\frac{2^s}{2\sqrt{n}}\frac{2^t}{2\sqrt{n}} \\ &\leq 2 \, \frac{1}{4} \sum_{(s,t): 2^{(s+t)}\geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}} \frac{e(I_s,J_t)}{|I_s||J_t|n(p^e_{\max})^3}\frac{D_{E^3}}{\tau_{\max}}\frac{2^s|I_s|2^t|J_t|}{n^2} \\ & = \frac{1}{2} \sqrt{D_{E^3}}\sum_{(s,t): 2^{(s+t)}\geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}} \frac{e(I_s,J_t)}{\bar{\mu}(I_s,J_t)}\frac{\sqrt{D_{E^3}}}{\tau_{\max}} 2^{-(s+t) }\frac{2^{2s}|I_s|2^{2t}|J_t|}{n^2} \\ & = \frac{\sqrt{D_{E^3}}}{2} \sum_{(s,t): 2^{(s+t)}\geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}} \alpha_s \beta_t \sigma_{st}, \end{align*} where the second inequality uses $\frac{D_{E^3}}{\tau_{\max}} = np^e_{\max}\tau_{\max} \geq n^2(p^e_{\max})^3$. Next, we need to bound this quantity by a constant multiple of $\sqrt{D_{E^3}}$. Following the approach of~\cite{lei2015consistency} and~\cite{chin2015stochastic}, we split the set of pairs $C: \{(s,t): 2^{(s+t)}\geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}, |I_s| \leq |J_t| \}$ into six parts and show that the contribution of each part is bounded accordingly. Again, in our proof we rely on two facts, \[ \sum_{s}\alpha_s \leq \sum_{i}|4x_i|^2 \leq 16, \quad \sum_{t}\beta_{t} \leq 16. \] \begin{itemize} \item $C_1: \{(s,t) \in C, \sigma_{st} \leq 1\}$: \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_1 \} \leq \sum_{s,t}\alpha_s \beta_t \leq 256. \] \item $C_2: \{(s,t) \in C\backslash C_1, \lambda_{st} \leq c_2\,e\}$:\\ Under $C$, $2^{s+t} \geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}$, and consequently, \[ \sigma_{st}=\lambda_{st}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{-(s+t)} \leq \lambda_{st} \leq c_2\, e. \] This implies \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_2 \} \leq c_2\, e\, \sum_{s,t}\alpha_s \beta_t \leq 256\,c_2\,e. \] \item $C_3: \{(s,t) \in C\backslash (C_1 \cup C_2), 2^{s-t} \geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}\}$:\\ By Lemma~\ref{bounded degree TE}, $e(I_s,J_t)\leq c_4 |I_s|\Delta_{E^3}$.
Hence, $$\lambda_{st} =e(I_s,J_t)/\bar{\mu}_{st} \leq c_4\frac{|I_s|\Delta_{E^3}}{|I_s||J_t|n(p^e_{\max})^3} \leq c_4\frac{n}{|J_t|},$$ and consequently, \[\sigma_{st} = \lambda_{st}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{-(s+t)}\leq c_4 \frac{n}{|J_t|}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{-(s+t)} \leq c_42^{-2t}\frac{n}{|J_t|}, \] for $(s,t) \in C_3$. Then, \begin{align*} \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_3 \} & \leq \sum_s \alpha_s \sum_t \beta_t c_42^{-2t}\frac{n}{|J_t|} \\ & \leq \sum_s \alpha_s \sum_t 2^{2t}\frac{|J_t|}{n} c_42^{-2t}\frac{n}{|J_t|} \leq c_4\sum_s \alpha_s \leq 16\, c_4. \end{align*} \item $C_4: \{(s,t) \in C\backslash (C_1 \cup C_2 \cup C_3), \log \lambda_{st} > \frac{1}{4}[2t \log 2 + \log (1/\beta_t)] \}$:\\ From part (b) of Lemma~\ref{two statement TE}, we have \[ \lambda_{st} \log \lambda_{st}\, |I_s| |J_t|\, n(p^e_{\max})^3 \leq \frac{e(I_s,J_t)}{\bar{\mu}(I_s,J_t)} \log \frac{e(I_s,J_t)}{\bar{\mu}(I_s,J_t)} \bar{\mu}(I_s,J_t) \leq c_3\, n(p^e_{\max})^2\, |J_t| \log \frac{n}{|J_t|}. \] Noting that $\frac{n(p^e_{\max})^2}{n^2(p^e_{\max})^3}= \frac{1}{np^e_{\max}}=\frac{\tau_{\max}^2}{D_{E^3}}$, we may write \begin{align*} \sigma_{st} \alpha_s & \leq \lambda_{st}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{-(s+t)} \frac{|I_s|2^{2s}}{n} \\ & \leq c_3 \frac{1}{\log \lambda_{st} } \frac{\tau_{\max}^2}{D_{E^3}}\log (\frac{n}{|J_t|}) \frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{(s-t)} \\ &\leq c_3 \frac{1}{\log \lambda_{st} }\frac{2^{s-t}}{\frac{\sqrt{D_{E^3}}}{\tau_{\max}}}\{2t \log 2 + \log (1/\beta_t)\} \leq 4\, c_3 \, \frac{2^{s-t}}{\sqrt{D_{E^3}}/\tau_{\max}}.
\end{align*} Then, \begin{align*} \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_4 \} & = \sum_t \beta_t \sum_s \sigma_{st} \alpha_s 1\{(s,t) \in C_4 \} \\ & \leq 4\, c_4\, \sum_t \beta_t \sum_s \frac{2^{s-t}}{\sqrt{D_{E^3}}/\tau_{\max}} 1\{(s,t) \in C_4 \} \leq 8\, c_4 \sum_t \beta_t \leq 128\, c_4, \end{align*} where the penultimate inequality relies on the fact that $(s,t) \notin C_3$, and that consequently $2^{(s-t)} \leq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}$. \item $C_5 := \{(s,t) \in C\backslash (C_1 \cup C_2 \cup C_3 \cup C_4) : 2t \log 2 \geq \log (1/\beta_t) \}$:\\ First, note that since $(s,t) \notin C_4$, we have $\log \lambda_{st} \leq \frac{1}{4}[2t \log 2 + \log (1/\beta_t)] \leq t \log 2$ and hence $\lambda_{st} \leq 2^t$. Furthermore, $\sigma_{st} =\lambda_{st}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}2^{-(s+t)} \leq 2^{-s}\frac{\sqrt{D_{E^3}}}{\tau_{\max}}$. Since $(s,t) \notin C_2$, we have $\log \lambda_{st} \geq 1$ and \[\sigma_{st}\alpha_s \leq c_4 \frac{1}{\log \lambda_{st} }\frac{2^{s-t}}{\frac{\sqrt{D_{E^3}}}{\tau_{\max}}}\{2t \log 2 + \log (1/\beta_t)\} \leq c_4 \frac{2^{s-t}}{\frac{\sqrt{D_{E^3}}}{\tau_{\max}}} 4\,t \, \log 2. \] Then, \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_5 \} \leq \sum_t \beta_t \sum_s c_4 \frac{2^{s-t}}{\frac{\sqrt{D_{E^3}}}{\tau_{\max}}} 4t \log 2 \leq 2\, c_4 \, \log 2 \sum_t \beta_t \leq 32\, c_4. \] \item $C_6 := \{(s,t) \in C\backslash (C_1 \cup C_2 \cup C_3 \cup C_4 \cup C_5) \}$:\\ Since $2t\log 2 < \log (1/\beta_t)$, we have $\log \lambda_{st} \leq t \log 2 \leq \log (1/\beta_t) /2$. This fact, along with $\lambda_{st} \geq 1,$ implies that $\lambda_{st} \leq 1/\beta_t$. Therefore, \[ \sum_{(s,t)}\alpha_s \beta_t \sigma_{st}1\{(s,t) \in C_6 \} \leq \sum_s \alpha_s \sum_t 2^{-(s+t)}\frac{\sqrt{D_{E^3}}}{\tau_{\max}} 1\{(s,t) \in C_6 \} \leq \sum_s \alpha_s \leq 16.
\] \end{itemize} The set of pairs $\{(s,t): 2^{(s+t)}\geq \frac{\sqrt{D_{E^3}}}{\tau_{\max}}, \, |I_s| > |J_t| \}$ may be similarly split into six categories, and similar arguments may be used to bound each of the contributions $\sum_{(s,t)} \alpha_s \beta_t \sigma_{st}$. Collecting all the terms we have the following result for heavy pairs: for some constant $c>0$, there exists a constant $c_1(c)>0$ such that with probability at least $1-2n^{-c}$, one has \[ \sum_{(i,j) \in H} x_{i}y_{j}(A_{E^3})_{ij} \leq c_1\sqrt{D_{E^3}}. \] \end{proof} \subsubsection*{Proof of Lemma \ref{bounded degree}.} \begin{proof} We note that $d_{t,i}=\sum_{j}\sum_{k} T_{ijk}$ is a sum of independent random variables, each bounded in absolute value by 1. Writing $w_{ijk}:=T_{ijk}-p_{ijk}$ for the centered variables, Bernstein's inequality gives \begin{align*} P(d_{t,i} \geq c_4\Delta_t) & \leq P\left(\sum_{j}\sum_{k} w_{ijk} \geq (c_4-1)\Delta_t\right) \\ & \leq \exp \left( -\frac{\frac{1}{2}(c_4-1)^2\Delta_t^2}{\sum_j \sum_k p_{ijk}(1-p_{ijk}) + \frac{1}{3}(c_4-1)\Delta_t}\right)\\ & \leq \exp \left(-\Delta_t \frac{3(c_4-1)^2}{2c_4 + 4}\right) \\ & \leq n^{-c_7}, \end{align*} where the last inequality follows since $\Delta_t \geq c \log n$. Taking the union bound over all values of $i$, we obtain that $\max_i d_{t,i} \leq c_4 \Delta_t$ with probability at least $1-n^{-(c_7-1)}$, where $c_4$ is a function of the constant $c_7$. \end{proof} \subsubsection*{Proof of Lemma \ref{twostatement}.} \begin{proof} If $|J|>n/e$, then the result of Lemma \ref{bounded degree} implies \[ \frac{e(I,J)}{\Delta_t |I||J|/n} \leq \frac{\sum_{i \in I} \max_{i} d_{t,i}}{\Delta_t |I|/e} \leq \frac{|I|c_2\Delta_t}{\Delta_t|I|/e} \leq c_2\, e, \] and consequently, (a) holds for this case. If $|J| < n/e$, let $S(I,J)=\{(i,j): i \in I, j \in J\}$. We next invoke Corollary A.1.10 of~\cite{alon2004probabilistic}, described below.
\begin{prop} For independent Bernoulli random variables $X_u \sim \mathrm{Bern}(p_u), u=1,\ldots,n$ and $p=\frac{1}{n}\sum_{u}p_{u}$, we have \[ P(\sum_{u} (X_u -p_u) \geq a) \leq \exp (a- (a+ pn) \log (1+ a/pn)). \] \end{prop} Using the above result, for $l\geq 8$, we have \begin{align*} P(e(I,J) \geq l \bar{\mu}(I,J)) & \leq P (\sum_{(i,j) \in S(I,J)}\sum_{k\neq (i,j)} (T_{ijk}-p_{ijk}) \geq l \bar{\mu}(I,J) - \sum_{(i,j) \in S(I,J)} \sum_{k\neq (i,j)} p_{ijk} ) \\ & \leq P (\sum_{(i,j) \in S(I,J)}\sum_{k\neq (i,j)} w_{ijk} \geq (l-1) \bar{\mu}(I,J) ) \\ & \leq \exp((l-1)\bar{\mu}(I,J) - l\bar{\mu}(I,J) \log l) \\ & \leq \exp (-\frac{1}{2} l \log l \bar{\mu}(I,J)). \end{align*} For a constant $c_5>0$, let \[ t(I,J) \log t (I,J)=\frac{c_5|J|}{\bar{\mu} (I,J)} \log \frac{n}{|J|}, \] and let $l(I,J)=\max \{8,t(I,J)\}$. Then, from the previous calculations, we have \[ P(e(I,J) \geq l(I,J) \bar{\mu}(I,J)) \leq \exp (-\frac{1}{2} \bar{\mu}(I,J) l(I,J) \log l(I,J) ) \leq \exp \left(-\frac{c_5}{2}|J| \log \frac{n}{|J|}\right). \] From this point onwards, arguments identical to those used in \cite{lei2015consistency} can be invoked to complete the proof of Lemma~\ref{twostatement}. \end{proof} \subsubsection*{Proof of Lemma \ref{bounded degree TE}} \begin{proof} We use Proposition \ref{prop1}. Let us start with the observation that \[ E[(d_{E^3})_i]=E[\sum_{j} \sum_{k}E_{ij}E_{jk}E_{ik}] \leq n^2(p^e_{\max})^3 \leq \Delta_{E^3}. \] Furthermore, let $I_i$ be the set of all triangles of type $E^3$ incident to a vertex $i$. Let $E^3_{\theta}=E_{ij}E_{jk}E_{ik}$, indexed by $\theta=\{i,j,k\},$ denote a family of indicator random variables. Define a ``good event'' $\Gamma$ as before: under the good event, every pair of vertices has at most $C=4\tau_{\max}$ common neighbors. Clearly, two triangles belonging to the set $I_i$ are independent if they do not share any edges. For simplicity, let ``$\sim$'' denote a relation such that $\theta_1 \sim \theta_2$ holds if $\theta_1$ and $\theta_2$ share an edge.
For any $E^3_{ijk}$, the good event $\Gamma$ restricts the number of triangles of type $E^3$ in the set $I_i$ that are dependent on $E^3_{ijk}$ to $2C$. Define $J \subset I_i$ to be a subset in which every triangle shares an edge with at most $2C$ other members, i.e., \[ J=\big\{ \theta \in I_i : |\{\theta' \in J : \theta' \sim \theta \}| \leq 2C \big\}. \] Then, we have \[ \max_{\theta_2 \in J} \sum_{\theta_1 \in J: \theta_1 \sim \theta_2} E^3_{\theta_1} \leq 2C = 8\tau_{\max}, \quad \mu = E\big[\sum_{\theta \in I_i} E^3_{\theta}\big] \leq \Delta_{E^3}. \] For $t=\Delta_{E^3},$ the above results imply \begin{align*} P(\max_{J} \sum_{\theta\in J} E^3_{\theta} \geq 2\Delta_{E^3}) & \leq \min \bigg \{\exp \left( -\frac{\Delta_{E^3}^2}{2C(\Delta_{E^3} + \Delta_{E^3}/3)}\right), \left(1+\frac{\Delta_{E^3}}{2\Delta_{E^3}}\right)^{-\Delta_{E^3}/2C} \bigg \}\\ & = \min \bigg \{\exp \left(- \frac{3\Delta_{E^3}}{ 32\tau_{\max}}\right),\left(\tfrac{3}{2}\right)^{-\Delta_{E^3}/8\tau_{\max}} \bigg \} \\ & \leq \exp(-c' \log n) = n^{-c'}, \end{align*} where the last inequality is a consequence of the following argument. If $\tau_{\max}=n(p^e_{\max})^2$, then $\frac{\Delta_{E^3}}{ \tau_{\max}} = np^e_{\max} \geq \log n$ by assumption, and if $\tau_{\max}=\log n$, then $\frac{\Delta_{E^3}}{ \tau_{\max}} \geq \log n$. Under the good event $\Gamma$, the set $I_i$ itself satisfies the defining property of $J$, and consequently $\max_{J} \sum_{\theta \in J} E^3_{\theta} = (d_{E^3})_i$. Then, \[ P((d_{E^3})_i \geq 2 \Delta_{E^3}) \leq n^{-c'} + P(A \notin \Gamma) \leq n^{-c''}, \] since $P(A \notin \Gamma) \leq \exp(-\frac{1}{4} \log n)$. Taking the union bound over all values of $i$ results in $\max_i (d_{E^3})_i \leq c_1 \Delta_{E^3}$ with probability at least $1-n^{-(c''-1)}$. \end{proof} \subsubsection*{Proof of Lemma \ref{two statement TE}} \begin{proof} We first note that $\bar{\mu}(I,J)=|I||J|n(p^e_{\max})^3$ and $\Delta_{E^3} \geq n^2(p^e_{\max})^3$.
Next, if $|J|>n/e$, then the result of Lemma \ref{bounded degree TE} implies that \[ \frac{e(I,J)}{\bar{\mu}(I,J)} = \frac{e(I,J)}{|I||J|n(p^e_{\max})^3} \leq \frac{\sum_{i \in I} \max_i (d_{E^3})_i}{n^2(p^e_{\max})^3|I|/e} \leq \frac{|I|c_1\Delta_{E^3}}{n^2(p^e_{\max})^3|I|/e} \leq c_1\,e, \] so that in this case (a) holds true. If $|J| < n/e$, define $S(I,J)$ as the set of all $3$-tuples such that each tuple has one vertex in each of the sets $I$ and $J$. To prove that the second statement also holds, we cannot invoke the exponential concentration inequality used in the proof of Theorem 1, since the independence assumption no longer holds. Instead, we use Proposition \ref{prop1} on the set $S(I,J)$ of $3$-tuples. First, note that $$e(I,J) =\sum_{i \in I} \sum_{j \in J} (A_{E^3})_{ij} = \sum_{\theta \in S(I,J)} (A_{E^3})_{\theta}$$ and $$E(\sum_{\theta \in S} (A_{E^3})_{\theta}) \leq |I||J|n(p^e_{\max})^3 =\bar{\mu}(I,J).$$ Define $S^* \subset S$ such that each $(A_{E^3})_{\theta}$, $\theta\in S^*$, depends on at most $\tau_{\max}$ other $(A_{E^3})_{\theta'}$'s. We then have $\max_{S^*}\sum_{\theta \in S^*(I,J)} (A_{E^3})_{\theta} \leq \tau_{\max}$. Next, let $t=(l-1)\bar{\mu}(I,J)$. Then, for $l\geq 8$, \begin{align*} P(e(I,J) \geq l \bar{\mu}(I,J)) & \leq \exp(-\frac{l\bar{\mu}(I,J) \log l -(l-1)\bar{\mu}(I,J)}{\tau_{\max}}) \\ & \leq \exp (-\frac{1}{2} \frac{l \log l \bar{\mu}(I,J)}{\tau_{\max}}). \end{align*} For a constant $c_3>0$, define $t(I,J)$ according to \[ t(I,J) \log t (I,J)=\frac{c_3\tau_{\max}|J|}{\bar{\mu} (I,J)} \log \frac{n}{|J|}, \] and let $l(I,J)=\max \{8,t(I,J)\}$. From the previous calculations, we have \[ P(e(I,J) \geq l(I,J) \bar{\mu}(I,J)) \leq \exp (-\frac{1}{2} \frac{\bar{\mu}(I,J)}{\tau_{\max}} l(I,J) \log l(I,J) ) \leq \exp(-\frac{c_3}{2} |J| \log \frac{n}{|J|}).
\] Following an argument identical to that described in~\cite{lei2015consistency}, we have \[ P(\exists I,J: |I| \leq |J| \leq \frac{n}{e}, \, e(I,J) \geq l(I,J)\bar{\mu}(I,J)) \leq n^{-c_4}. \] Therefore, with probability at least $1-n^{-c_4}$, we have $e(I,J) \leq l(I,J)\bar{\mu}(I,J)$. For pairs $\{(I,J): |I| \leq |J| \leq \frac{n}{e}\}$ such that $l(I,J) =8$, we readily have \[ \frac{e(I,J)}{\bar{\mu}(I,J)}\leq 8. \] This establishes that part (a) of the claim is true. For the remaining pairs, for which $l(I,J)=t(I,J)$ holds, we have $\frac{e(I,J)}{\bar{\mu}(I,J)} \leq t(I,J)$, and \[ \frac{e(I,J)}{\bar{\mu}(I,J)} \log \frac{e(I,J)}{\bar{\mu}(I,J)} \leq t(I,J) \log t(I,J) = \frac{c_3\tau_{\max}|J|}{\bar{\mu} (I,J)} \log \frac{n}{|J|}, \] implying that part (b) of the claim is true as well. \end{proof} \section*{Acknowledgement} The authors would like to thank Prof. Lutz Warnke of the Georgia Institute of Technology for explaining how his work may be applied in parts of the analysis. We also appreciate his generous assistance with proving some of the results. The work was supported by the NSF grant CCF 1527636, the Center of Science of Information NSF STC center, and an NIH U01 grant for targeted software development.
\section{Proof for the main result (1)} Here we provide the proof for the main result, that is, Eq.~(1). Although here we focus on deriving the bound (1) for QKD protocols between Alice and Bob, the same technique can be applied for protocols to share entanglement between them, similarly to the TGW bound \cite{TGW14,TGW14e}. Suppose that, with the help of other parties $\{C^k \}_{k=1,2,\ldots,n}$ in a quantum network, Alice and Bob share physical systems ${\cal H}_{A'A''} \otimes {\cal H}_{B'B''}$ in private state \cite{HHHO05} \begin{equation} \hat{\gamma}^{AB}_d=\hat{U}^{A'B'A''B''} (\ket{\Phi} \bra{\Phi}_{A'B'} \otimes \hat{\rho}^{A''B''}) \hat{U}^{A'B'A''B''\dag} \end{equation} with unitary operator $\hat{U}^{A'B'A''B''}:=\sum_{i,j=0}^{d-1} \ket{ij} \bra{ij}_{A'B'} \otimes \hat{U}^{A''B''}_{ij}$, maximally entangled state $\ket{\Phi}_{A'B'}:= \sum_{i=0}^{d-1} \ket{ii}_{A'B'}/\sqrt{d}$ and orthonormal states $\{\ket{ij}_{A'B'}\}_{i,j=0,1,\ldots,d-1}$ for systems ${\cal H}_{A'} \otimes {\cal H}_{B'}$. In particular, Alice and Bob obtain a private state through the following most general adaptive protocol: (i) Alice, Bob and parties $\{C^k \}_{k=1,2,\ldots,n}$ begin by preparing their physical systems ${\cal H}^0$ in a separable state $\hat{\rho}_1^{ABC^1C^2\ldots C^n}$, where ${\cal H}^j:={\cal H}_{A}^{j} \otimes {\cal H}_{B}^j \otimes {\cal H}_{C^1}^j \otimes {\cal H}_{C^2}^j\otimes \cdots \otimes {\cal H}_{C^n}^j$ and ${\cal H}_{X}^j$ represents the physical system held by party $X\in \{ A,B,C^1,C^2,\ldots, C^n\}$. 
(ii) In the first round, party $X|1 \in \{A,B,C^1,C^2,\ldots, C^n \}$ sends his/her subsystem $\bar{{\cal H}}_{X|1}$ to party $Y|1$ through quantum channel ${\cal N}^{\bar{{\cal H}}_{X|1} \to \tilde{{\cal H}}_{Y|1}} $ with isometric extension ${\cal U}^{\bar{{\cal H}}_{X|1}\to \tilde{{\cal H}}_{Y|1} \otimes {\cal H}_{E_1}}$ for environment system ${\cal H}_{E_1}$, which provides a refreshed description of the whole system, ${\cal H}^{0'|1}$ with ${\cal H}_{Y|1}^{0'|1}={\cal H}_{Y|1}^0 \otimes \tilde{{\cal H}}_{Y|1}$, ${\cal H}_{X|1}^{0}={\cal H}_{X|1}^{0'|1} \otimes \bar{{\cal H}}_{X|1}$ and ${\cal H}_{Z}^{0'|1}={\cal H}_{Z}^0$ for any party $Z$ except for parties $X|1$ and $Y|1$. This is followed by an LOCC operation, which presents a renewed entire system ${\cal H}^{1|1}$ in state $\hat{\rho}_{k_1}^{ABC^1C^2\ldots C^n}$ with probability $p_{k_1}$. Let ${\cal H}_{R_{k_1}}$ be a system that purifies the state $\hat{\rho}_{k_1}^{ABC^1C^2\ldots C^n}$, providing pure-state expression $\ket{\hat{\rho}_{k_1}}_{ABC^1C^2\ldots C^nR_{k_1}}$. 
(iii) Similarly, in the $i$th round ($i=2,3,\ldots,l$), depending on the previous outcomes ${\bm k}_{i-1}:={ k}_{i-1}\cdots { k}_1$ (with ${\bm k}_{0}:=1$), for given entire system ${\cal H}^{(i-1)|{\bm k}_{i-2}}$, party $X| {\bm k}_{i-1} \in \{A,B,C^1,C^2,\ldots, C^n \}$ sends his/her subsystem $\bar{{\cal H}}_{X|{{\bm k}_{i-1}}}$ to party $Y|{{\bm k}_{i-1}}\in \{A,B,C^1,C^2,\ldots, C^n \}$ through quantum channel ${\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} } $ with isometric extension ${\cal U}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}}\otimes {\cal H}_{E_{{\bm k}_{i-1}}}}$ for environment system $ {\cal H}_{E_{{\bm k}_{i-1}}}$, which updates the description of the whole system as ${\cal H}^{(i-1)'|{\bm k}_{i-1}}$ with ${\cal H}^{(i-1)'|{\bm k}_{i-1}}_{Y|{{\bm k}_{i-1}}}={\cal H}^{(i-1)|{\bm k}_{i-2}}_{Y|{{\bm k}_{i-1}}} \otimes \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}}$, ${\cal H}^{(i-1)|{{\bm k}_{i-2}}}_{X|{{\bm k}_{i-1}}}={\cal H}^{(i-1)'|{{\bm k}_{i-1}} }_{X|{{\bm k}_{i-1}}} \otimes\bar{ {\cal H}}_{X|{{\bm k}_{i-1}}}$ and ${\cal H}_{Z}^{(i-1)'|{\bm k}_{i-1}}={\cal H}_{Z}^{(i-1)|{{\bm k}_{i-2}}}$ for any party $Z$ except for parties $X|{{\bm k}_{i-1}}$ and $Y|{{\bm k}_{i-1}}$. This is followed by an LOCC operation, providing an entire system ${\cal H}^{i|{\bm k}_{i-1}}$ in state $\hat{\rho}_{{\bm k}_i}^{ABC^1C^2\ldots C^n}$ with probability $p_{k_i| {\bm k}_{i-1}}$. Let ${\cal H}_{R_{{\bm k}_{i}}}$ be a system that purifies the state $\hat{\rho}_{{\bm k}_i}^{ABC^1C^2\ldots C^n}$, presenting pure-state expression $\ket{\hat{\rho}_{{\bm k}_i}}_{ABC^1C^2\ldots C^n R_{{\bm k}_{i}} }$. (iv) Finally, i.e., in the $l$th round, Alice and Bob obtain state $\hat{\rho}_{{\bm k}_l}^{ABC^1C^2\ldots C^n}$ close to private state $\hat{\gamma}_{d_{{\bm k}_l}}^{AB}$ for integer $d_{{\bm k}_l}(\ge 1)$. 
From the definition, the final state $\hat{\rho}_{{\bm k}_l}^{ABC^1C^2\ldots C^n}$ should be close to private state $\hat{\gamma}^{AB}_{d_{{\bm k}_l}}$, i.e., $||\hat{\rho}_{{\bm k}_l}^{AB}- \hat{\gamma}^{AB}_{d_{{\bm k}_l}}||_1 \le \epsilon$ for $\epsilon >0$, where we define $\hat{\rho}^X:={\rm Tr}_Y (\hat{\rho}^{XY})$. From the continuity of the squashed entanglement \cite{C06phd}, this implies \begin{equation} |E_{\rm sq}^{{\cal H}_A^{l|{\bm k}_{l-1}}:{\cal H}_B^{l|{\bm k}_{l-1}}} (\hat{\rho}_{{\bm k}_l}^{AB})- E_{\rm sq}^{{\cal H}_A^{l|{\bm k}_{l-1}}:{\cal H}_B^{l|{\bm k}_{l-1}}} (\hat{\gamma}^{AB}_{d_{{\bm k}_l}})| \le 16\sqrt{\epsilon} \log d'_{{\bm k}_{l}} +4 h(2\sqrt{\epsilon}), \end{equation} where $d'_{{\bm k}_l}:=\min\{ \dim( {\cal H}_A^{l|{\bm k}_{l-1}} ),\dim( {\cal H}_B^{l|{\bm k}_{l-1}}) \}$, $h(x):=-x \log_2 x -(1-x)\log_2 (1-x)$ and $E_{\rm sq}^{X:Y} (\hat{\rho}^{XY})$ is the squashed entanglement between systems $X$ and $Y$ in state $\hat{\rho}^{XY}$ \cite{CW04}. Since $d'_{{\bm k}_l}=d_{{\bm k}_l}$ without loss of generality and $ E_{\rm sq}^{{\cal H}_A^{l|{\bm k}_{l-1}}:{\cal H}_B^{l|{\bm k}_{l-1}}} (\hat{\gamma}^{AB}_{d_{{\bm k}_l}})\ge \log d_{{\bm k}_l}$ \cite{C06phd}, we have \begin{align} \log d_{{\bm k}_l} \le \frac{1}{1-16\sqrt{\epsilon}} (E_{\rm sq}^{{\cal H}_A^{l|{\bm k}_{l-1}}:{\cal H}_B^{l|{\bm k}_{l-1}}} (\hat{\rho}_{{\bm k}_l}^{AB})+ 4 h(2\sqrt{\epsilon})). \label{eq:proof1} \end{align} Our proof for Eq.~(1) is made by regarding the general multi-party protocol as bipartite communication and by applying the technique of the TGW bound \cite{TGW14} to the bipartite one. Hence, let us divide the set of parties $\{ A,B,C^1,C^2,\ldots, C^n\} (=:{\cal P})$ into two disjoint groups ${\cal P}_A$ and ${\cal P}_B(={\cal P}\setminus {\cal P}_A)$ that include parties $A$ and $B$, respectively. We define ${\cal H}_{{\cal P}_A}^r:= \otimes_{X \in {\cal P}_A} {\cal H}_X^r$ and ${\cal H}_{{\cal P}_B}^r:= \otimes_{X \in {\cal P}_B} {\cal H}_X^r$. 
In addition, we regard ${\bm k}_{i-1}$ as ${\bm k}_{i-1} \in K_{\rm in|{\cal P}_A} $ (${\bm k}_{i-1} \in K_{{\rm out}|{\cal P}_A} $) if $X|{{\bm k}_{i-1}} \in {\cal P}_C$ and $Y|{{\bm k}_{i-1}} \in {\cal P}_C$ (if $X|{{\bm k}_{i-1}} \in {\cal P}_C$ and $Y|{{\bm k}_{i-1}} \in {\cal P} \setminus {\cal P}_C$) for $C=A$ or $C=B$. In what follows, we derive inequalities for two cases, ${\bm k}_{i-1} \in K_{\rm in|{\cal P}_A} $ and ${\bm k}_{i-1} \in K_{{\rm out}|{\cal P}_A} $. Let us consider an $i$th round with ${\bm k}_{i-1} \in K_{\rm in|{\cal P}_A} $. In this case, the channel $ {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}}}$ should be regarded as just a local channel for the bipartite communication between ${\cal P}_A$ and ${\cal P}_B$. To make this clearer, let us first assume $X|{{\bm k}_{i-1}} \in {\cal P}_A$ and $Y|{{\bm k}_{i-1}} \in {\cal P}_A$. Then, we have \begin{align} \sum_{k_i} p_{k_i|{\bm k}_{i-1}} E_{\rm sq}^{{\cal H}_A^{i|{{\bm k}_{i-1}}}:{\cal H}_B^{i|{{\bm k}_{i-1}}}} (\hat{\rho}_{{\bm k}_i}^{AB})\le & \sum_{k_i} p_{k_i|{\bm k}_{i-1}} E_{\rm sq}^{{\cal H}_{{\cal P}_A}^{i|{{\bm k}_{i-1}}}:{\cal H}_{{\cal P}_B}^{i|{{\bm k}_{i-1}}}} (\hat{\rho}_{{\bm k}_i}^{ABC^1C^2\ldots C^n}) \\ \le & E_{\rm sq}^{{\cal H}_{{\cal P}_A}^{(i-1)'|{{\bm k}_{i-1}}}:{\cal H}_{{\cal P}_B}^{(i-1)'|{{\bm k}_{i-1}}}} ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}}}(\hat{\rho}_{{\bm k}_{i-1}}^{ABC^1C^2\ldots C^n})) \\ \le & E_{\rm sq}^{{\cal H}_{{\cal P}_A}^{(i-1)|{{\bm k}_{i-2}}}:{\cal H}_{{\cal P}_B}^{(i-1)|{{\bm k}_{i-2}}}} (\hat{\rho}_{{\bm k}_{i-1}}^{ABC^1C^2\ldots C^n}). \label{eq:a1} \end{align} The first inequality is derived from the fact that the squashed entanglement does not increase under partial traces. The second inequality comes from the fact that the squashed entanglement does not increase on average under LOCC. 
The final inequality states that the squashed entanglement does not increase under any local quantum channel. The same inequality is obtained if we begin by assuming $X|{{\bm k}_{i-1}} \in {\cal P}_B$ and $Y|{{\bm k}_{i-1}} \in {\cal P}_B$. Let us consider an $i$th round with ${\bm k}_{i-1} \in K_{\rm out|{\cal P}_A} $. In this case, $ {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}}}$ is a channel connecting parties ${\cal P}_A$ and ${\cal P}_B$ nontrivially, which should put a limitation on the communication. To make this more precise, we first assume $X|{{\bm k}_{i-1}} \in {\cal P}_A$ and $Y|{{\bm k}_{i-1}} \in {\cal P}_B$. Then, we have \begin{align} \sum_{k_i}&\; p_{k_i|{\bm k}_{i-1}} E_{\rm sq}^{{\cal H}_A^{i|{{\bm k}_{i-1}}}:{\cal H}_B^{i|{{\bm k}_{i-1}}}} (\hat{\rho}_{{\bm k}_i}^{AB})\le \sum_{k_i} p_{k_i|{\bm k}_{i-1}} E_{\rm sq}^{{\cal H}_{{\cal P}_A}^{i|{{\bm k}_{i-1}}}:{\cal H}_{{\cal P}_B}^{i|{{\bm k}_{i-1}}}} (\hat{\rho}_{{\bm k}_i}^{ABC^1C^2\ldots C^n}) \\ \le& E_{\rm sq}^{{\cal H}_{{\cal P}_A}^{(i-1)'|{{\bm k}_{i-1}}}:{\cal H}_{{\cal P}_B}^{(i-1)'|{{\bm k}_{i-1}}}} ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}}}(\hat{\rho}_{{\bm k}_{i-1}}^{ABC^1C^2\ldots C^n})) \\ =& E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(i-1)'|{{\bm k}_{i-1}}}:{\cal H}_{{\cal P}_B\setminus (Y|{\bm k}_{i-1})}^{(i-1)'|{{\bm k}_{i-1}}} \otimes {\cal H}_{Y|{\bm k}_{i-1}}^{(i-1)|{{\bm k}_{i-2}}} \otimes \tilde{{\cal H}}_{Y|{\bm k}_{i-1}} } ( {\cal U}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} \otimes {\cal H}_{E_{{\bm k}_{i-1}}}}(\ket{ \hat{\rho}_{{\bm k}_{i-1}}}_{ABC^1C^2\ldots C^n R_{{\bm k}_{i-1}}} )) \\ \le& E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(i-1)'|{{\bm k}_{i-1}}} \otimes {\cal H}_{{\cal P}_B\setminus (Y|{\bm k}_{i-1})}^{(i-1)'|{{\bm k}_{i-1}}} \otimes {\cal H}_{Y|{\bm k}_{i-1}}^{(i-1)|{{\bm k}_{i-2}}} \otimes {\cal H}_{R_{{\bm k}_{i-1}}} : \tilde{{\cal H}}_{Y|{\bm 
k}_{i-1}} } ( {\cal U}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} \otimes {\cal H}_{E_{{\bm k}_{i-1}}}}(\ket{ \hat{\rho}_{{\bm k}_{i-1}}}_{ABC^1C^2\ldots C^n R_{{\bm k}_{i-1}}} )) \nonumber \\ &+ E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(i-1)'|{{\bm k}_{i-1}}} \otimes \tilde{{\cal H}}_{Y|{\bm k}_{i-1}} \otimes {\cal H}_{E_{{\bm k}_{i-1}}} :{\cal H}_{{\cal P}_B\setminus (Y|{\bm k}_{i-1})}^{(i-1)'|{{\bm k}_{i-1}}} \otimes {\cal H}_{Y|{\bm k}_{i-1}}^{(i-1)|{{\bm k}_{i-2}}} } ( {\cal U}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} \otimes {\cal H}_{E_{{\bm k}_{i-1}}}}(\ket{ \hat{\rho}_{{\bm k}_{i-1}}}_{ABC^1C^2\ldots C^n R_{{\bm k}_{i-1}}} )) \\ =&E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(i-1)'|{{\bm k}_{i-1}}} \otimes {\cal H}_{{\cal P}_B\setminus (Y|{\bm k}_{i-1})}^{(i-1)|{{\bm k}_{i-2}}} \otimes {\cal H}_{Y|{\bm k}_{i-1}}^{(i-1)|{{\bm k}_{i-2}}} \otimes {\cal H}_{R_{{\bm k}_{i-1}}} : \tilde{{\cal H}}_{Y|{\bm k}_{i-1}} } ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} }(\ket{ \hat{\rho}_{{\bm k}_{i-1}}}_{ABC^1C^2\ldots C^n R_{{\bm k}_{i-1}}} )) \nonumber \\ &+ E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(i-1)'|{{\bm k}_{i-1}}} \otimes \bar{{\cal H}}_{X|{{\bm k}_{i-1}}} :{\cal H}_{{\cal P}_B\setminus (Y|{\bm k}_{i-1})}^{(i-1)|{{\bm k}_{i-2}}} \otimes {\cal H}_{Y|{\bm k}_{i-1}}^{(i-1)|{{\bm k}_{i-2}}} } (\ket{ \hat{\rho}_{{\bm k}_{i-1}}}_{ABC^1C^2\ldots C^n R_{{\bm k}_{i-1}}} ) \\ \le& E_{\rm sq} ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} }) +E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(i-1)|{{\bm k}_{i-2}}} :{\cal H}_{{\cal P}_B}^{(i-1)|{{\bm k}_{i-2}}} } (\hat{\rho}_{{\bm k}_{i-1}}^{ABC^1C^2\ldots C^n} ) . \label{eq:a2} \end{align} The first inequality is derived from the fact that the squashed entanglement does not increase under partial traces. 
The second inequality comes from the fact that the squashed entanglement does not increase on average under LOCC. The third inequality is the application of Lemma 2 in Ref.~\cite{TGW14} by regarding ${\cal H}_{{\cal P}_A }^{(i-1)'|{{\bm k}_{i-1}}}$ as system $A$, $\tilde{{\cal H}}_{Y|{\bm k}_{i-1}}$ as system $B_1$, ${\cal H}_{E_{{\bm k}_{i-1}}}$ as system $E_1$, $ {\cal H}_{{\cal P}_B\setminus (Y|{\bm k}_{i-1})}^{(i-1)'|{{\bm k}_{i-1}}} \otimes {\cal H}_{Y|{\bm k}_{i-1}}^{(i-1)|{{\bm k}_{i-2}}}$ as system $B_2$, and ${\cal H}_{R_{{\bm k}_{i-1}}}$ as system $E_2$. The final inequality follows from the definition \cite{TGW14} of the squashed entanglement of a quantum channel. The same inequality is derived if we start by assuming $X|{{\bm k}_{i-1}} \in {\cal P}_B$ and $Y|{{\bm k}_{i-1}} \in {\cal P}_A$. Therefore, using Eqs.~(\ref{eq:a1}) and (\ref{eq:a2}) recursively and the fact that $\hat{\rho}_1^{ABC^1C^2\ldots C^n}$ is separable, we obtain \begin{align} \sum_{{\bm k_l} } p_{{\bm k}_l} E_{\rm sq}^{{\cal H}_A^{l|{{\bm k}_{l-1}}}:{\cal H}_B^{l|{{\bm k}_{l-1}}}} (\hat{\rho}_{{\bm k}_l}^{AB}) =& \sum_{{\bm k_{l-1}} } p_{{\bm k}_{l-1}} \sum_{k_l } p_{k_l|{\bm k}_{l-1} } E_{\rm sq}^{{\cal H}_A^{l|{{\bm k}_{l-1}}}:{\cal H}_B^{l|{{\bm k}_{l-1}}}} (\hat{\rho}_{{\bm k}_l}^{AB}) \\ \le& \sum_{{\bm k_{l-1}} \in K_{\rm out|{\cal P}_A} } p_{{\bm k}_{l-1}} E_{\rm sq} ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{l-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{l-1}}} }) \nonumber \\ &+\sum_{{\bm k_{l-1}} } p_{{\bm k}_{l-1}} E_{\rm sq}^{{\cal H}_{{\cal P}_A }^{(l-1)|{{\bm k}_{l-2}}} :{\cal H}_{{\cal P}_B}^{(l-1)|{{\bm k}_{l-2}}} } (\hat{\rho}_{{\bm k}_{l-1}}^{ABC^1C^2\ldots C^n} ) \\ \le& \sum_{i=1}^l \sum_{{\bm k_{i-1}} \in K_{\rm out|{\cal P}_A}} p_{{\bm k}_{i-1}} E_{\rm sq} ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} }) .
\end{align} Combined with Eq.~(\ref{eq:proof1}), this concludes \begin{align} \sum_{{\bm k_l} } p_{{\bm k}_l} \log d_{{\bm k}_l} \le \frac{1}{1-16\sqrt{\epsilon}} \left(\sum_{i=1}^l \sum_{{\bm k_{i-1}} \in K_{\rm out|{\cal P}_A}} p_{{\bm k}_{i-1}} E_{\rm sq} ( {\cal N}^{\bar{{\cal H}}_{X|{{\bm k}_{i-1}}} \to \tilde{{\cal H}}_{Y|{{\bm k}_{i-1}}} }) + 4 h(2\sqrt{\epsilon}) \right). \label{eq:main} \end{align} This is equivalent to Eq.~(1).
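As a quick numerical sanity check (not part of the proof), the right-hand side of Eq.~(\ref{eq:main}) is easy to evaluate. The sketch below uses purely hypothetical values for $\epsilon$ and for the probability-weighted channel terms $p_{{\bm k}_{i-1}} E_{\rm sq}({\cal N})$; only the binary entropy $h(\cdot)$ and the prefactor $1/(1-16\sqrt{\epsilon})$ come from the text.

```python
import math

def binary_entropy(x):
    """h(x) = -x log2 x - (1-x) log2(1-x), with the convention h(0)=h(1)=0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def key_rate_upper_bound(weighted_channel_esq, eps):
    """Evaluate (sum_i p_i E_sq(N_i) + 4 h(2 sqrt(eps))) / (1 - 16 sqrt(eps)).

    weighted_channel_esq: hypothetical list of terms p_{k_{i-1}} E_sq(N),
    already weighted by their probabilities.  The bound is only meaningful
    when 16*sqrt(eps) < 1.
    """
    s = math.sqrt(eps)
    assert 16.0 * s < 1.0, "bound is vacuous unless 16*sqrt(eps) < 1"
    return (sum(weighted_channel_esq) + 4.0 * binary_entropy(2.0 * s)) / (1.0 - 16.0 * s)

# Hypothetical example: three cross-cut channel uses, eps = 1e-6.
bound = key_rate_upper_bound([0.8, 0.5, 0.3], eps=1e-6)
print(bound)
```

For $\epsilon \to 0$ the entropy correction vanishes and the bound reduces to the bare sum of the channel squashed entanglements, as expected from Eq.~(\ref{eq:main}).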
\section{Introduction} \label{sec:introduction} Deep learning - primarily deep neural networks (DNNs) - has led to major breakthroughs in scientific computing in recent years \cite{jumper2021highly,rolnick2019tackling, lavin2021simulation,khatib2021deep}, such as chemistry \cite{schutt2019unifying}, materials science \cite{nadell2019deep}, and biology \cite{zhavoronkov2019deep}. Given its significance, it has emerged as a major area of research in the machine learning community \cite{lavin2021simulation}, involving a variety of unique technical challenges. One such challenge - shared by many disciplines - is that DNNs require significant quantities of training data \cite{lecun2015deep}. One widely-studied strategy to mitigate this problem is active learning (AL)\cite{ren2021survey, settles2009active}, and we focus here on active learning specifically for DNNs, sometimes referred to as Deep Active Learning (DAL) \cite{roy2018deep}. Broadly speaking, the premise of DAL is that some training instances will lead to greater model performance than others. Therefore, we can improve the training sample efficiency of DNNs by selecting the best training instances. A large number of methods have been investigated in recent years for DAL \cite{settles2009active, ren2021survey}, often reporting significant improvements in sample efficiency compared to simpler strategies, such as random sampling \cite{tsymbalov2018dropout, kading2018active, kee2018query}. While these studies are encouraging and insightful, they nearly always assume that one or more DAL hyperparameters are known prior to deployment (i.e., collection of labeled data). While such assumptions are realistic in many contexts of machine learning, the hyperparameters of many DAL methods require labeled data (see Section \ref{sec:ProblemSetting}) to be optimized, which, by assumption, is not yet available.
It is therefore unclear how these hyperparameters should be set when DAL is applied to new problems, and whether DAL still offers advantages over alternative methods when accounting for hyperparameter uncertainty. Although rarely discussed in the literature, this is a problem users face when applying DAL to new problems in real-world conditions, or ``in the wild'' as it is sometimes described \cite{huang2008labeled}. This problem is especially acute in scientific computing, which most often utilizes DNNs for regression tasks: nearly all DAL methods applicable to regression are \textit{pool-based}, illustrated in Fig.~\ref{img:pool_based}. These methods rely upon selecting unlabeled points (i.e., settings of $x$) to label from a finite pre-existing set, or ``pool''. This is in contrast to query synthesis methods, for example, which consider all possible settings of $x$ (e.g., all possible settings of $N$-dimensional real numbers). As we argue in this work (see Section \ref{sec:ProblemSetting}), all pool-based methods share a common hyperparameter, termed the \textit{pool ratio}, $\gamma$, which cannot be optimized without significant quantities of labeled data, and which (as we show) has a significant impact on the effectiveness of DAL methods. \begin{figure*}[h!] \begin{center} \centerline{\includegraphics[width=\linewidth]{imgs/Pool_based_AL_subsample_from_big_pool_0124_two_row.png}} \caption{Schematic diagram of the pool-based deep active learning procedure for scientific computing. In the input space $X$, the triangles represent labeled data ($L$), and the circles represent unlabeled data ($D$ for the full set of possibly infinite unlabeled $x$, $U$ for the current unlabeled pool). At each step, after the model is trained using the existing training set $L$, a subset of unlabeled data $U$ is sampled and evaluated by the AL criterion $q(x)$.
The top-$k$ points according to $q(x)$ are then labeled by the oracle function.} \label{img:pool_based} \end{center} \end{figure*} \subsection{Contributions of this work} \label{subsec:contributions_of_this_work} In this work we investigate the effectiveness of DAL methods in scientific computing applications, \textit{in the wild}. Specifically, we investigate DAL performance when the optimal setting of $\gamma$ is \textit{not} known in advance. Although many DAL methods have additional hyperparameters that cannot be optimized a priori, we focus on $\gamma$ because it is shared by nearly all DAL methods that are applicable to scientific computing (i.e., regression problems). To support our investigation, we assembled eight scientific computing problems to examine the performance of DAL methods in this setting; to our knowledge, this is the first such benchmark of its kind. We then identified past and recent DAL methods that are suitable for scientific computing problems (i.e., regression), totaling ten methods. Our collection of benchmark methods encompasses many DAL methods that are employed beyond scientific computing, making our findings relevant to the broader active learning community as well. We examine the performance of our DAL methods on each of our eight benchmark problems, compared to simple random sampling, and also as we vary their $\gamma$ setting. Our results indicate that their performance varies significantly with respect to $\gamma$ within our range, and that no single $\gamma$ setting works well across all problems. To characterize the real-world performance of the DAL models, we also examined their performance under three key conditions: (i) when $\gamma$ is set to the optimal value in each problem, (ii) the worst value in each problem, and (iii) when we choose the best overall setting across all eight problems (i.e., each problem gets the same setting of $\gamma$, per method).
Although many models often perform worse than random sampling, the results indicate that some methods consistently outperform random sampling, even with a poor $\gamma$ setting. We now summarize our contributions: \begin{itemize} \item We develop the first large benchmark for DAL in scientific computing (e.g., regression problems), involving ten state-of-the-art DAL methods and eight datasets. We publish the datasets and code to facilitate reproducibility. \item Using our benchmark, we perform the first analysis of DAL performance \textit{in the wild}. We highlight the rarely-discussed problem that some DAL hyperparameters cannot be known prior to model deployment, such as the pool ratio ($\gamma$), and that existing DAL models may be sensitive to them. We investigate the performance of our DAL methods when $\gamma$ is not assumed known, and we find in this setting that many DAL models often perform no better than random sampling. Crucially, we find that some DAL methods still consistently outperform random sampling. \item We analyze the factors that contribute to the robustness of DAL. Our results suggest that simply maximizing the diversity of sampled data in $x$-space is the most reliable way to achieve robustness. \textit{This is a surprising finding, suggesting that existing uncertainty-based DAL models do not robustly leverage information from the learner - a premise of many AL models}. \end{itemize} \section{Related works}\label{sec:related_work} \textbf{Active learning benchmarks.} The majority of the existing AL benchmarks are for classification tasks, rather than regression, and many AL methods for classification cannot be applied to regression, making them unsuitable for most scientific computing applications. Some existing studies include \cite{zhan2021comparative}, which benchmarked AL using a Support Vector Machine (SVM) with 17 AL methods on 35 datasets. \cite{yang2018benchmark} benchmarked logistic regression with ten AL methods and 44 datasets.
\cite{meduri2020comprehensive} benchmarked a specific entity matching application (classification) of AL with three AL methods on ten datasets, with three different types of classifiers (DNN, SVM, and Tree-based). \cite{trittenbach2021overview} benchmarked an AL application in outlier detection on 20 datasets and discussed the limitations of simple metrics extensively. \cite{hu2021towards} benchmarked five classification tasks (including both image and text) using DNNs. \cite{beck2021effective} benchmarked multiple facets of DAL on five image classification tasks. For regression AL benchmarks, \cite{o2017model} benchmarked five AL methods on seven UCI\footnote{University of California Irvine Machine Learning Repository} datasets, but only employed linear models. \cite{wu2019active} compared five AL methods on 12 UCI regression datasets, also using linear regression models. Our work is fundamentally different from both, as we use DNNs as our regressors, and we employ several recently-published scientific computing problems that also involved DNN regressors, making them especially relevant for DAL study. \textbf{Active learning for regression problems.} As discussed, scientific computing primarily involves regression problems, which have received (relatively) little attention compared to classification \cite{ren2021survey, guyon2011results}. Among the limited AL literature dedicated to regression tasks, Expected Model Change (EMC) \cite{settles2008curious} was explored, where an ensemble of models was used \cite{cai2013maximizing} to estimate the true label of a new query point using both linear regression and tree-based regressors. Gaussian processes were also used with a natural variance estimate on unlabeled points in a similar paradigm \cite{kading2018active}. \cite{smith2018less} used Query By Committee (QBC), which trains multiple networks and selects the unlabeled points on which the committee of trained models disagrees the most.
\cite{tsymbalov2018dropout} used Monte Carlo drop-out in a Bayesian setting, also targeting the points with maximal disagreement. \cite{yu2010passive} found that $x$-space-only methods outperform $y$-space methods in robustness. \cite{yoo2019learning} proposed an uncertainty-based mechanism that learns to predict the loss using an auxiliary model, which can be used on regression tasks. \cite{ranganathan2020deep} and \cite{kading2016active} used Expected Model Output Change (EMOC) with Convolutional Neural Networks (CNNs) on image regression tasks with different assumptions. We have included all of the above-mentioned methods that use deep learning in our benchmark. \textbf{DAL in the wild.} To our knowledge, all empirical studies of pool-based DAL methods assume that an effective pool ratio hyperparameter, $\gamma$, is known a priori. While the majority of prior work treats the original training set as the fixed, unlabeled pool, \cite{yoo2019learning} explicitly mentioned that their method works with a subset of 10k instances instead of the full remaining unlabeled set, and \cite{beluch2018power} also mentioned subsampling to create the pool $U$ (and hence $\gamma$). In real-world settings - termed \textit{in the wild} - we are not aware of any method to set $\gamma$ a priori, and there has been no study of DAL methods under this setting; therefore, to our knowledge, ours is the first such study. \section{Problem Setting} \label{sec:ProblemSetting} In this work, we focus on DAL for regression problems, which comprise the majority of scientific computing problems involving DNNs. As discussed in Sec. \ref{sec:introduction}, nearly all DAL methods for regression are pool-based, which is one of the three major paradigms of AL (along with stream-based and query synthesis) \cite{settles2009active}. \textbf{Formal description.} Let $L^i = (X^{i}, Y^{i})$ be the dataset used to train a regression model at the $i^{th}$ iteration of active learning.
We assume access to some oracle (e.g., a computational simulator for scientific computing problems), denoted $f : \mathcal{X} \rightarrow \mathcal{Y}$, that can accurately produce the target values, $y \in \mathcal{Y}$, associated with input values $x \in \mathcal{X}$. Since we focus on DAL, we assume a DNN as our regression model, denoted $\hat{f}$. We assume that some relatively small number $N_{0}$ of labeled training instances are available to initially train $\hat{f}$, denoted $L^0$. In each iteration of DAL, we must select $k$ query instances $x \in \mathcal{X}$ to be labeled by the oracle, yielding a set of labeled instances, denoted $Q$, that is added to the training dataset. Our goal is then to choose $Q$ so as to maximize the performance of the DNN-based regression models over unseen test data at each iteration of active learning. \textbf{Pool-based Deep Active Learning.} General pool-based DAL methods assume that we have some pool $U$ of $N_{U}$ unlabeled instances from which we can choose the $k$ instances to label. Most pool-based methods rely upon some acquisition function $q: \mathcal{X} \rightarrow \mathbb{R}$ to assign a scalar value to each $x \in U$ indicating its ``informativeness'', or utility for training $\hat{f}$. In each iteration of active learning, $q$ is used to evaluate all instances in $U$, and the top $k$ are chosen to be labeled and included in $L$. This general algorithm is outlined in Algorithm \ref{alg:PoolBasedAL}.
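As a concrete illustration of this generic loop, the following Python sketch implements pool-based selection. The names \texttt{oracle}, \texttt{fit}, \texttt{acquisition}, and \texttt{sample\_pool} are hypothetical stand-ins for the problem-specific components; this is an illustrative sketch, not our released benchmark code.

```python
import numpy as np

def pool_based_al(oracle, fit, acquisition, x_init, y_init,
                  sample_pool, k=40, gamma=16, n_steps=5, rng=None):
    """Generic pool-based active learning loop (illustrative sketch).

    oracle:      labels inputs, y = oracle(x)                  (assumed given)
    fit:         trains a model on (X, Y) and returns it       (hypothetical)
    acquisition: q(model, pool) -> one score per pool point    (hypothetical)
    sample_pool: draws N_U = k * gamma unlabeled candidates    (hypothetical)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    X, Y = np.asarray(x_init), np.asarray(y_init)
    for _ in range(n_steps):
        model = fit(X, Y)                      # train on current labeled set L^i
        pool = sample_pool(k * gamma, rng)     # fresh pool U of size N_U
        scores = np.asarray(acquisition(model, pool))  # q(x) for every x in U
        top_k = np.argsort(scores)[-k:]        # highest-scoring candidates
        Q = pool[top_k]                        # query batch to be labeled
        X = np.concatenate([X, Q])             # L^{i+1} = L^i union Q
        Y = np.concatenate([Y, oracle(Q)])
    return X, Y
```

Note how the pool size $N_U = k\gamma$ couples the step size $k$ and the pool ratio $\gamma$.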
\begin{algorithm}[h] \caption{Generic pool-based active learning algorithm} \label{alg:PoolBasedAL} \begin{algorithmic} \STATE {\bfseries Input:} Initial labeled training set $L^{0}$ of size $N_{0}$, step size $k$, AL criteria function $q$, number of steps $I$ \FOR{$i=0$ {\bfseries to} $I$} \STATE Train DNN-based model(s) using training set $L^{i}$ \STATE Create $U$ by sampling $N_{U}$ instances $x \in \mathcal{X}$ \STATE Calculate $q(x) \; \forall x \in U$ \STATE Create $Q$ by labeling the top $k$ points in $U$ ranked by $q(x)$ \STATE $L^{i+1} = L^{i} \cup Q$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=0.7\linewidth]{imgs/QBC_1by2_1224.png}} \caption{Schematic diagram of pool-based DAL for an uncertainty-based mechanism. $q(x)$ is the acquisition metric. Panels (a) and (b) show two scenarios in which the pool ratio ($\gamma$) is too small ($\gamma=4$) or too large ($\gamma=32$), with step size $k=2$. } \label{img:pool_ratio_schematic} \end{center} \end{figure} \textbf{The pool ratio hyperparameter, $\gamma$.} We define the \textit{pool ratio} as $\gamma = N_{U}/k$. By definition, $N_{U}$ and $k$ are hyperparameters of pool-based problems, and therefore so is $\gamma$. While one could, in principle, vary $N_{U}$ and $k$ independently, this is not often done in practice. Typically, $k$ is set as small as possible, limited by computational resources. This leaves $N_{U}$ as the major free hyperparameter; however, prior research has found that its impact depends strongly upon its size relative to $k$ \cite{kee2018query, tsymbalov2018dropout, kading2018active}, encoded in $\gamma$.
Given a fixed value of $k$, a larger value of $N_{U}$ can lead to the discovery of points with larger values of $q(x)$ because the input space is sampled more densely; however, larger $N_{U}$ also tends to increase the similarity of the selected points, so that they provide the same information to the model - a problem sometimes called mode collapse \cite{burbidge2007active, ren2021survey, kee2018query}. In the limit as $N_{U} \rightarrow \infty$, all of the $k$ selected query points will be located near the same $x \in \mathcal{X}$ that has the highest value of $q(x)$. This tradeoff is illustrated in Fig. \ref{img:pool_ratio_schematic}(a-b) for a simple problem. In most real-world settings, there is a substantial quantity of unlabeled data (often infinite), and the user has the freedom (or burden) of choosing a suitable $\gamma$ setting for their problem by varying the size of $U$. Crucially, and as we show in our experiments, choosing a sub-optimal $\gamma$ value can result in poorer performance than naive random sampling. This would not necessarily be a problem if either (i) one $\gamma$ setting worked across most problems or, alternatively, (ii) $\gamma$ could be optimized on new problems without using labels. To the best of our knowledge, there is no method for optimizing $\gamma$ on a new problem without running multiple trials of AL to find the best one (i.e., collecting labels), defeating the purpose of AL in real-world settings. Furthermore, the value of $\gamma$ varies widely across the literature, suggesting that suitable settings for $\gamma$ indeed vary across problems (see supplement for a list). \section{Benchmark Regression Problems} \label{sec:benchmark_problems} \begin{table}[h] \caption{Benchmark datasets dimensionality and oracle functions. $Dim_{x}$ and $Dim_{y}$ are the dimensionality of $x$ and $y$.
Note that the ODE solutions are implemented in the form of analytical functions as well.} \label{tbl:benchmark_dataset} \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccc} \toprule Data & $Dim_{x}$ & $Dim_{y}$ & (Proxy) Oracle\\ \midrule Sine & 1 & 1 & Analytical \\ Robo& 4 & 2 & Analytical \\ Stack & 5 & 201 & Numerical simulator\\ ADM & 14 & 2000 & Neural simulator\\ Foil & 5 & 1 & Random Forest\\ Hydr & 6 & 1 & Random Forest\\ Bess & 2 & 1 & ODE solution \\ Damp & 3 & 100 & ODE solution \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \end{table} We propose eight regression problems to include in our public DAL regression benchmark: two simple toy problems (SINE, ROBO), four contemporary problems from recent publications in diverse fields of science and engineering (STACK, ADM, FOIL, HYDR), and two problems solving ordinary differential equations (BESS, DAMP), which are also prevalent in engineering. Scientific computing problems vary substantially in their dimensionality (see \cite{lavin2021simulation,takamotopdebench,deng2021benchmarking} for varying examples). We chose relatively lower-dimensional problems because they are still common in the literature, while facilitating larger-scale experimentation and reproducibility by others. Specifically, for each of our problems there were sufficiently large quantities of labeled data to explore a wide variety of pool ratios, which often is not feasible in higher-dimensional problems. We suggest studies with higher-dimensional problems as an important opportunity for future work, especially since sensitivity to pool ratio has been noted in that setting as well \cite{yoo2019learning, sener2017active}. \textbf{1D sine wave (SINE).} A noiseless 1-dimensional sinusoid with smoothly-varying frequency. \textbf{2D robotic arm (ROBO)} \cite{ren2020benchmarking}. In this problem we aim to predict the 2-D spatial location of the endpoint of a robotic arm based upon its joint angles, $x$.
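As an illustration of the toy problems, a SINE-style oracle might be sketched as follows. The exact functional form of the benchmark's sinusoid is an assumption here; we use a simple chirp whose frequency grows smoothly with $x$.

```python
import numpy as np

def sine_oracle(x):
    """Hypothetical stand-in for the SINE oracle: a noiseless 1-D
    sinusoid whose frequency varies smoothly with x. The exact
    functional form used in the benchmark is an assumption here."""
    x = np.asarray(x, dtype=float)
    # chirp: instantaneous frequency rises linearly with x
    return np.sin(2.0 * np.pi * (1.0 + 4.0 * x) * x)
```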
\textbf{Stacked material (STACK)} \cite{Chen2019}. The goal is to predict the 201-D reflection spectrum of a material based on the thicknesses of its five layers. \textbf{Artificial Dielectric Material (ADM)} \cite{deng2021neural}. The goal is to predict the 2000-D reflection spectrum of a material based on its 14-D geometric structure. Full-wave electromagnetic simulations were utilized in the original work \cite{deng2021benchmarking} to label data, requiring 1-2 minutes per input point. \textbf{NASA Airfoil (FOIL)} \cite{Dua:2019}. The goal is to predict the sound pressure of an airfoil based on the structural properties of the foil, such as its angle of attack and chord length. This problem was published by NASA \cite{brooks1989airfoil}, and the instance labels were obtained from a series of real-world aerodynamic tests in an anechoic wind tunnel. It has been used in other AL literature \cite{wu2018pool, liu2020unsupervised}. \textbf{Hydrodynamics (HYDR)} \cite{Dua:2019}. The goal is to predict the residual resistance of a yacht hull in water based on its shape. This problem was published by the Technical University of Delft, and the instance labels were obtained from real-world experiments using a model yacht hull in water. It is also referred to as the ``Yacht'' dataset in some AL literature \cite{wu2019active, cai2013maximizing}. \textbf{Bessel function (BESS)}. The goal is to predict the value of the solution to Bessel's differential equation, a second-order ordinary differential equation that is common in many engineering problems. The inputs are the function order $\alpha$ and the input position $x$. The order $\alpha$ is limited to non-negative integers below 10. \textbf{Damping Oscillator (DAMP)}. The goal is to predict the full-swing trajectory of a damped oscillator over the first 100 time steps, obtained as the solution to a second-order ordinary differential equation. The inputs are the magnitude, damping coefficient, and frequency of the oscillation.
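For the ODE-based problems, the oracle has a closed-form solution. A hypothetical DAMP-style oracle, assuming the standard underdamped form $y(t) = A\,e^{-ct}\cos(2\pi f t)$ (the benchmark's exact parameterization may differ), could look like:

```python
import numpy as np

def damp_oracle(x, n_steps=100):
    """Hypothetical stand-in for the DAMP oracle: a 100-step trajectory
    of a damped oscillation y(t) = A * exp(-c*t) * cos(2*pi*f*t).
    Inputs x = (A, c, f): magnitude, damping coefficient, frequency.
    The exact parameterization in the benchmark is an assumption."""
    A, c, f = x
    t = np.linspace(0.0, 1.0, n_steps)
    return A * np.exp(-c * t) * np.cos(2.0 * np.pi * f * t)
```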
\begin{table*}[t!] \caption{List of benchmarked methods. $L$ is the labeled set, $Q$ is the set of already-selected query points, $dist$ is the L2 distance, $\hat{f}(x)$ is the model estimate of $x$, $f(x)$ is the oracle label of $x$, $\mu(x)$ is the average of the ensemble model outputs, $N$ is the number of models in the model ensemble, $N_k$ is the set of k-nearest-neighbors, $sim$ is cosine similarity, $\phi$ is the current model parameter, $\phi'$ is the updated parameter, $\mathcal{L}(\phi;(x',y'))$ is the loss of the model with parameter $\phi$ on new labeled data $(x', y')$, and $f_{loss}(x)$ is the auxiliary model that predicts the relative loss.} \label{tbl:benchmark_method} \begin{center} \begin{small} \begin{sc} \begin{tabular}{cccc} \toprule Method & Abbreviation & Implementation used & Acquisition function (q) \\ \midrule Core-set & GSx & \cite{sener2017active} & $ \displaystyle \min_{x\in \mathcal{L} \cup \mathcal{Q}} dist(x^*, x) $\\ Greedy sampling in y & GSy & \cite{wu2019active} & $\displaystyle \min_{y\in \mathcal{L} \cup \mathcal{Q}} dist(\hat{f}(x^*), y)$\\ Improved greedy sampling & GSxy & \cite{wu2019active} & $\displaystyle \min_{(x,y)\in \mathcal{L} \cup \mathcal{Q}} dist(x^*, x)*dist(\hat{f}(x^*), y)$\\ Query by committee & QBC & \cite{kee2018query} & $\displaystyle \frac{1}{N}\sum^N_{n=1}(\hat{f}_n(x)-\mu(x))^2$ \\ \multirow{2}{*}{QBC with diversity} & \multirow{2}{*}{QBCDiv} & \multirow{2}{*}{\cite{kee2018query}} & $\displaystyle q_{QBC}(x) + q_{div}(x) $ \\ &&& $\displaystyle (q_{div}(x^*) = q_{GSx}(x^*) )$ \\ QBC with diversity & \multirow{2}{*}{QBCDivDen} & \multirow{2}{*}{\cite{kee2018query}} & $\displaystyle q_{QBC}(x) + q_{div}(x) + q_{den}(x)$ \\ and density &&& \small $\displaystyle (q_{den}(x^*) = \dfrac{1}{k} \sum_{x\in N_k(x^*)} sim(x^*, x) )$ \\ Bayesian by disagreement & BALD & \cite{tsymbalov2018dropout} & $\displaystyle q_{QBC}(x)$ \\ Expected model output & \multirow{2}{*}{EMOC} & \multirow{2}{*}{\cite{ranganathan2020deep}} & $\displaystyle \mathbb{E}_{y'|x'}
\mathbb{E}_{x} || \hat{f}(x; \phi') - \hat{f}(x; \phi)||_1 $ \\ change & && $\displaystyle \approx \mathbb{E}_{x} || \nabla_{\phi} \hat{f}(x; \phi) * \nabla_{\phi} \mathcal{L}(\phi; (x', y'))||_1$ \\ Learning Loss & - & \cite{yoo2019learning} & $\displaystyle f_{loss}(x)$ \\ Real loss & MSE & - & $MSE(\hat{f}(x), f(x))$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \end{table*} \section{Benchmark Active Learning Methods} \label{sec:baseline_active_learning_methods} From the literature we identified ten AL methods that are applicable to (i) regression problems with (ii) DNN-based regressors, making them suitable for our scientific computing problems. Due to space constraints, we list each method in Table \ref{tbl:benchmark_method} along with key details, and we refer readers to the Supplementary Material for full details. Some of the methods have unique hyperparameters that must be set by the user. In these cases, we adopt the hyperparameter settings suggested by the methods' authors, shown in Table \ref{tbl:benchmark_method}. Upon publication, we will release software for all of these methods to support future benchmarking. Note that the last method in our benchmark, ``MSE'', is not an applicable method in real-life scenarios, as having the oracle function's labels defeats the purpose of active learning. The purpose of including such a method is to provide an empirical upper bound on the performance of uncertainty-sampling DALs that use proxy losses to sample the ``low-performance region'' of the input space. \section{Benchmark Experiment Design} \label{sec:exp_design} In our experiments, we compare ten state-of-the-art DAL methods on eight different scientific computing problems. We evaluate the performance of our DAL methods as a function of $\gamma$ on each of our benchmark problems, with $\gamma \in \{2,4,8,16,32,64\}$ (i.e., at each step we sample our pool $U$ with $k*\gamma$ points).
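As an example of the acquisition functions in Table \ref{tbl:benchmark_method}, the following sketch implements the GSx (core-set) criterion with greedy batch selection. It is an illustrative reimplementation of the table's formula, not the authors' released implementation.

```python
import numpy as np

def q_gsx(pool, labeled, queried):
    """GSx / core-set acquisition: q(x*) = min distance from x* to any
    point in L union Q (labeled set plus already-selected queries)."""
    ref = np.concatenate([labeled, queried], axis=0)
    d = np.linalg.norm(pool[:, None, :] - ref[None, :, :], axis=-1)
    return d.min(axis=1)

def select_batch_gsx(pool, labeled, k):
    """Greedy batch selection: pick the pool point farthest from L union Q,
    append it to Q, and repeat k times so the batch itself stays diverse."""
    queried = np.empty((0, pool.shape[1]))
    picked = []
    for _ in range(k):
        scores = q_gsx(pool, labeled, queried)
        if picked:
            scores[picked] = -np.inf    # never re-pick the same pool point
        i = int(np.argmax(scores))
        picked.append(i)
        queried = np.vstack([queried, pool[i]])
    return picked
```

Because each pick is appended to $Q$ before the next scoring pass, the selected batch itself stays spread out in $x$-space.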
Following convention \cite{kee2018query,tsymbalov2018dropout}, we assume a small training dataset, $L^{0}$, is available at the outset of active learning, consisting of $N_{0} = 80$ randomly sampled training instances. We then run each DAL model for 50 AL steps, each step identifying $k=40$ points to be labeled from a fresh, randomly generated pool of size $k*\gamma$. For each benchmark problem, we assume an appropriate neural network architecture is known a priori. Each experiment (i.e., each combination of dataset, DAL model, and $\gamma$ value) is run 5 times to account for randomness. The performance (MSE) is calculated over a set of 4000 test points that are uniformly sampled within the $x$-space boundary. We must train a regression model for each combination of problem and DAL method. Because some DAL methods require an ensemble model (e.g., QBC), we use an ensemble of 10 DNNs as the regressor for all of our DAL algorithms (except for the ADM problem, where the ensemble size is set to 5 due to GPU memory limits). More details of the models used and the training procedures can be found in the supplementary materials. Due to space constraints, we summarize DAL performance by the widely-used area under the curve (AUC) of the error plot \cite{tsymbalov2018dropout, liu2020unsupervised, wu2019active}. We report the full MSE vs. number-of-labeled-points plots in the supplementary material. For the AUC calculation, we used \texttt{sklearn.metrics.auc} \cite{scikit-learn} and then normalized by the corresponding AUC of the random sampling method. All results below are given in units of the normalized AUC of the MSE ($nAUC_{MSE}$). \section{Experimental Results} \label{sec:result} \begin{figure*}[t!] \begin{center} \vskip -0.1in \centerline{\includegraphics[width=\linewidth]{imgs/agg_mid_bar_plot.png}} \vskip -0.1in \caption{The performance of each DAL method (x-axis) in terms of $nAUC_{MSE}$ (y-axis).
For each DAL method, we report a bar indicating the \textit{range} of $nAUC_{MSE}$ values obtained as we vary the pool ratio, $\gamma \in \{2,4,...,64\}$; for a given DAL method, we report one bar for each of the eight benchmark problems, indicated by a unique color in the legend. Each bar is bisected by a solid black line and a magenta line. The black line represents the average $nAUC_{MSE}$ value across all settings of $\gamma$. The magenta line represents the performance using $\gamma_{prior}$ (see Sec. \ref{sec:result} for details). The dashed red line at $nAUC_{MSE}=1$ corresponds to the performance obtained using random sampling. Note that some vertical bars are clipped at the top; this was done intentionally to improve the overall visualization.} \label{img:main_perf} \end{center} \vskip -0.3in \end{figure*} The performance of all ten DAL methods on all eight benchmark datasets is summarized in Fig. \ref{img:main_perf}. The y-axis is the $nAUC_{MSE}$, the x-axis lists the DAL methods of interest, and the color code indicates the benchmark dataset. The horizontal red dashed line represents the performance of random sampling, which by definition is equal to one (see Sec. \ref{sec:exp_design}). Further details about Fig. \ref{img:main_perf} are provided in its caption. We next discuss the results, with a focus on findings that are most relevant to DAL in the wild. \textbf{(i) DALs are sensitive to their pool ratio, $\gamma$.} The results in Fig. \ref{img:main_perf} indicate that \textit{all} of our benchmark DAL methods are sensitive to their setting of $\gamma$ - a central hypothesis of this work. As indicated by the vertical bars in Fig. \ref{img:main_perf}, the $nAUC_{MSE}$ obtained by each DAL method varies substantially with respect to $\gamma$. For most of the DAL methods, there exist settings of $\gamma$ (often many) that cause them to perform worse than random sampling.
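For reference, the $nAUC_{MSE}$ metric used in these comparisons can be sketched as follows. The paper uses \texttt{sklearn.metrics.auc}; the plain trapezoidal rule below computes the same quantity without the dependency.

```python
import numpy as np

def nauc_mse(n_labeled, mse_method, mse_random):
    """Normalized AUC of the MSE-vs-#labels curve (nAUC_MSE).
    Values below 1 mean the DAL method beats random sampling.
    Uses the trapezoidal rule, matching sklearn.metrics.auc."""
    n = np.asarray(n_labeled, dtype=float)

    def trap(y):
        y = np.asarray(y, dtype=float)
        return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(n)))

    return trap(mse_method) / trap(mse_random)
```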
This has significant implications for DAL in the wild since, to our knowledge, there is no general method for estimating a good $\gamma$ setting without first collecting large quantities of labeled data (e.g., to run trials of DAL with different $\gamma$ settings), defeating the purpose of DAL. Furthermore, there is no single setting of $\gamma$ that works well across all of our benchmark problems. In Fig. \ref{img:best_pr_hist} we present a histogram of the best $\gamma$ setting for each DAL method. The results indicate that the best $\gamma$ parameter depends strongly upon both the DAL method being used and the particular benchmark problem. More importantly, for no DAL method does a single setting of $\gamma$ perform best across all benchmark problems. Therefore, in the wild, there is uncertainty regarding the best setting of $\gamma$, and therefore (i) the performance one can expect from DAL methods, and (ii) whether they will perform better than random sampling at all. \textbf{(ii) Do DAL methods outperform random sampling in the wild?} The results indicate that several DAL methods tend to obtain much lower $nAUC_{MSE}$, on average, than random sampling. This includes methods such as GSx, GSxy, GSy, QBC, QBCDiv, and QBCDivDen. The results therefore suggest that these methods are beneficial more often than not, compared to random sampling - an important property. However, as discussed in Sec. \ref{sec:result}(i), all DAL methods exhibit significant performance variance with respect to $\gamma$, as well as the particular problem of interest. Consequently, many of the aforementioned subset of methods can still often perform worse than random sampling. For example, this is the case for QBC, GSy, and QBCDivDen on the SINE problem. In many real-world settings, especially in scientific computing scenarios, the cost of collecting labels can be high, and the risks associated with poor DAL performance may deter its use.
Therefore, another important criterion is performance robustness: do any DAL methods consistently perform better than random sampling, in the wild, when $\gamma$ is unknown? Our results indicate that GSx, GSxy, and QBCDiv always perform at least as well as random sampling, and often substantially better, regardless of the problem or the setting of $\gamma$. Note that all three robust DALs (GSx, GSxy, QBCDiv) have $x$-space diversity in their acquisition functions. As we show in Sec. \ref{sec:mode_collapse}, worse-than-random performance correlates strongly with mode collapse (i.e., lack of $x$-space diversity), and we conclude that diversity in $x$-space is a crucial factor contributing to DAL robustness in the current setting. The only DAL that considers diversity but did not show robustness is QBCDivDen. We attribute this failure in robustness to the lower weight on diversity due to the addition of the density metric. \textbf{(iii) Are some problems inherently not suitable for DAL?} Our results indicate that some problems seem to benefit less from DAL than others. For example, the average performance of our ten DAL methods varies substantially across problems - this is visually apparent in Fig. \ref{img:main_perf}. It is possible that some problems have unique statistical properties that make them ill-suited for \textit{most} DAL methods. Even if we only consider the best-performing DAL method for each benchmark problem, we still see substantial differences in achievable performance across our problems. A notable example is the ADM problem, where the \textit{best-performing} DAL method achieves $nAUC_{MSE} > 0.9$, which is only slightly better than random sampling. By contrast, the best-performing DAL methods for the BESS and DAMP problems achieve $nAUC_{MSE} \approx 0.3$. These results suggest that some problems may have properties that make DAL (and perhaps AL in general) inherently difficult, or unhelpful.
Understanding these properties may reveal useful insights about AL methods and when they should be applied; we propose this as a potential area of future work. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=0.5\linewidth]{imgs/best_pr_hist.png}} \caption{The histogram (frequency) of the best pool ratio values found for each of the DALs. For a particular DAL method, this figure shows the frequency (\% out of 8 datasets) with which a particular pool ratio (x-axis) performs best in terms of the average $nAUC_{MSE}$ metric. } \label{img:best_pr_hist} \end{center} \end{figure} \begin{figure*}[t!] \begin{center} \centerline{\includegraphics[width=\linewidth]{imgs/negative_correlation_robo.png}} \caption{A representative, combined plot of $nAUC_{MSE}$ performance (bottom, y-axis at left, solid) and the collapse metric, nDiv (upper, y-axis at right, more transparent), for each of the ten DAL models at all pool ratios (color coded) for the \textit{robotic arm} dataset (ROBO). Dashed horizontal red lines starting from both $y$-axes represent random sampling's average $nAUC_{MSE}$ and nDiv, both equal to 1. } \label{img:mode_collapse} \end{center} \end{figure*} \subsection{Knowledge about optimal pool ratio} \label{sec:knowledge_about_pool_ratio} The sensitivity of DALs to the pool ratio would not be a problem if practitioners could find a well-performing pool ratio for a given DAL method (the dataset is always considered unknown in AL, as no labeled data is available in advance). Fig. \ref{img:best_pr_hist} shows the histogram of the optimal pool ratio values across the eight datasets. Immediately, we can see that the maximum frequency of the best pool ratio across all ten DALs is 0.5 (at most 4 out of 8 datasets perform best under the same pool ratio), which means there is no dominating pool ratio for any of the benchmarked DAL methods.
Furthermore, if we take the best pool ratio from Fig. \ref{img:best_pr_hist} (the one winning on the most datasets, breaking ties randomly) and apply this pool ratio across all datasets, we get the performance of the ``best overall'' pool ratio (the magenta line in the bars) shown in Fig. \ref{img:main_perf}. Nearly all magenta lines are better than the black lines, showing the value of having good prior benchmark knowledge about a wise choice of pool ratio. However, the robustness issue is still present, as only the MSE method became robust with the extra knowledge. \subsection{Mode collapse analysis} \label{sec:mode_collapse} One hypothesis for the failure of DALs is mode collapse due to lack of diversity. We calculate the diversity metric as the average nearest-neighbor distance within each queried batch, \begin{equation*} Div = \dfrac{1}{|T|} \sum_{t=1}^{|T|} \dfrac{1}{k} \sum_{i=1}^{k} \min_{x^* \in \mathcal{Q}^{t},\, x^* \neq x^{i}} dist(x^*, x^{i}), \end{equation*} where $\mathcal{Q}^t$ represents the queried batch at active learning step $t$ and $|T|=50$ is the total number of active learning steps. Note that this metric is similar to, but not a simple average of, $q_{GSx}(x)$, as $Div$ only focuses on per-batch diversity and does not take the labeled set into consideration. It is further normalized ($nDiv$) by the value for random sampling, for each dataset separately. The lower this metric's value, the more severe the mode collapse issue. The $nDiv$ is plotted in the top half of Fig. \ref{img:mode_collapse} using the inverted right y-axis. For the obvious failure cases (BALD, EMOC, and Learning Loss) in this particular dataset (their $nAUC_{MSE}$ exceeds 1), a clear trend of mode collapse can be observed in the upper half of the plot (nDiv much lower than 1). Meanwhile, a strong correlation between the pool ratio and the diversity metric can be observed: (i) For all GS methods, as they seek to maximize diversity, their diversity increases monotonically with larger pool ratios.
(ii) For uncertainty-based methods (BALD, EMOC, LearningLoss, QBC, MSE), as they seek to maximize uncertainty, their diversity decreases monotonically with larger pool ratios. (iii) For combined methods like QBCDiv and QBCDivDen, the relationship between the pool ratio and diversity does not show a strong correlation. \section{Conclusion} \label{sec:conclusions} In conclusion, for the first time in the literature, we benchmarked ten state-of-the-art deep active learning methods on eight benchmark datasets for scientific computing regression scenarios. Our study sheds light on an important and surprising discovery: many pool-based DAL methods are not robust compared to random sampling in scientific computing scenarios where no pre-defined pool is given. Furthermore, we showed that a well-performing value of the pool ratio is problem-dependent and hence hard to obtain in advance. We also analyzed the failure modes of the DALs, discovered a strong correlation between lack of diversity ($x$-space mode collapse) and lack of robustness, and suggest that in scientific computing scenarios practitioners should employ $x$-space diversity in their DAL methods to curb such mode collapse.
\section{\label{sec:Intro} Introduction} {\emph{Introduction.}} Single-walled carbon nanotubes ({SWCNTs}) are quasi-one-dimensional materials possessing extraordinary electrical, mechanical, and optoelectronic properties~\cite{RevModPhys.79.677}. They can be metallic, small-gap semiconducting, or semiconducting~\cite{ouyang2001energy, CaoJien2003}. The geometry of a SWCNT can be described by the tube's chiral vector, which is defined by a pair of integers $(n, m)$~\cite{RevModPhys.79.677}. The band structure of a metallic SWCNT shows a nearly linear dispersion relation in the low-energy range around the Fermi level. Optical transitions are forbidden in this low-energy range under an external electric field along the tube axis~\cite{Ando2005}. When the CNTs are stimulated by electrons or photons, significant light emission can occur due to transitions between pairs of van Hove singularities that are mirror symmetric with respect to the Fermi level. Electroluminescence (EL) and photoluminescence (PL) from SWCNTs have been studied extensively in semiconducting CNTs experimentally~\cite{misewich2003electrically, chen2005bright, PhysRevLett.98.167406, avouris2008carbon}. In contrast, observations of EL from metallic SWCNTs are much less frequent~\cite{mann2007electrically, xie2009electroluminescence, essig2010phonon}. Electroluminescence from suspended metallic SWCNTs is explained by Joule heating~\cite{mann2007electrically}, and the emission spectrum differs from the black-body-like emission discovered in nanotube bundles and multiwalled CNTs~\cite{Sveningsson2002, LiPeng2003, Jinquan2004}. The important role of phonons in light emission has been stressed in recent experiments, where a side peak close to the main transition peak, due to phonon-assisted emission, appears in the radiation spectrum~\cite{xie2009electroluminescence, essig2010phonon}.
Defects in CNTs are widely studied, and they have been shown to have significant influences on various properties of CNTs, such as electric and magnetic properties~\cite{shtogun2009electronic, Partovi_Azar_2011, Jianhua2011}, transport properties~\cite{PhysRevB.54.2600, PhysRevLett.84.2917, Neophytou2007, TEICHERT201749}, field emission~\cite{WeiGu2006}, and mechanical and optical properties~\cite{BUONGIORNONARDELLI20001703, PhysRevB.70.245416, SHARMA20123373, Harutyunyan2009, Jinglin2019}. Common atomic-scale defects in CNTs are vacancies, adatoms, and Stone-Wales (SW) reconstructions~\cite{RevModPhys.79.677, fan2005identifying}. Recent experiments have shown that defects can be engineered to tune the optical properties of CNTs~\cite{brozena2019controlling}, such as enhancing the PL and tuning single-photon emission via $sp^{3}$ defects~\cite{piao2013brightening, he2017tunable, Ishii2018}. In contrast, the influence of defects on EL from CNTs is challenging to study and has received less attention~\cite{coratger2001stm, UEMURA2006L15, Katano2018}. A recent experiment showed that a local defect could be induced by injecting tunneling electrons from a scanning tunneling microscope (STM) tip into a multiwalled CNT, and corresponding changes in EL due to the defect were observed~\cite{Katano2018}. Theoretically, bias-induced light emission from nanoscale systems has received much attention in recent years~\cite{galperin2012, Jingtao2013, Kaasbjerg2015, Miwa2019, Parzefall_2019, Ridley2021}, especially in molecular junctions. However, few works have taken into account the full geometry of the system at the atomic-scale level. Quantitative calculations of EL from metallic SWCNTs that also take into account the influence of defects would be helpful to related experiments. In this work, we consider a two-terminal device to study the EL from the conducting channels of metallic SWCNTs under the influence of the single vacancy (SV) defect and the single SW defect.
We consider electron transport in the ballistic regime, and electron-phonon interaction is not included. This is reasonable because the length of the conducting channel considered here is much smaller than the electron mean free path of a metallic SWCNT, which is about several micrometers~\cite{PhysRevLett.98.186808}. By turning off the applied bias in the device, we also study the thermal radiation from perfect and defected CNTs. \begin{figure} \centering \includegraphics[width=8 cm]{fig1.png} \caption{Illustration of a two-terminal transport device of a SWCNT. (a) The left (L) and right (R) leads are semi-infinite extensions of the pristine carbon nanotube with diameter $d$, and the central region is the conducting channel with a finite length $l$. The interaction of electrons with the EM field is included only in the central region. The driving current in the central channel induces light emission. (b) A single vacancy defect in the channel, with the missing atom denoted by a blue ring. (c) A Stone-Wales defect in the channel. The structures of the nanotubes are drawn using VESTA 3~\cite{Momma2011}. } \label{fig:MD} \end{figure} {\emph{Theory.}} We describe the Hamiltonian of the electrons in the CNT using the nearest-neighbor (NN) tight-binding (TB) model \begin{equation} \label{eq:H0} H_{0} = - \sum_{\langle ij \rangle} t_{ij} c_{i}^{\dag} c_{j}, \end{equation} where $t_{ij}$ is the hopping parameter, $c_{i}^{\dag}$ ($c_{j}$) is the electron creation (annihilation) operator on site $i$ (site $j$), and the angular bracket $\langle ij \rangle$ denotes NN sites. We introduce the coupling of the electrons with the electromagnetic field in free space via the Peierls substitution, i.e., substituting in Eq.~(\ref{eq:H0}) $t_{ij} \rightarrow t_{ij} e^{i \theta_{ij}}$, with the phase factor $\theta_{ij} = \frac{e}{\hbar} \int_{\bm{r}_{j}}^{\bm{r}_{i}} \bm{A} \bm{\cdot} d \bm{l}$. 
Here, $e = - |e|$ is the electron charge, $\hbar$ is the reduced Planck constant, and $\bm{A}$ is the vector potential describing the free-space electromagnetic field. The coupling of the electrons with the electromagnetic field in the lowest-order approximation is obtained by expanding $\theta_{ij}$ in terms of $\bm{A}$ to the linear term, giving \begin{equation} \label{eq:Hint} H_{\textrm{int}} = \sum_{\langle ij \rangle} \sum_{k} \sum_{\mu = x, y,z} M_{i j}^{k \mu} c_{i}^{\dag} c_{j} A_{\mu}(\bm{r}_{k}). \end{equation} Here, the electron-photon coupling matrix is $M_{i j }^{k \mu} = i \frac{e}{2\hbar} t_{ij} (\bm{r}_{i} - \bm{r}_{j})_{\mu} (\delta_{ki} + \delta_{kj})$. The EL from a SWCNT is studied using a typical two-terminal device under a bias voltage, as shown in Fig.~{\ref{fig:MD}}(a). The device consists of three parts. The left and right leads are semi-infinite extensions of the pristine CNT. The central part is the conducting channel, which has a finite length. We take into account the interaction of electrons with the EM field only in the central region. Upon applying a bias voltage, an electric current flows through the channel, and photons are excited and emitted due to the inelastic scattering of electrons interacting with the electromagnetic field. The cases in which the channel contains a SV defect or a single SW defect are shown in Fig.~{\ref{fig:MD}}(b) and Fig.~{\ref{fig:MD}}(c), respectively. The SV defect is modeled by using a large onsite energy of $10^{6}$ eV for the vacant atom, and the SW defect is formed by rotating a C-C bond by 90 degrees. The effects of structural relaxation due to the defects are not considered in this paper. Also, we take the NN hopping parameter to be a constant, $t_{ij} = t$. Radiation from the device is calculated using the nonequilibrium Green's function (NEGF) method based on our previous work~\cite{Wang2008,Wang2014,Zhang2020Angular,Zhang2020Farfield}. 
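To make the tight-binding setup and the coupling matrix $M_{ij}^{k\mu}$ concrete, the following sketch builds both for a hypothetical short one-dimensional chain of sites (the chain geometry and site count are illustrative assumptions, not the nanotube geometry used here; the hopping $t$ and bond length follow the values quoted in the text):

```python
import numpy as np

# Illustrative parameters (the paper treats armchair nanotubes, not a chain)
t = 2.7                  # NN hopping (eV)
a = 1.42e-10             # C-C bond length (m)
nsites = 6               # toy chain length (assumption)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
e = -1.602176634e-19     # electron charge, e = -|e| (C)

# Positions of a toy 1D chain along z
r = np.zeros((nsites, 3))
r[:, 2] = a * np.arange(nsites)

# NN tight-binding Hamiltonian H0 = -sum_<ij> t c_i^+ c_j
H0 = np.zeros((nsites, nsites))
for i in range(nsites - 1):
    H0[i, i + 1] = H0[i + 1, i] = -t

# Coupling M^{k mu}_{ij} = i e/(2 hbar) t_ij (r_i - r_j)_mu (delta_ki + delta_kj)
M = np.zeros((nsites, 3, nsites, nsites), dtype=complex)  # indices [k, mu, i, j]
for i in range(nsites):
    for j in range(nsites):
        if abs(H0[i, j]) > 0:              # only NN pairs couple
            for mu in range(3):
                val = 1j * e / (2 * hbar) * t * (r[i, mu] - r[j, mu])
                M[i, mu, i, j] += val      # delta_{ki} term
                M[j, mu, i, j] += val      # delta_{kj} term
```

Since $t_{ij}$ and the positions are real, each matrix $M^{k\mu}$ comes out Hermitian, as required for $H_{\textrm{int}}$ to be Hermitian for a real field $A_\mu(\bm{r}_k)$.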
The important quantity is the local current-current correlation function. Its lesser component can be expressed in the random phase approximation as \begin{equation} \label{eq:Pilocal} \begin{split} \Pi_{\mu \nu}^{<} & (\bm{r}_{i}, \bm{r}_{j}; \omega) \\ =& - i \hbar \int_{-\infty}^{\infty} \frac{dE}{2 \pi \hbar} \textrm{Tr} \big[ M^{i\mu} g^{<}(E) M^{j \nu} g^{>}(E- \hbar \omega) \big], \end{split} \end{equation} where $\textrm{Tr}[\cdots]$ denotes the trace over the electron degrees of freedom. The electron's lesser (greater) Green's function (GF) without coupling to the EM field is given by $g^{<(>)} = g^{r} \Sigma_{\textrm{leads}}^{<(>)} g^{a}$, with the retarded GF $g^{r}(E) = \big[ (E + i \eta)I - H_{0} - \Sigma_{\textrm{leads}}^{r} \big]^{-1}$ and the advanced GF $g^{a} = (g^{r})^{\dag}$. Here $I$ is the identity matrix, and $\eta$ is a positive infinitesimal. $\Sigma_{\textrm{leads}}^{r}$ is the total self-energy of the two semi-infinite leads, which is calculated using the recursive GF method~\cite{Nardelli1999}. Each lead is in equilibrium and follows the fluctuation-dissipation theorem, obeying the relation $\Sigma_{p}^{<} = - f_{p} (\Sigma_{p}^{r} - \Sigma_{p}^{a})$, with $p = \textrm{L}, \textrm{R}$ the lead index. $f_{p}(E, \mu_{p}) = 1/ \big[ \textrm{exp}(\frac{E - \mu_{p}}{k_{\textrm{B}} T_{p}}) + 1 \big]$ is the Fermi distribution function, $k_{\textrm{B}}$ is the Boltzmann constant, and $\mu_{p}$ and $T_{p}$ are the chemical potential and temperature of lead $p$, respectively. 
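The Green's-function relations above can be sketched numerically. The toy below uses a hypothetical 4-site chain with wide-band-limit lead self-energies on the edge sites (an assumption made for brevity; the text instead computes the lead self-energies with the recursive GF method) and verifies the Keldysh identity $g^{>} - g^{<} = g^{r} - g^{a}$ that follows from these definitions:

```python
import numpy as np

t, n = 2.7, 4
H0 = np.diag([-t] * (n - 1), 1)
H0 = H0 + H0.T                       # toy NN chain Hamiltonian (eV)

def fermi(E, mu, kT=0.0259):         # Fermi function, kT ~ 300 K in eV
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

E, muL, muR, Gamma = 0.3, 0.5, -0.5, 0.1   # illustrative values (eV)

# Wide-band-limit retarded self-energies on the two edge sites (assumption)
SigL_r = np.zeros((n, n), complex); SigL_r[0, 0] = -0.5j * Gamma
SigR_r = np.zeros((n, n), complex); SigR_r[-1, -1] = -0.5j * Gamma

# Lesser/greater lead self-energies from the fluctuation-dissipation relation:
# Sigma^< = -f (Sigma^r - Sigma^a),  Sigma^> = (1 - f)(Sigma^r - Sigma^a)
pairs = [(SigL_r, fermi(E, muL)), (SigR_r, fermi(E, muR))]
Sig_lt = sum(-f * (S - S.conj().T) for S, f in pairs)
Sig_gt = sum((1 - f) * (S - S.conj().T) for S, f in pairs)

# Retarded, advanced, lesser, greater GFs (eta absorbed into Gamma here)
g_r = np.linalg.inv(E * np.eye(n) - H0 - SigL_r - SigR_r)
g_a = g_r.conj().T
g_lt = g_r @ Sig_lt @ g_a
g_gt = g_r @ Sig_gt @ g_a
```

With $\eta \to 0$ and a finite $\Gamma$, the identity $g^{>}-g^{<}=g^{r}(\Sigma^{r}-\Sigma^{a})g^{a}=g^{r}-g^{a}$ holds exactly, which is a useful sanity check on any NEGF implementation of this kind.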
Using the monopole approximation and ignoring the screening effect on current fluctuations, the radiation power and the rate of photon counts (number of photons emitted per unit time) in the far field are given by~\cite{Zhang2020Angular} \begin{eqnarray} P &=& - \int_{0}^{\infty} \frac{d \omega}{2 \pi} \frac{\hbar \omega^2}{3 \pi \varepsilon_{0} c^{3}} \sum_{\mu} \textrm{Im} \big[ \Pi_{\mu \mu}^{\textrm{tot},<}(\omega) \big], \label{eq:Erad} \\ \frac{d N}{d t} &=& - \int_{0}^{\infty} \frac{d \omega}{2 \pi} \frac{\omega}{3 \pi \varepsilon_{0} c^{3}} \sum_{\mu} \textrm{Im} \big[ \Pi_{\mu \mu}^{\textrm{tot},<}(\omega) \big], \label{eq:Nrad}\end{eqnarray} where $\varepsilon_{0}$ is the vacuum permittivity, $c$ is the speed of light, and $\Pi_{\mu \mu}^{\textrm{tot},<} (\omega) = \sum_{ij} \Pi_{\mu \mu}^{<}(\bm{r}_{i}, \bm{r}_{j}; \omega)$ is the total current-current correlation function. \begin{figure*} \centering \includegraphics[width=16 cm]{fig2.pdf} \caption{Results for the two-terminal device of the CNT with chiral index (7, 7). Three curves are shown in each panel, for the CNT with a single SW defect, with a single SV defect, and the perfect one without a defect, respectively. (a) Photon counts per second and (e) yields from the CNT under a bias voltage. (b) and (f) show the spectrum of the radiation power under the bias voltages 2.0~eV and 5.0~eV, respectively. (c) The total density of states of the electrons in the channel. (g) The conductance and (d) the $I$-$V$ curve. } \label{fig:7v7} \end{figure*} {\emph{Results and discussions.}} In the numerical calculations, we set the bias between the two leads to be symmetric, with $\mu_{\textrm{L}} = -\mu_{\textrm{R}} = V/2$. The C-C bond length is $a = 1.42$~\AA. The NN hopping parameter is $t = 2.7$ eV~\cite{wilder1998electronic}. In this work, we consider metallic CNTs of armchair type and neglect the spin of the electrons. The temperatures of the two leads are both set to $300$~K unless stated otherwise. 
Owing to the computational cost, we use a central CNT channel of length $l = 10\sqrt{3}a$. Firstly, we consider a typical metallic SWCNT with chiral index (7, 7) for the two-terminal device. We compare the results in each plot of Fig.~{\ref{fig:7v7}} for the cases of the CNT channel containing a single SW defect (SWD), a single SV defect (SVD), and no defect, respectively. As shown in Fig.~{\ref{fig:7v7}}(a), for a perfect CNT, with increasing bias voltage the photon count is very small and changes little in the low-bias range until the onset bias at about $2.4$~eV, which marks the opening of the $M_{11}$ transition between the two van Hove singularities shown by the DOS in Fig.~{\ref{fig:7v7}}(c). Below the onset bias, thermal radiation is small but dominant for the perfect CNT. However, when the CNT channel contains a single SW defect or a single SV defect, the photon counts increase almost exponentially in the low-bias range; the former are about twice as large as the latter. We plot in Fig.~{\ref{fig:7v7}}(b) and Fig.~{\ref{fig:7v7}}(f) the spectrum of the radiation power, defined as $S(\omega) = - \frac{\omega^2}{6 \pi^2 \varepsilon_{0} c^3} \sum_{\mu} \textrm{Im}\left[ \Pi_{\mu\mu}^{\textrm{tot},<} (\omega) \right]$ from the integrand of Eq.~(\ref{eq:Erad}), setting the bias below and above the onset of the $M_{11}$ transition, with $V = 2.0$~eV and $V = 5.0$~eV respectively. The spectrum in Fig.~{\ref{fig:7v7}}(b) shows that the average energy of the photons emitted from the CNT with a SW defect is larger than that from the CNT with a SV defect. The radiation spectra for the perfect CNT and the defected CNTs under a large bias show little difference in Fig.~{\ref{fig:7v7}}(f), where the influence of the defects on the radiation is not obvious due to the strong transitions from high-energy bands. 
To analyze the enhancement of the EL in the low-energy range, we plot in Fig.~{\ref{fig:7v7}}(c) the density of states of the CNT channel. There are extra peaks of the DOS in the low-energy range induced by the defects. These localized states account for the EL in the low-bias range. For the CNT with a SV defect, the localized state lies near the Fermi level, while for the case with a SW defect, the localized states are away from the Fermi level, close to the edge of the first pair of van Hove singularities. Thus, the latter induces transitions that emit photons with a higher energy on average than the former. Also, compared with that of the perfect CNT, the electric current in Fig.~\ref{fig:7v7}(d) decreases more significantly for the case with a SV defect than for that with a SW defect in the low-bias range. The localized states due to the defects in the low-energy range reduce the conductance by one quantum unit, as shown in Fig.~{\ref{fig:7v7}}(g). The emission yield, i.e., the number of photons emitted per electron injected into the device channel, is an important quantity characterizing the emission efficiency of the device. Here, the defects enhance the counts and decrease the electric current; thus they enhance the yield of the EL, as shown in Fig.~{\ref{fig:7v7}}(e). The yield can reach the order of $10^{-7}$ in the high-bias range, which is consistent with experimental values~\cite{Freitag2004Hot}. \begin{figure} \centering \includegraphics[width=8 cm]{fig3.pdf} \caption{Photon counts per second (left panel) and conductance (right panel) for the two-terminal device using CNTs with different diameters, with the chiral index $(n,n)$ ranging from $n=4$ to $n=7$. } \label{fig:RadSize} \end{figure} In Fig.~{\ref{fig:RadSize}}, we discuss the influence of the diameter of the CNT on the EL. Specifically, we consider four different armchair CNTs with chiral index $(n,n)$ ranging from $n = 4$ to $n = 7$. 
Their diameters are $d = 5.42$~\AA, $6.78$~\AA, $8.14$~\AA, and $9.49$~{\AA}, respectively. There are distinct features in the EL from CNTs with different diameters, though the overall trends are similar, as shown in Fig.~{\ref{fig:RadSize}}(a)-(c). Firstly, the plots of the counts from CNTs with different diameters form `bubbles' in the high-bias range for the CNTs with defects in Fig.~{\ref{fig:RadSize}}(a) and Fig.~{\ref{fig:RadSize}}(b), and for the perfect CNTs in Fig.~{\ref{fig:RadSize}}(c). This is because the transition energy corresponding to the $M_{11}$ gap decreases with increasing tube diameter, as shown by the conductance plots in Fig.~{\ref{fig:RadSize}}(d)-(f). Secondly, the dependence of the counts on the tube diameter is very different for perfect and defected CNTs under low bias. When the bias is smaller than the onset bias of the $M_{11}$ transition, the photon counts are inversely proportional to the tube diameter for the SW-defected CNTs and proportional to the tube diameter for the SV-defected CNTs, while they are nearly independent of the tube diameter for the perfect CNTs, as shown in Fig.~{\ref{fig:RadSize}}(a)-(c), respectively. In the low-bias range, thermal radiation dominates over the EL for the perfect CNTs. The energy dispersion relation in the low-energy range accounts for the electron transport in the longitudinal direction along the tube axis and differs little among CNTs with different diameters, so the thermal radiation shows little dependence on the tube diameter in Fig.~{\ref{fig:RadSize}}(c). \begin{figure} \centering \includegraphics[width=8 cm]{fig4.pdf} \caption{Results of thermal radiation from the conducting channel of the two-terminal device using the (7,7) CNT under zero bias. Temperatures of the two leads are the same. (a) Radiation intensity as a function of the temperature. (b) Spectrum of the radiation power at temperature $T = 300 \, \textrm{K}$. 
(c) The radiation spectrum of the perfect CNT at different temperatures, compared with the spectrum of black-body radiation with the same area as the CNT channel. } \label{fig:TherRad} \end{figure} We plot in Fig.~{\ref{fig:TherRad}} the thermal radiation from the CNT channel of the two-terminal device with the bias voltage turned off. Figure~{\ref{fig:TherRad}}(a) shows that the radiation power of the perfect CNT and the defected CNTs follows, as a function of temperature, the same $T^{4}$ scaling law as black-body (BB) radiation. However, the intensity is about two orders of magnitude smaller than that of black-body radiation. Figure~{\ref{fig:TherRad}}(b) shows the spectrum of the thermal radiation. Both the perfect and defected CNTs show black-body-like radiation, i.e., their spectra fit well with that of black-body radiation although the magnitude is smaller. The fitting of the spectrum with black-body radiation is shown in Fig.~{\ref{fig:TherRad}}(c), with the perfect CNT as an example. \begin{figure} \centering \includegraphics[width=8 cm]{fig5.pdf} \caption{Optical conductivity (real part) for the CNT channel of the device using the (7,7) CNTs at $T=300$ K. $\sigma$, $\sigma_{\parallel}$ and $\sigma_{\perp}$ are the total conductivity, the longitudinal part and the perpendicular part, respectively. } \label{fig:Conductivity} \end{figure} Why does the thermal radiation from the CNTs follow a black-body-like spectrum while being much weaker than black-body radiation? To analyze this, we start from the general expression in Eq.~(\ref{eq:Erad}) and examine the spectrum of the radiation power. The optical conductivity is related to the retarded component of the current-current correlation function by $\sigma(\omega) = \frac{i}{A \omega} \Pi^{r}(\omega)$, with $A= \pi d l$ the area of the central CNT channel. 
Using the fluctuation-dissipation relation in thermal equilibrium, $\Pi^{<}(\omega) = i N_{B}(\omega) 2 \textrm{Im} [\Pi^{r}(\omega)]$, with $N_{B}(\omega)$ the Bose distribution function, we can write from Eq.~(\ref{eq:Erad}) the spectrum of the radiation power in thermal equilibrium as \begin{equation} S(\omega) = \frac{A \omega^3}{3 \pi^2 \varepsilon_{0} c^3} \textrm{Re} [\sigma^{\textrm{tot}}(\omega)] N_{B} (\omega). \label{eq:Spectrum} \end{equation} Here we use the notation $\sigma^{\textrm{tot}}(\omega) = \sum_{\mu = x, y, z}\sigma_{\mu \mu}(\omega)$. The optical conductivity is calculated by \begin{equation} \begin{split} \sigma_{\mu \nu}&(\omega) \\ =& \frac{1}{A} \int_{-\infty}^{\infty} \frac{dE}{2 \pi \omega} \sum_{ij} \textrm{Tr} \big[ M^{i\mu} g^{r}(E) M^{j \nu} g^{<}(E- \hbar \omega) \\ +& M^{i\mu} g^{<}(E) M^{j \nu} g^{a}(E- \hbar \omega) \big]. \end{split} \end{equation} The longitudinal and transverse components of the conductivity are given by $\sigma_{\parallel} = \sigma_{zz}$ and $\sigma_{\perp} = \sigma_{xx} + \sigma_{yy}$, respectively. The spectrum of black-body radiation with the same area is $S_{\textrm{BB}} (\omega) = \frac{A \omega^3}{4 \pi^2 c^2} N_{B}(\omega)$. Comparing it with Eq.~(\ref{eq:Spectrum}), we conclude that the strict condition for the shape of the radiation spectrum of a metallic material to match that of black-body radiation is that the real part of the conductivity be constant in the energy range of thermal excitation. The black-body-like spectrum in Fig.~{\ref{fig:TherRad}}(b) is determined to a large extent by the intrinsic nature of thermal equilibrium fluctuations, i.e., the factor $\omega^3 N_{B}(\omega)$ in Eq.~(\ref{eq:Spectrum}), even though the real parts of the conductivities for the channels using the perfect and defected CNTs in Fig.~{\ref{fig:Conductivity}} are not strictly constant in the energy range of thermal excitation. 
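The $T^{4}$ scaling behind any spectrum of the form $\omega^{3} N_{B}(\omega)$ comes from the dimensionless integral $\int_{0}^{\infty} x^{3}/(e^{x}-1)\,dx = \pi^{4}/15$, with $x = \hbar\omega/k_{\textrm{B}}T$. A minimal numerical sketch confirming this factor:

```python
import numpy as np

# Trapezoidal integration of x^3 N_B(x) = x^3 / (e^x - 1) on a uniform grid.
# Near x = 0 the integrand behaves as x^2, and the tail beyond x = 60 is
# exponentially negligible, so the truncation error is tiny.
x = np.linspace(1e-6, 60.0, 400_000)
f = x**3 / np.expm1(x)
dx = x[1] - x[0]
integral = dx * (f.sum() - 0.5 * (f[0] + f[-1]))

print(integral, np.pi**4 / 15)  # integral ~ pi^4/15 ~ 6.4939
```

Because every factor of temperature enters only through $x$, integrating $S_{\textrm{BB}}(\omega) \propto \omega^{3} N_{B}(\omega)$ over $\omega$ yields a $T^{4}$ prefactor times this constant, which is the Stefan-Boltzmann law referred to in the text.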
For both the perfect and defected CNTs, the total conductivity is mainly contributed by the longitudinal component in the low-energy range (note that the photon energy from thermal radiation is smaller than $0.5$~eV), while the transverse component is only significant in the high-energy range. Thus, the thermal radiation is much smaller than black-body radiation due to the suppression of excitations in the circumferential direction of the CNTs. Finally, the change in the magnitude of the spectrum due to the defects of the CNTs in Fig.~{\ref{fig:TherRad}}(b) can be attributed quantitatively to the decrease of the conductivity by the defects shown in Fig.~{\ref{fig:Conductivity}}. {\emph{Conclusion.}} Using the NEGF method, we have studied the EL and thermal radiation from metallic SWCNTs with defects in the ballistic transport regime based on a tight-binding model. We find that both the SV defect and the SW defect can enhance the EL, which increases exponentially in the low-bias range, while for the perfect nanotube only thermal radiation contributes and the EL can be neglected. The enhancement of radiation due to the defects is not obvious in the high-bias range, where strong radiation due to transitions between high-energy bands becomes dominant. The enhancement of the EL and the diameter of the CNT are positively correlated in the presence of a SW defect, while for the CNT with a SV defect they are negatively correlated. Due to the confinement of thermal excitation in the transverse direction, the intensity of the thermal radiation is much smaller than that of black-body radiation and is independent of the nanotube diameter. Defects can reduce the optical conductivity of the CNT and thereby reduce the thermal radiation; this reduction is more significant for the CNT with a SV defect than for that with a SW defect. \begin{acknowledgments} We acknowledge the support by MOE tier 2 Grant No. R-144-000-411-112 and FRC Grant No. R-144-000-402-114. 
\end{acknowledgments} \bibliographystyle{apsrev4-1}
\chapter*{Introduction: trigonometric functions and modular symbols} As Eisenstein himself explains in \cite{Eisenstein}, his method for constructing elliptic functions applies elegantly to the simpler case of trigonometric functions. This is where the book that Weil \cite{Weil} devotes to the subject begins; we follow his example. \section{The addition relation for the cotangent function} Eisenstein's method is based on the consideration of the series $$\varepsilon (x) = \frac{1}{2i\pi} \sideset{}{{}^e}\sum_{m \in \mathbf{Z}} \frac{1}{x+m} = \frac{1}{2i\pi} \lim_{M \to \infty} \sum_{m = -M}^{M} \frac{1}{x+m},$$ where the symbol $\sum^e$ denotes the Eisenstein summation defined by the limit on the right. Eisenstein proves that\footnote{The normalization will appear more naturally in what follows; it is of course related to the fact that the form $dx/ (2i \pi x)$ has residue $1$ at $0$.} $\varepsilon (x) = \frac{1}{2i} \cot \pi x$, and in doing so recovers the \emph{addition formula}, originally discovered by Euler, according to which, for all complex numbers $x$ and $y$ such that none of $x$, $y$, $x+y$ is an integer, one has \begin{equation} \label{E:addition} \varepsilon (x) \varepsilon (y) - \varepsilon (x) \varepsilon (x+y) - \varepsilon (y) \varepsilon (x+y) = - 1/4. \end{equation} The starting point of Eisenstein's proof is an elementary identity between rational functions: \begin{equation} \label{E:addition0} \frac{1}{xy} - \frac{1}{x(x+y)} - \frac{1}{y (x+y)} = 0. 
\end{equation} Formally, one indeed has \begin{multline*} \varepsilon (x) \varepsilon (y) - \varepsilon (x) \varepsilon (x+y) - \varepsilon (y) \varepsilon (x+y) \\ = \sum_{p,q,r} \left( \frac{1}{(x+p) (y+q)} - \frac{1}{(x+p)(x+y+r)} - \frac{1}{(y+q) (x+y+r)} \right) \end{multline*} where the integers $p$, $q$ and $r$ range over $\mathbf{Z}$ subject to the relation $p+q-r=0$; since the sums are not absolutely convergent, this decomposition is meaningless as it stands, but the relation \eqref{E:addition} follows from a regularized version of this observation. Sczech \cite{Sczech92} interprets the addition relation \eqref{E:addition} as a ``cocycle relation''; we shall come back to this. We first relate this relation to \emph{modular symbols} in the Poincaré upper half-plane $\mathcal{H}$. \section{Modular symbols} Let $\mathcal{H}^*$ denote the space obtained by adjoining to $\mathcal{H}$ the rational points $\mathbf{P}^1 (\mathbf{Q})$ of its boundary at infinity $\mathbf{P}^1 (\mathbf{R}) = \mathbf{R} \cup \{ \infty \}$. Given two distinct points $r$ and $s$ in $\mathbf{P}^1 (\mathbf{Q})$, we denote by $\{ r,s \}$ the oriented geodesic joining $r$ to $s$ in $\mathcal{H}$. Let $\Delta$ be the abelian group generated by the symbols $\{ r, s \}$ subject to the relations generated by $$\{ r , s \} + \{ s, r \} = 0 \quad \mbox{and} \quad \{r,s \} + \{ s,t \} + \{ t,r \} =0 .$$ The image of a symbol $\{r,s \}$ in $\Delta$ is called a \emph{modular symbol}; it is denoted $[r,s]$. Manin \cite{Manin} observes that $\Delta$ is generated by the \emph{unimodular} symbols, that is, the $[r,s]$ with $r=a/c$ and $s=b /d$ such that $ad-bc = 1$, whose associated geodesic $\{r,s\}$ is an edge of the Farey triangulation pictured below. 
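Returning to the addition relation \eqref{E:addition}, it is easy to check numerically using the closed form $\varepsilon(x) = \frac{1}{2i}\cot \pi x$; a minimal sketch (the sample points are arbitrary, chosen so that $x$, $y$, $x+y$ are not integers):

```python
import math

def eps(x):
    """Eisenstein's function: eps(x) = cot(pi x) / (2i)."""
    return (math.cos(math.pi * x) / math.sin(math.pi * x)) / 2j

# Check eps(x)eps(y) - eps(x)eps(x+y) - eps(y)eps(x+y) = -1/4
for x, y in [(0.3, 0.45), (1/3, 1/3), (0.12, 0.77)]:
    lhs = eps(x) * eps(y) - eps(x) * eps(x + y) - eps(y) * eps(x + y)
    assert abs(lhs - (-0.25)) < 1e-12
```

For instance, at $x = y = 1/3$ one gets $\varepsilon(1/3)\varepsilon(1/3) = -1/12$ and $\varepsilon(1/3)\varepsilon(2/3) = 1/12$ twice, so the left-hand side is $-1/12 - 1/12 - 1/12 = -1/4$, as the formula predicts.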
The action of $\mathrm{SL}_2 (\mathbf{Z})$ on $\mathcal{H}$ by Möbius transformations extends to an action on $\mathcal{H}^*$ and induces a natural action on $\Delta$ such that $$g \cdot [\infty , 0 ] = [a/c , b/d], \quad \mbox{for all } g = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \mathrm{SL}_2 (\mathbf{Z}).$$ Let $$\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2 ) = \mathcal{M} (\mathbf{C}^2 / \mathbf{Z}^2) / \mathbf{C} \cdot \mathbf{1}$$ denote the quotient of the space of meromorphic $\mathbf{Z}^2$-periodic functions on $\mathbf{C}^2$ by the subspace of constant functions. The linear action of $\mathrm{SL}_2 (\mathbf{Z})$ on $\mathbf{C}^2$ induces an action on $\mathbf{C}^2 / \mathbf{Z}^2$, and hence on $\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2 )$, and the following observation is simply a reformulation of the addition relation \eqref{E:addition}. \medskip \noindent {\bf Observation.} {\it The map $\mathbf{c} : \Delta \to \overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)$ defined on the \emph{unimodular} symbols by $$\mathbf{c} ([r,s]) (x, y) = \varepsilon (dx-by) \varepsilon (-cx + ay), \quad \mbox{for } r= a/c, \ s = b/d, \ ad-bc=1, $$ is well defined and $\mathrm{SL}_2 (\mathbf{Z})$-equivariant. } \medskip We then say that $\mathbf{c}$ is a \emph{modular symbol with values in $\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)$}. \begin{center} \includegraphics[width=0.9\textwidth]{Fareybis.png} \end{center} This elementary remark places the addition relation in a new context. It suggests studying the action of Hecke operators on $\mathbf{c}$. \section{Hecke operators} The actions of the group $\mathrm{SL}_2 (\mathbf{Z})$ on $\Delta$ and on $\mathbf{C}^2/ \mathbf{Z}^2$ extend naturally to the monoid $M_2 (\mathbf{Z})^\circ = M_2 (\mathbf{Z}) \cap \mathrm{GL}_2 (\mathbf{Q})$. 
This endows $\mathrm{Hom} (\Delta , \overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2))$ with a right action extending that of $\mathrm{SL}_2 (\mathbf{Z})$: $$\phi_{| g} ([r,s]) (x,y) = \phi (g \cdot [r,s]) \left( g \cdot \left( \begin{smallmatrix} x \\ y \end{smallmatrix} \right) \right),$$ $( \phi \in \mathrm{Hom} (\Delta , \overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)), \ g \in M_2 (\mathbf{Z})^\circ ).$ The space $$\mathrm{Hom} (\Delta , \overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2) )^{\mathrm{SL}_2 (\mathbf{Z})}$$ of modular symbols with values in $\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)$ then inherits a right action of the Hecke algebra associated with the pair $(M_2 (\mathbf{Z})^\circ , \mathrm{SL}_2 (\mathbf{Z}))$: given an element $g \in M_2 (\mathbf{Z})^\circ$, one decomposes the double coset of $g$ under $\mathrm{SL}_2 (\mathbf{Z})$ into a finite union of left cosets $$\mathrm{SL}_2 (\mathbf{Z} ) g \mathrm{SL}_2 (\mathbf{Z}) = \bigsqcup_j \mathrm{SL}_2 (\mathbf{Z}) g_j.$$ The Hecke operator associated with $g$ acts on a modular symbol $\phi$ with values in $\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)$ by $$\mathbf{T}(g) \phi = \sum_j \phi_{| g_j}.$$ \medskip \noindent {\it Examples.} Let $\mathbf{T}_p$ denote the Hecke operator associated with the matrix $\left( \begin{smallmatrix} p & 0 \\ 0 & 1 \end{smallmatrix} \right)$. For $p=2$, one has $$ (\mathbf{T}_2 \mathbf{c}) ([\infty , 0]) (x,y) = \varepsilon (2x) \varepsilon (y) + \varepsilon (x) \varepsilon (2y) + \varepsilon (2x) \varepsilon (x+y) + \varepsilon (2y) \varepsilon (x+y). 
$$ We leave the reader the guilty pleasure of checking that, if $2x$, $2y$ and $x+y$ are not integers, one has \begin{multline*} \varepsilon (2x) \varepsilon (y) + \varepsilon (x) \varepsilon (2y) + \varepsilon (2x) \varepsilon (x+y) + \varepsilon (2y) \varepsilon (x+y) \\ - 2 \varepsilon (2x) \varepsilon (2y) - \varepsilon (x) \varepsilon (y) = 1/4 \end{multline*} and hence that the modular symbol $\mathbf{c}$, with values in $\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)$, is annihilated by the operator $$\mathbf{T}_2 - 2[2]^* -1,$$ where $[m]^*$ denotes the pullback by the map $\mathbf{C}^2/ \mathbf{Z}^2 \to \mathbf{C}^2 / \mathbf{Z}^2$ induced by multiplication by $m$, in other words $(x,y) \mapsto (mx,my)$. For $p=3$ and $p=5$, one has respectively \begin{multline*} (\mathbf{T}_3 \mathbf{c}) ([\infty , 0]) (x,y) = \varepsilon (3x) \varepsilon (y) + \varepsilon (x) \varepsilon (3y) + \varepsilon (3x) \varepsilon (x+y) \\ + \varepsilon (3y) \varepsilon (x+y) + \varepsilon (3x) \varepsilon (y-x) + \varepsilon (3y) \varepsilon (x-y) \end{multline*} and \begin{multline*} (\mathbf{T}_5 \mathbf{c}) ([\infty , 0]) (x,y) = \varepsilon (5x) \varepsilon (y) + \varepsilon (x) \varepsilon (5y) + \varepsilon (5x) \varepsilon (x+y) + \varepsilon (5y) \varepsilon (x+y) \\ + \varepsilon (5x) \varepsilon (y-2x) + \varepsilon (5y) \varepsilon (x+2y) - \varepsilon (y-2x) \varepsilon ( x+2y) + \varepsilon (5x) \varepsilon (y+2x) \\ + \varepsilon (5y) \varepsilon (x+3y) + \varepsilon (y+2x) \varepsilon ( x+3y) + \varepsilon (5x) \varepsilon (y-x) + \varepsilon (x-y) \varepsilon (5y) . \end{multline*} At the cost of tedious computations one can again check that in each of these cases the relation \begin{equation} \label{E:HeckeTrig} (\mathbf{T}_p - p[p]^* -1) \mathbf{c} =0 \end{equation} is satisfied. \section{A theorem and some questions} It is natural to conjecture that the relations \eqref{E:HeckeTrig} hold for every prime number $p$. 
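The guilty pleasure for $p=2$ can be delegated to a machine; the following sketch checks the displayed $\mathbf{T}_2$ identity at a few arbitrary points with $2x$, $2y$, $x+y$ not integers:

```python
import math

def eps(x):
    # Eisenstein's function eps(x) = cot(pi x) / (2i)
    return (math.cos(math.pi * x) / math.sin(math.pi * x)) / 2j

# Check the T_2 identity:
# eps(2x)eps(y) + eps(x)eps(2y) + eps(2x)eps(x+y) + eps(2y)eps(x+y)
#   - 2 eps(2x)eps(2y) - eps(x)eps(y) = 1/4
for x, y in [(0.21, 0.34), (1/3, 1/3), (0.415, 0.27)]:
    lhs = (eps(2*x)*eps(y) + eps(x)*eps(2*y)
           + eps(2*x)*eps(x+y) + eps(2*y)*eps(x+y)
           - 2*eps(2*x)*eps(2*y) - eps(x)*eps(y))
    assert abs(lhs - 0.25) < 1e-12
```

At $x = y = 1/3$, for example, the six products contribute $1/12 + 1/12 - 1/12 - 1/12 + 2/12 + 1/12 = 1/4$, in agreement with the identity.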
Such a statement recalls a conjecture of Busuioc \cite{Busuioc} and Sharifi \cite{Sharifi}; the recent proof of it given by Sharifi and Venkatesh \cite{SharifiVenkatesh} also implies that the relations \eqref{E:HeckeTrig} hold for every prime $p$.\footnote{The link between the work of Sharifi and Venkatesh and the questions considered here is explained in the sixth section of this introduction.} Beyond this, one would like to lift the map $\mathbf{c}$ to a modular symbol --- necessarily \emph{partial}, in the sense of Dasgupta's thesis, see \cite{Evil,DD} --- with values in $\mathcal{M} (\mathbf{C}^2 / \mathbf{Z}^2)$ rather than in its quotient by the constant functions. These two desiderata are the subject of the theorem below, which is more naturally stated in terms of group cohomology and partly requires raising the level. Indeed, the map $\overline{\mathbf{S}} : g \mapsto \mathbf{c} ([\infty , g \cdot \infty ])$ defines a $1$-cocycle\footnote{The cocycle relation reads $$\overline{\mathbf{S}} (hg) = \overline{\mathbf{S}} (h) + h \cdot \overline{\mathbf{S}} (g).$$} on $\mathrm{SL}_2 (\mathbf{Z} )$ with values in $\overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)$, and hence a cohomology class in $$H^1 (\mathrm{SL}_2 (\mathbf{Z} ) , \overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2)).$$ Given a positive integer $N$, we write as usual $$\Gamma_0 (N) = \left\{ \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \mathrm{SL}_2 (\mathbf{Z}) \; : \; c \equiv 0 \ (\mathrm{mod} \ N) \right\}$$ for the subgroup of $\mathrm{SL}_2 (\mathbf{Z})$ consisting of the matrices that fix, modulo $N$, the line $\langle e_1 \rangle$ spanned by the first vector of the canonical basis of $\mathbf{Z}^2$. 
Let $\Delta^\circ_N \subset \Delta$ denote the subgroup generated by the symbols $[r , s]$ with $r, s \in \Gamma_0 (N) \cdot \infty \subset \mathbf{P}^1 (\mathbf{Q})$. The subgroup $\Delta^\circ_N$ is thus generated by the elements $[a/Nc , b/Nd] \in \Delta$ with $a$ and $b$ prime to $N$. Finally, let $D_N$ denote the free abelian group of formal integral linear combinations of positive divisors of $N$, and $D_N^\circ$ the subgroup of elements of degree $0$, that is, of elements $\delta = \sum_{d | N} n_d [d]$ such that $\sum_{d|N} n_d =0$. \begin{theorem*} There exists a morphism $\delta \mapsto \mathbf{S}_\delta$ from $D_N$ to the group of $1$-cocycles on $\Gamma_0 (N)$ with values in $\mathcal{M} (\mathbf{C}^2 / \mathbf{Z}^2 )$ satisfying the following properties. 1. One has $$[\mathbf{S}_{[1]}] = [\overline{\mathbf{S}}] \neq 0 \quad \mbox{in} \quad H^{1} (\Gamma_0 (N), \overline{\mathcal{M}} (\mathbf{C}^2 / \mathbf{Z}^2 ) ).$$ 2. For every prime $p$ not dividing $N$ and every $\delta \in D_N$, the cohomology class of $\mathbf{S}_\delta$ in $H^{1} (\Gamma_0 (N), \mathcal{M} (\mathbf{C}^2 / \mathbf{Z}^2 ) )$ is annihilated by the operator $\mathbf{T}_{p} - p [p]^* - 1$. 3. For every positive integer $N$ and every $\delta = \sum_{d | N } n_d [d]$ in $D_N^\circ$, the $1$-cocycle $\mathbf{S}_\delta$ is cohomologous to an explicit cocycle $\mathbf{S}_{\delta}^*$ defined by \begin{multline} \label{Sexplicit} \mathbf{S}_{\delta}^*\left( \begin{array}{cc} a & * \\ Nc & * \end{array} \right) \\ = \left\{ \begin{array}{ll} 0 & \mbox{if } c=0, \\ \sum_{d | N} \frac{n_d }{d' c} \sum_{j \ {\rm mod} \ d' c} \varepsilon \left( \frac{1}{d' c} (y+j) \right) \varepsilon \left( dx - \frac{a}{d' c} (y+j) \right) & \mbox{otherwise}, \end{array} \right. \end{multline} with $dd'=N$. 
\end{theorem*} \medskip \noindent {\it Remark.} Each map $\mathbf{S}_\delta^*$ in fact defines a \emph{partial} modular symbol in $$\mathrm{Hom} (\Delta_N^\circ , \mathcal{M} (\mathbf{C}^2 / \mathbf{Z}^2 ))^{\Gamma_0 (N)}.$$ It is defined on a symbol $[\infty , a/Nc]$ by $\mathbf{S}^*_\delta \left( \begin{smallmatrix} a & u \\ Nc & v \end{smallmatrix} \right)$, where $u$ and $v$ are such that $av-Ncu=1$. \medskip This text is devoted to a vast generalization of this theorem. The goal is to answer the following questions: \begin{enumerate} \item What can be said about products of $n$ cotangent functions when $n \geq 3$? \item Are there analogous results for elliptic functions or, more simply, for rational functions? \end{enumerate} The answers to these questions are stated in Chapter \ref{C:2}, which is a kind of second introduction in which the general results are stated. Before that, in the first chapter, we begin by detailing the construction of cohomology classes, for subgroups of $\mathrm{GL}_n$, which generalize the class of $\mathbf{S}_{[1]}$ and all share a common topological origin. The classes we construct have coefficients in a part of the cohomology of hyperplane arrangements in products $A^n$, where $A$ is isomorphic to the multiplicative group or to an elliptic curve. An important point, in order to prove the theorem above or the results announced in Chapter \ref{C:2}, is then to show that this part of the cohomology of hyperplane arrangements can be represented by meromorphic forms on $A^n$. This is the subject of Theorem \ref{P:Brieskorn}, whose proof occupies Chapter \ref{S:OrlikSolomon}. The explicit computation of the classes thus obtained occupies the rest of the book. 
\medskip Dans les deux derniers paragraphes de cette introduction on relie les cocycles $\mathbf{S}_\delta$, et leurs généralisations annoncées, à des objets plus classiques en théorie des nombres. \section{\'Evaluation, terme constant, morphismes de Dedekind--Rademacher} Le slogan suivant, à retenir, distingue les avantages respectifs des cocycles $\mathbf{S}_\delta$ et $\mathbf{S}_\delta^*$. \begin{quote} {\it Les cocycles notés $\mathbf{S}$ peuvent être \emph{évalués} en des points de torsion alors que les cocycles notés $\mathbf{S}^*$ peuvent eux être \emph{calculés} en le point générique.} \end{quote} Nous verrons en effet que l'on peut contrôler l'ensemble des points $P \in \mathbf{C}^2 / \mathbf{Z}^2$ en lesquels les fonctions méromorphes $\mathbf{S}_\delta (g)$ sont régulières. Après évaluation en de tels points de torsion de $\mathbf{C}^2/\mathbf{Z}^2$ bien choisis, on retrouve alors des résultats plus classiques. \medskip \noindent {\it Exemple.} Les fonctions dans l'image de $\mathbf{S}_{[1]}$ sont régulières en les points $(j/N,0)$, pour tous $j \in \{1 , \ldots , N-1 \}$. L'application \begin{equation} \label{E:cocDR} \Psi_{N} : \Gamma_0 (N ) \to \mathbf{C}; \quad g \mapsto \sum_{j=1}^{N-1} \mathbf{S}_{[1]} (g ) (j/N, 0) \end{equation} définit donc un morphisme de groupes. On retrouve ainsi un multiple du morphisme de Dedekind--Rademacher \cite{Rademacher,Mazur} donné par \begin{equation} \label{E:DR} \Phi_N \left( \begin{array}{cc} a & b \\ N c & d \end{array} \right) = \left\{ \begin{array}{ll} (N -1)b/d & \mbox{si } c=0 \\ \frac{(N -1)(a+d)}{Nc} + 12 \cdot \mathrm{sign}(c) \cdot D^N \left( \frac{a}{N |c|} \right) & \mbox{si } c \neq 0, \end{array} \right. 
\end{equation} où, en notant $D : \mathbf{Q} \to \mathbf{Q}$ la \emph{somme de Dedekind} usuelle $$D(a/c) = \frac{1}{c} \sum_{j=1}^{c-1}\varepsilon \left( \frac{j }{c} \right) \varepsilon \left( - \frac{j a}{c} \right) \quad \mbox{pour } c>0 \quad \mbox{et} \quad (a,c)=1,$$ on a $D^N (x) = D(x) - D(N x)$. De l'expression de $\mathbf{S}_{[1]}$ que l'on donnera au chapitre~\ref{C:7}, on peut plus précisément déduire --- comme dans \cite[\S 11]{Takagi} --- que $$12 \cdot \Psi_N = \Phi_N .$$ \medskip Plutôt que d'évaluer les fonctions méromorphes $\mathbf{S}^*_\delta (g)$ on peut considérer leur terme constant en $0$. Le théorème du paragraphe précédent implique alors le corollaire qui suit. \begin{cor*} Soit $\delta \in D_N^\circ$ tel que $\sum_{d | N} n_d d =0$, et posons $D^\delta (x) = \sum_{d | N} n_d D (dx)$. Alors l'application $\Psi_\delta : \Gamma_0 (N) \to \mathbf{C}$ donnée par \begin{equation} \label{E:DD} \Psi_\delta \left( \begin{array}{cc} a & * \\ N c & * \end{array} \right) = \left\{ \begin{array}{ll} 0 & \mbox{si } c=0 \\ \mathrm{sign}(c) \cdot D^\delta \left( \frac{a}{N |c|} \right) & \mbox{si } c \neq 0, \end{array} \right. \end{equation} définit un morphisme de groupes. \end{cor*} On notera que le morphisme $12 \Psi_\delta$ est à valeurs entières et qu'il coïncide avec le \emph{morphisme de Dedekind--Rademacher modifié} $\Phi_\delta$ de Darmon et Dasgupta \cite{DD}. \section{Relations avec les travaux de Kato et de Sharifi--Venkatesh} Kato \cite{Kato} construit des unités de Siegel sur les courbes modulaires $Y_1 (N)$ à partir de fonctions theta sur la courbe elliptique universelle $E$ au-dessus de $Y_1 (N)$. Il découle en effet de \cite[Proposition 1.3]{Kato} qu'étant donné un entier strictement positif $m$ premier à $6N$, il existe une fonction theta ${}_m \theta$ dans $\mathbf{Q} (E)^\times$ qui est une unité en dehors des points de $m$-torsion.
La fonction ${}_m \theta$ est caractérisée par son diviseur dans $E[m]$ et des relations de distribution associées aux applications de multiplication par des entiers relativement premiers à $m$. Les unités de Siegel sont alors obtenues en tirant en arrière sur $Y_1 (N)$ ces fonctions theta par des sections de $N$-torsion. Sharifi et Venkatesh \cite{SharifiVenkatesh} considèrent des analogues des fonctions ${}_m \theta$. Leur méthode permet en fait de construire des $1$-cocycles sur des sous-groupes $\Gamma$ de $\mathrm{GL}_2 (\mathbf{Z})$ à valeurs dans les groupes de $K$-théorie de degré $2$ des corps de fonctions de $\mathbf{C}^2 / \mathbf{Z}^2$ ou du carré $E^2$ de la courbe elliptique universelle. Les $1$-cocycles du premier type sont des applications de la forme $\Gamma \to K_2 (\mathbf{Q} (\mathbf{C}^2 / \mathbf{Z}^2))$. En les composant avec $$K_2 (\mathbf{Q} (\mathbf{C}^2 / \mathbf{Z}^2)) \to \mathcal{M} (\mathbf{C}^2 / \mathbf{Z}^2); \quad \{ f , g \} \mapsto \frac{d \log (f) \wedge d \log (g)}{dx \wedge dy},$$ où $x$ et $y$ sont les deux coordonnées de $\mathbf{C}^2$, on obtient des $1$-cocycles dont on peut vérifier qu'ils sont cohomologues aux cocycles du théorème énoncé ci-dessus. Le fait que les relations \eqref{E:HeckeTrig} sont bien vérifiées pour tout nombre premier $p$ découle alors de \cite[Lemma 4.2.9]{SharifiVenkatesh}. Un aspect intéressant de notre construction est que nous obtenons ces cocycles à partir d'une classe purement topologique; l'émergence de fonctions méromorphes se déduit au final d'un théorème ``de type Brieskorn'' qui permet de représenter certaines classes de cohomologie singulière par des formes méromorphes. La construction est suffisamment maniable pour nous permettre de considérer plus généralement l'action de $\mathrm{GL}_n (\mathbf{Z})$ sur $\mathbf{C}^n / \mathbf{Z}^n$ ou sur $E^n$. Les $1$-cocycles du second type chez Sharifi et Venkatesh sont des applications de la forme $\Gamma \to K_2 (\mathbf{Q} (E^2))$. 
Comme pour les unités de Siegel, on peut tirer en arrière ces cocycles par des sections de torsion. On obtient ainsi des homomorphismes du premier groupe d'homologie de $\Gamma$ vers le $K_2$ d'une courbe modulaire. Goncharov et Brunault \cite{Gon1,Bru1,Bru2} avaient déjà construit de tels homomorphismes en associant explicitement à certains symboles modulaires des symboles de Steinberg d'unités de Siegel. Obtenir ces morphismes par spécialisation d'un $1$-cocycle à valeurs dans $K_2 (\mathbf{Q} (E^2))$ permet de montrer que ces homomorphismes sont Hecke-équivariants. En composant un $1$-cocycle $\Gamma \to K_2 (\mathbf{Q} (E^2))$ avec le symbole différentiel $\partial \log \wedge \partial \log$ puis en tirant le résultat en arrière par une section de torsion, on obtient un homomorphisme du premier groupe d'homologie de $\Gamma$ vers les formes modulaires de poids $2$ sur $Y_1(N)$. \'Etendu aux symboles modulaires (partiels), ce morphisme associe à un tel symbole une ``zeta modular form'' au sens de \cite[Section 4]{Kato}. Cette construction s'étend là encore à l'action de $\mathrm{GL}_n (\mathbf{Z})$ sur le produit de $n$ courbes elliptiques universelles; voir chapitre \ref{C:2}. Les cocycles ainsi obtenus révèlent des relations cachées entre des produits de fonctions elliptiques classiques, relations gouvernées par l'homologie de sous-groupes de congruence dans $\mathrm{GL}_n (\mathbf{Z})$. On peut tirer de cela un certain nombre de conséquences arithmétiques \cite{Takagi,ColmezNous}; d'autres conséquences sont en préparation. \section{Remerciements} Ce texte est l'aboutissement d'une réflexion entamée il y a quelques années avec Akshay Venkatesh. Plusieurs idées sont issues de discussions avec lui (ainsi que l'impulsion d'écrire en français), un grand merci à lui pour sa générosité. De toute évidence, ce travail doit beaucoup aux idées initiées et développées par Robert Sczech. Nous profitons de cette occasion pour lui exprimer notre gratitude. P.C.
remercie Samit Dasgupta pour lui avoir posé une question étincelle il y a 10 ans. L.G. remercie l'IHES et le soutien de l'ERC de Michael Harris durant les premiers temps de ce projet. N.B. tient à remercier Olivier Benoist pour de nombreuses discussions autour du chapitre 3. Merci à Emma Bergeron pour les dessins. Enfin, c'est un plaisir de remercier Henri Darmon, Clément Dupont, Javier Fresan, Peter Xu et les rapporteurs anonymes pour leurs nombreux commentaires. \numberwithin{equation}{chapter} \numberwithin{section}{chapter} \resettheoremcounters \chapter{Construction de cocycles : aspects topologiques} \label{C:1} \section{Résumé} Le cocycle $\mathbf{S}$ discuté en introduction est relié aux ``cocycles d'Eisenstein'' qui interviennent sous différentes formes dans la littérature, par exemple dans \cite{Sczech93,Nori,CD,CDG}. Un cocycle très proche est en fait explicitement considéré par Sczech dans \cite{Sczech92,Sczech93}. Le premier but de ce texte est de donner une construction générale de cocycles de ``type Sczech'' et de montrer qu'ils ont une source topologique commune. La méthode utilisée consiste à relever certaines classes de cohomologie dans $H^{2n-1}(X^*)$, où $X$ est un $G$-espace et $X^*$ est égal à $X$ privé d'un nombre fini de ses points, en des classes de cohomologie {\'e}quivariante dans $H_{G}^{2n-1}(X^* )$. Elle rappelle la méthode proposée par Quillen \cite{Quillen} pour calculer la cohomologie d'un groupe linéaire sur un corps fini. En pratique on considère essentiellement trois cas~: \begin{itemize} \item Le cas \emph{additif} (ou \emph{affine}). Dans ce cas $X = \mathbf{C}^n$, l'espace épointé $X^*$ est égal à $X$ privé du singleton $D=\{ 0 \}$ et $G = \mathrm{GL}_n (\mathbf{C})^\delta$, le groupe des transformations lin{\'e}aires de $\mathbf{C}^n$, consid{\'e}r{\'e} comme un groupe discret. \item Le cas \emph{multiplicatif} (ou \emph{trigonométrique}). 
Dans ce cas $X = \mathbf{C}^n / \mathbf{Z}^n$, l'espace épointé $X^*$ est égal à $X$ privé d'un cycle $D$ de degré $0$ constitué de points de torsion et $G$ est un sous-groupe de $\mathrm{GL}_n (\mathbf{Z})$ qui préserve $D$. \item Le cas \emph{elliptique}. Dans ce cas $X=E^n$, où $E$ est une courbe elliptique ou une famille de courbes elliptiques, l'espace épointé $X^*$ est égal à $X$ privé d'un cycle $D$ de degré $0$ constitué de points de torsion et $G$ est un sous-groupe d'indice fini de $\mathrm{GL}_n (\mathbf{Z})$, ou $\mathrm{GL}_n (\mathcal{O})$ si $E$ est à multiplication par $\mathcal{O}$, qui préserve $D$. \end{itemize} Dans chacun de ces cas l'action naturelle (linéaire) de $G$ sur $X$ donne lieu à un fibré d'espace total $$EG \times_G X$$ au-dessus de l'espace classifiant $BG=EG/G$. L'isomorphisme de Thom associe alors à $D$ une classe dans $H^{2n} ( EG \times_G X , EG \times_G X^* )$. La construction de Borel identifie ce groupe au groupe de cohomologie {\it équivariante} $H_G^{2n} (X , X^*)$. On renvoie à l'annexe \ref{A:A} pour plus de détails sur la cohomologie équivariante; elle généralise à la fois la cohomologie des groupes et la cohomologie usuelle. La construction de Borel permet de retrouver les propriétés usuelles, comme par exemple associer une suite exacte longue à une paire de $G$-espaces. Dans la suite on considère $$[D] \in H_G^{2n} (X , X^*)$$ et la suite exacte longue associée à la paire $(X,X^*)$~: \begin{equation} \label{suiteexacte0} H_G^{2n-1} (X ) \to H_G^{2n-1} ( X^*) \to H_G^{2n} (X , X^*) \to H_G^{2n} (X). \end{equation} L'origine topologique de nos cocycles repose alors sur le fait suivant~: \medskip \noindent {\bf Fait.} {\it La classe $[D] \in H_G^{2n} (X , X^*)$ admet un relevé (privilégié) $$E[D] \in H_{G}^{2n-1}(X^*).$$ } \medskip Nous démontrons ce fait au cas par cas dans les paragraphes qui suivent.
Dans le cas affine il résulte du fait qu'un fibré vectoriel complexe plat possède une classe d'Euler rationnelle triviale, alors que dans le cas elliptique on le déduit d'un théorème de Sullivan qui affirme que la classe d'Euler rationnelle d'un fibré vectoriel à groupe structural contenu dans $\mathrm{SL}_n (\mathbf{Z})$ est nulle. L'étape suivante part d'une remarque générale~: supposons que $Y/\mathbf{C}$ soit une vari\'et\'e {\em affine}, de dimension $n$, sur laquelle opère un groupe $G$, et supposons donnée une classe de cohomologie équivariante $\alpha \in H_G^{2n-1}(Y(\mathbf{C}))$. Puisque $H^i(Y(\mathbf{C}))$ s'annule pour $i > n$, la suite spectrale pour la cohomologie \'equivariante donne une application $$H_G^{2n-1}(Y(\mathbf{C})) \rightarrow H^{n-1}(G, H^{n}(Y(\mathbf{C}))).$$ Elle permet donc d'associer à $\alpha$ une classe de cohomologie du groupe $G$. Dans les cas qui nous intéressent la variété $X^*$ n'est pas affine, mais on peut restreindre la classe $E[D]$ \`a un ouvert affine $U$. On voudrait aussi que $U$ soit invariant par $G$; mais un tel $U$ n'existe pas. Dans le cas additif où $X = \mathbf{C}^n$, on peut toutefois formellement prendre $U := `` \mathbf{C}^n - \bigcup_{\ell} \ell^{-1}(0)$''. Plus précisément, étant donné un ensemble fini $L$ de fonctionnelles affines, on pose $U_L = \mathbf{C}^n - \cup_{\ell \in L} \ell^{-1}(0)$. En regardant $U$ comme la limite inverse des $U_L$, on associe à $E[D]$ une classe dans le groupe $$H^{n-1}(G, \varinjlim_{L} H^n(U_L )).$$ La dernière étape de notre construction consiste à représenter $\varinjlim_{L} H^j(U_L)$ par des formes méromorphes. Dans le cas affine cela résulte d'un théorème célèbre de Brieskorn \cite{Brieskorn}~: $$\varinjlim_{L} H^j(U_L ) = \begin{cases} 0, \ j > n, \\ \Omega^n_{\mathrm{aff}}, \ j = n, \end{cases} $$ où $\Omega^n_{\mathrm{aff}} \subset \Omega^n_{\mathrm{mer}} (\mathbf{C}^n )$ est une sous-algèbre de formes méromorphes, voir Définition~\ref{def:Omegamer}.
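Les relations dans l'algèbre engendrée par les formes $\omega_\ell = \frac{1}{2i\pi}\frac{d\ell}{\ell}$ contiennent notamment la relation d'Arnold~: si $\ell_3 = \ell_1 - \ell_2$, alors $\omega_{\ell_1}\wedge\omega_{\ell_2} + \omega_{\ell_2}\wedge\omega_{\ell_3} + \omega_{\ell_3}\wedge\omega_{\ell_1} = 0$. L'esquisse Python suivante vérifie cette identité en dimension $n=2$ par un calcul exact en coordonnées (le facteur $(2i\pi)^{-2}$, commun aux trois termes, est omis).

```python
from fractions import Fraction
from itertools import product

def omega(l):
    # 1-forme dlog(l) pour l(x, y) = a*x + b*y : coefficients (a/l, b/l) sur (dx, dy).
    a, b = l
    def coeffs(x, y):
        v = a * x + b * y
        return (Fraction(a, 1) / v, Fraction(b, 1) / v)
    return coeffs

def wedge(w1, w2, x, y):
    # Coefficient de dx ∧ dy du produit extérieur w1 ∧ w2 au point (x, y).
    f1, f2 = w1(x, y)
    g1, g2 = w2(x, y)
    return f1 * g2 - f2 * g1

# Triple de formes linéaires dépendantes : l3 = l1 - l2.
l1, l2 = (2, 1), (1, -1)
l3 = (l1[0] - l2[0], l1[1] - l2[1])
w1, w2, w3 = omega(l1), omega(l2), omega(l3)

# Identité d'Arnold testée en plusieurs points rationnels hors des trois droites.
for x, y in product([Fraction(1), Fraction(2), Fraction(5, 3)], repeat=2):
    if all(a * x + b * y != 0 for a, b in (l1, l2, l3)):
        s = wedge(w1, w2, x, y) + wedge(w2, w3, x, y) + wedge(w3, w1, x, y)
        assert s == 0
```

Le calcul se réduit à l'identité $\det(\ell_1,\ell_2)\,(\ell_3 - \ell_1 + \ell_2) = 0$ sur le complémentaire des trois droites; une vérification en suffisamment de points rationnels atteste donc l'identité de formes rationnelles.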
Le chapitre \ref{S:OrlikSolomon} est consacré à la démonstration d'un résultat de ce type dans les cas multiplicatif et elliptique. En admettant pour l'instant ce théorème ``de type Brieskorn'', on consacre le présent chapitre à détailler la construction esquissée ci-dessus. Elle conduit à des classes $$\mathbf{S}[D] \in H^{n-1} (G , \Omega^n_{\rm mer} (X )).$$ \section{Le cocycle additif} \subsection{Une classe de cohomologie \'equivariante} \label{S21} Soit $G= \mathrm{GL}_n (\mathbf{C})^\delta$, le groupe des transformations lin{\'e}aires de $\mathbf{C}^n$, consid{\'e}r{\'e} comme un groupe discret. La repr{\'e}sentation linéaire\footnote{On identifie donc $\mathbf{C}^n$ à l'espace des vecteurs colonnes.} $$G \rightarrow \mathrm{GL} (\mathbf{C}^n ); \quad g \mapsto ( z \in \mathbf{C}^n \mapsto gz)$$ donne lieu {\`a} un fibr{\'e} vectoriel $\mathcal{V}$, d'espace total $EG \times_G \mathbf{C}^n$, sur l'espace classifiant $BG$. On renvoie à l'annexe \ref{A:A} pour des rappels sur les espaces classifiants, les espaces simpliciaux et la cohomologie équivariante. Rappelons juste ici que si $X$ est un espace topologique muni d'une action continue de $G$, on a $$H^*_G (X) = H^* (EG \times_G X) .$$ Lorsque $X$ est contractile, ce groupe se réduit à $H^* (BG)=H^* (G)$, la cohomologie du groupe $G$. On peut considérer la classe de Thom du fibré $\mathcal{V}$~: $$u \in H^{2n}_G ( \mathbf{C}^n , \mathbf{C}^n - \{ 0 \} )$$ à coefficients dans $\mathbf{C}$. Dans la suite exacte $$\xymatrix{ H_G^{2n-1} (\mathbf{C}^n - \{ 0 \}) \ar[r] & H_G^{2n} (\mathbf{C}^n , \mathbf{C}^n - \{ 0 \} ) \ar[r]^{ \quad \quad c} & H_{G}^{2n} (\mathbf{C}^n )},$$ l'image de $u$ par l'application $c$ est la classe d'Euler \'equivariante $$e(\mathcal{V}) = c_n(\mathcal{V}) \in H^{2n}(BG, \mathbf{C}),$$ qui est nulle parce que $\mathcal{V}$ est plat. On peut donc relever la classe $u$ en une classe dans $H_G^{2n-1} (\mathbf{C}^n - \{ 0 \})$.
\medskip \noindent {\it Remarque.} Ce relev\'e n'est pas unique, mais on peut consid\'erer la suite exacte $$H^{2n-1} (G) \to H^{2n-1}_G (\mathbf{C}^n - \{ 0 \} ) \to H^{2n-1} (\mathbf{C}^n -\{ 0 \})$$ associée à la fibration $\mathcal{V}^* \to BG$, où $\mathcal{V}^*$ désigne le complémentaire de la section nulle dans $\mathcal{V}$. Chaque relev\'e de $u$ dans $H_G^{2n-1} (\mathbf{C}^n - \{ 0 \})$ s'envoie sur la classe fondamentale dans $H^{2n-1} (\mathbf{C}^n -\{ 0 \})$. \medskip Le quotient $EG \times_G (\mathbf{C}^n - \{ 0 \})$ est une \emph{variété simpliciale}, c'est-à-dire un ensemble semi-simplicial dont les $m$-simplexes $$(EG \times_G (\mathbf{C}^n - \{ 0 \}))_m = (EG_m \times (\mathbf{C}^n - \{ 0 \})) / G$$ sont des variétés et dont les applications de faces et de dégénérescences sont lisses. La \emph{réalisation grossière} $\| EG \times_G (\mathbf{C}^n - \{ 0 \}) \|$ de cette variété simpliciale est l'espace topologique $$\| EG \times_G (\mathbf{C}^n - \{ 0 \}) \| = \sqcup_{m \geq 0} \Delta_m \times (EG \times_G (\mathbf{C}^n - \{ 0 \}))_m / \sim.$$ Ici $\Delta_m$ désigne le $m$-simplexe standard et les identifications sont données par \begin{equation*} (\sigma^i (t) , x) \sim (t , \sigma_i (x)), \quad t \in \Delta_{m-1} , \ x \in (EG \times_G (\mathbf{C}^n - \{ 0 \}))_m, \ i \in \{ 0 , \ldots , m \}, \end{equation*} où $\sigma^i : \Delta_{m-1} \to \Delta_m$ désigne l'inclusion de la $i$-ème face et $\sigma_i : (EG \times_G (\mathbf{C}^n - \{ 0 \}))_m \to (EG \times_G (\mathbf{C}^n - \{ 0 \}))_{m-1}$ est l'application de face correspondante. On renvoie à l'annexe \ref{A:A} pour plus de détails sur ces objets. Retenons que l'on a une projection continue naturelle de la réalisation grossière vers la réalisation géométrique de $EG \times_G (\mathbf{C}^n - \{ 0 \})$ et que cette application est une équivalence d'homotopie. En pratique nous travaillerons avec la réalisation grossière. En tirant en arrière le relevé de $u$ on obtient la proposition suivante.
\begin{proposition} \label{P4} La classe de Thom $u$ admet un relev\'e dans $$H^{2n-1} ( \| EG \times_G (\mathbf{C}^n - \{ 0 \}) \| ).$$ \end{proposition} {\it Via} la théorie de Chern--Weil et les travaux de Mathai et Quillen \cite{MathaiQuillen}, nous construirons au paragraphe~\ref{S:61} du chapitre \ref{S:6} un relevé {\rm privilégié} de $u$ représenté par une forme différentielle. \subsection{Effacer les hyperplans} \label{S:222} \`A tout \'el\'ement $g\in G$ de premier vecteur ligne $v \in \mathbf{C}^n - \{ 0 \} $, on associe une forme linéaire $$e_1^* \circ g : \mathbf{C}^n \to \mathbf{C}; \quad z \mapsto v z.$$ Pour tout $(k+1)$-uplet $(g_0 , \ldots , g_k ) \in G^{k+1}$, on note $$U (g_0 , \ldots , g_k) = \{ z \in \mathbf{C}^n \; : \; \forall j \in \{ 0 , \ldots , k \}, \ e_1^* (g_j z) \neq 0 \}.$$ C'est un ouvert de $\mathbf{C}^n$ qui est égal au complémentaire d'un arrangement d'hyperplans~: \begin{equation} \label{hypComp} U (g_0 , \ldots , g_k) = \mathbf{C}^n - \cup_j H_j, \quad H_j = \mathrm{ker} (e_1^* \circ g_j ). \end{equation} L'action du groupe $G$ sur $\mathbf{C}^n$ préserve l'ensemble de ces ouverts~: $$g \cdot U (g_0 , \ldots , g_k) = U (g_0 g^{-1} , \ldots , g_k g^{-1}).$$ Comme les variétés \eqref{hypComp} sont affines de dimension $n$, elles n'ont pas de cohomologie en degré $>n$. Nous montrons dans l'annexe \ref{A:A} qu'il correspond alors à la classe de cohomologie fournie par la proposition \ref{P4} une classe dans $$H^{n-1} ( G , \lim_{\substack{\rightarrow \\ H_j}} H^n (\mathbf{C}^n - \cup_j H_j )).$$ \subsection{Une classe de cohomologie à valeurs dans les formes méromorphes} \'Etant donné une forme linéaire $\ell$ sur $\mathbf{C}^n$, on définit une forme différentielle méromorphe sur $\mathbf{C}^n$ par la formule \begin{equation} \omega_\ell = \frac{1}{2i \pi} \frac{d\ell }{\ell }. \end{equation} Pour tout $g \in G$, on a \begin{equation} \label{E:relg} g^* \omega_{ g \cdot \ell} = \omega_{\ell}. 
\end{equation} D'après un théorème célèbre de Brieskorn \cite[Lemma 5]{Brieskorn}, confirmant une conjecture d'Arnold, l'application naturelle $\eta \mapsto [\eta]$ de la $\mathbf{Z}$-algèbre graduée engendrée par les formes $\omega_{\ell}$ et l'identité vers la cohomologie singulière à coefficients entiers de \eqref{hypComp} est un isomorphisme d'algèbre. Cela justifie d'introduire la définition suivante dans notre contexte. \begin{definition} \label{def:Omegamer} Soit $$\Omega_{\rm aff}= \bigoplus_{p=0}^n \Omega_{\rm aff}^p$$ la $\mathbf{Z}$-algèbre graduée de formes différentielles méromorphes sur $\mathbf{C}^n$ engendrée par les formes $\omega_{\ell}$, avec $\ell \in (\mathbf{C}^n)^\vee - \{0 \}$, et par l'identité en degré $0$. \end{definition} Le th\'eor\`eme de Brieskorn implique que l'application naturelle $$\Omega_{\rm aff} \to \lim_{\substack{\rightarrow \\ H_{j}}} H^\bullet (\mathbf{C}^n - \cup_{j} H_{j} )$$ est un isomorphisme. Finalement, on a démontré~: \begin{proposition} \label{P:Sa} La classe de cohomologie fournie par la proposition \ref{P4} induit une classe \begin{equation} \label{E:Sa} S_{\rm aff} \in H^{n-1} (G , \Omega_{\rm aff}^n ). \end{equation} \end{proposition} Nous donnons deux représentants explicites de cette classe de cohomologie au chapitre suivant. \section{Les cocycles multiplicatif et elliptique} \label{S:23} On considère plus généralement une famille lisse $A \to S$ de groupes algébriques commutatifs dont les fibres sont connexes et de dimension $1$. Dans les cas multiplicatif et elliptique, chaque fibre est un groupe abélien isomorphe au groupe multiplicatif $\mathbf{G}_m$ dont le groupe des points complexes est isomorphe à $\mathbf{C}^\times = \mathbf{C} / \mathbf{Z}$, {\it via} l'application $$ \mathbf{C} \to \mathbf{C}^\times ; \quad z \mapsto q_z = e(z) := e^{2i\pi z},$$ ou à une courbe elliptique. Soit $T \to S$ le produit fibré de $n$ copies de $A$ au-dessus de $S$. 
Le groupe $\mathrm{GL}_n (\mathbf{Z})$ opère sur $T$ par multiplication matricielle~: on voit un élément $\mathbf{a} \in T$ comme un vecteur colonne $\mathbf{a}=(a_1 , \ldots , a_n)$ où chaque $a_i \in A$, et un élément $g \in \mathrm{GL}_n (\mathbf{Z})$ envoie $\mathbf{a}$ sur $g\mathbf{a}$. Soit $G$ un sous-groupe de $\mathrm{GL}_n (\mathbf{Z})$. \begin{definition} Soit $c$ un entier supérieur à $1$. Un \emph{cycle invariant de $c$-torsion} sur $T$ est une combinaison linéaire formelle à coefficients entiers de sections de $c$-torsion de $T$ qui est invariante par $G$, autrement dit un élément $$D \in H_G^0 (T[c]).$$ On dit de plus que $D$ est \emph{de degré $0$} si la somme de ses coefficients est égale à $0$. \end{definition} \medskip \noindent {\it Exemple.} Lorsque $A$ est une famille de courbes elliptiques, l'élément $$[T[c] - c^{2n} \{ 0 \} ] \in H_G^0 (T[c])$$ est un cycle invariant de $c$-torsion de degré $0$. \medskip L'isomorphisme de Thom induit un isomorphisme $$H_G^0 (T[c]) \to H_G^{2n} (T , T - T[c]);$$ on pourra se référer à \cite[Section 2]{Takagi} pour plus de détails sur cet isomorphisme et la deuxième partie du lemme ci-dessous. Considérons maintenant la suite exacte longue de la paire $(T , T-T[c])$~: \begin{equation} \label{suiteexacte} H_G^{2n-1} (T ) \to H_G^{2n-1} ( T - T[c]) \to H_G^{2n} (T , T - T[c]) \stackrel{\delta}{\to} H_G^{2n} (T). \end{equation} \medskip \begin{lem} \label{L:Sul} {\rm (1)} Dans le cas multiplicatif, l'image dans $H_G^{2n} (T)$ d'un cycle invariant de $c$-torsion $D$ sur $T$ est {\rm rationnellement} nulle. {\rm (2) (Sullivan \cite{Sullivan})} Dans le cas elliptique, un cycle invariant de $c$-torsion $D$ sur $T$ est de degré $0$ si et seulement si son image dans $H_G^{2n} (T)$, par l'application $\delta$ de la suite exacte \eqref{suiteexacte}, est {\rm rationnellement} nulle. Plus précisément, si $D$ est de degré $0$ son image est nulle dans $H_G^{2n} (T , \mathbf{Z} [1/c])$. \end{lem} \begin{proof} 1.
Commençons par considérer le cas où $D = \{0\}$. On veut montrer que son image $[0]$ dans $H_G^{2n} (T)$ est nulle. Puisque cette image est contenue dans le noyau du morphisme $$H_G^{2n} (T) \to H_G^{2n} (T- \{ 0 \})$$ induit par l'application de restriction, il suffit de montrer que son tiré en arrière par la section nulle $e=0^* [0]$ dans $H^{2n} (BG)$ est rationnellement nul. Par définition $e$ est la classe d'Euler du fibré normal de $\{0\}$ dans $EG \times_G T$ au-dessus de $BG$. Dans le cas multiplicatif il est isomorphe au fibré $$EG \times_G \mathbf{C}^n \to BG$$ qui est complexe\footnote{Ce n'est plus vrai dans le cas elliptique car $E$ peut varier au-dessus de $S$. Dans ce cas on obtient un fibré en $\mathbf{R}^{2n}$ qui, même plat, peut avoir une classe d'Euler non nulle.} et plat. Les classes de Chern de ce fibré sont donc nulles et donc la classe d'Euler $e$ aussi. Ainsi l'image de $\{0\}$ dans $H_G^{2n} (T)$ est bien triviale. Considérons maintenant le cas général où $D$ est un cycle de $c$-torsion. Son image $[c]_* (D)$, par l'application $[c]$ de multiplication dans les fibres, est égale au cycle $\{0\}$. On vient donc de montrer que l'image de la classe de $[c]_* (D)$ dans $H_G^{2n} (T)$ est nulle. L'application $[c]: T \to T$ étant un revêtement fini de degré $c^n$, le morphisme $[c]_*: H_G^{2n} (T) \to H_G^{2n} (T)$ est rationnellement injectif. L'image de $D$ dans $H_G^{2n} (T)$ est donc aussi (rationnellement) nulle. 2. Voir par exemple \cite[Lemma 9]{Takagi} pour plus de détails. \end{proof} Soit $D$ un cycle invariant de $c$-torsion sur $T$, que l'on supposera de plus de degré $0$ dans le cas où $A$ est une famille de courbes elliptiques. On peut alors relever $D$ en un élément de $H_G^{2n-1} (T - T[c])$. Toutefois, en général ce relevé n'est pas uniquement déterminé; l'ambiguïté est précisément $H_G^{2n-1} (T)$. On réduit cette ambiguïté en considérant la multiplication dans les fibres par un entier $s$, voir \cite{Faltings}.
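Le rôle de cette multiplication par $s$ s'éclaire par le calcul standard suivant, rappelé ici en esquisse~: la fibre de $T$ a le type d'homotopie d'un tore réel de dimension $m$, avec $m=n$ dans le cas multiplicatif et $m=2n$ dans le cas elliptique, et $[s]$ est un revêtement de degré $s^m$.

```latex
% Sur H^1 de la fibre, [s]^* agit par multiplication par s, donc
[s]^* = s^{q} \quad \text{sur } H^q(T, \mathbf{Q}) = \Lambda^q H^1(T, \mathbf{Q});
% comme [s]_* \circ [s]^* = \deg [s] = s^m, on en déduit
[s]_* = s^{\,m-q} \quad \text{sur } H^q(T, \mathbf{Q}).
```

La valeur propre $1$ de $[s]_*$ n'apparaît ainsi qu'en degré maximal $q=m$, ce qui explique qu'en se restreignant aux sous-espaces caractéristiques associés à la valeur propre $1$ on élimine l'essentiel de l'ambiguïté du relevé.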
La multiplication dans les fibres induit une application propre $[s] : T \to T$ qui induit à son tour une application image directe $[s]_*$ en cohomologie (équivariante). En supposant de plus $s$ premier à $c$, on a $[s]^{-1} T[c] = T[sc]$. L'immersion ouverte $$j : T - T[sc] \to T -T[c]$$ induit un morphisme $$j^* : H^\bullet (T-T[c]) \to H^\bullet (T-T[sc] ).$$ Avec un léger abus de notation, on notera simplement $[s]_*$ la composition $$\xymatrix{ H^\bullet (T-T[c]) \ar[r]^{j^*} & H^\bullet (T-T[sc] ) \ar[r]^{[s]_*} & H^\bullet (T-T[c])}$$ de l'application de restriction à $T-T[sc]$ par l'application d'image directe en cohomologie, et de même en cohomologie équivariante. On définit de même une application $$[s]_* : H^\bullet (T , T - T[c]) \to H^\bullet (T , T - T[c])$$ en cohomologie et aussi en cohomologie équivariante. Noter que, puisque $s$ est premier à $c$, on a $$[s]_* ( [T[c] - c^{2n} \{ 0 \} ] ) = [T[c] - c^{2n} \{ 0 \} ].$$ En général, quitte à augmenter $s$, on peut supposer que $[s]_* (D) = D$. Cela motive la définition suivante. \begin{definition} \label{Def1.7} Soit $$H_G^\bullet (T-T[c])^{(1)} \subset H_G^\bullet (T-T[c])$$ l'intersection, pour tout entier $s>1$ premier à $c$, des sous-espaces caractéristiques de $[s]_*$ associés à la valeur propre $1$, c'est-à-dire le sous-espace des classes de cohomologie complexes qui sont envoyées sur $0$ par une puissance de $[s]_*-1$. \end{definition} On définirait de même $H_G^\bullet (T)^{(1)}$, $H_G^\bullet ( T , T - T[c])^{(1)}$, et leurs analogues $H^\bullet (T-T[c])^{(1)}$ en cohomologie usuelle. Comme dans le cas affine, la construction de Borel permet de calculer la cohomologie équivariante de $T$, resp. $T-T[c]$, comme cohomologie d'un espace fibré au-dessus de $BG$ de fibre $T$, resp. $T-T[c]$.
On en déduit des suites spectrales compatibles à l'action de $[s]_*$~: \begin{equation} \label{SST} H^p (G , H^q (T )) \Longrightarrow H^{p+q}_G (T) \end{equation} et \begin{equation} \label{SSTm} H^p (G , H^q (T -T[c] )) \Longrightarrow H^{p+q}_G (T-T[c]). \end{equation} Dans le cas elliptique, les fibres sont compactes et $[s]_*$ agit sur $H^q (T)$ par multiplication par $s^{2n-q}$; pour $q < 2n$ c'est une puissance $s^j$ avec $j \geq 1$, de sorte que $H_G^k (T)^{(1)} = \{ 0 \}$ si $k < 2n$. Il en résulte que l'on peut projeter un relevé de $D$ sur le sous-espace caractéristique associé à la valeur propre $1$ dans $H_G^{2n-1} (T - T[c])$; on renvoie à \cite[\S 3.2]{Takagi} pour les détails. On obtient ainsi que le cycle $D$ possède un relevé canonique dans $H_G^{2n-1} (T - T[c])^{(1)}$. Dans le cas multiplicatif il n'est plus vrai que $D$ possède un relevé canonique~: le relevé n'est défini que modulo $H_G^{2n-1} (T)^{(1)}$. Comme expliqué en introduction, on voudrait maintenant restreindre cette classe à un ``ouvert affine $G$-invariant''. Un tel ouvert n'existant pas dans $T$, on considère là encore les réalisations géométriques des espaces simpliciaux correspondants. Le but du prochain paragraphe est de montrer, en procédant comme dans le cas additif, les deux théorèmes qui suivent. Dans les deux cas on note $\Omega_{\rm mer} (T)$ l'algèbre graduée des formes différentielles méromorphes sur $T$ et $\Omega (T)$ la sous-algèbre constituée des formes partout holomorphes sur $T$. \begin{theorem} \label{T:cocycleM} Supposons que $A$ soit une famille de groupes multiplicatifs. Alors, tout cycle $G$-invariant $D$ donne lieu à une classe $$S_{\rm mult} [D] \in H^{n-1} (G , \Omega^n_{\rm mer} (T))$$ qui est uniquement déterminée par $D$ modulo $\Omega^n (T)$. \end{theorem} \begin{theorem} \label{T:cocycleE} Supposons que $A$ soit une famille de courbes elliptiques. Alors, tout cycle $G$-invariant $D$ de degré $0$ donne lieu à une classe $$S_{\rm ell} [D] \in H^{n-1} (G , \Omega^n_{\rm mer} (T) )$$ qui est uniquement déterminée par $D$.
\end{theorem} \medskip \noindent {\it Remarque.} La construction permet en outre de montrer que si $\mathbf{a} \in T-T[c]$ est $G$-invariant alors $S_{\rm mult} [D]$, resp. $S_{\rm ell} [D]$, est cohomologue à une classe de cohomologie à valeurs dans les formes régulières en $\mathbf{a}$. Le point de vue topologique décrit ci-dessus mène ainsi naturellement à la construction de classes de cohomologie ``à la Sczech'' comme celle évoquée en introduction. \medskip Sous certaines conditions supplémentaires sur le cycle $D$, nous décrivons des représentants explicites des classes $S_{\rm mult} [D]$ et $S_{\rm ell} [D]$ dans le chapitre suivant. \section[Démonstration des théorèmes 1.7 et 1.8]{Démonstration des théorèmes \ref{T:cocycleM} et \ref{T:cocycleE}} \subsection{Arrangement d'hyperplans trigonométriques ou elliptiques} On fixe un groupe algébrique $A$, isomorphe au groupe multiplicatif ou à une courbe elliptique. Soit $n$ un entier naturel et soit $T=A^n$. On appelle {\it fonctionnelle affine} toute application $\chi: T \rightarrow A$ de la forme $$t_0 + \mathbf{a} \mapsto \chi_0(\mathbf{a})$$ où $t_0$ est un élément de $T_{\mathrm{tors}}$ et $\chi_0 : A^n \rightarrow A$ un morphisme de la forme $\mathbf{a} = (a_1, \dots, a_n) \mapsto \sum r_i a_i$ où les $r_i$ sont des entiers. On dit que $\chi$ est {\it primitif} si les coordonnées $ (r_1, \dots, r_n) \in \mathbf{Z}^n$ de $\chi_0$ sont premières entre elles dans leur ensemble. Dans ce cas le lieu d'annulation de $\chi$ est un translaté de l'ensemble $$\mathrm{ker}(\chi_0) := \{(a_1, \dots, a_n) \in A^n \; : \; \sum r_i a_i = 0\}.$$ Soit $\mathbf{v}_1, \dots, \mathbf{v}_{n-1}$ une base du sous-module de $\mathbf{Z}^n$ orthogonal au vecteur $\mathbf{r}$. Les $\mathbf{v}_i$ définissent une application $$A^{n-1} \longrightarrow A^n$$ qui est un isomorphisme sur son image $\mathrm{ker}(\chi_0)$. (On peut en effet se ramener au cas où $(r_1, \dots, r_n) = (0,\dots, 0, 1)$.) 
\begin{definition} On appelle \emph{hyperplan} le lieu d'annulation (ou abusivement ``noyau'') d'une fonctionnelle affine primitive. Un \emph{arrangement d'hyperplans} $\Upsilon$ est un fermé de Zariski dans $T$ réunion d'hyperplans. La taille $\# \Upsilon$ est le nombre d'hyperplans distincts de cet arrangement. \end{definition} Noter que de manière équivalente, un hyperplan est l'image d'une application $A^{n-1} \rightarrow T$ linéaire relativement à un morphisme $A^{n-1} \rightarrow A^n$ induit par une matrice entière de taille $(n-1) \times n$. \begin{lemma} \label{affine} Si $\Upsilon$ contient les lieux d'annulation de $n$ fonctionnelles affines $\chi$ dont les vecteurs associés $\mathbf{r} \in \mathbf{Z}^n$ sont linéairement indépendants, alors le complémentaire $T-\Upsilon$ est affine. Lorsque $A$ est une courbe elliptique, c'est même une équivalence. \end{lemma} \proof S'il existe $n$ fonctionnelles affines dans $\Upsilon$ dont les vecteurs de $\mathbf{Z}^n$ associés sont linéairement indépendants alors ces fonctionnelles définissent une application finie $T \rightarrow A^n$ et $\Upsilon$ est la pré-image de la réunion des axes de coordonnées dans $A^n$. Mais $(A-\{0\})^n$ est affine, et la pré-image d'une variété affine par une application finie est encore affine. Maintenant, si $A$ est une courbe elliptique et que l'espace engendré par les vecteurs des fonctionnelles affines qui définissent $\Upsilon$ est un sous-module propre $M \subset \mathbf{Z}^n$ alors la donnée d'un point de $T-\Upsilon$ et d'un vecteur non nul de $\mathbf{Z}^n$ orthogonal à $M$ définit un plongement $$A \longrightarrow T - \Upsilon.$$ Comme $A$ n'est pas affine, l'espace $T-\Upsilon$ ne l'est pas non plus.
\qed \subsection[Cohomology of hyperplane arrangements]{Dilation operators and the cohomology of hyperplane arrangements} \label{S:dil} A {\em dilation map} is any map $[s] : T \rightarrow T$ associated with an integer $s>1$ and of the form $$[s] : \mathbf{a} \mapsto s \mathbf{a}.$$ The image of a hyperplane under a dilation map is again a hyperplane. Every hyperplane is the image of a subgroup under translation by a \emph{torsion} point of $T$. Given a hyperplane arrangement, one can therefore find a dilation map $[s]$ preserving the arrangement, that is, such that $[s] \Upsilon \subset \Upsilon$. Since $[s]$ preserves $\Upsilon$, we have an open immersion $$j : T - [s]^{-1} \Upsilon \to T - \Upsilon .$$ The dilation $[s]$ induces a map from $T-[s]^{-1}\Upsilon$ to $T-\Upsilon$ with finite fibers. Slightly abusing notation, we write $[s]_*$ for the composition $$\xymatrix{ H^\bullet (T-\Upsilon) \ar[r]^-{j^*} & H^\bullet (T-[s]^{-1} \Upsilon) \ar[r]^-{[s]_*} & H^\bullet (T-\Upsilon)}$$ of the restriction map to $T-[s]^{-1} \Upsilon$ with the pushforward map of $[s]$ in cohomology. We may then state the following analogue of Definition \ref{Def1.7}. \begin{definition} \label{D:sep1} Let $$H^*(T-\Upsilon, \mathbf{C})^{(1)} \subset H^*(T-\Upsilon, \mathbf{C})$$ be the intersection, over all integers $s>1$ such that the dilation $[s]$ preserves $\Upsilon$, of the generalized eigenspaces of $[s]_*$ for the eigenvalue $1$, that is, the subspace of complex cohomology classes killed by some power of $[s]_*-1$. \end{definition} \subsection{Proof of Theorems \ref{T:cocycleM} and \ref{T:cocycleE}} We now place ourselves in the case where the smooth family $A \to S$ of commutative algebraic groups is either simply $A = \mathbf{G}_m$ or a family of elliptic curves over a modular curve.
To every row vector $v\in \mathbf{Z}^n$ whose coordinates are globally coprime corresponds the primitive linear functional $$\chi_{v} : T \to A; \ \mathbf{a} \mapsto v \mathbf{a}.$$ It follows from Lemma \ref{L:Sul} that, under the hypotheses of Theorems \ref{T:cocycleM} and \ref{T:cocycleE}, the cycle $D$ lifts to an element of $$H_G^{2n-1} (T-T[c]) = H^{2n-1} (EG \times_G (T-T[c]))$$ and hence of $$H^{2n-1} (\| EG \times_G (T-T[c]) \| ).$$ To every $(k+1)$-tuple $\mathbf{g} \in (EG)_k$ we associate an open set \begin{equation*} \label{U} U (\mathbf{g}) = \left\{ \mathbf{a} \in T \; \Bigg| \; \begin{array}{l} \forall j \in \{ 0 , \ldots , k\}, \ \forall i \in \{ 1 , \ldots , n\}, \\ \chi_{e_i g_j } (\mathbf{a}) \notin A[c] \end{array} \right\} . \end{equation*} It is the complement of a hyperplane arrangement in $T$: \begin{equation} \label{E:arrgtHyp} U (\mathbf{g}) = T - \cup_{j=0}^k \cup_{i=1}^n \cup_{a \in A[c]} \chi_{e_i g_j }^{-1} (a). \end{equation} The action of the group $G$ preserves this collection of open sets: $$h \cdot U (\mathbf{g}) = U (\mathbf{g} h^{-1}).$$ It moreover follows from Lemma \ref{affine} that the varieties \eqref{E:arrgtHyp} are affine; consequently they have no cohomology in degree $>n$. As explained in Appendix~\ref{A:A}, every element of $H^{2n-1}_G (T -T[c])$ gives rise to a cohomology class in $$H^{n-1} (G , \lim_{\substack{\rightarrow \\ \Xi }} H^n (T - \cup_{\chi \in \Xi} \cup_{a \in A[c]} \chi^{-1} (a))),$$ where $\Xi$ runs over the set of $G$-translates of the morphisms $\chi_{e_1}, \ldots , \chi_{e_n}$. Note that every class in $H^n ( U (\mathbf{g}))$ defines an element of the direct limit. In practice, our cocycles will be represented by forms that are regular on open sets $U (\mathbf{g})$.
The element of $H^{2n-1}_G (T -T[c])$ that we consider belongs to $H^{2n-1}_G (T -T[c])^{(1)}$, so we in fact obtain a cohomology class in $$H^{n-1} (G , \lim_{\substack{\rightarrow \\ \Xi }} H^n (T - \cup_{\chi \in \Xi} \cup_{a \in A[c]} \chi^{-1} (a))^{(1)}).$$ It remains to represent $$\lim_{\substack{\rightarrow \\ \Xi }} H^n (T - \cup_{\chi \in \Xi} \cup_{a \in A[c]} \chi^{-1} (a))^{(1)}$$ by meromorphic forms. This is the purpose of Chapter \ref{S:OrlikSolomon}, in which we prove a theorem ``à la Brieskorn'' in this context, cf. Theorem \ref{P:Brieskorn}. Finally, the cycle $D$ therefore gives rise to an element of $H^{n-1} (G , \Omega^n_{\rm mer} (T) )$. In the elliptic case this element is uniquely determined. This is no longer true in the multiplicative case. There $T$ is itself affine, and we deduce a commutative diagram $$\xymatrix{ H_G^{2n-1} (T)^{(1)} \ar[d] \ar[r] & H_G^{2n-1} (T-T[c] )^{(1)} \ar[d] \ar[r] & H_G^{2n} (T , T - T[c])^{(1)} \\ H^{n-1} (G , H^n (T)^{(1)} ) \ar[r] & H^{n-1} (G , \Omega^n_{\rm mer} (T)). & }$$ The class associated with $D$ in $H^{n-1} (G , \Omega^n_{\rm mer} (T) )$ is therefore only determined up to an element of $H^{n-1} (G , H^n (T)^{(1)} )$. Invoking Theorem~\ref{P:Brieskorn} once more, we identify this indeterminacy with an element of $H^{n-1} (G , \Omega^n (T) )$. To conclude, note that the remark following Theorems \ref{T:cocycleM} and \ref{T:cocycleE} is proved by starting not from the hyperplanes $\chi_{e_j}^{-1} (A[c])$, for $j \in \{1, \ldots , n \}$, translated by the elements of $G$, but from $n$ hyperplanes not passing through $\mathbf{a}$, which can be done $G$-equivariantly since $\mathbf{a}$ is $G$-invariant.
\chapter{Statements of the main results: explicit cocycles} \label{C:2} \resettheoremcounters \numberwithin{equation}{chapter} In this chapter we describe, in each of the three cases (affine, multiplicative, elliptic), explicit cocycles representing the cohomology classes constructed in the previous chapter. The proofs of the results stated here will occupy the following chapters. \section[The affine case]{The affine case: universal modular symbols and the Orlik--Solomon algebra} A celebrated theorem of Orlik and Solomon \cite{OrlikSolomon} gives a presentation, by generators and relations, of the graded algebra $\Omega_{\rm aff}$ generated by the forms $\omega_{\ell}$ and the identity. In particular, in $\Omega^n_{\rm aff}$ the relations between the degree-$n$ monomials are generated by \begin{enumerate} \item $\omega_{\ell_1} \wedge \ldots \wedge \omega_{\ell_{n}} = 0$ if $\det (\ell_1 , \ldots , \ell_{n} ) = 0$, and \item $\sum_{i=0}^n (-1)^i \omega_{\ell_0} \wedge \ldots \wedge \widehat{\omega_{\ell_i}} \wedge \ldots \wedge \omega_{\ell_n} = 0$, for all $\ell_0 , \ldots , \ell_n$ in $(\mathbf{C}^n)^\vee - \{ 0 \}$. \end{enumerate} That the above relations do hold in $\Omega^n_{\rm aff}$ is not difficult; what is remarkable is that they generate \emph{all} the relations. In this section we first explain how the mere fact that these relations hold naturally gives rise to a cocycle of $G= \mathrm{GL}_n (\mathbf{C})^\delta$ with values in $\Omega^n_{\rm aff}$. We then state a theorem relating this cocycle to the one constructed in Chapter \ref{C:1}. Finally, we explain how this cocycle is the specialization of a universal modular symbol.
\subsection{A first explicit cocycle} In general, if $X$ is a set equipped with a transitive action of $G$, if $M$ is a $G$-module, if $F : X^n \to M$ is a $G$-equivariant function satisfying $$\sum_{i=0}^{n} (-1)^i F (x_0 , \ldots , \widehat{x}_i , \ldots , x_{n} ) = 0,$$ and if $x$ is a point of $X$, then $$f_x (g_1 , \ldots , g_{n} ) := F(g_1^{-1} x , \ldots , g_{n}^{-1} x )$$ defines an $(n-1)$-cocycle of the group $G$ with values in $M$. Moreover, the cohomology class represented by this cocycle does not depend on $x$. Applying this general principle to $$X = (\mathbf{C}^n )^\vee -\{0 \}, \quad M = \Omega^n_{\rm aff}, \quad F (\ell_1 , \ldots , \ell_{n}) = \omega_{\ell_1} \wedge \ldots \wedge \omega_{\ell_{n}} \quad \mbox{and} \quad x=e_1^*,$$ we obtain a homogeneous $(n-1)$-cocycle \begin{equation} \label{cocycleSa} \mathbf{S}_{\rm aff} : G^{n} \to \Omega_{\rm aff}^n; \quad (g_1 , \ldots , g_{n} ) \mapsto \omega_{\ell_1} \wedge \ldots \wedge \omega_{\ell_{n}}, \end{equation} where $\ell_j (z) = e_1^*(g_j z)$. \medskip \noindent {\it Remark.} The cocycle thus constructed is right-homogeneous, which translates into the relation \begin{equation} \label{invSa} g^* \mathbf{S}_{\rm aff} (g_1 g^{-1} , \ldots , g_{n} g^{-1}) = \mathbf{S}_{\rm aff} (g_1 , \ldots , g_{n} ) , \end{equation} which follows from equation (\ref{E:relg}). \medskip \begin{theorem} \label{T:Sa0} The cocycle $\mathbf{S}_{\rm aff}$ represents the (nonzero) class $$S_{\rm aff} \in H^{n-1} (G , \Omega_{\rm aff}^n )$$ of Proposition \ref{P:Sa}. \end{theorem} \subsection{The Tits building and universal modular symbols} \label{S:ARuniv} We now consider the Tits building $\mathbf{T}_n$. It is the simplicial complex whose vertices are the proper nonzero subspaces of $\mathbf{C}^n$ and whose simplices are the flags of proper subspaces.
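Spelled out in the homogeneous convention, the verification of the cocycle property is a one-liner (standard, and included here only for the reader's convenience): writing $x_i := g_i^{-1} x$,
```latex
(d f_x)(g_0, \ldots, g_{n})
  \;=\; \sum_{i=0}^{n} (-1)^i\, f_x (g_0, \ldots, \widehat{g}_i, \ldots, g_{n})
  \;=\; \sum_{i=0}^{n} (-1)^i\, F (x_0, \ldots, \widehat{x}_i, \ldots, x_{n})
  \;=\; 0,
```
the last equality being exactly the relation imposed on $F$; the $G$-equivariance of $F$ is what makes $f_x$ a homogeneous cochain in the first place.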
Recall (see \S \ref{S:Tits}) that the Tits building is naturally identified with the boundary at infinity of the symmetric space associated with $\mathrm{GL}_n (\mathbf{C})$ in the geodesic compactification. By the Solomon--Tits theorem \cite{SolomonTits}, the Tits building $\mathbf{T}_n$ has the homotopy type of a wedge of $(n-2)$-spheres. Its reduced homology in degree $n-2$ is called the Steinberg module of $\mathbf{C}^n$; we therefore write $$\mathrm{St} (\mathbf{C}^n ) = \widetilde{H}_{n-2} (\mathbf{T}_n ).$$ Ash and Rudolph describe an explicit set of generators of $\mathrm{St} (\mathbf{C}^n )$, called {\it universal modular symbols}, as follows: let $v_1 , \ldots , v_n$ be nonzero vectors of $\mathbf{C}^n$. Identifying the boundary of the first barycentric subdivision $\Delta_{n-1} '$ of the standard $(n-1)$-simplex with the simplicial complex whose vertices are the proper nonempty subsets of $\{ 1 , \ldots , n \}$, we associate to the vectors $v_1 , \ldots , v_n$ the simplicial map \begin{equation} \label{app1} \partial \Delta_{n-1} ' \to \mathbf{T}_n \end{equation} sending each vertex $I \subsetneq \{ 1 , \ldots , n \}$ of $\partial \Delta_{n-1} '$ to the vertex $\langle v_i \rangle_{i \in I}$ of $\mathbf{T}_n$. The universal modular symbol $[v_1 , \ldots , v_n] \in \mathrm{St}(\mathbf{C}^n)$ is then defined as the image of the fundamental class of $\partial \Delta_{n-1} '$ under the map \eqref{app1}. By \cite[Prop. 2.2]{AshRudolph} the symbol $[v_1 , \ldots , v_n]$ satisfies the following relations. \begin{enumerate} \item It is antisymmetric (transposing two vectors changes the sign of the symbol). \item It is homogeneous of degree $0$: for every $a \in \mathbf{C}^*$, we have $[ a v_1 , \ldots , v_n] =[v_1 , \ldots , v_n]$. \item We have $[v_1 , \ldots , v_n]=0$ if $\det (v_1 , \ldots , v_n)=0$.
\item If $v_0 , \ldots, v_{n}$ are $n+1$ vectors of $\mathbf{C}^n$, then $$\sum_{j=0}^n (-1)^j [ v_0 , \ldots , \widehat{v}_j , \ldots , v_n] =0 .$$ \item If $g \in \mathrm{GL}_n (\mathbf{C})$, then $[g v_1 , \ldots , g v_n ] = g \cdot [v_1 , \ldots , v_n]$, where the dot denotes the natural action of $\mathrm{GL}_n (\mathbf{C})$ on $\mathrm{St}(\mathbf{C}^n)$. \end{enumerate} By \cite[Prop. 2.3]{AshRudolph} the universal modular symbols generate $\mathrm{St}(\mathbf{C}^n)$. Kahn and Sun \cite[Corollary 2]{KahnSun} show that the above relations in fact give a {\it presentation} of $\mathrm{St}(\mathbf{C}^n)$. As for the classical modular symbols discussed in the introduction, given a $G$-module $M$, a $G$-equivariant map $\Phi : \mathrm{St}(\mathbf{C}^n) \to M$ induces an $(n-1)$-cocycle of $G$ with coefficients in $M$: $$(g_1 , \ldots , g_n ) \mapsto \Phi (g_1^{-1} e_1 , \ldots , g_n^{-1} e_1).$$ \subsection{A second explicit cocycle} In the affine case our main result is the following. \begin{theorem} \label{T:Sa} The map \begin{multline} \label{E:applAff} \mathrm{St}(\mathbf{C}^n) \to \Omega^n_{\rm aff}; \\ \quad [v_1 , \ldots , v_n] \mapsto \left\{ \begin{array}{ll} 0 & \mbox{if } \det (v_1 , \ldots , v_n)=0, \\ \omega_{v_1^*} \wedge \ldots \wedge \omega_{v_n^*} & \mbox{otherwise}, \end{array} \right. \end{multline} where, in the second case, $(v_1^* , \ldots , v_n^*)$ denotes the basis dual to $(v_1 , \ldots , v_n)$, induces an $(n-1)$-cocycle $\mathbf{S}_{\rm aff}^*$ which again represents the (nonzero) class $S_{\rm aff}$ in $H^{n-1} (G , \Omega_{\rm aff}^n )$.
\end{theorem} The cocycle $\mathbf{S}_{\rm aff}^*$ is explicitly given by \begin{equation} \label{cocycleSa*} \mathbf{S}_{\rm aff}^* : G^{n} \to \Omega_{\rm aff}^n; \quad (g_1 , \ldots , g_{n} ) \mapsto \omega_{\ell_1} \wedge \ldots \wedge \omega_{\ell_{n}}, \end{equation} where this time $\ell_j$ is a linear form on $\mathbf{C}^n$ with kernel $\langle g_1^{-1} e_1 , \ldots , \widehat{g_j^{-1} e_1} , \ldots , g_{n}^{-1} e_1 \rangle$ (and identically zero if the $g_j^{-1} e_1$ are not in general position). It again follows immediately from equation \eqref{E:relg} of Chapter \ref{C:1} that $\mathbf{S}_{\rm aff}^*$ is right-homogeneous, in other words that it satisfies relation \eqref{invSa}. It is, however, somewhat less obvious that it indeed defines a cocycle. The two cocycles $\mathbf{S}_{\rm aff}$ and $\mathbf{S}_{\rm aff}^*$ were considered by Sczech in an unpublished note \cite{Sczechprepub}; the cocycle $\mathbf{S}_{\rm aff}$ is the starting point of an important article of Sczech \cite{Sczech93} to which we shall return. We prefer $\mathbf{S}_{\rm aff}^*$ precisely because it comes from \eqref{E:applAff}. Since $\mathbf{S}_{\rm aff}^*$ arises from the map \eqref{E:applAff}, one may think of this cocycle as a cohomology class \emph{relative} to the Tits boundary. We will make this rigorous in the course of the proof, which we detail in Chapter \ref{S:6}. Our strategy for proving Theorem \ref{T:Sa} is to start from the topological description (\ref{E:Sa}) of $S_{\rm aff}$ and to exhibit an explicit representative by means of Chern--Weil theory.
Besides showing that the cocycles $\mathbf{S}_{\rm aff}$ and $\mathbf{S}_{\rm aff}^*$ are cohomologous and originate in the lift to $H_G^{2n-1} (\mathbf{C}^n -\{ 0 \})$ of the fundamental class of $\mathbf{C}^n - \{ 0 \}$, this point of view has the advantage of generalizing naturally to the multiplicative and elliptic cases, which we discuss in the following sections. \medskip {\it Remarks.} 1. The result of Ash--Rudolph \cite[Prop. 2.2]{AshRudolph} quoted above can be restated as follows: \begin{quote} The map sending $(g_1 , \ldots , g_n ) \in G^n$ to the modular symbol $[g_1^{-1} e_1 , \ldots , g_n^{-1} e_1]$ in $\mathrm{St} (\mathbf{C}^n)$ is a homogeneous $(n-1)$-cocycle. \end{quote} In Appendix \ref{A:B} we give a topological proof of this assertion, realizing the associated cohomology class as an obstruction class; this point of view may be compared with \cite{SharifiVenkatesh}. 2. The map $$\phi : \mathrm{St} (\mathbf{C}^n ) \to \mathrm{St}( (\mathbf{C}^n )^\vee )$$ sending a symbol $[v_1 , \ldots , v_n]$ to $0$ if $\det (v_1 , \ldots , v_n)= 0$, and to $[v_1^* , \ldots , v_n^*]$ otherwise, is a $G$-equivariant isomorphism. The cocycle $\mathbf{S}_{\rm aff}$ is then deduced from the $G$-equivariant map \begin{equation} \label{R:Sa} \mathbf{S}_{\rm aff}^* \circ \phi^{-1} : \mathrm{St} ((\mathbf{C}^n )^\vee ) \to \Omega_{\rm aff}^n; \quad [\ell_1 , \ldots , \ell_n ] \mapsto \omega_{\ell_1} \wedge \ldots \wedge \omega_{\ell_n}. \end{equation} 3. The maps \eqref{R:Sa} and \eqref{E:applAff} are $G$-equivariant and surjective. Now Andrew Putman and Andrew Snowden \cite{Putman} have recently proved the irreducibility of the Steinberg representation $\mathrm{St} (\mathbf{C}^n)$. The maps \eqref{R:Sa} and \eqref{E:applAff} are therefore isomorphisms.
It follows that the relations in $\Omega^n_{\rm aff}$ are generated by the images of the Ash--Rudolph relations; we thus recover the fact that the Orlik--Solomon relations generate {\it all} the relations between the forms $\omega_\ell$ in $\Omega^n_{\rm aff}$. \section[The multiplicative case]{The multiplicative case: trigonometric differential forms and modular symbols} \label{S:2-2} We now consider the graded algebra $\Omega_{\rm mer} = \Omega_{\rm mer} ((\mathbf{C}^\times )^n)$ of meromorphic forms on the product of $n$ copies of the multiplicative group $\mathbf{C}^\times$, which we identify with the quotient $\mathbf{C}/ \mathbf{Z}$ {\it via} the map $$ \mathbf{C} / \mathbf{Z} \to \mathbf{C}^\times; \quad z \mapsto q=e^{2i\pi z}.$$ Recall that the function $\varepsilon (z) = \frac{1}{2i} \cot (\pi z)$ equals the sum --- regularized in the sense of Kronecker --- of the series $$\frac{1}{2i \pi} \sum_{m \in \mathbf{Z}} \frac{1}{z+m}.$$ Being $\mathbf{Z}$-periodic, the form $\varepsilon (z) dz$ indeed defines a meromorphic form on $\mathbf{C}/ \mathbf{Z}$. {\it Via} the identification $\mathbf{C} / \mathbf{Z} \cong \mathbf{C}^\times$ recalled above, we have $$\varepsilon (z) dz = \frac{1}{2i\pi} \frac{dq}{q-1} - \frac{1}{4i \pi} \frac{dq}{q} \quad \mbox{and} \quad dz = \frac{1}{2i\pi} \frac{dq}{q}.$$ Finally, we write $\overline{\Omega}_{\rm mer}$ for the quotient of $\Omega_{\rm mer} ((\mathbf{C}^\times )^n)$ by the subalgebra generated by the regular forms $$dz_j = \frac{1}{2i\pi} \frac{dq_j}{q_j} \quad (j \in \{ 1 , \ldots , n \} ).$$ The (left) action of $n \times n$ matrices on $\mathbf{C}^n$ (identified with column vectors) induces an action of the monoid $M_n (\mathbf{Z} )$ on $\mathbf{C}^n / \mathbf{Z}^n$, and hence an action of the group $\mathrm{SL}_n (\mathbf{Z})$.
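The displayed identity for $\varepsilon(z)\,dz$ can be checked by hand (substituting $dq = 2i\pi q \, dz$ reduces it to $\varepsilon(z) = \frac{q}{q-1} - \frac{1}{2}$) or numerically; the following short sketch, at an arbitrarily chosen sample point, does the latter.

```python
import cmath

# Check numerically that eps(z) dz = (1/2*pi*i) dq/(q-1) - (1/4*pi*i) dq/q
# under q = exp(2*pi*i*z).  Substituting dq = 2*pi*i*q dz, the right-hand
# side becomes (q/(q-1) - 1/2) dz, so the identity amounts to
#   eps(z) = q/(q-1) - 1/2.
def eps(z):
    """eps(z) = (1/2i) cot(pi z)."""
    return (1 / 2j) * cmath.cos(cmath.pi * z) / cmath.sin(cmath.pi * z)

z = 0.3 + 0.2j                      # a sample point away from the poles
q = cmath.exp(2j * cmath.pi * z)
assert abs(eps(z) - (q / (q - 1) - 0.5)) < 1e-12
```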
We begin by recalling, in this context, the definitions of the Hecke operators; then, as in the additive case, we describe a first explicit cocycle before making the link with modular symbols. \subsection{Hecke operators} Let $\Gamma$ be a finite-index subgroup of $\mathrm{SL}_n (\mathbf{Z})$ and let $S$ be a submonoid of $M_n (\mathbf{Z} )^\circ = M_n (\mathbf{Z} ) \cap \mathrm{GL}_n (\mathbf{Q})$ containing $\Gamma$. The \emph{right} action of $M_n (\mathbf{Z} )^\circ$ on $\Omega_{\rm mer}^n$ by pullback induces an action of the Hecke algebra associated with the pair $(S, \Gamma)$ on $H^{n-1} (\Gamma ,\Omega_{\rm mer}^n)$: a double coset $$\Gamma a \Gamma \quad \mbox{with} \quad a \in S$$ induces an operator --- called a Hecke operator --- on $H^{n-1} (\Gamma ,\Omega_{\rm mer}^n)$, denoted $\mathbf{T}(a)$, described as follows. Decompose the double coset $$\Gamma a \Gamma = \sqcup_j \Gamma a_j$$ into a finite union. For every $g \in \Gamma$ we may then write $$a_j g^{-1} = (g^{(j)})^{-1} a_{\sigma (j)} \quad \mbox{with} \quad \sigma \mbox{ a permutation and } g^{(j)} \in \Gamma.$$ Given a (right-homogeneous) cocycle $c : \Gamma^n \to \Omega^n_{\rm mer}$, we then set $$\mathbf{T} (a) c (g_1 , \ldots , g_n) = \sum_j a_{j}^*c (g_1^{(j)} , \ldots , g_n^{(j)});$$ this is again a (right-)homogeneous cocycle, and one can show that its cohomology class is independent of the choice of the $a_j$; see \cite{RhieWhaples}. Note moreover that an element $a \in S$ induces a map $[a] : \mathbf{C}^n / \mathbf{Z}^n \to \mathbf{C}^n / \mathbf{Z}^n$. Write $\mathrm{Div}_\Gamma$ for the (abelian) group of $\Gamma$-invariant formal integer linear combinations of torsion points of $\mathbf{C}^n / \mathbf{Z}^n$.
In what follows we write $[\Gamma a \Gamma]$ for the map $$[\Gamma a \Gamma] = \sum_j [a_j] : \mathrm{Div}_\Gamma \to \mathrm{Div}_\Gamma.$$ \subsection{Multiplicative cocycles I} Given an element $D \in \mathrm{Div}_\Gamma$, Chern--Weil theory will allow us to construct, in Chapter \ref{S:chap9}, representatives of the cohomology class $S_{\rm mult} [D] \in H^{n-1} (\Gamma , \Omega_{\rm mer}^n )$ of Theorem \ref{T:cocycleM} explicit enough to prove the following theorem in \S \ref{S:3.2.2}. \begin{theorem} \label{T:mult} Let $\chi_0 : \mathbf{C}^{n} / \mathbf{Z}^n \to \mathbf{C} / \mathbf{Z}$ be a primitive morphism. There exists a linear map $$\mathrm{Div}_\Gamma \to C^{n-1} (\Gamma , \Omega_{\rm mer}^n )^{\Gamma}; \quad D \mapsto \mathbf{S}_{\rm mult, \chi_0} [D]$$ such that \begin{enumerate} \item each cocycle $\mathbf{S}_{\rm mult , \chi_0} [D]$ represents $S_{\rm mult} [D] \in H^{n-1} (\Gamma , \Omega_{\rm mer}^n )$; \item each meromorphic differential form $\mathbf{S}_{\rm mult , \chi_0} [D] (g_1 , \ldots , g_n)$ is regular away from the affine hyperplanes passing through a point of the support of $D$ and directed by $\mathrm{ker} ( \chi_0 \circ g_j)$ for some $j \in \{1 , \ldots , n \}$; \item for every integer $s >1$, we have $$\mathbf{S}_{\rm mult , \chi_0} [[s]^*D] = [s]^* \mathbf{S}_{\rm mult , \chi_0}[D] \quad \mbox{\emph{(distribution relations)}, and}$$ \item for every $a \in S$, $$\mathbf{T} (a) \mathbf{S}_{\rm mult , \chi_0} [D] = \mathbf{S}_{\rm mult , \chi_0} [ [\Gamma a \Gamma]^* D] \quad \mbox{in} \quad H^{n-1} (\Gamma , \Omega_{\rm mer}^n ).$$ \end{enumerate} \end{theorem} \medskip \noindent {\it Example.} When $\Gamma = \mathrm{SL}_n (\mathbf{Z})$ and $S = M_n (\mathbf{Z} )^\circ$, the Hecke algebra is generated by the operators $$\mathbf{T}^{(k)}_p = \mathbf{T} (a^{(k)}_p) \quad \mbox{with} \quad a^{(k)}_p=\mathrm{diag} (\underbrace{p, \ldots , p}_{k} , 1 , \ldots , 1 ),$$ where $p$ is a
prime number and $k$ an element of $\{ 1 , \ldots , n-1 \}$. Now, the pullback of $D_0 = \{ 0 \}$ under the map $[\Gamma a^{(k)}_p \Gamma]$ is supported on the set of all $p$-torsion points, each counted with multiplicity $\left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p$, except $0$, which is counted with multiplicity $\left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p$.\footnote{Here $$ \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p = \frac{(p^n-1) \cdots (p^{n-k+1} -1)}{(p^k-1) \cdots (p-1)} = \frac{(p^n-1)(p^n-p) \cdots (p^n-p^{k-1})}{(p^k-1) (p^k-p) \cdots (p^k - p^{k-1})}$$ is the Gaussian $p$-binomial coefficient, equal to the number of $k$-dimensional subspaces of $\mathbf{F}_p ^n$.} We deduce that $$[\Gamma a^{(k)}_p \Gamma]^* D_0 = \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p [p]^*D_0 + \left( \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p \right) D_0$$ and hence that the cohomology class of $\mathbf{S}_{\rm mult, \chi_0} [D_0 ]$ is annihilated by the operator $$\mathbf{T}^{(k)}_p - \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p [p]^* - \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p + \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p.$$ We thus recover the first two points of the theorem stated in the introduction; more precisely, the link with the cocycle $\mathbf{S}$ is that for $n=2$, $$\mathbf{S}_{\rm mult , e_1^*} [D_0] (1 , g) = \mathbf{S}_{[1]} (g^{-1}) dx \wedge dy ,$$ where $x$ and $y$ are the coordinates, abscissa and ordinate, on $\mathbf{C}^2 / \mathbf{Z}^2$.
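The Gaussian $p$-binomial coefficient of the footnote is straightforward to compute from the second product formula; the following sketch (our own illustration, not from the text) evaluates it and checks a small value against the known count of subspaces.

```python
from math import prod

def gauss_binom(n, k, p):
    """Gaussian p-binomial (n choose k)_p: the number of k-dimensional
    subspaces of F_p^n, via the product formula
    (p^n - 1)(p^n - p)...(p^n - p^{k-1}) / (p^k - 1)(p^k - p)...(p^k - p^{k-1})."""
    return prod(p**n - p**i for i in range(k)) // prod(p**k - p**i for i in range(k))

# Lines through 0 in F_5^2: (5^2 - 1)/(5 - 1) = 6 of them.
assert gauss_binom(2, 1, 5) == 6
# A classical value: 35 two-dimensional subspaces of F_2^4.
assert gauss_binom(4, 2, 2) == 35
# Symmetry (n choose k)_p = (n choose n-k)_p, as for ordinary binomials.
assert gauss_binom(5, 2, 3) == gauss_binom(5, 3, 3)
```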
\medskip \subsection{Modular symbols} Let $$\Delta_n (\mathbf{Z} ) \subset \mathrm{St} (\mathbf{C}^n)$$ be the abelian subgroup generated by the symbols $$[h] = [v_1 , \ldots , v_n] \quad \mbox{where} \quad h = ( v_1 | \cdots | v_n ) \in M_n (\mathbf{Z})^\circ .$$ A \emph{modular symbol} is any element $[h] = [v_1 , \ldots , v_n]$ of $\Delta_n (\mathbf{Z})$. By definition, the $\mathbf{Z}$-module $\Delta_n (\mathbf{Z})$ equals the quotient of $\mathbf{Z} [M_n (\mathbf{Z})^\circ ]$ by the Ash--Rudolph relations (1), (2), (3) and (4). We write $I_n \subset \mathbf{Z} [\mathrm{SL}_n (\mathbf{Z})]$ for the submodule generated by the elements \begin{equation} \label{E:eltIdeal} [h] + [hR] , \quad [h] + (-1)^n [hP] \quad \mbox{and} \quad [h] + [hU] + [hU^2] \quad (h \in \mathrm{SL}_n (\mathbf{Z}) ), \end{equation} with \begin{equation*} R=(-e_2 |e_1 | e_3 | \cdots | e_n), \quad P = (e_2 | e_3 | \cdots | e_n | (-1)^{n+1} e_1) \end{equation*} and $$U= (-e_1 - e_2 | e_1 | e_3 | \cdots | e_n ).$$ Bykovskii \cite{Bykovskii} proves that \begin{equation} \label{E:byk} \Delta_n (\mathbf{Z}) = \mathbf{Z} [ \mathrm{SL}_n (\mathbf{Z}) ] / I_n . \end{equation} A finite-index subgroup $\Gamma$ of $\mathrm{SL}_n (\mathbf{Z})$ acts naturally on the left on $\Delta_n (\mathbf{Z})$ and on $\mathbf{Z}[\mathrm{SL}_n (\mathbf{Z} )]$, and \eqref{E:byk} is an identity of $\mathbf{Z} [\Gamma]$-modules. In the introduction we exhibited, in the case $n=2$, a first link between the structures of $\Delta_2 (\mathbf{Z})$ and $\Omega^2_{\rm mer}$. The general situation, for arbitrary $n$, is more subtle; it is the subject of the theorem below. We begin by naturally extending the definitions given in the introduction.
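As a quick sanity check on the matrices $R$, $P$, $U$ (for $n=2$; the choice of $n$ and the checks themselves are ours), one can verify that they lie in $\mathrm{SL}_2(\mathbf{Z})$ and that $R$ and $U$ have finite orders $4$ and $3$, consistent with the shape of the relations \eqref{E:eltIdeal}.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power(A, m):
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(m):
        P = matmul(P, A)
    return P

# For n = 2, the matrices of the text, given by their columns:
R = [[0, 1], [-1, 0]]    # (-e2 | e1)
P2 = [[0, -1], [1, 0]]   # (e2 | (-1)^{n+1} e1) with n = 2
U = [[-1, 1], [-1, 0]]   # (-e1 - e2 | e1)

I2 = [[1, 0], [0, 1]]
assert power(R, 4) == I2          # R has order 4
assert power(U, 3) == I2          # U has order 3
for M in (R, P2, U):
    assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == 1  # det = 1: M in SL_2(Z)
```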
The monoid $M_n (\mathbf{Z} )^\circ$ acts \emph{on the right} on $\mathrm{Hom} (\Delta_n (\mathbf{Z}) , \Omega_{\rm mer}^n )$ by $$\phi_{|g} ([v_1 , \ldots , v_n]) = g^* \phi ([gv_1 , \ldots , gv_n ]) \quad (\phi \in \mathrm{Hom} (\Delta_n (\mathbf{Z}) , \Omega_{\rm mer}^n ), \ g \in M_n (\mathbf{Z} )^\circ).$$ This action induces in particular an action of $\mathrm{SL}_n (\mathbf{Z})$ and, given a finite-index subgroup $\Gamma \subset \mathrm{SL}_n (\mathbf{Z})$, a \emph{modular symbol} for $\Gamma$ with values in $\Omega_{\rm mer}^n$ is an element of $$\mathrm{Hom} (\Delta_n (\mathbf{Z}) , \Omega_{\rm mer}^n )^{\Gamma }.$$ Now let $C$ be an $S$-invariant\footnote{And hence also $\Gamma$-invariant.} set of nonzero vectors of $\mathbf{Z}^n$. We write $\Delta_C \subset \Delta_n (\mathbf{Z})$ for the subgroup generated by the symbols $[v_1 , \ldots , v_n]$ with each $v_j$ in $C$. \begin{definition} A \emph{partial modular symbol} on $C$ for $\Gamma$ with values in $\Omega^n_{\rm mer}$ is an element of $$\mathrm{Hom} (\Delta_C , \Omega^n_{\rm mer} )^\Gamma.$$ \end{definition} Let $v_0 \in C$. A partial modular symbol $\phi$ gives rise to an $(n-1)$-cocycle with values in $\Omega^n_{\rm mer}$, $$c_\phi : (g_1 , \ldots , g_n) \mapsto \phi (g_1^{-1} v_0 , \ldots , g_n^{-1} v_0 ),$$ whose cohomology class does not depend on the choice of $v_0$ in $C$; in what follows we always take $v_0 = e_1$.
The Hecke operators act on partial modular symbols by $$\mathbf{T} (a) \phi = \sum_j \phi_{|a_j} \quad \left( a \in S , \ \phi \in \mathrm{Hom} (\Delta_C , \Omega^n_{\rm mer} )^\Gamma \right),$$ so that $$\mathbf{T} (a) [c_\phi] = [c_{\mathbf{T} (a) \phi} ] \in H^{n-1} (\Gamma , \Omega^n_{\rm mer}) .$$ \subsection{Multiplicative cocycles II} An element of $\mathrm{Div}_\Gamma$ is a $\Gamma$-invariant function $D : \mathbf{Q}^n / \mathbf{Z}^n \to \mathbf{Z}$ whose support is contained in a lattice of $\mathbf{Q}^n$. \begin{definition} Let $\mathrm{Div}_\Gamma^{\circ}$ be the kernel of the morphism $$\mathrm{Div}_\Gamma \to \mathbf{Z} [ (\mathbf{Q}/ \mathbf{Z})^{n-1}]$$ induced by the projection onto the last $n-1$ coordinates. \end{definition} When $D \in \mathrm{Div}_\Gamma^{\circ}$, the class $S_{\rm mult } [D]$ can be represented by a completely explicit cocycle with values in $\Omega_{\rm mer}^n$, deduced from a partial modular symbol; this is the subject of the following theorem, proved in \S \ref{S:8.3.3}. To simplify the expressions we identify, {\it via} multiplication by the invariant $n$-form $dz_1 \wedge \cdots \wedge dz_n$, the space $\Omega_{\rm mer}^n$ with $\mathcal{M} (\mathbf{C}^n / \mathbf{Z}^n)$, the space of meromorphic functions on $\mathbf{C}^n / \mathbf{Z}^n$. \begin{theorem} \label{T:mult2} Let $\Gamma$ be a finite-index subgroup of $\mathrm{GL}_n (\mathbf{Z})$ and let $C = \Gamma \cdot \mathbf{Z} e_1 \subset \mathbf{Z}^n$.
The linear map $$\mathrm{Div}_\Gamma^\circ \to \mathrm{Hom} (\Delta_C , \mathcal{M} (\mathbf{C}^n / \mathbf{Z}^n))^\Gamma ; \quad D \mapsto \mathbf{S}^*_{\rm mult} [D],$$ where \begin{multline*} \mathbf{S}^*_{\rm mult} [D] : [v_1 , \ldots , v_n] \mapsto \frac{1}{\det h} \sum_{w \in \mathbf{Q}^n / \mathbf{Z}^n} D(w) \cdot \\ \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi = w \ (\mathrm{mod} \ \mathbf{Z}^n)}} \varepsilon (v_1^* + \xi_1 ) \cdots \varepsilon (v_{n}^* + \xi_n) , \end{multline*} with $h = ( v_1 | \cdots | v_n) \in M_n (\mathbf{Z})^\circ$, is well defined and satisfies the following properties. \begin{enumerate} \item For every $D \in \mathrm{Div}_\Gamma^\circ$, the cocycle associated with $\mathbf{S}_{\rm mult}^* [D]$ represents the cohomology class $S_{\rm mult} [D] \in H^{n-1} (\Gamma , \Omega^n_{\rm mer} )$. \item For every $a \in M_n (\mathbf{Z})^\circ$ preserving $C$, we have $$\mathbf{T} (a) (\mathbf{S}^*_{\rm mult } [D] ) = \mathbf{S}^*_{\rm mult } [ [\Gamma a \Gamma]^* D].$$ \end{enumerate} \end{theorem} \medskip \noindent {\it Remark.} One can check directly on the formulas that the distribution relations $$\mathbf{S}^*_{\rm mult } [[s]^*D] = [s]^* \mathbf{S}^*_{\rm mult } [D],$$ for every integer $s>1$ and every $D \in \mathrm{Div}_\Gamma^\circ$, still hold. One can likewise show that for all $g_1, \ldots , g_n \in \Gamma$, the meromorphic function $$\mathbf{S}^*_{\rm mult } [D] (g_1 , \ldots , g_n)$$ is regular away from the affine hyperplanes passing through a point of the support of $D$ and directed by $$\langle g_1^{-1} e_1, \ldots , \widehat{g_j^{-1} e_1} , \ldots , g_n^{-1} e_1 \rangle \quad \mbox{for some } j \in \{ 1 , \ldots , n \}.$$ One could of course replace $e_1$ here by any primitive vector $v_0$, provided one takes $C=\Gamma \cdot v_0$. \medskip \noindent {\it Example.} Let $N$ be a positive integer.
Consider the group $\Gamma$ of matrices in $\mathrm{SL}_n (\mathbf{Z})$ that fix, modulo $N$, the line $\langle e_1 \rangle$ spanned by the first vector of the canonical basis of $\mathbf{Z}^n$; in what follows we denote this group by $\Gamma_0 (N,n)$, or simply $\Gamma_0 (N)$ when the dimension is unambiguous. To every formal linear combination $\delta = \sum_{d | N} n_d [d]$ of positive divisors of $N$ we associate $$D_\delta = \sum_{d | N } n_d \sum_{j=0}^{d-1} \left[ \frac{j}{d} e_1 \right] ;$$ this is an element of $\mathrm{Div}_{\Gamma_0 (N)}$, and it belongs to $\mathrm{Div}_{\Gamma_0 (N)}^\circ$ if and only if $\sum_{d | N } n_d d =0$. By definition $D_\delta = \sum_{d | N } n_d \pi_d^* D_0 $, where $D_0$ still denotes the degree-$1$ element of $\mathrm{Div}_{\Gamma_0 (N)}$ supported at $0$ and $\pi_d$ denotes the diagonal matrix $\mathrm{diag} (d, 1 , \ldots , 1)$. We therefore have $$\mathbf{S}_{\rm mult , \chi_0} [D_\delta ] = \sum_{d | N } n_d \mathbf{S}_{\rm mult , \chi_0} [\pi_d^* D_0 ] .$$ Note that $D_0$ is invariant under the subgroup $\pi_d \Gamma_0 (N) \pi_d^{-1}$ of $\mathrm{SL}_n (\mathbf{Z})$.
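Since all points of $D_\delta$ lie on the line $\langle e_1 \rangle$, its image under the projection onto the last $n-1$ coordinates is $\big(\sum_d n_d\, d\big)[0]$, which explains the membership criterion for $\mathrm{Div}^\circ$. The following sketch (a hypothetical example of ours, with $\delta = 2[1] - [2]$ and $N=2$) builds $D_\delta$ on first coordinates and checks the criterion.

```python
from fractions import Fraction

def D_delta(delta):
    """delta: dict d -> n_d.  Returns the divisor D_delta encoded as a dict
    {first coordinate j/d mod 1: multiplicity} (the other coordinates are 0)."""
    D = {}
    for d, n_d in delta.items():
        for j in range(d):
            x = Fraction(j, d)
            D[x] = D.get(x, 0) + n_d
    return D

# delta = 2[1] - [2]: here sum_d n_d * d = 2*1 - 1*2 = 0.
delta = {1: 2, 2: -1}
D = D_delta(delta)
# Projecting away the first coordinate sends every point to 0, so the image
# divisor is (sum of all multiplicities) * [0]; D lies in Div° iff it vanishes:
assert sum(D.values()) == 0
assert sum(n_d * d for d, n_d in delta.items()) == 0
```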
We shall see that for $g_1 , \ldots , g_n \in \Gamma_0 (N)$ and for every divisor $d$ of $N$, the cocycles $$\mathbf{S}_{\rm mult , \chi_0} [\pi_d^* D_0 ] (g_1 , \ldots , g_n) \quad \mbox{and} \quad \pi_d^* \mathbf{S}_{\rm mult , \chi_0} [D_0 ] (\pi_d g_1 \pi_d^{-1} , \ldots , \pi_d g_n \pi_d^{-1})$$ represent the same cohomology class in $H^{n-1} (\Gamma , \overline{\Omega}^n_{\rm mer})$, and hence that the same holds for $$\mathbf{S}_{\rm mult , \chi_0} [D_\delta ] (g_1 , \ldots , g_n) \quad \mbox{and} \quad \sum_{d | N } n_d \pi_d^* \mathbf{S}_{\rm mult , \chi_0} [D_0 ] (\pi_d g_1 \pi_d^{-1} , \ldots , \pi_d g_n \pi_d^{-1}) .$$ \medskip \noindent {\it Remark.} One recovers the theorem stated in the introduction by taking $n=2$ and $$\mathbf{S}_{\delta} (g^{-1}) = \mathbf{S}_{\rm mult , e_1^* } [D_{\delta^\vee} ] (1 , g ) \quad \mbox{and} \quad \mathbf{S}^*_{\delta} (g^{-1}) = \mathbf{S}^*_{\rm mult } [D_{\delta^\vee} ] (1 , g ),$$ with $\delta^\vee = \frac{1}{N} \sum_{d |N } n_d d' [d]$. \medskip Theorem \ref{T:mult2} implies that the cocycle $\mathbf{S}_{\rm mult , \chi_0} [D_\delta ]$, which has the advantage of being regular at most torsion points, is cohomologous to the cocycle deduced from the partial modular symbol $\mathbf{S}_{\rm mult}^* [D_\delta ]$, whose advantage is to admit a simple expression at the generic point.
Indeed, a direct computation shows that the latter associates to an element $[v_1 , \ldots , v_n ] \in \Delta_C$, where the last $n-1$ coordinates of the vectors $v_j $ are all divisible by $N$, the expression $$\sum_{d | N } n_d d \pi_d^* \mathbf{c} (v_1^{(d)} , \ldots , v_n^{(d)} ),$$ where $v_j^{(d)}$ denotes the vector of $\mathbf{Z}^n$ obtained from $v_j$ by dividing its last $n-1$ coordinates by $d$, and $$\mathbf{c} (v_1 , \ldots , v_n ) = \frac{1}{\det h} \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi \in \mathbf{Z}^n }} \varepsilon (v^*_1 + \xi_1 ) \cdots \varepsilon (v^*_{n} + \xi_n ) , $$ with, as always, $h= ( v_1 | \cdots | v_n)$. \subsection{Generalized Dedekind--Rademacher cocycles} Let $N$ be a positive integer and let $\delta = \sum_{d | N} n_d [d]$ be an integral formal linear combination of positive divisors of $N$ as in the preceding example. Since $D_\delta$ is $\Gamma_0 (N)$-invariant, it gives rise to a (regular) cocycle $\mathbf{S}_{\rm mult , e_1^*} [D_\delta ]$ of the group $\Gamma_0 (N)$. We set \begin{equation*} \mathbf{\Psi}_\delta : \Gamma_0 (N)^n \to \mathcal{M} (\mathbf{C}^n / \mathbf{Z}^n); \quad (g_1 , \ldots , g_n ) \mapsto \mathbf{S}_{\rm mult , e_1^*} [D_\delta ] (g_1 , \ldots , g_n ). \end{equation*} This is an $(n-1)$-cocycle of the group $\Gamma_0 (N)$ with values in the meromorphic functions on $\mathbf{C}^n / \mathbf{Z}^n$. Under the assumption that $\delta$ has degree $0$, that is $\sum_{d | N} n_d =0$, the points of $D_\delta$ all have a nonzero first coordinate in $\frac{1}{N} \mathbf{Z} / \mathbf{Z}$. It therefore follows from Theorem \ref{T:mult} that the image of $\mathbf{\Psi}_\delta$ is contained in the functions that are regular at $0$.
\begin{proposition} \label{P:DRgen} The map $$\Phi_\delta : \Gamma_0 (N )^n \to \mathbf{C}; \quad (g_1 , \ldots , g_n ) \mapsto \left[ \mathbf{\Psi}_\delta (g_1 , \ldots , g_n ) \right] (0) $$ defines a (scalar-valued) $(n-1)$-cocycle representing a nonzero cohomology class $[\Phi_\delta] \in H^{n-1} (\Gamma_0 (N) , \mathbf{Q})$ such that \begin{enumerate} \item the class $d_n [\Phi_\delta ]$, where $2d_n$ denotes the denominator of the $n$-th Bernoulli number, is integral, and \item for every prime number $p$ not dividing $N$ and every $k \in \{ 0 , \ldots , n-1 \}$, one has $$\mathbf{T}_p^{(k)} [\Phi_\delta ] = \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p + \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p \right) \cdot [\Phi_\delta].$$ \end{enumerate} \end{proposition} The cocycles $\Phi_\delta$ generalize to $\mathrm{SL}_n (\mathbf{Z})$ the Dedekind--Rademacher cocycles for $\mathrm{SL}_2 (\mathbf{Z} )$. If $F$ is a totally real number field of degree $n$ over $\mathbf{Q}$ and $L$ is a fractional ideal of $F$, one may consider the group $U$ of totally positive units of $\mathcal{O}_F$ preserving $L$. Choosing an identification of $L$ with $\mathbf{Z}^n$, and hence of $F$ with $\mathbf{Q}^n$, embeds $U$ into $\mathrm{GL}_n (\mathbf{Z})$ {\it via} the regular representation of $U$ on $L$. In the last paragraph of Chapter \ref{S:chap9} we show that the evaluation of $\Psi_\delta$ on the fundamental class in $H_{n-1} (U , \mathbf{Z})$ equals the value at $0$ of a stabilization of a partial zeta function of $F$. Theorem \ref{T:mult2} then allows one to recover the expressions of these values as the explicit Dedekind sums obtained by Sczech \cite{Sczech93}; see also \cite{CD,CDG}.
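The coefficients $\left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p$ appearing here and in the Hecke computations below are Gaussian binomial coefficients, counting the $k$-dimensional subspaces of $\mathbf{F}_p^n$ (this is how they enter as multiplicities later on). As a sketch, the count and the "number of $k$-planes containing a fixed subspace" interpretation can be checked by brute force; the function names `gauss_binom` and `subspaces` are ours:

```python
from itertools import product

def gauss_binom(n, k, p):
    """Gaussian binomial (n choose k)_p: the number of k-dimensional
    subspaces of F_p^n.  Returns 0 for k < 0 or k > n."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= p**(n - i) - 1
        den *= p**(i + 1) - 1
    assert num % den == 0  # the quotient is always an integer
    return num // den

def subspaces(n, k, p):
    """Brute-force the set of k-dimensional subspaces of F_p^n,
    each represented as a frozenset of its vectors."""
    vectors = list(product(range(p), repeat=n))
    found = set()
    for gens in product(vectors, repeat=k):
        span = {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % p
                      for i in range(n))
                for coeffs in product(range(p), repeat=k)}
        if len(span) == p**k:  # generators were linearly independent
            found.add(frozenset(span))
    return found

# the number of k-planes of F_2^3 matches the Gaussian binomial
assert len(subspaces(3, 1, 2)) == gauss_binom(3, 1, 2) == 7
assert len(subspaces(3, 2, 2)) == gauss_binom(3, 2, 2) == 7

# a fixed line of F_2^3 lies in (2 choose 1)_2 = 3 planes: this is the
# "number of k-planes containing a given line" used as a multiplicity below
line = next(iter(subspaces(3, 1, 2)))
assert sum(1 for W in subspaces(3, 2, 2) if line <= W) == gauss_binom(2, 1, 2) == 3
```

For instance $\left( \begin{smallmatrix} 2 \\ 1 \end{smallmatrix} \right)_p = p+1$, the value that appears repeatedly in the double coset computations of this section.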
\subsection{Relation to Sczech's cocycle} The cocycles $\mathbf{S}^*_{\rm mult} [D]$ ($D \in \mathrm{Div}_\Gamma^\circ$) are very close to Sczech's cocycle \cite[Corollary p. 598]{Sczech93}. There are nevertheless a few notable differences: \begin{enumerate} \item For the sake of simplicity we have restricted ourselves here\footnote{But not in \cite{ColmezNous}, which is motivated by the study of all the critical values of the corresponding $L$-functions.} to cohomology with trivial, untwisted coefficients. In Sczech's notation this amounts to taking $P=1$ and $v=0$. \item The hypothesis $D \in \mathrm{Div}_\Gamma^\circ$ leads to a stabilization of Sczech's cocycle which makes its parameter $Q$ disappear, at the cost of raising the level, but which is only defined at a \emph{generic} point of the fiber $\mathbf{C}^n / \mathbf{Z}^n$. \end{enumerate} \section[The elliptic case]{The elliptic case: elliptic differential forms and modular symbols} Consider now an elliptic curve $E$ over a base $Y$ and the graded algebra $\Omega_{\rm mer} (E^n)$ of meromorphic forms on the fiber product $$E^n = E \times_Y \cdots \times_Y E$$ of $n$ copies of $E$. An integral matrix induces, by left multiplication, a map $E^n \to E^n$. Here again we therefore obtain an action of the semigroup $M_n (\mathbf{Z})^\circ = M_n (\mathbf{Z} ) \cap \mathrm{GL}_n (\mathbf{Q})$ on $\Omega_{\rm mer} (E^n)$. \subsection{Hecke operators} Let $\Gamma$ be a finite index subgroup of $\mathrm{SL}_n (\mathbf{Z})$ and let $S$ be a submonoid of $M_n (\mathbf{Z} )^\circ$ containing $\Gamma$. The \emph{right} action of $M_n (\mathbf{Z} )^\circ$ on $\Omega_{\rm mer} (E^n)$ by pullback again induces an action of the Hecke algebra associated with the pair $(S, \Gamma)$ on $H^{n-1} (\Gamma ,\Omega_{\rm mer} (E^n))$.
We again write $\mathbf{T}(a)$ for the Hecke operator on $H^{n-1} (\Gamma ,\Omega_{\rm mer} (E^n))$ associated with the double coset $$\Gamma a \Gamma \quad \mbox{with} \quad a \in S.$$ Note moreover that an element $a \in S$ induces a map $[a] : E^n \to E^n$. Let $\mathrm{Div}_\Gamma(E^n)$ denote the (abelian) group of $\Gamma$-invariant formal integral linear combinations of torsion points in $E^n$. In what follows we write $[\Gamma a \Gamma]$ for the map $$[\Gamma a \Gamma] = \sum_j [a_j] : \mathrm{Div}_\Gamma(E^n) \to \mathrm{Div}_\Gamma(E^n).$$ Suppose now that the base $Y$ is a modular curve $\Lambda \backslash \mathcal{H}$ with $\Lambda$ a finite index subgroup of $\mathrm{SL}_2 (\mathbf{Z})$. Through the identification \begin{equation} \label{E:ident} \mathcal{H} \times \mathbf{R}^2 \stackrel{\simeq}{\longrightarrow} \mathcal{H} \times \mathbf{C}; \quad (\tau , (u,v)) \mapsto (\tau , u \tau +v ) \end{equation} the action of $\mathbf{Z}^2$ on $\mathbf{R}^2$ by translation induces an action of $\mathbf{Z}^2$ on $\mathcal{H} \times \mathbf{C}$ given by $$(\tau , z ) \stackrel{(m,n)}{\longmapsto} (\tau , z+ m \tau + n).$$ The group $\mathrm{SL}_2 (\mathbf{R})$ acts on the left on $\mathcal{H} \times \mathbf{R}^2$ by $$(\tau , (u,v)) \stackrel{g}{\longmapsto} (g \tau , (u,v) g^{-1} ).$$ Through \eqref{E:ident} this action becomes $$(\tau , z) \stackrel{\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)}{\longmapsto} \left( \frac{a\tau +b}{c\tau +d} , \frac{z}{c\tau + d} \right).$$ The elliptic curve $E$, and more generally the fiber product $E^n$, are obtained as double quotients \begin{equation} E^n= \Lambda \backslash \left[ ( \mathcal{H} \times \mathbf{C}^n) / \mathbf{Z}^{2n} \right] = \Lambda \backslash \left[ \mathcal{H} \times (\mathbf{R}^{2n} / \mathbf{Z}^{2n}) \right]. \end{equation} Consider now a submonoid $\Delta \subset M_2 (\mathbf{Z} )^\circ$ containing $\Lambda$.
The action of $\Lambda$ on \eqref{E:ident} extends to the monoid $\Delta$ by the formula $$(\tau , (u,v)) \stackrel{g}{\longmapsto} (g \tau , \det (g) (u,v) g^{-1} ).$$ This action preserves the lattice $\mathbf{Z}^2$ in $\mathbf{R}^2$, so that every element $g \in \Delta$ induces a map $$[g] : \mathcal{H} \times \mathbf{R}^{2n}/ \mathbf{Z}^{2n} \to \mathcal{H} \times \mathbf{R}^{2n} /\mathbf{Z}^{2n}.$$ A double coset $$\Lambda b \Lambda \quad \mbox{with} \quad b \in \Delta$$ then induces an operator $$[\Lambda b \Lambda] : \Omega^n_\mathrm{mer}(E^n) \to \Omega^n_\mathrm{mer}(E^n),$$ again called a Hecke operator, on $\Omega_{\rm mer}^n (E^n)$ and hence also on $H^{n-1} (\Gamma ,\Omega_{\rm mer} (E^n))$; we denote it simply by $T(b)$. \subsection{Elliptic cocycles I} \label{S:ellI} Let $\mathrm{Div}_\Gamma^0 (E^n)$ be the group of $\Gamma$-invariant formal integral linear combinations \emph{of degree $0$} of torsion points in $E^n$. Given an element $D \in \mathrm{Div}_\Gamma^0 (E^n )$, Chern--Weil theory will allow us to construct, in Chapter \ref{S:chap10}, representatives of the cohomology class $S_{\rm ell} [D] \in H^{n-1} (\Gamma , \Omega_{\rm mer} (E^n) )$ of Theorem \ref{T:cocycleE} explicit enough to prove the theorem below. In what follows we fix $n$ linearly independent primitive morphisms $$\chi_1 , \ldots , \chi_n : \mathbf{Z}^n \to \mathbf{Z}.$$ We still write $\chi_j : \mathbf{Q}^n \to \mathbf{Q}$ for the corresponding linear forms and $\chi_j : E^n \to E$ for the primitive morphisms they induce. We set $$\chi = (\chi_1 , \ldots , \chi_n).$$ The following theorem is proved in \S \ref{S:9.3.3}.
\begin{theorem} \label{T:ell} There exists a linear map $$\mathrm{Div}_\Gamma^0 (E^n ) \to C^{n-1} (\Gamma , \Omega^n_{\rm mer}(E^n) )^{\Gamma}; \quad D \mapsto \mathbf{S}_{\rm ell , \chi} [D]$$ such that \begin{enumerate} \item for every integer $s >1$, one has $\mathbf{S}_{\rm ell , \chi} [[s]^*D] = [s]^* \mathbf{S}_{\rm ell , \chi} [D]$ (\emph{distribution relations}); \item each cocycle $\mathbf{S}_{\rm ell , \chi} [D]$ represents $S_{\rm ell} [D] \in H^{n-1} (\Gamma , \Omega_{\rm mer}^n (E^n) )$; \item each meromorphic differential form $\mathbf{S}_{\rm ell , \chi} [D] (g_1 , \ldots , g_n)$ is regular away from the affine hyperplanes passing through a point of the support of $D$ and directed by $\mathrm{ker} ( \chi_i \circ g_j)$ for $i,j \in \{1 , \ldots , n \}$; \item for every $a \in S$, $$\mathbf{T} (a) \left[\mathbf{S}_{\rm ell , \chi} [D] \right] = \left[\mathbf{S}_{\rm ell , \chi } [ [\Gamma a \Gamma]^* D] \right];$$ \item for every $b \in \Delta$, $$T (b) \left[ \mathbf{S}_{\rm ell , \chi } [D] \right] = \left[ \mathbf{S}_{\rm ell , \chi } [ [\Lambda b \Lambda]^* D] \right].$$ \end{enumerate} \end{theorem} \medskip \noindent {\it Example.} Let $c$ be an integer at least $2$. The linear combination of $c$-torsion points $E^n [c] - c^{2n} \{0 \}$ in $E^n$ has degree $0$ and is invariant under $\Gamma = \mathrm{SL}_n (\mathbf{Z})$; it therefore defines an element $D_c \in \mathrm{Div}_\Gamma^0 (E^n)$. The Hecke algebra associated with the pair $(S , \Gamma)$, with $S = M_n (\mathbf{Z} )^\circ$, is generated by the operators $$\mathbf{T}^{(k)}_p = \mathbf{T} (a^{(k)}_p) \quad \mbox{with} \quad a^{(k)}_p=\mathrm{diag} (\underbrace{p, \ldots , p}_{k} , 1 , \ldots , 1 ),$$ where $p$ is prime and $k$ belongs to $\{ 1 , \ldots , n-1 \}$. Let $p$ be a prime number.
The canonical basis of $\mathbf{C}$ above the point $\tau=i \in \mathcal{H}$ provides a basis of $E[p]$ and allows us to identify $E^n [p]$ with the abelian group of matrices $M_{n,2} (\mathbf{F}_p)$. The pullback of $\{ 0 \}$ under the map $[\Gamma a^{(k)}_p \Gamma]$ is supported on the set of all $p$-torsion points, and a matrix in $M_{n,2} (\mathbf{F}_p)$ is counted with multiplicity\footnote{Equal to the number of $k$-planes containing a given $2$-plane. Note that this number is $0$ if $k \leq 1$; in that case only matrices of rank $\leq 1$ occur in the support.} $\left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p$ if it has rank $2$, with multiplicity\footnote{Equal to the number of $k$-planes containing a given line.} $\left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p$ if it has rank $1$, and with multiplicity $\left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p$ if it is zero. On the other hand one may consider the double coset $\Lambda \left( \begin{smallmatrix} p & 0 \\ 0 & 1 \end{smallmatrix} \right) \Lambda$ with $$\Lambda = \mathrm{SL}_2 (\mathbf{Z}).$$ The preimage of $\{ 0 \}$ under the induced map $E^n \to E^n$ is this time equal to the set of rank $1$ matrices counted with multiplicity $1$, together with the zero matrix counted with multiplicity $\left( \begin{smallmatrix} 2 \\ 1 \end{smallmatrix} \right)_p = p+1$.
We deduce that for $p$ prime to $c$ one has \begin{multline*} [\Gamma a_p^{(k)} \Gamma]^* D_c = \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p [p]^*D_c + \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) [ \Lambda \left( \begin{smallmatrix} p & 0 \\ 0 & 1 \end{smallmatrix} \right) \Lambda ]^* D_c \\ + \left( \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p - \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) (p+1) - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) D_c \end{multline*} and hence that the cohomology class of $\mathbf{S}_{\rm ell , \chi} [D_c]$ is annihilated by the operator \begin{multline*} \mathbf{T}_p^{(k)} - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p [p]^* - \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) T_p \\ - \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p + \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) (p+1) + \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p. \end{multline*} \medskip \subsection{Elliptic cocycles II} The differential $1$-form $dz$ induces a trivialization of the bundle of relative $1$-forms $\Omega^1_{E / Y}$.
We use this trivialization to identify the holomorphic sections of the bundle of relative meromorphic forms $\Omega_{{\rm mer}, E^n / Y}^n$ with the space $\mathcal{M}_n (E^n)$ of meromorphic functions $F$ on $(\mathcal{H} \times \mathbf{C}^n) / \mathbf{Z}^{2n}$ satisfying the modularity property $$F \left( \frac{a\tau +b }{c \tau +d } , \frac{z}{c\tau +d} \right) = (c\tau +d )^n F (\tau , z ) \quad \mbox{for all} \quad \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \Lambda .$$ Under certain natural hypotheses on $D$ one can, as in the multiplicative case, represent the class $S_{\rm ell} [D]$ by a completely explicit cocycle deduced from a partial modular symbol. Along a fiber $E_\tau$ above a point $[\tau] \in Y$ it is natural to replace the function $\varepsilon$ by the Eisenstein series \begin{equation} E_1 (\tau , z) = \frac{1}{2i\pi} \sideset{}{{}^e}\sum_{\omega \in \mathbf{Z} + \mathbf{Z} \tau} \frac{1}{z+\omega} = \frac{1}{2i\pi} \lim_{M \to \infty} \sum_{m=-M}^M \left( \lim_{N \to \infty} \sum_{n=-N}^N \frac{1}{z+ m \tau + n } \right). \end{equation} Two problems arise, however: \begin{enumerate} \item The Eisenstein series is not periodic in $z$ with respect to the lattice $\mathbf{Z}\tau + \mathbf{Z}$ but only satisfies \begin{equation} \label{E1per} E_1 ( \tau , z + 1) = E_1 (\tau , z) \quad \mbox{and} \quad E_1 (\tau , z+ \tau) = E_1 (\tau , z ) - 1, \end{equation} see \cite[III \ \S 4 \ (5)]{Weil}. \item Nor is the Eisenstein series modular; it only satisfies, for every matrix $\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \mathrm{SL}_2 (\mathbf{Z})$, \begin{equation} \label{E1mod} E_1 \left( \frac{a\tau + b}{c\tau +d} , \frac{z}{c\tau +d} \right) = (c\tau + d) E_1 (\tau , z) + cz, \end{equation} see \cite[III \ \S 5 \ (7)]{Weil}.
\end{enumerate} To remedy these two problems we assume from now on that $Y=Y_0 (N)$, with $N$ an integer at least $2$. The curve $E$ is then equipped with a cyclic subgroup $K \subset E[N]$ of order $N$, and one may consider the function \begin{equation*} E_1^{(N)} (\tau , z) = \sum_{\xi \in K} E_1 \left( \tau , z + \xi \right) - N E_1 (\tau , z) = \sum_{j=0}^{N-1} E_1 \left( \tau , z + \frac{j}{N} \right) - N E_1 (\tau , z). \end{equation*} By \eqref{E1per} and \eqref{E1mod} this function is indeed periodic and modular with respect to the group $\Gamma_0 (N)$. The function $E_1^{(N)} (\tau , z)$ is associated with the torsion cycle $K - N \{ 0 \}$ in $E$, which is \emph{of degree $0$}. \medskip Let us now return to the fiber product $E^n$. We write $\mathrm{Div}_{\Gamma , K} (E^n) \subset \mathrm{Div}_\Gamma (E^n)$ for the subgroup of $\Gamma$-invariant formal integral linear combinations of torsion points in $K^n \subset E[N]^n$. Identifying these torsion points with the subgroup $\left( \frac{1}{N} \mathbf{Z} / \mathbf{Z} \right)^n \subset (\mathbf{Q} / \mathbf{Z} )^n$, we view an element of $\mathrm{Div}_{\Gamma , K} (E^n)$ as a $\Gamma$-invariant function $D : (\mathbf{Q} / \mathbf{Z} )^n \to \mathbf{Z}$ whose support is contained in the lattice $\left( \frac{1}{N} \mathbf{Z} \right)^n \subset \mathbf{Q}^n$. \begin{definition} Let $\mathrm{Div}_{\Gamma , K}^{\circ} (E^n)$ be the intersection of the kernels of the $n$ morphisms $$\mathrm{Div}_{\Gamma , K} (E^n) \to \mathbf{Z} [(\mathbf{Q} /\mathbf{Z})^{n-1}]$$ induced by the projection onto the last $n-1$ coordinates. \end{definition} Note that the elements of $\mathrm{Div}_{\Gamma , K}^{\circ} (E^n)$ have degree $0$. \medskip \noindent {\it Example.} Suppose $\Gamma = \Gamma_0 (N,n)$.
To any formal linear combination $\delta = \sum_{d | N} n_d [d]$ of positive divisors of $N$ we associate $$D_\delta = \sum_{d | N } n_d \sum_{\xi \in K[d]} (\xi , 0 , \ldots , 0),$$ where $K[d]$ denotes the set of $d$-torsion elements of $K$. The formal combination of torsion points $D_\delta$ defines an element of $\mathrm{Div}_{\Gamma , K}^{\circ}(E^n)$ if and only if $\sum_{d| N} n_d d =0$. \medskip Our construction produces cohomology classes with values in the meromorphic forms $\Omega_{\rm mer}^n (E^n)$. Restricting these forms to the fibers yields cohomology classes with values in the holomorphic sections of the bundle of relative meromorphic forms $\Omega^n_{{\rm mer}, E^n /Y}$, and hence in $\mathcal{M}_n (E^n )$. The following theorem, which we prove in \S \ref{S:9.4.2}, provides explicit representatives of these cohomology classes. \begin{theorem} \label{T:ellbis} Suppose $Y=Y_0 (N)$ with $N$ an integer at least $2$, and let $K \subset E[N]$ be the corresponding cyclic subgroup of order $N$. Let $\Gamma$ be a finite index subgroup of $\mathrm{SL}_n (\mathbf{Z})$ and let $C = \Gamma \cdot \mathbf{Z} e_1 \subset \mathbf{Z}^n$. The linear map $$\mathrm{Div}_{\Gamma , K}^{\circ} (E^n) \to \mathrm{Hom} (\Delta_C , \mathcal{M}_n (E^n) )^\Gamma ; \quad D \mapsto \mathbf{S}^*_{\rm ell} [D],$$ where \begin{equation} \label{E:S*ell} \mathbf{S}^*_{\rm ell} [D] : [v_1 , \ldots , v_n] \mapsto \frac{1}{\det h} \sum_{w \in E^n} D(w) \sum_{\substack{\xi \in E^n \\ h \xi = w}} E_1 (\tau , v_1^* + \xi_{1}) \cdots E_1 (\tau , v_n^* + \xi_{n}) , \end{equation} with $h = ( v_1 | \cdots | v_n) \in M_n (\mathbf{Z})^\circ$, is well defined and represents the cohomology class $$S_{\rm ell} [D] \in H^{n-1} (\Gamma , \mathcal{M}_n (E^n) ).$$ \end{theorem} The fact that $\mathbf{S}^*_{\rm ell} [D]$ indeed defines a cocycle is proved in a different way in Hao Zhang's thesis \cite[Theorem 4.1.6]{Zhang}.
That proof relies on the construction, by the second author, of an $(n-1)$-cocycle of the group $\mathrm{GL}_n (\mathbf{Z})$ with values in the meromorphic functions on $\mathcal{H} \times \mathbf{C}^n \times \mathbf{C}^n$; see \cite{CS}. \medskip \noindent {\it Remarks.} 1. The fact that the functions in the image of $\mathbf{S}^*_{\rm ell } [D]$, which are {\it a priori} only defined on $\mathcal{H} \times \mathbf{C}^n $, are in fact $\mathbf{Z}^{2n}$-invariant and modular of weight $n$ with respect to the action of $\Lambda$ is part of the statement of the theorem, but it can also be checked by hand. As in the multiplicative case one can also verify directly from the formulas that the distribution relations $$\mathbf{S}^*_{\rm ell } [[s]^*D] = [s]^* \mathbf{S}^*_{\rm ell } [D],$$ for every integer $s>1$ and every $D \in \mathrm{Div}_{\Gamma , K}^{\circ} (E^n)$, still hold, and that for all $g_1, \ldots , g_n \in \Gamma$, the meromorphic differential form $\mathbf{S}^*_{\rm ell } [D] (g_1 , \ldots , g_n)$ is regular away from the affine hyperplanes passing through a point of the support of $D$ and directed by $$\langle g_1^{-1} e_1, \ldots , \widehat{g_j^{-1} e_1} , \ldots , g_n^{-1} e_1 \rangle \quad \mbox{for some } j \in \{ 1 , \ldots , n \}.$$ 2.
For every integer $m$ prime to $N$, the Hecke operator $T_m$ corresponding to the double coset $$\Lambda\left( \begin{array}{cc} m & 0 \\ 0 & 1 \end{array} \right) \Lambda = \bigsqcup_{\substack{a, d >0 \\ ad=m}} \bigsqcup_{b=0}^{d-1} \Lambda \left( \begin{array}{cc} a & b \\ 0 & d \end{array} \right)$$ acts on a meromorphic function $F$ of the form $$F = E_1 (\tau , z_1 ) \ldots E_n (\tau , z_n),$$ by the familiar expression $$T_m F = m^n \sum_{\substack{a, d >0 \\ ad=m}} \sum_{b=0}^{d-1} \frac{1}{d^n} E_1 \left( \frac{a\tau+b}{d} , a z_1 \right) \ldots E_n \left( \frac{a\tau+b}{d} , az_n \right).$$ \medskip \medskip \noindent {\it Example.} Let $\Gamma = \Gamma_0 (N,n)$ and let $\delta = \sum_{d | N} n_d [d]$ be a formal linear combination of positive divisors of $N$ such that $\sum_{d| N} n_d d =0$. After restriction to the fibers, Theorem \ref{T:ell} associates with every $n$-tuple $\chi$ of linearly independent primitive morphisms $\mathbf{Z}^n \to \mathbf{Z}$ a regular $(n-1)$-cocycle $\mathbf{S}_{\rm ell , \chi } [D_\delta]$ of the group $\Gamma$ with values in $\mathcal{M}_n (E^n )$. Theorems \ref{T:ell} and \ref{T:ellbis} imply that this cocycle is cohomologous to the cocycle deduced from the partial modular symbol $\mathbf{S}_{\rm ell}^* [D_\delta ]$, and a computation shows that the latter associates to an element $[v_1 , \ldots , v_n ] \in \Delta_C$, where the last $n-1$ coordinates of the vectors $v_j $ are all divisible by $N$, the expression $$\sum_{d | N } n_d d \pi_d^*\mathbf{E}( \tau ; v_1^{(d)} , \ldots , v_n^{(d)} ) ,$$ where $v_j^{(d)}$ denotes the vector of $\mathbf{Z}^n$ obtained from $v_j$ by dividing its last $n-1$ coordinates by $d$, and \begin{equation*} \mathbf{E} ( \tau ; v_1 , \ldots , v_n ) = \frac{1}{\det h} \sum_{\substack{\xi \in E^n \\ h \xi = 0 }} E_1 (\tau , v_1^* + \xi_{1}) \cdots E_1 (\tau , v_n^* + \xi_{n}) , \end{equation*} with, as always, $h= ( v_1 | \cdots | v_n)$.
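As a quick consistency check (our own computation), the relations \eqref{E1per} directly give the periodicity of the function $E_1^{(N)}$ introduced above with respect to the lattice $\mathbf{Z}\tau + \mathbf{Z}$:
$$E_1^{(N)} (\tau , z + \tau) = \sum_{j=0}^{N-1} \Big( E_1 \big( \tau , z + \tfrac{j}{N} \big) - 1 \Big) - N \big( E_1 (\tau , z) - 1 \big) = E_1^{(N)} (\tau , z),$$
the $-1$ contributions of the $N$ summands cancelling against the one coming from the term $-N E_1 (\tau , z)$, while invariance under $z \mapsto z+1$ is immediate. The modularity with respect to $\Gamma_0 (N)$ follows similarly from \eqref{E1mod}.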
\subsection{Explicit theta lift} Let $N$ be an integer at least $4$ and let $E$ be the universal elliptic curve over the modular curve $Y_1 (N) = \Gamma_1 (N) \backslash \mathcal{H}$, where $$\Gamma_1 (N) = \left\{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \mathrm{SL}_2 (\mathbf{Z}) \; : \; \left( \begin{array}{c} a \\ c \end{array} \right) \equiv \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \ (\mathrm{mod} \ N ) \right\};$$ it is the fine moduli space parametrizing pairs consisting of an elliptic curve $E_\tau = \mathbf{C} / (\mathbf{Z} \tau + \mathbf{Z})$ and the $N$-torsion point $[1/N]$. Let $\Gamma$ be a congruence subgroup contained in $\Gamma_0 (N,n)$ and let $\delta = \sum_{d | N} n_d [d]$ be a combination of positive divisors of $N$ such that $\sum_{d | N} n_d d=0$. The function $D_\delta$ then belongs to $\mathrm{Div}_\Gamma^0 (E^n)$. Take $$\chi = (e_1^*, e_1^* + e_2^* , \ldots , e_1^* + e_n^*).$$ Under the assumption that $\delta$ has degree $0$, that is $\sum_{d | N} n_d =0$, the points of $D_\delta$ all have a nonzero first coordinate in $\frac{1}{N} \mathbf{Z} / \mathbf{Z}$. It therefore follows from Theorem \ref{T:ell} that the image of $\mathbf{S}_{\rm ell , \chi} [D_\delta ]$ is contained in the forms that are regular at $0$. After restricting these forms to the fibers and evaluating at $0$ we obtain a cohomology class with values in the space $M_n (Y_1 (N))$ of weight $n$ modular forms on $Y_1 (N)$. We write $\Theta [D_\delta ] : \Gamma^n \to M_n (Y_1 (N))$ for the corresponding cocycle. \begin{theorem} The cocycle $\Theta [D_\delta ]$ represents a cohomology class in $H^{n-1} (\Gamma , M_n (Y_1 (N)))$ which realizes the theta lift of weight $n$ cusp forms on $Y_1 (N)$ to the cohomology $H^{n-1} (\Gamma , \mathbf{C})$.
The lift associates with a cusp form $f \in M_n (Y_1 (N))$ the cohomology class of the cocycle $$\Theta_{f} [D_\delta ] = \langle \Theta [D_\delta ] , f \rangle_{\rm Petersson} : \Gamma^n \to \mathbf{C},$$ and for every prime $p$ not dividing $N$ and every $k\in \{ 1 , \ldots , n-1 \}$, one has \begin{multline*} \mathbf{T}_p^{(k)} \left[ \Theta_{ f} [D_\delta ] \right] = \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) \left[\Theta_{ T_p f} [D_\delta ] \right] \\ + \left( \left( \begin{smallmatrix} n \\ k \end{smallmatrix} \right)_p - (p+1) \left( \left( \begin{smallmatrix} n-1 \\ k-1 \end{smallmatrix} \right)_p - \left( \begin{smallmatrix} n-2 \\ k-2 \end{smallmatrix} \right)_p \right) \right) \left[ \Theta_{ f} [D_\delta ] \right] . \end{multline*} \end{theorem} \medskip \noindent {\it Remarks.} 1. If $f$ is an eigenform for $T_p$ with eigenvalue $a_p$, the lift of $f$ is an eigenform for the Hecke operators $\mathbf{T}_p^{(k)}$. When $n=3$ the eigenvalues corresponding to the operators $\mathbf{T}_p^{(1)}$ and $\mathbf{T}_p^{(2)}$ are respectively $a_p+p^2$ and $pa_p+1$; this lift is studied in particular in \cite{AshGraysonGreen}. 2. The hypothesis $\sum_{d | N} n_d d =0$ on $D_\delta$ implies that the modular forms in the image of $\Theta [D_\delta ]$ are cuspidal at infinity. Evaluating $\Theta [D_\delta ]$ at infinity, one recovers the generalized Dedekind--Rademacher cocycle $\Psi_\delta$. \medskip If $F$ is a totally real number field of degree $n$ over $\mathbf{Q}$ and $L$ is a fractional ideal of $F$, one may once again consider the group $U$ of totally positive units of $\mathcal{O}_F$ preserving $L$ and embed it into $\mathrm{GL}_n (\mathbf{Z})$.
This time one can show that the evaluation of $\Theta [D_\delta]$ on the fundamental class in $H_{n-1} (U , \mathbf{Z})$ is a modular form of weight $n$ equal to the restriction to the diagonal of a partial Eisenstein series\footnote{Associated with the class of $L$.} on a Hilbert modular variety; see \cite[\S 13.3]{Takagi}. Theorem \ref{T:ellbis} then allows one to express this restriction as a linear combination of products of Eisenstein series $E_1$. \medskip \noindent {\it Remark.} When $n=2$, Theorem \ref{T:ellbis} should be compared with a theorem of Borisov and Gunnells \cite[Theorem 3.16]{BorisovGunnells}, according to which the map that associates with a unimodular symbol $[a/c : b/d]$, with $c,d \neq 0$ modulo $N$, the product $$E_1 \left(\tau , \frac{c}{N} \right) E_1 \left(\tau , \frac{d}{N} \right)$$ induces a Hecke equivariant partial modular symbol with values in the weight $2$ modular forms on $Y_1 (N)$ \emph{modulo weight $2$ Eisenstein series}. Borisov and Gunnells indeed consider the Eisenstein series, of weight $1$ and level $Y_1(N)$, $$E_1 (\tau , a/N) \quad ( a \in \{ 1 , \ldots , N-1 \} ),$$ and observe that if $a+b+c = 0$ modulo $N$ and $a,b,c \neq 0$, the expression $$E_1 (\tau , a/N) E_1 (\tau , b/N) + E_1 (\tau , b/N) E_1 (\tau , c/N) + E_1 (\tau , c/N) E_1 (\tau , a/N)$$ is a linear combination of weight $2$ Eisenstein series; see \cite[Propositions 3.7 and 3.8]{BorisovGunnells}.\footnote{This follows from a more general formula proved in \cite{BorisovGunnellsInventiones}, which can also be found in Eisenstein \cite{Eisenstein}.} \medskip Finally, let us note that the construction of this chapter applies in the same way to a single elliptic curve rather than to a family of elliptic curves.
When this elliptic curve has complex multiplication by the ring of integers $\mathcal{O}$ of an imaginary quadratic field $k$, we obtain degree $n-1$ cocycles of congruence subgroups, no longer of $\mathrm{GL}_n (\mathbf{Z})$ but of $\mathrm{GL}_n (\mathcal{O})$. These generalize the cocycles considered by Sczech \cite{SczechBianchi} and Ito \cite{Ito} for $n=2$. Since, by a classical theorem of Damerell, the values of the Eisenstein series $E_1$ at CM points are algebraic numbers up to explicit transcendental periods, one can show that the cocycles associated with imaginary quadratic fields take algebraic values. Their study is the subject of the article \cite{ColmezNous}, in which we prove a conjecture of Sczech and Colmez concerning the critical values of the $L$-functions attached to Hecke characters of extensions of $k$. \chapter[Cohomology of hyperplane arrangements]{Cohomology of hyperplane arrangements: canonical representatives} \label{S:OrlikSolomon} \resettheoremcounters \numberwithin{equation}{chapter} In this chapter, which can be read independently of the rest of the book, we prove an ``Orlik--Solomon type'' theorem for hyperplane arrangements in products of $\mathbf{C}^\times$ or products of elliptic curves. The main result we prove is Theorem \ref{P:Brieskorn}. \section{Trigonometric and elliptic hyperplane arrangements} We fix an algebraic group $A$, isomorphic either to the multiplicative group or to an elliptic curve. Let $n$ be a natural number. We consider an $A^n$-torsor $T$ equipped with a fixed subset $T_{\mathrm{tors}} \subset T$ which is a torsor for the group of torsion points of $A^n$.
\medskip \noindent {\it Remark.} Even in the case $T=A^n$ (which is the one of main interest to us), we will be led to consider affine hyperplanes such as $\{a \} \times A^{n-1}$ with $a \in A_{\rm tors}$. These are again $A^{n-1}$-torsors. \medskip As in the case $T=A^n$, we call {\it affine functional} any map $\chi: T \rightarrow A$ of the form $$t_0 + \mathbf{a} \mapsto \chi_0(\mathbf{a})$$ where $t_0$ is an element of $T_{\mathrm{tors}}$ and $\chi_0 : A^n \rightarrow A$ is a morphism of the form $\mathbf{a} = (a_1, \dots, a_n) \mapsto \sum r_i a_i$ with integers $r_i$.\footnote{In other words a standard morphism, except in the case where $A$ is an elliptic curve with complex multiplication.} Recall that $\chi$ is {\it primitive} if the coordinates $ (r_1, \dots, r_n) \in \mathbf{Z}^n$ of $\chi_0$ are coprime as a whole. In that case the vanishing locus of $\chi$ is a translate of $\mathrm{ker}(\chi_0)$, which is isomorphic to the image of the linear map $A^{n-1} \to A^n$ associated with a basis of the submodule of $\mathbf{Z}^n$ orthogonal to the vector $\mathbf{r}$. The vanishing locus of $\chi$ therefore has, like $T$, the structure of a torsor under the action of $A^{n-1}$, and it comes with a notion of torsion points. We call \emph{hyperplane} the vanishing locus (or, abusively, the ``kernel'') of a primitive affine functional, or equivalently the image of a map $A^{n-1} \rightarrow T$ which is linear with respect to a morphism $A^{n-1} \rightarrow A^n$ induced by an integral matrix of size $(n-1) \times n$. \begin{definition} A \emph{hyperplane arrangement} $\Upsilon$ is a Zariski closed subset of $T$ which is a union of hyperplanes. Its size $\# \Upsilon$ is the number of distinct hyperplanes in the arrangement. \end{definition} As in the linear case (Lemma \ref{affine}) we have the following lemma.
\begin{lemma} \label{affineb}
If $\Upsilon$ contains $n$ hyperplanes whose defining affine functionals $\chi$ have linearly independent vectors $\mathbf{r} \in \mathbf{Z}^n$, then the complement $T-\Upsilon$ is affine. When $A$ is an elliptic curve this is in fact an equivalence.
\end{lemma}

\section[Cohomology of hyperplane arrangements]{Dilation operators and the cohomology of hyperplane arrangements}
\label{S:dil}

A {\em dilation map} is any map $[s] : T \rightarrow T$ associated with an integer $s>1$ and of the form
$$[s] : t+ \mathbf{a} \mapsto t + s \mathbf{a}$$
for some $t \in T_{\mathrm{tors}}$. The image of a hyperplane under a dilation map is again a hyperplane. Given a hyperplane arrangement $\Upsilon$ defined by affine functionals $\chi_1 , \ldots , \chi_r$, one can, as in the case $T=A^n$, find a dilation map $[s]$ that preserves $\Upsilon$. Increasing $s$ if necessary, one may moreover assume that, whenever $\chi_1 , \ldots , \chi_r$ is a collection of affine functionals defining hyperplanes of $\Upsilon$, the map $[s]$ preserves the connected components of the preimage of $0$ under
$$(\chi_1 , \ldots , \chi_r) : T \to A^r;$$
these connected components are indeed Zariski-closed subsets of $T$. From now on we fix such a dilation $[s]$.

To keep the notation light we simply write
$$H^*(T-\Upsilon, \mathbf{C})^{(1)} \subset H^*(T-\Upsilon, \mathbf{C})$$
for the characteristic subspace of $[s]_*$ for the eigenvalue $1$. Note that this time the subspace depends {\it a priori} on the choice of the dilation. We use the same notation for spaces of differential forms.
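\medskip
\noindent {\it Example.} (A toy illustration, not taken from the sources cited above.) Take $T = A = \mathbf{G}_m$, written multiplicatively, so that a dilation takes the form $[s] : z \mapsto \zeta z^s$ with $\zeta$ a root of unity. The arrangement $\Upsilon = \{z=1\} \cup \{z=-1\}$ consists of two torsion hyperplanes, and for any odd $s > 1$ the dilation $[s] : z \mapsto z^s$ preserves $\Upsilon$, since $1^s = 1$ and $(-1)^s = -1$.
\medskip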
\subsection{The sheaf of differential forms and a theorem of Clément Dupont}

A meromorphic form $\omega$ on $T$ has {\it logarithmic poles along $\Upsilon$} if, in a neighborhood of each point $p \in T$, one can write $\omega$ as a $\mathbf{C}$-linear combination of forms of the type
\begin{equation} \label{E31}
\nu \wedge \bigwedge_{j \in J} \frac{d f_j}{f_j}
\end{equation}
where $\nu$ is a holomorphic form near $p$ and each index $j \in J$ labels a hyperplane $H_j$ of $\Upsilon$ through $p$, defined by a local linear equation $f_j = 0$. Note that this definition does not depend on the choice of the $f_j$, since they are uniquely determined up to a local unit.

The meromorphic forms on $T$ with logarithmic poles along $\Upsilon$ form a complex of sheaves of complex vector spaces on $T$ which, following Dupont \cite{DupontC}, we denote by $\Omega^\bullet_{\langle T , \Upsilon \rangle}$. Beware that this complex is in general strictly contained in the Saito complex $\Omega^\bullet_T (\log \Upsilon)$ when the divisor $\Upsilon$ does not have normal crossings. Dupont \cite[Theorem 1.3]{DupontC} proves that the inclusion of complexes of sheaves
\begin{equation} \label{Dqi}
\Omega^\bullet_{\langle T , \Upsilon \rangle} \hookrightarrow j_* \Omega^\bullet_{T - \Upsilon},
\end{equation}
where $j$ denotes the inclusion of $T-\Upsilon$ into $T$, is a quasi-isomorphism. We recall below the main ingredients of his proof.

\subsection{Dilation operators}

It follows from the next lemma that the dilation map $[s]$ induces an operator $[s]_*$ on $\Omega^\bullet_{\langle T , \Upsilon \rangle}$.
\begin{lemma}
For every open set $V \subset T$, the trace morphism
$$\Omega^*_{\mathrm{mer}}([s]^{-1} V) \rightarrow \Omega^*_{\mathrm{mer}}(V)$$
induced by $[s]$ maps $\Omega^\bullet_{\langle T , \Upsilon \rangle} ([s]^{-1} V)$ into $\Omega^\bullet_{\langle T , \Upsilon \rangle} (V)$. In other words, the trace defines a morphism of sheaves
$$[s]_* \Omega^\bullet_{\langle T , \Upsilon \rangle} \rightarrow \Omega^\bullet_{\langle T , \Upsilon \rangle}.$$
\end{lemma}

\proof
Let $p_1$ and $p_2$ be two points of $T$ with $[s] p_1 = p_2$, and let $\omega_1$ be a local section of $\Omega^\bullet_{\langle T , \Upsilon \rangle}$ defined on a neighborhood of $p_1$. Since $[s]$ preserves $\Upsilon$, the image under $[s]$ of a hyperplane $H_1$ of $\Upsilon$ through $p_1$ is a hyperplane $H_2$ of $\Upsilon$ through $p_2$. Now let $f_2 = 0$ be a local equation for the hyperplane $H_2$ on a neighborhood $W$ of $p_2$. Since $[s]$ is a local biholomorphism, after shrinking $W$ the function $f_1 = [s]^* f_2$ is a local equation for the hyperplane $H_1$ near $p_1$. Moreover, the trace morphism induced by $[s]$ sends $f_1 \in \Omega^0 ([s]^{-1} W)$ to $\deg([s]) f_2$, so that
$$[s]_* \frac{df_1}{f_1} = \frac{df_2}{f_2}.$$
Since $\Omega^\bullet_{\langle T , \Upsilon \rangle}([s]^{-1} V)$ is generated by forms that can locally be written as exterior products of regular forms and forms of the type $df_1/f_1$, the lemma follows.
\qed

\begin{definition}
Let
$$H^* (T , \Omega^j_{\langle T , \Upsilon \rangle})^{(1)} \subset H^*(T , \Omega^j_{\langle T , \Upsilon \rangle})$$
denote the characteristic subspace of $[s]_*$ for the eigenvalue $1$, that is, the subspace of cohomology classes killed by some power of $[s]_*-1$.
\end{definition}

The goal of this chapter is to prove the theorem that, in the multiplicative and elliptic settings, replaces the Brieskorn theorem invoked in the affine case to construct the class \eqref{E:Sa}.

\begin{theorem} \label{P:Brieskorn}
Suppose $T-\Upsilon$ is affine. For every degree $j \leq n = \dim(T)$, the forms in $H^0(T, \Omega^j_{\langle T , \Upsilon \rangle})^{(1)}$ are closed and the natural map
$$H^0(T, \Omega^j_{\langle T , \Upsilon \rangle})^{(1)} \rightarrow H^j(T-\Upsilon)^{(1)}$$
is an isomorphism.
\end{theorem}

After completing the writing of this chapter we learned that, in the multiplicative case, there in fact exists, as in the affine case, an algebra of closed algebraic differential forms on $T-\Upsilon$ that is isomorphic to the cohomology of $T-\Upsilon$. For affine hyperplane arrangements this algebra is the one of Arnol'd and Brieskorn. For toric arrangements it is harder to describe, in particular because the cohomology is in general not generated in degree 1. De Concini and Procesi \cite[Theorem 5.2]{DeConciniProcesi} treat the unimodular case; that case is analogous to the affine one: the algebra is generated by representatives of a basis of the $H^1$ of the torus together with the forms $d\log (\chi-a)$, where $\{\chi =a\}$ is an equation of one of the hyperplanes of the arrangement. A description of the algebra in the general case is given by Callegaro, D'Adderio, Delucchi, Migliorini and Pagaria \cite{CDDMP}.

\subsection{Setting up the proof (by induction)}

We prove Theorem \ref{P:Brieskorn} by induction on the pair $(n = \dim T , \# \Upsilon)$, ordered lexicographically. The base cases are proved directly:
\begin{itemize}
\item when $\# \Upsilon = n$, in the elliptic case, and
\item when $\Upsilon$ is empty, in the multiplicative case.
\end{itemize}

\medskip
\noindent {\it Remark.} When $A$ is elliptic and $\# \Upsilon = n$ --- the base case of the induction --- the vectors of $\mathbf{Z}^n$ associated with the hyperplanes of $\Upsilon$ are necessarily linearly independent, by Lemma \ref{affineb}. The divisor $\Upsilon \subset T$ then has {\it simple normal crossings}.
\medskip

At each induction step we proceed as follows. Suppose $\Upsilon'$ is a nonempty set of hyperplanes with $T-\Upsilon'$ affine, and suppose moreover that $\# \Upsilon' > n$ in the elliptic case. Choose a hyperplane $H$ in $\Upsilon'$ such that $T-\Upsilon$, with $\Upsilon = \Upsilon '-H$, is affine. Write $\Upsilon \cap H$ for the set of hyperplanes of $H$ --- viewed as an $A^{n-1}$-torsor --- obtained by intersecting $H$ with the hyperplanes (of $T$) belonging to $\Upsilon$. The complement
$$H - (\Upsilon \cap H),$$
being a Zariski-closed subset of the affine variety $T-\Upsilon$, is affine. We may now consider the diagram
\begin{equation} \label{E:diag}
T - \Upsilon ' \stackrel{j}{\hookrightarrow} T - \Upsilon \stackrel{\iota}{\hookleftarrow} H - (\Upsilon \cap H),
\end{equation}
where $\iota$ is a closed immersion and $j$ an open embedding. In what follows $\iota$ will more generally denote the inclusion of $H$ into $T$. Note that $\iota (H - (\Upsilon \cap H))$ is a smooth divisor in $T-\Upsilon$ whose complement is the image of $j$.

The key point of the proof is the fact, nontrivial and proved by Clément Dupont, that the sequence of sheaves
\begin{equation} \label{dupont}
0 \rightarrow \Omega^q_{\langle T , \Upsilon \rangle} \rightarrow \Omega^{q}_{\langle T , \Upsilon ' \rangle} \stackrel{\mathrm{Res}}{\longrightarrow} \iota_* \Omega^{q-1}_{\langle H , \Upsilon \cap H \rangle} \rightarrow 0
\end{equation}
is \emph{exact}. In the next section we outline Dupont's proof.
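\medskip
\noindent {\it Example.} In local analytic coordinates the residue map in \eqref{dupont} takes the familiar form: if $f = 0$ is a local equation of $H$ and a local section of $\Omega^{q}_{\langle T , \Upsilon ' \rangle}$ is written as
$$\omega = \frac{df}{f} \wedge \nu + \eta,$$
with $\eta$ having no pole along $H$, then $\mathrm{Res}(\omega) = \nu|_H$.
\medskip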
\section{Dupont's work}
\label{travaux dupont}

Following Dupont \cite[Definition 3.4]{DupontC}, we endow the sheaf $\Omega^\bullet_{\langle T , \Upsilon \rangle}$ with an increasing filtration
$$W_0 \Omega^\bullet_{\langle T , \Upsilon \rangle} \subset W_1 \Omega^\bullet_{\langle T , \Upsilon \rangle} \subset \cdots$$
called the {\it weight filtration}. The $k$-th term $W_k \Omega^\bullet_{\langle T , \Upsilon \rangle}$ of this filtration is generated by the forms of type \eqref{E31} with $\# J \leq k$. Thus
$$W_0 \Omega^\bullet_{\langle T , \Upsilon \rangle} = \Omega^\bullet_T \quad \mbox{and} \quad W_q \Omega^q_{\langle T , \Upsilon \rangle} = \Omega^q_{\langle T , \Upsilon \rangle}.$$

To understand the associated graded complex, consider $\mathrm{gr}_{n}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}$, that is, the quotient
$$W_n \Omega^\bullet_{\langle T , \Upsilon \rangle} /W_{n-1} \Omega^\bullet_{\langle T , \Upsilon \rangle}.$$
Locally, the class in $\mathrm{gr}_{n}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}$ of a form of type \eqref{E31} does not depend on the choice of the $f_j$, since these are uniquely determined up to a local unit. A suitable choice of local coordinates then shows that the stalk of $\mathrm{gr}_{n}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}$ at a point $p$ vanishes unless $p$ lies on $n$ linearly independent hyperplanes.

Now, if $p$ lies on $n$ linearly independent hyperplanes, Dupont \cite[Theorem 3.11]{DupontC} constructs a natural map from the local Orlik--Solomon algebra at $p$ to $\mathrm{gr}_{n}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}$. The hyperplanes of $\Upsilon$ through $p$ induce an arrangement of (linear) hyperplanes $\Upsilon^{(p)}$ in the tangent space of $T$ at $p$, which a choice of local coordinates identifies with $\mathbf{C}^n$.
Orlik and Solomon \cite{OrlikSolomon} define a graded algebra $\mathrm{OS}_\bullet (\Upsilon^{(p)})$ by generators and relations. Enumerating the hyperplanes of $\Upsilon^{(p)}$ --- in other words the hyperplanes of $\Upsilon$ through $p$ --- as $H_1 , \ldots , H_\ell$, Orlik and Solomon define in particular $\mathrm{OS}_n (\Upsilon^{(p)})$ as the space generated by elements $e_J$ for $J = \{ j_1 , \ldots , j_n \} \subset \{1 , \ldots , \ell \}$ with $j_1 < \cdots < j_n$, modulo the subspace of relations generated by the linear combinations
\begin{equation} \label{RelOS}
\sum_{i=0}^n (-1)^i e_{K-\{j_i \}},
\end{equation}
where $K=\{ j_0 , \ldots , j_n \} \subset \{1 , \ldots , \ell \}$ with $j_0 < j_1 < \cdots < j_n$.

The map
\begin{equation} \label{appDupont}
\mathrm{OS}_n (\Upsilon^{(p)}) \to \mathrm{gr}_{n}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}
\end{equation}
constructed by Dupont is then defined as follows. For each $j$ in $\{1 , \ldots , \ell \}$ choose a local equation $f_j =0$ defining the hyperplane $H_j$ near $p$, and send each generator $e_J$ of $\mathrm{OS}_n (\Upsilon^{(p)})$ to the form
$$\bigwedge_{j \in J} \frac{df_j}{f_j} \in \mathrm{gr}_{n}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}.$$
Recall that these elements do not depend on the choices made for the $f_j$; the fact that they satisfy the relations \eqref{RelOS} is obtained, as in the classical case of hyperplane arrangements in $\mathbf{C}^n$, by making the dependence relation among the $H_j$ $(j \in K)$ explicit. This shows that the map \eqref{appDupont} is well defined. By definition, it is surjective in a neighborhood of $p$.
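\medskip
\noindent {\it Example.} For three distinct lines through the origin of $\mathbf{C}^2$ with equations $f_1 = 0$, $f_2 = 0$ and $f_3 = f_1 + f_2 = 0$, the relation \eqref{RelOS} with $K = \{1,2,3\}$ reads $e_{\{2,3\}} - e_{\{1,3\}} + e_{\{1,2\}} = 0$, and indeed, using $df_3 = df_1 + df_2$,
$$\frac{df_2}{f_2}\wedge\frac{df_3}{f_3} - \frac{df_1}{f_1}\wedge\frac{df_3}{f_3} + \frac{df_1}{f_1}\wedge\frac{df_2}{f_2} = \frac{f_3 - f_1 - f_2}{f_1 f_2 f_3}\, df_1\wedge df_2 = 0.$$
\medskip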
More generally, Dupont defines a map
\begin{equation} \label{appDupont2}
\bigoplus_{\mathrm{codim}(S) = k} \iota_{S*} \Omega_S^{\bullet} \otimes \mathrm{OS}_S (\Upsilon ) \rightarrow \mathrm{gr}_{k}^W \Omega^\bullet_{\langle T , \Upsilon \rangle},
\end{equation}
where the sum now runs over the {\it strata} of codimension $k$ --- that is, the connected components of intersections of hyperplanes --- and $\iota_S$ denotes the inclusion of the stratum $S$ into $T$. To each point $p$ of a stratum $S$ is attached a local Orlik--Solomon algebra $\mathrm{OS}_\bullet (\Upsilon^{(p)})$, and we write
$$\mathrm{OS}_S (\Upsilon) \subset \mathrm{OS}_\bullet (\Upsilon^{(p)})$$
for the subspace generated by the monomials $e_J$, where $J$ runs over the subsets of hyperplanes of $\Upsilon^{(p)}$ whose intersection is exactly the trace $S^{(p)}$ of the stratum $S$ in the tangent space of $T$ at $p$. Note that $\mathrm{OS}_S (\Upsilon)$ does not depend on the choice of the point $p$ in $S$. The factor corresponding to a stratum $S = \{ p \}$ of codimension $n$ in \eqref{appDupont2} corresponds to the map \eqref{appDupont}.

To construct the map \eqref{appDupont2} in general, consider a stratum $S$ of codimension $k$ and a point $p \in S$. A choice of local equations $f_j =0$ for the hyperplanes of $\Upsilon$ containing $S$ allows one --- by an argument similar to the one detailed above --- to construct a map
\begin{equation} \label{E:app1}
\underline{\mathrm{OS}_S (\Upsilon)} \rightarrow \mathrm{gr}_{k}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}.
\end{equation}
(Here $\underline{\mathrm{OS}_S (\Upsilon)}$ denotes the constant sheaf associated with $\mathrm{OS}_S (\Upsilon)$.) We now extend \eqref{E:app1}, by linearity, to a map
\begin{equation} \label{E:app2}
\Omega_T^{\bullet} \otimes \mathrm{OS}_S (\Upsilon) \rightarrow \mathrm{gr}_{k}^W \Omega^\bullet_{\langle T , \Upsilon \rangle}.
\end{equation}
Finally, \eqref{E:app2} factors through $\iota_{S*} \Omega_S^{\bullet } \otimes \mathrm{OS}_S (\Upsilon)$: given a form $\nu$ in the kernel of the map $\Omega_T^{\bullet } \to \iota_{S*} \Omega_S^{\bullet }$ and a monomial $e_J \in \mathrm{OS}_S (\Upsilon)$, one must show that \eqref{E:app2} sends $\nu \otimes e_J$ to $0$. Write $f_j = 0$ for the local equations of the hyperplanes indexed by $J$. In local coordinates the $f_j$ generate the ideal of $S$, so we may write $\nu$ as a linear combination
$$\nu = \sum_j \omega_j f_j + \sum_j \omega_j ' \wedge df_j.$$
It then remains to check that
$$f_j \left( \frac{df_1}{f_1} \wedge \dots \wedge \frac{df_k}{f_k} \right) \in W_{k-1} \Omega^\bullet_{\langle T , \Upsilon \rangle}$$
and
$$df_j \wedge \left( \frac{df_1}{f_1} \wedge \dots \wedge \frac{df_k}{f_k} \right) \in W_{k-1} \Omega^\bullet_{\langle T , \Upsilon \rangle},$$
which follows from the definitions.

The map \eqref{appDupont2} is surjective, as can be seen by computing stalks. Dupont \cite[Theorem 3.6]{DupontC} shows in fact that \eqref{appDupont2} is an isomorphism of complexes. We extract from his proof the following lemma, which is the crucial one for us.

\begin{lemma} \label{L:exactdupont}
The short sequence of sheaves \eqref{dupont} is exact.
\end{lemma}

\begin{proof}
The sequence of complexes \eqref{dupont} induces short sequences
\begin{equation} \label{dupontW} \tag{$\mathbf{W}_k$}
0 \rightarrow W_k \Omega^q_{\langle T , \Upsilon \rangle} \rightarrow W_k \Omega^{q}_{\langle T , \Upsilon ' \rangle} \rightarrow W_{k-1} \Omega^{q-1}_{\langle H , \Upsilon \cap H \rangle} \rightarrow 0
\end{equation}
and
\begin{equation} \label{dupontgr} \tag{$\mathbf{gr}^W_k$}
0 \rightarrow \mathrm{gr}^W_k \Omega^q_{\langle T , \Upsilon \rangle} \rightarrow \mathrm{gr}^W_k \Omega^{q}_{\langle T , \Upsilon' \rangle} \rightarrow \mathrm{gr}^W_{k-1} \Omega^{q-1}_{\langle H , \Upsilon \cap H \rangle} \rightarrow 0.
\end{equation}
Since $W_k \Omega^\bullet_{\langle T , \Upsilon \rangle} = \Omega^\bullet_{\langle T , \Upsilon \rangle}$ for $k$ large enough, to prove the lemma it suffices to show that the sequences \eqref{dupontW} are exact. Moreover, since the short sequences \eqref{dupontW} are exact on the left and on the right, we only need to check that they are exact in the middle. Observe now that the sequences \eqref{dupontW} and \eqref{dupontgr} fit into a short exact sequence of complexes
\begin{equation} \label{SEcx}
0 \rightarrow (\mathbf{W}_{k-1}) \rightarrow (\mathbf{W}_k) \rightarrow (\mathbf{gr}_k^W) \to 0 .
\end{equation}
The associated long exact sequence in cohomology shows that if the sequences $(\mathbf{W}_{k-1})$ and \eqref{dupontgr} are exact in the middle, then so is \eqref{dupontW}. Since $(\mathbf{W}_{0})$ is obviously exact, an induction on $k$ reduces the proof of the lemma to showing that the sequences \eqref{dupontgr} are exact in the middle.

To shorten the formulas, we simply write $\mathrm{OS}^{(k)} (\Upsilon)$ for the sheaf appearing as the source of the map \eqref{appDupont2}. Tensoring with the sheaves $\iota_{S*} \Omega_S^\bullet$ the exact sequences between Orlik--Solomon algebras obtained by ``deletion and restriction'', we obtain that the sequences of complexes
$$0 \to \mathrm{OS}^{(k)} (\Upsilon) \to \mathrm{OS}^{(k)} (\Upsilon' ) \to \mathrm{OS}^{(k-1)} (\Upsilon \cap H ) \to 0$$
are exact. Consider now the commutative diagrams
$$\xymatrix{ 0 \ar[r] & \mathrm{OS}^{(k)} (\Upsilon) \ar[r] \ar[d] & \mathrm{OS}^{(k)} (\Upsilon') \ar[r] \ar[d] & \mathrm{OS}^{(k-1)} (\Upsilon \cap H) \ar[d] \ar[r] &0 \\ 0 \ar[r] & \mathrm{gr}_k^W \Omega^\bullet_{\langle T ,\Upsilon \rangle} \ar[r] & \mathrm{gr}_k^W \Omega^\bullet_{\langle T , \Upsilon' \rangle} \ar[r]& \mathrm{gr}_{k-1}^W \Omega^\bullet_{\langle H , \Upsilon \cap H \rangle} \ar[r] & 0.
} $$
We show, by induction on the size of the hyperplane arrangement and in a single stroke, that the vertical arrows are isomorphisms and that the bottom row is exact in the middle. By the induction hypothesis we may assume that the left and right vertical arrows are isomorphisms. A diagram chase then shows that the bottom row is exact in the middle for every $k$. As explained above, this suffices to show that the short sequences \eqref{dupontW} are exact. The long exact sequence in cohomology associated with the short exact sequence of complexes \eqref{SEcx} then shows that the sequence \eqref{dupontgr} is in fact exact everywhere (not only in the middle). Finally, both rows of the commutative diagrams above are exact and, since the left and right vertical arrows are isomorphisms, the five lemma shows that the middle arrow is an isomorphism as well. This allows the induction to continue.
\end{proof}

We end this section with another result of Clément Dupont \cite[Theorem 3.13]{DupontC} that is important (for us, at least).

\begin{lemma} \label{L:qisdupont}
The inclusion \eqref{Dqi} is a quasi-isomorphism.
\end{lemma}

\begin{proof}[Sketch of proof]
The cohomology of the complex $j_* \Omega_{T-\Upsilon}^\bullet$ at $p$ is precisely the cohomology of the local arrangement $\Upsilon^{(p)}$ of hyperplanes in $\mathbf{C}^n$, that is, the Orlik--Solomon algebra $\mathrm{OS}_\bullet (\Upsilon^{(p)})$. On the other hand, one can compute the cohomology of $\Omega^\bullet_{\langle T , \Upsilon \rangle}$ using the weight filtration and the isomorphism \eqref{appDupont2}; one finds the same result.
\end{proof}

\section[Proof of Theorem 3.5]{Proof of Theorem \ref{P:Brieskorn}}

\subsection{Base case in the multiplicative setting}
\label{B1}

As mentioned above, we prove Theorem \ref{P:Brieskorn} by induction.
In the multiplicative case $A =\mathbf{G}_m$, the base case of the induction is the case where $\Upsilon$ is empty.

\begin{lemma} \label{L:B1}
The subspace
\begin{equation} \label{E:H0(1)}
H^0(\mathbf{G}_m^n, \Omega^\bullet)^{(1)}
\end{equation}
is one-dimensional, generated by the form $\wedge_{i=1}^{n} dz_i/z_i$, where $z_i$ denotes the $i$-th coordinate on $\mathbf{G}_m^n$. In particular \eqref{E:H0(1)} is concentrated in degree $n$.
\end{lemma}

\proof
A holomorphic differential form on $\mathbf{G}_m^n$ has a unique multivariate Laurent series expansion. It therefore suffices to consider the monomials
\begin{equation} \label{E:monome}
\left( \prod_{i=1}^n z_i^{a_i} \right) \cdot \bigwedge_{j \in J} \frac{dz_j}{z_j} \quad (J \subset \{1 , \ldots , n \} , \ a_1 , \ldots , a_n \in \mathbf{Z}).
\end{equation}
Now, for every integer $a \in \mathbf{Z}$ one has
$$\sum_{w : w^s=z} w^a = \left\{ \begin{array}{cl} s z^{a/s} & \mbox{if } s \mid a, \\ 0 & \mbox{otherwise}. \end{array} \right.$$
Pushing the monomial \eqref{E:monome} forward by $[s]_*$ therefore gives
\begin{multline*}
\sum_{w_i^s = z_i} \left( \left( \prod_{i=1}^n w_i^{a_i} \right) \cdot \bigwedge_{j \in J} \frac{dw_j}{w_j} \right) \\ = \left\{ \begin{array}{ll} s^{n- \# J} \left( \prod_{i=1}^n z_i^{a_i /s} \right) \cdot \bigwedge_{j \in J} \frac{dz_j}{z_j} & \mbox{if } s \mid a_i \mbox{ for all } i, \\ 0 & \mbox{otherwise}. \end{array} \right.
\end{multline*}
It follows that every element of $H^0(\mathbf{G}_m^n , \Omega^\bullet )$ lies in a finite-dimensional $[s]_*$-invariant subspace, and that a monomial \eqref{E:monome} belongs to $H^0(\mathbf{G}_m^n, \Omega^\bullet)^{(1)}$ if and only if $\# J = n$ and all the $a_i$ vanish.
\qed

The cohomology of $\mathbf{G}_m^n$ is the exterior algebra on the closed forms $dz_i /z_i$. The subspace $H^\bullet ( \mathbf{G}_m^n )^{(1)}$ is therefore one-dimensional, generated by the form $\wedge_{i=1}^{n} dz_i/z_i$.
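The character-sum identity used in the proof above can be checked numerically; the following minimal sketch (not part of the original text; the function name is ours) sums $w^a$ over the $s$-th roots $w$ of a given nonzero $z$:

```python
import cmath

def pushforward_monomial(z, a, s):
    """Sum of w**a over the s-th roots w of z: equals s * z**(a/s) when s | a,
    and 0 otherwise -- the identity used to compute [s]_* on Laurent monomials."""
    r, t = abs(z), cmath.phase(z)
    # the s-th roots of z = r*e^{it} are r^{1/s} * e^{i(t + 2*pi*k)/s}, k = 0..s-1
    roots = [r**(1.0/s) * cmath.exp(1j*(t + 2*cmath.pi*k)/s) for k in range(s)]
    return sum(w**a for w in roots)
```

When $s \mid a$ the $s$ roots contribute equally and one recovers $s z^{a/s}$; otherwise the $s$-th roots of unity cancel and the sum vanishes.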
Lemma \ref{L:B1} thus shows that Theorem \ref{P:Brieskorn} holds in the case where $A =\mathbf{G}_m$ and $\Upsilon$ is empty.

\subsection{Base case in the elliptic setting}
\label{B2}

In the elliptic case $A =E$, the base case of the induction is the case where $\Upsilon$ consists of $n$ linearly independent hyperplanes. In this case the sheaf $\Omega^\bullet_{\langle T , \Upsilon \rangle}$ coincides with the sheaf $\Omega^\bullet_T (\log \Upsilon)$ of logarithmic differential forms. It is a complex of sheaves and, by Atiyah--Hodge \cite{AtiyahHodge}, Griffiths \cite{Griffiths} and Deligne \cite{DeligneH2}, there is a canonical isomorphism
\begin{equation} \label{G-D}
H^k (T-\Upsilon , \mathbf{C}) = \mathbf{H}^k (T , \Omega^\bullet_T (\log \Upsilon) );
\end{equation}
cf. \cite[Corollary 8.19]{Voisin}. Moreover, the Hodge--de Rham spectral sequence
$$E_1^{pq} = H^p (T , \Omega^q_T (\log \Upsilon)) \Longrightarrow \mathbf{H}^{p+q} (T , \Omega^\bullet_T (\log \Upsilon ) )$$
degenerates at the first page, cf. \cite[Corollaire 3.2.13, (ii)]{DeligneH2}. The following lemma therefore shows that Theorem \ref{P:Brieskorn} holds in this case.

\begin{lemma} \label{L:affirmation}
Suppose $\Upsilon$ consists of $n$ linearly independent hyperplanes. Then the subspace $H^p(T, \Omega^q_T (\log \Upsilon) )^{(1)}$ vanishes for every $p > 0$.
\end{lemma}

\begin{proof}
We prove the lemma by computing the cohomology groups $H^p(T, \Omega^q_T (\log \Upsilon) )^{(1)}$ explicitly. We begin by noting that there is an isomorphism
\begin{equation} \label{E:isomF}
\bigoplus_{i=1}^n \mathcal{O}(H_i) \stackrel{\sim}{\longrightarrow} \Omega^1_T (\log \Upsilon) ,
\end{equation}
where $\Upsilon = \{ H_1, \dots, H_n \}$.
For each $i \in \{ 1 , \ldots , n \}$ let $\omega_i$ be a differential $1$-form on $T$ obtained as the pullback, under a character defining $H_i$, of a nowhere-vanishing $1$-form on $A$. The map
$$(f_i) \in \oplus_i \mathcal{O} (H_i) \longmapsto \sum_i f_i \omega_i $$
is globally defined and induces an isomorphism on stalks; it realizes the isomorphism \eqref{E:isomF}. Note that $[s]^* \omega_i = s \omega_i$; for every $f \in \mathcal{O} (H_i )$ we therefore have
$$[s]_*( f \omega_i) = (s^{-1} [s]_* f)\, \omega_i.$$
Since, moreover,
$$\Omega^q_T (\log \Upsilon) = \wedge^q \Omega^1_T (\log \Upsilon) \simeq \bigoplus_{\substack{J \subset \{1, \ldots , n \} \\ \# J = q}} \mathcal{O}(\sum_{j \in J} H_j ),$$
Lemma \ref{L:affirmation} follows from the next lemma.
\end{proof}

\begin{lemma}
Let $J \subset \{1, \ldots , n \}$ be of cardinality $q$ and let $p$ be a positive integer. The morphism $[s]_*$ acts on $H^p(T, \mathcal{O}(\sum_{j \in J} H_j))$ by the scalar $s^{2n-p}$, and the space $H^p(T, \mathcal{O}(\sum_{j \in J} H_j))$ vanishes if $n\leq p+q$.
\end{lemma}

In particular $s^q$ is not a generalized eigenvalue.

\proof
When $q = n$, the sheaf $\mathcal{O}(\sum_{i=1}^n H_i)$ is ample. Now it follows from Mumford's vanishing theorem \cite[\S III.16]{Mumford} that the higher cohomology, in degrees $p>0$, of ample line bundles on an abelian variety vanishes. Suppose now $q<n$ and, to simplify, $T=A^n$. The characters defining the hyperplanes $H_j$ ($j \in J$) induce a morphism of abelian varieties
$$\pi: A^n \rightarrow A^q,$$
and the line bundle $\mathcal{O}(\sum_{j \in J} H_j )$ on $A^n$ equals the pullback $\pi^* \mathcal{L}$ of the line bundle $\mathcal{L}$ on $A^q$ associated with the divisor $z_1 \dots z_q = 0$, the sum of the coordinate hyperplanes.
We begin by remarking that, in general, if $f : C \to B$ is a morphism of abelian varieties with connected fibers, then the sheaves $R^j f_* \mathcal{O}_C$ are trivial vector bundles. To see this, let $F$ be the kernel of $f$; it is an abelian variety, and there exists an abelian subvariety $B' \subset C$ such that
\begin{itemize}
\item the sum map $F \times B' \to C$, and
\item the induced map $B' \to B$
\end{itemize}
are isogenies (see for instance \cite[Proposition 12.1]{Milne}). Write $C' = F \times B' $ and $f' : C' = F \times B' \to B'$. The cohomology sheaf $R^j f'_* \mathcal{O}_{C'}$ is a constant sheaf with fiber $H^j (F , \mathcal{O}_F)$. Since the translations of $F$ act trivially on $H^j (F , \mathcal{O}_F)$, this constant sheaf on $B'$ descends to a sheaf on $B$, necessarily equal to $R^j f_* \mathcal{O}_C$, which is constant with fiber $H^j (F, \mathcal{O}_F)$.

In general the morphism $\pi$ does not have connected fibers, but since $\pi$ is a group homomorphism, the connected components of the fibers all have the same dimension, and the quotient $B$ of $A^n$ by the connected components of the fibers is a finite cover of $A^q$. We may therefore factor $\pi$ as a composition $\pi = g \circ \pi'$ of morphisms $\pi': A^n \to B$ and $g : B \to A^q$, with $\pi'$ having connected fibers and $g$ finite. Let $F$ denote the fiber of $\pi'$. The Leray spectral sequence associated with the morphism $\pi '$ reads
\begin{equation} \label{suiteLeray}
H^r (B , R^p \pi '_*(\pi {}^* \mathcal{L} )) \Longrightarrow H^{p+r} (A^n , \pi {}^* \mathcal{L}).
\end{equation}
By the projection formula we have an isomorphism
\begin{equation} \label{ProjFormula}
R^p \pi '_*(\pi {}^* \mathcal{L} ) = R^p \pi '_*(\pi' {}^* g^* \mathcal{L} ) \cong g^*\mathcal{L} \otimes R^p \pi ' _* \mathcal{O}_{A^n} = g^*\mathcal{L} \otimes H^p (F , \mathcal{O}_F),
\end{equation}
where the last equality follows from the previous paragraph.
Since the connected components of the fibers of $\pi$ are copies of $A^{n-q}$, we obtain
$$H^r (B , R^p \pi '_*(\pi^* \mathcal{L} )) \cong H^r (B , g^*\mathcal{L}) \otimes H^p (A^{n-q} , \mathcal{O}_{A^{n-q}} ).$$
It again follows from Mumford's vanishing theorem that this group vanishes for $r>0$, since $g^*\mathcal{L}$ is ample. The spectral sequence \eqref{suiteLeray} therefore degenerates and we obtain an isomorphism
\begin{equation} \label{HpDec}
H^p (A^n , \pi^* \mathcal{L}) \cong H^0 (B , g^*\mathcal{L} ) \otimes H^p (A^{n-q} , \mathcal{O}_{A^{n-q}} ).
\end{equation}
Note moreover that
$$H^0 (B, g^*\mathcal{L} ) = H^0 (A^q , g_* (g^* \mathcal{L})) = H^0 (A^q , \mathcal{L} \otimes g_* \mathcal{O}_B),$$
where $g_* \mathcal{O}_B$ is a sum of torsion line bundles $\mathcal{T}_1 , \ldots ,\mathcal{T}_r$. Hence
$$H^0 (B, g^*\mathcal{L} ) = \bigoplus_{j=1}^r H^0 (A^q , \mathcal{L}_j')$$
with $\mathcal{L}_j ' = \mathcal{L} \otimes \mathcal{T}_j$.

By hypothesis, multiplication by $s$ on $A^n$ preserves the connected components of the fibers of $\pi$, hence also its Stein factorization. The induced endomorphism $[s]_*$ on
$$H^p (A^n , \mathcal{O}(\sum_{j \in J} H_j)) = H^p (A^n , \pi^* \mathcal{L})$$
therefore preserves the decomposition \eqref{HpDec} and each term $H^0 (A^q , \mathcal{L}_j')$ of the decomposition of $H^0 (B, g^*\mathcal{L} )$. We study the action of $[s]_*$ on each factor of the tensor product:
\begin{itemize}
\item The endomorphism $[s]_*$ acts on\footnote{Here we use that the Hodge--de Rham spectral sequence for $A^{n-q}$ degenerates at $E_1$.}
$$H^p (A^{n-q} , \mathcal{O}_{A^{n-q}} ) \hookrightarrow H^p(A^{n-q}, \mathbf{C})$$
by $s^{2n-2q-p}$. Moreover, the group on the left vanishes if $n \leq p+q$.
\item Since $H^r (A^q , \mathcal{L}_j ')=0$ for $r>0$, it follows for instance from \cite[Theorem 13.3]{Milne} that each group $H^0 (A^q , \mathcal{L}_j')$ has rank $1$.
The canonical section is a theta function, on which the homomorphism $[s]_*$ acts by multiplication by $s^{2q}$; see for instance \cite[Eq. (3.1)]{Beauville}.
\end{itemize}
In conclusion, $[s]_*$ acts on $H^p (A^n , \mathcal{O}(\sum_{j \in J} H_j))$ by the scalar $s^{2n-p}$, and the group $H^p (A^n , \mathcal{O}(\sum_{j \in J} H_j))$ vanishes if $n\leq p+q$. In particular $s^q$ cannot occur as an eigenvalue.
\qed

\subsection{Purity, by induction}

\begin{lemma} \label{C1}
The invariant part of the cohomology $H^j(T -\Upsilon )^{(1)}$ is pure (as a Hodge structure \cite{DeligneH2}) of weight $2j$ for $j \leq \dim(T)$. In particular, if $T-\Upsilon$ is affine, the cohomology is pure in every degree.
\end{lemma}

\proof
For $n=0$ the lemma is immediate. When $n$ is positive but $\# \Upsilon=0$, the invariant cohomology occurs only in the top cohomological degree. In the elliptic case this degree is $2 \dim (T)$ and there is nothing to prove. In the multiplicative case the lemma follows from the fact that $H^1(\mathbf{G}_m)$ is pure of weight $2$.

For the induction step (on $(n, \# \Upsilon )$), consider a triple \eqref{E:diag} such that both $(T,\Upsilon )$ and $(H , \Upsilon \cap H)$ satisfy the conclusions of Lemma \ref{C1}. The associated Gysin long exact sequence in cohomology reads
\begin{multline} \label{LESGysin}
H^{j-2}( H - (H \cap \Upsilon ), \mathbf{C}(-1)) \stackrel{\delta_1}{\rightarrow} H^j( T - \Upsilon ) \rightarrow H^j(T - \Upsilon ' ) \\ \rightarrow H^{j-1}(H - (H \cap \Upsilon), \mathbf{C}(-1)) \stackrel{\delta_2}{\rightarrow} H^{j+1}( T - \Upsilon ).
\end{multline}
Since $\mathbf{C} (-1)$ has weight $2$, and since the maps in the above exact sequence preserve the weight filtration and are compatible with the action of $[s]_*$, we conclude that $H^j(T - \Upsilon ' )^{(1)}$ is pure of weight $2j$ for $j \leq \dim(T)$. This proves the first assertion of the lemma.
The second part of the lemma follows since, if $T-\Upsilon$ is affine, the cohomology of $T-\Upsilon$ vanishes for $j > \dim(T)$. \qed \begin{lemma} \label{exactness in topology} Suppose that $T - \Upsilon$ and $H - (H \cap \Upsilon)$ are affine. Then $$0 \rightarrow H^j( T - \Upsilon)^{(1)} \rightarrow H^j(T - \Upsilon')^{(1)} \stackrel{\mathrm{Res}}{\longrightarrow} H^{j-1}(H - (H \cap \Upsilon), \mathbf{C}(-1))^{(1)} \rightarrow 0$$ is a short exact sequence. \end{lemma} \proof Since $T - \Upsilon$ and $H - (H \cap \Upsilon)$ are affine, it follows from Lemma \ref{C1} that for every degree $j$ the invariant parts of the cohomology $H^j(T -\Upsilon )^{(1)}$ and $H^j(H- (H \cap \Upsilon) )^{(1)}$ are pure of weight $2j$. The terms of the sequence \eqref{LESGysin} belong to the category of finite-dimensional $\mathbf{C}$-vector spaces equipped with a linear action of $\mathbf{Z}$ (the group generated by $[s]_*$). The functor associating $H^{(1)}$ with such a space $H$ is exact. Indeed, one can decompose $H$ into the direct sum $\oplus H^{(\lambda)}$ of the characteristic subspaces of $[s]_*$, and the map $$\alpha \mapsto \frac{1}{\prod_{\lambda \neq 1} (1- \lambda)^m} \prod_{\lambda \neq 1} ([s]_* - \lambda)^m \, \alpha \quad \left( m = \dim \ H \right)$$ is a projection onto $H^{(1)}$.\footnote{This is why we consider the characteristic subspace $H^{(1)}$ rather than the subspace of $1$-eigenvectors.} Passing to invariant parts in the sequence \eqref{LESGysin}, we therefore obtain an exact sequence whose left-hand term is pure of weight $2j-2$ while the next term is pure of weight $2j$. The map $\delta_1$ is therefore zero. Likewise, the right-hand term is pure of weight $2j+2$ while the preceding term $H^{j-1}(H - (H \cap \Upsilon), \mathbf{C}(-1))^{(1)}$ is pure of weight $2j$. The map $\delta_2$ is therefore zero as well.
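The averaging operator above can be checked in coordinates. A minimal numerical sketch (the matrix $A$ below is a hypothetical stand-in for $[s]_*$, chosen diagonalizable with eigenvalues $1,2,3$; this is not from the original text):

```python
import numpy as np

# A plays the role of [s]_*: eigenvalue 1 on the first coordinate,
# eigenvalues 2 and 3 elsewhere (a diagonalizable toy example).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
m = A.shape[0]          # m = dim H
I = np.eye(m)

# P = prod_{lambda != 1} (A - lambda I)^m / prod_{lambda != 1} (1 - lambda)^m
num = np.linalg.matrix_power(A - 2 * I, m) @ np.linalg.matrix_power(A - 3 * I, m)
den = (1 - 2) ** m * (1 - 3) ** m
P = num / den

assert np.allclose(P @ P, P)                                   # idempotent
assert np.allclose(P @ np.array([1.0, 0.0, 0.0]),
                   [1.0, 0.0, 0.0])                            # fixes H^(1)
assert np.allclose(P @ np.array([0.0, 1.0, 1.0]), 0.0)         # kills the 3-eigenvector
```

The assertions confirm that the displayed formula annihilates the characteristic subspaces for $\lambda \neq 1$ and restricts to the identity on $H^{(1)}$, which is exactly what the exactness argument uses.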
\qed \medskip We now compare these exact sequences with the long exact sequence associated with \eqref{dupont}, that is, \begin{multline} \label{dupont2} H^p(T, \Omega^q_{\langle T ,\Upsilon \rangle} )^{(1)} \rightarrow H^p(T, \Omega^q_{\langle T , \Upsilon ' \rangle })^{(1)} \rightarrow H^p(H, \Omega^{q-1}_{\langle H , \Upsilon \cap H\rangle })^{(1)} \\ \stackrel{\delta}{\rightarrow} H^{p+1}(T, \Omega^q_{\langle T ,\Upsilon \rangle})^{(1)}. \end{multline} The quasi-isomorphism \eqref{Dqi} more precisely gives rise to a spectral sequence computing the cohomology of $T-\Upsilon$, and the following lemma shows that the first page of this spectral sequence computes the invariant part of the cohomology of $T-\Upsilon$. \begin{lemma}\label{C2} Suppose that $A$ is an elliptic curve and that $T-\Upsilon$ is affine. Then the Hodge--de Rham spectral sequence \begin{equation} \label{SSHdR} H^p(T, \Omega^q_{\langle T , \Upsilon \rangle})^{(1)} \Longrightarrow H^{p+q}(T-\Upsilon)^{(1)} \end{equation} degenerates at the first page, and all the connecting morphisms $\delta$ in \eqref{dupont2} vanish. \end{lemma} \proof We again proceed by induction on $(n , \# \Upsilon)$. The induction starts with the case where $\Upsilon$ consists of $n$ linearly independent hyperplanes. Then $\Upsilon$ is a normal crossing divisor and $\Omega^q_{\langle T , \Upsilon \rangle} = \Omega^q_T (\log \Upsilon )$. In this case the lemma is a consequence of the degeneration of the Hodge--de Rham spectral sequence with logarithmic poles, cf. \S \ref{B2}. Suppose now by induction that the lemma holds for the arrangements $(H, H \cap \Upsilon)$ and $(T, \Upsilon)$; we then verify it for the arrangement $(T, \Upsilon ')$ with, as before, $\Upsilon '=\Upsilon \cup H$.
From the short exact sequence \eqref{dupont} we get \begin{multline} \label{dps} \dim \ H^p(T, \Omega^q_{\langle T , \Upsilon ' \rangle})^{(1)} \\ \leq \dim \ H^p(T, \Omega^q_{\langle T , \Upsilon \rangle})^{(1)} + \dim \ H^p(H, \Omega_{\langle H , H\cap \Upsilon \rangle }^{q-1})^{(1)}. \end{multline} Summing over all pairs $(p,q)$ with $p+q=j$, we obtain in turn the following inequalities: \begin{equation*} \begin{split} \dim \ H^j(T - \Upsilon ' )^{(1)} & \stackrel{(i)}{\leq} \sum_{p+q=j} \dim \ H^p(T, \Omega^q_{\langle T , \Upsilon ' \rangle} )^{(1)} \\ & \stackrel{(ii)}{ \leq } \sum_{p+q=j} \left( \dim \ H^p(T, \Omega^q_{\langle T , \Upsilon \rangle})^{(1)} + \dim \ H^p(H, \Omega_{\langle H , H\cap \Upsilon \rangle }^{q-1})^{(1)} \right) \\ & \stackrel{(iii)}{=} \dim \ H^j(T - \Upsilon)^{(1)} + \dim \ H^{j-1}(H - (\Upsilon \cap H))^{(1)}, \end{split} \end{equation*} where each of the (in)equalities is explained below. \begin{itemize} \item[(i)] Follows from the existence of the spectral sequence \eqref{SSHdR}; equality holds for all $j$ if and only if the spectral sequence degenerates at the first page. \item[(ii)] Follows from \eqref{dps}; equality holds if and only if the connecting morphisms in the long exact sequence \eqref{dupont2} vanish. \item[(iii)] This is the induction hypothesis. \end{itemize} Finally, Lemma \ref{exactness in topology} implies that all the inequalities above are in fact equalities, hence that the spectral sequence for $\Upsilon '=\Upsilon \cup H$ degenerates and that the long exact sequence \eqref{dupont2} splits into short exact sequences. \qed \medskip \noindent {\it Remark.} As a referee pointed out to us, the general formalism of mixed Hodge complexes \cite[Scholie 8.1.9, (v)]{DeligneH3} and the work of Dupont in fact directly imply that the Hodge--de Rham spectral sequence \eqref{SSHdR} degenerates at the first page. \medskip \begin{lemma}\label{C3} Suppose $A = \mathbf{G}_m$.
The sequence \eqref{dupont} gives rise to short exact sequences: \begin{equation} \label{dupont3} 0 \rightarrow H^0(T, \Omega^j_{\langle T,\Upsilon \rangle }) \rightarrow H^0(T, \Omega^{j}_{\langle T , \Upsilon ' \rangle } ) \rightarrow H^0(H , \Omega^{j-1}_{\langle H , H \cap \Upsilon \rangle}) \rightarrow 0. \end{equation} Moreover, the action of $[s]_*$ on each of the spaces involved is locally finite. \end{lemma} \proof The first assertion follows from the fact that there is no higher cohomology, since $T$ is affine. The second assertion is obtained by induction; the base case is a consequence of Lemma \ref{L:B1}. \qed \subsection{End of the proof of Theorem \ref{P:Brieskorn}} We can now prove the theorem by induction on $(n, \# \Upsilon)$. The base case was checked in \S\S \ref{B1} and \ref{B2}. Suppose now by induction that the theorem holds for the arrangements $(H, H \cap \Upsilon)$ and $(T, \Upsilon)$ (with $T-\Upsilon$ and $H-(H\cap \Upsilon)$ affine). Let us verify the theorem for the arrangement $(T, \Upsilon ')$ with, as before, $\Upsilon '=\Upsilon \cup H$. By Lemmas \ref{exactness in topology}, \ref{C1} and \ref{C2} we have commutative diagrams $$ \xymatrix{ H^0(T, \Omega^j_{\langle T,\Upsilon \rangle })^{(1)} \ar[d] \ar@{^{(}->}[r] & H^0(T, \Omega^{j}_{\langle T , \Upsilon ' \rangle } )^{(1)} \ar[d] \ar@{->>}[r] & H^0(H , \Omega^{j-1}_{\langle H , H \cap \Upsilon \rangle})^{(1)} \ar[d] \\ H^j(T - \Upsilon )^{(1)} \ar@{^{(}->}[r] & H^j (T- \Upsilon ' )^{(1)} \ar@{->>}[r] & H^{j-1}( H - (H \cap \Upsilon ))^{(1)} } $$ in which the horizontal sequences are exact and, by the induction hypothesis, the left and right vertical morphisms are isomorphisms. The middle vertical morphism is therefore an isomorphism as well, which proves the theorem.
\qed \medskip \noindent {\it Remark.} As a referee pointed out to us, mixed Hodge theory and the work of Dupont make it possible to deduce directly from the purity Lemma \ref{C1} that $$H^0(T, \Omega^\bullet_{\langle T,\Upsilon \rangle })^{(1)} \to H^\bullet (T - \Upsilon )^{(1)}$$ is an isomorphism. We have preferred to keep our somewhat more pedestrian approach. In any case, the heart of the argument relies on \cite{DupontC}. \medskip \chapter{Differential forms on the symmetric space associated with $\mathrm{SL}_n (\mathbf{C})$} \label{C:4} \resettheoremcounters In this chapter we fix an integer $n\geq 2$ and write $V = \mathbf{C}^n$; we view the elements of $V$ as column vectors. A matrix $g \in \mathrm{GL}_n (\mathbf{C})$ defines a Hermitian form on $\mathbf{C}^n$ with associated Hermitian matrix $g^{-*}g^{-1}$.\footnote{We write $M^\top$ for the transpose of a matrix $M$, and $M^*=\overline{M}^\top$. When $M$ is invertible we finally write $M^{-\top}$ and $M^{-*}$ for the transpose and the conjugate transpose of its inverse.} We deduce a bijection \begin{equation*} S:=\mathrm{GL}_n (\mathbf{C} ) / \mathrm{U}_n \simeq \left\{H \; : \; H \text{ positive definite Hermitian form on } \mathbf{C}^n \right\}. \end{equation*} In this chapter we write $$G = \mathrm{GL}_n (\mathbf{C}) \quad \mbox{and} \quad K = \mathrm{U}_n$$ so that $S=G/K$. Identifying $\mathbf{R}_{>0}$ with the real center of $\mathrm{GL}_n (\mathbf{C})$ {\it via} the map $s \mapsto s \cdot 1_n$, the quotient $$X = \mathrm{GL}_n (\mathbf{C}) / \mathrm{U}_n \mathbf{R}_{>0}$$ is the symmetric space associated with the group $\mathrm{SL}_n (\mathbf{C})$. As base point in $S$ we take the Hermitian metric $|\cdot |$.
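Concretely, the bijection sends $g$ to the positive definite matrix $g^{-*}g^{-1}$, and the $G$-action on forms is $h : H \mapsto h^{-*}Hh^{-1}$. A small numerical sketch (the sample matrices below are arbitrary choices, not from the text):

```python
import numpy as np

def herm(g):
    """Hermitian matrix g^{-*} g^{-1} attached to g in GL_n(C)."""
    gi = np.linalg.inv(g)
    return gi.conj().T @ gi

g = np.array([[1.0 + 1.0j, 2.0], [0.0, 3.0j]])   # some g in GL_2(C)
H = herm(g)

# H is Hermitian and positive definite:
assert np.allclose(H, H.conj().T)
assert np.all(np.linalg.eigvalsh(H) > 0)

# The action matches composition: herm(h g) = h^{-*} herm(g) h^{-1}.
h = np.array([[2.0, 1.0j], [0.0, 1.0]])
hi = np.linalg.inv(h)
assert np.allclose(herm(h @ g), hi.conj().T @ H @ hi)
```

The last assertion is the equivariance making $S \simeq \{ \text{positive definite Hermitian forms} \}$ an identification of $G$-spaces.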
For $z = (z_1 , \ldots , z_n) \in \mathbf{C}^n$, viewed as a column vector, we have $|z|^2 = |z_1 |^2 + \ldots + |z_n|^2$, and the Hermitian form $H$ associated with a matrix $g \in \mathrm{GL}_n (\mathbf{C})$ is therefore given by $H(z,z) = | g^{-1} z |^2$. In this chapter we explain how the Mathai--Quillen formalism yields two natural $G$-invariant differential forms $$\psi \in A^{2n-1}(S \times \mathbf{C}^n ) \quad \mbox{and} \quad \varphi \in A^{2n} (S \times \mathbf{C}^n )$$ which decrease rapidly along the fibers of $S \times \mathbf{C}^n \to S$ and satisfy: \begin{enumerate} \item $\varphi$ is a \emph{Thom form}, in the sense that it is closed and of integral $1$ along the fibers $\mathbf{C}^n$, and \item the Mellin transform of $\psi$ at $0$ defines a closed form on $S\times (\mathbf{C}^n - \{ 0 \} )$ whose restriction to the fibers $\mathbf{C}^n - \{0 \}$ represents the fundamental class. \end{enumerate} \section{Mathai--Quillen forms} In the rest of this chapter we write $\mathfrak{g}$ and $\mathfrak{k}$ for the Lie algebras of $G$ and $K$ respectively, and $\mathfrak{p}$ for the orthogonal complement of $\mathfrak{k}$ in $\mathfrak{g}$ with respect to the Killing form. \subsection{Principal $K$-bundle over $S$} The projection $G \to G/K$ exhibits the group $G$ as a principal $K$-bundle over $S$. A connection on $G$ is a $1$-form $\theta \in A^1(G) \otimes \mathfrak{k}$ such that \begin{equation} \begin{split} \mathrm{Ad}(k)(k^*\theta) &= \theta, \qquad k \in K, \\ \iota_X \theta &=X, \qquad X \in \mathfrak{k}. \end{split} \end{equation} The bundle $G \to S$ is naturally $G$-equivariant (with respect to the natural left actions of $G$ on itself and on $S$).
Let $\theta$ be the $G$-invariant connection on $G$ defined as follows: {\it via} the isomorphisms \[ (A^1(G) \otimes \mathfrak{k})^{G \times K} \simeq (\mathfrak{g}^* \otimes \mathfrak{k})^K \simeq \mathrm{Hom}_K(\mathfrak{g},\mathfrak{k}), \] a $G$-invariant connection corresponds to a $K$-equivariant section of the inclusion $\mathfrak{k} \hookrightarrow \mathfrak{g}$. We define $\theta$ to be the connection associated with the projection $p : \mathfrak{g} \to \mathfrak{k}$. The connection form $\theta$ is thus explicitly given by the formula \[ \theta = p(g^{-1}dg) = \frac{1}{2}(g^{-1}dg - g^* d( g^{-*})). \] Its associated curvature form \[ \Omega = (\Omega_{ij} )_{1 \leq i, j \leq n} = d\theta+\theta^2 \in A^2(G) \otimes \mathfrak{k} \] is $G$-invariant and horizontal, that is, $\iota_X \Omega = 0$ for every $X \in \mathfrak{k}$ --- identified with the vector field generated by the right action of $\mathfrak{k}$ on $G$. \subsection{Associated vector bundle} Let $$G \times^K V = [ G \times V ] / K,$$ where the (right) action of $K$ on $G \times V$ is $(g,v)k = (gk,k^{-1}v)$. This is a vector bundle over $S=G/K$, and the action $$h \cdot [g,v]=[hg,v]$$ equips it with the structure of a $G$-equivariant bundle. The standard Hermitian form $v \mapsto v^*v$ on $V$ finally endows it with a $G$-equivariant metric. \subsection{An explicit Thom form} We recall here the expression of the Thom form constructed by Mathai and Quillen \cite{MathaiQuillen} on $G \times^K V$. Let $z_1,\overline{z}_1, \ldots,z_n, \overline{z}_n$ be the standard coordinates on $V$. We write $z$ and $dz$ for the column vectors $(z_1,\ldots,z_n)^\top$ and $(dz_1,\ldots,dz_n)^\top$ respectively.
Given a subset $I=\{i_1,\ldots,i_p\} \subseteq \{1, \ldots,n\}$, with $i_1 < \cdots < i_p$, and a vector $\xi=(\xi_1,\ldots,\xi_n )$, we write $$\xi^I = \xi_{i_1} \cdots \xi_{i_p} \quad \mbox{and} \quad \xi^{I*} = \xi_{i_p} \cdots \xi_{i_1}.$$ We write $I'=\{1,\ldots,n \}-I$ for the complement of $I$ and define a sign $\epsilon(I,I')$ by the identity $dz^I dz^{I'} = \epsilon(I,I') dz_1 \cdots dz_n$. The expression \begin{equation} \label{E:UMQ} U = \left(\frac{i}{2\pi}\right)^n e^{-|z|^2} \sum_{I, J} \epsilon(I,I') \epsilon(J,J') \det(\Omega_{IJ}) (dz+\theta z)^{I'} \overline{(dz+\theta z)}^{J' *}, \end{equation} where the sum runs over all pairs $(I,J)$ of subsets of $\{1,\ldots,n\}$ with $|I|=|J|$ and $\Omega_{IJ}=(\Omega_{ij})_{i \in I, j \in J}$, defines a $K$-invariant, closed, horizontal form of degree $2n$ on $G \times V$, hence a form in $A_{d=0}^{2n}(G \times^K V)$; this is the Mathai--Quillen Thom form. \subsection{Differential forms on $S \times V$} The standard representation of $G$ on $V$ makes the trivial bundle $$E = S \times V \to S$$ a $G$-equivariant bundle --- for every $g \in G$, the isomorphism $\alpha(g):E \xrightarrow{\sim} g^*E$ is given, in each fiber, by multiplication by $g$. The bundle $E$ carries a natural $G$-equivariant Hermitian metric $|\cdot|_H^2$ given by $|v|_H^2 = v^*Hv$, where $H=H(g)=g^{-*}g^{-1}$. The map \[ \Phi: E \to G \times^{K} V, \qquad \Phi(gK,v) = (g,g^{-1}v) \] is a $G$-equivariant isometry. \begin{definition} Let \[ \varphi = \Phi^*( U) \in A^{2n} (E)^G \quad \mbox{and} \quad \psi = \iota_X \varphi \in A^{2n-1} (E)^G, \] where $X = \sum_i (z_i \partial_{i} + \overline{z}_i \overline{\partial}_i )$ is the radial vector field on $E$. \end{definition} The form $\varphi$ is closed, rapidly decreasing, and of integral $1$ along the fibers of $S \times V \to S$; it is a \emph{Thom form} in this sense.
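At the base point ($g = 1$, so $\theta = \Omega = 0$) and for $n=1$, formula \eqref{E:UMQ} reduces to $\frac{i}{2\pi}e^{-|z|^2}\,dz\wedge d\overline z = \frac{1}{\pi}e^{-(x^2+y^2)}\,dx\wedge dy$, and the normalization of the fiber integral can be checked symbolically (a sketch assuming sympy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
# Fiber restriction of the Thom form for n = 1 at the base point:
# (i/2pi) e^{-|z|^2} dz ∧ dz̄  =  (1/pi) e^{-(x^2+y^2)} dx ∧ dy.
density = sp.exp(-(x**2 + y**2)) / sp.pi
total = sp.integrate(density, (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))
assert total == 1  # the form has integral 1 along the fiber C
```

This is the $n=1$ instance of the statement that $\varphi$ has integral $1$ along each fiber; the general case follows by taking products of Gaussian factors.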
\section{A closed $(2n-1)$-form on $X \times (\mathbf{C}^n - \{ 0 \})$} \label{S:42} Multiplication by a positive real number $s$ on $V=\mathbf{C}^n$ induces a map $$[s] : S \times V \to S \times V.$$ It follows from the definitions that \begin{equation} \label{E:tddt1} d ([s]^* \psi ) = s \frac{d}{ds} ( [s]^* \varphi ). \end{equation} The following proposition is essentially due to Mathai and Quillen \cite[\S 7]{MathaiQuillen}. \begin{proposition} \label{P:eta} The integral \begin{equation} \label{E:eta} \eta = \int_0^{+\infty} [s]^* \psi \frac{ds}{s} \end{equation} converges and defines a \emph{closed} form in $A^{2n-1} (X \times (\mathbf{C}^n - \{ 0 \}))^G$ whose restriction to each fiber $\mathbf{C}^n -\{0 \}$ of $X \times (\mathbf{C}^n - \{ 0 \} )$ represents the fundamental class. \end{proposition} \begin{proof} The explicit expression \eqref{E:UMQ} of the Mathai--Quillen Thom form shows that the integral converges at $0$; in fact, as $s$ tends to $0$ the form $[s]^* \psi$ is $O(s)$. Now if $v \in V$ is a nonzero vector, its image $s v$ tends to infinity with $s$, and the rapid decrease of $\psi$ along the fibers of $S \times V \to S$ implies that the integral \eqref{E:eta} converges on $S \times (\mathbf{C}^n - \{ 0 \})$. Since $\eta$ is moreover invariant, by construction, under multiplication in the fibers, it finally defines a form on $X \times (\mathbf{C}^n - \{ 0 \} )$. The form $\eta$ has degree $2n-1$ and is $G$-invariant. We compute its differential using \eqref{E:tddt1}: \begin{equation*} \begin{split} d \eta & = d \left( \int_0^{+\infty} [s]^* \psi \frac{ds}{s} \right) \\ & = \int_0^{+\infty} d([s]^* \psi) \frac{ds}{s} \\ & = \int_0^{+\infty} \frac{d}{ds} ([s]^* \varphi ) ds .
\end{split} \end{equation*} The rapid decrease of $\varphi$ along the fibers of $E \to S$ shows, as above, that on $S \times (\mathbf{C}^n - \{ 0 \} )$ the forms $[s]^* \varphi$ tend to $0$ as $s$ tends to infinity. Finally, it follows from \eqref{E:UMQ} that as $s$ tends to $0$ the form $[s]^* \varphi$ tends to \begin{equation} \label{E:chern} \omega = \left( \frac{i}{2\pi} \right)^n \Phi^* (\det \Omega). \end{equation} The differential $d\eta$ is therefore equal to \eqref{E:chern}. Let us now show that this form vanishes identically. The $G$-invariant $2n$-form \eqref{E:chern} equals the pullback, under the projection $E \to S$, of the Chern--Weil representative of the top-degree Chern class $c_n (E)$.\footnote{Since the $G$-bundle $E$ is flat, the form \eqref{E:chern} necessarily admits a $G$-invariant primitive, which is precisely what the form $\eta$ is. Showing that $\eta$ is closed is a problem analogous to the existence of the canonical lift in Proposition \ref{P4}.} Let $B \subset \mathrm{GL}_n (\mathbf{C})$ be the Borel subgroup of upper triangular matrices. The decomposition $G = B K$ induces a $B$-equivariant diffeomorphism $$f: B \times^{B \cap K} V \to G \times^K V.$$ It therefore suffices to show that the differential $2n$-form \begin{equation} \label{E:f*MQ} \left( \frac{i}{2\pi} \right)^n f^* (\det \Omega) \end{equation} vanishes identically. Now, since $B \cap K = \mathrm{U}_1^n$, the Hermitian bundle $B \times^{B \cap K} V$ splits metrically into a direct sum of line bundles over $B/(B \cap K)$. By functoriality, the form \eqref{E:f*MQ} over a point $[b]$ in $B/(B \cap K)$ is the $b$-translate of the product of $n$ forms on the $\mathrm{U}_1$ factors of $B \cap K$, each of these forms being equal to \eqref{E:f*MQ} with $n=1$.
We are thus reduced to the case $n=1$, and the vanishing of \eqref{E:f*MQ} follows from the fact that there is no nonzero $2$-form on the circle. \medskip \noindent {\it Remark.} A more conceptual proof is possible: by Weyl's unitary trick it suffices to show that the invariant form corresponding to \eqref{E:chern} on the compact dual of $S$, $$S^c = \mathrm{U}_n = (\mathrm{U}_n \times \mathrm{U}_n) /\mathrm{U}_n,$$ vanishes identically. But this form is harmonic, being closed and invariant, and represents the top Chern class of a flat bundle. It is therefore necessarily zero. \medskip It remains to see that $\eta$ represents the fundamental class of $S \times (\mathbf{C}^n - \{ 0 \} )$. For this it suffices to restrict to the fiber over the base point of $S$. There $\varphi$ is simply $$ \left( \frac{i}{2\pi} \right)^n e^{-|z|^2} dz_1 \wedge \ldots \wedge dz_n \wedge d\overline{z}_n \wedge \ldots \wedge d \overline{z}_1 = \frac{1}{\pi^n} e^{- \sum_j x_j^2} dx_1 \wedge \ldots \wedge dx_{2n}$$ where the $x_j$ are the coordinates in a positively oriented orthonormal basis of the real vector space $\mathbf{C}^n$. In these coordinates the radial field $X$ reads $\sum_j x_j \partial_{x_j}$ and the form $\psi$ equals $$\frac{1}{\pi^n} e^{- \sum_j x_j^2} \sum_{j=1}^{2n} (-1)^{j-1} x_j dx_1 \wedge \ldots \wedge \widehat{dx_j} \wedge \ldots \wedge dx_{2n}.$$ We finally find that the form $\eta$, restricted to the fiber $\mathbf{C}^n - \{ 0 \}$ over the base point of $S$, equals $$\frac{\Gamma (n)}{2 \pi^n} \frac{ \sum_{j=1}^{2n} (-1)^{j-1} x_j dx_1 \wedge \ldots \wedge \widehat{dx_j} \wedge \ldots \wedge dx_{2n}}{ (x_1^2 + \ldots + x_{2n}^2)^n}$$ which is the normalized volume form on the sphere $\mathbf{S}^{2n-1}$.
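The constant $\Gamma(n)/2\pi^n$ is indeed the right normalization: scaling gives the Mellin integral $\int_0^\infty s^{2n}e^{-s^2r^2}\,\frac{ds}{s}=\frac{\Gamma(n)}{2}r^{-2n}$, while $\mathrm{vol}(\mathbf{S}^{2n-1})=2\pi^n/\Gamma(n)$. A quick symbolic check for $n=2$ (a sketch, assuming sympy is available):

```python
import sympy as sp

n = 2
s, r = sp.symbols('s r', positive=True)

# Mellin integral produced by eta: int_0^oo s^{2n} e^{-s^2 r^2} ds/s.
mellin = sp.integrate(s**(2 * n) * sp.exp(-s**2 * r**2) / s, (s, 0, sp.oo))
assert sp.simplify(mellin - sp.gamma(n) / (2 * r**(2 * n))) == 0

# Gamma(n)/(2 pi^n) is the reciprocal of vol(S^{2n-1}) = 2 pi^n / Gamma(n),
# so the displayed form integrates to 1 over the sphere.
vol = 2 * sp.pi**n / sp.gamma(n)
assert sp.simplify(vol * sp.gamma(n) / (2 * sp.pi**n)) == 1
```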
\end{proof} \medskip \noindent {\it Remark.} One can identify $S$ with $X \times \mathbf{R}_{>0}$ {\it via} the $\mathrm{GL}_n (\mathbf{C} )$-equivariant map $g\mathrm{U}_n \mapsto ( [g] , | \det (g) |^{1/n})$. The forms $\varphi$ and $\psi$ then define two invariant forms on $X \times \mathbf{R}_{>0} \times V$ and we have $$\varphi = \alpha - \psi \wedge \frac{dr}{r},$$ where $\alpha$ is a differential form on $X \times \mathbf{R}_{>0} \times V$ with no $dr$ component along $\mathbf{R}_{>0}$, whose restriction to each $X \times \{s \} \times V$ $(s \in \mathbf{R}_{>0})$ equals $[s]^* \varphi$. The fact that $\varphi$ is closed is equivalent to \eqref{E:tddt1} and, over $V-\{0 \}$, the form $\eta$ is the result of the partial integration $$\eta = \int_{\mathbf{R}_{>0}} \varphi.$$ \section{Explicit computations in the case $n=1$} In this section we assume $n=1$. In this case $S= \mathbf{R}_{>0}$ and $X$ is reduced to a point. One can then compute the Mathai--Quillen forms explicitly, cf. \cite{Takagi}. One finds that for $(r,z) \in \mathbf{R}_{>0} \times \mathbf{C}$ we have \begin{equation} \label{E:phiN1} \varphi = \frac{i}{2\pi} e^{- r^2 |z|^2 } \left( r^2 dz \wedge d \overline{z} -r^2 ( z d\overline{z} - \overline{z}dz ) \wedge \frac{dr}{r} \right) \end{equation} so that \begin{equation} \label{E:psiN1} \psi = - \frac{i}{2\pi} \left( r^2 |z|^2 e^{- r^2 |z|^2 } \right) \left( \frac{dz}{z} - \frac{d\overline{z}}{\overline{z}} \right) \end{equation} and \begin{equation} \label{E:etaN1} \eta = - \frac{i}{4\pi} \left( \frac{dz}{z} - \frac{d\overline{z}}{\overline{z}} \right).
\end{equation} \section{Schwartz forms and the Weil representation} \label{S:44} The total space of the bundle $E$ is a homogeneous space under the action of the affine group $G \ltimes V = \mathrm{GL}_n (\mathbf{C}) \ltimes \mathbf{C}^n$, where $$(g,v) \cdot (g' , v' ) = (gg' , g v' + v).$$ The left action of the affine group on $E= S \times V$ is transitive, and the stabilizer of the base point $([K] , 0)$ is the group $K=\mathrm{U}_n$ embedded in the affine group {\it via} the map $k \mapsto (k,0)$. Let $\mathcal{S} (V)$ be the Schwartz space of $V$. We call {\it Weil representation} of the affine group on $\mathcal{S} (V)$ the representation $\omega$ given by $$\omega (g , v) : \mathcal{S} (V) \to \mathcal{S} (V) ; \quad \phi \mapsto \left( w \mapsto \phi ( g^{-1} (w-v)) \right).$$ Let\footnote{The isomorphism is induced by evaluation at the base point $(eK, 0)$ in $S \times V$.} $$A^{k} (E , \mathcal{S}(V))^{G \ltimes V} := \left[ \mathcal{S} (V) \otimes A^k (E) \right]^{G \ltimes V} \cong \left[\mathcal{S}(V) \otimes \wedge^\bullet (\mathfrak{p} \oplus V)^* \right]^K$$ be the space of invariant $k$-forms on $E$ with values in $\mathcal{S}(V)$. \begin{definition} For every $w \in V$, set $$\widetilde{\varphi} (w) = t_{-w}^* \varphi \in A^{2n} (E) \quad \mbox{and} \quad \widetilde{\psi}(w) = t_{-w}^* \psi \in A^{2n-1} (E),$$ where $t_{-w} : E \to E$ denotes translation by $-w$ in the fibers of $E$.
\end{definition} \begin{lemma} The maps $w \mapsto \widetilde{\varphi} (w)$ and $w \mapsto \widetilde{\psi}(w)$ define elements $$\widetilde{\varphi} \in A^{2n} (E , \mathcal{S}(V))^{G \ltimes V} \quad \mbox{and} \quad \widetilde{\psi} \in A^{2n-1} (E , \mathcal{S}(V))^{G \ltimes V}.$$ \end{lemma} \begin{proof} Let us first show the invariance, that is, that for every $(g,v) \in G \ltimes V$ and every $w \in V$ we have \begin{equation} \label{E:invariancephipsi} (g,v)^* \widetilde{\varphi} (gw+v) = \widetilde{\varphi} (w) \quad \mbox{and} \quad (g,v)^* \widetilde{\psi} (gw+v) = \widetilde{\psi} (w). \end{equation} Here $(g,v)^*$ denotes the pullback by the diffeomorphism of $E$ induced by the element $(g,v) \in G \ltimes V$. The identity \eqref{E:invariancephipsi} follows from the definitions; we check it for $\widetilde{\varphi}$, the case of $\widetilde{\psi}$ being handled in the same way: \begin{equation*} \begin{split} (g,v)^* \widetilde{\varphi} (gw+v) & = \left[ (1,v) \cdot (g , 0) \right]^* \widetilde{\varphi} (gw+v) \\ & = g^* \left[ (1,v)^* \widetilde{\varphi} (gw+v) \right] \\ & = g^* \left[ (1,v)^* ((1, -gw -v)^* \varphi ) \right] \\ & = g^* \left[ (1 , -gw )^* \varphi \right] = \left[ (1, -gw) \cdot (g, 0 ) \right]^* \varphi \\ & = (g , -gw)^* \varphi = \left[ (g,0) \cdot (1 , -w) \right]^* \varphi \\ & = (1 , -w)^* (g^* \varphi ) = (1,-w)^* \varphi = \widetilde{\varphi} (w). \end{split} \end{equation*} It remains to check that $\widetilde{\varphi}$ --- and likewise $\widetilde{\psi}$ --- indeed defines a differential form with values in $\mathcal{S}(V)$. By $(G \ltimes V)$-invariance, it suffices to note that for every $X$ in $\wedge^{2n} (\mathfrak{p} \oplus V)$, the function $$(\widetilde{\varphi} (w))_{(eK , 0)} (X) = (t_{-w}^* \varphi )_{(eK , 0)} (X) = [\varphi (-w)] (X) $$ is indeed a Schwartz function of $w$.
\end{proof} \medskip \noindent {\it Remark.} From our point of view, the major contribution of the Mathai--Quillen formalism is to provide these ``archimedean'' test functions, to which one can then apply the automorphic formalism. It is indeed notoriously delicate to construct the right test functions at infinity in general. \medskip By definition we have $$\widetilde{\varphi} (0) = \varphi \quad \mbox{and} \quad \widetilde{\psi} (0) = \psi$$ and it follows in particular from \eqref{E:invariancephipsi} that \begin{equation} \label{E:pullback} v^* \widetilde{\varphi} (w) = (v-w)^* \varphi \quad \mbox{and} \quad v^* \widetilde{\psi} (w) = (v-w)^* \psi \in A^\bullet (S) , \end{equation} where we have identified a vector of $V$ with the constant section $S \to E$ it defines. Finally, the forms $[s]^* \varphi$ and $[s]^* \psi$ in turn define differential forms $\widetilde{[s]^* \varphi}$ and $\widetilde{[s]^* \psi}$ in $A^{\bullet} (E, \mathcal{S}(V))^{G \ltimes V}$ satisfying \begin{equation} \label{E:invariancephipsi2} \widetilde{[s]^* \varphi} (w) = [s]^* \widetilde{\varphi} (s w) \quad \mbox{and} \quad \widetilde{[s]^* \psi} (w) = [s]^* \widetilde{\psi} (s w). \end{equation} The following lemma follows from the construction. \begin{lemma} \label{L:convcourant} 1. As $s$ tends to $+\infty$, the forms $\widetilde{[s]^*\varphi} (0) = [s]^* \varphi$ converge uniformly on every compact subset of $S \times (\mathbf{C}^n - \{ 0 \})$ to the zero form, and converge in the sense of currents to the current of integration along $S \times \{ 0 \}$. 2. Let $v \in \mathbf{C}^n - \{ 0 \}$. As $s$ tends to $+\infty$, the forms $$\widetilde{[s]^*\varphi} (v) = [s]^* \widetilde{\varphi} (sv)$$ converge uniformly exponentially fast on every compact subset of $S \times \mathbf{C}^n$ to the zero form. 3.
As $s$ tends to $0$, the forms $\widetilde{[s]^*\varphi} (0) = [s]^* \varphi$ converge uniformly on every compact subset of $S \times \mathbf{C}^n$ to the zero form. \end{lemma} \medskip \noindent {\it Remark.} In the last case, the forms $\widetilde{[s]^*\varphi} (0) = [s]^* \varphi$ converge uniformly on every compact subset of $S \times \mathbf{C}^n$ to the form $\omega$ defined in \eqref{E:chern}, which we showed to be identically zero. In the last chapter we consider the form $\varphi$ associated with the product $$\mathrm{GL}_n (\mathbf{R} ) / \mathrm{SO}_n \times \mathrm{SL}_2 (\mathbf{R} ) /\mathrm{SO}_2 \times \mathbf{C}^n.$$ In that case the forms $\widetilde{[s]^*\varphi} (0) = [s]^* \varphi$ converge uniformly on every compact subset of $S \times \mathbf{C}^n$ to an invariant form that is nonzero in general. \chapter{Satake and Tits compactifications, and modular symbols} \label{C:5} \resettheoremcounters Symmetric spaces admit numerous equivariant compactifications, cf. \cite{BorelJi}. We recall here two of them, the (minimal) Satake compactification and the Tits compactification. We then study the behavior of the form $\eta$, defined in the preceding chapter, as one approaches the boundary of these compactifications. \section{Satake compactification} The space $S$ is an open cone in the real vector space $\mathcal{H}$ of Hermitian matrices of size $n$. The action of $G$ on $S$, $$g : H \mapsto g^{-*} H g^{-1},$$ extends to an action on $\mathcal{H}$ which induces an action of $G$ on the projective space $\mathrm{P} (\mathcal{H})$. The map $$i : X \to \mathrm{P} (\mathcal{H})$$ is a $G$-equivariant embedding. The closure of its image $i(X)$ in the compact space $\mathrm{P} (\mathcal{H})$ is therefore a $G$-equivariant compactification of $X$; it is called the {\it (minimal) Satake compactification} and denoted $\overline{X}^S$. It is convex and hence contractible.
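The boundary points arise as limits of rays in the cone: a one-parameter family of metrics, normalized in $\mathrm{P}(\mathcal{H})$, degenerates to a positive semidefinite matrix with nontrivial kernel. A numerical sketch for $n=2$ (the family $H_t=\mathrm{diag}(t,1)$ is an illustrative choice, not from the text):

```python
import numpy as np

def normalize(H):
    """Trace-one representative of the line R_{>0}·H in P(H)."""
    return H / np.trace(H)

# The ray of metrics H_t = diag(t, 1) leaves every compact subset of S as
# t -> 0; in P(H) it converges to a positive semidefinite limit with
# kernel W = <e_1>, i.e. a point of the boundary component b(<e_1>).
limit = normalize(np.diag([0.0, 1.0]))
assert np.allclose(normalize(np.diag([1e-12, 1.0])), limit, atol=1e-10)
assert np.linalg.matrix_rank(limit) == 1
```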
The Satake compactification $\overline{X}^S$ decomposes into a disjoint union \begin{equation} \label{E:Xsatake1} \overline{X}^S = X \bigcup_{W} b(W) , \end{equation} where $W$ runs over the nonzero proper subspaces of $\mathbf{C}^n$ and $b(W)$ denotes the image in $\mathrm{P} (\mathcal{H})$ of the cone over the set of positive semidefinite Hermitian matrices whose kernel is exactly $W$. We write $P_W$ for the parabolic subgroup of $\mathrm{SL}_n (\mathbf{C})$ preserving $W$. If $g \in \mathrm{SL}_n (\mathbf{C})$ is such that $$W = g \langle e_1 , \ldots , e_{j} \rangle \quad (j=\dim W) ,$$ the group $P_W$ is obtained by conjugating by $g$ the group $$P=P_j = P_{\langle e_1 , \ldots , e_{j} \rangle}.$$ Let \begin{equation*} \begin{split} N=N_j &= \left\{ \left. \begin{pmatrix} 1_j & x \\ 0 & 1_{n-j}\end{pmatrix} \ \right| \ x \in M_{j, (n-j)} (\mathbf{C} ) \right\} \\ M=M_j &= \left\{ \left. \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \ \right| \ A \in \mathrm{GL}_j(\mathbf{C}), \ B \in \mathrm{GL}_{n-j}(\mathbf{C}) , \ |\det(A)| =|\det(B)|=1 \right\} \\ A=A_j &= \left\{ \left. a(t_1,t_2):=\begin{pmatrix} t_1 1_j & 0 \\ 0 & t_2 1_{n-j} \end{pmatrix} \ \right| \ t_1, t_2 \in \mathbf{R}_{>0}, \ \det a(t_1,t_2) = 1 \right\} . \end{split} \end{equation*} An element $g \in \mathrm{SL}_n(\mathbf{C})$ can be written \[ g = u m a k, \quad u \in N, \ m \in M, \ a \in A, \ k \in \mathrm{SU}_n . \] In this decomposition $u$ and $a$ are uniquely determined by $g$, while $m$ and $k$ are determined up to an element of $M \cap \mathrm{SU}_n$. We can therefore write $a=a(t_1(g),t_2(g))$ with $t_1(g)$ and $t_2(g)$ determined by $g$; more precisely, \[ t_1(g)^{-j} = \det (H(g)|_{ \langle e_1,\ldots,e_j \rangle }), \] where $H(g)|_{\langle e_1,\ldots,e_j \rangle}$ denotes the restriction to the subspace $\langle e_1,\ldots, e_j \rangle$ of the Hermitian metric $v \mapsto |g^{-1}v|^2$ determined by $g$.
Given a real number $t \in \mathbf{R}^+$, set \[ A_t = \{a(t_1,t_2) \in A \; : \; t_1 /t_2 \geq t \}. \] \begin{definition} The \emph{Siegel set} associated with a relatively compact subset $\omega \subset NM$ is the subset \[ \mathfrak{S}_j (t,\omega) := \omega A_t \cdot \mathrm{SU}_n \subset \mathrm{SL}_n(\mathbf{C}); \] we will also speak of a Siegel set for its image in $X$. More generally, a \emph{Siegel set associated with $W$} is any subset of the form \[ \mathfrak{S}_W (g,t,\omega) := g \omega A_t \cdot \mathrm{SU}_n \] where $g \in \mathrm{GL}_n(\mathbf{C})$ satisfies $g \langle e_1,\ldots,e_{\mathrm{dim} W} \rangle =W$. \end{definition} In the decomposition \eqref{E:Xsatake1}, the boundary components $b(W)$ with $j = \dim W$ fixed all belong to a single $G$-orbit $G\cdot X_{n-j}$, where $X_{n-j}$ denotes the symmetric space associated with the subspace $\langle e_{j +1},\ldots, e_n \rangle$, embedded in $\mathrm{P} (\mathcal{H})$ {\it via} the inclusion $$\mathrm{GL}_{n-j} (\mathbf{C}) \to \mathcal{H} ; \quad A \mapsto \begin{pmatrix} 0 & 0 \\ 0 & A \end{pmatrix}$$ so that the Hermitian forms in the image have kernel $\langle e_1,\ldots, e_j \rangle$. The group $G$ in fact has exactly $n$ orbits: \begin{equation} \label{E:Xsatake2} \overline{X}^S = X \sqcup G \cdot X_{n-1} \sqcup \ldots \sqcup G \cdot X_1. \end{equation} Finally, the topology on $\overline{X}^S$ can be understood inductively: a relatively compact open neighborhood $U$ of a point $p$ in $X_{n-1}$ lifts to a relatively compact open subset of $\mathrm{GL}_{n-1} (\mathbf{C}) \subset M_1$. The product of this open set with a relatively compact subset of $N_1$ defines a relatively compact subset $\omega \subset N_1 M_1$. One then obtains a neighborhood of $p$ in $\overline{X}^S$ by taking the union of $U$ with the Siegel set $\mathfrak{S} (t , \omega )$.
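For orientation (an illustrative example, not from the original text): when $n=2$ and $j=1$, the space $X$ is hyperbolic $3$-space and a Siegel set is a cusp neighborhood. Writing $A=\{\mathrm{diag}(t_1,t_1^{-1})\}$, we have

```latex
\[
\mathfrak{S}_1(t,\omega) \;=\; \omega A_t \cdot \mathrm{SU}_2,
\qquad
A_t \;=\; \{\, \mathrm{diag}(t_1, t_1^{-1}) \;:\; t_1^2 \geq t \,\},
\]
```

and, in the upper half-space model of $X \simeq \mathbf{H}^3$ in which $a(t_1,t_1^{-1})$ sits at height $t_1^2$, the image of $\mathfrak{S}_1(t,\omega)$ is the set of points lying above height $t$ over the relatively compact region determined by $\omega$, i.e. a neighborhood of the cusp fixed by $P_1$.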
\section{Compactification de Tits} \label{S:Tits} L'immeuble de Tits $\mathbf{T}=\mathbf{T}_n$ associé au groupe $\mathrm{SL}_n (\mathbf{C})$ est un ensemble simplicial dont les simplexes non dégénérés sont en bijection avec les sous-groupes paraboliques propres de $\mathrm{SL}_n (\mathbf{C})$, ou de manière équivalente avec les drapeaux propres \begin{equation} \label{E:flag} W_\bullet: 0 \subsetneq W_1 \subsetneq \cdots \subsetneq W_k \subsetneq \mathbf{C}^n, \quad k \geq 1. \end{equation} Le stabilisateur $Q(W_\bullet) \subset \mathrm{SL}_n (\mathbf{C} )$ d'un tel drapeau est un sous-groupe parabolique propre qui définit un simplexe de dimension $k-1$ dans $\mathbf{T}$. La $i$-ème face de ce simplexe correspond au drapeau déduit de $W_\bullet$ en enlevant $W_i$. Notons $Q= N_Q A_Q M_Q$ la décomposition de Langlands (associée au choix fixé du sous-groupe compact $\mathrm{SU}_n$) d'un sous-groupe parabolique $Q$ de $\mathrm{SL}_n (\mathbf{C})$ et $\mathfrak{q}$, $\mathfrak{n}_Q$, $\mathfrak{a}_Q$ et $\mathfrak{m}_Q$ les algèbres de Lie correspondantes. Soit $\Phi^+(Q ,A_Q )$ l'ensemble des racines pour l'action adjointe de $\mathfrak{a}_Q$ sur $\mathfrak{n}_Q$. Ces racines définissent une chambre positive $$ \mathfrak{a}_Q^+ = \left\{ H \in \mathfrak{a}_Q \; : \; \alpha(H)>0, \quad \alpha \in \Phi^+(Q,A_Q) \right\}. $$ En notant $\langle \cdot, \cdot \rangle$ la forme de Killing sur $\mathfrak{sl}_n (\mathbf{C})$, on définit un simplexe ouvert $$ \mathfrak{a}_Q^+(\infty) = \left\{ H \in \mathfrak{a}_Q^+ \; : \; \langle H, H \rangle = 1 \right\} \subset \mathfrak{a}_Q^+ $$ et un simplexe fermé $$ \overline{\mathfrak{a}_Q^+}(\infty) = \left\{ H \in \mathfrak{a}_Q \; : \; \alpha(H) \geq 0, \ \langle H, H \rangle = 1, \quad \alpha \in \Phi^+(Q,A_Q) \right\} $$ dans $\mathfrak{a}_Q$. \medskip \noindent {\it Remarque.} Lorsque $Q= P_W$ est maximal, l'algèbre de Lie $\mathfrak{a}_Q$ est de dimension $1$ et $\overline{\mathfrak{a}_Q^+}(\infty)$ se réduit à un point.
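Pour illustrer cette remarque, on peut calculer numériquement l'unique point de $\overline{\mathfrak{a}_Q^+}(\infty)$ lorsque $Q = P_j$ est maximal. Esquisse en Python (avec \texttt{numpy}), sous la normalisation $\langle X , Y \rangle = 2n \, \mathrm{tr}(XY)$ de la forme de Killing de $\mathfrak{sl}_n$ (le nom de la fonction est de notre fait)~:

```python
import numpy as np

def point_chambre(n, j):
    """Unique H = diag(a 1_j, b 1_{n-j}) avec tr H = 0,
    a - b > 0 (positivite de l'unique racine) et <H, H> = 1,
    pour la normalisation <X, Y> = 2n tr(XY)."""
    # tr H = 0 impose b = -j a / (n - j) ; la condition de norme fixe a > 0.
    a = np.sqrt((n - j) / (2.0 * j * n**2))
    b = -j * a / (n - j)
    return np.diag([a] * j + [b] * (n - j))
```

Conformément à la remarque, $\overline{\mathfrak{a}_{P_j}^+}(\infty)$ se réduit à ce seul point.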
\medskip Si $Q_1$ et $Q_2$ sont deux sous-groupes paraboliques propres, alors $\overline{\mathfrak{a}_{Q_1}^+}(\infty)$ est une face de $\overline{\mathfrak{a}_{Q_2}^+}(\infty)$ si et seulement si $Q_2 \subseteq Q_1$. L'immeuble de Tits peut donc être réalisé géométriquement comme \begin{equation} \label{eq:Tits_building_geometric_realization} \mathbf{T}_n \cong \coprod_{Q} \overline{\mathfrak{a}_Q^+}(\infty)/\sim, \end{equation} où l'union porte sur tous les sous-groupes paraboliques propres $Q$ dans $\mathrm{SL}_n (\mathbf{C})$ et $\sim$ désigne la relation d'équivalence induite par l'identification de $\overline{\mathfrak{a}_{Q_1}^+}(\infty)$ à une face de $\overline{\mathfrak{a}_{Q_2}^+}(\infty)$ dès que $Q_2 \subseteq Q_1$. Au niveau ensembliste on peut décomposer $$ \mathbf{T}_n = \coprod_Q \mathfrak{a}_Q^+(\infty) $$ en l'union disjointe des simplexes ouverts $\mathfrak{a}_Q^+(\infty)$. La compactification de Tits $\overline{X}^T$ de $X$ a pour bord l'immeuble de Tits $\mathbf{T}_n$~: au niveau ensembliste on a $$ \overline{X}^T = X \cup \coprod_Q \mathfrak{a}_Q^+(\infty). $$ On renvoie à \cite[\S I.2]{BorelJi} pour une description détaillée de la topologie sur $\overline{X}^T$ et son identification avec la compactification géodésique de $X$; on se contente ici d'énoncer les trois propriétés suivantes qui caractérisent cette topologie~: \begin{enumerate} \item La topologie induite sur le bord $\mathbf{T}_n$ est la topologie quotient donnée par \eqref{eq:Tits_building_geometric_realization}. \item Soit $x \in X$. Une suite $x_j \in \overline{X}^T$, $j \geq 1$, converge vers $x$ si et seulement si $x_j \in X$ pour $j \gg 1$ et $x_j$ converge vers $x$ dans $X$ muni de sa topologie usuelle. \item Soit $H_\infty \in \mathfrak{a}_Q^+(\infty)$ et soit $(x_j)_{j \geq 1}$ une suite dans $X$.
La décomposition de Langlands de $Q$ permet d'écrire $x_j=u_j \exp(H_j)m_j$ avec $u_j \in N_Q$ et $H_j \in \mathfrak{a}_Q$ uniquement déterminés et $m_j \in M_Q$ uniquement déterminé modulo $\mathrm{SU}_n \cap M_Q$. Alors $x_j \to H_\infty$ si et seulement si la suite $(x_j)$ est non bornée et \begin{enumerate} \item[(i)] $H_j/||H_j|| \to H_\infty$ dans $\mathfrak{a}_Q$, \item[(ii)] $d(u_j m_j x_0,x_0)/||H_j|| \to 0$, \end{enumerate} où $d$ désigne la distance riemannienne sur l'espace symétrique $X$. \end{enumerate} Muni de cette topologie, l'espace $\overline{X}^T$ est séparé et l'action de $\mathrm{SL}_n (\mathbf{C})$ sur $X$ s'étend naturellement en une action continue sur $\overline{X}^T$. \begin{definition} \'Etant donné deux points $x \in X$ et $x' \in \overline{X}^T$, on note $[x,x']$ l'unique segment géodésique orienté joignant $x$ à $x'$. \end{definition} Si $x' \in X$, on définit plus explicitement $[x,x']$ comme étant égal à l'image de l'application \begin{equation} \label{E:segment} s(x,x') : [0,1] \to \overline{X}^T; \quad t \mapsto s(t;x,x'), \end{equation} l'unique segment géodésique orienté, paramétré à vitesse constante par l'intervalle unité, reliant $x$ à $x'$ dans $X$ avec $s(0;x,x') = x$ et $s(1; x,x' )= x'$. Si $x'$ appartient au bord de $\overline{X}^T$, il existe un unique sous-groupe parabolique $Q$ tel que $x'$ corresponde à $H_\infty \in \mathfrak{a}_Q^+ (\infty)$. Dans les coordonnées horocycliques associées à la décomposition de Langlands de $Q$, on a $x= u \exp (H)m$. On définit alors $[x,x']$ comme étant égal à l'image de l'application \begin{equation*} s(x,x') : [0,1] \to \overline{X}^T; \quad t \mapsto s(t;x,x') = \left\{ \begin{array}{ll} u \exp \left( H + \frac{t}{1-t} H_\infty \right) m, & \mbox{si } t<1 ,\\ x' , & \mbox{si } t=1. \end{array} \right. \end{equation*} \section{Ensembles de Siegel généralisés} Soit $J = \{ j_1 < \ldots < j_r \}$ une suite strictement croissante d'entiers dans $\{1 , \ldots , n-1 \}$.
On associe à $J$ le drapeau $$W_J : 0 \subsetneq W_{j_1} \subsetneq \cdots \subsetneq W_{j_r} \subsetneq \mathbf{C}^n,$$ où $W_{j_k} = \langle e_1 , \ldots , e_{j_k} \rangle$, et on note $Q_J$ le sous-groupe parabolique qui stabilise $W_J$. On peut décrire explicitement la décomposition de Langlands de $Q_J$~: en posant $j_0 = 0$ et $j_{r+1} = n$, soient \begin{equation*} \begin{split} N &= N_{J} = \left\{ \begin{pmatrix} 1_{j_1} & * & \cdots & * \\ 0 & 1_{j_2 - j_1} & \cdots & * \\ 0 & 0 & \ddots & * \\ 0 & 0 & 0 & 1_{n - j_r} \end{pmatrix}\right\} \\ M &= M_{J} = \left\{ \left. \begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & A_{r+1} \end{pmatrix} \ \right| \ A_k \in \mathrm{GL}_{j_k - j_{k-1}}(\mathbf{C}), \ |\det(A_k)|=1 \right\} \\ A &= A_{J} = \left\{ a(t_1,\ldots,t_{r+1}) \; : \; t_k>0, \ \det a(t_1,\ldots,t_{r+1}) = 1 \right\}, \end{split} \end{equation*} où $$a(t_1,\ldots,t_{r+1}) =\begin{pmatrix} t_1 1_{j_1} & 0 & \cdots & 0 \\ 0 & t_2 1_{j_2 - j_1} & \cdots & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & t_{r+1} 1_{n - j_r} \end{pmatrix}.$$ On a $Q_J = N A M$. Un élément $g \in \mathrm{SL}_n (\mathbf{C})$ peut être décomposé en un produit \[ g = u ma k, \quad u \in N, \ m \in M, \ a \in A, \ k \in \mathrm{SU}_n, \] où $u$ et $a$ sont uniquement déterminés par $g$, et $m$ et $k$ sont déterminés à un élément de $M \cap \mathrm{SU}_n$ près. \'Etant donné un nombre réel strictement positif $t$, on pose \[ A_t = \{a(t_1,\ldots,t_{r+1}) \in A \; : \; t_k/t_{k+1} \geq t \text{ pour tout } k \}. \] L'ensemble de Siegel généralisé déterminé par $t>0$ et un sous-ensemble relativement compact $\omega \subset NM$ est $$\mathfrak{S}_J (t , \omega) = \omega A_t \cdot \mathrm{SU}_n;$$ on le verra aussi bien comme un sous-ensemble de $\mathrm{SL}_n (\mathbf{C})$ que de $X$. Soit $$\overline{\mathfrak{S}_{J} (t , \omega)}^T \subset \overline{X}^T$$ l'adhérence de $\mathfrak{S}_{J} (t , \omega)$.
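Voici, en Python (avec \texttt{numpy}~; les noms sont de notre fait), une esquisse de construction d'un élément de $A_t$~; elle illustre au passage la minoration $1/t_{r+1}^2 \geq t^\beta$, avec $\beta = \frac{2}{n}(j_1 + \cdots + j_r)$, utilisée dans la preuve du lemme \ref{L8}~:

```python
import numpy as np

def a_bloc(ts, J, n):
    """Matrice a(t_1, ..., t_{r+1}) pour le drapeau standard W_J,
    avec des blocs de tailles j_1, j_2 - j_1, ..., n - j_r."""
    tailles = np.diff([0, *J, n])
    return np.diag(np.repeat(ts, tailles))

def element_de_A_t(rapports, J, n):
    """(t_1, ..., t_{r+1}) dont les rapports t_k / t_{k+1} sont
    prescrits, normalises pour que det a(t_1, ..., t_{r+1}) = 1."""
    r = len(J)
    ts = np.array([np.prod(rapports[k:]) for k in range(r)] + [1.0])
    tailles = np.diff([0, *J, n])
    ts = ts / np.prod(ts ** tailles) ** (1.0 / n)  # det(a) = 1
    return ts
```

La normalisation multiplie tous les $t_k$ par une même constante et ne change donc pas les rapports $t_k/t_{k+1}$.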
Les deux affirmations suivantes découlent du fait que la compactification de Tits coïncide avec la compactification géodésique, cf. \cite[\S I.2 et Proposition I.12.6]{BorelJi}. \begin{quote} {\it Affirmation 1 :} Soit $H \in \mathfrak{a}_{Q_J}^+(\infty)$ et soit $\omega \subset N_J M_J$ un sous-ensemble ouvert relativement compact. Alors pour tout réel $t>0$, l'ensemble des $x \in X$ tels qu'il existe $y \in [x, H[$ tel que $$[ y , H ] \subset \overline{\mathfrak{S}_{J} (t , \omega)}^T$$ est ouvert. \end{quote} \medskip \begin{quote} {\it Affirmation 2 :} Soit $x$ un point dans $X$ et soit $H \in \mathfrak{a}_{Q_J}^+(\infty)$. Il existe un sous-ensemble ouvert relativement compact $\omega \subset N_J M_J$ tel que pour tout $t>0$, il existe $y \in [x, H [$ tel que $$[ y , H ] \subset \overline{\mathfrak{S}_{J} (t , \omega)}^T.$$ \end{quote} \medskip Il découle en particulier de ces deux affirmations que si $\kappa \subset X$ est un sous-ensemble compact et si $t$ est un réel strictement positif, le cône $$C (\kappa , [W_J]) = \bigcup_{\substack{x \in \kappa \\ x' \in [W_J]}} [x,x'],$$ où $[W_J] \subset \mathbf{T}$ désigne le simplexe de l'immeuble de Tits associé au drapeau $W_J$, est asymptotiquement contenu dans une réunion finie d'ensembles de Siegel généralisés, autrement dit il existe un compact $\Omega \subset X$ et des ensembles relativement compacts $\omega_{J '} \subset N_{J'} M_{J'}$, pour tout sous-ensemble $J' \subset J$, tels que $$C(\kappa , [W_J]) \subset \Omega \cup \bigcup_{J ' \subset J} \overline{\mathfrak{S}_{J'} (t , \omega_{J'} )}^T.$$ Le dessin ci-dessous représente schématiquement la décomposition en ensembles de Siegel lorsque $\# J=2$ (de sorte que $[W_J]$ est un simplexe de dimension $1$). \begin{center} \includegraphics[width=0.3\textwidth]{SiegelSet.png} \end{center} On pose plus généralement la définition suivante. \begin{definition} Soit $W_\bullet$ un drapeau propre de $\mathbf{C}^n$.
On appelle \emph{ensemble de Siegel généralisé} associé au cusp $W_\bullet$ tout ensemble de la forme $$\mathfrak{S}_{W_\bullet} (g,t , \omega) = g \omega A_t \cdot \mathrm{SU}_n $$ dans $\mathrm{SL}_n (\mathbf{C})$ (ou dans $X$), où $g \in \mathrm{SL}_n (\mathbf{C})$ est tel que $g^{-1} W_\bullet$ est un drapeau standard, c'est-à-dire de la forme $W_J$ pour un certain $J$. \end{definition} Après translation par $g$, l'observation ci-dessus implique~: \begin{proposition} \label{P:reduction} Soit $W_\bullet$ un drapeau propre de $\mathbf{C}^n$ de simplexe associé $[W_\bullet] \subset \mathbf{T}$, soit $t$ un réel strictement positif et soit $\kappa \subset X$ un sous-ensemble compact. Il existe un sous-ensemble relativement compact $\Omega \subset X$, un élément $g \in \mathrm{SL}_n (\mathbf{C})$ tel que $g^{-1} W_\bullet = W_J$ pour un certain $J$, et des ensembles relativement compacts $\omega_{J '} \subset N_{J'} M_{J'}$, pour tout sous-ensemble $J' \subset J$, tels que $$C(\kappa , [W_\bullet]) \subset \Omega \cup \bigcup_{W_\bullet ' \subset W_\bullet} \overline{\mathfrak{S}_{W_\bullet '} (g, t , \omega_{J'} )}^T,$$ où $W_\bullet '$ parcourt les sous-drapeaux de $W_\bullet$ et où $J' \subset J$ désigne le sous-ensemble correspondant à $W_\bullet '$ {\it via} $g$. \end{proposition} \begin{proof} On prend pour $g$ un élément tel que $g^{-1} W_\bullet = W_J$. Alors $$C(\kappa , [W_\bullet]) = g C(g^{-1} \kappa , [W_J])$$ et on est ramené au cas du drapeau standard $W_J$ détaillé ci-dessus. \end{proof} \section{Comportement à l'infini de $\eta$} \'Etant donné un sous-espace $W \subset V=\mathbf{C}^n$ on note $W^\perp$ l'orthogonal de $W$ relativement à la métrique hermitienne standard $| \cdot |$ sur $\mathbf{C}^n$ et $p_{W} : V \to W$ la projection orthogonale. \begin{lem} \label{L8} Soit $Q_J \subset \mathrm{SL}_n (\mathbf{C})$ le sous-groupe parabolique propre associé au drapeau $W_J$. Soit $\omega \subset N_J M_J$ un sous-ensemble relativement compact.
Il existe alors des constantes strictement positives $C$, $\alpha$ et $\beta$ telles que pour tout réel $t >0$, on ait \begin{equation*} \| v^*\varphi \|_\infty \leq C e^{- \alpha t^\beta | p_{W_{j_r}^\perp }(v) |^2} t^{\beta n/2} \max (1 , |v|^n ) \end{equation*} et \begin{equation*} \| v^* \psi \|_\infty \leq C e^{- \alpha t^\beta | p_{W_{j_r}^\perp }(v) |^2} t^{\beta n/2} \max ( |v| , |v|^n) \quad (v \in V) \end{equation*} en restriction à $\mathfrak{S}_J (t , \omega) \subset X (\subset S)$. \end{lem} \begin{proof} Le carré de la norme hermitienne sur $V$ associée à un élément $g$ dans $\omega A_t \cdot \mathrm{SU}_n$ est bi-Lipschitz à $$v \mapsto \frac{1}{t_1^2} | p_{W_{j_1}} (v) |^2 + \ldots + \frac{1}{t_r^2} | p_{W_{j_{r-1}}^\perp \cap W_{j_r}} (v) |^2 + \frac{1}{t_{r+1}^2} | p_{W_{j_r}^\perp} (v) |^2 \geq \frac{1}{t_{r+1}^2} | p_{W_{j_r}^\perp} (v) |^2.$$ Mais puisque chaque $t_j$ est supérieur à $t^{r+1-j} t_{r+1}$ et que $t_1^{j_1} t_2^{j_2 - j_1} \cdots t_{r+1}^{n - j_r} = \det a(t_1 , \ldots , t_{r+1}) = 1$, on a $1 \geq t^{j_1 + \ldots + j_r} t_{r+1}^n$ et donc $$\frac{1}{t_{r+1}^2} \geq t^\beta \quad \mbox{avec} \quad \beta = \frac{2}{n} (j_1 + \ldots + j_r) >0.$$ Le lemme se déduit alors des expressions explicites de $\varphi$ et $\psi$ déduites de \eqref{E:UMQ}.\footnote{Noter que $\varphi(v)$ tend vers une constante quand $v$ tend vers $0$ alors que $\psi (v)$ tend vers $0$ linéairement.} \end{proof} On déduit de ce lemme la proposition suivante. \begin{proposition} \label{P32} Soit $Q_J \subset \mathrm{SL}_n (\mathbf{C})$ le sous-groupe parabolique propre associé au drapeau $W_J$. Soit $\omega \subset N_J M_J$ un sous-ensemble relativement compact, soit $t$ un réel strictement positif et soit $\kappa \subset V - W_{j_r}$ un sous-ensemble compact. La restriction de $\eta$ à $\mathfrak{S}_J (t,\omega) \times \kappa$ s'étend en une forme fermée, nulle à l'infini, à l'adhérence $$\overline{\mathfrak{S}_J (t,\omega)}^S \times \kappa, \quad \mbox{resp.
} \overline{\mathfrak{S}_J (t,\omega)}^T \times \kappa,$$ dans $\overline{X}^S \times \kappa$, resp. $\overline{X}^T \times \kappa$. \end{proposition} \begin{proof} Pour tout $s >0$ on a $$v^* ([s]^* \psi ) = (sv)^* \psi .$$ Il découle donc du lemme précédent qu'il existe des constantes strictement positives $C$, $\alpha$ et $\beta$ telles que, en restriction à $\mathfrak{S}_J (t ,\omega) \times \kappa$, la forme $[s]^*\psi$ soit de norme \begin{equation} \label{E:psis} \| [s]^* \psi \|_\infty \leq s C e^{- s^2 \alpha t^\beta } . \end{equation} L'intégrale $$\int_0^{+\infty} [s]^* \psi \frac{ds}{s}$$ est donc uniformément convergente sur $\mathfrak{S}_J (t ,\omega) \times \kappa$. Il découle enfin de la proposition \ref{P:reduction} et de \eqref{E:psis} que la forme $\eta$ tend uniformément vers $0$ lorsque l'on s'approche du bord de Tits (ou de Satake) dans $\mathfrak{S}_J (t ,\omega) \times \kappa$. \end{proof} \section{Symboles modulaires} Soit $k$ un entier naturel. On note $\Delta_k '$ la première subdivision barycentrique du $k$-simplexe standard. On identifie chaque sommet $v$ de $\Delta_k '$ à un sous-ensemble non vide de $\{0,\ldots,k\}$ de sorte qu'un ensemble de sommets $\{v_0,\ldots,v_r\}$ forme un $r$-simplexe de $\Delta_k '$ si et seulement si $$v_0 \subseteq \cdots \subseteq v_r.$$ On notera $\Delta_{v_0,\ldots,v_r}$ ce simplexe. \`A tout $(k+1)$-uplet $\mathbf{q} = (q_0 , \ldots , q_{k})$ de vecteurs non nuls dans $V$ avec $k \leq n-1$, on associe maintenant une application continue \begin{equation} \label{E:appDelta} \Delta(\mathbf{q}) : \Delta_{k} ' \to \overline{X}^T. \end{equation} Supposons dans un premier temps que $\langle q_0 , \ldots, q_{k} \rangle$ soit un sous-espace propre de $V = \mathbf{C}^n$. 
Pour toute chaîne $v_0 \subseteq \cdots \subseteq v_r$ définissant un $r$-simplexe de $\Delta_{k}'$, le drapeau associé \begin{equation} \label{E:flagq} 0 \subsetneq \langle q_i \; | \; i \in v_0 \rangle \subseteq \langle q_i \; | \; i \in v_1 \rangle \subseteq \cdots \subseteq \langle q_i \; | \; i \in v_r \rangle \subsetneq \mathbf{C}^n \end{equation} est propre et définit un $r$-simplexe (possiblement dégénéré) de l'immeuble de Tits $\mathbf{T}$. On définit alors $\Delta (\mathbf{q})$ comme étant l'application simpliciale $\Delta_{k}' \to \mathbf{T}$ qui envoie chaque $r$-simplexe $\Delta_{v_0,\ldots,v_r}$ sur le $r$-simplexe associé à \eqref{E:flagq} dans $\mathbf{T}$ (on laisse au lecteur le soin de vérifier que cette application est bien simpliciale, autrement dit qu'elle est compatible aux applications de faces et de dégénérescence). Supposons maintenant que $k=n-1$ et que les vecteurs $q_0 , \ldots , q_{n-1}$ soient linéairement indépendants. Soit \begin{equation} \label{E:g} g = (q_0 |\cdots|q_{n-1}) \in \mathrm{GL}_n ( \mathbf{C}) \end{equation} la matrice dont les vecteurs colonnes sont précisément les vecteurs $q_0 , \ldots , q_{n-1}$. On définit alors une sous-variété $\Delta^\circ (\mathbf{q})$ dans $X$ de la manière suivante. 
Soit $B$ le sous-groupe parabolique minimal de $\mathrm{SL}_n (\mathbf{C})$ associé au drapeau maximal $$0 \subsetneq \langle e_1 \rangle \subsetneq \langle e_1 , e_2 \rangle \subsetneq \cdots \subsetneq \langle e_1 , \ldots , e_{n-1} \rangle \subsetneq \mathbf{C}^n.$$ En notant simplement $A$ le groupe $$A_B=\{ \mathrm{diag}(t_1,\ldots,t_n) \in \mathrm{SL}_n (\mathbf{C}) \; : \; t_j \in \mathbf{R}_{>0}, \ t_1 \cdots t_n = 1 \} \cong \mathbf{R}_{>0}^{n-1},$$ on pose \begin{equation} \Delta^\circ (\mathbf{q}) := g A K \mathbf{R}_{>0} \subset G/K \mathbf{R}_{>0}=X, \end{equation} muni de l'orientation induite par les coordonnées $\mathrm{diag}(t_1,\ldots,t_n) \mapsto t_i$ identifiant $\Delta^\circ(\mathbf{q})$ à $\mathbf{R}_{> 0}^{n-1}$ (ce dernier étant muni de l'orientation standard). Son adhérence dans $\overline{X}^T$ est naturellement identifiée à la première subdivision barycentrique d'un $(n-1)$-simplexe dont le bord est la réunion dans $\mathbf{T}$ des translatés par $g$ de tous les $\overline{\mathfrak{a}_Q^+} (\infty )$ où $Q$ est un sous-groupe parabolique propre de $\mathrm{SL}_n (\mathbf{C})$ contenant $A$. On construit ainsi une application \eqref{E:appDelta} pour $k=n-1$ dont la restriction au bord $\partial \Delta_{n-1} '$ coïncide avec les applications construites précédemment. \medskip Concluons ce paragraphe en expliquant comment recouvrir l'image de $\Delta (\mathbf{q})$ par des ensembles de Siegel généralisés~: à chaque sommet $v$ de $\Delta_{n-1} '$, autrement dit un sous-ensemble propre non vide de $\{0,\ldots , n-1\}$, il correspond un sous-espace \begin{equation} W(\mathbf{q})_v = \langle q_k \; | \; k \in v \rangle \end{equation} dans $\mathbf{C}^n$. 
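Les drapeaux \eqref{E:flagq} associés aux sommets de $\Delta_{n-1}'$ se prêtent à une vérification combinatoire élémentaire~: les simplexes maximaux de $\Delta_k'$ correspondent aux chaînes maximales de sous-ensembles non vides de $\{0,\ldots,k\}$, au nombre de $(k+1)!$. Petite esquisse en Python (le nom de la fonction est de notre fait)~:

```python
from itertools import permutations
from math import factorial

def simplexes_maximaux(k):
    """Chaines maximales v_0 < v_1 < ... < v_k (inclusions strictes)
    de sous-ensembles non vides de {0, ..., k} : chaque permutation
    (i_0, ..., i_k) donne la chaine {i_0}, {i_0, i_1}, ..., {0,...,k}."""
    return {tuple(frozenset(p[:i + 1]) for i in range(k + 1))
            for p in permutations(range(k + 1))}
```

Pour $k = n-1$, chaque chaîne fournit, une fois choisis les vecteurs $q_0,\ldots,q_{n-1}$, un drapeau complet de sous-espaces $W(\mathbf{q})_v$.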
Il découle de la proposition \ref{P:reduction} que l'on peut recouvrir l'image de $\Delta (\mathbf{q})$ par des ensembles de Siegel généralisés associés aux cusps $W_\bullet$ formés de sous-espaces $W(\mathbf{q})_v$~: \begin{proposition} \label{P33} Soit $\mathbf{q} = (q_0 , \ldots , q_{n-1})$ un $n$-uplet de vecteurs non nuls dans $V$ et soit $t$ un réel strictement positif. Il existe alors un sous-ensemble relativement compact $\Omega \subset X$ et un nombre fini d'ensembles de Siegel généralisés $\mathfrak{S}_{W_\bullet} (g , t , \omega)$, où chaque drapeau $W_\bullet$ est formé de sous-espaces $W(\mathbf{q})_v$ et chaque $g$ est une matrice dont les vecteurs colonnes sont des $q_j$, tels que l'image de l'application $\Delta (\mathbf{q})$ dans $\overline{X}^T$ soit contenue dans la réunion finie $$\Omega \cup \bigcup_{W_\bullet, g , \omega} \overline{\mathfrak{S}_{W_\bullet} (g , t , \omega)}^T.$$ \end{proposition} \medskip \noindent {\it Remarque.} L'adhérence de $\Delta^\circ (\mathbf{q})$ dans $\overline{X}^S$ est égale à l'enveloppe convexe conique $$\mathrm{P} \left\{ \sum_{j=0}^{n-1} t_j m_j \in \mathcal{H} \; : \; \forall j \in \{0, \ldots , n-1 \}, \ t_j >0 \right\} \subset \overline{X}^S$$ des formes hermitiennes semi-définies positives $m_j = q_j^* (\overline{q_j^*} )^\top$ où\footnote{On prendra garde au fait que Ash et Rudolph \cite[p. 5]{AshRudolph} commettent une légère erreur en identifiant $\Delta^\circ (\mathbf{q})$ avec l'enveloppe convexe conique des formes hermitiennes semi-définies positives $m_j = q_j \overline{q_j}^\top$.} $$g^{-*} = (q_0^* | \cdots | q_{n-1}^* ), \quad \mbox{de sorte que } (\overline{q_i^*} )^\top q_j = \delta_{ij} \quad \mbox{et} \quad q_j^* = g^{-*} e_{j+1} .$$ \medskip \begin{proof} Pour tout $j \in \{0, \ldots , n-1 \}$, on a $ge_{j+1} = q_{j}$.
On peut donc se ramener au cas où $\mathbf{q} = (e_1 , \ldots , e_n)$ et $$\Delta^\circ (\mathbf{q}) = A K \mathbf{R}_{>0} \subset G/K \mathbf{R}_{>0}=X$$ ce qui prouve immédiatement la remarque. Le cas général s'en déduit en translatant par $g$. Noter qu'alors la forme $m_j$ est égale à $q_j^* (\overline{q_j^*} )^\top$ qui a bien pour noyau $$W(\mathbf{q})^{(j)} = W(\mathbf{q})_{\{0 , \ldots , \widehat{j} , \ldots , n-1 \}}. $$ \end{proof} \section{\'Evaluation de $\eta$ sur les symboles modulaires} Soit $\mathbf{q} = (q_0 , \ldots , q_{k})$ un $(k+1)$-uplet de vecteurs non nuls dans $V$ avec $k \leq n-1$. Les propositions \ref{P32} et \ref{P33} impliquent que la forme différentielle fermée \begin{equation} \eta (\mathbf{q} ) = (\Delta (\mathbf{q}) \times \mathrm{id} )^* \eta \in A^{2n-1} \left( \Delta_{k}' \times \left( V - \bigcup_{|v| <n} W(\mathbf{q})_v \right) \right), \end{equation} où $\Delta_{k}'$ est identifié au simplexe $$\{ (t_0 , \ldots , t_{k} ) \in \mathbf{R}_+^{k+1} \; : \; t_0 + \ldots + t_{k} =1 \}$$ {\it via} les coordonnées barycentriques, est bien définie. Il découle en outre des propositions \ref{P32} et \ref{P33} que l'intégrale partielle $$\int_{\Delta_{k}'} \eta (\mathbf{q})$$ converge et définit une forme de degré $2n-k-1$ sur $V - \bigcup_{|v|<n} W(\mathbf{q})_v$. \begin{proposition} \label{P34} 1. Si $\langle q_0 , \ldots , q_{k} \rangle$ est un sous-espace propre de $V$, alors la forme $\int_{\Delta_{k}'} \eta (\mathbf{q})$ est identiquement nulle. 2. Supposons $k=n-1$ et que les vecteurs $q_0 , \ldots , q_{n-1}$ soient linéairement indépendants.
Alors la $n$-forme $\int_{\Delta_{n-1}'} \eta (\mathbf{q})$ est égale à $$\frac{1}{(4i\pi)^n} \left( \frac{d \ell_0}{\ell_0} - \overline{\frac{d\ell_0}{\ell_0}} \right) \wedge \ldots \wedge \left( \frac{d \ell_{n-1}}{\ell_{n-1}} - \overline{\frac{d\ell_{n-1}}{\ell_{n-1}}} \right) \in A^n \left( V - \bigcup_{j} W(\mathbf{q})^{(j)} \right),$$ où $\ell_j$ est la forme linéaire sur $\mathbf{C}^n$, de noyau $W(\mathbf{q})^{(j)}$, qui à $z$ associe $z^\top \overline{q_j^*}$. \end{proposition} \begin{proof} 1. Dans ce cas l'image de $\Delta (\mathbf{q})$ est contenue dans le bord de $\overline{X}^T$ et il résulte de la proposition \ref{P32} que $\int_{\Delta_{k}'} \eta (\mathbf{q})$ est nulle sur tout ouvert relativement compact de $V - \bigcup_{|v| <n} W(\mathbf{q})_v$. 2. Supposons donc $k=n-1$ et que les vecteurs $q_0 , \ldots , q_{n-1}$ soient linéairement indépendants. En notant toujours $g$ l'élément \eqref{E:g}, la $G$-invariance de $\eta$ implique que $$g^* \left( \int_{\Delta_{n-1}'} \eta (\mathbf{q}) \right) = \int_{\Delta_{n-1}'} \eta (e_1 , \ldots , e_n ).$$ Comme par ailleurs $g^*\ell_j$ est la forme linéaire $e_{j+1}^*$ de noyau $$g^{-1} W(\mathbf{q})^{(j)} = \langle e_1 , \ldots , \widehat{e_{j+1}} , \ldots , e_n \rangle,$$ on est réduit à vérifier la proposition dans le cas où $\mathbf{q} = (e_1 , \ldots , e_n )$. Il nous reste donc à calculer l'intégrale $$\int_{AK\mathbf{R}_{>0}} \eta, \quad \mbox{où } A=\{ \mathrm{diag}(t_1,\ldots,t_n) \in \mathrm{SL}_n (\mathbf{C}) \; : \; t_j \in \mathbf{R}_{>0}, \ t_1 \cdots t_n = 1 \}.$$ D'après la remarque à la fin du paragraphe \ref{S:42} on a \begin{equation} \label{E:intsymbMod} \int_{AK\mathbf{R}_{>0}} \eta = \int_{\{ \mathrm{diag}(t_1,\ldots,t_n ) \; : \; t_j \in \mathbf{R}_{>0} \} K} \varphi.
\end{equation} Or, en restriction à l'ensemble des matrices symétriques diagonales réelles, le fibré en $\mathbf{C}^n$ se scinde {\it métriquement} en une somme directe de $n$ fibrés en droites, correspondant aux coordonnées $(z_j )_{j=1 , \ldots , n}$ de $z$ et la forme $\varphi$ se décompose en le produit de $2$-formes associées à ces fibrés en droites~: $$ \varphi^{(j)} = \frac{i}{2\pi} e^{- t_j^2 |z_j |^2 } \left( t_j^2 dz_j \wedge d \overline{z}_j - t_j^2 ( z_j d\overline{z}_j - \overline{z}_j dz_j ) \wedge \frac{dt_j}{t_j} \right), $$ d'après (\ref{E:phiN1}). Finalement, on obtient que l'intégrale \eqref{E:intsymbMod} est égale à \begin{multline*} \frac{(-i)^n}{(2\pi)^n} \left( \prod_{j=1}^n \int_{\mathbf{R}_{>0}} t_j^2 |z_j|^2 e^{-t_j ^2 |z_j |^2} \frac{dt_j }{t_j} \right) \wedge_{j=1}^n \left( \frac{dz_j}{z_j} - \frac{d\overline{z}_j}{\overline{z}_j} \right) \\ = \frac{1}{(4i\pi)^n} \wedge_{j=1}^n \left( \frac{dz_j}{z_j} - \frac{d\overline{z}_j}{\overline{z}_j} \right) , \end{multline*} comme attendu. \end{proof} \chapter{Cocycles de $\mathrm{GL}_n (\mathbf{C})$ explicites} \label{S:6} \resettheoremcounters Dans ce chapitre on note à nouveau $G=\mathrm{GL}_n (\mathbf{C})^{\delta}$. Dans un premier temps on explique comment associer à la forme de Mathai--Quillen $\eta \in A^{2n-1} (X \times (\mathbf{C}^n - \{ 0 \} ))^G$ un représentant explicite du relevé canonique $\Phi \in H_G^{2n-1} (\mathbf{C}^n -\{ 0 \})$ fourni par la proposition \ref{P4}. On utilise ensuite ce représentant explicite pour démontrer le théorème \ref{T:Sa}. \section{Forme simpliciale associée à $\eta$} \label{S:61} Soit $x_0 \in X$ le point base associé à la classe de l'identité dans $\mathrm{SL}_n (\mathbf{C})$. L'application $\exp : T_{x_0} X \to X$ étant un difféomorphisme, il existe une rétraction \begin{equation} \label{E:R} R : [0, 1] \times X \to X \end{equation} de $X$ sur $\{x_0 \} $. 
Elle est donnée par la formule $$R_s (x ) = \exp (s \exp^{-1} (x) ) \quad (x \in X, \ s \in [0,1]).$$ Suivant \cite{Dupont} on déduit de $R$ une suite d'applications \begin{equation} \label{E:mapr} \rho_k : \Delta_k \times E_kG \times \mathbf{C}^n \longrightarrow X \times \mathbf{C}^n \end{equation} définies de la manière suivante~: pour $t = (t_0 , \ldots , t_k ) \in \Delta_k$ on pose $s_j = t_j + t_{j+1} + \ldots + t_k$ ($j=1, \ldots , k$). \'Etant donné un $(k+1)$-uplet $$\mathbf{g} = (g_0 , \ldots , g_k ) \in E_kG$$ et un vecteur $z \in \mathbf{C}^n$, on a alors \begin{multline} \label{E:mapr2} \rho_k ( t , \mathbf{g} , z ) = (g_0^{-1} \cdot R_{s_1} ( g_0 g_1^{-1} \cdot R_{s_2 / s_1} ( g_1 g_2^{-1} \cdot \\ \ldots g_{j-1} g_j^{-1} \cdot R_{s_{j+1} / s_j} ( g_j g_{j+1}^{-1} \cdot \cdots R_{s_k / s_{k-1} } ( g_{k-1} g_k^{-1} \cdot x_0) \ldots ))) , z). \end{multline} La suite $(\rho_k )$ est constituée d'applications $G$-équivariantes qui font commuter le diagramme\footnote{Ici $\epsilon^k : \Delta_{k-1} \to \Delta_k$ désigne l'application d'inclusion de la $k$-ième face.} \begin{equation} \label{diag:simpl} \xymatrix{ \Delta_{k-1} \times E_kG \times \mathbf{C}^n \ar[d]^{\ \mathrm{id} \times \partial_k \times \mathrm{id}} \ar[r]^{\epsilon^k \times \mathrm{id}} & \Delta_k \times E_kG \times \mathbf{C}^n \ar[d]^{\rho_{k}} \\ \Delta_{k-1} \times E_{k-1} G \times \mathbf{C}^n \ar[r]^{\quad \quad \rho_{k-1}} & X\times \mathbf{C}^n, } \end{equation} de sorte que $\rho$ induit une application $G$-équivariante $$\rho^* : A^\bullet (X \times \mathbf{C}^n ) \to \mathrm{A}^\bullet (EG \times \mathbf{C}^n),$$ où l'espace à droite est celui des formes différentielles simpliciales sur la variété simpliciale $EG \times \mathbf{C}^n$, cf. annexe \ref{A:A}. La proposition suivante précise la proposition \ref{P4}. 
\begin{proposition} La forme simpliciale $$\rho^* \eta \in \mathrm{A}^{2n-1} (EG \times (\mathbf{C}^n - \{ 0 \}))^G$$ est fermée et représente une classe dans $H_G^{2n-1} (\mathbf{C}^n - \{ 0 \} )$ qui relève la classe fondamentale dans $H^{2n-1} (\mathbf{C}^n - \{ 0 \} )$. \end{proposition} \begin{proof} La forme $\rho^* \eta$ est fermée et $G$-invariante puisque $\eta$ l'est (proposition \ref{P:eta}) et, comme $\eta$ représente la classe fondamentale de $\mathbf{C}^n - \{ 0 \}$, la classe de cohomologie équivariante $[\rho^* \eta ]$ s'envoie sur la classe fondamentale dans $H^{2n-1} (\mathbf{C}^n - \{ 0 \} )$. \end{proof} \section{Cocycle associé} \label{S:62} Considérons maintenant les ouverts \begin{equation} \label{hypCU'} \begin{split} U(g_0 , \ldots , g_k ) & = \left\{ z \in \mathbf{C}^n \; : \; \forall j \in \{ 0 , \ldots , k \}, \ e_1^* (g_j z ) \neq 0 \right\} \\ & = \mathbf{C}^n - \cup_j H_j, \end{split} \end{equation} où $H_j$ est le translaté par $g_j^{-1}$ de l'hyperplan $z_1=0$ dans $\mathbf{C}^n$. Ces sous-variétés sont toutes affines de dimension $n$ et n'ont donc pas de cohomologie en degré $>n$. La forme simpliciale $\rho^* \eta$ représente une classe de cohomologie équivariante dans $H^{2n-1}_G (\mathbf{C}^n - \{0 \})$. On peut lui appliquer la méthode décrite dans l'annexe \ref{A:A} et lui associer un $(n-1)$-cocycle de $G$ à valeurs dans $$\lim_{\substack{\rightarrow \\ H_j}} H^n (\mathbf{C}^n - \cup_j H_j ).$$ On calculera explicitement ce cocycle au paragraphe \ref{S:demTSa}. Avant cela, remarquons qu'on aurait pu également considérer les ouverts \begin{equation} \label{hypC} \begin{split} U^* (g_0 , \ldots , g_k) & = \left\{ z \in \mathbf{C}^n \; \left| \; \begin{array}{l} \forall J \subset \{ 0 , \ldots , k \}, \\ |J| \leq \min (k,n-1) \Rightarrow z \notin \langle q_j \; : \; j \in J \rangle \end{array} \right.
\right\} \\ & = \mathbf{C}^n - \bigcup_j W(\mathbf{q})^{(j)}, \end{split} \end{equation} où $\mathbf{q} = (q_0 , \ldots , q_k)=( g_0^{-1} e_1 , \ldots , g_{k}^{-1} e_1 )$ et $W(\mathbf{q})^{(j)} = \langle q_i \; : \; i \neq j \rangle$. En général la sous-variété \eqref{hypC} n'est pas affine. C'est toutefois le cas lorsque les vecteurs $g_0^{-1} e_1, \ldots , g_{k}^{-1} e_1$ engendrent $\mathbf{C}^n$. L'argument de l'annexe \ref{A:A} implique donc encore que la forme simpliciale $\rho^* \eta$ détermine un $(n-1)$-cocycle de $G$ à valeurs dans $$\lim_{\substack{\rightarrow \\ H_j}} H^n (\mathbf{C}^n - \cup_j H_j ).$$ On détaille particulièrement le calcul explicite de ce cocycle dans les paragraphes suivants; ce sont en effet ces calculs que nous généraliserons dans les cas multiplicatifs et elliptiques. \section{Section simpliciale et homotopie} \label{S:63} Pour tout entier $k \in [0 , n-1]$, on définit, par récurrence, une subdivision simpliciale $\left[ \Delta_k \times [0,1] \right]'$ de $\Delta_k \times [0,1]$ en joignant tous les simplexes dans $\Delta_k \times \{0 \} \cup \partial \Delta_k \times [0,1]$ au barycentre de $\Delta_k \times \{1\}$, comme sur la figure ci-dessous. \begin{center} \includegraphics[width=0.5\textwidth]{Figure1.png} \end{center} L'ensemble des $(k+1)$-simplexes de $\left[ \Delta_k \times [0,1] \right]'$ est constitué des joints \begin{equation} \label{E:cone1} \Delta_w \star \Delta_{v_0, \ldots , v_{k - |w|}} \quad (w \subset v_0) \end{equation} où $\Delta_w \subset \Delta_k = \Delta_k \times \{ 0 \}$ est le simplexe correspondant à un sous-ensemble non-vide $w \subset \{0, \ldots , k \}$, de cardinal $|w|$, et $\Delta_{v_0, \ldots , v_{k-|w|}}$ est le $(k-|w|)$-simplexe de la subdivision barycentrique $\Delta_k' = \Delta_k ' \times \{ 1 \}$ associé à une suite croissante $v_0 \subset \cdots \subset v_{k-|w|}$ de sous-ensembles de $\{0 , \ldots , k \}$ contenant tous $w$. 
On définit maintenant une suite d'applications \begin{equation} \varrho_k : \left[ \Delta _k \times [0,1] \right]' \times E_k G \times \mathbf{C}^n \to \overline{X}^T \times \mathbf{C}^n \quad (k \in \{ 0 , \ldots , n-1 \} ) \end{equation} de manière à ce que la restriction de $\varrho_k$ à $$\Delta_k \times E_k G \times \mathbf{C}^n = (\Delta _k \times \{0 \} ) \times E_k G \times \mathbf{C}^n$$ coïncide avec $\rho_k$ et qu'en restriction à $$\Delta_k ' \times E_k G \times \mathbf{C}^n \subset (\Delta _k ' \times \{1 \} ) \times E_k G \times \mathbf{C}^n$$ on ait $$\varrho_k ( - , \mathbf{g} , -) = \Delta (\mathbf{q} ) \times \mathrm{Id}_{\mathbf{C}^n },$$ où si $\mathbf{g}=(g_0 , \ldots , g_{k} ) \in E_{k} G$ on note toujours $\mathbf{q} = ( g_0^{-1} e_1 , \ldots , g_k^{-1} e_1 ).$ En procédant par récurrence sur $k$ on est ramené à définir $\varrho_k$ sur chaque simplexe \eqref{E:cone1}. Considérons donc un sous-ensemble non-vide $w \subset \{0, \ldots , k \}$ et une suite croissante $v_0 \subset \cdots \subset v_{k-|w|}$ de sous-ensembles de $\{0 , \ldots , k \}$ contenant tous $w$. L'application $\varrho_k$ étant définie sur $\Delta_w \subset \Delta _k \times \{0 \}$ et sur $\Delta_{v_0, \ldots , v_{k-|w|}} \subset \Delta _k ' \times \{1 \} $, l'expression $$\varrho_k ( t , \mathbf{g} , z ) = s \left( t ; \varrho_k ( 0 , \mathbf{g} , z ) , \varrho_k ( 1 , \mathbf{g} , z ) \right)$$ (le segment géodésique étant pris dans le facteur $\overline{X}^T$, la coordonnée $z \in \mathbf{C}^n$ restant fixe) définit une application $$|\Delta_w| \times |\Delta_{v_0, \ldots , v_{k-|w|}}| \times [0,1] \to \overline{X}^T \times \mathbf{C}^n$$ qui se factorise en une application $$| \Delta_w \star \Delta_{v_0, \ldots , v_{k - |w|}} | \to \overline{X}^T \times \mathbf{C}^n.$$ On définit ainsi $\varrho_k$ sur les simplexes \eqref{E:cone1}; on laisse au lecteur le soin de vérifier les relations de compatibilité.
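La construction de $\varrho_k$ repose uniquement sur les segments géodésiques $s(t;x,x')$. En identifiant $X$ aux matrices hermitiennes définies positives (modulo homothéties), ces segments admettent l'expression matricielle classique $s(t;x,x') = x^{1/2} (x^{-1/2} x' x^{-1/2})^t x^{1/2}$, dont voici une esquisse numérique en Python (avec \texttt{numpy}~; les noms sont de notre fait)~:

```python
import numpy as np

def puissance(x, t):
    """x^t pour une matrice hermitienne definie positive x."""
    w, u = np.linalg.eigh(x)
    return (u * w ** t) @ u.conj().T

def segment(x, xp, t):
    """Point s(t; x, x') du segment geodesique joignant x a x'
    dans l'espace des matrices hermitiennes definies positives."""
    r = puissance(x, 0.5)
    ri = puissance(x, -0.5)
    return r @ puissance(ri @ xp @ ri, t) @ r
```

On vérifie que $s(0;x,x')=x$, que $s(1;x,x')=x'$ et que le segment reste dans l'espace des matrices hermitiennes définies positives.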
La proposition \ref{P33} implique que, pour tout $\mathbf{g} \in E_k G$, on peut recouvrir l'image $$\varrho_k \left( \left[ \Delta _k \times [0,1] \right]' \times \{ \mathbf{g} \} \times \mathbf{C}^n \right) \subset \overline{X}^T \times \mathbf{C}^n$$ par un nombre fini de produits $\mathfrak{S} \times \mathbf{C}^n$ d'ensembles de Siegel par $\mathbf{C}^n$. Il découle donc de la proposition \ref{P32} que la forme différentielle fermée $$\varrho_k^* \eta (\mathbf{g}) \in \mathrm{A}^{2n-1} \left( \left[ \Delta _k \times [0,1] \right]' \times U^* (g_0, \ldots , g_k) \right)$$ est bien définie. \begin{definition} Pour tout entier $k \in [0, n-1]$ et pour tout $(k+1)$-uplet $(g_0 , \ldots , g_{k}) \in E_k G$, on pose $$H_k (g_0 , \ldots , g_k ) = \int_{\left[ \Delta _k \times [0,1] \right]'} \varrho_k^* \eta (g_0 , \ldots , g_k ) \in A^{2n-2-k} \left(\mathbf{C}^n - \bigcup_{j=0}^k W(\mathbf{q})^{(j)} \right),$$ où $\mathbf{q} = (q_0 , \ldots , q_k)$ avec $q_j = g_j^{-1} e_1$. \end{definition} \section{Calcul du cocycle} Il résulte de la définition de l'application bord $\delta$ donnée dans l'annexe \ref{A:A} que, pour tout entier $k \in [0, n-1]$, on a $$\delta H_{k-1} (g_0 , \ldots , g_{k}) = \sum_{j=0}^k H_{k-1} (g_0 , \ldots , \widehat{g}_j , \ldots , g_k ),$$ vue comme forme différentielle sur $\mathbf{C}^n - \bigcup_{j=0}^k W(\mathbf{q})^{(j)}$. \begin{theorem} \label{T37} Pour tout entier $k \in [0 , n-1]$ et pour tout $\mathbf{g} = (g_0 , \ldots , g_k)$ dans $E_k G$, l'intégrale $\int_{\Delta_k} \rho^* \eta (\mathbf{g})$ est égale à $$\delta H_{k-1} (g_0 , \ldots , g_{k}) \pm d H_k (g_0 , \ldots , g_k), \quad \mbox{si} \quad k < n-1,$$ et $$\int_{\Delta_{n-1}'} \eta (\mathbf{q}) + \delta H_{n-2} (g_0 , \ldots , g_{n-1}) \pm dH_{n-1} (g_0 , \ldots , g_{n-1}), \quad \mbox{si} \quad k=n-1,$$ dans $A^{2n-2-k} \left(\mathbf{C}^n - \bigcup_{j=0}^k W(\mathbf{q})^{(j)} \right),$ où $\mathbf{q} = (q_0 , \ldots , q_k)$ avec $q_j = g_j^{-1} e_1$. 
\end{theorem} \begin{proof} Puisque $\eta$ est fermée on a~: $$(d_{\left[ \Delta _k \times [0,1] \right]'} \pm d ) \varrho_k^* \eta = 0$$ et donc $$\int_{\left[ \Delta _k \times [0,1] \right]'} d_{\left[ \Delta _k \times [0,1] \right]'} \varrho_k^* \eta (g_0 , \ldots , g_k ) \pm d H_k (g_0 , \ldots , g_k ) =0.$$ Maintenant, d'après le théorème de Stokes on a \begin{multline*} \int_{\left[ \Delta _k \times [0,1] \right]'} d_{\left[ \Delta _k \times [0,1] \right]'} \varrho_k^* \eta (g_0 , \ldots , g_k ) = \int_{\Delta_k \times \{ 0 \}} \varrho_k^* \eta (g_0 , \ldots , g_k ) \\ + \int_{\left[(\partial \Delta_k) \times [0,1]\right]'} \varrho_k^* \eta (g_0 , \ldots , g_k ) - \int_{\Delta_k' \times \{ 1 \}} \varrho_k^* \eta (g_0 , \ldots , g_k ). \end{multline*} La dernière intégrale est égale à $\int_{\Delta_k '} \eta (\mathbf{q})$ et est donc nulle si $k < n-1$ d'après la proposition \ref{P34}. Finalement, par définition de $\varrho_k$ on a $$\int_{\Delta_k \times \{ 0 \}} \varrho_k^* \eta (g_0 , \ldots , g_k ) = \int_{\Delta_k} \rho_k^* \eta (g_0 , \ldots , g_k) $$ et $$\int_{\left[(\partial \Delta_k) \times [0,1]\right]'} \varrho_k^* \eta (g_0 , \ldots , g_k ) = - \delta H_{k-1} (g_0 , \ldots , g_k ).$$ \end{proof} Le corollaire suivant découle du théorème \ref{T37} et de la proposition \ref{P34}. \begin{cor} \label{C38} La forme différentielle simpliciale fermée $\rho^* \eta $ définit un $(n-1)$-cocycle de $G$ à valeurs dans $$\lim_{\substack{\rightarrow \\ H_j}} H^n (\mathbf{C}^n - \cup_j H_j )$$ qui est cohomologue au cocycle $$(g_0 , \ldots , g_{n-1} ) \mapsto \left[ \frac{1}{(4i\pi)^n} \left( \frac{d \ell_0}{\ell_0} - \overline{\frac{d\ell_0}{\ell_0}} \right) \wedge \ldots \wedge \left( \frac{d \ell_{n-1}}{\ell_{n-1}} - \overline{\frac{d\ell_{n-1}}{\ell_{n-1}}} \right) \right],$$ où $\ell_j$ est une forme linéaire sur $\mathbf{C}^n$, de noyau $W(\mathbf{q})^{(j)}$, qui à $z$ associe $z^\top q_j^*$. 
\end{cor} \section[Démonstration du théorème 2.2]{Démonstration du théorème \ref{T:Sa}} \label{S:demTSa} Les identités immédiates suivantes entre formes différentielles sur $\mathbf{C}^*$ $$\frac{1}{2i\pi} \frac{dz}{z} = \frac{d\theta}{2\pi} + d \left( \frac{1}{2i\pi} \log r \right) \quad \mbox{et} \quad \frac{1}{4i\pi} \left( \frac{dz}{z} - \overline{\frac{dz}{z}} \right) = \frac{d\theta}{2\pi},$$ avec $z = re^{i\theta}$, impliquent que si $\ell$ est une forme linéaire sur $\mathbf{C}^n$ les formes différentielles $$\frac{1}{2i\pi} \frac{d\ell}{\ell} \quad \mbox{et} \quad \frac{1}{4i\pi} \left( \frac{d \ell}{\ell} - \overline{\frac{d\ell}{\ell}} \right)$$ sur $\mathbf{C}^n- \mathrm{ker} (\ell )$ sont cohomologues. Le théorème de Brieskorn \cite{Brieskorn} évoqué au \S \ref{S21} implique donc que l'application $$\Omega_{\rm aff} \to \lim_{\substack{\rightarrow \\ H_j}} H^\bullet (\mathbf{C}^n - \cup_j H_j ); \quad \frac{1}{2i\pi} \frac{d\ell}{\ell} \mapsto \left[ \frac{1}{4i\pi} \left( \frac{d \ell}{\ell} - \overline{\frac{d\ell}{\ell}} \right) \right]$$ est un isomorphisme d'algèbre. Comme par ailleurs cet isomorphisme est $G$-équivariant, il résulte du corollaire \ref{C38} que la classe de $[\rho^* \eta]$ donne lieu à un $(n-1)$-cocycle de $G$ dans $\Omega^n_{\rm aff}$ cohomologue à $$\mathbf{S}_{\rm aff}^* : G^n \to \Omega^n_{\rm aff} ; \quad (g_0 , \ldots , g_{n-1}) \mapsto \frac{1}{(2i\pi )^n} \frac{d\ell_0}{\ell_0} \wedge \ldots \wedge \frac{d\ell_{n-1}}{\ell_{n-1}},$$ où $\ell_j$ est une forme linéaire de noyau $\langle g_0^{-1} e_1 , \ldots , \widehat{g_j^{-1} e_1} , \ldots , g_{n-1}^{-1} e_1 \rangle$. Il nous reste à montrer que ce cocycle est cohomologue à $\mathbf{S}_{\rm aff}$ et que sa classe de cohomologie est non nulle. Pour ce faire on applique à nouveau l'argument de l'annexe \ref{A:A} à la forme simpliciale fermée $\rho^*\eta$ mais en utilisant cette fois les ouverts \eqref{hypCU'}. 
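Les deux identités sur $\mathbf{C}^*$ rappelées au début de ce paragraphe se vérifient par un calcul direct, que l'on esquisse ici pour la commodité du lecteur.

```latex
% Avec z = r e^{i\theta}, la différentielle logarithmique se décompose
% en partie radiale et partie angulaire :
\frac{dz}{z} = d \log z = \frac{dr}{r} + i \, d\theta .
% D'où, d'une part,
\frac{1}{2i\pi} \frac{dz}{z}
  = \frac{d\theta}{2\pi} + d \left( \frac{1}{2i\pi} \log r \right) ,
% et, d'autre part, puisque \overline{dz/z} = dr/r - i \, d\theta,
\frac{1}{4i\pi} \left( \frac{dz}{z} - \overline{\frac{dz}{z}} \right)
  = \frac{1}{4i\pi} \cdot 2i \, d\theta
  = \frac{d\theta}{2\pi} .
```

Les deux formes sont donc bien cohomologues sur $\mathbf{C}^*$, leur différence étant exacte.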
Rappelons que dans ce cas \begin{equation*} U (g_0 , \ldots , g_k) = \mathbf{C}^n - \cup_j H_j \end{equation*} où $H_j$ est le translaté par $g_j^{-1}$ de l'hyperplan $z_1=0$ dans $\mathbf{C}^n$. On procède alors de la même manière que pour obtenir $\mathbf{S}_{\rm aff}^*$ mais en remplaçant la compactification de Tits par celle de Satake $\overline{X}^S$. La convexité de l'ouvert des formes hermitiennes non-nulles semi-définies positives dans $\mathcal{H}$ permet de rétracter $\overline{X}^S$ sur le point (à l'infini) associé à la forme hermitienne $|e_1^* (\cdot )|^2$ de noyau $\langle e_2 , \ldots , e_n \rangle$; notons $$\overline{R} : [0,1] \times \overline{X}^S \times \mathbf{C}^n \to \overline{X}^S \times \mathbf{C}^n$$ l'application correspondante. Comme pour $R$, il correspond à $\overline{R}$ une suite d'applications $G$-équivariantes $$\overline{\rho}_k : \Delta_k \times E_k G \times \mathbf{C}^n \to \overline{X}^S \times \mathbf{C}^n$$ qui font commuter le diagramme \eqref{diag:simpl} de sorte que $\overline{\rho} = (\overline{\rho}_k)$ induit une application $G$-équivariante $$(\overline{\rho})^* : A^\bullet (\overline{X}^S \times \mathbf{C}^n ) \to \mathrm{A}^\bullet (EG \times \mathbf{C}^n ).$$ Mieux, il découle encore de la proposition \ref{P32} que les formes différentielles fermées $$(\overline{\rho}_{n-1})^* \eta (\mathbf{g}) \in A^{2n-1} (\Delta_{n-1} \times U (g_0 , \ldots , g_{n-1}) )$$ sont bien définies, se recollent en une forme simpliciale fermée dans $$\varinjlim A^{n}( \mathbf{C}^n - \cup_j H_j )$$ et définissent un $(n-1)$-cocycle $$(g_0 , \ldots , g_{n-1} ) \mapsto \int_{\Delta_{n-1}} (\overline{\rho}_{n-1})^* \eta (g_0 , \ldots , g_{n-1} ).$$ Ce dernier étant obtenu en appliquant l'argument de l'annexe \ref{A:A}, il est cohomologue au cocycle du corollaire \ref{C38}. 
Or, d'après la remarque suivant la proposition \ref{P33}, on a $$\overline{\rho}_k (\Delta_k \times \{ \mathbf{g} \} \times \{ z \} ) = \overline{\Delta^\circ (\mathbf{\sigma}^*)} \times \{ z \}$$ où cette fois $\mathbf{\sigma} = (\sigma_0 , \ldots , \sigma_{n-1})$, avec $\sigma_j = g_j^\perp e_1 $, de sorte que $$\mathrm{ker} \ e_1^* (g_j \cdot) = \langle \sigma_0^* , \ldots , \widehat{\sigma_j^*} , \ldots , \sigma_{n-1}^* \rangle.$$ On a donc $$\int_{\Delta_{n-1}} (\overline{\rho})^* \eta (g_0 , \ldots , g_{n-1} ) = \int_{\Delta_{n-1} '} \eta (\mathbf{\sigma}^*)$$ et la proposition \ref{P34} implique finalement que l'intégrale $\int_{\Delta_{n-1}} (\overline{\rho})^* \eta (\mathbf{g})$ est nulle si les $n$ formes linéaires $\ell_j = e_1^* (g_j \cdot)$ ($j \in \{0, \ldots , n-1\}$) sont linéairement dépendantes et qu'elle est égale à $$\frac{1}{(4i\pi)^n} \left( \frac{d \ell_0}{\ell_0} - \overline{\frac{d\ell_0}{\ell_0}} \right) \wedge \ldots \wedge \left( \frac{d \ell_{n-1}}{\ell_{n-1}} - \overline{\frac{d\ell_{n-1}}{\ell_{n-1}}} \right)$$ sinon. En faisant à nouveau appel au théorème de Brieskorn on retrouve que $\mathbf{S}_{\rm aff}$ définit un cocycle mais surtout que celui-ci représente la même classe que $\mathbf{S}_{\rm aff}^*$. Le fait que la classe de cohomologie correspondante $S_{\rm aff} \in H^{n-1} (G , \Omega_{\rm aff}^n)$ soit non nulle résulte finalement de \cite[Theorem 3]{Sczech93} où $\mathbf{S}_{\rm aff}$ est évalué sur un $(n-1)$-cycle d'éléments unipotents. \qed \medskip \noindent {\it Remarque.} Soit $W$ un sous-espace propre et non nul de $\mathbf{C}^n$, autrement dit un sommet de l'immeuble de Tits $\mathbf{T}$. Soit $X(W)$ le sous-espace de $\overline{X}^S$ constitué des matrices hermitiennes semi-définies positives dont le noyau contient $W$; il est homéomorphe à $\overline{X}_{n-\dim W}^S$ et donc contractile. 
Les sous-ensembles $X(\ell)$, avec $\ell \subset \mathbf{C}^n$ droites, forment un recouvrement acyclique du bord $\partial \overline{X}^S$ de la compactification de Satake. De plus, une intersection $X(\ell_1) \cap \ldots \cap X(\ell_k)$ est non-vide si et seulement si les droites $\ell_1, \ldots , \ell_k$ engendrent un sous-espace propre $W$ de $\mathbf{C}^n$, auquel cas $$X(\ell_1) \cap \ldots \cap X(\ell_k) = X(W).$$ Comme ensemble simplicial, la première subdivision barycentrique du nerf du recouvrement acyclique de $\partial \overline{X}^S$ par les $X(\ell )$ est donc égal à $\mathbf{T}$. C'est la source de la dualité qui relie les deux cocycles $\mathbf{S}_{\rm aff}$ et $\mathbf{S}_{\rm aff}^*$. On a en effet des isomorphismes $$H_{n-1} ( \overline{X}^S , \partial \overline{X}^S ) \cong \widetilde{H}_{n-2} (\partial \overline{X}^S) \cong \underbrace{\widetilde{H}_{n-2} (\mathbf{T})}_{= \mathrm{St} (\mathbf{C}^n )} \cong H_{n-1} ( \overline{X}^T , \partial \overline{X}^T ).$$ Explicitement, l'isomorphisme $$\mathrm{St} (\mathbf{C}^n ) \stackrel{\sim}{\longrightarrow} H_{n-1} ( \overline{X}^T , \partial \overline{X}^T )$$ associe à l'élément $[q_0 , \ldots , q_{n-1}] \in \mathrm{St} (\mathbf{C}^n )$ la classe de $\Delta (\mathbf{q})$ dans $$H_{n-1} ( \overline{X}^T , \partial \overline{X}^T );$$ il est $G$-équivariant ce qui se traduit par le fait que $\mathbf{S}_{\rm aff}^*$ s'étende en un isomorphisme $G$-équivariant de $\mathrm{St} (\mathbf{C}^n )$ vers $\Omega^n_{\rm aff}$. 
L'isomorphisme $$\mathrm{St} (\mathbf{C}^n ) \stackrel{\sim}{\longrightarrow} H_{n-1} ( \overline{X}^S , \partial \overline{X}^S )$$ associe quant à lui à un élément $[q_0 , \ldots , q_{n-1}] \in \mathrm{St} (\mathbf{C}^n )$ la classe de $[q_0^* , \ldots , q_{n-1}^*]$ dans $H_{n-1} ( \overline{X}^S , \partial \overline{X}^S )$; la représentation de $G$ dans $H_{n-1} ( \overline{X}^S , \partial \overline{X}^S )$ est donc naturellement identifiée à la représentation $\mathrm{St} ((\mathbf{C}^n )^\vee)$. \medskip Dans la suite, on globalise la construction ci-dessus en remplaçant la forme $\eta$ par la valeur en $0$ d'une série d'Eisenstein construite à partir de la fonction test $\psi$ à l'infini. On ne considère que la compactification de Tits, plus naturelle pour notre propos comme le montre la remarque précédente. \chapter{Séries d'Eisenstein associées à $\psi$} \label{C:7} \resettheoremcounters Dans ce chapitre on restreint les formes $\varphi$ et $\psi$ à l'espace symétrique associé à $\mathrm{SL}_n (\mathbf{R})$ que nous noterons $X$ en espérant ne pas créer de confusion. On note donc dorénavant $$S = \mathrm{GL}_n (\mathbf{R}) / \mathrm{SO}_n, \ S^+ = \mathrm{GL}_n (\mathbf{R})^+ / \mathrm{SO}_n \ \mbox{et} \ X = S^+/ \mathbf{R}_{>0} = \mathrm{SL}_n (\mathbf{R}) /\mathrm{SO}_n.$$ On globalise les constructions précédentes en formant une série d'Eisenstein à partir de la fonction $\psi$. La préimage de cette série d'Eisenstein par une section de torsion est étudiée dans un registre plus général par Bismut et Cheeger \cite{BismutCheeger}. Nous considérons ici le fibré en tores; la valeur en $s=0$ de la série d'Eisenstein est une manière de régulariser la moyenne de $\eta$ relativement à un réseau de $\mathbf{R}^n$. Nous travaillons adéliquement. 
\section{Quotients adéliques} Soit $\mathbf{A}$ l'anneau des adèles de $\mathbf{Q}$ et soit $$[\mathrm{GL}_n] = \mathrm{GL}_n (\mathbf{Q}) \backslash \mathrm{GL}_n (\mathbf{A} ) / \mathrm{SO}_n Z(\mathbf{R})^+.$$ Le théorème d'approximation forte pour $\mathrm{GL}_n$ implique que, pour tout sous-groupe compact ouvert $K \subset \mathrm{GL}_n (\mathbf{A}_f )$, le quotient $$[\mathrm{GL}_n] / K = \mathrm{GL}_n (\mathbf{Q}) \backslash \left( (\mathrm{GL}_n (\mathbf{R} ) / \mathrm{SO}_n Z(\mathbf{R})^+ ) \times \mathrm{GL}_n (\mathbf{A}_f ) \right) / K$$ est une union finie de quotients de $X$ de volumes finis que l'on peut décrire de la manière suivante. \'Ecrivons \begin{equation} \label{E:approxforte} \mathrm{GL}_n (\mathbf{A}_f ) = \bigsqcup_{j} \mathrm{GL}_n (\mathbf{Q})^+ g_j K \end{equation} avec $\mathrm{GL}_n (\mathbf{Q} )^+ = \mathrm{GL}_n (\mathbf{Q} ) \cap \mathrm{GL}_n (\mathbf{R})^+$. Alors \begin{equation} \label{E:quot0} [\mathrm{GL}_n] / K = \bigsqcup_{j} \Gamma_j \backslash X, \end{equation} où $\Gamma_j$ est l'image de $\mathrm{GL}_n (\mathbf{Q} )^+ \cap g_j K g_j^{-1}$ dans $\mathrm{GL}_n (\mathbf{R})^+ / Z(\mathbf{R})^+$. La composante connexe de la classe de l'identité dans $[\mathrm{GL}_n] / K$ est le quotient \begin{equation} \label{E:quot3} \Gamma \backslash X \end{equation} où $\Gamma = K \cap \mathrm{GL}_n (\mathbf{Q})^+$. Soit $V = \mathbf{G}_a^n$ vu comme groupe algébrique sur $\mathbf{Q}$; on a en particulier $$V (\mathbf{Q} ) = \mathbf{Q}^n \quad \mbox{et} \quad V (\mathbf{R} ) = \mathbf{R}^n.$$ Le groupe $\mathrm{GL}_n$, algébrique sur $\mathbf{Q}$, opère naturellement (par multiplication matricielle à gauche) sur $V(\mathbf{C}) = \mathbf{C}^n$. On note $$\mathcal{G} = \mathrm{GL}_n \ltimes V$$ le groupe affine correspondant; on le voit comme groupe algébrique sur $\mathbf{Q}$. 
Soit $$[\mathcal{G} ] = \mathcal{G} (\mathbf{Q} ) \backslash \left[ (\mathrm{GL}_n (\mathbf{R} ) \ltimes \mathbf{C}^n ) \cdot \mathcal{G} (\mathbf{A}_f ) \right] / \mathrm{SO}_n Z (\mathbf{R})^+.$$ On explique maintenant comment associer à ces données une famille de groupes abéliens isomorphes à $(\mathbf{C}^* )^n$. Soit $$L_f \subset V(\mathbf{A}_f ) = \{ I_n \} \ltimes V(\mathbf{A}_f ) \subset \mathcal{G} (\mathbf{A}_f)$$ un sous-groupe compact ouvert; l'intersection $L= L_f \cap V (\mathbf{Q})$ est un réseau dans $V$. On suppose dorénavant que $K \subset \mathrm{GL}_n (\mathbf{A}_f)$ préserve $L_f$. Le sous-groupe $$\mathcal{K} = \mathcal{K}_{L_f} = K \ltimes L_f \subset \mathcal{G} (\mathbf{A}_f)$$ préserve $L_f$; c'est un sous-groupe compact ouvert. Les quotients \begin{equation} \label{E:quot1} [\mathcal{G} ] / L_f = \mathcal{G} (\mathbf{Q} ) \backslash \left[ (\mathrm{GL}_n (\mathbf{R} ) \ltimes \mathbf{C}^n ) \cdot \mathcal{G} (\mathbf{A}_f ) \right] / \mathrm{SO}_n Z (\mathbf{R})^+ L_f \quad \mbox{et} \quad [\mathcal{G} ] / \mathcal{K} \end{equation} sont des fibrés en quotients de $\mathbf{C}^n$ au-dessus de respectivement $[\mathrm{GL}_n]$ et $[\mathrm{GL}_n] / K$. De \eqref{E:approxforte} on déduit que \begin{equation} \label{E:TK} [\mathcal{G} ] / \mathcal{K} \simeq \bigsqcup_j \Gamma_j \backslash (X \times \mathbf{C}^n/L_j ), \end{equation} où $L_j = g_j (L_f) \cap V(\mathbf{Q})$. L'isomorphisme s'obtient de la manière suivante~: étant donné une double classe $$\mathcal{G} (\mathbf{Q}) [( x , z) , ( g_f , v_f ) ] \mathcal{K},$$ avec $x \in \mathrm{GL}_n (\mathbf{R} )/\mathrm{SO}_n Z (\mathbf{R})^+$, $z \in \mathbf{C}^n$ et $(g_f , v_f) \in \mathcal{G} (\mathbf{A}_f)$, on peut d'abord supposer que $x$ appartient à $X$ en multipliant à gauche par un élément de $\mathcal{G} (\mathbf{Q})$ si nécessaire. 
On écrit alors $$(g_f , v_f) = (h , w)^{-1} (g_j , 0) k, \quad \mbox{avec } (h,w) \in \mathcal{G} (\mathbf{Q} )^+ \mbox{ et } k \in \mathcal{K}.$$ Alors \begin{equation} \label{E:TK2} \begin{split} \mathcal{G} (\mathbf{Q}) [( x , z) , ( g_f , v_f ) ] \mathcal{K} & = \mathcal{G} (\mathbf{Q}) [( x , z) , (h , w)^{-1} (g_j , 0) ] \mathcal{K} \\ & = \mathcal{G} (\mathbf{Q})(h , w)^{-1} [( hx , hz+w) , (g_j,0) ] \mathcal{K} \end{split} \end{equation} d'image $[hx, hz+w]$ dans $\Gamma_j \backslash (X \times \mathbf{C}^n/L_j )$. Dans la suite, on note $$\mathcal{T}_\mathcal{K} = \Gamma \backslash (X \times \mathbf{C}^n/L )$$ le fibré au-dessus de \eqref{E:quot3}. \medskip \noindent {\it Exemple.} Soit $N$ un entier strictement supérieur à $1$. On note $$K_{0} (N) \subset \mathrm{GL}_n (\widehat{\mathbf{Z}}) = \prod_p \mathrm{GL}_n (\mathbf{Z}_p)$$ le sous-groupe défini par les relations de congruences suivantes aux nombres premiers $p$ divisant $N$~: si $N = \prod p^{v_p (N)}$ alors la $p$-composante de $K_{0} (N)$ est constituée des matrices de $\mathrm{GL}_n (\mathbf{Z}_p )$ de la forme $$\left( \begin{array}{cc} \mathbf{Z}_p^\times & * \\ 0_{1,n-1} & \mathrm{GL}_{n-1} (\mathbf{Z}_p ) \end{array} \right)$$ modulo $p^{v_p (N)}$. 
Le groupe $K_0 (N)$ est compact ouvert et $$[\mathrm{GL}_n] / K_0 (N) = \Gamma_0 (N) \backslash X$$ avec $$\Gamma_0 (N) = \left\{ A \in \mathrm{SL}_n (\mathbf{Z} ) \; : \; A \equiv \left( \begin{array}{ccc} * & * & * \\ 0 & * & * \\ \vdots & \vdots & \vdots \\ 0 & * & * \end{array} \right) \ (\mathrm{mod} \ N ) \right\}.$$ Le groupe $\mathcal{K}_0 (N) = K_0 (N) \ltimes V (\widehat{\mathbf{Z}} )$ est compact et ouvert dans $\mathcal{G} (\mathbf{A}_f )$ et $$\mathcal{T}_{\mathcal{K}_0 (N)} = \Gamma_0 (N) \backslash \left[ X \times (\mathbf{C}^n / \mathbf{Z}^n) \right].$$ \medskip \section{Fonctions de Schwartz et cycles associés} \label{S:15} Soit $\mathcal{S} (V (\mathbf{A}_f ))$ l'espace de Schwartz de $V(\mathbf{A}_f)$ des fonctions $\varphi_f : V (\mathbf{A}_f) \to \mathbf{C}$ localement constantes et à support compact. Le groupe $\mathcal{G} (\mathbf{A}_f)$ opère sur $\mathcal{S} (V (\mathbf{A}_f ))$ par la ``représentation de Weil'' $$\omega (g , v ) : \mathcal{S} (V (\mathbf{A}_f )) \to \mathcal{S} (V (\mathbf{A}_f )); \quad \phi \mapsto \left( w \mapsto \phi (g^{-1} (w-v)) \right).$$ Considérons maintenant l'espace $C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right)$ des fonctions lisses; on fait opérer le groupe $\mathcal{G} (\mathbf{A}_f)$ sur $C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right)$ par la représentation régulière droite~: $$( (h ,w) \cdot f) (g ,v ) = f( g h , g w + v ).$$ Les fonctions lisses sont précisément celles qui sont invariantes sous l'action d'un sous-groupe ouvert de $\mathcal{G} (\mathbf{A}_f )$. L'application \begin{equation} \mathcal{S} (V (\mathbf{A}_f )) \to C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right); \quad \phi \mapsto f_\phi : ( (g,v) \mapsto \phi (-g^{-1} v )) \end{equation} est $\mathcal{G} (\mathbf{A}_f)$-équivariante relativement aux deux actions définies ci-dessus. \'Etant donné une fonction $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))$ on note $L_{\varphi_f}$ le réseau des périodes de $\varphi_f$.
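À titre d'exemple, pour la fonction caractéristique $\varphi_f = \delta_{\widehat{\mathbf{Z}}^n}$ on a :

```latex
L_{\varphi_f} = \widehat{\mathbf{Z}}^n \subset V (\mathbf{A}_f)
\qquad \mbox{et} \qquad
L_{\varphi_f} \cap V (\mathbf{Q}) = \mathbf{Z}^n ,
```

et le sous-groupe compact ouvert $K = \mathrm{GL}_n (\widehat{\mathbf{Z}})$ laisse $\varphi_f$ invariante et préserve donc $L_{\varphi_f}$.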
Si $K \subset \mathrm{GL}_n (\mathbf{A}_f )$ est un sous-groupe compact ouvert qui laisse $\varphi_f$ invariante (et préserve donc $L_{\varphi_f}$), alors la fonction $f_{\varphi_f}$ est invariante (à droite) sous l'action de $\mathcal{K} = K \ltimes L_{\varphi_f} \subset \mathcal{G} (\mathbf{A}_f)$. \begin{definition} Soit $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))$ une fonction de Schwartz invariante sous l'action d'un sous-groupe compact ouvert $K\subset \mathrm{GL}_n (\mathbf{A}_f )$. \begin{itemize} \item Soit $D_{\varphi_f}$, resp. $D_{\varphi_f , K}$, l'image de l'application $$\mathcal{G} (\mathbf{Q} ) \left[ \left( \mathrm{GL}_n (\mathbf{R}) \times \{ 0 \} \right) \cdot \mathrm{supp} (f_{\varphi_f} ) \right] \to [\mathcal{G}] /L_{\varphi_f} ,$$ resp. $$\mathcal{G} (\mathbf{Q} ) \left[ \left( \mathrm{GL}_n (\mathbf{R}) \times \{ 0 \} \right) \cdot \mathrm{supp} (f_{\varphi_f} ) \right] \to [\mathcal{G}]/ \mathcal{K},$$ induite par l'inclusion du support de $f_{\varphi_f}$ dans $\mathcal{G} (\mathbf{A}_f)$. \item Soit $$U_{\varphi_f} \subset [\mathcal{G}] /L_{\varphi_f} , \quad \mbox{resp.} \quad U_{\varphi_f, K} \subset [\mathcal{G}]/ \mathcal{K},$$ le complémentaire de $D_{\varphi_f}$, resp. $D_{\varphi_f , K}$. \end{itemize} \end{definition} \medskip \noindent {\it Remarque.} {\it Via} l'isomorphisme \eqref{E:TK} la projection de $D_{\varphi_f , K} \subset [\mathcal{G}]/ \mathcal{K}$ est égale à la réunion finie \begin{equation} \bigsqcup_j \bigcup_\xi \Gamma_j \backslash (X \times (L_j + \xi )/L_j ), \end{equation} où $\xi$ parcourt les éléments de $V(\mathbf{Q}) / L_j$ tels que $\varphi_f (g_j^{-1} \xi )$ soit non nul. 
\medskip En effet, en suivant \eqref{E:TK2} on constate que $$(g_f , v_f ) = (h^{-1} g_j , - h^{-1} w) k$$ et, la fonction $f_{\varphi_f}$ étant $\mathcal{K}$-invariante à droite, on a $$f_{\varphi_f} (g_f , v_f ) = f_{\varphi_f} (h^{-1} g_j , -h^{-1} w ) = \varphi_f ( g_j^{-1} w ).$$ \medskip L'espace $D_{\varphi_f}$ est donc un revêtement fini de $[\mathrm{GL}_n]$. La fonction $\varphi_f$ induit en outre une fonction localement constante sur $D_{\varphi_f}$, c'est-à-dire un élément de $H^0 (D_{\varphi_f })$. Maintenant, l'isomorphisme de Thom implique que l'on a~: \begin{equation} \label{E:thom} H^0 (D_{\varphi_f }) \stackrel{\sim}{\longrightarrow} H^{2n} \left( [\mathcal{G}]/ L_{\varphi_f} , U_{\varphi_f } \right); \end{equation} on note $$[\varphi_f] \in H^{2n} \left( [\mathcal{G}]/ L_{\varphi_f} , U_{\varphi_f } \right)$$ l'image de $\varphi_f$; cette classe est $K$-invariante, on désigne par $[\varphi_f]_K$ son image dans $$H^{2n} \left( [\mathcal{G}]/ \mathcal{K}, U_{\varphi_f , K } \right).$$ \begin{lemma} \label{L:40} Supposons $\widehat{\varphi}_f (0) =0$, autrement dit que $\int_{V(\mathbf{A}_f )} \varphi_f (v) dv=0$. Alors, l'image de $[\varphi_f]$ par l'application degré $$H^0 (D_{\varphi_f }) \to \mathbf{Z}^{\pi_0 (D_{\varphi_f })}$$ est égale à $0$. \end{lemma} \begin{proof} Montrons en effet que pour tout $j$ on a $$\sum_{\xi \in V(\mathbf{Q}) / L_j} \varphi_f ( g_j^{-1} \xi) = 0.$$ Quitte à remplacer $\varphi_{f}$ par $\omega (g_j ) \varphi_f$ on peut supposer que $g_j$ est l'identité. Mais \begin{equation*} \begin{split} \sum_{\xi \in V(\mathbf{Q}) / L} \varphi_f (\xi ) & = \sum_{ v \in V (\mathbf{A}_f ) / L_{\varphi_f} } \varphi_f (v) \\ & = \frac{1}{\mathrm{vol} (L_{\varphi_f})} \sum_{v \in V (\mathbf{A}_f ) / L_{\varphi_f} } \int_{L_{\varphi_f} } \varphi_f (v+u) du \\ & = \frac{1}{\mathrm{vol} (L_{\varphi_f} )} \int_{V(\mathbf{A}_f )} \varphi_f (v) dv \\ & = \frac{1}{\mathrm{vol} (L_{\varphi_f} )} \widehat{\varphi}_f (0) =0. 
\end{split} \end{equation*} \end{proof} Dans la suite on désigne par $D_{\varphi_f }^0$ l'intersection de $D_{\varphi_f }$ avec la composante connexe $\mathcal{T}_{\mathcal{K}}$. \medskip \noindent {\it Exemple.} Soit $N$ un entier strictement supérieur à $1$. Alors, le réseau $$L_{\mathcal{K}_0 (N)} = (N^{-1} \widehat{\mathbf{Z}} ) \times \widehat{\mathbf{Z}} \times \ldots \times \widehat{\mathbf{Z}} \subset V (\mathbf{A}_f)$$ est $\mathcal{K}_0 (N)$-invariant. La fonction \begin{equation} \label{E:exvarphif} \sum_{j=0}^{N-1} \delta_{(\frac{j}{N} , 0 , \ldots , 0) + \widehat{\mathbf{Z}}^n} - N \delta_{\widehat{\mathbf{Z}}^n} \in \mathcal{S} (V (\mathbf{A}_f)) \end{equation} est $\mathcal{K}_0 (N)$-invariante, à support dans $L_{\mathcal{K}_0 (N)}$ et de degré $0$. En conservant les notations de l'exemple précédent, on désigne par $D_0 (N)$ le sous-ensemble de $$\mathcal{T}_{\mathcal{K}_0 (N)} = \Gamma_0 (N) \backslash \left[ X \times (\mathbf{C}^n / \mathbf{Z}^n) \right]$$ associé à la fonction \eqref{E:exvarphif}. Il est constitué de tous les points dont la première coordonnée dans la fibre au-dessus de $\Gamma_0 (N) \backslash X$ est de $N$-torsion et dont toutes les autres coordonnées sont nulles. \medskip \section{Série theta adélique} \`A toute fonction $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f))$ il correspond les formes différentielles $$\widetilde{\varphi} \otimes \varphi_f \quad \mbox{et} \quad \widetilde{\psi} \otimes \varphi_f \in A^{\bullet} \left( S \times \mathbf{C}^n , \mathcal{S} (\mathbf{C}^n ) \right) \otimes \mathcal{S} (V (\mathbf{A}_f ))$$ de degrés respectifs $2n$ et $2n-1$. En appliquant la distribution theta dans les fibres, on obtient alors des applications \begin{equation} \label{appl-theta} \theta_\varphi \quad \mbox{et} \quad \theta_\psi : \mathcal{S} (V (\mathbf{A}_f)) \longrightarrow \left[ A^{\bullet} (S \times \mathbf{C}^n) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} )}. 
\end{equation} \medskip \noindent {\it Remarque.} Rappelons que $$A^{\bullet} (S \times \mathbf{C}^n) \cong \left[ \wedge^\bullet (\mathfrak{p} \oplus \mathbf{C}^n)^* \otimes C^{\infty} (\mathrm{GL}_n (\mathbf{R}) \ltimes \mathbf{C}^n ) \right]^{\mathrm{SO}_n}.$$ Le produit tensoriel dans \eqref{appl-theta} est donc plus rigoureusement égal à $$\mathrm{Hom}_{\mathrm{SO}_n} \left( \wedge^\bullet (\mathfrak{p} \oplus \mathbf{C}^n) , C^{\infty} \left( (\mathrm{GL}_n (\mathbf{R}) \ltimes \mathbf{C}^n) \times \mathcal{G} (\mathbf{A}_f ) \right) \right)^{\mathcal{G} (\mathbf{Q} )}$$ où $C^{\infty} \left( (\mathrm{GL}_n (\mathbf{R}) \ltimes \mathbf{C}^n) \times \mathcal{G} (\mathbf{A}_f ) \right) $ est l'espace des fonctions lisses sur un espace adélique. \medskip L'application $\theta_\varphi$ est définie par \begin{equation} \label{appl-theta2} \begin{split} \theta_\varphi (g_f , v_f ; \varphi_f ) & = \sum_{\xi \in V (\mathbf{Q} ) } \widetilde{\varphi} (\xi ) (\omega (g_f , v_f ) \varphi_f ) (\xi ) \\ & = \sum_{\xi \in V(\mathbf{Q} ) } \varphi_f \left( g_f^{-1} (\xi -v_f ) \right) \widetilde{\varphi} (\xi ) \end{split} \end{equation} et de même pour $\theta_\psi$. Rappelons que le groupe $\mathcal{G} (\mathbf{R})$ opère naturellement sur $S \times \mathbf{C}^n$; étant donné un élément $(g,v) \in \mathcal{G} (\mathbf{R})$ et une forme $\alpha \in \mathcal{A}^{\bullet} (S \times \mathbf{C}^n)$ on note $(g,v)^* \alpha$ le tiré en arrière de $\alpha$ par l'application $$(g,v) : S \times \mathbf{C}^n \to S \times \mathbf{C}^n.$$ L'invariance sous le groupe $\mathcal{G}(\mathbf{Q})$ dans \eqref{appl-theta} signifie donc que pour tout $(g,v)$ dans $\mathcal{G} (\mathbf{Q})$ on a~: \begin{equation} \label{E:invtheta} (g,v)^*\theta_\varphi (g g_f , gv_f + v ;\varphi_f)=\theta_\varphi (g_f, v_f ; \varphi_f); \end{equation} ce qui découle de la $\mathcal{G} (\mathbf{R})$-invariance de $\widetilde{\varphi}$, voir \S \ref{S:44}. 
Les applications $\theta_\varphi$ et $\theta_\psi$ entrelacent par ailleurs les actions naturelles de $\mathcal{G} (\mathbf{A}_f)$ des deux côtés~: pour tout $(h_f , w_f ) \in \mathcal{G} (\mathbf{A}_f )$ et pour $\theta = \theta_\varphi$ ou $\theta_\psi$, on a \begin{equation} \label{E:entrelacetheta} \theta ( \omega (h_f , w_f ) \varphi_f ) = (h_f , w_f ) \cdot \theta (\varphi_f ). \end{equation} En particulier, on a $$\theta_\varphi (\varphi_f ) \quad \mbox{et} \quad \theta_\psi (\varphi_f ) \in \left[ A^{\bullet} (S \times \mathbf{C}^n) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} ) \times L_{\varphi_f}} = A^\bullet \left( \widehat{[\mathcal{G}]} / L_{\varphi_f} \right),$$ où $$\widehat{[\mathcal{G}]} = \mathcal{G} (\mathbf{Q}) \backslash \left( (\mathrm{GL}_n (\mathbf{R}) \ltimes \mathbf{C}^n) \times \mathcal{G} (\mathbf{A}_f ) \right) / \mathrm{SO}_n.$$ Il découle en outre de \eqref{E:entrelacetheta} que si $\varphi_f$ est $K$-invariante alors les formes $\theta_\varphi (\varphi_f )$ et $\theta_\psi (\varphi_f )$ sont $K$-invariantes à droite. \medskip \subsection{Action de l'algèbre de Hecke} \label{algHecke} Soit $p$ un nombre premier. On désigne par $\mathcal{H}_p$ l'algèbre de Hecke locale de $\mathcal{G} (\mathbf{Q}_p )$, c'est-à-dire l'algèbre des fonctions lisses à support compact sur $\mathcal{G} (\mathbf{Q}_p )$, munie du produit de convolution. La fonction caractéristique de $\mathcal{G} (\mathbf{Z}_p)$ appartient à $\mathcal{H}_p$. L'\emph{algèbre de Hecke globale} $\mathcal{H} (\mathcal{G} (\mathbf{A}_f) )$ est le produit restreint des algèbres de Hecke locales $\mathcal{H}_p$ relativement aux fonctions caractéristiques de $\mathcal{G} (\mathbf{Z}_p)$, cf. \cite{Flath}. Un élément de $\mathcal{H} (\mathcal{G} (\mathbf{A}_f) )$ est donc un produit tensoriel $\phi = \otimes \phi_p$, où pour presque tout $p$ la fonction $\phi_p$ est égale à la fonction caractéristique de $\mathcal{G} (\mathbf{Z}_p)$.
On désigne par $\mathbf{T}_\phi$ l'{\it opérateur de Hecke} sur $C^\infty (\mathcal{G} (\mathbf{A}_f ))$ qui lui correspond. Il associe à une fonction $f \in C^\infty (\mathcal{G} (\mathbf{A}_f ))$ la fonction $$\mathbf{T}_\phi (f) : (g,v) \mapsto \int_{\mathcal{G} (\mathbf{A}_f)} f(gh , gw+v) \phi (h,w) d(h,w),$$ où la mesure de Haar est normalisée de sorte que pour tout $p$, le volume de $\mathcal{G} (\mathbf{Z}_p )$ soit égal à un. On note encore $$\mathbf{T}_\phi : A^\bullet (S \times \mathbf{C}^n) \otimes C^\infty (\mathcal{G} (\mathbf{A}_f )) \to A^\bullet (S \times \mathbf{C}^n) \otimes C^\infty (\mathcal{G} (\mathbf{A}_f ))$$ l'opérateur de Hecke induit; il commute à l'action (à gauche) de $\mathcal{G} (\mathbf{Q})$. L'algèbre de Hecke $\mathcal{H} (\mathcal{G} (\mathbf{A}_f) )$ opère également sur $ \mathcal{S} (V (\mathbf{A}_f))$ {\it via} les opérateurs $T_\phi : \mathcal{S} (V (\mathbf{A}_f)) \to \mathcal{S} (V (\mathbf{A}_f))$ qui à une fonction de Schwartz $\varphi_f$ associent la fonction $$T_\phi (\varphi_f ) : v \mapsto \int_{\mathcal{G} (\mathbf{A}_f )} \phi (h,w) \varphi_f (h^{-1} (v-w)) d (h,w) .$$ La proposition suivante résulte alors des définitions. \begin{proposition} \label{P:hecke1} Soit $\phi \in \mathcal{H} (\mathcal{G} (\mathbf{A}_f ))$.
Alors $\mathbf{T}_\phi$ préserve $$\left[ A^\bullet (S \times \mathbf{C}^n) \otimes C^\infty (\mathcal{G} (\mathbf{A}_f )) \right]^{\mathcal{G} (\mathbf{Q})} $$ et $$\mathbf{T}_\phi ( \theta_{\varphi} (\varphi_f) ) = \theta_{\varphi} ( T_\phi (\varphi_f )) \quad \mbox{et} \quad \mathbf{T}_\phi ( \theta_{\psi} (\varphi_f) ) = \theta_{\psi} ( T_\phi (\varphi_f )).$$ \end{proposition} \subsection{Classes de cohomologie associées} \label{cohomClass} Notons $\widehat{U_{\varphi_f}}$ la préimage de $U_{\varphi_f}$ dans $\widehat{[\mathcal{G}]} / L_{\varphi_f}$ et $\widehat{U_{\varphi_f , K}} $ la projection de $\widehat{U_{\varphi_f}}$ dans $ \widehat{[\mathcal{G}]} / \mathcal{K}$ de sorte que $$\widehat{U_{\varphi_f , K}} = \widehat{[\mathcal{G}]} / \mathcal{K} - \mathcal{G} (\mathbf{Q}) \left( [S^+ \times \{ 0 \}] \cdot \mathrm{supp}(f_{\varphi_f} ) \right) \mathcal{K}.$$ Notons enfin $\widehat{[\varphi_f]}$ l'image de $[\varphi_f]$ dans $$H^{2n} \left( \widehat{[\mathcal{G}]} / L_{\varphi_f} , \widehat{U_{\varphi_f}} \right).$$ \begin{proposition} \label{P:thetacohom} La forme différentielle $\theta_\varphi (\varphi_f )$ est fermée et représente $\widehat{[\varphi_f]}$ dans $H^{2n} \left( \widehat{[\mathcal{G}]} / L_{\varphi_f} , \widehat{U_{\varphi_f}} \right)$. \end{proposition} \begin{proof} La forme $\theta_\varphi (\varphi_f )$ est fermée comme combinaison linéaire de formes fermées. Fixons un sous-groupe compact ouvert $K \subset \mathrm{GL}_n (\mathbf{A}_f)$ tel que $\varphi_f$ soit $K$-invariante. Alors $\theta_\varphi (\varphi_f )$ appartient à $$\left[ A^{2n} (S \times \mathbf{C}^n) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} ) \times \mathcal{K}} \cong \bigoplus_j A^{2n} (S^+ \times \mathbf{C}^n )^{\Gamma_j \ltimes L_j},$$ où l'isomorphisme ci-dessus est obtenu en évaluant en $(g_j , 0)$. 
On the other hand, we have shown that the image of $[\varphi_f ]_{K}$ in $$H^{2n} \left( \widehat{[\mathcal{G}]} / \mathcal{K} \right) \cong \bigoplus_j H^{2n} (\Gamma_j \backslash (S^+ \times \mathbf{C}^n/L_j ))$$ is equal to $$\bigoplus_j \sum_{\xi \in V(\mathbf{Q})/ L_j} \varphi_f (g_j^{-1} \xi ) \left[ \Gamma_j \backslash (S^+ \times (L_j +\xi )/L_j ) \right].$$ Replacing $\varphi_f$ by $\omega (g_j ) \varphi_f$ if necessary, we may therefore restrict to the connected component associated with the identity. We then simply have $\Gamma_j = \Gamma$ and $L_j = L$. Now, for every $\xi \in V (\mathbf{Q} )$ the differential form $$\varphi_f (\xi ) \widetilde{\varphi} (\xi ) = \varphi_f (\xi) (1,-\xi)^* \varphi \in A^{2n} ( S^+ \times \mathbf{C}^n )$$ is a Thom form and represents $$\varphi_f (\xi) [S^+ \times \{ \xi \} ] \in H^{2n} (S^+ \times \mathbf{C}^n , S^+ \times (\mathbf{C}^n - \{ \xi \} )).$$ The average $$\theta_\varphi (1,0;\varphi_f ) = \sum_{\xi \in V (\mathbf{Q})} \varphi_f (\xi ) \widetilde{\varphi} (\xi) \in A^{2n} (S^+ \times \mathbf{C}^n )$$ therefore represents the image of $$\sum_{\xi \in V(\mathbf{Q}) / L} \varphi_f (\xi) \left[ \Gamma \backslash (S^+ \times (L +\xi )/L ) \right]$$ under the Thom isomorphism. \end{proof} Proposition \ref{P:eta} suggests considering the differential forms $$\theta_{[r]^*\varphi} (\varphi_f ) \quad \mbox{and} \quad \theta_{[r]^*\psi} (\varphi_f ) \in \left[ A^{\bullet} (S \times \mathbf{C}^n) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} )} \quad (r >0).$$ The following lemma is a global version of Lemma \ref{L:convcourant}; it is deduced from the Poisson summation formula. \begin{lemma} \label{L:theta-asympt} 1. As $r$ tends to $+\infty$, the forms $\theta_{[r]^* \varphi} (\varphi_f )$ converge to the zero form, uniformly and exponentially fast on every compact subset of $\widehat{U_{\varphi_f , K}}$. 2.
Viewed as currents in $$ \left[\mathcal{D}^{2n} (S \times \mathbf{C}^n ) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} ) \times \mathcal{K}},$$ the forms $\theta_{[r]^* \varphi} (\varphi_f )$ converge exponentially fast, as $r$ tends to $+\infty$, to the current $[\widehat{D_{\varphi_f , K}}]$ associated with the locally constant function $\varphi_f$ and supported on $$\mathcal{G} (\mathbf{Q}) \left( [S^+ \times \{ 0 \}] \times \mathrm{supp}(f_{\varphi_f}) \right) \mathcal{K}.$$ 3. The map sending $r \in \mathbf{R}_+^*$ to the differential form $\theta_{[r]^* \varphi} ( \varphi_f )$ on $S \times \mathbf{C}^n $ extends to a smooth function on $[0, +\infty)$ vanishing at $0$. \end{lemma} \begin{proof} The first two points follow from points (1) and (2) of Lemma \ref{L:convcourant}. One may moreover note that, as $r$ varies, the forms $\theta_{[r] ^* \varphi} (\varphi_f )$ are all cohomologous, by the proof of Proposition \ref{P:thetacohom}. The third point follows from point (3) of Lemma \ref{L:convcourant}. \end{proof} \section{Adelic Eisenstein series} \label{SEA7} Let $\varphi_f \in \mathcal{S} (V(\mathbf{A}_f))$. The following three propositions are adelic incarnations of Hecke's classical regularization procedure; see for instance \cite[Chapter IV]{Wielonsky}. They are proved in the same way; we only detail the proof of the first one, which is slightly more subtle.
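\medskip \noindent {\it Remark.} The analytic mechanism behind the three propositions below may be summarized by the following elementary model (a sketch, in which a scalar-valued $\theta$ stands in for the form-valued integrands of the text). Let $\theta : [0 , +\infty ) \to \mathbf{C}$ be smooth, rapidly decreasing at $+\infty$, with $\theta (0) = 0$. For $\mathrm{Re} (s) > 0$ and any integer $K \geq 1$ we may write $$\int_0^{\infty} r^{s} \theta (r) \frac{dr}{r} = \sum_{k=1}^{K} \frac{\theta^{(k)} (0)}{k! \, (s+k)} + \int_0^{1} r^{s} \left( \theta (r) - \sum_{k=1}^{K} \frac{\theta^{(k)} (0)}{k!} \, r^k \right) \frac{dr}{r} + \int_1^{\infty} r^{s} \theta (r) \frac{dr}{r},$$ and the right-hand side is holomorphic on $\mathrm{Re} (s) > -K-1$ away from at most simple poles at $s = -1 , \ldots , -K$; since $\theta (0) = 0$, there is no pole at $s=0$. \medskip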
\begin{proposition} \label{P:Eis1} The integral \begin{equation} \label{Eis1} E_\varphi ( \varphi_f , s) = \int_0^{\infty} r^{s} \theta_{[r]^*\varphi} (\varphi_f ) \frac{dr}{r}, \end{equation} which converges absolutely, uniformly on every compact subset of $\widehat{U_{\varphi_f , K}}$, when $\mathrm{Re} (s) >0$, admits a meromorphic continuation to all of $\mathbf{C}$, with values in the space $A^{2n} (\widehat{U_{\varphi_f , K}})$, holomorphic away from at most simple poles at the negative integers. \end{proposition} \begin{proof} By Lemma \ref{L:theta-asympt} (1), the integral \eqref{Eis1} converges absolutely on every compact subset of $\widehat{U_{\varphi_f , K}}$ when $\mathrm{Re} (s) >0$. It therefore defines a differential form in $A^{2n} (\widehat{U_{\varphi_f , K}})$. To continue this function of $s$, it suffices to split the integral into two pieces: one from $0$ to $1$, where one uses a Taylor expansion of the function $r \mapsto \theta_{[r]^*\varphi} (\varphi_f )$ near $0$, which is well defined by Lemma \ref{L:theta-asympt} (3), and the other from $1$ to $+\infty$, which causes no problem by Lemma \ref{L:theta-asympt} (1). The fact that $0$ is not a pole follows from the vanishing of $r \mapsto \theta_{[r]^*\varphi} (\varphi_f )$ at $0$. \end{proof} \begin{proposition} \label{P:Eis1bis} The integral \begin{equation} \label{Eis1bis} E_\varphi ( \varphi_f , s) = \int_0^{\infty} r^{s} \left( \theta_{[r]^*\varphi} (\varphi_f ) - [\widehat{D_{\varphi_f , K}}] \right) \frac{dr}{r} \end{equation} defines a meromorphic map in $s$ with values in the space of currents $$\left[\mathcal{D}_{\bullet} (S \times \mathbf{C}^n ) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} ) \times \mathcal{K}},$$ with a simple pole at $s=0$ of residue $- [\widehat{D_{\varphi_f , K}}]$.
\end{proposition} \begin{proof} The proof is identical to that of Proposition \ref{P:Eis1}, except that this time one applies Lemma \ref{L:theta-asympt} (2), and the map sending $r$ to the current $ \theta_{[r]^*\varphi} (\varphi_f ) - [\widehat{D_{\varphi_f , K}}]$ no longer vanishes at $0$ but is equal to $- [\widehat{D_{\varphi_f , K}}]$ there. The function sending $s$ to the current $E_\varphi ( \varphi_f , s)$ therefore now has a (simple) pole at $0$, with residue $- [\widehat{D_{\varphi_f , K}}]$. \end{proof} \begin{proposition} \label{P:Eis2} The integral \begin{equation} \label{Eis2} E_\psi ( \varphi_f , s) = \int_0^{\infty} r^{s} \theta_{[r]^*\psi} (\varphi_f ) \frac{dr}{r}, \end{equation} which converges absolutely, uniformly on every compact subset of $\widehat{U_{\varphi_f , K}}$, when $\mathrm{Re} (s) >0$, admits a meromorphic continuation to all of $\mathbf{C}$, with values in the space $A^{2n-1} (\widehat{U_{\varphi_f , K}})$, holomorphic away from at most simple poles at the negative integers. \end{proposition} \begin{proof} The proof is identical to that of Proposition \ref{P:Eis1}. The fact that the function $r \mapsto \theta_{[r]^*\psi} (\varphi_f )$ vanishes at $0$ follows this time from the vanishing of $\psi$ at $0$. \end{proof} \medskip Recall now that $$[\mathcal{G}] = \widehat{[\mathcal{G} ]} / Z (\mathbf{R})^+,$$ where $Z (\mathbf{R})^+$ acts naturally on $S^+$ and trivially on the fiber $\mathbf{C}^n$. The action of an element $\lambda \in Z(\mathbf{R})^+$ on $S^+ \times \mathbf{C}^n$ is therefore obtained by composing the action of $(\lambda , 0)$ in $\mathcal{G} (\mathbf{R})$ with multiplication by $\lambda^{-1}$ in the fiber $\mathbf{C}^n$.
The $\mathcal{G} (\mathbf{R})$-equivariance of $\widetilde{\varphi}$ then implies that for every $w \in \mathbf{C}^n$ we have $$\lambda^* \widetilde{\varphi} (w) = [\lambda^{-1}]^* ((\lambda , 0 )^* \widetilde{\varphi} (w)) = [\lambda^{-1}]^* \widetilde{\varphi} (\lambda^{-1} w ) = \widetilde{[\lambda^{-1}]^* \varphi} (w),$$ and similarly for $\psi$. It follows that for every $\lambda \in Z(\mathbf{R})^+$ we have \begin{equation} \lambda^* \widetilde{[r]^*\varphi} = \widetilde{[\lambda^{-1} r]^*\varphi} \quad \mbox{and} \quad \lambda^* \widetilde{[r]^*\psi} = \widetilde{[\lambda^{-1} r]^*\psi}, \end{equation} and hence, after the change of variables $r \mapsto \lambda r$ in the integrals \eqref{Eis1} and \eqref{Eis2}, \begin{equation} \label{E:actZ} \lambda^* E_\varphi (\varphi_f , s) = \lambda^{s} E_{\varphi} (\varphi_f , s) \quad \mbox{and} \quad \lambda^*E_\psi (\varphi_f , s) = \lambda^{s} E_\psi (\varphi_f , s). \end{equation} We set \begin{equation} E_\psi (\varphi_f) = E_\psi (\varphi_f , 0) \in A^{2n-1} \left( [\mathcal{G}] / L_{\varphi_f} - D_{\varphi_f } \right) . \end{equation} \begin{theorem} The differential form $$E_\psi (\varphi_f) \in A^{2n-1} \left( [\mathcal{G}] / L_{\varphi_f} - D_{\varphi_f } \right)$$ is \emph{closed} and represents a cohomology class lifting the class\footnote{Note that in the (multiplicative) case of this section, the image of $[\varphi_f]$ in $H^{2n} \left( [\mathcal{G}] / L_{\varphi_f} \right)$ vanishes, since the fibers have cohomological dimension $n$.} $$[\varphi_f] \in H^{2n} \left( [\mathcal{G}] / L_{\varphi_f} , [\mathcal{G}] / L_{\varphi_f} - D_{\varphi_f }\right)$$ in the long exact sequence \begin{multline*} \ldots \to H^{2n-1} \left( [\mathcal{G}] / L_{\varphi_f} - D_{\varphi_f } \right) \\ \to H^{2n} \left( [\mathcal{G}] / L_{\varphi_f} , [\mathcal{G}] / L_{\varphi_f} - D_{\varphi_f } \right) \to H^{2n} \left( [\mathcal{G}] / L_{\varphi_f} \right) \to \ldots \end{multline*} \end{theorem} \begin{proof} This is an adelic version of \cite[Theorem 19]{Takagi}.
Fix a compact open subgroup $K \subset \mathrm{GL}_n (\mathbf{A}_f )$ such that $\varphi_f$ is $K$-invariant. Since the integrals \eqref{Eis1} and \eqref{Eis2} converge absolutely on every compact subset of $\widehat{U_{\varphi_f , K}}$, it follows from (\ref{E:tddt1}) that \begin{equation*} \begin{split} d E_\psi ( \varphi_f , s) & = \int_0^\infty r^{s} d (\theta_{[r]^*\psi} (\varphi_f )) \frac{dr}{r} \\ & = \int_0^\infty r^{s} \frac{d}{dr} \theta_{[r]^*\varphi} (\varphi_f ) dr. \end{split} \end{equation*} An integration by parts therefore gives \begin{equation} \label{dEis} d E_\psi ( \varphi_f , s) = - s \int_0^\infty r^{s} \theta_{[r]^*\varphi} (\varphi_f ) \frac{dr}{r} = -s E_{\varphi} (\varphi_f , s ) \end{equation} on $\widehat{U_{\varphi_f , K}}$. In particular the form $E_\psi ( \varphi_f ) = E_\psi ( \varphi_f , 0)$ is closed on $\widehat{U_{\varphi_f , K}}$, and the first part of the theorem then follows from the fact that, by \eqref{E:actZ}, the form $E_\psi ( \varphi_f )$ is invariant under the action of the center $Z (\mathbf{R})^+$. Finally, Proposition \ref{P:Eis1bis} implies that the identity \eqref{dEis} extends over $\widehat{[\mathcal{G}]} / \mathcal{K}$ to an identity between currents, that in this sense $$d E_\psi (\varphi_f) =[\widehat{D_{\varphi_f , K}}],$$ and that the image of $[E_\psi (\varphi_f)] \in H^{2n-1} (\widehat{U_{\varphi_f , K}})$ in $H^{2n} (\widehat{[\mathcal{G}] }/ \mathcal{K} , \widehat{U_{\varphi_f , K}})$ is equal to $[\varphi_f]_K$. The theorem follows, noting again that these classes are all invariant under the action of $Z (\mathbf{R} )^+$. \end{proof} \medskip To conclude this section, note that for every $g \in \mathrm{GL}_n (\mathbf{Q}) $ it follows from \eqref{E:invtheta} that \begin{equation} \label{E:glnqinv} g^* (E_\psi ( \varphi_f ) (gg_f , gv_f )) = E_\psi (\varphi_f) (g_f , v_f).
\end{equation} Taking $g$ scalar and $(g_f, v_f) = (g_j , 0)$, we obtain in particular that for $\alpha \in \mathbf{Q}^*$, positive if $n$ is odd, we have $$E_\psi (\varphi_f (\alpha \cdot )) (g_j , 0) = E_{[\alpha^{-1}]^*\psi} (\varphi_f ) (g_j , 0) = E_{\psi} (\varphi_f ) (g_j , 0) .$$ This implies that $$[E_{\psi} (\varphi_f ) (g_j , 0) ] \in H_{\Gamma_j}^{2n-1} \left( ( \mathbf{C}^n - \cup_\xi (L_j +\xi) ) / L_j \right)^{(1)},$$ in the sense of Definition \ref{Def1.7}. The restriction of $E_\psi (\varphi_f)$ to the connected component $\mathcal{T}_{\mathcal{K}}$ defines a closed form $E_\psi (\varphi_f)^0$ on $$\Gamma \backslash (X \times \mathbf{C}^n )/ L - D_{\varphi_f }^0,$$ and hence an equivariant cohomology class \begin{equation} \label{E:721} [E_\psi (\varphi_f)^0] \in H_\Gamma^{2n-1} \left( ( \mathbf{C}^n - \cup_\xi (L+ \xi) ) / L \right)^{(1)} , \end{equation} where $\xi$ runs over the elements of $V (\mathbf{Q}) \cap \mathrm{supp} (\varphi_f )$. \medskip \noindent {\it Example.} Let $N$ be an integer greater than $1$. Taking for $\varphi_f$ the function \eqref{E:exvarphif}, we obtain an equivariant cohomology class $$E_{D_0 (N)} \in H_{\Gamma_0 (N)}^{2n-1} (T - T [N])^{(1)},$$ with $T = \mathbf{C}^n / \mathbf{Z}^n$. This class is associated with the invariant cycle of $N$-torsion and of degree $0$ $$D_0 (N) \in H_{\Gamma_0 (N)}^0 (T[N]),$$ as explained in Section \ref{S:15}. \medskip \section{Behavior at infinity of $E_\psi (\varphi_f)$} \label{S192} \begin{definition} Given a rational subspace $W \subset V$, we denote by $$\int_W : \mathcal{S} (V (\mathbf{A}_f)) \to \mathcal{S} (V (\mathbf{A}_f) / W (\mathbf{A}_f))$$ the natural map of integration along the fibers of the projection $V \to V/ W$.
\end{definition} In this section we fix a function $\varphi_f \in \mathcal{S} (V(\mathbf{A}_f ))$ and a parabolic subgroup $Q = Q (W_\bullet )$ in $\mathrm{SL}_n (\mathbf{Q})$ associated with a flag of rational subspaces of $\mathbf{Q}^n$; see (\ref{E:flag}), whose notation we keep. We propose to study the behavior of the differential form $E_\psi (\varphi_f)$ in restriction to the Siegel sets associated with $Q$. We also fix \begin{itemize} \item a positive real number $t_0$, \item an element $g \in \mathrm{SL}_n (\mathbf{Q})$ such that $g^{-1} W_\bullet$ is a standard flag $W_J$, \item a relatively compact subset $\omega\subset N_J M_J$, and \item a compact subset $\kappa$ of \begin{equation*} \left\{ (z , (g_f , v_f ) ) \in \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f ) \; \left| \; \begin{array}{l} \forall \xi \in V (\mathbf{Q}), \\ \varphi_f (g_f^{-1} (\xi - v_f)) \neq 0 \Rightarrow z \notin W_k (\mathbf{C}) + \xi \end{array} \right. \right\}. \end{equation*} \end{itemize} \begin{lemma} \label{L:thetaSiegel} There exist positive constants $C$, $\alpha$ and $\beta$ such that for every $t \geq t_0$ the following two properties hold. \begin{enumerate} \item The forms $\theta_{[r]^* \psi} (\varphi_f )$ ($r\geq 1$) satisfy the bound $$||\theta_{[r]^* \psi} (\varphi_f ) ||_\infty \leq C e^{-r^2 \alpha t^\beta}$$ in restriction to $\mathfrak{S}_{W_\bullet} (g ,t, \omega) \times \kappa$. \item If we assume moreover that $\int_{W_1} \varphi_f$ is identically $0$, then the forms $\theta_{[r]^* \psi} (\varphi_f )$ ($r\leq 1$) satisfy the bound $$||\theta_{[r]^* \psi} (\varphi_f ) ||_\infty \leq C e^{-r^{-2} \alpha t^\beta}$$ in restriction to $\mathfrak{S}_{W_\bullet} (g ,t, \omega) \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f )$.
\end{enumerate} \end{lemma} \begin{proof} The first point follows from Lemma \ref{L8} (the uniform convergence of the sum over $\xi$ results from the fact that one is summing Gaussian functions). To prove the second point, it suffices to bound the pointwise norm of $$(h,v)^*\theta_{[r]^* \psi} (\varphi_f ) = [r]^* \left(\sum_\xi \varphi_f (\xi) (\omega (r^{-1} h , v) \widetilde{\psi}) (\xi ) \right)$$ at the base point $([e] , 0)$ of $X \times \mathbf{C}^n$, for $(h,v) \in \mathfrak{S}_{W_\bullet} (g ,t, \omega) \times \mathbf{C}^n$. For this we again use the Poisson summation formula: \begin{equation*} \sum_\xi \varphi_f (\xi) (\omega (r^{-1} h , v) \widetilde{\psi}) (\xi ) = \sum_\xi \widehat{\varphi}_f (\xi) \widehat{(\omega (r^{-1} h , v) \widetilde{\psi})} (\xi ) . \end{equation*} Now, for $(h , v) \in \mathrm{GL}_n (\mathbf{C}) \ltimes \mathbf{C}^n$ we have $$| \widehat{\omega(h , v) \widetilde{\psi } } (\xi)| = | \det (h) \widehat{\widetilde{\psi}} (h^{\top} \xi )|.$$ We are thus reduced to studying the growth of $\widehat{\widetilde{\psi}}$ on $\mathfrak{S}_{W_\bullet} (g ,t, \omega)^{-\top} \times \{ 0 \}$, where the image of $\mathfrak{S}_{W_\bullet} (g ,t, \omega)$ under $h \mapsto h^{-\top}$ is a Siegel set of the form $\mathfrak{S}_{W_\bullet '} (g' ,t, \omega')$ associated with the flag $$(0) \varsubsetneq W_k ' \varsubsetneq W_{k-1} ' \varsubsetneq \cdots \varsubsetneq W_1 ' \varsubsetneq \mathbf{Q}^n,$$ with $g' = w_J g^{-\top} w_J^{-1}$, $\omega' = w_J \omega^{-\top} w_J^{-1}$ and $$w_J = \left(\begin{array}{ccc} & & 1_{n-j_k} \\ & \reflectbox{$\ddots$} & \\ 1_{n-j_1} & & \end{array} \right).$$ Now $$\widehat{\widetilde{\psi}} (h^{\top} \xi ) = (\omega (h^{-\top} , 0 ) \widehat{\widetilde{\psi}} ) (\xi)= ((h^{-\top})^* \widehat{\widetilde{\psi}} ) (\xi),$$ and the proof of Lemma \ref{L8} (or the fact that $\widehat{\widetilde{\psi}}$ is a differential form in $A^{2n-1} (E , \mathcal{S}(V))$) implies that, in restriction to
$\mathfrak{S}_{W_\bullet '} (g' ,t, \omega')$, the form $$\widehat{\widetilde{\psi}} (h^\top \xi ) \quad \mbox{has norm} \quad O(e^{-\alpha' t^{\beta '} |p_{W_1} (\xi )|^2})$$ for certain positive constants $\alpha '$ and $\beta '$. Finally, by hypothesis $\int_{W_1} \varphi_f$ is identically zero, so that for every $\xi \in W_1 '$ we have\footnote{Here $\chi$ is a fixed additive character.} \begin{equation*} \begin{split} \widehat{\varphi}_f (\xi ) & = \int_{V (\mathbf{A}_f )} \varphi_f (x) \chi (\langle \xi , x \rangle) dx \\ & = \int_{W_1 '} \left( \int_{W_1} \varphi_f (w+w') dw' \right) \chi (\langle \xi , w \rangle) dw \\ & = 0. \end{split} \end{equation*} The sum $$\sum_{\xi \in V (\mathbf{Q} ) } \widehat{\varphi}_f (\xi ) \widehat{\psi} (\xi )$$ therefore only runs over the $\xi \notin W_1'$, that is, those such that $p_{W_1} (\xi) \neq 0$, and the lemma follows. \end{proof} \begin{proposition} \label{P12} Assume that $\int_{W_1} \varphi_f$ is identically $0$. Then there exist positive constants $C$, $\alpha$ and $\beta$ such that for every $t \geq t_0$ the differential form $E_\psi (\varphi_f )$ has norm $\leq C e^{- \alpha t^\beta}$ in restriction to $$\mathcal{G} (\mathbf{Q} ) \left[ \mathfrak{S}_{W_\bullet} (g ,t, \omega) \times \kappa \right] / \mathrm{SO}_n .$$ \end{proposition} \begin{proof} The differential form $E_\psi (\varphi_f )$ is defined by the absolutely convergent integral $$E_\psi ( \varphi_f ) = \int_0^{\infty} \theta_{[r]^*\psi} (\varphi_f ) \frac{dr}{r}.$$ The proposition is proved by decomposing the integral into a sum $\int_0^1 + \int_1^\infty$ and applying Lemma \ref{L:thetaSiegel} to each of these integrals. \end{proof} \section{Evaluation of $E_\psi (\varphi_f )$ on modular symbols} \label{S:7.eval} Let $\mathbf{q} = (q_0 , \ldots , q_{k})$ be a $(k+1)$-tuple of nonzero vectors in $V(\mathbf{Q})$ with $k \leq n-1$.
Recall that we associated with $\mathbf{q}$ a continuous map \eqref{E:appDelta} from the first barycentric subdivision $\Delta_k ' $ of the standard $k$-simplex to the Tits compactification: $$\Delta (\mathbf{q}) : \Delta_k ' \to \overline{X}^T.$$ Here we consider the map $$\Delta (\mathbf{q}) \times \mathrm{id}_{\mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f)} : \Delta_k ' \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f) \to \overline{X}^T \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f).$$ For every integer $j \in [0 , k]$ we denote by $W(\mathbf{q})^{(j)}$ the subspace $$W(\mathbf{q})^{(j)} = \langle q_0 , \ldots , \widehat{q}_j , \ldots , q_{k} \rangle$$ of $V$. Let $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f))$ be a Schwartz function such that for every integer $j$ in $[0 , k]$ we have \begin{equation} \label{E:condphi} \int_{W(\mathbf{q})^{(j)}} \varphi_f =0 \quad \mbox{in} \quad \mathcal{S} (V (\mathbf{A}_f) / W(\mathbf{q})^{(j)} (\mathbf{A}_f) ). \end{equation} Proposition \ref{P33} and Proposition \ref{P12} imply that, for every compact subset $\kappa$ of $$\left\{ (z , (g_f , v_f ) ) \in \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f ) \; \left| \; \begin{array}{l}\forall \xi \in V (\mathbf{Q}) \mbox{ such that } \varphi_f (g_f^{-1} (\xi - v_f)) \neq 0, \\ z \notin \bigcup_{j=0}^k (W(\mathbf{q})^{(j)}_\mathbf{C} + \xi) \end{array} \right.
\right\},$$ the closed differential form $E_\psi (\varphi_f )$ is integrable on $$\Delta (\mathbf{q}) \times \mathrm{id}_{\mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f)} ( \Delta_k ' \times \kappa ).$$ We set \begin{equation} E_\psi (\varphi_f , \mathbf{q} ) = (\Delta (\mathbf{q}) \times \mathrm{id} )^* E_\psi (\varphi_f); \end{equation} its evaluation at $(g_f , v_f) \in \mathcal{G} (\mathbf{A}_f )$ yields a closed form in $$A^{2n-1} \left( \Delta_{k}' \times \left( \mathbf{C}^n - \bigcup_{j=0}^k \bigcup_{\xi} (W(\mathbf{q})^{(j)}_\mathbf{C} + \xi) \right) \right),$$ where $\xi$ runs over the set of vectors in $V(\mathbf{Q})$ such that $\varphi_f (g_f^{-1} (\xi - v_f )) \neq 0$. In the following proposition we compute its partial integral over $\Delta_k '$. Suppose now that $k=n-1$ and that $\mathbf{q}$ consists of linearly independent vectors. Let $g \in \mathrm{GL}_n (\mathbf{Q})$ be such that $g \cdot \mathbf{e} = \mathbf{q}$. Considering the commutative diagram $$\xymatrixcolsep{5pc}\xymatrix{ \Delta_{n-1} ' \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f) \ar[d]^{\mathrm{id} \times g \times \mathrm{id}} \ar[r]^{\Delta (\mathbf{e}) \times \mathrm{id}} & \overline{X}^T \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f) \ar[d]^{g} \\ \Delta_{n-1} ' \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f) \ar[r]^{\Delta (\mathbf{q}) \times \mathrm{id}} & \overline{X}^T \times \mathbf{C}^n \times \mathcal{G} (\mathbf{A}_f) }$$ together with the invariance formula \eqref{E:glnqinv}, we find that \begin{equation} \label{E:invformsimpl0} g^* (E_{\psi} (\varphi_f , \mathbf{q}) (gg_f , gv_f )) = E_\psi ( \varphi_f , \mathbf{e} ) (g_f , v_f ). \end{equation} After replacing $\varphi_f$ by $\omega (g_f , v_f ) \varphi_f$ if necessary, the explicit computations reduce to the case $(g_f , v_f) = (1,0)$.
In the remainder of this section we only consider the forms $$E_\psi (\varphi_f , \mathbf{q} )^0 = E_\psi (\varphi_f , \mathbf{q} ) (1,0),$$ which satisfy \begin{equation} \label{E:invformsimpl} g^* E_{\psi} ( \omega (g, 0) \varphi_f , \mathbf{q})^0 = E_\psi ( \varphi_f , \mathbf{e} )^0. \end{equation} \begin{proposition} \label{P34bis} Let $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f))$ be a function satisfying \eqref{E:condphi}. {\rm 1.} If $\langle q_0 , \ldots , q_{k} \rangle$ is a proper subspace of $V$, then the form $\int_{\Delta_{k}'} E_\psi (\varphi_f , \mathbf{q} )^0$ vanishes identically. {\rm 2.} Suppose $k=n-1$ and that the vectors $q_0 , \ldots , q_{n-1}$ are linearly independent. Set $g = (q_0 | \cdots | q_{n-1} ) \in \mathrm{GL}_n (\mathbf{Q})$ and fix $\lambda \in \mathbf{Q}^\times$ such that the matrix $h=\lambda g$ maps $V(\mathbf{Z}) = \mathbf{Z}^n$ into $L_{\varphi_f}$. Then the $n$-form $\int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )^0$ is equal to \begin{multline*} \sum_{v \in \mathbf{Q}^n / L_{\varphi_f} } \varphi_f (v ) \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi = v \ (\mathrm{mod} \ L_{\varphi_f})}} \mathrm{Re} \left( \varepsilon (\ell_1 - \xi_1 ) d\ell_1 \right) \wedge \cdots \wedge \mathrm{Re} \left( \varepsilon (\ell_{n} - \xi_n) d \ell_{n} \right), \end{multline*} where $\ell_j$ is the linear form on $\mathbf{C}^n$ with kernel $W(\mathbf{q})^{(j-1)}$ such that $h^* \ell_j = e_j^*$. \end{proposition} \medskip \noindent {\it Remark.} The fact that this expression is in fact independent of the choice of $\lambda$ follows from the distribution relations $$\sum_{j=0}^{m -1} \cot (\pi (z+ j /m )) = m \cot (\pi m z ),$$ which may for instance be obtained by logarithmic differentiation of the identity $\prod_{j=0}^{m-1} 2 \sin (\pi (z+ j/m )) = 2 \sin (\pi m z)$. \medskip \begin{proof} 1.
In this case the image of $\Delta (\mathbf{q})$ is contained in the boundary of $\overline{X}^T$, and it follows from the hypothesis on $\varphi_f$, from Proposition \ref{P33} and from Proposition \ref{P12} that the integral $\int_{\Delta_{k}'} E_\psi (\varphi_f , \mathbf{q} )^0$ vanishes on every relatively compact open subset of $$\mathbf{C}^n - \bigcup_{j=0}^k \bigcup_{\xi} (W(\mathbf{q})^{(j)}_\mathbf{C} + \xi).$$ 2. Suppose then that $k=n-1$ and that the vectors $q_0 , \ldots , q_{n-1}$ are linearly independent. Still writing $g = (q_0 | \cdots | q_{n-1} ) \in \mathrm{GL}_n (\mathbf{Q})$ for the element (\ref{E:g}), it follows from \eqref{E:invformsimpl} that \begin{equation*} \int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )^0 = (h^{-1})^* \left( \int_{\Delta_{n-1}'} E_\psi ( \varphi_f (h \cdot) , \mathbf{e})^0 \right), \end{equation*} where $\mathbf{e} = (e_1 , \ldots , e_n )$ and $h=\lambda g$. We are thus reduced to computing the integral $$\int_{A\mathrm{SO}_n \mathbf{R}_{>0}} E_\psi (\varphi_f (h \cdot )) (1,0) ,$$ where $$A=\{ \mathrm{diag}(t_1,\ldots,t_n ) \in \mathrm{SL}_n (\mathbf{R}) \; : \; t_j \in \mathbf{R}_{>0}, \ t_1 \cdots t_n = 1 \}.$$ By the remark at the end of Section \ref{S:42}, we have \begin{multline} \label{E:intti} \int_{A\mathrm{SO}_n \mathbf{R}_{>0}} E_\psi (\varphi_f (h \cdot )) (1,0) \\ = \int_{\{ \mathrm{diag}(t_1,\ldots,t_n) \; : \; t_j \in \mathbf{R}_{>0} \} \mathrm{SO}_n} (t_1 \cdots t_n )^{s} \theta_\varphi (1 , 0 ;\varphi_f (h \cdot)) \ \Big|_{s=0}.
\end{multline} Now, in restriction to the set of real diagonal symmetric matrices, the $\mathbf{C}^n$-bundle splits {\it metrically} into a direct sum of $n$ line bundles, corresponding to the coordinates $(z_j )_{j=1 , \ldots , n}$ of $z$, and the form $\varphi$ decomposes as the product of $n$ forms associated with these line bundles, equal, by (\ref{E:phiN1}), to $$ \varphi^{(j)} = \frac{i}{2\pi} e^{- t_j^2 |z_j |^2 } \left( t_j^2 dz_j \wedge d \overline{z}_j - t_j^2 (z_j d\overline{z}_j - \overline{z}_j dz_j ) \wedge \frac{dt_j}{t_j} \right), \quad j \in \{ 1 , \ldots , n \}. $$ Since $(1 , \xi )^* \widetilde{\varphi} (\xi ) = \varphi$, we obtain that \begin{multline*} \int_{\{ \mathrm{diag}(t_1,\ldots,t_n) \; : \; t_j \in \mathbf{R}_{>0} \} \mathrm{SO}_n} (t_1 \cdots t_n )^s \widetilde{\varphi} (\xi) \\ = \frac{(-i)^n}{(4\pi)^n} \Gamma (1+ \frac{s}{2})^n \wedge_{j=1}^n \left( \frac{dz_j}{(z_j - \xi_j) | z_j - \xi_j |^{s} } -\frac{d\overline{z}_j}{\overline{z_j - \xi_j} | z_j - \xi_j |^{s}} \right). \end{multline*} The integral \eqref{E:intti} is therefore equal to the value at $s=0$ of \begin{equation*} \Gamma (1+ \frac{s}{2})^n \sum_{\xi \in V(\mathbf{Q})} \varphi_f (h \xi ) \wedge_{j=1}^n \mathrm{Re} \left( \frac{1}{2i\pi} \frac{dz_j}{(z_j - \xi_j) | z_j - \xi_j |^{s} } \right), \end{equation*} that is, of \begin{equation*} \Gamma (1+ \frac{s}{2})^n \sum_{\xi \in V(\mathbf{Q}) / V (\mathbf{Z})} \varphi_f (h \xi) \wedge_{j=1}^n \mathrm{Re} \left( \frac{1}{2i\pi} \sum_{m \in \mathbf{Z}} \frac{dz_j}{(z_j - \xi_j +m) | z_j - \xi_j +m |^{s} } \right) , \end{equation*} where we used that the function $\varphi_f ( h \cdot)$ is $V(\mathbf{Z} )$-invariant.
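The elementary integral underlying the preceding computation is the following: for $z \in \mathbf{C}^*$ and $\mathrm{Re} (s) > -2$, the substitution $u = t^2 |z|^2$ gives $$\int_0^{\infty} t^{s+1} e^{-t^2 |z|^2} \, dt = \frac{1}{2 |z|^{s+2}} \int_0^{\infty} u^{s/2} e^{-u} \, du = \frac{\Gamma (1 + \frac{s}{2})}{2 |z|^{s+2}};$$ applied in each coordinate $z_j - \xi_j$, it accounts for the factor $\Gamma (1+\frac{s}{2})^n$ and for the kernels $(z_j - \xi_j)^{-1} |z_j - \xi_j|^{-s}$ above.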
Recall now that for every $z \in \mathbf{C}$ the function $$s \mapsto \frac{1}{2i\pi} \sideset{}{'} \sum_{m \in \mathbf{Z}} \frac{1}{(z+m) |z +m|^{s}}$$ admits a meromorphic continuation to the whole $s$-plane, whose value at $s=0$ is $\varepsilon (z) = \frac{1}{2i}\cot ( \pi z)$; cf. \cite[VII, \S 8, p.~56]{Weil}. We deduce that the integral \eqref{E:intti} is none other than $$\sum_{\xi \in V(\mathbf{Q}) / V (\mathbf{Z})} \varphi_f (h \xi) \wedge_{j=1}^n \mathrm{Re} \left(\varepsilon (z_j - \xi_j ) dz_j \right).$$ Finally, since $(h^{-1})^* e_{j}^*$ is equal to the linear form $\ell_j$, we conclude that \begin{multline*} \begin{split} \int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )^0 & = \sum_{\xi \in V(\mathbf{Q}) / V (\mathbf{Z})} \varphi_f (h \xi) \cdot (h^{-1})^* \left( \wedge_{j=1}^n \mathrm{Re} (\varepsilon (z_j - \xi_j ) dz_j ) \right) \\ & = \sum_{\xi \in V(\mathbf{Q}) / V (\mathbf{Z})} \varphi_f (h \xi) \wedge_{j=1}^{n} \mathrm{Re} (\varepsilon ( \ell_j - \xi_{j} ) d\ell_j ) . \end{split} \end{multline*} Grouping together the vectors $\xi$ sent by $h$ to the same vector modulo $L_{\varphi_f}$, we obtain the stated formula. \end{proof} \chapter{Multiplicative cocycle of the rational group $\mathrm{GL}_n (\mathbf{Q})^+$} \label{S:chap9} \resettheoremcounters In the previous chapter we defined a form $E_\psi (\varphi_f)^0$ representing an equivariant cohomology class \eqref{E:721} which, by Theorem \ref{T:cocycleM}, induces a class \begin{equation*} S_{\rm mult} [D_{\varphi_f}^0] \in H^{n-1} (\Gamma, \Omega^n_{\rm mer} ( \mathbf{C}^n ) ) . \end{equation*} In this chapter we determine explicit cocycles representing these cohomology classes. We follow the same approach as in Chapter \ref{S:6}, but considering the Eisenstein series $E_\psi (\varphi_f)$ rather than the form $\eta$. In other words, instead of the ``evaluation at zero'' distribution we now consider the theta distribution.
We work only with the Tits bordification, because one must bear in mind here that the form $E_\psi (\varphi_f)$ does not extend to the whole boundary. \section{Simplicial form associated with $E_\psi$} \label{S8.1} The map (\ref{E:R}) induces a retraction $$[0,1] \times X \times \mathbf{C}^n \to X \times \mathbf{C}^n ,$$ still denoted $R$, of $X \times \mathbf{C}^n$ onto $\{ x_0 \} \times \mathbf{C}^n$. Let $\Gamma$ be a congruence subgroup of $\mathrm{SL}_n (\mathbf{Z})$. It corresponds to a compact open subgroup $K$ of $\mathrm{GL}_n (\mathbf{A}_f )$ such that $$\Gamma = K \cap \mathrm{GL}_n (\mathbf{Q})^+ .$$ In this chapter we take $L = \mathbf{Z}^n$; it is a $\Gamma$-invariant lattice in $V(\mathbf{Q})$. In the same way as in Section \ref{S:61}, the retraction $R$ induces a sequence of maps \begin{equation} \rho_k : \Delta_k \times E_k\Gamma \times \mathbf{C}^n/ \mathbf{Z}^n \longrightarrow X \times \mathbf{C}^n/ \mathbf{Z}^n . \end{equation} Given a $(k+1)$-tuple $$\mathbf{g} = (\gamma_0 , \ldots , \gamma_k ) \in E_k\Gamma$$ and an element $z \in \mathbf{C}^n/ \mathbf{Z}^n$, the map $\rho_k ( \cdot , \mathbf{g} , z)$ sends the simplex $\Delta_k$ onto the geodesic simplex in $X$ with vertices $\gamma_0^{-1} x_0$, $\ldots$, $\gamma_k^{-1} x_0$, defined inductively by taking the geodesic cone over the geodesic $(k-1)$-simplex with vertices $\gamma_1^{-1} x_0 , \ldots , \gamma_k^{-1} x_0$ from the vertex $\gamma_0^{-1} x_0$. The sequence $\rho=(\rho_k )$ consists of $\Gamma$-equivariant maps and therefore induces a $\Gamma$-equivariant map $$\rho^* : A^\bullet (X \times \mathbf{C}^n / \mathbf{Z}^n ) \to \mathrm{A}^\bullet (E\Gamma \times \mathbf{C}^n /\mathbf{Z}^n ),$$ where the space on the right is that of simplicial differential forms on the simplicial manifold \begin{equation} \label{VS1} E\Gamma \times \mathbf{C}^n / \mathbf{Z}^n .
\end{equation} The group $\Gamma$ acts (diagonally) on \eqref{VS1} by $$\left( \mathbf{g} , z \right) \stackrel{(h,w)}{\longmapsto} \left( \mathbf{g} h^{-1}, h z \right).$$ An element $D \in \mathrm{Div}_\Gamma$ may be viewed as a $\Gamma$-invariant function on $\mathbf{C}^n / \mathbf{Z}^n$ supported on the torsion points; it therefore corresponds to a $\mathcal{K}$-invariant function $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))$. We have \begin{equation} \label{E:fibresj} \mathbf{C}^n / \mathbf{Z}^n - \mathrm{supp} \ D = \mathbf{C}^n/\mathbf{Z}^n - \bigcup_\xi (\xi + \mathbf{Z}^n )/ \mathbf{Z}^n , \end{equation} where $\xi$ runs over the elements of $\mathbf{Q}^n / \mathbf{Z}^n$ such that $\varphi_f (\xi)$ is nonzero. The following proposition follows from the definitions. \begin{proposition} \label{P:50} The simplicial form $$\mathcal{E}_\psi (\varphi_f ) := \rho^* E_\psi (\varphi_f )^0 \in \mathrm{A}^{2n-1} \left( E \Gamma \times ( \mathbf{C}^n / \mathbf{Z}^n - \mathrm{supp} \ D ) \right)^\Gamma$$ is closed and represents the equivariant cohomology class $[E_{\psi} (\varphi_f )^{0} ]$. \end{proposition} Since the open set \eqref{E:fibresj} has cohomological dimension $n$, the equivariant cohomology class $[E_{\psi} (\varphi_f )^{0} ]$ corresponds to a cohomology class in $$H^{n-1} \left( \Gamma , H^n ( \mathbf{C}^n / \mathbf{Z}^n - \mathrm{supp} \ D )^{(1)} \right).$$ It even follows from \cite[Theorem 2.3]{Dupont} that integration over the $(n-1)$-simplices associates with the simplicial form $\mathcal{E}_\psi (\varphi_f )$ an $(n-1)$-cocycle which, to an $n$-tuple of elements of $\Gamma$, associates a closed $n$-form on \eqref{E:fibresj}. To obtain cocycles with values in the meromorphic forms, we proceed as in the proof of Theorem \ref{T:cocycleM}. For that one must remove a few hyperplanes, so as to be able to invoke Theorem \ref{P:Brieskorn}. This is detailed in the next section.
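\medskip \noindent {\it Remark.} Concretely, and up to the normalizations of \cite[Theorem 2.3]{Dupont}, the cocycle furnished by the simplicial form $\mathcal{E}_\psi (\varphi_f )$ is given by partial integration over the simplex: $$(\gamma_0 , \ldots , \gamma_{n-1}) \longmapsto \int_{\Delta_{n-1}} \rho_{n-1} \left( \cdot , (\gamma_0 , \ldots , \gamma_{n-1}) , \cdot \right)^* E_\psi (\varphi_f )^0 ,$$ a closed $n$-form in the remaining variable $z$ on the open set \eqref{E:fibresj}. \medskip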
\section[The cocycles $\mathbf{S}_{{\rm mult}, \chi_0}$]{The cocycles $\mathbf{S}_{{\rm mult}, \chi_0}$; proof of Theorem \ref{T:mult}} \label{S:3.2.2} Fix a primitive morphism $\chi_0 : \mathbf{C}^n /\mathbf{Z}^n \to \mathbf{C} / \mathbf{Z}$. For every $(k+1)$-tuple $\mathbf{g} = (\gamma_0 , \ldots , \gamma_k )$ of elements of $\Gamma$, write \begin{equation} \label{Vchi0S2} U(\mathbf{g}) = \mathbf{C}^n / \mathbf{Z}^n - \bigcup_{\xi } \bigcup_{i=0}^k (\xi + \mathrm{ker} (\chi_0 \circ \gamma_i) ), \end{equation} where $\xi$ runs over the elements of the support of $D$. The open sets $U(\mathbf{g})$ are affine varieties. Theorem \ref{P:Brieskorn} therefore applies to them. One obtains a cocycle \begin{equation*} \mathbf{S}_{{\rm mult}, \chi_0} [D] : \Gamma^n \longrightarrow \Omega_{\rm mer}^n (\mathbf{C}^n / \mathbf{Z}^n ) \end{equation*} which represents the class $S_{\rm mult} [D]$ and takes values in the meromorphic forms that are regular away from the affine hyperplanes $\xi + \mathrm{ker} (\chi_0 \circ \gamma )$, with $\xi \in \mathrm{supp} \ D$ and $\gamma \in \Gamma$. These are the cocycles announced in Theorem \ref{T:mult}; it remains to check the expected properties under the action of the Hecke operators. Consider then a submonoid $S$ of $M_n (\mathbf{Z})^\circ$ containing $\Gamma$. To every double coset $\Gamma a \Gamma$, with $a \in S$, corresponds the characteristic function of $KaK$; it belongs to $\mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K)$. The map $\mathrm{GL}_n (\mathbf{A}_f) \to \mathcal{G} (\mathbf{A}_f )$ sending an element $g_f$ to $(g_f , 0)$ induces an embedding $$K \backslash \mathrm{GL}_n (\mathbf{A}_f) / K \hookrightarrow \mathcal{K} \backslash \mathcal{G} (\mathbf{A}_f ) / \mathcal{K}$$ and hence an inclusion $$\mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K) \hookrightarrow \mathcal{H} (\mathcal{G} (\mathbf{A}_f ) , \mathcal{K})$$ by ``extension by $0$''.
In what follows we identify a function $\phi$ in $\mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K)$ with its image in $\mathcal{H} (\mathcal{G} (\mathbf{A}_f ) , \mathcal{K})$. It then follows from \S \ref{algHecke} that $\phi$ induces a Hecke operator $\mathbf{T}_\phi$ on $$H^{n-1} (\Gamma , \Omega_{\rm mer}^n (\mathbf{C}^n / \mathbf{Z}^n ));$$ when $\phi$ is the characteristic function of $KaK$, the operator $\mathbf{T}_\phi$ coincides with the operator $\mathbf{T}(a)$ of \S~\ref{S:2-2}. A function $\phi \in \mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K)$ also induces an operator \begin{equation} \label{E:applphi} T_\phi : \mathcal{S} (V(\mathbf{A}_f ))^K \to \mathcal{S} (V(\mathbf{A}_f ))^K \end{equation} on the space $\mathcal{S} (V(\mathbf{A}_f ))^K$ of $K$-invariant Schwartz functions. We have \begin{equation*} T_\phi ( \varphi_f) = \sum_{g \in \Gamma \backslash \mathrm{GL}_n (\mathbf{Q}) / \Gamma} \phi (g) \sum_{h \in \Gamma g \Gamma / \Gamma} \varphi_f (h^{-1} \cdot ). \end{equation*} Finally, the function $\phi$ induces a map \begin{equation} \label{E:applphibis} [\phi ] : \mathrm{Div}_\Gamma \to \mathrm{Div}_\Gamma \quad \mbox{with} \quad [\phi] = \sum_{g \in \Gamma \backslash \mathrm{GL}_n (\mathbf{Q}) / \Gamma} \phi (g) \sum_{h \in \Gamma g \Gamma / \Gamma} h . \end{equation} When $\phi$ is the characteristic function of $KaK$, we have $[\phi]=[\Gamma a \Gamma]$ (cf. \S~\ref{S:2-2}). It follows from the definitions that \begin{equation} \label{E:DetT} D_{T_{\phi} \varphi_f} = [\phi]^* D \end{equation} and Proposition \ref{P:hecke1} implies: \begin{proposition} \label{P:hecke2} Let $\phi \in \mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K)$. We have \begin{equation} \mathbf{T}_\phi \left[ \mathbf{S}_{{\rm mult}, \chi_0}[D] \right]= \left[ \mathbf{S}_{{\rm mult}, \chi_0 }[[\phi]^* D] \right] . \end{equation} \end{proposition} This concludes the proof of Theorem \ref{T:mult}.
\section{The cocycle $\mathbf{S}^*_{{\rm mult}}$} As in the additive case, one can obtain an explicit cocycle by applying the argument of Appendix \ref{A:A} to the simplicial form $\mathcal{E}_\psi (\varphi_f )$ and to the open sets, indexed by the elements $\mathbf{g} = (\gamma_0 , \ldots , \gamma_k) $ of $E_k \Gamma$ (with $k \in \mathbf{N}$), \begin{equation} \label{E:86} U^* (\mathbf{g}) = \mathbf{C}^n /\mathbf{Z}^n - \bigcup_\xi \bigcup_{i=0}^k (W(\mathbf{q})_{\mathbf{C}}^{(i)} + \xi +\mathbf{Z}^n ) / \mathbf{Z}^n , \end{equation} where $\mathbf{q} = ( \gamma_0^{-1} e_1 , \ldots , \gamma_{k}^{-1} e_1 )$ and $\xi$ runs over the elements of $\mathbf{Q}^n / \mathbf{Z}^n$ such that $\varphi_f (\xi)$ is nonzero. \subsection{Simplicial section and homotopy} In Section \ref{S:63} we defined maps $\varrho_k$, which we now restrict to $E\Gamma$. For every integer $k \in [0 , n-1]$ we still denote by \begin{equation} \varrho_k : \Delta_k \times [0,1] \times E_k \Gamma \times \mathbf{C}^n \to \overline{X}^T \times \mathbf{C}^n / \mathbf{Z}^n \end{equation} the induced maps. This time the images of the projections to $\overline{X}^T$ are contained in the rational Tits bordification obtained by adjoining only those parabolic subgroups $Q=Q (W_\bullet )$ associated with flags of rational subspaces of $V(\mathbf{Q})= \mathbf{Q}^n$ spanned by vectors $q=\gamma^{-1} e_1$ with $\gamma \in \Gamma$.
Proposition \ref{P33} implies that for every integer $k \in [0, n-1]$, every $\mathbf{g} \in E_k \Gamma$, every $z \in U^* (\mathbf{g})$ and every positive real number $t$, the image $$\varrho_k ( \Delta_k \times [0,1] \times \{ \mathbf{g} \} \times \{z \} ) \subset \overline{X}^T \times \mathbf{C}^n / \mathbf{Z}^n $$ is contained in a finite union $$\left( \Omega \cup_{W_\bullet , h , \omega} \overline{\mathfrak{S}_{W_\bullet } (h , t , \omega )} \right) \times \{z \} ,$$ where $\Omega \subset X$ is relatively compact and each flag $W_\bullet$ consists of subspaces spanned by some of the vectors $\gamma_i^{-1} e_1$, where $\mathbf{g} = (\gamma_0 , \ldots , \gamma_k)$. Proposition \ref{P12} motivates the following definition. \begin{definition} \label{def67} Let $\mathcal{S} (V (\mathbf{A}_f))^{\circ}$ be the subspace of $\mathcal{S} (V(\mathbf{A}_f))$ consisting of the functions $\varphi_f$ such that for every subspace $W$ containing $e_1$, the image $\int_W \varphi_f $ of $\varphi_f$ in $\mathcal{S} (V (\mathbf{A}_f) / W(\mathbf{A}_f))$ is identically $0$. \end{definition} \medskip \noindent {\it Remark.} We reserve the notation $\mathcal{S} (V (\mathbf{A}_f))^{0}$ for the space of functions $\varphi_f$ in $\mathcal{S} (V(\mathbf{A}_f))$ such that $\widehat{\varphi}_f (0) =0$. Note that since $V$ contains $e_1$ we have $$\mathcal{S} (V (\mathbf{A}_f))^{\circ} \subset \mathcal{S} (V (\mathbf{A}_f))^{0}.$$ \medskip The form $E_\psi (\varphi_f )^{(0)}$ is $(\Gamma \ltimes \mathbf{Z}^n)$-invariant, and Proposition \ref{P12} implies that if $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))^{\circ}$ then for every $\mathbf{g} \in E_k \Gamma$ the restriction of the closed differential form $\varrho_k^* E_\psi (\varphi_f )^{(0)}$ to $\Delta_k \times [0,1] \times \{ \mathbf{g} \} \times U^* ( \mathbf{g})$ is well defined and closed. \begin{definition} Suppose $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))^{\circ}$.
For every integer $k \in [0, n-1]$ and every $(k+1)$-tuple $\mathbf{g} = (\gamma_0 , \ldots , \gamma_k) \in E_k \Gamma$, set $$\mathcal{H}_k[\varphi_f] (\gamma_0 , \ldots , \gamma_k ) = \int_{\Delta_k \times [0,1]} \varrho_k^*E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ).$$ It is a differential form of degree $2n-2-k$ on $U^* (\mathbf{g})$. \end{definition} \subsection{Computation of the cocycle} Replacing the appeal to Proposition \ref{P32} by the use of Proposition \ref{P12}, the proof of Theorem \ref{T37} leads to the following result. We refer to Appendix \ref{A:A} for the definitions of the operators $\delta$ and $d$. \begin{theorem} \label{T8.8} Suppose $\varphi_f \in \mathcal{S} (V(\mathbf{A}_f))^{\circ}$. For every integer $k \in [0 , n-1]$ and every $\mathbf{g} = (\gamma_0 , \ldots , \gamma_k) \in E_k \Gamma$, the integral $\int_{\Delta_k } \mathcal{E}_{\psi} (\varphi_f ) (\mathbf{g})$ is equal to $$\delta \mathcal{H}_{k-1} [\varphi_f ] (\mathbf{g}) \pm d \mathcal{H}_k [\varphi_f ] (\mathbf{g}), \quad \mbox{if } k < n-1,$$ and to $$\int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )^{(0)} + \delta \mathcal{H}_{n-2} [\varphi_f ] (\mathbf{g}) \pm d \mathcal{H}_{n-1} [\varphi_f ] (\mathbf{g}), \quad \mbox{if } k=n-1,$$ in $A^{2n-2-k} \left( U^* (\mathbf{g}) \right)$, where as always $\mathbf{q} = ( \gamma_0^{-1} e_1 , \ldots , \gamma_{k}^{-1} e_1 )$.
\end{theorem} \begin{proof} Since $E_\psi (\varphi_f)^{(0)}$ is closed we have $$(d_{\Delta_k \times [0,1]} \pm d ) \varrho_k^*E_\psi (\varphi_f )^{(0)} = 0$$ and hence $$\int_{\Delta_k \times [0,1]} d_{\Delta_k \times [0,1]} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ) \pm d \mathcal{H}_k (\gamma_0 , \ldots , \gamma_k ) =0.$$ Now, by Stokes' theorem, \begin{multline*} \int_{\Delta_k \times [0,1]} d_{\Delta_k \times [0,1]} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ) = \int_{\Delta_k \times \{ 0 \}} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ) \\ + \int_{(\partial \Delta_k) \times [0,1]} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ) - \int_{\Delta_k \times \{ 1 \}} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ). \end{multline*} The last integral equals $\int_{\Delta_k '} E_\psi (\varphi_f , \mathbf{q} )^{(0)} $ and therefore vanishes when $k < n-1$, by Proposition \ref{P34bis}. Finally, by definition we have $$\int_{\Delta_k \times \{ 0 \}} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ) = \int_{\Delta_k} \rho^* E_\psi (\varphi_f ) (\gamma_0 , \ldots , \gamma_k) $$ and $$\int_{(\partial \Delta_k ) \times [0,1]} \varrho_k^* E_\psi (\varphi_f )^{(0)} (\gamma_0 , \ldots , \gamma_k ) = - \delta \mathcal{H}_{k-1} (\gamma_0 , \ldots , \gamma_k ).$$ \end{proof} The preceding theorem motivates the following definitions.
\begin{definition} Given a $K$-invariant function $\varphi_f \in \mathcal{S} (V(\mathbf{A}_f))^{\circ}$, we denote by $$\mathbf{S}_{\rm mult}^* [\varphi_f] : \Gamma^{n} \longrightarrow \Omega^{n}_{\rm mer} (\mathbf{C}^n / \mathbf{Z}^n )$$ the map which to an $n$-tuple $(\gamma_0 , \ldots , \gamma_{n-1})$ associates $0$ if the vectors $\gamma_j^{-1} e_1$ are linearly dependent, and otherwise the meromorphic differential form in $\Omega^n_{\rm mer} (\mathbf{C}^n / \mathbf{Z}^n )$ given by \begin{equation} \label{E:87} \sum_{v \in \mathbf{Q}^n / \mathbf{Z}^n} \varphi_f (v) \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi = v \ (\mathrm{mod} \ \mathbf{Z}^n )}} \varepsilon (\ell_1 - \xi_1 ) \cdots \varepsilon (\ell_{n} - \xi_n) \cdot d\ell_1 \wedge \cdots \wedge d \ell_{n} , \end{equation} where $h = ( \gamma_0^{-1} e_1 | \cdots | \gamma_{n-1}^{-1} e_1)$ and $h^* \ell_j = e_{j}^*$. \end{definition} From Theorem \ref{T8.8} we deduce the following theorem. \begin{theorem} \label{T8.8cor} Suppose $\varphi_f \in \mathcal{S} (V(\mathbf{A}_f))^{\circ}$. The map $$\mathbf{S}_{\rm mult}^*[\varphi_f] : \Gamma^{n} \longrightarrow \Omega^{n}_{\rm mer} (\mathbf{C}^n / \mathbf{Z}^n )$$ defines a nonzero homogeneous $(n-1)$-cocycle of the group $\Gamma$. It represents the same cohomology class as $\mathbf{S}_{\rm mult , \chi_0} [\varphi_f]$. \end{theorem} \begin{proof} It follows from Theorem \ref{T8.8} that the map $$(\gamma_0 , \ldots , \gamma_{n-1}) \mapsto \int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )^0$$ defines an $(n-1)$-cocycle with values in $H^n (U^* (\mathbf{g}))$.
Now, by Proposition \ref{P34bis}, the $n$-form $\int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )^0$ vanishes if the vectors $q_j=\gamma_j^{-1} e_1$ are linearly dependent, and otherwise it equals \begin{multline} \label{E:8.fd} \sum_{v \in \mathbf{Q}^n / \mathbf{Z}^n} \varphi_f (v) \\ \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi = v \ (\mathrm{mod} \ \mathbf{Z}^n)}} \mathrm{Re} (\varepsilon (\ell_1 - \xi_1 ) d \ell_1 ) \wedge \cdots \wedge \mathrm{Re} (\varepsilon (\ell_{n} - \xi_n) d \ell_{n}) . \end{multline} Observe now that, setting $q=e^{2i\pi z}$, we have \begin{equation} \label{E:8.cot} \varepsilon (z) dz = \frac{1}{2i} \cot (\pi z) dz = \frac{dz}{e^{2i\pi z} -1} + \frac12 dz = \frac{1}{2i\pi} \left( \frac{dq}{q-1} - \frac{dq}{2q} \right). \end{equation} In particular the differential $1$-forms $$\mathrm{Re} (\varepsilon (z) dz ) \quad \mbox{ and } \quad \varepsilon (z) dz$$ on $\mathbf{C} / \mathbf{Z}$ are cohomologous. It follows that the form \eqref{E:8.fd} is cohomologous to $\mathbf{S}_{\rm mult}^* [\varphi_f] (\gamma_0 , \ldots , \gamma_{n-1})$. One then concludes the proof by following that of Theorem \ref{T:Sa} in Section \ref{S:demTSa}, replacing Brieskorn's theorem by its multiplicative version, Theorem \ref{P:Brieskorn}. The latter applies because the distribution relations for $\varepsilon$ imply that the form \eqref{E:8.fd} indeed represents a class in the characteristic subspace associated with the eigenvalue $1$ in the cohomology of the fibre. \end{proof} The following proposition gives another, sometimes more convenient, expression for the cocycle $\mathbf{S}_{\rm mult}^*$. \begin{proposition} Suppose as before that $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))^{\circ}$.
The expression \eqref{E:87} is equal to $$\sum_{v \in \mathbf{Q}^n / \mathbf{Z}^n} \varphi_f (v) \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi = v \ (\mathrm{mod} \ \mathbf{Z}^n)}} \frac{d\ell_1 \wedge \cdots \wedge d \ell_{n}}{(e^{2i\pi (\ell_1 - \xi_1 )} -1) \ldots (e^{2i\pi (\ell_n - \xi_n )} -1)}.$$ \end{proposition} \begin{proof} By \eqref{E:8.cot}, it suffices to show that for every nonempty proper subset $J \subset \{ 1 , \ldots , n \}$ we have \begin{equation} \label{E:8.smoothing} \sum_{v \in \mathbf{Q}^n / \mathbf{Z}^n} \varphi_f (v) \sum_{\substack{\xi \in \mathbf{Q}^n/\mathbf{Z}^n \\ h \xi = v \ (\mathrm{mod} \ \mathbf{Z}^n )}} \wedge_{j \notin J} \varepsilon (\ell_j - \xi_j ) =0. \end{equation} Set $L = \mathbf{Z}^n$ and fix a nonempty proper subset $J \subset \{ 1 , \ldots , n \}$ of cardinality $k$. Let $\overline{V}$ be the quotient of $V$ by the subspace spanned by the vectors $\gamma_j^{-1} e_1$ with $j \in J$. Denote by $\overline{L}$ the image of $L$ in $\overline{V}$. Identifying $\mathbf{Q}^{n-k}$ with the quotient of $\mathbf{Q}^n$ by the space spanned by the vectors $e_j$ with $j \in J$, and $\mathbf{Z}^{n-k}$ with the image of $\mathbf{Z}^n$ in $\mathbf{Q}^{n-k}$, the matrix $h$ induces a linear map $\overline{h} : \mathbf{Q}^{n-k} \to \overline{V} $ such that $\overline{h} (\mathbf{Z}^{n-k} )$ is contained in $\overline{L}$. For every vector $v \in V$ with image $\overline{v}$ in $\overline{V}$, the projection of $\mathbf{Q}^{n}$ onto $\mathbf{Q}^{n-k}$ then induces a surjective map $$\{ \xi \in \mathbf{Q}^n / \mathbf{Z}^n \; : \; h \xi = v \ (\mathrm{mod} \ L) \} \to \{ \overline{\xi} \in \mathbf{Q}^{n-k}/\mathbf{Z}^{n-k} \; : \; \overline{h} \overline{\xi} = \overline{v} \ (\mathrm{mod} \ \overline{L}) \}$$ all of whose fibres have the same cardinality, equal to $\frac{[L : h (\mathbf{Z}^{n})]}{[\overline{L} : \overline{h} (\mathbf{Z}^{n-k})]}$.
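The cardinality counts appearing here can be checked by brute force in small cases. The sketch below (a hypothetical helper, exact rational arithmetic, $n=2$) enumerates the full solution set $\{ \xi \in \mathbf{Q}^n / \mathbf{Z}^n \; : \; h \xi = v \ (\mathrm{mod} \ \mathbf{Z}^n) \}$ and confirms that its cardinality is $[\mathbf{Z}^n : h (\mathbf{Z}^n)] = |\det h|$; the fibre cardinality above is the quotient of this count by the analogous count for $\overline{h}$.

```python
from fractions import Fraction
from itertools import product
from math import lcm

def count_solutions(h, v):
    """Count xi in Q^n/Z^n with h*xi = v (mod Z^n), by brute force.

    Every solution xi = h^{-1}(v + m) with m integral has denominator
    dividing D = |det h| * lcm(denominators of v), since
    h^{-1} = adj(h)/det(h); so it suffices to scan (1/D)Z^n mod Z^n.
    """
    n = len(h)
    d = abs(h[0][0] * h[1][1] - h[0][1] * h[1][0])  # |det h|; n = 2 here
    D = d * lcm(*(x.denominator for x in v))
    count = 0
    for coords in product(range(D), repeat=n):
        xi = [Fraction(c, D) for c in coords]
        # xi is a solution iff h*xi - v is integral coordinate-wise
        if all((sum(h[i][j] * xi[j] for j in range(n)) - v[i]).denominator == 1
               for i in range(n)):
            count += 1
    return count

h = [[2, 1], [0, 3]]                      # det h = 6
v = [Fraction(1, 2), Fraction(0)]
print(count_solutions(h, v))              # 6 solutions = |det h|
```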
The left-hand side of \eqref{E:8.smoothing} is therefore equal to \begin{equation*} \sum_{w\in \overline{V} / \overline{L}} \Big( \sum_{\substack{v \in V/L \\ \overline{v}=w}} \varphi_f (v) \Big) \frac{[L : h (\mathbf{Z}^{n})]}{[\overline{L} : \overline{h} (\mathbf{Z}^{n-k})]} \sum_{\substack{\overline{\xi} \in \mathbf{Q}^{n-k}/\mathbf{Z}^{n-k} \\ \overline{h} \overline{\xi} = w \ (\mathrm{mod} \ \overline{L})}} \wedge_{j \notin J} \varepsilon (\ell_j - \xi_j ), \end{equation*} where $\overline{\xi} = (\xi_j )_{j \notin J}$. To conclude, note that the kernel of the projection $V \to \overline{V}$ contains a vector which is a translate of $e_1$ by an element of $\Gamma$. The image of this vector in $V(\mathbf{A}_f)$ is therefore equal to $e_1$ modulo $K$. Since the $K$-invariant function $\varphi_f$ belongs to $\mathcal{S} (V (\mathbf{A}_f ))^{\circ}$, it follows that each sum $$\sum_{\substack{v \in V/L \\ \overline{v}=w}} \varphi_f (v) $$ vanishes, and we obtain the asserted identity \eqref{E:8.smoothing}. \end{proof} \subsection{Proof of Theorem \ref{T:mult2}} \label{S:8.3.3} Recall that an element $D \in \mathrm{Div}_\Gamma$ may be viewed as a $\Gamma$-invariant function on $\mathbf{Q}^n / \mathbf{Z}^n$ supported in a lattice of $\mathbf{Q}^n$. Let $\varphi_f$ be the corresponding $K$-invariant function. Under the assumption that $D \in \mathrm{Div}_\Gamma^{\circ}$, the function $\varphi_f$ belongs to $ \mathcal{S} (V (\mathbf{A}_f ))^{\circ}$. We then set $$\mathbf{S}^*_{\rm mult} [D] = \mathbf{S}^*_{{\rm mult}}[\varphi_f].$$ It is an $(n-1)$-cocycle of $\Gamma$ with values in $\Omega^n_{\rm mer} (\mathbf{C}^n / \mathbf{Z}^n)$. The cocycles $\mathbf{S}^*_{\rm mult} [D]$ and $\mathbf{S}_{{\rm mult} , \chi_0 }[D]$ represent the same cohomology class, since both arise from the restriction of $\mathcal{E}_\psi (\varphi_f)$. This proves the first point of Theorem \ref{T:mult2}.
The second point of the theorem can be checked by an elementary computation; it also follows from Proposition \ref{P:hecke1}. \qed \medskip We conclude this chapter by remarking that Proposition \ref{P:DRgen} follows from Theorem \ref{T:mult}, except for the integrality of the class $d_n [\Phi_\delta ]$. This last property follows from \cite{Takagi}. The attentive reader will however have noticed that the Eisenstein series studied in \cite{Takagi} has total degree $n-1$, whereas the Eisenstein series $E_\psi (\varphi_f)$ studied here has degree $2n-1$. One passes from the latter to the former by noting that the bundle $\mathbf{C}^n$ splits metrically as $\mathbf{R}^n \oplus (i \mathbf{R}^n)$ over the symmetric space associated with $\mathrm{SL}_n (\mathbf{R})$. The Mathai--Quillen form $\varphi$ associated with the bundle $\mathbf{C}^n$ then decomposes as the product $\varphi_{\mathbf{R}^n} \wedge \varphi_{i\mathbf{R}^n}$ of two forms equal to the Thom form (of degree $n$) of \cite[\S 5]{Takagi}. Applying the theta distribution to $\varphi_{\mathbf{R}^n}$, evaluating $\varphi_{i\mathbf{R}^n}$ at $0$, and contracting the resulting form with the multivector $\partial_{y_1} \wedge \ldots \wedge \partial_{y_n}$, one obtains the Eisenstein series studied in \cite{Takagi}. The announced integrality property then follows from \cite[\S 10 Remark p. 354]{Takagi}. \medskip \noindent {\it Remark.} The statement of \cite[\S 10 Remark p. 354]{Takagi} unfortunately contains an error. In the notation of \cite{Takagi}, we claimed that for every positive integer $m$, the class $(m^n -1) z(\mathbf{v})$ is $\mathbf{Z}_\ell$-integral for every $\ell$ prime to $m$. However, the proof requires that multiplication by $m$ in the fibres fix the torsion section $\mathbf{v}$; in other words, if $\mathbf{v}$ is associated with a vector $v$ in $\mathbf{Q}^n$, then $mv=v$ in $\mathbf{Q}^n / \mathbf{Z}^n$.
One therefore cannot show that $d_n z(\mathbf{v})$ is an integral class, as announced in the remark. The statement is in fact false even for $n=2$, as shown for instance by \cite[Equation (11.3)]{Takagi}. On the other hand, the class $[\Phi_\delta ]$ considered here is obtained by evaluating the cocycle $\mathbf{S}_{{\rm mult}, e_1^*} [D_\delta ]$ at $0$. Since the zero section is invariant under multiplication by \emph{all} positive integers $m$, the proof of \cite[\S 10 Remark p. 354]{Takagi} does imply that the class $d_n [\Phi_\delta ]$ is integral. \medskip \chapter{Elliptic cocycle of the rational group $\mathrm{GL}_n (\mathbf{Q})^+$} \label{S:chap10} \resettheoremcounters In this chapter we explain how to modify the constructions of the two preceding chapters in order to construct the elliptic cocycles of Theorems \ref{T:ell} and~\ref{T:ellbis}. We begin by describing the adelic quotient with which we must now work. We then explain the main differences with the multiplicative case. They are three in number: \begin{enumerate} \item Just as the cycle $D$ must be assumed to be of degree $0$ in Theorem \ref{T:cocycleE}, one must assume $\widehat{\varphi}_f (0)=0$ in order to construct the cocycle $\mathbf{S}_{{\rm ell}, \chi}$; see Theorem \ref{T:9.4}. \item In order to apply Theorem \ref{P:Brieskorn} ``\`a la Orlik--Solomon'', one needs to delete enough hyperplanes for the fibres to be affine. This forces us to consider an $n$-tuple $\chi = (\chi_1 , \ldots , \chi_n )$ of primitive morphisms; see \S \ref{S:9.3.3}. \item The fact that the Eisenstein series $E_1$ is not periodic forces us to consider the non-holomorphic Eisenstein series $E_1^*$ in the construction of $\mathbf{S}_{{\rm ell}}^*$; see \S \ref{S:9.4.1}.
\end{enumerate} \section{Adelic quotients} The adelic quotients considered in this chapter are associated with the group (algebraic over $\mathbf{Q}$) $$\mathcal{G} = (\mathrm{GL}_n \times \mathrm{SL}_2 ) \ltimes M_{n, 2} .$$ At infinity the space is $$S \times \mathcal{H} \times \mathbf{C}^n \cong [(\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})) \ltimes M_{n, 2} (\mathbf{R})] / \mathrm{SO}_n \times \mathrm{SO}_2,$$ where $\mathcal{H} = \mathrm{SL}_2 (\mathbf{R}) / \mathrm{SO}_2$ is the Poincaré upper half-plane. The action of $\mathcal{G} (\mathbf{R})$ on $S \times \mathcal{H} \times \mathbf{C}^n$ is derived from the following actions: \begin{itemize} \item the group $\mathrm{SL}_2 (\mathbf{R})$ acts by $$(gK, \tau , z) \stackrel{B}{\longmapsto} \left( gK , \frac{a\tau+b}{c\tau+d} , \frac{z}{c \tau +d} \right), \quad \mbox{where } B=\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right) \in \mathrm{SL}_2 (\mathbf{R} ),$$ \item the group $\mathrm{GL}_n (\mathbf{R})$ acts by\footnote{Here $z =(z_1 , \ldots , z_n )$ is viewed as a column vector, so the action $z \mapsto hz$ corresponds to the usual action on column vectors.
One could just as well consider the action $z \mapsto h^{-\top}z$, but this seems more artificial; a similar alternative arises in the study of the Weil representation.} $$(gK , \tau , z) \stackrel{h}{\longmapsto} (hgK, \tau , hz) \quad \left(h \in \mathrm{GL}_n (\mathbf{R}) \right),$$ and \item a matrix $$\left( \begin{array}{cc} u_1 & v_1 \\ \vdots & \vdots \\ u_n & v_n \end{array} \right) \in M_{n, 2} (\mathbf{R})$$ acts by translation $$(gK , \tau , (z_1 , \ldots , z_n)) \mapsto (gK , \tau , (z_1 + u_1 \tau + v_1 , \ldots , z_n +u_n \tau + v_n)).$$ \end{itemize} Note that the group law in the semidirect product $$(\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})) \ltimes M_{n, 2} (\mathbf{R})$$ is given by $$(g,B , x ) \cdot (g' , B' , x') = (gg' , BB' ,gx' B^{-1} + x ).$$ The action described above endows the bundle \begin{equation} \label{E:9.fibre2} \xymatrix{ S \times \mathcal{H} \times \mathbf{C}^n \ar[d] \\ S \times \mathcal{H} } \end{equation} with the structure of a $\mathcal{G} (\mathbf{R})$-equivariant bundle. The usual rational structures on $\mathrm{GL}_n$, $\mathrm{SL}_2$ and $M_{n,2}$ endow $\mathcal{G}$ with the structure of an algebraic group over $\mathbf{Q}$. In the rest of this chapter we identify $M_{n,2}$ with the product $V^2$, where $V (\mathbf{Q}) = \mathbf{Q}^n$, and we fix: \begin{itemize} \item an open compact subgroup $K \subset \mathrm{GL}_n (\widehat{\mathbf{Z}})$, and \item an open compact subgroup $L \subset \mathrm{SL}_2 (\widehat{\mathbf{Z}})$. \end{itemize} To these data corresponds the open compact subgroup $$\mathcal{K} = (K \times L) \ltimes V(\widehat{\mathbf{Z}})^2 \subset \mathcal{G} (\mathbf{A}_f)$$ which preserves the lattice $V(\widehat{\mathbf{Z}})^2$ in $V (\mathbf{A}_f )^2$.
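The semidirect-product law above can be sanity-checked numerically. The following sketch (pure Python, exact rational arithmetic, matrices chosen arbitrarily with $g$ invertible and $B \in \mathrm{SL}_2$) verifies associativity for one triple of elements:

```python
from fractions import Fraction as F

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(B):
    """Inverse of a 2x2 matrix of Fractions."""
    (a, b), (c, d) = B
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mul(u, v):
    """Group law in (GL_n x SL_2) |x M_{n,2}:
    (g, B, x) . (g', B', x') = (gg', BB', g x' B^{-1} + x)."""
    g, B, x = u
    g2, B2, x2 = v
    return (mat_mul(g, g2), mat_mul(B, B2),
            mat_add(mat_mul(mat_mul(g, x2), inv2(B)), x))

# three arbitrary elements with n = 2 (each g has det 1, each B lies in SL_2)
a = ([[F(2), F(1)], [F(1), F(1)]], [[F(1), F(3)], [F(0), F(1)]],
     [[F(1), F(0)], [F(0), F(2)]])
b = ([[F(1), F(0)], [F(2), F(1)]], [[F(2), F(1)], [F(1), F(1)]],
     [[F(0), F(1)], [F(1), F(0)]])
c = ([[F(3), F(1)], [F(2), F(1)]], [[F(1), F(0)], [F(4), F(1)]],
     [[F(1), F(1)], [F(2), F(3)]])
print(mul(mul(a, b), c) == mul(a, mul(b, c)))  # True: the law is associative
```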
The quotients $$[\mathcal{G} ] / (L \ltimes V(\widehat{\mathbf{Z}})^2) = \mathcal{G} (\mathbf{Q} ) \backslash \left[ \mathcal{G} (\mathbf{R}) \cdot \mathcal{G} (\mathbf{A}_f ) \right] / (\mathrm{SO}_n \times \mathrm{SO}_2) Z (\mathbf{R})^+ (L \ltimes V(\widehat{\mathbf{Z}})^2)$$ and $$[\mathcal{G} ] / \mathcal{K}$$ are group bundles over $[\mathrm{GL}_n]$ and $[\mathrm{GL}_n] / K$ respectively. The fibres are (fibred) products $E^n$ of (families of) elliptic curves $$E (= E_{L}) = \Lambda \backslash \left[ (\mathcal{H} \times \mathbf{C}) / \mathbf{Z}^2 \right] \quad \mbox{where} \quad \Lambda = L \cap \mathrm{SL}_2 (\mathbf{Z}).$$ We simply write $Y$ for the base of this family of elliptic curves; it is a modular curve $Y = \Lambda \backslash \mathcal{H}$. As in the multiplicative case, one deduces from the strong approximation theorem that \begin{equation} \label{E:9.TK} Z( \mathbf{A}_f ) \backslash [\mathcal{G} ] / \mathcal{K} \simeq \Gamma \backslash (X \times E^n ), \end{equation} where $\Gamma = K \cap \mathrm{GL}_n (\mathbf{Q})$; this is a bundle over $\Gamma \backslash X$ which, as in the multiplicative case, we denote by $\mathcal{T}_\mathcal{K}$. \medskip \noindent {\it Example.} Let $N$ be an integer greater than $1$. In practice we will mainly consider the case where $K=K_{0} (N)$ and $L=L_0 (N)$, so that $Y$ equals the modular curve $Y_0 (N)$ and $$\mathcal{T}_{\mathcal{K}} = \Gamma_0 (N) \backslash \left[ X \times E^n \right].$$ \medskip \section{Schwartz functions and cycles} \subsection{Mathai--Quillen forms} The bundle \eqref{E:9.fibre2} is naturally endowed with a $\mathrm{GL}_n (\mathbf{R} ) \times \mathrm{SL}_2 (\mathbf{R})$-invariant orientation and Hermitian metric. The Mathai--Quillen formalism therefore still applies and provides Schwartz functions $\varphi$ and $\psi$ at infinity. The formulas are identical except that they now depend on the parameter $\tau$.
Above a point $$(gK, \tau ) \in S \times \mathcal{H}$$ the metric on the fibre $\mathbf{C}^n$ of \eqref{E:9.fibre2} is indeed equal to $$z \mapsto \frac{1}{\mathrm{Im} \ \tau } | g^{-1} z |^2.$$ The Mathai--Quillen construction explained in Chapter \ref{C:4}, applied to the equivariant Hermitian bundle \eqref{E:9.fibre2}, this time leads to a Thom form which is $(\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R}))$-invariant, \begin{equation} \label{9.varphi} \varphi \in \mathcal{A}^{2n} (S \times \mathcal{H} \times \mathbf{C}^n)^{\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})}, \end{equation} with rapid decay in the fibres. We again define \begin{equation} \label{9.psi} \psi = \iota_X \varphi \in \mathcal{A}^{2n-1} (S \times \mathcal{H} \times \mathbf{C}^n)^{\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})} \end{equation} but it is no longer true in this context that the form $$\eta = \int_0^{+\infty} [s]^* \psi \frac{ds}{s} \in A^{2n-1} (X \times \mathcal{H} \times \mathbf{C}^n)^{\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})}$$ is closed; it follows for instance from \cite[\S 7.4]{Takagi} that already in the case $n=1$ one has \begin{equation} \label{9.eta1} \eta = \frac{1}{8\pi} \left( \frac{d\tau}{y} + \frac{d \overline{\tau}}{y} \right) - \frac{i}{4\pi} \left( \frac{dz}{z} - \frac{d\overline{z}}{\overline{z}} \right) \end{equation} whose differential equals the area form $\frac{dx \wedge dy}{4\pi y^2}$ on $\mathcal{H}$.
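This last claim is a direct check: writing $\tau = x + iy$, the first summand of \eqref{9.eta1} equals $\frac{1}{8\pi} \cdot \frac{2\, dx}{y}$, while the second summand is closed (it is a combination of logarithmic differentials), so that
$$d \eta = \frac{1}{8\pi}\, d \left( \frac{2\, dx}{y} \right) = \frac{1}{8\pi} \cdot \frac{2\, dx \wedge dy}{y^{2}} = \frac{dx \wedge dy}{4 \pi y^{2}}.$$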
We now call {\it Weil representation} the representation $\omega$ of the group $\mathcal{G} (\mathbf{R})$ on the Schwartz space $\mathcal{S} (M_{n , 2} (\mathbf{R}))$ given by $$\omega (g , B , x) : \mathcal{S} (M_{n , 2} (\mathbf{R})) \longrightarrow \mathcal{S} (M_{n , 2} (\mathbf{R})); \quad \phi \mapsto \left(y \mapsto \phi \left( g^{-1} (y -x ) B \right) \right) ,$$ with $(g,B,x) \in (\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})) \ltimes M_{n, 2} (\mathbf{R})$. To $\varphi$ and $\psi$ there again correspond forms \begin{equation} \label{9.varphi2} \widetilde{\varphi} \in \mathcal{A}^{2n} \left( S \times \mathcal{H} \times \mathbf{C}^n ; \mathcal{S} (M_{n , 2} (\mathbf{R})) \right)^{\mathcal{G}(\mathbf{R})} \end{equation} and \begin{equation} \label{9.psi2} \widetilde{\psi} \in \mathcal{A}^{2n-1} \left( S \times \mathcal{H} \times \mathbf{C}^n ; \mathcal{S} (M_{n , 2} (\mathbf{R})) \right)^{\mathcal{G}(\mathbf{R})} \end{equation} which, upon evaluation at the zero matrix, are respectively equal to $\varphi$ and $\psi$. Lemma \ref{L:convcourant} remains valid except for the last point; the forms $$\widetilde{[s]^*\varphi} (0) = [s]^* \varphi$$ this time converge, uniformly on every compact set, to a nonzero form, which however remains invariant under the action of $\mathcal{G} (\mathbf{R})$. \subsection{Schwartz functions at the finite places and invariant cycles} Let $$\mathcal{S} (M_{n,2} (\mathbf{A}_f)) = \mathcal{S} ( V (\mathbf{A}_f )^2 )$$ be the Schwartz space of locally constant, compactly supported functions $\varphi_f : M_{n,2} (\mathbf{A}_f) \to \mathbf{C}$.
The group $\mathcal{G} (\mathbf{A}_f)$ acts on $\mathcal{S} (M_{n,2} (\mathbf{A}_f ))$ by the Weil representation $$\omega (g , B ,x) : \mathcal{S} (M_{n,2} (\mathbf{A}_f )) \to \mathcal{S} (M_{n,2} (\mathbf{A}_f )); \quad \phi \mapsto \left( w \mapsto \phi (g^{-1} (w-x) B ) \right).$$ Now consider the space $C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right)$ of locally constant continuous functions; we let the group $\mathcal{G} (\mathbf{A}_f)$ act on $C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right)$ by the right regular representation: $$( (h ,y , C) \cdot f) (g ,x , B ) = f( g h , g yB^{-1} + x , BC ).$$ The map \begin{equation} \mathcal{S} (M_{n,2} (\mathbf{A}_f )) \to C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right); \quad \phi \mapsto f_\phi : ( (g, x ,B ) \mapsto \phi (-g^{-1} x B)) \end{equation} is $\mathcal{G} (\mathbf{A}_f)$-equivariant with respect to the two actions defined above. \begin{definition} Let $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f )^2)$ be a $\mathcal{K}$-invariant Schwartz function. \begin{itemize} \item Let $D_{\varphi_f}$, resp. $D_{\varphi_f , K}$, be the image of the cycle $$\mathcal{G} (\mathbf{Q} ) \left[ (\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R} )) \cdot \mathrm{supp} (f_{\varphi_f} ) \right] \to [\mathcal{G}] /(L \ltimes V(\widehat{\mathbf{Z}} )^2) ,$$ resp. $$\mathcal{G} (\mathbf{Q} ) \left[ (\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R} )) \cdot \mathrm{supp} (f_{\varphi_f} ) \right] \to [\mathcal{G}]/ \mathcal{K},$$ induced by the inclusion of the support of $f_{\varphi_f}$ in $\mathcal{G} (\mathbf{A}_f)$. \item Let $$U_{\varphi_f} \subset [\mathcal{G}] /(L \ltimes V(\widehat{\mathbf{Z}} )^2), \quad \mbox{resp.} \quad U_{\varphi_f, K} \subset [\mathcal{G}]/ \mathcal{K},$$ be the complement of $D_{\varphi_f}$, resp. $D_{\varphi_f , K}$.
\end{itemize} \end{definition} As in the multiplicative case, {\it via} the isomorphism \eqref{E:9.TK} the projection of $D_{\varphi_f , K} \subset [\mathcal{G}]/ \mathcal{K}$ is equal to the finite union \begin{equation} \label{E:9.9} \bigcup_\xi \Gamma \backslash (X \times \{ [\tau , u \tau + v] \in E^n \; : \; \tau \in \mathcal{H} \} ), \end{equation} where $\xi = (u , v)$ runs over the elements of $V(\mathbf{Q})^2 / V (\mathbf{Z})^2$ such that $\varphi_f ( \xi )$ is nonzero. \medskip The space $D_{\varphi_f}$ is therefore a finite covering of $[\mathrm{GL}_n] \times Y$, and $\varphi_f$ induces a locally constant function on $D_{\varphi_f}$, that is, an element of $H^0 (D_{\varphi_f })$. Now, the Thom isomorphism implies that we have: \begin{equation} \label{E:thom} H^0 (D_{\varphi_f }) \stackrel{\sim}{\longrightarrow} H^{2n} \left( [\mathcal{G}]/ (L \ltimes V(\widehat{\mathbf{Z}} )^2) , U_{\varphi_f } \right); \end{equation} we write $$[\varphi_f] \in H^{2n} \left( [\mathcal{G}]/ (L \ltimes V(\widehat{\mathbf{Z}} )^2) , U_{\varphi_f } \right)$$ for the image of $\varphi_f$; this class is $K$-invariant, and we denote by $[\varphi_f]_K$ its image in $$H^{2n} \left( [\mathcal{G}]/ \mathcal{K}, U_{\varphi_f , K } \right).$$ Following the proof of Lemma \ref{L:40} one obtains the following analogous lemma. \begin{lemma} \label{L:9.40} Suppose $\widehat{\varphi}_f (0) =0$. Then the degree map $$H^0 (D_{\varphi_f }) \to \mathbf{Z}$$ is equal to $0$. \end{lemma} Following Definition \ref{def67}, we finally single out, for later use, a distinguished space of Schwartz functions on $M_{n , 2} (\mathbf{A}_f)$.
\begin{definition} Let $\mathcal{S} (M_{n , 2} (\mathbf{A}_f ))^{\circ}$ be the subspace of $\mathcal{S} (M_{n , 2} (\mathbf{A}_f))$ consisting of the functions of the form $$\mathbf{1}_{V (\widehat{\mathbf{Z}})} \oplus \overline{\varphi}_f \in \mathcal{S} (V (\mathbf{A}_f) \oplus V (\mathbf{A}_f)) = \mathcal{S} (M_{n,2} (\mathbf{A}_f )),$$ where $\overline{\varphi}_f \in \mathcal{S} (V (\mathbf{A}_f))^\circ$ is $(K \ltimes V (\widehat{\mathbf{Z}}))$-invariant. \end{definition} \section[Adelic theta series and Eisenstein series]{Adelic theta series and Eisenstein series; \\ proof of Theorem \ref{T:ell}} \subsection{Adelic theta series} As in the multiplicative case, to any function $\varphi_f \in \mathcal{S} (M_{n , 2} (\mathbf{A}_f))$ we associate differential forms $$\widetilde{\varphi} \otimes \varphi_f \quad \mbox{and} \quad \widetilde{\psi} \otimes \varphi_f \in A^{\bullet} \left( S \times \mathcal{H} \times \mathbf{C}^n , \mathcal{S} (M_{n,2} (\mathbf{A} )) \right)^{(\mathrm{GL}_n (\mathbf{R}) \times \mathrm{SL}_2 (\mathbf{R})) \ltimes \mathbf{C}^n}.$$ Applying the theta distribution in the fibers, we then obtain maps \begin{equation} \label{9.appl-theta} \theta_\varphi \quad \mbox{and} \quad \theta_\psi : \mathcal{S} (M_{n , 2} (\widehat{\mathbf{Z}})) \longrightarrow \left[ A^{\bullet} (S \times \mathcal{H} \times \mathbf{C}^n) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} )} \end{equation} defined this time by \begin{equation} \label{9.appl-theta2} \begin{split} \theta_\varphi^* (g_f , B_f , x_f ; \varphi_f ) & = \sum_{\xi \in M_{n,2} (\mathbf{Q} )}\widetilde{\varphi} (\xi ) (\omega (g_f , B_f , x_f ) \varphi_f ) (\xi ) \\ & = \sum_{\xi \in M_{n,2} (\mathbf{Q} )} \varphi_f \left( g_f^{-1} (\xi -x_f ) B_f \right) \widetilde{\varphi} (\xi ) \end{split} \end{equation} and similarly for $\theta_\psi$.
Recall that the group $\mathcal{G} (\mathbf{R})$ acts naturally on $S \times \mathcal{H} \times \mathbf{C}^n$; given an element $(g,B,x) \in \mathcal{G} (\mathbf{R})$ and a form $\alpha \in A^{\bullet} (S \times \mathcal{H} \times \mathbf{C}^n)$ we write $(g,B,x)^* \alpha$ for the pullback of $\alpha$ by the map $$(g,B,x) : S \times \mathcal{H} \times \mathbf{C}^n \to S \times \mathcal{H} \times \mathbf{C}^n.$$ The invariance under the group $\mathcal{G}(\mathbf{Q})$ in \eqref{9.appl-theta} then means that for every element $(g,B,x) \in \mathcal{G} (\mathbf{Q})$ we have $$(g,B,x)^*\theta_\varphi^*(g g_f , BB_f , gx_f B^{-1} + x ;\varphi_f)=\theta_\varphi^*(g_f, B_f , x_f ; \varphi_f);$$ this follows from the $\mathcal{G} (\mathbf{R})$-invariance of $\widetilde{\varphi}$, cf. \eqref{9.varphi2}. The map $\theta_\varphi$ intertwines the natural actions of $\mathcal{G} (\mathbf{A}_f)$ on both sides. In particular, assuming $\varphi_f$ invariant under the open compact subgroup $\mathcal{K}$ we obtain differential forms \begin{multline*} \theta_\varphi (\varphi_f ) \quad \mbox{and} \quad \theta_\psi (\varphi_f ) \in \left[ A^{\bullet} (S \times \mathcal{H} \times \mathbf{C}^n) \otimes C^\infty \left( \mathcal{G} (\mathbf{A}_f ) \right) \right]^{\mathcal{G} (\mathbf{Q} ) \times \mathcal{K}} \\ = A^\bullet (S \times \mathcal{H} \times \mathbf{C}^n)^{(\Gamma \times \Lambda) \ltimes M_{n,2} (\mathbf{Z} )} \end{multline*} in other words differential forms on $\Gamma \backslash (S \times E^n)$. As in \S \ref{algHecke}, the theta series $\theta_\varphi$ and $\theta_\psi$ are equivariant with respect to the natural actions of the Hecke operators and, as in \S \ref{cohomClass}, the differential form $\theta_\varphi (\varphi_f)$ is closed and represents $[\varphi_f]_K$.
Finally, the decay properties of the forms $\theta_{[r]^* \varphi} (\varphi_f )$ are analogous to those described in Lemma \ref{L:theta-asympt}, except that this time $\theta_{[r]^* \varphi} (\varphi_f )$ does not necessarily tend to $0$ with $r$. It is to guarantee this that we will from now on assume that $\widehat{\varphi}_f (0) = 0$; proceeding as in the proof of Lemma \ref{L:thetaSiegel}, it then indeed follows from the Poisson summation formula that under this hypothesis $\theta_{[r]^* \varphi} (\varphi_f )$ tends to $0$ with $r$. \subsection{Adelic Eisenstein series} Let again $\varphi_f \in \mathcal{S} (M_{n , 2} (\mathbf{A}_f ))$ be a $\mathcal{K}$-invariant function, which we further assume satisfies $\widehat{\varphi}_f (0) = 0$. As in \S \ref{SEA7} we may then associate to $\varphi_f$ the Eisenstein series $$E_\varphi (\varphi_f , s) = \int_0^\infty r^s \theta_{[r]^* \varphi} (\varphi_f ) \frac{dr}{r} \quad \mbox{and} \quad E_\psi (\varphi_f , s) = \int_0^\infty r^s \theta_{[r]^* \psi} (\varphi_f ) \frac{dr}{r};$$ these are differential forms on the preimage of $U_{\varphi_f}$ in $S \times \mathcal{H} \times \mathbf{C}^n$ which are well defined, and invariant under the action of the center $Z(\mathbf{R})^+$, at $s=0$. We then set \begin{equation} E_\psi (\varphi_f ) = E_\psi (\varphi_f , 0) \in A^{2n-1} \left( [\mathcal{G}]/ \mathcal{K} - D_{\varphi_f , K} \right). \end{equation} As in the multiplicative case, we obtain the following theorem. \begin{theorem} \label{T:9.4} Suppose $\widehat{\varphi}_f (0) =0$.
Then the differential form $$E_\psi (\varphi_f ) \in A^{2n-1} \left( [\mathcal{G}]/ \mathcal{K} - D_{\varphi_f , K} \right)$$ is \emph{closed} and represents a cohomology class that lifts the class $[\varphi_f]_K$ in the long exact sequence \begin{multline*} \ldots \to H^{2n-1} \left( [\mathcal{G}]/ \mathcal{K} - D_{\varphi_f , K} \right) \\ \to H^{2n} \left( [\mathcal{G}]/ \mathcal{K} , [\mathcal{G}]/ \mathcal{K} - D_{\varphi_f , K} \right) \to H^{2n} \left( [\mathcal{G}]/ \mathcal{K} \right) \to \ldots \end{multline*} \end{theorem} Note that, under the hypothesis $\widehat{\varphi}_f (0) =0$, the degree of $D_{\varphi_f , K}$ is zero and the image of $[\varphi_f]_K$ in $H^{2n} \left( [\mathcal{G}]/ \mathcal{K} \right)$ is equal to $0$. \medskip \subsection{Proof of Theorem \ref{T:ell}} \label{S:9.3.3} To a congruence subgroup $\Gamma$ in $\mathrm{SL}_n (\mathbf{Z})$ there corresponds an open compact subgroup $K$ in $\mathrm{GL}_n (\widehat{\mathbf{Z}})$ such that $K \cap \mathrm{SL}_n (\mathbf{Z}) = \Gamma$. An element $D \in \mathrm{Div}_\Gamma$ may be viewed as a $\Gamma$-invariant function on $E^n$ supported on the torsion points of $E^n$; it therefore corresponds to a $\mathcal{K}$-invariant function $\varphi_f \in \mathcal{S} (M_{n , 2} (\mathbf{A}_f ))$. If moreover $D$ has degree $0$ then $\widehat{\varphi}_f (0) = 0$. Proceeding as in \S \ref{S8.1}, we associate to $E_\psi (\varphi_f )$ a simplicial form $$\mathcal{E}_\psi (\varphi_f ) \in \mathrm{A}^{2n-1} \left( E\Gamma \times (E^n - \mathrm{supp} \ D) \right)^{\Gamma}$$ which is closed and represents the same equivariant cohomology class as $E_{\psi} (\varphi_f )$. Fix $n$ linearly independent primitive morphisms $$\chi_1 , \ldots , \chi_n : \mathbf{Z}^n \to \mathbf{Z}.$$ We still write $\chi_j : \mathbf{Q}^n \to \mathbf{Q}$ for the corresponding linear forms and $\chi_j : E^n \to E$ for the primitive morphisms they induce.
We set $$\chi = (\chi_1 , \ldots , \chi_n).$$ Following the proof of Theorem \ref{T:cocycleE}, we now restrict the simplicial form $\mathcal{E}_\psi (\varphi_f )$ to the open subsets of $E^n$, indexed by the elements $\mathbf{g} = (\gamma_0 , \ldots , \gamma_k)$ of $E_k \Gamma$, \begin{equation} \label{E:9.860} U (\mathbf{g}) = E^n - \bigcup_\xi \bigcup_{i=1}^n \bigcup_{j=0}^k [\mathrm{ker} (\chi_i \circ \gamma_j) +\xi ], \end{equation} where $\xi$ runs over the elements of the support of $D$. It follows from Lemma \ref{affineb} that the varieties \eqref{E:9.860} are affine. We may therefore apply Theorem \ref{P:Brieskorn}. As in the proof of Theorem \ref{T:cocycleE}, the argument explained in Appendix \ref{A:A} then makes it possible to associate to the simplicial form $\mathcal{E}_\psi (\varphi_f )$ a homogeneous $(n-1)$-cocycle \begin{equation} \mathbf{S}_{{\rm ell}, \chi} [D] \in C^{n-1} \left( \Gamma , \Omega_{\rm mer} (E^n ) \right)^{\Gamma} \end{equation} which represents the class $S_{\rm mult} [D]$ and takes values in the meromorphic forms that are regular away from the affine hyperplanes $\mathrm{ker} (\chi_i \circ g) +\xi$, with $\xi$ in the support of $D$ and $g$ in $\Gamma$. This proves points (1), (2) and (3) of Theorem \ref{T:ell}. As in the multiplicative case, the map $\mathrm{GL}_n (\mathbf{A}_f) \to \mathcal{G} (\mathbf{A}_f )$ induces an inclusion of Hecke algebras $$\mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K) \hookrightarrow \mathcal{H} (\mathcal{G} (\mathbf{A}_f ) , \mathcal{K})$$ and to a function $\phi \in \mathcal{H} (\mathrm{GL}_n (\mathbf{A}_f) , K)$ we associate a Hecke operator $\mathbf{T}_\phi$. The analogue of Proposition \ref{P:hecke2} is proved in the same way, so that \begin{equation} \mathbf{T}_\phi \left[ \mathbf{S}_{{\rm ell}, \chi}[\varphi_f] \right]= \left[ \mathbf{S}_{{\rm ell}, \chi }[ T_{\phi} \varphi_f] \right] .
\end{equation} Taking for $\phi$ the characteristic function of a double coset $KaK$ with $a$ in $M_n (\mathbf{Z}) \cap \mathrm{GL}_n (\mathbf{Q})$, the operator $\mathbf{T}_\phi$ is equal to the operator $\mathbf{T} (a)$ of Chapter \ref{C:2}. The uniqueness in Theorem \ref{T:cocycleE} no longer requires passing to the quotient by the invariant forms, and the analogue of Proposition \ref{P:hecke2} then becomes \begin{equation*} \mathbf{T} (a) \left[ \mathbf{S}_{{\rm ell}, \chi}[D] \right] = [ \mathbf{S}_{{\rm ell}, \chi}[[\Gamma a \Gamma ]^* D] ] \quad \mbox{in} \quad H^{n-1} (\Gamma , \Omega^n_{\rm mer} (E^n)), \end{equation*} which proves point (4) of Theorem \ref{T:ell}. Finally, the map $\mathrm{SL}_2 (\mathbf{A}_f) \to \mathcal{G} (\mathbf{A}_f )$ induces an inclusion of Hecke algebras $$\mathcal{H} (\mathrm{SL}_2 (\mathbf{A}_f) , L) \hookrightarrow \mathcal{H} (\mathcal{G} (\mathbf{A}_f ) , \mathcal{K}).$$ To it corresponds a second family of Hecke operators, which in particular contains those denoted $T(b)$ in Chapter \ref{C:2}. Point (5) of Theorem \ref{T:ell} is then proved in the same way as point (4). \qed \section[Evaluation on modular symbols]{Evaluation on modular symbols and \\ proof of Theorem \ref{T:ellbis}} The study of the behavior at infinity of the Eisenstein series $E_\psi (\varphi_f)$ is similar to what we did in the multiplicative case in \S \ref{S:7.eval}. We do not give further details; instead we compute the integral of $E_\psi (\varphi_f)$ on modular symbols. \subsection{Evaluation of $E_\psi (\varphi_f)$ on modular symbols} \label{S:9.4.1} Let $$\mathbf{q} = (q_0 , \ldots , q_k)$$ be a $(k+1)$-tuple of nonzero vectors in $V(\mathbf{Q})$ with $k \leq n-1$.
Let $\overline{\varphi}_f \in \mathcal{S} (V(\mathbf{A}_f ))$ be a $(K \ltimes V( \widehat{\mathbf{Z}} ))$-invariant Schwartz function such that for every integer $j$ in $[0 , k]$ we have \begin{equation} \label{E:condphi2} \int_{W(\mathbf{q})^{(j)}} \overline{\varphi}_f =0 \quad \mbox{in} \quad \mathcal{S} (V (\mathbf{A}_f) / W(\mathbf{q})^{(j)} (\mathbf{A}_f) ). \end{equation} Let $$\varphi_f = \mathbf{1}_{V (\widehat{\mathbf{Z}})} \oplus \overline{\varphi}_f \in \mathcal{S} (V (\mathbf{A}_f) \oplus V (\mathbf{A}_f)) = \mathcal{S} (M_{n,2} (\mathbf{A}_f )).$$ This time we consider the map $$\Delta (\mathbf{q}) \times \mathrm{id}_{\mathcal{H} \times \mathbf{C}^n} : \Delta_k ' \times \mathcal{H} \times \mathbf{C}^n \to \overline{X}^T \times \mathcal{H} \times \mathbf{C}^n$$ and we set \begin{equation} E_\psi (\varphi_f , \mathbf{q} ) = (\Delta (\mathbf{q}) \times \mathrm{id} )^* E_\psi (\varphi_f). \end{equation} We thus obtain a closed form in $$A^{2n-1} \Big( \Delta_{k}' \times \big( E^n - \bigcup_{j=0}^k \bigcup_{\xi} (H_\mathbf{q}^{(j)} + \xi) \big) \Big),$$ where $H_\mathbf{q}^{(j)}$ denotes the image of $$\mathcal{H} \times \langle q_0 , \ldots , \widehat{q_j} , \ldots , q_k \rangle_\mathbf{C} \subset \mathcal{H} \times \mathbf{C}^n$$ in $E^n$, and $\xi$ is as in \eqref{E:9.9}. It remains to compute the integrals $\int_{\Delta_{k}'} E_\psi (\varphi_f , \mathbf{q} )$; these are differential forms on open subsets of $E^n$. For simplicity we content ourselves with computing, in the following proposition, the restriction of these forms to the fibers of $E^n \to Y$. \begin{proposition} \label{9.P34bis} 1. If $\langle q_0 , \ldots , q_{k} \rangle$ is a proper subspace of $V$, then the form $\int_{\Delta_{k}'} E_\psi (\varphi_f , \mathbf{q} )$ vanishes identically. 2. Suppose $k=n-1$ and that the vectors $q_0 , \ldots , q_{n-1}$ are linearly independent.
We set $g = (q_0 | \cdots | q_{n-1} ) \in \mathrm{GL}_n (\mathbf{Q})$ and fix $\lambda \in \mathbf{Q}^\times$ such that the matrix $h=\lambda g$ is integral. Then, in restriction to the fiber $E_\tau^n$ above a point $[\tau] \in Y$, the $n$-form $\int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} )$ is equal to \begin{multline*} \frac{1}{\det h} \sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \cdot \\ \sum_{\substack{\xi \in E^n \\ h \xi = v }} \mathrm{Re} \left( E_1 ( \tau , \ell_1 - \xi_{1} ) dz_1 \right) \wedge \cdots \wedge \mathrm{Re} \left( E_1 (\tau , \ell_{n} - \xi_{n} ) dz_n \right) , \end{multline*} where a vector $v \in V(\mathbf{Q}) / V(\mathbf{Z}) = \mathbf{Q}^n / \mathbf{Z}^n$ is identified with an element of the elliptic curve $E_\tau^n = \mathbf{C}^n / (\tau \mathbf{Z}^n + \mathbf{Z}^n )$ and $\ell_j$ is the linear form on $\mathbf{C}^n$ characterized by $h^* \ell_j = e_j^*$. \end{proposition} \begin{proof} We explain how to modify the proof of Proposition \ref{P34bis}. The first part is identical. So suppose $k=n-1$ and that the vectors $q_0 , \ldots , q_{n-1}$ are linearly independent. The invariance property (\ref{E:invformsimpl}) extends naturally and again implies that \begin{equation} \label{E:9.20-} \int_{\Delta_{n-1}'} E_\psi (\varphi_f , \mathbf{q} ) = (h^{-1})^* \left( \int_{\Delta_{n-1}'} E_\psi ( \varphi_f (h \cdot) , \mathbf{e}) \right). \end{equation} We are thus reduced to computing the integral \begin{multline} \label{9.E:intti} \int_{A\mathrm{SO}_n \mathbf{R}_{>0}} E_\psi (\varphi_f (h \cdot )) \\ = \int_{\{ \mathrm{diag}(t_1,\ldots,t_n) \; : \; t_j \in \mathbf{R}_{>0} \} \mathrm{SO}_n} (t_1 \cdots t_n )^{s} \theta_\varphi (\varphi_f (h \cdot)) (1 , 0) \ \Big|_{s=0}.
\end{multline} In restriction to the set of real diagonal symmetric matrices, the $\mathbf{C}^n$-bundle again splits {\it metrically} into a direct sum of $n$ line bundles, corresponding to the coordinates $(z_j )_{j=1 , \ldots , n}$ of $z$, and the form $\varphi$ decomposes as the product of $n$ forms associated with these line bundles. This time these forms involve the variable $\tau$ (cf. \cite[\S 6.1]{Takagi}) but by \eqref{9.eta1}, in restriction to a fiber $E_\tau^n$, we have \begin{multline*} \int_{\{ \mathrm{diag}(t_1,\ldots,t_n) \; : \; t_j \in \mathbf{R}_{>0} \} \mathrm{SO}_n} (t_1 \cdots t_n )^s \widetilde{\varphi} (\xi) \\ = \frac{(-i)^n}{(4\pi)^n} \Gamma (1+ \frac{s}{2})^n \wedge_{j=1}^n \left( \frac{dz_j}{(z_j - \xi_j) | z_j - \xi_j |^{s} } -\frac{d\overline{z}_j}{\overline{z_j - \xi_j} | z_j - \xi_j |^{s}} \right), \end{multline*} where $\xi \in M_{n,2} (\mathbf{Q}) = V (\mathbf{Q})^2$ is identified with an element of $E_\tau^n$ {\it via} the map $$V(\mathbf{Q})^2 \to E_\tau^n; \quad (u,v) \mapsto \tau u + v.$$ The integral \eqref{9.E:intti} is therefore equal to \begin{multline} \label{9.int} \Gamma (1+ \frac{s}{2})^n \sum_{\xi \in M_{n,2} (\mathbf{Q})} \varphi_f (h \xi ) \wedge_{j=1}^n \mathrm{Re} \left( \frac{1}{2i\pi} \frac{dz_j}{(z_j - \xi_j) | z_j - \xi_j |^{s} } \right) \\ = \Gamma (1+ \frac{s}{2})^n \sum_{\xi \in M_{n,2} (\mathbf{Q}) / M_{n,2} (\mathbf{Z})} \varphi_f (h \xi) \wedge_{j=1}^n \mathrm{Re} \left( K_1 (z_j , 0 , 1+s/2) dz_j \right), \end{multline} where $K_1$ is the Eisenstein--Kronecker series \cite[Chap. VIII, \ (27)]{Weil} defined, for $\mathrm{Re} (s) > 3/2$, by\footnote{Beware that we have added a factor $1/2i\pi$.} $$K_1 (z,0, s) = \frac{1}{2i \pi} \sideset{}{'} \sum_{w \in \mathbf{Z} \tau + \mathbf{Z}} (\overline{z} + \overline{w}) | z + w|^{-2s}.$$ It follows from \cite[p.
81]{Weil} that for every $z \in \mathbf{C}$ this function admits a meromorphic continuation to the plane of $s \in \mathbf{C}$ which, at $s=1$, is equal to the {\it non-holomorphic} Eisenstein series \begin{equation} \label{E*E} E_1^* (\tau , z) = E_1 (\tau , z) + \frac{1}{y} \mathrm{Im} (z), \end{equation} where $y= \mathrm{Im} (\tau)$. Note that the series $E_1^* (\tau , z)$ is $(\mathbf{Z} \tau + \mathbf{Z})$-periodic in $z$ and modular of weight $1$. Taking $s=0$ in the expression \eqref{9.int} we obtain that $$\int_{\Delta_{n-1}'} E_\psi ( \varphi_f (h \cdot) , \mathbf{e}) = \sum_{\xi \in M_{n,2} (\mathbf{Q}) / M_{n,2} (\mathbf{Z})} \varphi_f (h \xi) \wedge_{j=1}^n \mathrm{Re} \left( E_1^* (\tau , z_j - \xi_j ) dz_j \right) $$ and hence that the integral \eqref{E:9.20-} is equal to \begin{multline*} \sum_{\xi \in M_{n,2} (\mathbf{Q}) / M_{n,2} (\mathbf{Z})} \varphi_f (h \xi) \cdot (h^{-1})^* \left( \wedge_{j=1}^n \mathrm{Re} \left( E_1^* (\tau , z_j - \xi_j ) dz_j \right) \right) \\ \begin{split} & = \frac{1}{\det h} \sum_{\xi \in M_{n,2} (\mathbf{Q}) / M_{n,2} (\mathbf{Z})} \varphi_f (h \xi) \wedge_{j=1}^{n} \mathrm{Re} \left( E_1^* (\tau , \ell_j - \xi_j ) dz_j \right) \\ & = \frac{1}{\det h} \sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \cdot \\ & \quad \quad \sum_{\substack{\xi \in E^n \\ h \xi = v }} \mathrm{Re} \left( E_1^* ( \tau , \ell_1 - \xi_{1} ) dz_1 \right) \wedge \cdots \wedge \mathrm{Re} \left( E_1^* (\tau , \ell_{n} - \xi_{n} ) dz_n \right) , \end{split} \end{multline*} where the last expression, which follows from the definition of $\varphi_f$, is obtained by grouping the $\xi$ sent by $h$ to the same element of $E^n$. Recall that we identify a class $\xi$ in the quotient $M_{n,2} (\mathbf{Q}) / M_{n,2} (\mathbf{Z})$ with the element $\tau \alpha + \beta$ in $E^n$, where $\alpha$ and $\beta$ are the column vectors of $\xi$.
The last sum therefore runs over the classes $\alpha$ and $\beta$ in $V(\mathbf{Q}) / V(\mathbf{Z})$ such that $h \alpha$ vanishes modulo $V(\mathbf{Z})$ and $h \beta$ is equal to $v$ modulo $V(\mathbf{Z})$. \medskip \noindent {\it Remark.} The fact that the expression is independent of the choices of $\lambda$ and $h$ follows from the distribution relations for $E_1^*$: $$\sum_{\xi \in E_\tau [m]} E_1^* (\tau , z- \xi ) = m E_1^* (\tau , m z ).$$ \medskip It remains to explain why, in the expression above, one may everywhere replace $E_1^*$ by $E_1$. To this end, we first observe that the expression $$\sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \sum_{\substack{\xi \in E^n \\ h \xi = v }} \mathrm{Re} \left( E_1^* ( \tau , \ell_1 - \xi_{1} ) dz_1 \right) \wedge \cdots \wedge \mathrm{Re} \left( E_1^* (\tau , \ell_{n} - \xi_{n} ) dz_n \right)$$ can be rewritten as \begin{multline} \label{E:sans*} \sum_{\substack{\alpha \in \mathbf{Q}^n / \mathbf{Z}^n \\ h \alpha = 0 \ (\mathrm{mod} \ \mathbf{Z}^n) }} \sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \\ \sum_{\substack{\beta \in \mathbf{Q}^n / \mathbf{Z}^n \\ h \beta = v \ (\mathrm{mod} \ \mathbf{Z}^n)}} \mathrm{Re} \left( E_1^* ( \tau , \ell_1 + \alpha_1 \tau + \beta_1 ) dz_1 \right) \wedge \cdots \wedge \mathrm{Re} \left( E_1^* (\tau , \ell_{n} + \alpha_n \tau + \beta_n ) dz_n \right). \end{multline} Since by \eqref{E*E} the differences $$E_1^* ( \tau , \ell_j + \alpha_j \tau + \beta_j ) - E_1 (\tau , \ell_j + \alpha_j \tau + \beta_j) = \frac{1}{y} \mathrm{Im} (\ell_j ) + \alpha_j$$ are independent of $v$ and $\beta$, it indeed follows from the proof of (\ref{E:8.smoothing}) that one may everywhere replace $E_1^*$ by $E_1$ in \eqref{E:sans*}. This concludes the proof of Proposition \ref{9.P34bis}. \end{proof} \subsection{Proof of Theorem \ref{T:ellbis}} \label{S:9.4.2} We proceed as in the proof of Theorem \ref{T:mult2}.
This time we use the following result. \begin{lemma} The differential $1$-forms $$\mathrm{Re}(E_1^*(\tau,z) dz) \quad \textrm{and} \quad E_1^*( \tau,z) dz$$ are cohomologous on $E_\tau - \{ 0 \}$, where $E_\tau$ is the elliptic curve $\mathbf{C}/(\tau \mathbf{Z}+\mathbf{Z})$. In other words, the form $\mathrm{Im} (E_1^*( \tau,z) dz)$ is exact on $E_\tau - \{ 0 \}$. \end{lemma} \begin{proof} By \cite[Eq. (2) p. 57]{deShalit}, we have \begin{equation}\label{eqE1*tolog} 2\pi i E_1^* (\tau , z) =\frac 1 {12} \partial_z \log \theta (\tau , z ) - \pi \frac{\overline{z}}{2y}, \end{equation} where $$\theta (\tau , z ) =e^{6\pi\frac{z(z-\overline{z})}{y}}q_\tau(q_z^{\frac 12}-q_z^{-\frac 12})^{12}\prod_{n\geq 1}(1-q_\tau^nq_z)^{12}(1-q_\tau^nq_z^{-1})^{12}.$$ The real-valued function $z \mapsto | \theta (\tau , z)| $ is smooth on $\mathbf{C} - (\tau \mathbf{Z} + \mathbf{Z})$ and $(\tau \mathbf{Z} + \mathbf{Z})$-periodic (cf. \cite[p. 49]{deShalit}). Let us now compute its differential: \begin{equation*} \begin{split} d \log | \theta (\tau , z) | & = \mathrm{Re} (d \log \theta (\tau , z) ) \\ & = \mathrm{Re} \left(\partial_z \log \theta (\tau , z) dz + \partial_{\overline{z}} \log \theta (\tau , z) d\overline{z}\right)\\ & = \mathrm{Re} \left(\partial_z \log \theta (\tau , z) dz -6 \pi \frac{z}{y} d\overline{z} \right)\\ & = \mathrm{Re} \left( 12 (2\pi i) E_1^*(\tau,z) dz + 6\pi \frac{\overline{z}}{y} dz - 6\pi \frac{z}{y} d\overline{z} \right), \end{split} \end{equation*} where the last line follows from (\ref{eqE1*tolog}).
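The elementary identity used in the next step can be checked directly; this short verification is added here for the reader's convenience and is not part of the original argument. Writing $z = x + iy$ with $x , y$ real, we have
$$ \overline{z}\, dz - z \, d\overline{z} = (x - iy)(dx + i\, dy) - (x + iy)(dx - i \, dy) = 2i \, (x \, dy - y \, dx), $$
which is purely imaginary, so its real part vanishes.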
Since $$\mathrm{Re} \left(\overline{z}dz- zd\overline{z}\right)=0,$$ we obtain that $$d \log | \theta (\tau , z) | = -24 \mathrm{Im} (E_1^*( \tau,z) dz).$$ \end{proof} In particular, in restriction to $E_\tau$ the forms \begin{multline*} \sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \cdot \\ \sum_{\substack{\xi \in E^n \\ h \xi = v }} \mathrm{Re} \left( E_1^* ( \tau , \ell_1 - \xi_{1} ) dz_1 \right) \wedge \cdots \wedge \mathrm{Re} \left( E_1^* (\tau , \ell_{n} - \xi_{n} ) dz_n \right) , \end{multline*} and $$ \sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \sum_{\substack{\xi \in E^n \\ h \xi = v }} E_1^* ( \tau , \ell_1 - \xi_{1} ) \cdots E_1^* (\tau , \ell_{n} - \xi_{n} ) dz_1 \wedge \cdots \wedge dz_n , $$ are cohomologous. The proof of Theorem \ref{T8.8cor} then implies that for every $\mathcal{K}$-invariant function $\varphi_f \in \mathcal{S} (V (\mathbf{A}_f ))^\circ$, the map $$\mathbf{S}_{\rm ell}^* [\varphi_f ] : \Gamma^{n} \longrightarrow \Omega^{n}_{\rm mer} (E^n )$$ which to an $n$-tuple $(\gamma_0 , \ldots , \gamma_{n-1})$ associates $0$ if the vectors $\gamma_j^{-1} e_1$ are linearly dependent, and otherwise the meromorphic differential form \begin{equation*} \frac{1}{\det h} \sum_{v \in V(\mathbf{Q}) / V(\mathbf{Z}) } \overline{\varphi}_f (v ) \sum_{\substack{\xi \in E^n \\ h \xi = v }} E_1 ( \tau , \ell_1 - \xi_{1} ) \cdots E_1 (\tau , \ell_{n} - \xi_{n} ) dz_1 \wedge \cdots \wedge dz_n, \end{equation*} where $h = ( \gamma_0^{-1} e_1 | \cdots | \gamma_{n-1}^{-1} e_1) \in \mathrm{GL}_n (\mathbf{Q}) \cap M_n (\mathbf{Z} )$ and $h^* \ell_j = e_{j}^*$, defines a nonzero homogeneous $(n-1)$-cocycle of the group $\Gamma$ which represents the same cohomology class as $\mathbf{S}_{\rm ell , \chi}$.
Recall that to each element $D \in \mathrm{Div}_{\Gamma,K}$ there corresponds a $\mathcal{K}$-invariant function $\varphi_f$ in $\mathcal{S} (V (\mathbf{A}_f ))$ which, under the hypothesis that $D \in \mathrm{Div}_{\Gamma,K}^{\circ}$, belongs to $ \mathcal{S} (V (\mathbf{A}_f ))^{\circ}$. It finally follows that the cocycle $$\mathbf{S}^*_{\rm ell} [D] = \mathbf{S}^*_{{\rm ell}}[\varphi_f]$$ satisfies Theorem \ref{T:ellbis}. \qed \medskip \newpage \setcounter{chapter}{1} \setcounter{equation}{0} \numberwithin{equation}{chapter} \begin{appendix} \chapter{Equivariant cohomology and the simplicial de Rham complex} \label{A:A} \resettheoremcounters The results of this volume are formulated in the language of equivariant cohomology. This is a cohomology theory for topological spaces equipped with a group action. Like ordinary cohomology, it can be computed using simplicial chains or, alternatively, using a version of the de Rham complex. In this appendix we collect the main definitions and facts concerning equivariant cohomology that we use. For more details, the reader may consult \cite{DupontBook,Dupont}. \section{Definition of equivariant cohomology} Let $G$ be a discrete group and $X$ a reasonable topological space equipped with an action of $G$. In this situation one can define the equivariant cohomology groups $$ H^*_G(X). $$ These groups carry information both about the ordinary cohomology of $X$ and about the action of $G$. The idea behind the construction is the following: if $G$ acts freely on $X$ then one can define $$ H^*_G(X) = H^*(X/G), $$ that is, the ordinary cohomology of the quotient $X/G$. For more general actions, however, much information is lost by passing to the quotient space $X/G$.
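As a simple illustration of this loss of information (an example added here for orientation, not taken from the original text), let $G = \mathbf{Z}/2$ act trivially on a point. The quotient is again a point, so its ordinary cohomology is trivial in positive degrees, whereas the homotopy quotient construction described below recovers the full group cohomology:
$$ H^*_{\mathbf{Z}/2}(\mathrm{pt} ; \mathbf{F}_2) = H^* (B(\mathbf{Z}/2) ; \mathbf{F}_2) = H^*(\mathbf{RP}^\infty ; \mathbf{F}_2) \cong \mathbf{F}_2 [w], \qquad \deg w = 1, $$
which is nontrivial in every degree.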
The idea of equivariant cohomology is to replace the quotient $X/G$ by a {\it homotopy quotient} $$ EG \times_G X := |EG \times X|/G. $$ We define $|EG \times X|$ below; for now, let us just record that it is a topological space obtained as the product of $X$ with a contractible space, and that $G$ acts freely on it. Thus replacing $X$ by $|EG \times X|$ does not change the underlying homotopy type, but now $G$ acts freely on $|EG \times X|$. It is therefore reasonable to define $$ H^*_G(X) = H^*(|EG \times X|/G). $$ For example, when $X$ is a point (or contractible), we have $$ H^*_G(X) = H^*(BG), $$ where $BG:=|EG|/G$ is the classifying space of $G$ and $H^*(BG)$ is the cohomology of the group $G$. On the other hand, when $G$ is trivial, equivariant cohomology reduces to ordinary cohomology. Below we define $EG$ using simplicial sets and the simplicial realization functor. We then describe the results of Dupont \cite{Dupont} which make it possible to compute $H^*_G(X)$ using differential forms when $X$ is a manifold. \section{The Borel construction} Let $G$ be a discrete group. A classifying space for $G$ is a topological space whose fundamental group is isomorphic to $G$ and whose higher homotopy groups are trivial. Such a space always exists. We give a concrete construction which makes the link with group cohomology natural. Let $EG_\bullet$ be the simplicial set whose $m$-simplices are the ordered $(m+1)$-tuples $(g_0 , \ldots , g_m )$ of elements of $G$. We write $EG_m = G^{m+1}$ for the set of these $m$-simplices. An element $(g_0 , \ldots , g_m ) \in EG_m$ is glued to the $(m-1)$-simplices $(g_0 , \ldots , g_{i-1},g_{i+1}, \ldots , g_m)$ in the same way that a standard simplex is glued to its faces.
The face and degeneracy maps are thus \begin{align} \partial_i(g_0,\ldots,g_m) &= (g_0,\ldots,g_{i-1},g_{i+1},\ldots,g_m), \quad \mbox{and} \\ \sigma_i(g_0,\ldots,g_m) &= (g_0,\ldots,g_i,g_i,\ldots,g_m), \quad i=0,\ldots, m. \end{align} The complex $EG_\bullet$ is contractible. The group $G$ acts freely on $EG_\bullet$ by \begin{equation} g \cdot (g_0,\ldots,g_m) = (g_0g^{-1},\ldots,g_m g^{-1}) \end{equation} so that the quotient map $EG_\bullet \to EG_\bullet / G$ is a universal covering. The base is therefore a classifying space for $G$. The map \begin{equation} (g_0,\ldots,g_m) \mapsto (g_1g_0^{-1},\ldots,g_m g_{m-1}^{-1}) \end{equation} identifies $EG_\bullet / G$ with the simplicial set $BG_\bullet$ defined by $BG_m = G^m$ and \begin{align} \partial_i(g_1,\ldots,g_m) &= \begin{cases} (g_2,\ldots,g_m) & i=0 \\ (g_1,\ldots,g_ig_{i+1},\ldots, g_m) & 0 < i < m \\ (g_1,\ldots,g_{m-1}) & i=m, \end{cases} \\ \sigma_i(g_1,\ldots,g_m) &= (g_1,\ldots,g_i,1,g_{i+1},\ldots,g_m), \quad i=0,\ldots, m. \end{align} Let $|X_\bullet|$ denote the geometric realization of a simplicial set $X_\bullet$: it is the topological space defined as $$ |X_\bullet | = \bigsqcup_{k \geq 0}(\Delta_k \times X_k)/\sim, $$ where $\Delta_k$ denotes the standard $k$-simplex and $\sim$ is the equivalence relation given by $$(t,f^*x) \sim (f_* t, x)$$ for all $t \in \Delta_k$, $x \in X_l$ and every order-preserving map $f:[k] \to [l]$. For the main properties of the simplicial realization functor we refer the reader to \cite{Segal}. The space $|BG_\bullet|$ is therefore a classifying space for $G$ and $|EG_\bullet| \to |BG_\bullet|$ is the associated universal $G$-bundle (cf. \cite[Prop. 2.7]{Burgos}). \medskip Now consider a topological space $M$ on which the group $G$ acts by homeomorphisms, in other words a $G$-space.
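To make the link with group cohomology concrete, here is a low-degree check added for the reader (it is not part of the original text): the alternating sum of the face maps of $BG_\bullet$ reproduces the usual inhomogeneous coboundary of group cohomology. For a $1$-cochain $c : BG_1 = G \to \mathbf{Z}$ with trivial coefficients, one finds
$$ (\delta c)(g_1 , g_2) = c(\partial_0 (g_1 , g_2)) - c(\partial_1 (g_1 , g_2)) + c(\partial_2 (g_1 , g_2)) = c(g_2) - c(g_1 g_2) + c(g_1), $$
so that $1$-cocycles on $BG_\bullet$ are exactly the homomorphisms $G \to \mathbf{Z}$.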
The group $G$ then acts on $$EG_\bullet \times M$$ by \begin{equation} g \cdot (g_0,\ldots,g_m, x) = (g_0g^{-1},\ldots,g_mg^{-1}, gx ). \end{equation} The \emph{equivariant cohomology} of $M$ is defined as the ordinary cohomology of the quotient $$X=|EG_{\bullet} \times_G M|;$$ in other words $$H_G^\bullet (M) = H^\bullet (X) = H^\bullet (|EG_{\bullet} \times_G M|).$$ If $G$ is the trivial group, this is just the cohomology of $M$. If $M$ is contractible, the space $X$ is homotopy equivalent to the classifying space $BG_\bullet$ and $H_G^\bullet (M) = H^\bullet (BG_\bullet)$ is the cohomology of the group $G$. Note that if $G$ acts freely on $M$, the projection $EG_{\bullet} \times_G M \to M/G$ is a homotopy equivalence and we have $H_G^\bullet (M) = H^\bullet (M/G)$. Let us give some examples that appear in the main text. \begin{example} Suppose that $G$ is equal to $\mathbf{Z}^n$ for some natural number $n$. Since $\mathbf{Z}^n$ acts freely on $\mathbf{R}^n$, we have $$ BG = |EG/G| \simeq |(\mathbf{R}/\mathbf{Z})^n|. $$ \end{example} \begin{example} Let $\Gamma$ be a discrete subgroup of $\mathrm{SL}_n (\mathbf{R} )$ and suppose that the image of $\Gamma$ in $\mathrm{PSL}_n (\mathbf{R} ) $ is torsion-free. Such groups $\Gamma$ abound, since one can show that every arithmetic subgroup of $\mathrm{SL}_n (\mathbf{R} )$ contains such a subgroup of finite index. Now consider the symmetric space $$ X=\mathrm{SL}_n (\mathbf{R} )/\mathrm{SO}_n . $$ This space is contractible and carries an action of $\Gamma$ induced by left multiplication in $\mathrm{SL}_n(\mathbf{R})$. Our hypothesis on $\Gamma$ guarantees that this action is free. It follows that $$ B\Gamma = |E\Gamma /\Gamma| \simeq \Gamma \backslash \mathrm{SL}_n(\mathbf{R})/\mathrm{SO}_n , $$ that is, the Borel quotient is a locally symmetric space.
\end{example} \begin{example} The two examples above can be combined: consider the semidirect product $\mathrm{SL}_n(\mathbf{Z}) \ltimes \mathbf{Z}^n$ and let $$ X = (\mathrm{SL}_n(\mathbf{R})/\mathrm{SO}_n ) \times \mathbf{R}^n. $$ Suppose that $\Gamma \subset \mathrm{SL}_n(\mathbf{Z})$ acts freely on $\mathrm{SL}_n(\mathbf{R})/\mathrm{SO}_n$ and let $L \subset \mathbf{Z}^n$ be a rank $n$ sublattice preserved by $\Gamma$. Then $\Gamma \ltimes L$ acts freely on the contractible space $X$, and hence $$ B(\Gamma \ltimes L ) \simeq \Gamma \backslash (\mathrm{SL}_n(\mathbf{R})/\mathrm{SO}_n \times \mathbf{R}^n/L) $$ is a bundle over the locally symmetric space $\Gamma \backslash\mathrm{SL}_n(\mathbf{R})/\mathrm{SO}_n$ whose fibers are compact tori of dimension $n$. \end{example} \section{Simplicial differential forms} Suppose now that $M$ is a $G$-manifold. The quotient $X$ is then a \emph{simplicial manifold}, that is, a semisimplicial set whose $m$-simplices \begin{equation} X_m = \left( EG_{\bullet} \times_G M \right)_m = \left( EG_m \times M \right)/G \end{equation} are (smooth) manifolds and whose face and degeneracy maps are smooth; see for instance \cite{Dupont}. The canonical projection \begin{equation} EG_{\bullet} \times_G M \to BG_\bullet \end{equation} realizes $EG_{\bullet} \times_G M$ as a simplicial bundle over $BG_\bullet$ with fiber $M$. In this setting, the equivariant cohomology $H_G^\bullet(M)$ can also be computed using differential forms, as shown by Dupont \cite{Dupont}.
More precisely, denote by $\mathrm{A}^\bullet (X)$ the simplicial de Rham complex where, by definition, a simplicial $p$-form $\alpha$ on $X$ is a collection of maps $$ \alpha^{(m)} : G^{m+1} \to A^p (\Delta_m \times M) $$ defined for $m \geq 0$ and satisfying the compatibility relations \begin{equation}\label{eq:app_simp_1} (\partial_i \times \mathrm{id})^* \alpha^{(m)}(g) = \alpha^{(m-1)}(\partial_i g), \qquad g \in G^{m +1}, \end{equation} in $\Delta_{m-1} \times M$ for every $i \in \{ 0 , \ldots , m\}$ and every $m \geq 1$, together with the $G$-invariance relation \begin{equation} \label{eq:app_simp_2} g^*\alpha^{(m)}(g \cdot g') = \alpha^{(m)}(g'), \quad g \in G, \quad g' \in G^{m+1}. \end{equation} In other words, a simplicial $p$-form $\alpha$ on $X$ is a $p$-form on the \emph{fat realization} $\| X \|$ of $X$, that is, the quotient $$ \|X\| = \bigsqcup_{k \geq 0} (\Delta_k \times X_k)/\sim, $$ where the equivalence relation is given by $(t,f^*x) \sim (f_*t, x)$ for all \emph{injective} order-preserving maps $f:[k] \to [l]$ (such an $f$ is usually called a ``face'' map). Under mild hypotheses on $X$ the canonical map $\|X\| \to |X|$ is a homotopy equivalence; these hypotheses hold for $X=EG \times_G M$, as shown by Segal \cite[A.1]{Segal}. As in the case of ordinary manifolds, the exterior differential and the usual cup product make $\mathrm{A}^\bullet (X)$ a differential graded algebra, and Dupont proves that the cohomology of this complex is isomorphic to the cohomology of $\|X \|$.
Finally, each $\mathrm{A}^p (X)$ decomposes as the direct sum $$\mathrm{A}^p (X) = \bigoplus_{k+l = p} \mathrm{A}^{k,l} (X)$$ where $\mathrm{A}^{k,l} (X)$ consists of the forms whose restriction to $\Delta_m \times X_m$ is locally a sum of forms $$a\, dt_{i_1} \wedge \ldots \wedge dt_{i_k} \wedge dx_{j_1} \wedge \ldots \wedge dx_{j_l}$$ where the $x_j$ are local coordinates on $X_m$ and $(t_0 , \ldots , t_m)$ are the barycentric coordinates of $\Delta_m$. The exterior differentials $d_\Delta$ and $d_X$ with respect to the variables $t$ and $x$, respectively, turn $(\mathrm{A}^\bullet (X) , d)$ into a double complex $(\mathrm{A}^{\bullet , \bullet} , d_\Delta , d_X )$. One can introduce another double complex of differential forms $$(\mathcal{A}^{\bullet,\bullet}(X_\bullet),\delta,d_X).$$ Here $\mathcal{A}^{k, l}(X_\bullet)=A^l(X_k)$ is the space of differential forms of degree $l$ on $X_k$. We set $\delta=\sum_{i}(-1)^i \partial_i^*$. Dupont shows that for every $l$, the map given by integration over the simplices $$\mathcal{I} : (\mathrm{A}^{\bullet ,l} (X) , d_\Delta) \to (A^l (X_\bullet ) , \delta)$$ defines a homotopy equivalence of chain complexes. These equivalences induce isomorphisms between the spectral sequences computing the cohomology of the total complexes \cite[Cor. 2.8]{Dupont}. In the main text, these equivalences are used to obtain cocycles of the group $G$ from certain equivariant cohomology classes. In the remainder of this appendix we explain this construction. The argument amounts to considering the boundary maps of a spectral sequence, but our goal is to describe these maps explicitly. Since the argument in the affine and multiplicative cases is analogous, we concentrate on the elliptic case. Let $E$ be an elliptic curve and consider $E^n$ for an integer $n \geq 2$.
Fix a finite-index subgroup $G$ of $\mathrm{SL}_n(\mathbf{Z})$ and a $G$-invariant $0$-cycle $D$ in $E^n$ of degree zero, supported on torsion points. Let $M=E^n-|D|$. In Chapter \ref{C:1} we explain how $D$ gives rise to a class $$ E[D] \in H^{2n-1}_G (M) = H^{2n-1}(\|(EG \times M)/G\|). $$ We now explain how to associate to $E[D]$ a cohomology class in $$ H^{n-1}(G,\varinjlim_U H^n(U)) $$ where $U$ runs over the affine open subsets of $E^n$ obtained by removing finitely many hyperplanes. The construction is as follows. Choose a closed simplicial $(2n-1)$-form $\alpha \in A^\bullet(EG \times_G M)$ representing $E[D]$. In other words, $\alpha$ is a collection of maps $$ \alpha^{(m)}: G^{m+1} \to A^{2n-1}(\Delta_m \times M), \qquad m \geq 0, $$ satisfying \eqref{eq:app_simp_1}, \eqref{eq:app_simp_2} and $$ (d_\Delta+d_M)\alpha^{(m)}=0. $$ Consider the forms $$ \mathcal{I}\alpha^{(m)} : G^{m+1} \to A^{2n-1-m}(M), \qquad m \geq 0, $$ defined by $$ \mathcal{I}\alpha^{(m)}(g_0,\ldots,g_m) = \int_{\Delta_m} \alpha^{(m)}(g_0,\ldots,g_m). $$ It follows from Stokes' theorem that \begin{equation} d_M \mathcal{I}\alpha^{(0)}=0 \end{equation} and \begin{equation}\label{eq:app_simp_4} \delta \mathcal{I}\alpha^{(m)} + d_M \mathcal{I}\alpha^{(m+1)}= 0 \end{equation} for every $m \geq 0$.
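The Stokes argument behind \eqref{eq:app_simp_4} can be sketched as follows (our reconstruction of the standard computation; signs are schematic and depend on orientation conventions). Integrating the closedness relation over $\Delta_{m+1}$ gives
```latex
0 \;=\; \int_{\Delta_{m+1}} (d_\Delta + d_M)\,\alpha^{(m+1)}
  \;=\; \int_{\partial\Delta_{m+1}} \alpha^{(m+1)}
        \;+\; d_M \int_{\Delta_{m+1}} \alpha^{(m+1)},
```
and the boundary integral is computed face by face using \eqref{eq:app_simp_1}: $\int_{\partial\Delta_{m+1}}\alpha^{(m+1)}(g) = \sum_{i=0}^{m+1}(-1)^i \int_{\Delta_m} \alpha^{(m)}(\partial_i g) = \big(\delta\,\mathcal{I}\alpha^{(m)}\big)(g)$.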
Now consider the forms $$ \widetilde{\alpha}^{(m)} \in \widetilde{A}^{2n-1-m}:=\varinjlim A^{2n-1-m}(U) $$ where $\widetilde{\alpha}^{(m)}$ denotes the image of $\mathcal{I}\alpha^{(m)}$ under the natural map $$A^{2n-1-m}(M) \to \varinjlim A^{2n-1-m}(U).$$ We now construct a sequence of $G$-equivariant maps \begin{equation} \beta^{(m)}:G^{m+1} \to \widetilde{A}^{2n-2-m}, \qquad 0\leq m < n-1, \end{equation} satisfying $d_M \beta^{(0)}=\widetilde{\alpha}^{(0)}$ and \begin{equation} \label{eq:app_simp_5} d_M \beta^{(m)} \pm \delta \beta^{(m-1)} = \widetilde{\alpha}^{(m)} \end{equation} for every $m \in \{1,\ldots,n-2\}$. To do so we proceed as follows: note that every affine open subset $U \subset E^n$ satisfies $H^k(U)=0$ for all $k>n$. Since $d_M \mathcal{I}\alpha^{(0)}(e)=0$, there exists $\beta^{(0)}(e) \in \widetilde{A}^{2n-2} $ such that $$ d_M \beta^{(0)}(e) = \widetilde{\alpha}^{(0)}(e). $$ We set $\beta^{(0)}(g)=g^*(\beta^{(0)}(e))$. Since the form $\widetilde{\alpha}^{(0)}$ is $G$-invariant, for every $g \in G$ we have $d_M \beta^{(0)}(g) = \widetilde{\alpha}^{(0)}(g)$. It then follows from \eqref{eq:app_simp_4} that $$ d_M(\widetilde{\alpha}^{(1)}+\delta\beta^{(0)}) = 0, $$ and proceeding as above one can find a $G$-equivariant map $$ \beta^{(1)}: G^2 \to \widetilde{A}^{2n-3} $$ such that $$ d_M \beta^{(1)} = \widetilde{\alpha}^{(1)}+\delta \beta^{(0)}. $$ Iterating this argument, we obtain $G$-equivariant maps $\beta^{(0)},\ldots, \beta^{(n-2)}$ satisfying \eqref{eq:app_simp_5}. Now, it follows from \eqref{eq:app_simp_4} that $$ d_M(\widetilde{\alpha}^{(n-1)}\pm \delta\beta^{(n-2)})=0 $$ and, passing to cohomology classes, we obtain a map $$ c: G^{n} \to \varinjlim H^n(U), \qquad c(g):=[\widetilde{\alpha}^{(n-1)}(g)\pm \delta \beta^{(n-2)}(g)]. $$ Since $\widetilde{\alpha}$ and $\beta$ are $G$-equivariant, the map $c$ is $G$-equivariant as well.
Note also that $c$ is an $(n-1)$-cocycle: \begin{equation} \begin{split} \delta c(g) &= [\delta(\widetilde{\alpha}^{(n-1)}(g)\pm \delta \beta^{(n-2)}(g))] \\ &= [\delta \widetilde{\alpha}^{(n-1)}(g)] \\ &= [-d_M \widetilde{\alpha}^{(n)}(g)] \\ &= 0. \end{split} \end{equation} The cocycle $c$ depends on the choices made in the construction of the maps $\beta^{(k)}$, but a standard argument shows that its cohomology class $$ [c] \in H^{n-1}(G,\varinjlim H^n(U)) $$ is independent of these choices. It is also independent of the form $\alpha$ chosen to represent the original cohomology class in $H^{2n-1}_G(M)$. \begin{remark} In practice one often finds maps $\widetilde{\alpha}^{(m)}$ such that the form $\widetilde{\alpha}^{(k)}$ vanishes identically for every $k$ in $\{0,\ldots ,n-2\}$. In this case one can take $\beta^{(0)},\ldots,\beta^{(n-2)}$ identically zero, and the map $$ g \mapsto [\widetilde{\alpha}^{(n-1)}(g)]: G^n \to \varinjlim H^n(U) $$ is a cocycle representing the class $[c]$. \end{remark} \setcounter{equation}{0} \chapter{Affine Eisenstein class and obstruction theory} \label{A:B} \resettheoremcounters Let $F$ be a field, for example $\mathbf{C}$, and let $\mathbf{T}_n = \mathbf{T}_n (F)$ be the Tits building of $V=F^n$ with $n \geq 2$. The group $G = \mathrm{GL}_n(F)$ acts naturally on $\mathbf{T}_n$ and we consider the bundle \begin{equation} \begin{tikzcd} EG \times_G \mathbf{T}_n \arrow["\pi" ',d] \\ BG \end{tikzcd} \end{equation} with fiber $\mathbf{T}_n$. Since the Tits building $\mathbf{T}_n$ is $(n-3)$-connected, the first obstruction to the existence of a section of $\pi$ is a class \begin{equation} \label{E:A1} \omega_{n-2} \in H^{n-1}(BG,\pi_{n-2}(\mathbf{T}_n )), \end{equation} see for example \cite[Obstruction Theory, p. 415]{Hatcher}. In \eqref{E:A1} we view the homotopy group $\pi_{n-2}(\mathbf{T}_n)$ of the fiber of $\pi$ as a local system.
Since the Tits building $\mathbf{T}_n$ is $(n-3)$-connected, we have $$\pi_{n-2}(\mathbf{T}_n) = \widetilde{H}_{n-2} (\mathbf{T}_n ) = \mathrm{St} (F^n).$$ We may therefore view $\omega_{n-2}$ as an element of $H^{n-1}(G,\mathrm{St} (F^n))$. \begin{proposition} The obstruction class $\omega_{n-2}$ coincides with the cohomology class associated to the universal symbol of Ash--Rudolph, represented by the cocycle $$G^n \to \mathrm{St} (F^n ); \quad (g_0 , \ldots , g_{n-1} ) \mapsto [g_0^{-1} e_1 , \ldots , g_{n-1}^{-1} e_1 ] .$$ \end{proposition} \begin{proof} We begin by constructing an explicit section $s$ of $\pi$ over the $(n-2)$-skeleton; we then study the obstruction to extending this section to the $(n-1)$-skeleton. We define $s$ first on the $0$-skeleton $BG_0$ and then extend it inductively from $BG_{\leq k}$ to $BG_{\leq k+1}$: \begin{equation} \cdots \begin{tikzcd} (EG \times_G \mathbf{T}_n )_1 \arrow[d,"\pi_1"' ] \arrow[r, shift left] \arrow[r, shift right] & (EG \times_G \mathbf{T}_n )_0 \arrow[d,"\pi_0"' ] \\ BG_1 \arrow[u, dashed, bend right, "s_1"'] \arrow[r, shift left] \arrow[r, shift right] & BG_0 \arrow[u, dashed, bend right, "s_0"']. \end{tikzcd} \end{equation} The $k$-simplices of $EG \times_G \mathbf{T}_n$ are the pairs $(\mathbf{g} , F_\bullet)$, where $\mathbf{g}$ is a $(k+1)$-tuple $(g_0 , \ldots , g_k) \in G^{k+1}$ and $F_\bullet \in (\mathbf{T}_n )_k$ is a $(k+1)$-flag of proper nonzero subspaces of $V$, modulo the equivalence \begin{equation} ((g_0,\ldots,g_k),F_\bullet) \sim ((g_0g^{-1},\ldots,g_kg^{-1}),gF_\bullet), \quad g \in G. \end{equation} In what follows we write $[\mathbf{g},F_\bullet]$ for the equivalence class of $(\mathbf{g},F_\bullet)$. To define the section $s_0$ of $\pi_0$ (the map induced by $\pi$ on the $0$-skeleton) we choose a line $L=\langle v \rangle \subset F^n$ and set \begin{equation} s_0 ( g_0 ) = [g_0 , g_0^{-1} L].
\end{equation} Next we set \begin{equation} s_1(g_0 , g_1) = [(g_0 ,g_1), \Delta(g_0^{-1} L , g_1^{-1} L)] \end{equation} where \begin{equation} \Delta(g_0^{-1} L , g_1^{-1}L) = \begin{tikzcd} \stackrel{\langle g_0^{-1} v \rangle}{\bullet} \arrow[r, dash] & \stackrel{\langle g_0^{-1} v , g_1^{-1}v \rangle}{\bullet} & \arrow[l, dash] \stackrel{\langle g_1^{-1} v \rangle}{\bullet}, \end{tikzcd} \end{equation} and the two segments correspond to the flags\footnote{This path from $g_0^{-1} L$ to $g_1^{-1}L$ in $\mathbf{T}_n$ is not unique, but every other path is longer or passes through a $0$-simplex corresponding to a subspace of $F^n$ of dimension strictly greater than $2$.} $$\langle g_0^{-1} v \rangle \subseteq \langle g_0^{-1} v , g_1^{-1}v \rangle \quad \mbox{and} \quad \langle g_1^{-1}v \rangle \subseteq \langle g_0^{-1} v , g_1^{-1}v \rangle.$$ In general, we set \begin{equation} s_k(g_0 ,\ldots , g_k) = [(g_0 , \ldots , g_k ), \Delta( g_0^{-1} L , \ldots, g_k^{-1} L)], \end{equation} where $\Delta( g_0^{-1} L , \ldots, g_k^{-1} L)$ is the subcomplex of $\mathbf{T}_n$ isomorphic to the first barycentric subdivision of a $k$-simplex of $( \mathbf{T}_n )_k$, with vertices $g_0^{-1} L, \ldots ,g_k^{-1}L$ and with barycenter the subspace $\langle g_0^{-1} v , \ldots , g_k^{-1} v \rangle$ of dimension $k+1$ in $V$; the $k$-simplices of this complex are thus associated with the flags $$\langle g_{\sigma (0)}^{-1} v \rangle \subset \langle g_{\sigma (0)}^{-1} v, g_{\sigma (1)}^{-1} v \rangle \subset \cdots \subset \langle g_{\sigma (0)}^{-1} v, g_{\sigma (1)}^{-1} v ,\ldots, g_{\sigma (k)}^{-1}v \rangle \quad (\sigma \in \mathfrak{S}_{k+1} ) .$$ This inductive construction is possible as long as $k \leq n-2$, that is, as long as $k+1$ is strictly less than the dimension of $V$. We thus obtain a section \begin{equation} s_{\leq n-2}:BG_{\leq n-2} \to (EG \times_G \mathbf{T}_n )_{\leq n-2}.
\end{equation} The obstruction to extending $s$ to the $(n-1)$-skeleton $BG_{\leq n-1}$ is then represented by the map $BG_{n-1} \to \pi_{n-2}(\mathbf{T}_n)$ defined by \begin{equation} (g_0 , \ldots , g_{n-1}) \mapsto \sum_{j=0}^{n-1} (-1)^j [s_{n-2}(g_0 , \ldots , \widehat{g_j} , \ldots , g_{n-1})]. \end{equation} Obstruction theory implies that this map defines a cocycle $$c_{n-2} \in C^{n-1}(BG,\pi_{n-2}(\mathbf{T}_n ))$$ representing $\omega_{n-2}$; in particular $[c_{n-2}]$ is independent of the choice of $L$. Viewed as a class in $H^{n-1}(G,\mathrm{St}_n(F))$, the obstruction class $\omega_{n-2}$ is finally given by \begin{equation} \omega_{n-2}(g_0,\ldots,g_{n-1}) = [g_0^{-1} v ,g_1^{-1}v,\ldots ,g_{n-1}^{-1}v], \end{equation} where $[\cdot]$ denotes the universal modular symbol of Ash--Rudolph, cf. \S \ref{S:ARuniv}. \end{proof} \end{appendix} \bibliographystyle {plain}
\section{Introduction} Grover proposed a quantum algorithm for solving large database search problems in Refs. \cite{grover97,grover01}. Grover's search algorithm finds an unknown marked item in an unstructured database of $N$ items while accessing the database a minimum number of times. From a classical standpoint, it is necessary to test $N/2$ items, on average, before finding the correct item. With Grover's algorithm, however, the same task can be completed successfully with a complexity of order $\sqrt{N}$, that is, with a quadratic speedup. Grover's algorithm was presented in terms of a discrete sequence of unitary logic gates (digital quantum computation). Specifically, the transition probability from the source state $\left\vert \psi_{s}\right\rangle $ to the target state $\left\vert \psi_{w}\right\rangle $ after the $k$-fold sequential application of the so-called Grover quantum search iterate $G$ is given by \begin{equation} \mathcal{P}_{\text{Grover}}\left( k\text{, }N\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|G^{k}|\psi_{s}\right\rangle \right\vert ^{2}=\sin^{2}\left[ \left( 2k+1\right) \tan^{-1}\left( \frac{1}{\sqrt{N-1}}\right) \right] \text{.} \label{pgrover} \end{equation} In the limit of $N$ approaching infinity, $\mathcal{P}_{\text{Grover}}$ in Eq. (\ref{pgrover}) approaches one if $k=O\left( \sqrt{N}\right) $. We point out that the big $O$-notation $f\left( x\right) =O\left( g\left( x\right) \right) $ means that there exist \emph{real} constants $c$ and $x_{0}$ such that $\left\vert f\left( x\right) \right\vert \leq c\left\vert g\left( x\right) \right\vert $ for any $x\geq x_{0}$.
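As a quick numerical illustration of Eq. (\ref{pgrover}) (this code and its function names are ours, not part of the paper): since $\tan^{-1}(1/\sqrt{N-1})=\sin^{-1}(1/\sqrt{N})$, roughly $(\pi/4)\sqrt{N}$ iterations bring the success probability arbitrarily close to one, and for $N=4$ a single iteration already succeeds with certainty.

```python
import math

def grover_probability(k, N):
    # Success probability after k Grover iterations on N items, Eq. (pgrover).
    theta = math.atan(1.0 / math.sqrt(N - 1))  # equals asin(1/sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

# N = 4: theta = pi/6, so sin((2*1+1)*pi/6) = sin(pi/2) = 1.
p_small = grover_probability(1, 4)

# Large N: about (pi/4)*sqrt(N) iterations suffice.
N = 2 ** 20
k_opt = round(math.pi / (4 * math.asin(1.0 / math.sqrt(N))) - 0.5)
p_large = grover_probability(k_opt, N)
```

The rounding of $k$ is what keeps $\mathcal{P}_{\text{Grover}}$ slightly below one for generic $N$: the rotation angle is quantized in units of $2\tan^{-1}(1/\sqrt{N-1})$.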
The temporal evolution of the state vector $\left\vert \psi\left( t\right) \right\rangle $ of a closed quantum system is characterized by the Schr\"{o}dinger equation \begin{equation} i\hslash\partial_{t}\left\vert \psi\left( t\right) \right\rangle =\mathcal{H}\left( t\right) \left\vert \psi\left( t\right) \right\rangle \text{,} \label{H1} \end{equation} where $\hslash\overset{\text{def}}{=}h/\left( 2\pi\right) $ is the reduced Planck constant, $i$ denotes the imaginary unit, and $\partial_{t}\overset{\text{def}}{=}\partial/\partial t$. The Hamiltonian $\mathcal{H}\left( t\right) $ in Eq. (\ref{H1}) encodes all relevant information about the time evolution of the quantum system. From a quantum computing standpoint, if the Hamiltonian $\mathcal{H}\left( t\right) $ is known and properly designed, the quantum mechanical motion is known and the initial state (source state, $\left\vert \psi_{s}\right\rangle $) at $t=0$ can potentially evolve to a given final state (target state, $\left\vert \psi_{w}\right\rangle $) at $t=T$. In particular, for any instant $0\leq t\leq T$, the probability $\mathcal{P}_{\left\vert \psi\left( t\right) \right\rangle \rightarrow\left\vert \psi_{w}\right\rangle }$ that the system transitions from the state $\left\vert \psi\left( t\right) \right\rangle $ to the state $\left\vert \psi_{w}\right\rangle $ under the working assumption of a constant Hamiltonian is given by \begin{equation} \mathcal{P}_{\left\vert \psi\left( t\right) \right\rangle \rightarrow \left\vert \psi_{w}\right\rangle }\overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|\psi\left( t\right) \right\rangle \right\vert ^{2}=\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert ^{2}\text{.} \end{equation} The unitary operator $\mathcal{U}\left( t\right) \overset{\text{def}}{=}e^{-\frac{i}{\hslash}\mathcal{H}t}$ denotes the temporal evolution operator. Fig.
$1$ displays a graphical depiction of the digital (discrete time) and analog (continuous time) quantum search algorithms. Working in a continuous time quantum computing framework, Farhi and Gutmann proposed an analog version of Grover's algorithm in Ref. \cite{farhi98} where the state of the quantum register evolves continuously in time under the action of a suitably chosen driving Hamiltonian (analog quantum computation). Specifically, the transition probability from the source state $\left\vert \psi_{s}\right\rangle $ to the target state $\left\vert \psi_{w}\right\rangle $ after the application of the unitary continuous time evolution operator $\mathcal{U}_{\text{FG}}\left( t\right) \overset{\text{def}}{=}e^{-\frac{i}{\hslash}\mathcal{H}_{\text{FG}}t}$ for a closed quantum system described by a constant Hamiltonian $\mathcal{H}_{\text{FG}}$ is given by \begin{equation} \mathcal{P}_{\text{Farhi-Gutmann}}\left( t\text{, }x\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}_{\text{FG}}t}|\psi_{s}\right\rangle \right\vert ^{2}=\sin^{2}\left( \frac{Ex}{\hslash}t\right) +x^{2}\cos^{2}\left( \frac{Ex}{\hslash}t\right) \text{,} \label{PFG} \end{equation} where $E$ is an energy-like positive \emph{real} constant coefficient. We point out that $\mathcal{P}_{\text{Farhi-Gutmann}}$ in Eq. (\ref{PFG}) approaches one as $t$ approaches $h/(4Ex)$. For recent discussions on the transition from the digital to the analog quantum computational setting for Grover's algorithm, we refer to Refs. \cite{carlo1,carlo2,cafaro2017}. Ideally, one seeks to achieve unit success probability (that is, unit fidelity) in the shortest possible time in a quantum search problem. There are, however, both practical and foundational issues that can justify the exploration of alternative circumstances.
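A similar sanity check for Eq. (\ref{PFG}) (illustrative code, our naming; we work in units with $\hslash=1$, hence $h=2\pi$): at $t^{\ast}=h/(4Ex)$ the phase $Ext^{\ast}/\hslash$ equals $\pi/2$, so the transition probability reaches one, while at $t=0$ it reduces to the squared overlap $x^{2}$.

```python
import math

HBAR = 1.0  # units with hbar = 1, so h = 2*pi

def p_farhi_gutmann(t, x, E=1.0):
    # Analog-search transition probability, Eq. (PFG).
    phase = E * x * t / HBAR
    return math.sin(phase) ** 2 + x ** 2 * math.cos(phase) ** 2

x = 2.0 ** -5                                    # overlap 1/sqrt(N) for N = 2**10
E = 1.0
t_star = (2.0 * math.pi * HBAR) / (4.0 * E * x)  # t* = h/(4 E x)
p_star = p_farhi_gutmann(t_star, x, E)
```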
For instance, from a practical standpoint, one would desire to terminate a quantum information processing task in the minimum possible time so as to mitigate decoherent effects that can appear while controlling (by means of an external magnetic field, for instance) the dynamics of a source state driven towards a target state \cite{rabitz12,rabitz15,cappellaro18}. In addition, from a theoretical viewpoint, it is known that no quantum measurement can perfectly discriminate between two nonorthogonal pure states \cite{chefles00,croke09}. Moreover, it is equally well known that suitably engineered quantum measurements can enhance the transition probability between two pure states \cite{fritz10}. Therefore, minimizing the search time can be important from an experimental standpoint, while seeking at any cost a perfect overlap between the final state and the target state can be unnecessary from a purely foundational standpoint. Similar lines of reasoning have paved the way to the fascinating exploration of a possible tradeoff between fidelity and time optimal control of quantum unitary transformations in Ref. \cite{rabitz12}. In this paper, motivated by these issues and starting from the consideration of a family of multi-parameter generalized quantum search Hamiltonians originally introduced by Bae and Kwon in Ref. \cite{bae02}, we present a detailed analysis concerning minimum search times and maximal success probabilities that can be obtained from suitably chosen sub-families belonging to the original family of Hamiltonians. In particular, we uncover the existence of quantum search Hamiltonians characterized by minimum search times needed for a perfect search that are smaller than the one required by the Farhi-Gutmann perfect quantum search Hamiltonian.
Furthermore, and more interestingly, we report on the existence of imperfect quantum search Hamiltonians that, despite being incapable of guaranteeing perfect search, can outperform (in terms of minimum search time) perfect search Hamiltonians provided that only a very large, nearly optimal fidelity value is required to stop the search. The layout of the rest of the paper is as follows. In Section II, we provide a detailed computation of a general expression for the transition probability in the case of a quantum mechanical evolution governed by a time-independent generalized quantum search Hamiltonian. In Section III, we discuss a variety of limiting cases that arise from the generalized search Hamiltonian. In particular, we distinguish optimal scenarios (that is, cases where the probability of finding the target state equals one) from suboptimal scenarios (that is, cases where the probability of finding the target state is less than one). Our concluding remarks appear in Section IV. Finally, technical details are presented in Appendices A, B, and C. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{fig1}\caption{Gate-level schematic of the (a) digital and (b) analog quantum search algorithms. \label{fig1}} \end{figure} \section{Transition probability} In this section, we consider the time-independent search Hamiltonian $\mathcal{H}$ defined as \cite{bae02} \begin{equation} \mathcal{H}\overset{\text{def}}{=}E\left[ \alpha\left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\beta\left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\gamma\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert +\delta\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] \text{.} \label{hamilton} \end{equation} The dimensionless coefficients $\alpha$, $\beta$, $\gamma$, $\delta$ in Eq.
(\ref{hamilton}) are \emph{complex} expansion coefficients while $E$ is a \emph{real} constant with energy physical dimensions. We also assume that the quantum state $\left\vert \psi_{w}\right\rangle $ is the normalized target state while $\left\vert \psi_{s}\right\rangle $ is the normalized initial state with quantum overlap $x\overset{\text{def}}{=}\left\langle \psi_{w}|\psi_{s}\right\rangle $ that evolves unitarily according to the Schr\"{o}dinger quantum mechanical evolution law \begin{equation} \left\vert \psi_{s}\right\rangle \mapsto e^{-\frac{i}{\hslash}\mathcal{H}t}\left\vert \psi_{s}\right\rangle \text{.} \end{equation} In general, $x$ is a complex quantity. However, since any phase factor $e^{i\phi_{ws}}$ with $\phi_{ws}\in\mathbb{R}$ in $x\overset{\text{def}}{=}\left\langle \psi_{w}|\psi_{s}\right\rangle =\left\vert \left\langle \psi_{w}|\psi_{s}\right\rangle \right\vert e^{i\phi_{ws}}$ can be incorporated into the state $\left\vert \psi_{s}\right\rangle $, one can assume that $x\in\mathbb{R}_{+}\backslash\left\{ 0\right\} $. Our objective is to find the time $t^{\ast}$ such that $\mathcal{P}\left( t^{\ast}\right) =\mathcal{P}_{\max}$, where $\mathcal{P}\left( t\right) $ is the transition probability defined as \cite{sakurai,picasso} \begin{equation} \mathcal{P}\left( t\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert ^{2}\text{.} \label{fidelity} \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{fig2}\caption{Visualization of the normalized states $\left\vert \psi_{w}\right\rangle $, $\left\vert \psi_{s}\right\rangle $, and $\left\vert \psi_{r}\right\rangle $, as well as the quantum overlap $x$.
\label{fig2}} \end{figure}Using the Gram-Schmidt orthonormalization technique, we can construct an orthonormal set of quantum state vectors starting from the set $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{s}\right\rangle \right\} $. The transition from a set of linearly independent state vectors to a set of orthonormal state vectors can be described as \begin{equation} \left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{s}\right\rangle \right\} \rightarrow\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle \right\} \rightarrow\left\{ \frac{\left\vert \psi_{w}\right\rangle }{\left\Vert \left\vert \psi_{w}\right\rangle \right\Vert }\text{, }\frac{\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle }{\left\Vert \left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle \right\Vert }\right\} \text{.} \end{equation} For notational simplicity, let us define the quantum state vector $\left\vert \psi_{r}\right\rangle $ as \begin{equation} \left\vert \psi_{r}\right\rangle \overset{\text{def}}{=}\frac{\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle }{\left\Vert \left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle \right\Vert }\text{.} \label{erre2} \end{equation} Recalling the definition of the quantum overlap $x$, Eq.
(\ref{erre2}) can be expressed as \begin{equation} \left\vert \psi_{r}\right\rangle =\frac{\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle }{\sqrt{\left\langle \psi_{s}|\psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle ^{2}}}=\frac{1}{\sqrt{1-x^{2}}}\left( \left\vert \psi_{s}\right\rangle -x\left\vert \psi_{w}\right\rangle \right) \text{.} \label{fiar} \end{equation} Fig. $2$ displays a graphical depiction of the orthonormal states $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $ together with the source state $\left\vert \psi_{s}\right\rangle $ and the quantum overlap $x$. Fig. $3$, instead, is a simple depiction of the orthogonalization and normalization procedures that specify the Gram-Schmidt orthonormalization technique. Note that because of the definition of $\left\vert \psi_{r}\right\rangle $ in Eq. (\ref{fiar}), $x$ must be different from one. In terms of the set of orthonormal basis vectors $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $, the source state $\left\vert \psi_{s}\right\rangle $ becomes \begin{equation} \left\vert \psi_{s}\right\rangle =\left( \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{r}\right\rangle \left\langle \psi_{r}\right\vert \right) \left\vert \psi_{s}\right\rangle =\left\langle \psi_{w}|\psi_{s}\right\rangle \left\vert \psi_{w}\right\rangle +\left\langle \psi_{r}|\psi_{s}\right\rangle \left\vert \psi_{r}\right\rangle \text{.} \label{chell} \end{equation} Note that the quantum mechanical overlap $\left\langle \psi_{r}|\psi_{s}\right\rangle $ in Eq.
(\ref{chell}) can be recast as \begin{equation} \left\langle \psi_{r}|\psi_{s}\right\rangle =\frac{1}{\sqrt{1-x^{2}}}\left( \left\langle \psi_{s}\right\vert -x\left\langle \psi_{w}\right\vert \right) \left\vert \psi_{s}\right\rangle =\frac{1}{\sqrt{1-x^{2}}}\left( 1-x^{2}\right) =\sqrt{1-x^{2}}\text{.} \label{chist} \end{equation} Therefore, by using Eq. (\ref{chist}), the state $\left\vert \psi_{s}\right\rangle $ in Eq. (\ref{chell}) becomes \begin{equation} \left\vert \psi_{s}\right\rangle =x\left\vert \psi_{w}\right\rangle +\sqrt{1-x^{2}}\left\vert \psi_{r}\right\rangle \text{.} \label{sr} \end{equation} The matrix representation of the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) with respect to the orthonormal basis $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $, where $\left\langle \psi_{w}|\psi_{r}\right\rangle =\delta_{wr}$ with $\delta_{wr}$ denoting the Kronecker delta, can be formally written as \begin{equation} \left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }\overset{\text{def}}{=}\left( \begin{array} [c]{cc} \left\langle \psi_{w}|\mathcal{H}|\psi_{w}\right\rangle & \left\langle \psi_{w}|\mathcal{H}|\psi_{r}\right\rangle \\ \left\langle \psi_{r}|\mathcal{H}|\psi_{w}\right\rangle & \left\langle \psi_{r}|\mathcal{H}|\psi_{r}\right\rangle \end{array} \right) \text{.} \end{equation} Using Eqs.
(\ref{hamilton}) and (\ref{sr}) together with the orthonormality conditions $\left\langle \psi_{w}|\psi_{r}\right\rangle =\delta_{wr}$, we have \begin{equation} \left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }=\left( \begin{array} [c]{cc} H_{11} & H_{12}\\ H_{21} & H_{22} \end{array} \right) \text{,} \label{symm} \end{equation} where \begin{align} & H_{11}\overset{\text{def}}{=}E\left[ \alpha+\left( \beta+\gamma\right) x+\delta x^{2}\right] \text{, }H_{12}\overset{\text{def}}{=}E\sqrt{1-x^{2}}\left( \beta+\delta x\right) \text{,}\nonumber\\ & \nonumber\\ & H_{21}\overset{\text{def}}{=}E\sqrt{1-x^{2}}\left( \gamma+\delta x\right) \text{, }H_{22}\overset{\text{def}}{=}E\delta\left( 1-x^{2}\right) \text{.} \label{heq} \end{align} \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{fig3}\caption{Illustration of the Gram-Schmidt orthonormalization procedure for some vectors $\left\vert a\right\rangle $ and $\left\vert b\right\rangle $. In this case, the resulting orthonormal basis consists of $\left\vert e_{1}\right\rangle \overset{\text{def}}{=}\frac{\left\vert a\right\rangle }{\left\Vert \left\vert a\right\rangle \right\Vert }$ and $\left\vert e_{2}\right\rangle \overset{\text{def}}{=}\frac{\left\vert b\right\rangle -\left\langle a|b\right\rangle \left\vert a\right\rangle }{\left\Vert \left\vert b\right\rangle -\left\langle a|b\right\rangle \left\vert a\right\rangle \right\Vert }$. \label{fig3}} \end{figure}Observe that the Hamiltonian $\mathcal{H}$ is a Hermitian operator and, therefore, its eigenvalues must be \emph{real} (for details, see Appendix A). For this reason, recalling the previous constraints on $x$, we finally have $x\in\left( 0\text{, }1\right) $.
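The matrix elements of Eq. (\ref{heq}) are easy to check numerically (illustrative code with made-up sample values; the function name is ours): choosing $\alpha$ and $\delta$ real and $\beta=\gamma^{\ast}$ produces a Hermitian $2\times 2$ matrix, with real diagonal entries and $H_{12}=H_{21}^{\ast}$.

```python
import math

def search_hamiltonian(alpha, beta, gamma, delta, x, E=1.0):
    # 2x2 matrix of the generalized search Hamiltonian in the
    # {|psi_w>, |psi_r>} basis; entries H11..H22 as in Eq. (heq).
    s = math.sqrt(1.0 - x ** 2)
    H11 = E * (alpha + (beta + gamma) * x + delta * x ** 2)
    H12 = E * s * (beta + delta * x)
    H21 = E * s * (gamma + delta * x)
    H22 = E * delta * (1.0 - x ** 2)
    return [[H11, H12], [H21, H22]]

# Sample values: alpha, delta real and beta = conj(gamma).
gamma = 0.3 + 0.4j
H = search_hamiltonian(1.0, gamma.conjugate(), gamma, 0.7, x=0.25)
```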
Furthermore, imposing that $\mathcal{H}=\mathcal{H}^{\dagger}$, where the dagger symbol \textquotedblleft$\dagger$\textquotedblright\ denotes the Hermitian conjugation operation, we have \begin{equation} \left( \begin{array} [c]{cc} H_{11} & H_{12}\\ H_{21} & H_{22} \end{array} \right) =\left( \begin{array} [c]{cc} H_{11}^{\ast} & H_{21}^{\ast}\\ H_{12}^{\ast} & H_{22}^{\ast} \end{array} \right) \text{.} \label{heq2} \end{equation} Then, from Eqs. (\ref{heq}) and (\ref{heq2}), it follows that $\alpha$ and $\delta$ must be \emph{real} coefficients while $\beta=\gamma^{\ast}$. The symbol \textquotedblleft$\ast$\textquotedblright\ denotes the \emph{complex} conjugation operation. Next, let us diagonalize the Hermitian matrix $\left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }$ in Eq. (\ref{symm}). The two \emph{real} eigenvalues $\lambda_{\pm}$ of the matrix can be written as \begin{equation} \lambda_{\pm}\overset{\text{def}}{=}\frac{1}{2}\left[ \left( H_{11}+H_{22}\right) \pm\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}\right] \text{.} \label{eigen} \end{equation} The eigenspaces $\mathcal{E}_{\lambda_{-}}$ and $\mathcal{E}_{\lambda_{+}}$ that correspond to the eigenvalues $\lambda_{-}$ and $\lambda_{+}$ are defined as \begin{equation} \mathcal{E}_{\lambda_{-}}\overset{\text{def}}{=}\text{\textrm{Span}}\left\{ \left\vert v_{\lambda_{-}}\right\rangle \right\} \text{, and }\mathcal{E}_{\lambda_{+}}\overset{\text{def}}{=}\text{\textrm{Span}}\left\{ \left\vert v_{\lambda_{+}}\right\rangle \right\} \text{,} \end{equation} respectively.
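Equation (\ref{eigen}) can likewise be verified numerically (a sketch with sample entries of our choosing): the closed-form eigenvalues satisfy the trace and determinant identities $\lambda_{-}+\lambda_{+}=H_{11}+H_{22}$ and $\lambda_{-}\lambda_{+}=H_{11}H_{22}-H_{12}H_{21}$, and they are real when the matrix is Hermitian.

```python
import cmath

def eigvals_2x2(H):
    # Closed-form eigenvalues lambda_-, lambda_+ of a 2x2 matrix, Eq. (eigen).
    tr = H[0][0] + H[1][1]
    disc = cmath.sqrt((H[0][0] - H[1][1]) ** 2 + 4.0 * H[0][1] * H[1][0])
    return (tr - disc) / 2.0, (tr + disc) / 2.0

# Sample Hermitian matrix: real diagonal, H12 = conj(H21).
H = [[2.0, 1.0 - 0.5j], [1.0 + 0.5j, 0.5]]
lam_minus, lam_plus = eigvals_2x2(H)
```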
Furthermore, two orthogonal eigenvectors $\left\vert v_{\lambda_{+}}\right\rangle$ and $\left\vert v_{\lambda_{-}}\right\rangle$ corresponding to the distinct eigenvalues $\lambda_{+}$ and $\lambda_{-}$ are given by
\begin{equation}
\left\vert v_{\lambda_{+}}\right\rangle \overset{\text{def}}{=}\left(
\begin{array}[c]{c}
\frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) +\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\right] \\
1
\end{array}
\right) \text{,} \label{v1}
\end{equation}
and
\begin{equation}
\left\vert v_{\lambda_{-}}\right\rangle \overset{\text{def}}{=}\left(
\begin{array}[c]{c}
\frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) -\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\right] \\
1
\end{array}
\right) \text{,} \label{v2}
\end{equation}
respectively. For notational simplicity, let us introduce two \emph{complex} quantities $\mathcal{A}$ and $\mathcal{B}$ defined as
\begin{equation}
\mathcal{A}\overset{\text{def}}{=}\frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) -\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\right] \text{,} \label{anew}
\end{equation}
and
\begin{equation}
\mathcal{B}\overset{\text{def}}{=}\frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) +\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\right] \text{,} \label{bnew}
\end{equation}
respectively. Using Eqs.
(\ref{v1}), (\ref{v2}), (\ref{anew}), and (\ref{bnew}), the eigenvector matrix $M_{\mathcal{H}}$ for the matrix $\left[ \mathcal{H}\right]_{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }$ and its inverse $M_{\mathcal{H}}^{-1}$ can be formally written as
\begin{equation}
M_{\mathcal{H}}\overset{\text{def}}{=}\left(
\begin{array}[c]{cc}
\mathcal{A} & \mathcal{B}\\
1 & 1
\end{array}
\right) \text{,} \label{mmatrix2}
\end{equation}
and
\begin{equation}
M_{\mathcal{H}}^{-1}\overset{\text{def}}{=}\frac{1}{\mathcal{A}-\mathcal{B}}\left(
\begin{array}[c]{cc}
1 & -\mathcal{B}\\
-1 & \mathcal{A}
\end{array}
\right) =M_{\mathcal{H}}^{\dagger}\text{,} \label{mimatrix2}
\end{equation}
respectively. In terms of the matrices $M_{\mathcal{H}}$ in Eq. (\ref{mmatrix2}), $M_{\mathcal{H}}^{-1}$ in Eq. (\ref{mimatrix2}), and the diagonal matrix $H_{\text{diagonal}}$ defined as
\begin{equation}
H_{\text{diagonal}}\overset{\text{def}}{=}\left[ \mathcal{H}\right]_{\left\{ \left\vert v_{\lambda_{-}}\right\rangle \text{, }\left\vert v_{\lambda_{+}}\right\rangle \right\} }=\left(
\begin{array}[c]{cc}
\left\langle v_{\lambda_{-}}|\mathcal{H}|v_{\lambda_{-}}\right\rangle & \left\langle v_{\lambda_{-}}|\mathcal{H}|v_{\lambda_{+}}\right\rangle \\
\left\langle v_{\lambda_{+}}|\mathcal{H}|v_{\lambda_{-}}\right\rangle & \left\langle v_{\lambda_{+}}|\mathcal{H}|v_{\lambda_{+}}\right\rangle
\end{array}
\right) =\left(
\begin{array}[c]{cc}
\lambda_{-} & 0\\
0 & \lambda_{+}
\end{array}
\right) \text{,} \label{hdiagonal}
\end{equation}
the matrix $\left[ \mathcal{H}\right]_{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }$ in Eq.
(\ref{symm}) becomes
\begin{equation}
\left[ \mathcal{H}\right]_{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }=M_{\mathcal{H}}H_{\text{diagonal}}M_{\mathcal{H}}^{-1}=\left(
\begin{array}[c]{cc}
\mathcal{A} & \mathcal{B}\\
1 & 1
\end{array}
\right) \left(
\begin{array}[c]{cc}
\lambda_{-} & 0\\
0 & \lambda_{+}
\end{array}
\right) \left(
\begin{array}[c]{cc}
\frac{1}{\mathcal{A}-\mathcal{B}} & \frac{-\mathcal{B}}{\mathcal{A}-\mathcal{B}}\\
\frac{-1}{\mathcal{A}-\mathcal{B}} & \frac{\mathcal{A}}{\mathcal{A}-\mathcal{B}}
\end{array}
\right) \text{.}
\end{equation}
We recall that the eigenvalues in Eq. (\ref{hdiagonal}) are defined in Eq. (\ref{eigen}) while $\mathcal{A}$ and $\mathcal{B}$ appear in Eqs. (\ref{anew}) and (\ref{bnew}), respectively. At this juncture, we also recall that our objective is to find the time $t^{\ast}$ such that $\mathcal{P}\left( t^{\ast}\right) =\mathcal{P}_{\max}$ where the transition probability $\mathcal{P}\left( t\right)$ is defined in Eq. (\ref{fidelity}). Employing standard linear algebra techniques, $\mathcal{P}\left( t\right)$ can be recast as
\begin{equation}
\mathcal{P}\left( t\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert^{2}=\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}M\mathcal{H}_{\text{diagonal}}M^{\dagger}t}|\psi_{s}\right\rangle \right\vert^{2}=\left\vert \left\langle \psi_{w}|Me^{-\frac{i}{\hslash}\mathcal{H}_{\text{diagonal}}t}M^{\dagger}|\psi_{s}\right\rangle \right\vert^{2}\text{,} \label{pt3}
\end{equation}
where $\mathcal{H}_{\text{diagonal}}$ denotes the Hermitian operator whose matrix representation is $H_{\text{diagonal}}$ in Eq. (\ref{hdiagonal}).
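As a quick sanity check on the spectral decomposition above, the following sketch (an added illustration, not part of the original derivation; the parameter values are arbitrary) builds the matrix of Eq. (\ref{heq}) for a \emph{complex} $\beta$ with $\gamma=\beta^{\ast}$ and verifies numerically that the quantities of Eqs. (\ref{eigen}), (\ref{anew}), (\ref{bnew}), (\ref{mmatrix2}), and (\ref{mimatrix2}) reproduce $\left[\mathcal{H}\right]$:

```python
import numpy as np

hbar = 1.0
E, x = 1.0, 0.5
alpha, delta = 1.0, 0.7
beta = 0.3 + 0.2j               # complex beta; gamma = beta* keeps H Hermitian
gamma = np.conj(beta)

# Matrix elements of Eq. (heq)
s = np.sqrt(1 - x**2)
H11 = E * (alpha + (beta + gamma) * x + delta * x**2)
H12 = E * s * (beta + delta * x)
H21 = E * s * (gamma + delta * x)
H22 = E * delta * (1 - x**2)
H = np.array([[H11, H12], [H21, H22]])

# Eigenvalues of Eq. (eigen) and the quantities A, B of Eqs. (anew), (bnew)
root = np.sqrt((H11 - H22)**2 + 4 * H12 * H21)
lam_m = 0.5 * ((H11 + H22) - root)
lam_p = 0.5 * ((H11 + H22) + root)
A = ((H11 - H22) - root) / (2 * H21)
B = ((H11 - H22) + root) / (2 * H21)

# Eigenvector matrix M and its inverse, Eqs. (mmatrix2), (mimatrix2)
M = np.array([[A, B], [1, 1]])
M_inv = np.array([[1, -B], [-1, A]]) / (A - B)

# M H_diagonal M^{-1} reproduces H, and the columns of M are eigenvectors
assert np.allclose(M @ np.diag([lam_m, lam_p]) @ M_inv, H)
assert np.allclose(H @ M[:, 0], lam_m * M[:, 0])
assert np.allclose(H @ M[:, 1], lam_p * M[:, 1])
```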
Using the matrix notation with components expressed with respect to the orthonormal basis $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\}$, the quantum states $\left\vert \psi_{w}\right\rangle$ and $\left\vert \psi_{s}\right\rangle$ are given by
\begin{equation}
\left\vert \psi_{w}\right\rangle \overset{\text{def}}{=}\left(
\begin{array}[c]{c}
1\\
0
\end{array}
\right) \text{, and }\left\vert \psi_{s}\right\rangle \overset{\text{def}}{=}\left(
\begin{array}[c]{c}
x\\
\sqrt{1-x^{2}}
\end{array}
\right) \text{,} \label{matic2}
\end{equation}
respectively. By means of Eqs. (\ref{mmatrix2}), (\ref{mimatrix2}), and (\ref{matic2}), the quantum state amplitude $\left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle$ that appears in the expression of the fidelity $\mathcal{P}\left( t\right)$ in Eq. (\ref{pt3}) becomes
\begin{equation}
\left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle =\frac{1}{\mathcal{A}-\mathcal{B}}\left[ \mathcal{A}e^{-\frac{i}{\hslash}\lambda_{-}t}\left( x-\mathcal{B}\sqrt{1-x^{2}}\right) -\mathcal{B}e^{-\frac{i}{\hslash}\lambda_{+}t}\left( x-\mathcal{A}\sqrt{1-x^{2}}\right) \right] \text{,} \label{part1b}
\end{equation}
and, as a consequence, its complex conjugate $\left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle^{\ast}$ is
\begin{equation}
\left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle^{\ast}=\frac{1}{\mathcal{A}^{\ast}-\mathcal{B}^{\ast}}\left[ \mathcal{A}^{\ast}e^{\frac{i}{\hslash}\lambda_{-}t}\left( x-\mathcal{B}^{\ast}\sqrt{1-x^{2}}\right) -\mathcal{B}^{\ast}e^{\frac{i}{\hslash}\lambda_{+}t}\left( x-\mathcal{A}^{\ast}\sqrt{1-x^{2}}\right) \right] \text{.} \label{part2b}
\end{equation}
Observe that
\begin{equation}
e^{-\frac{i}{\hslash}\lambda_{-}t}=e^{-\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}e^{i\frac{\mathrm{a}}{\hslash}t}\text{ and }e^{-\frac{i}{\hslash
}\lambda_{+}t}=e^{-\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}e^{-i\frac{\mathrm{a}}{\hslash}t} \label{aeq}
\end{equation}
where, recalling Eq. (\ref{eigen}), the \emph{real} quantity $\mathrm{a}$ is defined as
\begin{equation}
\mathrm{a}\overset{\text{def}}{=}\frac{1}{2}\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\text{.} \label{adef}
\end{equation}
Employing Eq. (\ref{aeq}), the \emph{complex} probability amplitudes in Eqs. (\ref{part1b}) and (\ref{part2b}) become
\begin{equation}
\left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle =e^{-\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}\left[ \frac{\mathcal{A}\left( x-\mathcal{B}\sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B}}e^{i\frac{\mathrm{a}}{\hslash}t}-\frac{\mathcal{B}\left( x-\mathcal{A}\sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B}}e^{-i\frac{\mathrm{a}}{\hslash}t}\right] \text{,} \label{part3}
\end{equation}
and
\begin{equation}
\left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle^{\ast}=e^{\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}\left[ \frac{\mathcal{A}^{\ast}\left( x-\mathcal{B}^{\ast}\sqrt{1-x^{2}}\right) }{\mathcal{A}^{\ast}-\mathcal{B}^{\ast}}e^{-i\frac{\mathrm{a}}{\hslash}t}-\frac{\mathcal{B}^{\ast}\left( x-\mathcal{A}^{\ast}\sqrt{1-x^{2}}\right) }{\mathcal{A}^{\ast}-\mathcal{B}^{\ast}}e^{i\frac{\mathrm{a}}{\hslash}t}\right] \text{,} \label{part4}
\end{equation}
respectively. Using Eqs. (\ref{part3}) and (\ref{part4}) and introducing the following three quantities
\begin{equation}
\tilde{A}\overset{\text{def}}{=}\frac{\mathcal{A}\left( x-\mathcal{B}\sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B}}\text{, }\tilde{B}\overset{\text{def}}{=}-\frac{\mathcal{B}\left( x-\mathcal{A}\sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B}}\text{, and }\tilde{\alpha}=\frac{\mathrm{a}}{\hslash}t\text{,} \label{newroba}
\end{equation}
the transition probability $\mathcal{P}\left( t\right)$ in Eq.
(\ref{fidelity}) can be written as
\begin{equation}
\mathcal{P}\left( \tilde{\alpha}\right) =\left[ \tilde{A}e^{i\tilde{\alpha}}+\tilde{B}e^{-i\tilde{\alpha}}\right] \left[ \tilde{A}^{\ast}e^{-i\tilde{\alpha}}+\tilde{B}^{\ast}e^{i\tilde{\alpha}}\right] =\left\vert \tilde{A}\right\vert^{2}+\left\vert \tilde{B}\right\vert^{2}+2\tilde{A}\tilde{B}^{\ast}\cos\left( 2\tilde{\alpha}\right) \text{,}
\end{equation}
where we point out that $\tilde{A}\tilde{B}^{\ast}$ is \emph{real} since $H_{12}=H_{21}^{\ast}$. By employing standard trigonometric identities in a convenient sequential order (for details, see Appendix B), we find
\begin{equation}
\mathcal{P}\left( \tilde{\alpha}\right) =\left\vert \tilde{A}-\tilde{B}\right\vert^{2}\sin^{2}\left( \tilde{\alpha}\right) +\left\vert \tilde{A}+\tilde{B}\right\vert^{2}\cos^{2}\left( \tilde{\alpha}\right) \text{.} \label{fess}
\end{equation}
Using Eqs. (\ref{newroba}), (\ref{adef}), (\ref{anew}), and (\ref{bnew}), the transition probability $\mathcal{P}\left( \tilde{\alpha}\right)$ in Eq. (\ref{fess}) becomes
\begin{equation}
\mathcal{P}\left( t\right) =\frac{\left\vert \left( H_{11}-H_{22}\right) x+2H_{12}\sqrt{1-x^{2}}\right\vert^{2}}{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\sin^{2}\left( \frac{\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}}{2\hslash}t\right) +x^{2}\cos^{2}\left( \frac{\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}}{2\hslash}t\right) \text{.} \label{it}
\end{equation}
From Eq.
(\ref{it}), it follows that the maximum $\mathcal{P}_{\max}=\mathcal{P}\left( t^{\ast}\right)$ of $\mathcal{P}\left( t\right)$ occurs at the instant $t^{\ast}$,
\begin{equation}
t^{\ast}\overset{\text{def}}{=}\frac{\pi\hslash}{\sqrt{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}}\text{,} \label{start}
\end{equation}
and equals
\begin{equation}
\mathcal{P}_{\max}=\frac{\left\vert \left( H_{11}-H_{22}\right) x+2H_{12}\sqrt{1-x^{2}}\right\vert^{2}}{\left( H_{11}-H_{22}\right)^{2}+4H_{12}H_{21}}\text{.} \label{maxp}
\end{equation}
Finally, making use of Eq. (\ref{heq}) and recalling that $\alpha$ and $\delta$ must be \emph{real} coefficients while $\beta=\gamma^{\ast}$, $\mathcal{P}_{\max}$ in Eq. (\ref{maxp}) becomes
\begin{equation}
\mathcal{P}_{\max}\left( \alpha\text{, }\beta\text{, }\delta\text{, }x\right) =\frac{\left\vert \left[ \left( \alpha-\delta\right) +\left( \beta+\beta^{\ast}\right) x+2\delta x^{2}\right] x+2\left( \beta+\delta x\right) \left( 1-x^{2}\right) \right\vert^{2}}{\left[ \left( \alpha-\delta\right) +\left( \beta+\beta^{\ast}\right) x+2\delta x^{2}\right]^{2}+4\left( 1-x^{2}\right) \left( \beta+\delta x\right) \left( \beta^{\ast}+\delta x\right) }\text{.} \label{nono}
\end{equation}
Note that $\gamma=\beta^{\ast}$, $\beta+\beta^{\ast}=2\operatorname{Re}\left( \beta\right) \in\mathbb{R}$, and the product $\left( \beta+\delta x\right) \left( \beta^{\ast}+\delta x\right)$ is a \emph{real} quantity for any \emph{complex} parameter $\beta$.

\section{Discussion}

In this section, we discuss a variety of limiting cases that arise from the generalized search Hamiltonian in Eq. (\ref{hamilton}). In particular, we make a distinction between optimal and suboptimal scenarios. The former scenarios are cases where the probability of finding the target state equals one. The latter scenarios, instead, are cases where the probability of finding the target state is less than one.
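Before turning to the special cases, the closed-form results of Eqs. (\ref{it}), (\ref{start}), and (\ref{maxp}) can be spot-checked against a direct computation of $\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert^{2}$. The sketch below is an added illustration (not part of the original text); the parameter values are arbitrary and, for this check, chosen \emph{real} so that all matrix elements of Eq. (\ref{heq}) are real:

```python
import numpy as np

hbar = 1.0
E, x = 1.0, 0.5
alpha, beta, delta = 1.0, 0.3, 0.7   # arbitrary illustrative *real* parameters (gamma = beta)
s = np.sqrt(1 - x**2)

# Matrix elements of Eq. (heq) with real beta = gamma
H11 = E * (alpha + 2 * beta * x + delta * x**2)
H12 = E * s * (beta + delta * x)
H22 = E * delta * (1 - x**2)
H = np.array([[H11, H12], [H12, H22]])

psi_w = np.array([1.0, 0.0])
psi_s = np.array([x, s])
disc = (H11 - H22)**2 + 4 * H12**2   # (H11 - H22)^2 + 4 H12 H21

def P_direct(t):
    # |<psi_w| exp(-i H t / hbar) |psi_s>|^2 via spectral decomposition of H
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * w * t / hbar)) @ V.conj().T
    return abs(psi_w @ U @ psi_s)**2

def P_closed(t):
    # Eq. (it)
    phase = np.sqrt(disc) * t / (2 * hbar)
    return (((H11 - H22) * x + 2 * H12 * s)**2 / disc) * np.sin(phase)**2 \
        + x**2 * np.cos(phase)**2

# Closed form agrees with the direct evolution ...
for t in (0.1, 0.7, 1.9):
    assert abs(P_direct(t) - P_closed(t)) < 1e-10

# ... and Eqs. (start), (maxp) give the location and value of the maximum
t_star = np.pi * hbar / np.sqrt(disc)
P_max = ((H11 - H22) * x + 2 * H12 * s)**2 / disc
assert abs(P_direct(t_star) - P_max) < 1e-10
assert all(P_direct(t) <= P_max + 1e-9 for t in np.linspace(0, 2 * t_star, 2001))
```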
\emph{General Case}: The general case is specified by the conditions $\alpha\neq\delta$ \emph{real} and $\beta=\gamma^{\ast}$ \emph{complex} coefficients. We note that after some straightforward but tedious algebra, $\mathcal{P}_{\max}$ in Eq. (\ref{nono}) can be recast as
\begin{equation}
\mathcal{P}_{\max}\left( \alpha\text{, }\beta\text{, }\delta\text{, }x\right) =\frac{4\left[ \left\vert \beta\right\vert^{2}-\operatorname{Re}^{2}\left( \beta\right) \right] x^{4}+\left[ \left( \alpha+\delta\right)^{2}-8\left( \left\vert \beta\right\vert^{2}-\operatorname{Re}^{2}\left( \beta\right) \right) \right] x^{2}+4\operatorname{Re}\left( \beta\right) \left( \alpha+\delta\right) x+4\left\vert \beta\right\vert^{2}}{4\left[ \alpha\delta+\operatorname{Re}^{2}\left( \beta\right) -\left\vert \beta\right\vert^{2}\right] x^{2}+4\operatorname{Re}\left( \beta\right) \left( \alpha+\delta\right) x+\left[ \left( \alpha-\delta\right)^{2}+4\left\vert \beta\right\vert^{2}\right] }\text{.} \label{nano2}
\end{equation}
Furthermore, by using Eq. (\ref{heq}) in Eq. (\ref{start}), the time $t^{\ast}$ at which this maximum transition probability value $\mathcal{P}_{\max}$ is achieved becomes
\begin{equation}
t_{\mathcal{H}}^{\ast}\overset{\text{def}}{=}\frac{\pi\hslash}{E\sqrt{\left[ \left( \alpha-\delta\right) +x\left( \beta+\beta^{\ast}\right) +2x^{2}\delta\right]^{2}+4\left( 1-x^{2}\right) \left( \beta+\delta x\right) \left( \beta^{\ast}+\delta x\right) }}\text{.} \label{nano}
\end{equation}
The subscript $\mathcal{H}$ in $t_{\mathcal{H}}^{\ast}$ denotes the generalized search Hamiltonian in Eq. (\ref{hamilton}). Observe that $t_{\mathcal{H}}^{\ast}$ in Eq.
(\ref{nano}) can be rewritten as
\begin{equation}
t_{\mathcal{H}}^{\ast}\overset{\text{def}}{=}\frac{2}{\sqrt{4\left[ \alpha\delta+\operatorname{Re}^{2}\left( \beta\right) -\left\vert \beta\right\vert^{2}\right] x^{2}+4\operatorname{Re}\left( \beta\right) \left( \alpha+\delta\right) x+\left[ \left( \alpha-\delta\right)^{2}+4\left\vert \beta\right\vert^{2}\right] }}\frac{\pi\hslash}{2E}\text{.} \label{nano1}
\end{equation}
\begin{table}[t]
\centering
\begin{tabular}[c]{c|c|c|c|c|c}
Case & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & ${\mathcal{P}}_{\text{max}}$\\\hline
General & $\neq\delta$ & $\gamma^{*}\in\mathbb{C}$ & $\beta^{*}\in\mathbb{C}$ & $\neq\alpha$ & $\leq1$\\
1 & $\delta$ & $0$ & $0$ & $\alpha$ & $1$\\
2 & $\neq\delta$ & $0$ & $0$ & $\neq\alpha$ & $\leq1$\\
3 & $0$ & $\gamma^{*}\in\mathbb{R}$ & $\beta^{*}\in\mathbb{R}$ & $0$ & $1$\\
4 & $0$ & $\gamma^{*}\in\mathbb{C}$ & $\beta^{*}\in\mathbb{C}$ & $0$ & $\leq1$\\
5 & $\delta$ & $\gamma^{*}\in\mathbb{R}$ & $\beta^{*}\in\mathbb{R}$ & $\alpha$ & $1$\\
6 & $\delta$ & $\gamma^{*}\in\mathbb{C}$ & $\beta^{*}\in\mathbb{C}$ & $\alpha$ & $\leq1$\\
7 & $\neq\delta$ & $\gamma^{*}\in\mathbb{R}$ & $\beta^{*}\in\mathbb{R}$ & $\neq\alpha$ & $\leq1$
\end{tabular}
\caption{Summary of maximal success probability values $\mathcal{P}_{\max}$ that can be achieved for a variety of choices of the parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ specifying the quantum search Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}).}
\end{table}
In what follows, we choose to briefly discuss a number of limiting cases that arise from Eq. (\ref{nano2}).
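The algebraic equivalence between Eq. (\ref{nono}) and its recast form Eq. (\ref{nano2}) is easy to confirm numerically. The following sketch (an added illustration, not part of the original derivation) compares the two expressions at randomly drawn parameter values:

```python
import numpy as np

def P_max_nono(alpha, beta, delta, x):
    # Eq. (nono): P_max with alpha, delta real and beta complex (gamma = beta*)
    bracket = (alpha - delta) + 2 * beta.real * x + 2 * delta * x**2
    num = abs(bracket * x + 2 * (beta + delta * x) * (1 - x**2))**2
    den = bracket**2 + 4 * (1 - x**2) * abs(beta + delta * x)**2
    return num / den

def P_max_nano2(alpha, beta, delta, x):
    # Eq. (nano2): the recast form of Eq. (nono)
    im2 = abs(beta)**2 - beta.real**2          # |beta|^2 - Re^2(beta)
    num = (4 * im2 * x**4
           + ((alpha + delta)**2 - 8 * im2) * x**2
           + 4 * beta.real * (alpha + delta) * x
           + 4 * abs(beta)**2)
    den = (4 * (alpha * delta + beta.real**2 - abs(beta)**2) * x**2
           + 4 * beta.real * (alpha + delta) * x
           + (alpha - delta)**2 + 4 * abs(beta)**2)
    return num / den

rng = np.random.default_rng(0)
for _ in range(100):
    alpha = rng.uniform(0.1, 2.0)
    delta = rng.uniform(0.1, 2.0)
    x = rng.uniform(0.05, 0.95)
    beta = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
    a = P_max_nono(alpha, beta, delta, x)
    b = P_max_nano2(alpha, beta, delta, x)
    assert abs(a - b) < 1e-9 * max(1.0, abs(a))
```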
In particular, the big-calligraphic-$\mathcal{O}$ notation $f\left( \varepsilon\right) =\mathcal{O}\left( g\left( \varepsilon\right) \right)$ means that $f\left( \varepsilon\right)$ is an infinitesimal of order equal to $g\left( \varepsilon\right)$ as $\varepsilon$ approaches zero, that is,
\begin{equation}
\lim_{\varepsilon\rightarrow0}\frac{f\left( \varepsilon\right) }{g\left( \varepsilon\right) }=K\text{,}
\end{equation}
where $K$ denotes a nonzero \emph{real} constant. In Table I we report an overview of the maximal success probability values that can be obtained for a variety of choices of the parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ characterizing the quantum search Hamiltonian $\mathcal{H}$. In particular, we note that the unit success probability $\mathcal{P}_{\max}=1$ can be achieved only when considering the Hamiltonians $\mathcal{H}_{1}$, $\mathcal{H}_{3}$, and $\mathcal{H}_{5}$. Fig. $4$, instead, displays the negative effect on the maximal success probability $\mathcal{P}_{\max}$ due to asymmetries ($\alpha\neq\delta$) and complexities ($\beta\in\mathbb{C}$) in the parameters of the quantum search Hamiltonian $\mathcal{H}$ when the quantum overlap $x$ approaches zero.

\emph{Case 1}: $\alpha=\delta$, and $\beta=\gamma=0$. In this case, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{1}\overset{\text{def}}{=}\alpha E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] \text{.} \label{h1}
\end{equation}
Furthermore, the maximum value of the transition probability in Eq. (\ref{nono}) becomes $\mathcal{P}_{\max}=1$, reached at the time $t_{\mathcal{H}_{1}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{1}}^{\ast}=\frac{1}{\alpha x}\frac{\pi\hslash}{2E}\text{.} \label{tstar1}
\end{equation}
Observe that when $\alpha=1$ in Eq. (\ref{tstar1}), we recover the well-known result by Farhi and Gutmann.
As a side remark, we point out that $t_{\mathcal{H}_{1}}^{\ast}$ in Eq. (\ref{tstar1}) is inversely proportional to the quantum overlap $x$ between the initial state $\left\vert \psi_{s}\right\rangle$ and the target state $\left\vert \psi_{w}\right\rangle$.

\emph{Case 2}: $\alpha\neq\delta$, and $\beta=\gamma=0$. Using these working assumptions, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) becomes
\begin{equation}
\mathcal{H}_{2}\overset{\text{def}}{=}E\left[ \alpha\left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\delta\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] \text{.} \label{h2}
\end{equation}
In this case, $\mathcal{P}_{\max}$ is given by
\begin{equation}
\mathcal{P}_{\max}=\frac{\left( \alpha+\delta\right)^{2}x^{2}}{4x^{2}\alpha\delta+\left( \alpha-\delta\right)^{2}}\text{.} \label{p2}
\end{equation}
This maximum $\mathcal{P}_{\max}$ with $0\leq\mathcal{P}_{\max}\leq1$ is reached at $t_{\mathcal{H}_{2}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{2}}^{\ast}=\frac{2}{\sqrt{4x^{2}\alpha\delta+\left( \alpha-\delta\right)^{2}}}\frac{\pi\hslash}{2E}\text{.} \label{tstar2}
\end{equation}
Note that $\mathcal{P}_{\max}$ in Eq. (\ref{p2}), when viewed as a function of $x$, assumes its maximum value $1$ when $\alpha=\delta$. Furthermore, $\mathcal{P}_{\max}=1$ when $x=1$ for any choice of $\alpha$ and $\delta$. In addition, $t_{\mathcal{H}_{1}}^{\ast}\geq t_{\mathcal{H}_{2}}^{\ast}$ when $0\leq\delta/\left( 1-4x^{2}\right) \leq\alpha$. In particular, when $\alpha=\delta/\left( 1-4x^{2}\right)$, we get
\begin{equation}
\frac{2E}{\pi\hslash}t_{\mathcal{H}_{1}}^{\ast}=\frac{1-4x^{2}}{\delta x}=\frac{2E}{\pi\hslash}t_{\mathcal{H}_{2}}^{\ast}\text{.}
\end{equation}
Finally, we remark that when $0\leq\alpha-\delta\ll1$, the approximate expression of $\mathcal{P}_{\max}$ in Eq.
(\ref{p2}) becomes
\begin{equation}
\mathcal{P}_{\max}=1-\frac{1}{4}\frac{1-x^{2}}{\alpha^{2}x^{2}}\left( \alpha-\delta\right)^{2}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert^{3}\right) \text{.} \label{profound}
\end{equation}
This approximate maximum transition probability value $\mathcal{P}_{\max}$ in Eq. (\ref{profound}) is achieved when
\begin{equation}
t_{\mathcal{H}_{2}}^{\ast}=\left[ \frac{1}{\alpha x}-\frac{1}{8}\frac{\left( \alpha-\delta\right)^{2}}{\alpha^{3}x^{3}}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert^{3}\right) \right] \frac{\pi\hslash}{2E}\text{,}
\end{equation}
that is, $t_{\mathcal{H}_{2}}^{\ast}=t_{\mathcal{H}_{1}}^{\ast}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert^{2}\right)$.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{fig4}
\caption{Maximal success probability $\mathcal{P}_{\max}$ as a function of $\alpha-\delta$ for $\left\vert \beta\right\vert =0.25$ (dotted line), $\left\vert \beta\right\vert =0.5$ (thin solid line), and $\left\vert \beta\right\vert =1$ (thick solid line) in the working assumption that $x$ approaches zero (left); maximal success probability $\mathcal{P}_{\max}$ as a function of $\left\vert \beta\right\vert$ for $\alpha-\delta=0$ (dotted line), $\alpha-\delta=0.25$ (thin solid line), and $\alpha-\delta=0.5$ (thick solid line) in the working assumption that $x$ approaches zero (right).}
\label{fig4}
\end{figure}

\emph{Case 3}: $\beta=\gamma^{\ast}$ \emph{real}, and $\alpha=\delta=0$. In this case, the Hamiltonian $\mathcal{H}$ in Eq.
(\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{3}\overset{\text{def}}{=}\beta E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.} \label{h3}
\end{equation}
The Hamiltonian $\mathcal{H}_{3}$ can be used to search for the target state $\left\vert \psi_{w}\right\rangle$ with certainty since the maximum probability value $\mathcal{P}_{\max}$ is given by $\mathcal{P}_{\max}=1$. This maximum $\mathcal{P}_{\max}$ is reached at $t_{\mathcal{H}_{3}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{3}}^{\ast}\overset{\text{def}}{=}\frac{1}{\beta}\frac{\pi\hslash}{2E}\text{.} \label{tstar3}
\end{equation}
Note that, unlike the previous two cases, the time $t_{\mathcal{H}_{3}}^{\ast}$ does not depend on the quantum overlap $x$.

\emph{Case 4}: $\beta=\gamma^{\ast}$ \emph{complex}, and $\alpha=\delta=0$. Using these working assumptions, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) becomes
\begin{equation}
\mathcal{H}_{4}\overset{\text{def}}{=}E\left[ \beta\left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\beta^{\ast}\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.} \label{h4}
\end{equation}
In this case, $\mathcal{P}_{\max}$ becomes
\begin{equation}
\mathcal{P}_{\max}=\frac{8\left[ \operatorname{Re}\left( \beta\right) \right]^{2}x^{2}-4\left[ \operatorname{Re}\left( \beta\right) \right]^{2}x^{4}+4\left\vert \beta\right\vert^{2}\left( 1-x^{2}\right)^{2}}{4\left[ \operatorname{Re}\left( \beta\right) \right]^{2}x^{2}+4\left\vert \beta\right\vert^{2}\left( 1-x^{2}\right) }\text{.} \label{pmaxcomplex}
\end{equation}
This maximum $\mathcal{P}_{\max}$ in Eq.
(\ref{pmaxcomplex}) is reached at $t_{\mathcal{H}_{4}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{4}}^{\ast}\overset{\text{def}}{=}\frac{2}{\sqrt{4\left[ \operatorname{Re}\left( \beta\right) \right]^{2}x^{2}+4\left\vert \beta\right\vert^{2}\left( 1-x^{2}\right) }}\frac{\pi\hslash}{2E}\text{.}
\end{equation}
Note that, unlike the previous case, the time $t_{\mathcal{H}_{4}}^{\ast}$ does depend on the quantum overlap $x$. Furthermore, observe that if $\operatorname{Re}\left( \beta\right) =0$, $\mathcal{P}_{\max}$ in Eq. (\ref{pmaxcomplex}) becomes $\mathcal{\tilde{P}}_{\max}=1-x^{2}$. This maximum $\mathcal{\tilde{P}}_{\max}$ is reached at $\tilde{t}_{\mathcal{H}_{4}}^{\ast}$,
\begin{equation}
\tilde{t}_{\mathcal{H}_{4}}^{\ast}\overset{\text{def}}{=}\frac{1}{\sqrt{\left\vert \beta\right\vert^{2}\left( 1-x^{2}\right) }}\frac{\pi\hslash}{2E}=\frac{1}{\left\vert \beta\right\vert }\left[ 1+\frac{1}{2}x^{2}+\mathcal{O}\left( x^{4}\right) \right] \frac{\pi\hslash}{2E}=t_{\mathcal{H}_{3}}^{\ast}+\mathcal{O}\left( x^{2}\right) \text{.} \label{ttilda4}
\end{equation}
In other words, when $0\leq x\ll1$, the search Hamiltonian $\mathcal{H}_{4}$ behaves approximately like the Hamiltonian $\mathcal{H}_{3}$. As a final remark, we note that when $\beta\overset{\text{def}}{=}2iEx$ we recover Fenner's quantum search Hamiltonian as proposed in Ref. \cite{fenner}.

\emph{Case 5}: $\alpha=\delta$, and $\beta=\gamma^{\ast}$ \emph{real}. In this case, the Hamiltonian $\mathcal{H}$ in Eq.
(\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{5}\overset{\text{def}}{=}\alpha E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] +\beta E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.} \label{h5}
\end{equation}
It happens that given $\mathcal{H}_{5}$, $\mathcal{P}_{\max}$ becomes $\mathcal{P}_{\max}=1$. Furthermore, the maximum $\mathcal{P}_{\max}$ is reached at $t_{\mathcal{H}_{5}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{5}}^{\ast}=\frac{1}{\alpha x+\beta}\frac{\pi\hslash}{2E}\text{.} \label{tstar5}
\end{equation}
Note that for $\beta=0$ and $\alpha=0$, $t_{\mathcal{H}_{5}}^{\ast}$ reduces to $t_{\mathcal{H}_{1}}^{\ast}$ and $t_{\mathcal{H}_{3}}^{\ast}$, respectively. For the sake of completeness, we remark that the Hamiltonian in Eq. (\ref{h5}) was originally considered in Ref. \cite{bae02}.

\emph{Case 6}: $\alpha=\delta$, and $\beta=\gamma^{\ast}$ \emph{complex}. In this case, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{6}\overset{\text{def}}{=}\alpha E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] +E\left[ \beta\left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\beta^{\ast}\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.}
\end{equation}
Moreover, $\mathcal{P}_{\max}$ becomes
\begin{equation}
\mathcal{P}_{\max}=\frac{\left\vert \left[ 2\operatorname{Re}\left( \beta\right) x+2\alpha x^{2}\right] x+2\left( \alpha x+\beta\right) \left( 1-x^{2}\right) \right\vert^{2}}{\left[ 2\operatorname{Re}\left( \beta\right) x+2\alpha x^{2}\right]^{2}+4\left( 1-x^{2}\right) \left[ \left\vert \beta\right\vert^{2}+2\alpha\operatorname{Re}\left( \beta\right) x+\alpha^{2}x^{2}\right] }\text{.} \label{pmax2}
\end{equation}
The maximum $\mathcal{P}_{\max}$ is reached at $t_{\mathcal{H}_{6}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{6}}^{\ast}\overset{\text{def}}{=}\frac{2}{\sqrt{\left[ 2\operatorname{Re}\left( \beta\right) x+2\alpha x^{2}\right]^{2}+4\left( 1-x^{2}\right) \left[ \left\vert \beta\right\vert^{2}+2\alpha\operatorname{Re}\left( \beta\right) x+\alpha^{2}x^{2}\right] }}\frac{\pi\hslash}{2E}\text{.}
\end{equation}
\begin{table}[t]
\centering
\begin{tabular}[c]{c|c|c|c|c}
$\mathcal{H}$ & ${\mathcal{P}}_{\text{max}}$ & $t_{\mathcal{H}}^{\ast}$ & $(\alpha,\delta)$ & $(\beta,\gamma)$\\\hline
$\mathcal{H}_{1}$ & $1$ & $\frac{\pi\hslash}{2E}(\alpha x)^{-1}$ & $\alpha=\delta\neq0$ & $\beta=\gamma^{*}=0$\\
$\mathcal{H}_{3}$ & $1$ & $\frac{\pi\hslash}{2E}(\beta)^{-1}$ & $\alpha=\delta=0$ & $\beta=\gamma^{*}\in\mathbb{R}\backslash\{0\}$\\
$\mathcal{H}_{5}$ & $1$ & $\frac{\pi\hslash}{2E}(\alpha x+\beta)^{-1}$ & $\alpha=\delta\neq0$ & $\beta=\gamma^{*}\in\mathbb{R}\backslash\{0\}$
\end{tabular}
\caption{Summary of cases where unit maximal success probability values $\mathcal{P}_{\max}$ can be achieved for a variety of choices of the parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ specifying the quantum search Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}).}
\end{table}

\emph{Case 7}: $\alpha\neq\delta$, and $\beta=\gamma^{\ast}$ \emph{real}. The Hamiltonian $\mathcal{H}$ in Eq.
(\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{7}\overset{\text{def}}{=}E\left[ \alpha\left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\delta\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] +\beta E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.}
\end{equation}
In this case, $\mathcal{P}_{\max}$ becomes
\begin{equation}
\mathcal{P}_{\max}=\frac{\left[ \left( \alpha+\delta\right) x+2\beta\right]^{2}}{4\left[ \alpha\delta x^{2}+\left( \alpha\beta+\beta\delta\right) x+\beta^{2}\right] +\left( \alpha-\delta\right)^{2}}\text{.} \label{pm7}
\end{equation}
The maximum $\mathcal{P}_{\max}$ in Eq. (\ref{pm7}) is reached at $t_{\mathcal{H}_{7}}^{\ast}$,
\begin{equation}
t_{\mathcal{H}_{7}}^{\ast}=\frac{2}{\sqrt{4\left[ \alpha\delta x^{2}+\left( \alpha\beta+\beta\delta\right) x+\beta^{2}\right] +\left( \alpha-\delta\right)^{2}}}\frac{\pi\hslash}{2E}\text{.}
\end{equation}
Finally, we remark that when $0\leq\alpha-\delta\ll1$, the approximate expression of $\mathcal{P}_{\max}$ in Eq. (\ref{pm7}) becomes
\begin{equation}
\mathcal{P}_{\max}=1-\frac{1}{4}\frac{1-x^{2}}{\left( \alpha x+\beta\right)^{2}}\left( \alpha-\delta\right)^{2}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert^{3}\right) \text{.} \label{app}
\end{equation}
This approximate maximum transition probability value in Eq. (\ref{app}) is achieved when
\begin{equation}
t_{\mathcal{H}_{7}}^{\ast}=\left[ \frac{1}{\alpha x+\beta}-\frac{1}{8}\frac{\left( \alpha-\delta\right)^{2}}{\left( \alpha x+\beta\right)^{3}}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert^{3}\right) \right] \frac{\pi\hslash}{2E}\text{,}
\end{equation}
that is, $t_{\mathcal{H}_{7}}^{\ast}=t_{\mathcal{H}_{5}}^{\ast}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert^{2}\right)$ with $t_{\mathcal{H}_{5}}^{\ast}$ in Eq. (\ref{tstar5}).
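The unit-probability claims for the optimal cases discussed above can be spot-checked numerically. The sketch below (an added illustration with arbitrarily chosen \emph{real} parameters, not part of the original text) evolves the two-level system of Eq. (\ref{heq}) for Case 5, which contains Case 1 as the limit $\beta=0$, and confirms that $\mathcal{P}(t_{\mathcal{H}_{5}}^{\ast})=1$ with $t_{\mathcal{H}_{5}}^{\ast}$ as in Eq. (\ref{tstar5}):

```python
import numpy as np

hbar, E = 1.0, 1.0

def prob(H, x, t):
    # |<psi_w| exp(-i H t / hbar) |psi_s>|^2 via spectral decomposition
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * w * t / hbar)) @ V.conj().T
    psi_w = np.array([1.0, 0.0])
    psi_s = np.array([x, np.sqrt(1 - x**2)])
    return abs(psi_w @ U @ psi_s)**2

def H_case5(alpha, beta, x):
    # Eq. (heq) with delta = alpha and gamma = beta real (Case 5)
    s = np.sqrt(1 - x**2)
    H11 = E * (alpha + 2 * beta * x + alpha * x**2)
    H12 = E * s * (beta + alpha * x)
    H22 = E * alpha * (1 - x**2)
    return np.array([[H11, H12], [H12, H22]])

# P(t*_{H5}) = 1 for several parameter choices; the last triple is Case 1 (beta = 0)
for alpha, beta, x in [(1.0, 0.5, 0.3), (0.7, 1.2, 0.6), (1.0, 0.0, 0.5)]:
    t_star = (np.pi * hbar / (2 * E)) / (alpha * x + beta)   # Eq. (tstar5)
    assert abs(prob(H_case5(alpha, beta, x), x, t_star) - 1.0) < 1e-10
```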
In Table II we describe the minimum search times $t_{\mathcal{H}_{i}}^{\ast}$ with $i\in\left\{ 1\text{, }3\text{, }5\right\}$ when the maximal success probability $\mathcal{P}_{\max}$ equals one. Furthermore, Fig. $5$ displays two plots. The plot on the left represents the minimum search time $t^{\ast}$ versus the overlap $x$ assuming $\alpha=\beta=1$ and $E=h=1$. From this plot, we note that $t_{\mathcal{H}_{5}}^{\ast}\leq t_{\mathcal{H}_{3}}^{\ast}\leq t_{\mathcal{H}_{1}}^{\ast}$. The plot on the right, instead, represents the temporal behavior of the success probability $\mathcal{P}\left( t\right)$ assuming $\alpha=\beta=1$, $E=h=1$, and $x=0.5$. We observe that $\mathcal{P}\left( t\right)$ reaches the ideal unit value with $\mathcal{H}_{5}$ at $t_{\mathcal{H}_{5}}^{\ast}=1/6\simeq0.17$, with $\mathcal{H}_{3}$ at $t_{\mathcal{H}_{3}}^{\ast}=1/4=0.25$, and with $\mathcal{H}_{1}$ at $t_{\mathcal{H}_{1}}^{\ast}=1/2=0.5$. Despite the detrimental effects of asymmetries and complexities on the achievable maximal success probability values represented in Fig. $4$ when $x$ approaches zero, and despite the fact, reported in Table II and Fig. $5$, that $\mathcal{H}_{5}$ appears to be the quantum search Hamiltonian yielding the shortest search time needed to achieve unit success probability, we point out that it is possible to suitably choose the Hamiltonian parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ in $\mathcal{H}$ together with the overlap $x$ in such a manner that nearly optimal success probability threshold values can be obtained in search times shorter than those specified by $\mathcal{H}_{5}$. Indeed, Fig. $6$ displays such a circumstance.
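Under the stated conventions ($\alpha=\beta=1$, $E=h=1$, hence $\hslash=h/(2\pi)$, and $x=0.5$), the three optimal search times quoted above follow directly from Eqs. (\ref{tstar1}), (\ref{tstar3}), and (\ref{tstar5}). A quick arithmetic check (added for illustration):

```python
import math

E, h, x = 1.0, 1.0, 0.5
hbar = h / (2 * math.pi)        # E = h = 1 implies hbar = 1/(2*pi)
alpha = beta = 1.0

t1 = (math.pi * hbar / (2 * E)) / (alpha * x)          # Eq. (tstar1)
t3 = (math.pi * hbar / (2 * E)) / beta                 # Eq. (tstar3)
t5 = (math.pi * hbar / (2 * E)) / (alpha * x + beta)   # Eq. (tstar5)

assert abs(t1 - 0.5) < 1e-12    # t*_{H1} = 1/2
assert abs(t3 - 0.25) < 1e-12   # t*_{H3} = 1/4
assert abs(t5 - 1/6) < 1e-12    # t*_{H5} = 1/6
assert t5 <= t3 <= t1           # ordering seen in the left plot of Fig. 5
```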
Assuming $\alpha=\delta=0.5$, $\beta=1$, and $x=0.5$, the unit success probability is obtained with $\mathcal{H}_{5}$ at $t_{\mathcal{H}_{5}}^{\ast}=1/5=0.2$ while the chosen threshold value $\mathcal{P}_{\text{threshold}}=0.95$ is reached at $\tilde{t}_{\mathcal{H}_{5}}\simeq0.1667$. However, assuming $\mathcal{H}$ with $\alpha=0.5$, $\delta=1$, $\beta=1$, and $x=0.5$, despite the fact that the maximal success probability is only nearly optimal with $\mathcal{P}_{\max}\simeq0.9758\leq1$, the selected threshold value $\mathcal{P}_{\text{threshold}}=0.95$ is reached at $\tilde{t}_{\mathcal{H}}\simeq0.1579\leq\tilde{t}_{\mathcal{H}_{5}}$. For a discussion on the choice of the numerical values of the quantum overlap $x$, we refer to Appendix C.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{fig5}
\caption{The plot on the left displays the minimum search time $t^{\ast}$ versus the quantum overlap $x$ for the search Hamiltonians $\mathcal{H}_{1}$ (dashed line), $\mathcal{H}_{3}$ (thin solid line), and $\mathcal{H}_{5}$ (thick solid line). The plot on the right, instead, shows $\mathcal{P}\left( t\right)$ versus $t$ for the search Hamiltonians $\mathcal{H}_{1}$ (dashed line), $\mathcal{H}_{3}$ (thin solid line), and $\mathcal{H}_{5}$ (thick solid line). In the former plot, we assume $\alpha=\beta=1$ and $E=h=1$. In the latter plot, we also assume $x=1/2$.}
\label{fig5}
\end{figure}

\section{Concluding Remarks}

In this paper, we presented a detailed analysis concerning the computational aspects needed to analytically evaluate the transition probability from a source state $\left\vert s\right\rangle$ to a target state $\left\vert w\right\rangle$ in a continuous-time quantum search problem defined by a multi-parameter generalized time-independent Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}).
In particular, quantifying the performance of a quantum search in terms of speed (minimum search time, $t^{\ast}$) and fidelity (high success probability, $\mathcal{P}$), we considered a variety of special cases that emerge from the generalized Hamiltonian. Finally, recovering also the well-known Farhi-Gutmann analog quantum search scenario, we briefly discussed the relevance of a tradeoff between speed and fidelity with emphasis on issues of both theoretical and practical importance to quantum information processing. \subsection{Summary of main results} Our main conclusions can be summarized as follows. \begin{enumerate} \item[{[1]}] First, we provided a detailed analytical computation of the transition probability $\mathcal{P}_{\left\vert s\right\rangle \rightarrow\left\vert w\right\rangle }\left( t\right)$ in Eq. (\ref{it}) from the source state $\left\vert s\right\rangle$ to the target state $\left\vert w\right\rangle$ under the working assumption that the quantum mechanical evolution is governed by the generalized quantum search Hamiltonian $\mathcal{H}$. Such a computation, despite being straightforward, is quite tedious.
Therefore, we have reason to believe it can be relevant to the novice with a growing interest in analog quantum search algorithms as well as to the expert seeking to find nearly-optimal solutions in realistic quantum search problems where a tradeoff between fidelity and minimum search time is required; \item[{[2]}] Second, given the family $\mathcal{F}_{\mathcal{H}}\overset{\text{def}}{=}\left\{ \mathcal{H}\right\}$ with $\mathcal{H}=\mathcal{H}\left( x\text{; }\alpha\text{, }\beta\text{, }\gamma\text{, }\delta\right)$, where $\alpha$ and $\delta\in\mathbb{R}$ while $\beta=\gamma^{\ast}\in\mathbb{C}$, we have conveniently identified two sub-families $\mathcal{F}_{\mathcal{H}}^{\left( \text{optimal}\right) }\overset{\text{def}}{=}\left\{ \mathcal{H}_{1}\text{, }\mathcal{H}_{3}\text{, }\mathcal{H}_{5}\right\}$ and $\mathcal{F}_{\mathcal{H}}^{\left( \text{nearly-optimal}\right)}\overset{\text{def}}{=}\left\{ \mathcal{H}_{2}\text{, }\mathcal{H}_{4}\text{, }\mathcal{H}_{6}\text{, }\mathcal{H}_{7}\right\}$ that contain quantum search Hamiltonians yielding optimal and nearly-optimal fidelity values, respectively. The former sub-family is specified by the asymmetry between the \emph{real} parameters $\alpha$ and $\delta$. The latter sub-family, instead, is characterized by the complexity (that is, the essence of being \emph{complex}-valued) of the parameters $\beta$ and $\gamma$. Each element of the family has been classified with respect to its maximal success probability and the minimum time necessary to reach such a value. An overview of these results appears in Table I. In addition, in Fig. $4$ we report on the detrimental effects caused by the presence of asymmetries and complexities in the parameters that specify the particular quantum search Hamiltonian on the maximal success probability, in the limiting working assumption that the source state and the target state are orthogonal.
\item[{[3]}] Third, we ranked the performance of each element of the sub-family $\mathcal{F}_{\mathcal{H}}^{\left( \text{optimal}\right) }$ by analyzing the minimum search time required to reach unit fidelity. These results are displayed in Table II. In particular, as evident from Fig. $5$, we find that $\mathcal{H}_{5}$ can outperform the Farhi-Gutmann search Hamiltonian $\mathcal{H}_{1}$ in terms of speed. \item[{[4]}] Lastly, despite the observed detrimental effects of asymmetries and complexities on the numerical values of the maximal success probabilities, we find that imperfect search Hamiltonians can outperform perfect search Hamiltonians provided that only a large nearly-optimal fidelity value is sought. This finding is reported in Fig. $6$. \end{enumerate} \subsection{Limitations and possible future developments} In what follows, we report some limitations together with possible future improvements of our investigation. \begin{enumerate} \item[{[1]}] First, we have reason to believe our analysis presented in this paper could be a useful starting point for a more rigorous investigation that would include both experimental and theoretical aspects of a tradeoff between fidelity and run time in quantum search algorithms. Indeed, we are aware that it is helpful to decrease the control time of the control fields employed to generate a target quantum state or a target quantum gate in order to mitigate the effect of decoherence originating from the interaction of a quantum system with the environment. Moreover, we also know that it may be convenient to increase the control time beyond a certain critical value to enhance the fidelity of generating such targets and reach values arbitrarily close to the maximum $\mathcal{F}=1$. However, when the control time reaches a certain value that may be close to the critical value, decoherence can become a dominant effect.
Therefore, investigating the tradeoff between time control and fidelity can be of great practical importance in quantum computing \cite{rabitz12,rabitz15,cappellaro18}. Given that it is very challenging to find a rigorous optimal time control, and that in many cases the control is only required to be sufficiently precise and short, one can design algorithms seeking suboptimal control solutions at much reduced computational effort. For instance, the fidelity of tomography experiments is rarely above $99\%$ due to the limited control precision of the tomographic experimental techniques, as pointed out in Ref. \cite{rabitz15}. Under such conditions, it is unnecessary to prolong the control time since the departure from the optimal scenario is essentially negligible. Hence, it can certainly prove worthwhile to design slightly suboptimal algorithms that can be much cheaper computationally. \item[{[2]}] Second, we speculate it may be worth pursuing the possibility of borrowing ideas from approximate quantum cloning to design approximate quantum search algorithms capable of finding targets in the presence of \emph{a priori} information. As a matter of fact, recall that the no-cloning theorem in quantum mechanics states that it is impossible to construct a cloning machine capable of preparing two exact copies of a completely unknown pure qubit state \cite{zurek82}. However, with the so-called universal cloner \cite{hillery96} (that is, a state-independent symmetric cloner) acting on the whole Bloch sphere, it is possible to prepare two approximate copies of an unknown pure qubit state with the same fidelity $\mathcal{F}=5/6<1$. Interestingly, it is possible to enhance the fidelity values achieved with a universal cloner by properly exploiting some relevant \emph{a priori} information on the given quantum state that one wishes to clone.
This idea of exploiting \emph{a priori} information generated a number of state-dependent cloning machines capable of outperforming the universal cloner for some special sets of qubits. For instance, phase-covariant cloners are especially successful at cloning qubits chosen from the equator of the Bloch sphere \cite{macchiavello00}, while belt quantum cloning machines are very efficient at cloning qubits between two latitudes on the Bloch sphere \cite{wang09}. For an interesting method for improving the cloning fidelity in terms of \emph{a priori} amplitude and phase information, we refer to Ref. \cite{kang16}. We shall pursue this line of investigation in forthcoming efforts. \item[{[3]}] Third, from a more applied perspective, despite its relative simplicity, the idea of finding a tradeoff between search time and fidelity in analog quantum searching as presented in this paper could potentially be regarded as a valid starting point for a time-fidelity tradeoff analysis in disease diagnosis in complex biological systems. For these systems, the source and target states are replaced with source and target patterns, respectively. In particular, the target pattern classifies the type of illness being searched for. For recent important investigations based upon the joint use of quantum field theoretic methods and general relativity techniques concerning the transition from source to target patterns in complex biological systems, including DNA and protein structures, we refer to Refs. \cite{capozziello1,capozziello2}. More realistic applications of our work are very important, and we shall take a closer look at these aspects in the near future. \item[{[4]}] Fourth, a further possibility could be related to cosmology. As discussed in \cite{capozziello3,capozziello4,luongo19,capozziello13,capozziello11}, there exist possible connections between quantum entanglement and cosmological observational parameters.
In fact, assuming that two cosmological epochs are entangled with each other, it is possible to recover dynamical properties by measuring the degree of entanglement. Specifically, the effects of the so-called \textit{dark energy} could be due to the entanglement between states, since a negative pressure arises. In this process, an \textquotedblleft entanglement weight\textquotedblright\ (the so-called negativity of entanglement) can be defined, and the accelerated expansion observed today occurs when the cosmological parameters are entangled. In this perspective, dark energy could be seen as a straightforward consequence of entanglement, without invoking further (not yet observed) fundamental material components. The present analysis could help in this cosmological perspective once the cosmological equations are modeled as Schr\"{o}dinger-like equations, as discussed in \cite{capozziello5}. \item[{[5]}] Lastly, in real life scenarios, searching in a completely unstructured fashion can be unnecessary. Instead, the search can be guided by employing some \emph{prior} relevant information about the location of the target state. Interestingly, this is what happens in the framework of quantum search with advice \cite{montanaro11,montanaro17}. In this framework, the aim is to minimize the expected number of queries with respect to the probability distribution encoding relevant information about where the target might be located. A major advancement of the work we presented in this paper would be figuring out a systematic way to incorporate relevant \emph{prior} information about the possible location of the target directly into the continuous time quantum search Hamiltonian. We leave this intriguing open problem to future scientific endeavours. \end{enumerate} In conclusion, our proposed analysis was partially inspired by some of our previous findings reported in Refs.
\cite{cafaro-alsing19A, cafaro-alsing19B} and can be improved in a number of ways in the immediate short term. For instance, it would be interesting to address the following question: How large should the nearly optimal fidelity value be chosen, given the finite precision of quantum measurements and the unavoidable presence of decoherence effects in physical implementations of quantum information processing tasks? We leave the investigation of a realistic tradeoff between speed and fidelity in analog quantum search problems to forthcoming scientific efforts. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{fig6}\caption{The thin and the thick solid lines display $\mathcal{P}\left( t\right)$ versus $t$ for the search Hamiltonians $\mathcal{H}$ and $\mathcal{H}_{5}$, respectively. In the former case, we set $\alpha=0.5$, $\delta=1$, $\beta=1$, and $x=0.5$. In the latter case, instead, we set $\alpha=\delta=0.5$, $\beta=1$, and $x=0.5$. In both cases, we also assume $E=h=1$. The dashed line denotes the chosen threshold success probability value $\mathcal{P}_{\text{threshold}}=0.95$. Finally, the dotted line denotes the optimal success probability $\mathcal{P}=1$.\label{fig6}}\end{figure} \begin{acknowledgments} C. C. acknowledges the hospitality of the United States Air Force Research Laboratory (AFRL) in Rome-NY where part of his contribution to this work was completed. S.C. acknowledges partial support of \textit{Istituto Italiano di Fisica Nucleare} (INFN), \textit{iniziative specifiche} QGSKY and MOONLIGHT2. \end{acknowledgments}
\section{Introduction} Submillimeter galaxies are extremely luminous ($L > 10^{12} L_\odot$), high-redshift ($z > 1$), dust-obscured galaxies detected by their thermal dust emission (for a review see Blain et al.\ 2002). The dust is heated by the ultraviolet and optical flux from young stars associated with prodigious inferred star formation rates (SFRs) of $\sim100 - 1000$ M$_\odot$ yr$^{-1}$ \citep{blain02}. Although $\sim \frac{1}{3}$ of sources appear to contain an active galactic nucleus \citep[AGN;][]{alexander03,ivison04}, in nearly all cases the AGNs are not bolometrically important \citep[$<20$\%;][]{alexander04}. Given these SFRs, a burst of duration $10^8$ yr would be sufficient to form all the stars in an elliptical galaxy, making it plausible that submillimeter galaxies are the progenitors of elliptical galaxies and spiral bulges \citep{smail02, swinbank04}. Deep (sub)millimeter surveys with SCUBA \citep{holland99} and MAMBO \citep{bertoldi00} have now resolved 10\%$-$40\% of the cosmic far-infrared background \citep{puget96,hauser98,fixsen98} into submillimeter galaxies in blank-field surveys \citep[e.g.,][]{greve04,borys03, scott02} and 40\%$-$100\% using lensing galaxy clusters \citep{blain99,cowie02}. Photometric redshifts constrain most of the submillimeter galaxies found so far to lie at $z > 1$ \citep[e.g.,][]{ carilli99,yun02,aretxaga03}. Recent observations of submillimeter galaxies with the Keck Low-Resolution Imaging Spectrograph (LRIS; Chapman et al.\ 2003a, 2003b, 2005) have shown that redshifts can be obtained for $\sim$70\% of bright submillimeter galaxies ($S_{\mathrm{850 \mu m}} > $~5 mJy) with bright radio counterparts \citep[$S_{\mathrm{1.4 GHz}} > $~30 $\mu$Jy;][]{blain04} \citep[the fraction of submillimeter galaxies with radio counterparts appears to be $\sim$65\%;][]{ivison02}. This sample of submillimeter galaxies lies in a distribution peaking at $z = 2.4$ with $\Delta z= 0.65$ \citep{ivison02,chapman03b,chapman05}. 
Six of these galaxies have had their optical spectroscopic redshifts confirmed by millimeter CO line measurements \citep{frayer98,frayer99,neri03,sheth04}. This redshift distribution may be biased with respect to the overall submillimeter galaxy population owing to a number of selection effects: the requirement of precise radio positions prior to spectroscopy, which introduces a bias against cooler galaxies, especially at higher redshifts ($z > 2.5$); limited completeness ($\sim 30\%$) of the spectroscopic observations, which biases the sample against galaxies with weak emission lines; and the redshift gap at $z = 1.2-1.8$ due to the ``spectroscopic desert,'' in which no strong rest-frame ultraviolet lines are redshifted into the optical. Bolocam is a new millimeter-wave bolometer camera for the Caltech Submillimeter Observatory (CSO).\footnote{See http://www.cso.caltech.edu/bolocam.} Bolocam's large field of view (8\arcmin), 31\arcsec\ beams (FWHM at $\lambda = 1.1$ mm), and AC biasing scheme make it particularly well-suited to finding rare, bright submillimeter galaxies and for probing large-scale structure. We have used Bolocam to conduct a survey toward the Lockman Hole for submillimeter galaxies. The Lockman Hole is a region in UMa in which absorbing material, such as dust and galactic hydrogen, is highly rarefied \citep[H$_\mathrm{I}$ column density of $N_\mathrm{H} \approx 4.5 \mathrm{\ x\ } 10^{19}$ cm$^{-2}$;][]{jahoda90}, providing a transparent window for sensitive extragalactic surveys over a wide spectral range, from the infrared and millimeter wavebands to UV and X-ray observations. Submillimeter \citep{scott02, fox02, ivison02, eales03} and millimeter-wave \citep{greve04} surveys for submillimeter galaxies have already been done toward the Lockman Hole. 
It is one of the one-quarter square degree fields of the SCUBA SHADES,\footnote{See http://www.roe.ac.uk/ifa/shades/links.html.} the focus of a deep extragalactic survey with {\it XMM-Newton} \citep{hasinger01}, and a target field for {\it Spitzer} guaranteed time observations. The coverage of the Lockman Hole region by several surveys therefore makes it an excellent field for intercomparison of galaxy candidate lists and measuring spectral energy distributions (SEDs), which will ultimately enable dust temperatures and redshifts to be constrained. This paper is arranged as follows. In \S\ \ref{section:observations} Bolocam and the observations are described. In \S\ \ref{section:datareduction} the data reduction pipeline, including pointing and flux calibration, cleaning and sky subtraction, and mapping, is described. In \S\ \ref{section:sourcelist} the source candidate list and tests of the robustness of the candidates are presented. We devote \S\ \ref{section:simulations} to the extraction of the number counts versus flux density relation using simulations designed to characterize the systematic effects in the data reduction and the false detections, completeness, and bias in the survey. In \S\ \ref{section:discussion} we discuss the implications of the survey, in \S\ \ref{section:futurework} we describe future work for this program, and in \S\ \ref{section:conclusions} we give conclusions. \section{Observations} \label{section:observations} \subsection{Instrument Description} The heart of Bolocam is an array of 144 silicon nitride micromesh (``spider-web'') bolometers cooled to 260 mK by a three-stage ($^4$He/$^3$He/$^3$He) sorption refrigerator. An array of close-packed ($1.5f\lambda$), straight-walled conical feed horns terminating in cylindrical waveguides and integrating cavities formed by a planar backshort couples the bolometers to cryogenic and room-temperature optics. 
The illumination on the 10.4 m diameter CSO primary mirror is controlled by the combination of the feed horns and a cold (6 K) Lyot stop, resulting in 31\arcsec\ beams (FWHM). A stack of resonant metal-mesh filters form the passband in conjunction with the waveguides. A $\lambda=2.1$ mm configuration is also available, which is used for observations of the Sunyaev-Zel'dovich effect and secondary anisotropies in the cosmic microwave background radiation. Technical details of Bolocam are given in \cite{glenn98, glenn03} and \cite{haig04}; numerical simulations of the integrating cavities are described in \cite{glenn02}. A key element of Bolocam is the bolometer bias and readout electronics: an AC biasing scheme (130 Hz) with readout by lock-in amplifiers enables the detectors to be biased well above the $1/f$ knee of the electronics. The electronic readout stability, in conjunction with Bolocam's rigorous sky noise subtraction algorithm, eliminates the need to nutate the CSO subreflector. Another advantage of this AC biasing scheme is that it is easy to monitor the bolometer operating voltage; these voltages are determined by the total atmospheric emission in the telescope beam and the responsivity of the bolometers. Thus, a voltage that is a monotonic function of the in-band atmospheric optical depth and bolometer responsivity is continually measured. Sky subtraction is implemented by either a subtraction of the average bolometer voltages or a principal component analysis (PCA) technique, which is described below. 
\subsection{Scan Strategy} Two sets of observations of the Lockman Hole East ($\mathrm{R.A.} = 10^\mathrm{h}\,52^\mathrm{m}\,08^\mathrm{s}.82$, $\mathrm{decl.} = +57^\circ\,21'\,33\arcsec.80$, J2000.0) were made with Bolocam: 2003 January, when the data were taken with a fast raster scan without chopping (hereafter referred to as raster scan observations), and 2003 May, when the data were acquired with a slow raster scan with chopping as a test of this observing mode (referred to as chopped observations). Approximately 82 hr of integration time over 17 nights were obtained on the field during 2003 January, resulting in 259 separate observations, and 41 hr over 19 nights were obtained in 2003 May, resulting in 64 separate observations. The weather was generally good during the 2003 January run and mediocre to poor during the 2003 May run, where we characterize the weather quality by the sky noise variability (rapid variability of the optical depth).\footnote{Another weather measurement influencing the Bolocam mapping speed is the CSO 225 GHz heterodyne, narrowband, ``tipper tau'' monitor, which measures the zenith atmospheric attenuation. The 2003 January and May Lockman Hole observations yielded $\tau_{225\mathrm{GHz}}$ ranges and 75th percentiles of $\tau_{225\mathrm{GHz}}=0.028-0.129$, $\tau_{75\%}=0.083$ and $\tau_{225\mathrm{GHz}}=0.014-0.307$, $\tau_{75\%}=0.200$, respectively.} One hundred and nineteen bolometer channels were operational during 2003 May; however, only 89 bolometer channels were used in the analysis of the 2003 January data. (The remaining bolometers were not included in making the final Lockman Hole map because of excess noise and/or electronics failures.) Nominally one feed horn per hextant is blocked to enable these dark bolometers to be used for bias and noise monitoring. 
The 2003 May chopped observations were not deep enough to detect galaxies individually at $>3\ \sigma$ and were used only for pointing verification by cross-correlation with the raster scan observations. During 2003 January, observations were made by scanning the telescope at 60\arcsec\ $\mathrm{s^{-1}}$ in right ascension and stepping in declination by 162\arcsec\ ($\sim \frac{1}{3}$ of a field of view) between subscans (defined as a single raster scan across the sky at a fixed declination) to build up the map. Subsequent scans (which we define as the set of subscans needed to cover the entire declination range of the field) were taken with a $\pm 11$\arcsec\ jitter for each 162\arcsec\ step to minimize coverage variations. Combined with a 5$^\circ$ tilt of the bolometer array relative to azimuth and a fixed Dewar angle, such that the rotation relative to scan direction varies over the night, this yielded even coverage and sub-Nyquist sampling in the cross-scan direction (declination). Sub-Nyquist sampling was automatically achieved in the in-scan direction (right ascension) with 50 Hz sampling of the lock-in amplifiers. In 2003 May, the chopped observations were made with raster scans in azimuth and steps in elevation, but with a scan rate of 5\arcsec\ $\mathrm{s^{-1}}$ and a symmetric chopper throw of $\pm45$\arcsec\ in azimuth, with frequencies of 1 and 2 Hz. A coverage map of the Bolocam Lockman Hole East field from 2003 January is shown in Figure \ref{fig:coverage}, where the integration time per 10\arcsec $\times$10\arcsec\ pixel is shown. Bolocam's 8\arcmin\ field of view is not small compared to the map size; thus, there is a large border around the map where the coverage is reduced and nonuniform compared to the central region. Hence, we define a ``uniform coverage region'' of 324 arcmin$^2$ in the center where the rms in the integration time per pixel is 12\%. 
Because the rms noise varies inversely as the square root of the integration time, the noise dispersion is approximately 6\% in the uniform coverage region (2\% after the map has been optimally filtered as discussed in \S\ \ref{subsection:mapping}). Our analysis is confined to the uniform coverage region. The observational parameters are summarized in Table \ref{table:observationalparameters}. \section{Data Reduction} \label{section:datareduction} \subsection{Basic Pipeline} The Lockman Hole observations were reduced with a custom IDL-based software pipeline. The raw files were cleaned with a PCA sky subtraction, where an atmospheric and instrumental noise template was generated through an eigenvector decomposition of the time stream data (\S\ \ref{subsection:cleaning}). In the case of the chopped observations, which are characterized by both positive and negative beams, the time streams were first demodulated and then convolved with the expected source crossing structure (first the positive beam, then the negative beam). This results in a positive net peak at the nominal source position with the full source amplitude and symmetric negative beams with half the source amplitude. Once the cleaned time streams were obtained, a map was generated by co-adding individual time streams, weighted by their power spectral densities (PSDs) integrated over the spectral response to a point source. Pointing offsets were applied to individual observations from the global pointing model generated from observations of submillimeter pointing sources (\S\ \ref{subsection:pointing}). Time streams were calibrated from lock-in amplifier voltages to millijanskys using observations of primary and secondary flux calibrators (\S\ \ref{subsection:calibration}). The final map was generated in right ascension and declination using sub-beam-sized pixelization (\S\ \ref{subsection:mapping}) and Wiener filtered to maximize signal-to-noise ratio (S/N) for detections of point sources.
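The chopped-data convolution step described above can be illustrated with a toy time stream. This is a schematic sketch only (the pipeline's actual demodulation differs in detail): correlating the demodulated positive/negative beam pair with a matched $(+\tfrac{1}{2},\,-\tfrac{1}{2})$ kernel yields a full-amplitude positive peak at the source position flanked by symmetric half-amplitude negative beams, as stated in the text.

```python
import numpy as np

# Toy demodulated chopped time stream: a point source appears as a
# positive beam crossing followed, one chop throw later, by an equal
# negative beam crossing.
n, lag = 200, 30
s = np.zeros(n)
s[80], s[80 + lag] = 1.0, -1.0

# Matched kernel: +1/2 at zero lag, -1/2 at the chop lag.
k = np.zeros(lag + 1)
k[0], k[-1] = 0.5, -0.5

# Cross-correlation: out[t] = 0.5*s[t] - 0.5*s[t+lag], i.e. a
# full-amplitude peak at the source position and two half-amplitude
# negative beams one chop throw on either side.
out = np.correlate(s, k, mode="full")[len(k) - 1:]
```

The same kernel applied to real data also roughly whitens the chopped signature before map-making, which is why the convolution is performed directly on the demodulated time streams.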
\subsection{Cleaning and Sky Subtraction} \label{subsection:cleaning} To facilitate removal of fluctuating atmospheric water vapor emission (sky noise) from the bolometer signals, Bolocam was designed such that the feed horn beams overlap maximally on the primary mirror of the telescope and therefore sample very similar columns of atmosphere. Thus, the sky noise, which dominates the fundamental instrument noise by a factor of $\sim$100, is a nearly common-mode signal. To remove this correlated $1/f$ noise with maximum effectiveness, a PCA technique was developed. The formalism of the PCA analysis is standard \citep[see, e.g.,][]{murtagh87}. Here the covariance matrix is built from the $n$ bolometers by $m$ time elements matrix for each subscan. Eigenfunctions of the orthogonal decomposition that have ``large'' eigenvalues, corresponding to large contributions to the correlated noise, are nulled and the resulting functions are transformed back into individual bolometer time streams. This technique is applicable for the dim ($\lesssim$10 mJy) submillimeter galaxies of the Lockman Hole (and other blank-field surveys) because the source signal contributes negligibly to the sky templates and is largely uncorrelated from bolometer to bolometer. The PCA technique is not appropriate for extended sources, however, in which case the bolometers see correlated astrophysical signals, which are then attenuated. The PCA decomposition was applied to raster scan and chopped data, after chop demodulation in the latter case. Cosmic-ray strikes (spikes in the time streams) are flagged and not included in constructing the eigenfunctions. The precise level of the cut on the large eigenvalues is somewhat arbitrary. The greater the number of eigenfunctions that are nulled, the lower the resulting noise in the cleaned time stream, but the correspondingly greater source flux density removed. 
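A minimal NumPy sketch of this PCA cleaning for a single subscan, with the eigenvalue cut left as a parameter, follows; the pipeline's actual implementation details (iteration scheme, spike flagging) may differ.

```python
import numpy as np

def pca_clean(data, cut_sigma=3.0):
    """Null eigenmodes whose eigenvalues lie more than cut_sigma standard
    deviations above the mean of the remaining eigenvalue distribution.
    data: (n_bolometers, n_time) array for one subscan.
    Returns the mean-subtracted, sky-subtracted time streams."""
    d = data - data.mean(axis=1, keepdims=True)
    cov = d @ d.T / d.shape[1]            # bolometer-bolometer covariance
    evals, evecs = np.linalg.eigh(cov)
    coeffs = evecs.T @ d                  # project onto eigenmodes
    keep = np.ones(evals.size, dtype=bool)
    while keep.sum() > 2:                 # iterative cut on outliers
        mu, sig = evals[keep].mean(), evals[keep].std()
        bad = keep & (evals > mu + cut_sigma * sig)
        if not bad.any():
            break
        keep &= ~bad
    coeffs[~keep] = 0.0                   # null correlated sky-noise modes
    return evecs @ coeffs                 # back to bolometer time streams

# Demo: a strong common-mode "sky" signal on 20 bolometers plus small
# uncorrelated detector noise; cleaning should remove the common mode.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
sky = 100.0 * np.sin(2 * np.pi * 3 * t)
raw = sky + rng.normal(0.0, 1.0, (20, t.size))
cleaned = pca_clean(raw)
```

Because the common-mode sky signal concentrates into one (or a few) large-eigenvalue modes while a faint point source spreads across many modes, nulling only the outlying eigenmodes removes the atmosphere at a modest cost in source flux, consistent with the attenuation simulations described below.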
Empirically, an iterative cut with the nulling of eigenfunctions with eigenvalues $>3\ \sigma$ from the mean of the eigenvalue distribution produced a balance between sky emission removal and source flux density reduction in simulated observations by maximizing the S/N. Because the distribution of eigenvalues for each observation is characterized by a few outliers (typically $4-7$) at large $\sigma$-values, the overall variance of the time stream is largely dominated by these eigenvalues, resulting in a S/N that is insensitive to the cut threshold for $2-5$ $\sigma$. Furthermore, the distribution of source candidates in the combined Lockman Hole map was invariant under variations in the cut threshold in this range. The PCA sky subtraction attenuates the signal from point sources in addition to the atmospheric signal because it removes low-frequency power from the time streams. The amount of flux density attenuation is determined by the number of PCA components that are removed from the raw time streams, which is controlled by the cut on the eigenvalues: a more aggressive cut results in greater attenuation. Monte Carlo simulations were done to determine the amount by which the flux density of galaxy candidates was reduced by the cleaning. The simulations were done in the following manner: A fake source (Gaussian, 31\arcsec\ FWHM) was injected into a blank Lockman Hole map. A simulated bolometer time stream was generated from the map of the fake source and was added to the raw bolometer time streams of an individual Lockman Hole observation. The time stream data were then cleaned with PCA and mapped in the ordinary manner. The resulting source was fitted by a two-dimensional Gaussian to determine the attenuation of the injected source flux density. This simulation was repeated 1014 times with fake sources injected into random observations at random positions and ranging in flux density from 0.1 to above 1000 mJy (Fig.\ \ref{fig:fluxreduction}). 
The average reduction in flux density is 0.19 with an rms dispersion of 0.04, independent of flux density to 1 Jy. Above 1 Jy, typical for bright pointing and flux calibrators, the amount of attenuation by PCA was found to depend on the brightness of the fake source. Thus, a different cleaning technique was used for these sources. An atmospheric noise template was generated by simply taking an average of all $n$ bolometers for each time element. The mean-subtracted sky template was then correlated to each of the individual bolometer time streams and the correlated component was subtracted. To prevent the correlation coefficient from being contaminated by the calibrators, multiple scans (including telescope turnaround time between scans) were concatenated and used together to correlate the average sky template to each individual bolometer signal, thus ensuring a small contribution from the point source. A similar analysis to that for PCA flux reduction was performed for the simple average sky subtraction technique, yielding an average flux density reduction of 0.07, independent of source flux, with an rms dispersion of 0.02. \subsection{Pointing} \label{subsection:pointing} Observations of planets, quasars, protostellar sources, H$_{\mathrm{II}}$ regions, and evolved stars were used to construct separate pointing models for the 2003 January and May observing runs. Observations of the pointing sources were taken at the same scanning speeds as the Lockman Hole observations. The pointing fields were generally small (scan areas of $\sim$4\arcmin\ $\times$ 4\arcmin), although several larger maps (10\arcmin\ $\times$ 10\arcmin) were made of Mars so that the source would pass over the entire bolometer array for measuring relative responsivities and beam maps. Pointing observations are generally small because source crossings are only needed in a small subset (15 or so) of bolometers to determine the pointing offsets.
These observations were used to map and correct the distortion over the field of view, which is in broad agreement with the distortion predicted by a Zemax$^{\circledR}$ ray-tracing model. The residual rms in the raster-scanned pointing model for the ensemble of all 2003 January sources is 9.1\arcsec, although the local pointing registered to a nearby pointing source is superior.\footnote{A subsequent pointing model for a localized region of sky yields an rms of 4.5\arcsec.} This random pointing error results in an 18\% flux density reduction of the Lockman Hole galaxy candidates (analytically derived from a convolution of the 31\arcsec\ Bolocam beam with a 9.1\arcsec\ Gaussian random pointing error), which is corrected for in the reported flux densities (and uncertainties in these fluxes) of Table \ref{table:sourcelist}. While the 2003 January pointing observations were used to construct a pointing model that was applied to the entire sky, the region of the celestial sphere near the Lockman Hole was not well sampled. A pointing correction derived from sources far away ($>30^\circ$) from the Lockman Hole is therefore susceptible to a systematic offset. Pointing observations were made much more frequently (once per hour) during the 2003 May run and sources near the Lockman Hole were emphasized to create an improved local pointing model; consequently, the 2003 May pointing model near the Lockman Hole was superior to the 2003 January pointing model. No galaxy candidates were detected at $\ge3\ \sigma$ significance in the 2003 May chopped Lockman Hole map owing to poor weather; however, it was cross-correlated with the 2003 January map to compare the pointing models. The cross-correlation yielded a shift of 25\arcsec\ in right ascension of the 2003 January data with respect to the 2003 May data (Fig.\ \ref{fig:crosscorrelation}). 
Because the pointing on the sky near the Lockman Hole was substantially better and more frequently sampled for the 2003 May run, we attribute this shift to a systematic offset in the 2003 January pointing model. Thus, a systematic 25\arcsec\ shift in right ascension was applied to the 2003 January Lockman Hole map. The need for the shift is also apparent in a comparison between the Bolocam map and the 8 mJy SCUBA 850 $\mu$m and MAMBO 1.2 mm surveys, as several of the Bolocam galaxy candidates become coincident with SCUBA and MAMBO sources in the overlap region of the surveys. Because no pointing observations were taken near the Lockman Hole, it is difficult to quantify the uncertainty in the 9.1\arcsec\ pointing rms. An independent measurement of our pointing uncertainty was performed by examining both 10 VLA\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} radio positions coincident with Bolocam Lockman Hole galaxy candidates and the subset of 5 sources with additional SCUBA and/or MAMBO counterparts (see \S\ \ref{subsection:ancillary}). The rms errors between the Bolocam and radio positions are $10.2^{+3.1}_{-2.4}$ and $9.3^{+4.4}_{-3.0}$ arcsec for the entire 10-source sample and 5-source subset, respectively. The quoted uncertainties are the minimum length 90\% confidence intervals for 10 and 5 degrees of freedom (for both $\delta_{\mathrm{R.A.}}$ and $\delta_{\mathrm{decl.}}$, each of which independently determines the pointing error), respectively. \subsection{Flux Calibration} \label{subsection:calibration} Observations of primary calibrators (planets) and secondary calibrators (protostellar sources, H$_\mathrm{II}$ regions, and evolved stars) were used for flux calibration. 
Reference planetary flux densities were obtained from the James Clerk Maxwell Telescope (JCMT) calibration Web site,\footnote{See http://www.jach.hawaii.edu/jac-bin/planetflux.pl.} and flux densities of secondary calibrators were obtained from JCMT calibrators \citep{sandell94,jenness02}. The flux density of IRC +10216 is periodic; the flux density was adjusted to the epoch of observation using the 850 $\mu$m SCUBA phase. The reference flux densities were corrected for the Bolocam bandpass, which is centered at 265 GHz (the flux densities in the Bolocam band are 5\% larger than those quoted by the JCMT for the SCUBA 1.1 mm band). During 2003 January, Saturn had a semidiameter of 10\arcsec; this is not small compared to the 31\arcsec\ Bolocam beam, so corrections for the angular extent of Saturn were required. The standard technique for flux calibration is to calibrate a given science observation using the flux calibrator observations taken nearest in time, which were presumably taken at similar atmospheric opacity and air mass. With Bolocam, we are able to use a more sophisticated technique via continuous monitoring of the bolometer operating resistance using the DC level of the lock-in amplifier output signal. The technique uses the following logic. The atmospheric optical loading increases as the atmospheric optical transmission decreases, which may occur because of changes in zenith opacity (i.e., weather) or intentional changes in telescope elevation. The bolometer resistance decreases monotonically as the atmospheric optical loading increases. Simultaneously, the bolometer responsivity decreases monotonically as the bolometer resistance decreases. Thus, the flux calibration (in nV Jy$^{-1}$, where the voltage drop across the bolometer is proportional to its resistance), which is proportional to the product of atmospheric transmission and bolometer responsivity, is expected to be a monotonic function of the bolometer resistance.
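Because the relation is monotonic, applying it amounts to a one-dimensional lookup: interpolate the calibrator measurements at the DC level of the science observation. A minimal sketch, assuming a table of (DC level, calibration) pairs measured on calibrators; the function name and data are hypothetical, not the actual pipeline:

```python
import numpy as np

def calibration_from_dc_level(dc_level, dc_samples, cal_samples):
    """Look up the flux calibration (nV/Jy) for a science observation from its
    median DC lock-in level by interpolating the empirical monotonic relation
    measured on calibrator observations.  Inputs here are hypothetical."""
    order = np.argsort(dc_samples)          # np.interp requires increasing abscissae
    return np.interp(dc_level, dc_samples[order], cal_samples[order])
```

The continuously monitored DC lock-in signal then supplies `dc_level` for each science observation, so the calibration tracks changes in opacity and elevation rather than relying only on the nearest-in-time calibrator.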
This relation is measured empirically, as shown in Figure \ref{fig:calibration}, by plotting the flux calibration (voltage at the bolometer in nV Jy$^{-1}$ of source flux) from each of the ensemble of calibrator observations against the median DC lock-in amplifier voltage measured during the observation. This relation is then combined with the continuously monitored DC lock-in signal to apply the appropriate flux calibration value during science observations. Note that the curve is measured only using sources dim enough to ensure linear bolometer response; Jupiter was dropped for this reason. The flux density calibration derived from Figure \ref{fig:calibration} is biased relative to the blank-field sources by the combination of three effects: reduction in the flux density of calibration sources due to average cleaning, reduction in the flux density of blank-field sources due to PCA cleaning, and reduction in the flux density of blank-field sources due to pointing errors. The first two effects cause the calibration curve of Figure \ref{fig:calibration} to be shifted up by the factor $\epsilon_{\mathrm{avg}}/\epsilon_{\mathrm{PCA}}$, where the flux reduction factors $\epsilon_{\mathrm{avg}}$ and $\epsilon_{\mathrm{PCA}}$ are as determined in \S\ \ref{subsection:cleaning}. The effect of Gaussian random pointing errors of rms $\sigma_p$ on the peak flux density of a source observed with a Gaussian beam of width $\sigma_b$ is equivalent to a convolution of the beam with a Gaussian of rms $\sigma_p$. The resulting reduction in peak height can be analytically calculated as \begin{eqnarray} \nonumber \epsilon_p = \frac{\sigma_b}{\sqrt{\sigma_b^2 + \sigma_p^2}} = 0.82^{+0.09}_{-0.12} \end{eqnarray} for a 31\arcsec\ FWHM beam and a random pointing error of 9.1\arcsec. 
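As a numerical check, the peak-height reduction above follows directly from the quoted beam and pointing figures; a minimal sketch (the function name is ours, and the FWHM-to-rms conversion assumes a Gaussian beam):

```python
import math

def pointing_flux_reduction(beam_fwhm, pointing_rms):
    """Peak-height reduction from convolving a Gaussian beam with a Gaussian
    random pointing error: eps_p = sigma_b / sqrt(sigma_b**2 + sigma_p**2)."""
    sigma_b = beam_fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> rms
    return sigma_b / math.sqrt(sigma_b**2 + pointing_rms**2)

# 31 arcsec FWHM beam, 9.1 arcsec rms pointing error
print(round(pointing_flux_reduction(31.0, 9.1), 2))  # prints 0.82
```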
The uncertainty quoted for $\epsilon_p$ is the minimum length 90\% confidence interval obtained from the rms pointing error between the Bolocam galaxy candidates and coincident radio sources (see \S\ \ref{subsection:ancillary}). (While the local pointing observations around a specific altitude and azimuth are clustered, each with a smaller pointing error rms, the Lockman Hole observations were taken over a large range of zenith and azimuthal angles and thus have an overall pointing error defined by the ensemble of pointing observations.) Thus, the final bias in flux density is \begin{eqnarray} \nonumber \epsilon = \frac{\epsilon_p \epsilon_{\mathrm{PCA}}}{\epsilon_{\mathrm{avg}}} = 0.71^{+0.08}_{-0.10}. \end{eqnarray} All flux densities (as well as uncertainties in these fluxes) quoted in this paper, including the simulations of \S\ \ref{section:simulations}, have been corrected for this flux bias. The uncertainty in the flux bias is a systematic effect that produces a correlated shift in all source fluxes. \subsection{Mapping and Optimal Filtering} \label{subsection:mapping} Bolocam maps are built up by co-adding subscans weighted by their time stream PSDs integrated over the spectral (temporal frequency) band of a point source at the raster scan speed. Data points were binned into 10\arcsec $\times$ 10\arcsec\ pixels, with approximately 30,000 hits per pixel. Each hit represents a 20 ms integration per bolometer channel. Four maps were created: a coverage map with the number of hits per pixel (Fig.\ \ref{fig:coverage}); the PCA-cleaned, optimally filtered astrophysical map; a coverage-normalized map; and a within-pixel rms map. In the coverage-normalized map, each pixel was multiplied by the square root of the number of hits (effectively the integration time) in that pixel to account for the nonuniform coverage in the map when comparing pixels. 
The dispersion of the bolometer voltages (from each of the hits) within each pixel was recorded in the within-pixel rms map. Because the signal band of interest (point sources) does not fall throughout the entire temporal (or spatial) frequency range of the PSD of the data, we filter the co-added map with an optimal (Wiener) filter, $g(q)$, to attenuate $1/f$ noise at low frequencies and high-frequency noise above the signal frequency: \begin{equation} \label{wiener} g(q) = \frac{s^*(q)/J(q)}{\int{| s(q) |^2 / J(q) d^2q}}, \end{equation} \noindent where $J(q)$ is the average PSD, $s(q)$ is the Fourier transform of the Bolocam beam shape from map space to spatial frequency ($1/x$) space, $q$, and the asterisk indicates complex conjugation. The denominator provides the appropriate normalization so that, when the filter is convolved with a map, the peak heights of beam-shaped sources are preserved. $J(q)$ is obtained by transforming the time stream PSDs (averaged over all of the Lockman Hole observations) to a spatial PSD assuming azimuthal symmetry. A two-dimensional kernel built from equation (\ref{wiener}) was thus convolved with the co-added map to maximize S/N for detections of point sources. An analogous filter was applied directly to the demodulated time streams of the chopped observations, with $s(t)$ represented by a positive and negative beam separated by the chop throw (90\arcsec). The cleaned, co-added, optimally filtered map is presented in Figure \ref{fig:map}. There is a perimeter a few arcminutes wide around the map that does not lie within the uniform coverage region (cf.\ Fig.\ \ref{fig:coverage}). There are 17 galaxy candidates at $>3\ \sigma$, apparent as unresolved bright spots, numbered in order of decreasing brightness. Six false detections are expected from simulations (discussed in detail in \S\ \ref{section:simulations}). There are no negative sidelobes associated with the source candidates because the observations are not chopped.
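A minimal sketch of constructing and applying the optimal filter of equation (\ref{wiener}) in the discrete (FFT) setting; the array conventions and the flat noise PSD in the example are illustrative assumptions, not the actual pipeline:

```python
import numpy as np

def wiener_filter_map(map2d, beam2d, noise_psd2d):
    """Optimal (Wiener) point-source filter g(q) = s*(q)/J(q), normalized so
    that peak heights of beam-shaped sources are preserved."""
    s = np.fft.fft2(np.fft.ifftshift(beam2d))             # beam transform s(q)
    g = np.conj(s) / noise_psd2d                          # s*(q) / J(q)
    g *= map2d.size / np.sum(np.abs(s)**2 / noise_psd2d)  # normalization integral
    return np.fft.ifft2(np.fft.fft2(map2d) * g).real
```

Convolving the co-added map with this kernel attenuates both the $1/f$ noise below the signal band and the noise above it, while leaving the peak flux density of a beam-shaped source unchanged.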
The 850 $\mu$m SCUBA 8 mJy \citep{scott02} and 1.2 mm MAMBO \citep{greve04} surveys cover patches with radii of $\sim5$\arcmin\ and $\sim7$\arcmin\ in the center of the map (central 122 and 197 arcmin$^2$, respectively). A comparison of the maps is given in \S\ \ref{section:sourcelist}. \section{Source List} \label{section:sourcelist} \subsection{Source Extraction} \label{subsection:extraction} Source extraction was performed on the PCA-cleaned, optimally filtered, coverage-normalized map consisting of all the raster scan observations co-added together. The algorithm begins with a cut selecting the uniform coverage region, defined as the set of those pixels for which (1) the coverage is $\ge70$\% of the maximum per-pixel coverage and (2) the within-pixel rms is less than 2 $\sigma$ from the mean within-pixel rms. The uniform coverage region is a contiguous region in the center of the map. Next, an rms in sensitivity units (the flux density of each pixel times the square root of the integration time for that pixel in units of mJy s$^{1/2}$) was computed in the uniform coverage region. This rms is valid for the entire uniform coverage region since variations in coverage have been accounted for by the $t_i^{1/2}$ coverage normalization, where $t_i$ is the total integration time for pixel $i$. All pixels with coverage-normalized flux densities exceeding $3\ \sigma$ (``hot pixels'') were flagged as potential sources. Then hot pixels were grouped into multi-pixel sources by forming maximal groups of adjacent hot pixels, including those within $\sqrt{2}$ pixels (i.e., diagonally adjacent). The right ascension and declination of the source candidates were computed by centroiding two-dimensional Gaussians on the groups. Because convolution of the map with the Wiener filter properly weights the flux contribution from each pixel, the best estimate of the source flux density in the optimally filtered map is given by the peak value in the group.
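The hot-pixel grouping step above is a connected-component search with 8-connectivity (diagonal neighbors included); a minimal sketch operating on a hypothetical S/N map:

```python
import numpy as np

def group_hot_pixels(snr_map, threshold=3.0):
    """Group pixels above threshold into candidate sources using
    8-connectivity (diagonally adjacent pixels belong to the same group).
    Returns a list of pixel-index lists, one per candidate."""
    hot = snr_map > threshold
    seen = np.zeros_like(hot, dtype=bool)
    groups = []
    for i, j in zip(*np.nonzero(hot)):
        if seen[i, j]:
            continue
        stack, members = [(i, j)], []
        seen[i, j] = True
        while stack:                      # flood fill over adjacent hot pixels
            a, b = stack.pop()
            members.append((a, b))
            for da in (-1, 0, 1):
                for db in (-1, 0, 1):
                    u, v = a + da, b + db
                    if (0 <= u < hot.shape[0] and 0 <= v < hot.shape[1]
                            and hot[u, v] and not seen[u, v]):
                        seen[u, v] = True
                        stack.append((u, v))
        groups.append(members)
    return groups
```

The candidate flux density is then the peak map value within each group, since the Wiener filter has already weighted each pixel's contribution.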
A histogram of the pixel values in the uniform coverage region is shown in Figure \ref{fig:histogram}. The quantity that is plotted is the pixel sensitivity, with the scaling by $t_i^{1/2}$ accounting for the nonuniform coverage in the map. Note that the sensitivity histogram should not be interpreted as instrument sensitivity as the histogram uses an optimally filtered (smoothed) map but scales by sub-beam-sized integration times. The negative side of the histogram, plotted logarithmically, is very nearly Gaussian. A Gaussian fit to the Bolocam noise-only (jackknife) distribution is overplotted as a solid line (see \S\ \ref{subsection:nulltests}), indicating a clear excess on the positive side with respect to the Gaussian. The galaxy candidates make up this excess. Since the pixels are 10\arcsec $\times$10\arcsec\ in size and the beam size is 0.30 arcmin$^2$, there are approximately 11 pixels per source candidate. The source candidate list is presented in Table \ref{table:sourcelist}, where the flux densities are listed in order of decreasing brightness in the fifth column. Seventeen galaxy candidates were detected at $>3\ \sigma$ significance, with the brightest being 6.8 mJy. Seven of the candidates were detected at $>3.5\ \sigma$ significance. The flux densities of the source candidates were attenuated by the PCA cleaning; their corrected flux densities are listed (see \S\ \ref{subsection:cleaning}). The source candidate list is compared to the 850 $\mu$m SCUBA 8 mJy and 1.2 mm MAMBO surveys in \S\ \ref{subsection:ancillary}. \subsection{Tests for Robustness of Galaxy Candidates} \label{subsection:nulltests} Two tests were carried out to check the robustness of the galaxy candidates. The first test was a jackknife test in which 50\% of the observations were randomly chosen and co-added together into a map and the remaining 50\% of the observations were co-added into a second map.
If the source candidates are real and coherent over multiple observations, then the positive-side excess of the histogram in Figure \ref{fig:histogram} should disappear when the two maps are differenced. Conversely, if the source candidates arise from spurious events in individual observations, such as cosmic-ray strikes, then the excess would not disappear when the two maps are differenced. This algorithm was repeated 21 times with the first 50\% of the observations randomly selected independently each time, and the histograms were averaged. For such an algorithm, one expects the noise realizations to be approximately independent; the actual correlation was measured to be $\sim$4\%. The result is shown in Figure \ref{fig:jackknife}. A Gaussian distribution fits the jackknife histogram extremely well. The absence of a positive-side excess indicates that the source candidates in the Wiener-filtered map are common to all observations. The negative side of the real map histogram (cf.\ Fig.\ \ref{fig:histogram}) is slightly broader because confusion noise from sources below our threshold is absent in the jackknife histogram. Similar histograms result from jackknife tests of scan direction (+R.A.\ vs. -R.A.), intranight variations (cuts on local sidereal time), and night-to-night variations, indicating that the galaxy candidates are not caused by systematic effects, such as scan-synchronous or elevation-dependent noise. This strong statistical test indicates that the Bolocam source candidates are real. A second test was performed to verify that the source candidates arise from the co-addition of many observations rather than from spurious events. In this test, individual maps were made from each of the 259 observations. These maps were then co-added with fixed-amplitude offsets with random directions (phases). 
The expectation of this null test is that sources coherent over multiple observations are smeared out onto rings of a fixed radius, resulting in the disappearance of the positive-side excess. (The positive-side excess will be distributed over many pixels and therefore spread over many bins of the histogram.) Source candidates arising from isolated spurious events or characterized by length scales much larger than the Bolocam beam will merely be moved or negligibly broadened, leaving the histogram unchanged. Sixteen iterations were performed at each jitter amplitude ranging from 15\arcsec\ to 70\arcsec\ (Fig.\ \ref{fig:jitter}). The rms of the jittered histograms in excess of the rms of the jackknife distribution of Figure \ref{fig:jackknife} continues to decline out to a random jitter of 70\arcsec. The excess does not go to zero at large amplitudes because the sources are spread out onto annuli with finite radii and will still be present at a low level. Since the area of the annulus increases as $r$, the excess should drop as $r^{-1}$ (at large jitter amplitudes where the beams do not overlap), as indicated by Figure \ref{fig:jitter}. This null test confirms that the excess variance (the positive-side excess in the histogram of Fig.\ \ref{fig:histogram} from source candidates) arises from the ensemble of observations and has a small characteristic length scale (corresponding to point sources). \subsection{Comparison With Other Submillimeter and Millimeter-Wave Surveys} \label{subsection:ancillary} The Bolocam galaxy survey provides a unique contribution to the current state of submillimeter galaxy surveys. The 850 $\mu$m JCMT SCUBA 8 mJy survey (Scott et al.\ 2002; hereafter SCUBA survey), with a 14\arcsec\ beam, implemented a jiggle map strategy with a 30\arcsec\ chop throw over 122 arcmin$^2$ to an rms of 2.5 mJy beam$^{-1}$.
The 1.2 mm IRAM MAMBO survey (Greve et al.\ 2004; hereafter MAMBO survey), with a 10.7\arcsec\ beam, scanned at 5\arcsec\ s$^{-1}$ with a chop throw of 36\arcsec$-$45\arcsec\ and a chop frequency of 2 Hz over 197 arcmin$^2$ to an rms of 0.6$-$1.5 mJy beam$^{-1}$. Bolocam's 60\arcsec\ s$^{-1}$ raster scan strategy (without chopping) facilitated a large 324 arcmin$^2$ survey to a uniform rms of 1.4 mJy beam$^{-1}$ (Wiener filtered for detection of point sources). Using a model SED based on nearby, dusty, star-forming galaxies (see \S\ \ref{subsection:luminosities}) gives relative flux densities of 1\,:\,2.0\,:\,0.8 and relative rms of 1\,:\,0.9\,:\,0.6$-$1.4 for the Bolocam, SCUBA, and MAMBO surveys, respectively, for a galaxy redshift of $z=2.4$ (with the range given for MAMBO due to nonuniform noise). Figure \ref{fig:ancillary} provides a cumulative overview of recent far-infrared, submillimeter, and radio observations of the Lockman Hole. The circles of Bolocam, SCUBA, and MAMBO observations correspond to 2 $\sigma$ confidence regions of position, including both beam sizes and stated pointing errors. The 6 cm VLA radio sources of \cite{ciliegi03} and unpublished 21 cm VLA sources of M.\ Yun (2004, private communication), with average noise levels of 11 and 10$-$15 $\mu$Jy beam$^{-1}$, respectively, are identified. The M.\ Yun (2004, private communication) radio field covers the entire Bolocam good coverage region, while the center of \cite{ciliegi03} observations is offset to the northwest, with an overlap of approximately 130 arcmin$^2$. Also shown are the 20 published radio sources from deep 21 cm VLA observations (average noise level of 4.8 $\mu$Jy beam$^{-1}$) from \cite{ivison02} that are coincident with SCUBA sources, as well as an additional 21 cm VLA source discovered by \cite{egami04} from the reexamination of the \cite{ivison02} map. 
Infrared detections from a recent wide-field 95 $\mu$m ISOPHOT survey \citep{rodighiero04} and recent {\it Spitzer} detections of SCUBA sources \citep{egami04} are also identified. Five SCUBA sources from the \cite{scott02} catalog (LE850.9, 10, 11, 15, 20) were retracted by \cite{ivison02} on the basis of large $\sigma_{\mathrm{850\mu m}}$ values (and lack of radio identifications) and are marked with crosses. Examination of Figure \ref{fig:ancillary} shows discrepancies in detections between the surveys. Table \ref{table:ancillary} summarizes the coincident detections between Bolocam 1.1 mm, SCUBA 850 $\mu$m, MAMBO 1.2 mm, and VLA radio observations. Each row in the table corresponds to the fractional number of counterparts detected by each survey. The five SCUBA sources retracted by \cite{ivison02} are not included in this comparison. The coverage of each survey was taken into account, with only the overlapping uniform coverage regions considered. The surveys have a wide range of agreement, ranging from 23\% (7 of 31 SCUBA sources detected by Bolocam) to 75\% (6 of 8 Bolocam sources detected by SCUBA). Six of the 17 Bolocam detections are galaxies previously detected by the SCUBA 8 mJy survey. Of the remaining 11 Bolocam sources, 9 lie outside the SCUBA 8 mJy survey region. Similarly, 7 of the 11 Bolocam sources present within the MAMBO good coverage region were detected with MAMBO at 1.2 mm. Two of the 4 Bolocam source candidates not detected by MAMBO have expected 1.2 mm flux densities (from the model SED of \S\ \ref{subsection:luminosities}) below the MAMBO detection threshold for $z = 2.4$. The large fraction of Bolocam sources detected by SCUBA and MAMBO suggests that these submillimeter galaxy candidates are real.
The impact of the converse of this statement is less clear: The majority of SCUBA and MAMBO sources were not detected by Bolocam, although 17 out of 24 nondetected SCUBA sources and 7 out of 15 nondetected MAMBO sources have expected 1.1 mm flux densities (from the aforementioned model SED) below the Bolocam detection threshold; nor is there a strong correlation between the SCUBA and MAMBO source lists themselves. Some of these sources may not be real or may not be modeled well by the assumed SED. Furthermore, the majority (65\%) of Bolocam source candidates have at least one radio coincidence (Ivison et al.\ 2002; Ciliegi et al.\ 2003; M.\ Yun 2004, private communication), although a 34\% accidental detection rate is expected. (This accidental detection rate is the Poisson likelihood that one or more of these known radio sources, randomly distributed, fall within the 2 $\sigma$ confidence region of the Bolocam beam.) To help verify the 9.1\arcsec\ pointing rms of \S\ \ref{subsection:pointing}, the rms positional error of the Bolocam galaxy candidates compared to the VLA radio positions was calculated both for all Bolocam sources with radio counterparts and for the subset of sources (1, 5, 8, 16, and 17) with additional SCUBA and/or MAMBO detections. (Bolocam galaxy candidate 14 was excluded owing to source confusion.) Five of the Bolocam source candidates show no counterparts in the other surveys. These may be false detections, although four of these candidates are near the edge or outside of the SCUBA and MAMBO good coverage regions, which may explain the lack of additional submillimeter detections. It is possible that one or more of these four Bolocam source candidates without radio counterparts may instead be sources at high redshift ($z > 3$), where the positive {\it K}-correction (sharp drop in flux density with increasing redshift) causes dim radio counterparts.
Four SCUBA detections that have at least two detections from MAMBO, {\it Spitzer}, and VLA were not detected by Bolocam, although the corresponding pixels in the Bolocam map have flux values just below the 4.2 mJy detection threshold for two of these nondetections. A description of each Bolocam detection (as well as nondetections) follows in the next section. A follow-up paper is in preparation that will include a more detailed discussion of individual sources, including redshift/temperature constraints. The Bolocam 1.1 mm beam solid angle is 0.30 arcmin$^2$ and the uniform coverage region of the map is 324 arcmin$^2$. There are thus approximately 1000 beams in the map. With 17 source candidates, or $\sim50$ beams per source, source confusion is not a serious issue. We define ``source confusion'' here as a high spatial density of detected sources that makes it difficult to distinguish individual sources. This should not be confused with ``confusion noise'' from sources below the detection threshold (discussed in detail in \S\ \ref{subsec:ConfusionNoise}). Nevertheless, source confusion exists at some level because several Bolocam sources are either closely spaced or near multiple SCUBA detections. While the clustering properties of submillimeter galaxies remain uncertain, there exists tentative evidence from both two-dimensional angular correlation functions \citep{greve04, scott02, borys03} and clustering analyzed with spectroscopic redshift distributions \citep{blain04} that suggests strong clustering with large correlation lengths \citep[as well as correlation to other classes of high-redshift galaxies, including Lyman break galaxies and X-ray loud AGNs;][]{almaini03}. We do not attempt to quantify source confusion here but address it in a paper in preparation. There are 24 SCUBA 850 $\mu$m (8 mJy survey) and 15 MAMBO 1.2 mm sources within our survey region that we did not detect.
Statistically, however, we detect the aggregate average of these at significances of 3.3 and 4.0 $\sigma$, respectively, at $\lambda=1.1$ mm. This was done by measuring the distribution of ``sensitivity'' values (scaled by $t_i^{1/2}$) for the Wiener-filtered map pixels that coincide with SCUBA or MAMBO sources for which we found no excursion above our detection threshold. If no subthreshold ``counterparts'' are present in these pixels, the sensitivity values should follow the noise distribution of the map (albeit truncated at 3 $\sigma$). Such a distribution has a mean value of $-0.004\ \sigma$. In the data, we find that the sensitivity values for these map pixels have mean values of $1.0 \pm 0.3$ and $0.8 \pm 0.2\ \sigma$ for the SCUBA and MAMBO nondetections, respectively. Assuming that uncertainties are Gaussian distributed, the probabilities that such large nonzero means arose from pure noise are very low, $1.7 \times 10^{-4}$ and $2.5 \times 10^{-4}$, respectively. Thus, we have statistically detected the ensemble of SCUBA and MAMBO sources below our threshold at the 3.3 and 4.0 $\sigma$ confidence levels, respectively. \subsection{Bolocam Detections} The Bolocam detections are as follows: \noindent {\it Bolocam.LE1100.1.---}This 6.8 mJy galaxy candidate has Bolocam and MAMBO detections but falls outside of the region covered by SCUBA observations. A strong 20 cm VLA radio observation (M.\ Yun 2004, private communication) exists within both the Bolocam and MAMBO confidence regions. \noindent {\it Bolocam.LE1100.2.---}This 6.5 mJy Bolocam detection has a coincident 20 cm radio (M.\ Yun 2004, private communication) detection. The source lies outside the good coverage regions of the SCUBA, MAMBO, and \cite{ciliegi03} VLA observations. \noindent {\it Bolocam.LE1100.3.---}This source has a 6.0 mJy Bolocam detection with two plausible 20 cm radio counterparts (M.\ Yun 2004, private communication) but lacks a MAMBO detection.
The source lies outside the good coverage regions of the SCUBA and \cite{ciliegi03} VLA observations. \noindent {\it Bolocam.LE1100.4.---}This 5.2 mJy galaxy candidate has Bolocam and MAMBO detections with no SCUBA or radio detections. Several SCUBA sources (with radio counterparts) are in close proximity to this source; however, the coincident MAMBO source (with a comparatively small 10.7\arcsec\ beam FWHM) confirms the Bolocam detection at this location. \noindent {\it Bolocam.LE1100.5.---}This 5.1 mJy galaxy candidate has Bolocam, SCUBA, and VLA \citep{ivison02} detections with two coincident MAMBO and M.\ Yun (2004, private communication) detections. \noindent {\it Bolocam.LE1100.6.---}Bolocam detection 6 (5.0 mJy) has three potential 20 cm VLA radio counterparts (M.\ Yun 2004, private communication), and a 95 $\mu$m ISOPHOT \citep{rodighiero04} detection within the Bolocam confidence region. No SCUBA or MAMBO coverage (or Ciliegi et al.\ [2003] VLA coverage) exists for this detection. \noindent {\it Bolocam.LE1100.7, 10, 11, 12, 15.---}These five sources (4.9, 4.7, 4.6, 4.6, and 4.4 mJy, respectively) are Bolocam galaxy candidates with no other coincident detections (including radio counterparts), although sources 7, 11, 12, and 15 fall near the edge or outside of the SCUBA, MAMBO, and VLA \citep{ciliegi03} good coverage regions. These sources may be false detections, since six are statistically expected from simulations (see \S\ \ref{section:simulations}), or possibly galaxies at high redshift ($z > 3$). The likelihood of sources 10, 11, 12, and 15 being false detections is enhanced by the fact that their flux densities are near the 4.2 mJy threshold. \noindent {\it Bolocam.LE1100.8, 16.---}These submillimeter galaxies have Bolocam (4.8 and 4.1 mJy, respectively), SCUBA, MAMBO, and VLA (8, 16, Ivison et al.\ 2002; 8, Ciliegi et al.\ 2003; 8, M.\ Yun 2004, private communication) detections.
\noindent {\it Bolocam.LE1100.9.---}This galaxy candidate has a 4.8 mJy Bolocam detection with three 20 cm radio detections (M.\ Yun 2004, private communication) within the Bolocam confidence region. There is no SCUBA, MAMBO, or \cite{ciliegi03} VLA coverage in this region. \noindent {\it Bolocam.LE1100.13.---}This 4.5 mJy Bolocam detection has SCUBA and VLA radio (M.\ Yun 2004, private communication) counterparts, but no \cite{ivison02} or \cite{ciliegi03} VLA detections. \noindent {\it Bolocam.LE1100.14, 17.---}These two closely spaced Bolocam detections (4.4 and 4.0 mJy, respectively) have numerous other detections, including three SCUBA sources, two MAMBO sources, and multiple VLA radio sources (S1, S4, S8, Ivison et al.\ 2002; S1, S4, S8, M.\ Yun 2004, private communication). {\it Spitzer} detections with IRAC at 3.6, 4.5, 5.8, and 8.0 $\mu$m exist for all three SCUBA sources, as well as 24 $\mu$m detections with MIPS for SCUBA sources 1 and 8. (Three IRAC and MIPS sources are seen within a radius of 8\arcsec\ of SCUBA source 8.) Because of the 31\arcsec\ size of the Bolocam beam, we are likely influenced by source confusion. \subsection {Bolocam Nondetections} The Bolocam nondetections are as follows: \noindent {\it SCUBA.LE850.14, 18.---}These galaxy candidates have SCUBA and MAMBO detections, with {\it Spitzer} IRAC and MIPS and VLA (Ivison et al.\ 2002, M.\ Yun 2004, private communication) counterparts. SCUBA source 14 is discernible in the Bolocam observations at 3.9 mJy, just below the 4.2 mJy, 3 $\sigma$ detection threshold. The Bolocam pixel coincident with SCUBA source 18 has a flux density of 1.6 mJy, well below the detection threshold. \noindent {\it SCUBA.LE850.7, 35.---}These sources are detected by SCUBA, {\it Spitzer} IRAC and MIPS, and VLA (7, Ivison et al.\ 2002; 7, 35 M.\ Yun 2004, private communication; 7, Ciliegi et al.\ 2003; 35, Egami et al.\ 2004). 
The flux density in the Bolocam map coinciding with SCUBA source 7 is 3.4 mJy, below the 3 $\sigma$ detection threshold. At a Bolocam flux density of 0.9 mJy, SCUBA source 35 is well into the Bolocam noise. \section{Number Counts} \label{section:simulations} In this section we discuss the extraction of the number (per unit flux density per unit solid angle) versus flux density relation (``number counts'') from the observed sources. Because of the presence of noise (due to the instrument, the atmosphere, and confused background sources), there is a bias in both the observed flux densities and the observed histogram of number of sources versus flux density. This bias, first noted by \cite{eddington13,eddington40}, is quite generic when attempting to measure a statistical distribution in the presence of noise. Further, because our S/N with respect to these noise sources is not large, this bias is an effect comparable in size to the statistical Poisson errors in determining the number counts. There are two broad approaches to extracting the number counts in the regime where bias is significant. The first is to directly correct the observed number versus flux histogram using some knowledge of the statistics of the survey. The other approach is to assume a model and attempt to match its parameters to the data using a fit, aided by simulation. The direct correction approach does not appear promising for this survey. \citet{eddington40} showed that, in the presence of Gaussian measurement noise, one could apply an asymptotic series correction to the observed distribution to obtain a better estimate of the underlying distribution. This correction involves even-numbered derivatives of the observed distribution, and so, with our observed distribution containing only 17 sources, this method is impractical. 
Another approach might be to individually correct each source by its expected bias, but \citet{eddington40} also showed that using the distribution of corrected fluxes as a measure of the underlying distribution is fundamentally incorrect. Thus, we have elected to fit a model to the data. The formalism for relating a given underlying number count distribution to the observed number counts is given in \S\ \ref{subsec:Definitions}. This provides the definition of the survey bias, completeness, and false detection rate. The simulations used to determine these quantities are described in \S\ \ref{subsection:noisemaps}, and their actual calculation is given in \S\ \ref{subsec:surveycalculations}. The effect of confusion noise on the survey is discussed in \S\ \ref{subsec:ConfusionNoise}. The method of extracting the underlying counts is given in \S\ \ref{subsection:modeldnc}. Caveats and difficulties in extracting the underlying number counts, as well as suggestions for improvements in a future analysis, are discussed in \S\ \ref{subsec:SysEffectsNumCounts}. \subsection{Formalism} \label{subsec:Definitions} For a given observing frequency band, we denote the differential number count (DNC) distribution of galaxies per unit flux density interval per solid angle as $N'(S)$. The cumulative number count (CNC) distribution will be denoted $N(S)$, with units of number per solid angle. The relation between the true $N'$ and the observed distribution $n'$ must account for the effects of random noise, the presence of a detection threshold, and confusion noise (i.e., a contribution to the variance of the map due to sources below the detection threshold). As a result of all forms of noise, a source having flux density $S$ is in general observed with a different flux density $s$. 
Let $B(s, S; N')$ be the probability density that a source with true flux density $S$ is observed at a flux density $s$; the implicit dependence on the confusion noise is included by the parametric dependence on $N'$. $B(s, S; N')$ is normalized such that \begin{eqnarray} \label{eq:BNorm} \int_{-\infty}^\infty{B(s, S; N') \,d s} = 1 \end{eqnarray} for all values of $S$. The quantity $B(s, S; N')$ will be referred to as the ``survey bias''. By normalizing according to equation (\ref{eq:BNorm}), one assumes that a source of true flux $S$ will be found at some flux $s$ with probability unity. In the presence of a detection threshold, however, sources whose observed flux fluctuates below the threshold will not be included in $n'$. In this case, the integral in equation (\ref{eq:BNorm}) is not 1, but $C(S; N')$, the ``survey completeness,'' namely, the probability that the source is found at all. Note that this also depends on the confusion noise through $N'$. In addition, there may be {\it false detections} of noise fluctuations, $F(s)$, which contribute to the observed number counts. Thus, the expression for the observed DNC distribution is \begin{equation} \label{eq:finalDNC} n'(s) = F(s) + \int_0^\infty{ B(s, S; N') C(S; N') N' \,d S}. \end{equation} In the following, the dependence of $B$ and $C$ on $N'$ will not be written explicitly. Under the assumptions of uniform Gaussian noise with rms $\sigma$, negligible contribution from confusion noise, and a fixed detection threshold $m\sigma$, analytical expressions for $C(S)$, $B(s, S)$, and $F(s)$ can be derived.
For future reference, these are \begin{equation} \label{eq:GaussianCompleteness} C(S) = \frac{1}{\sqrt{2 \pi} \sigma} \int_{m\sigma}^\infty{ \exp{\left[\frac{-(s-S)^2}{2 \sigma^2}\right]}\,d s}, \end{equation} \begin{equation} \label{eq:GaussianBias} B(s, S) = \frac{1}{C(S)} \frac{1}{\sqrt{2 \pi} \sigma} \exp{\left[\frac{-(s-S)^2}{2 \sigma^2}\right]} \Theta(s - m\sigma), \end{equation} \begin{equation} \label{eq:GaussianFalseDetections} F(s) = \frac{{\cal N}}{\sqrt{2 \pi} \sigma} \exp{\left[\frac{-s^2}{2 \sigma^2}\right]} \Theta(s - m\sigma), \end{equation} where $\Theta(x)$ is the unit step function ($\Theta = 1$ for $x > 0$, $\Theta = 0$ for $x < 0$) and ${\cal N}$ is a normalization equal to the number of independent noise elements. \subsection{Simulation of Noise Maps} \label{subsection:noisemaps} Two types of simulations were done to determine the survey bias, completeness, and false detection rate. Both methods simulate only the instrument and atmospheric noise (the ``random noise'') and do not include the effect of confusion noise. This is appropriate for the case in which the random noise dominates. The validity of this assumption is discussed in \S\ \ref{subsec:ConfusionNoise}. In the first suite of simulations, the observational data were used to generate 100 fake maps (noise realizations) by jittering the individual time streams 60\arcsec\ in right ascension/declination coordinates with a random phase before they were co-added to make maps. This had the effect of washing out the point sources as discussed in \S\ \ref{subsection:nulltests}. Note that realizations of these jittered maps are not fully independent because the noise is somewhat correlated between realizations; the average correlation coefficient between maps is 4\%. Statistical error bars on the completeness and bias determined from this simulation method include the contribution from the correlation.
The pointing jitter dilutes the variance of sources present in the jittered map to 20\% or less of its value in the unjittered map (see Fig.\ \ref{fig:jitter}), effectively removing confusion noise. Because of the large amount of time required to generate many realizations of maps from real time stream data (as in the case of the jittered maps) and the difficulties of fully simulating time stream realizations of instrument and atmospheric noise, a second simulation method was developed. In this method, the noise properties are derived from the jackknife maps, which represent realizations of signal-free instrument noise. The noise model for the map (before optimal filtering) assumes that the noise map, $n(\vect{x})$, can be described as an independent noise per pixel that scales as $1/\sqrt{t_i}$, where $t_i$ is the integration time in pixel $i$, combined with a mild pixel-to-pixel correlation. This correlation is assumed to be stationary over the map and can thus be described by the two-dimensional PSD of the noise map $\xi^2(\vect{k})$, normalized so that its integral has unit variance. These assumptions are justified because, as shown in \S\ \ref{section:sourcelist}, the coverage variation accounts for most of the point-to-point variation in the noise, and examination of the jackknife map PSD shows that the $1/k$ contribution to the PSD is small compared to the white term, leading to largely uncorrelated pixels. The noise model for the map after Wiener filtering is straightforwardly obtained by convolving the noise map with the Wiener filter. 
The assumptions above are equivalent to writing the covariance matrix $\vect{C}$ for the unfiltered map as \begin{displaymath} \vect{C} = \vect{D}^{1/2} {\cal F}^{-1} \xi^2 {\cal F} \vect{D}^{1/2}, \end{displaymath} where $\vect{D}$ is diagonal in pixel space and describes the coverage variations, $\xi$ is diagonal in $\vect{k}$-space and describes the pixel noise correlations, and ${\cal F}$ is the discrete Fourier transform. The elements of $\vect{D}$ can be written as \begin{displaymath} D^{1/2}_{ij} = \sqrt{ \frac{A}{t_i}} \sigma \delta_{ij}, \end{displaymath} where $\sigma^2$ is the sample variance of the noise and $A$ is a normalization that ensures that the sum of the pixel variances $\sum_i A\sigma^2/t_i$ is equal to $(N-1)\sigma^2$, the total noise variance in the map. The noise map should satisfy $\langle n n^T \rangle = \vect{C}$; a given realization is \begin{displaymath} n = \vect{D}^{1/2} {\cal F}^{-1} \xi {\cal F} w, \end{displaymath} where $w$ is a realization of uncorrelated, Gaussian, mean zero, unit variance noise. Determining the noise model then reduces to determining the form of $\xi(\vect{k})$ and the value of $\sigma$. The PSD $\xi^2$ is computed directly from the uniform coverage region of the unfiltered jackknife maps using the discrete Fourier transform; multiple jackknife realizations (which are nearly independent) and adjacent $\vect{k}$-space bins are averaged to reduce the noise on the measurement of the PSD. The overall noise normalization $\sigma$ is determined by requiring that the variance of $n(\vect{x})$, when considered in the good coverage region, Wiener filtered, and multiplied by $\sqrt{t_i}$, is equal to the variance similarly determined from the jackknife maps (\S\ \ref{subsection:nulltests} and Fig.\ \ref{fig:jackknife}). One thousand noise realizations were generated in this way.
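As a concrete illustration, the recipe $n = \vect{D}^{1/2} {\cal F}^{-1} \xi {\cal F} w$ can be sketched directly with FFTs. The map size, the $1/k$ amplitude in $\xi^2$, the uniform coverage, and the 1.4 mJy rms below are illustrative assumptions for this sketch, not the measured survey values.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 256                 # map size (illustrative, not the survey geometry)

# xi^2(k): white noise plus a small 1/k term (assumed shape), normalized
# so that <xi^2> = 1, i.e., the correlation preserves the total variance.
kx = np.fft.fftfreq(npix)
k = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
kmin = 1.0 / npix
xi2 = 1.0 + 0.1 * kmin / np.maximum(k, kmin)
xi2 /= xi2.mean()
xi = np.sqrt(xi2)

# D^{1/2}: per-pixel rms sqrt(A/t_i) * sigma; with uniform coverage this
# reduces to a constant per-pixel rms (1.4 mJy, roughly the survey value).
sigma = 1.4
t = np.ones((npix, npix))
d_half = sigma / np.sqrt(t / t.mean())

# n = D^{1/2} F^{-1} xi F w, with w uncorrelated unit-variance Gaussian noise
w = rng.standard_normal((npix, npix))
n = d_half * np.fft.ifft2(xi * np.fft.fft2(w)).real
```

Because $\langle \xi^2 \rangle = 1$, the sample rms of the realization should match $\sigma$, which is a quick check that the normalization convention is applied consistently.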
\subsection{Calculations of False Detection Rate, Bias, and Completeness} \label{subsec:surveycalculations} The false detections were determined by simply running the source detection algorithm on each of the simulated maps for both types of simulations and recording the number and recovered flux density of the detections. Figure \ref{fig:falsedetections} shows the results for both methods. Also plotted is the theoretical prediction, assuming that the normalization ${\cal N}$ is either $N_{\mathrm{beams}}$ or $N_{\mathrm{pixels}}$, which should bracket the possibilities. It is seen that neither Gaussian model describes the simulated false detection rate well, in either amplitude or shape, although both methods of simulation agree well with each other. The amplitude discrepancy occurs because ${\cal N}$ is the number of effective independent noise elements, which depends on both the correlations in the noise and the detection algorithm, which does not count all pixels above threshold as source candidates but considers all pixels within a merged group to be a single source. The shape of the Gaussian model fits poorly owing to three effects: First, because of the coverage variation, the threshold is not sharply defined in flux density units, causing some false detections {\em below} the threshold. Second, the grouping algorithm merges closely spaced false detections in the Wiener-filtered map and assigns the single flux density of the brightest pixel, with a merging probability that is itself flux dependent. Finally, the pixels are not independent, since both $1/f$ noise and the Wiener filter correlate them. Because of the difficulty in deriving an analytic expression for all these effects, the false detection rate as determined by simulation is used in further analysis (see \S\ \ref{subsection:modeldnc}) instead of the Gaussian prediction.
The mean number of false detections in the Lockman Hole map as determined from simulation is 6 (Poisson distributed). To find the completeness and bias, sources of known flux density were injected into the noise maps. First, a source-only map was created by adding 30 31\arcsec\ FWHM two-dimensional Gaussians at a specified flux density level to a blank map. The sources were injected at random into the uniform coverage region but were spaced far enough apart that the source detection algorithm could distinguish each of them; this circumvented potential complications involving source confusion. Then, the source-only map was added to a noise map to simulate a sky map post cleaning and mapping. Next, this map was Wiener filtered and run through the source extraction algorithm, which enabled us to determine which sources were detected and their resulting flux densities. Each extracted detection was centroided to determine its position, and then its position was compared with the position of the nearest injected source. If the positions were within 15\arcsec\ (roughly the distance between two adjacent, diagonal pixels), the extracted detection was considered real. The flux density was determined by the maximum value of detected pixels, as is appropriate for the Wiener-filtering algorithm. With these mechanics in place, the completeness was calculated by computing the ratio of the number of detections at a given flux to the number of injected sources. This was repeated for source flux densities ranging from 2.8 to 9.8 mJy for simulations from map statistics and from 1.4 to 9.8 mJy for jittered data simulations in 0.7 mJy intervals, with the results plotted in Figure \ref{fig:completeness}. The two types of simulations agree well. The survey completeness is 50\% at the 3 $\sigma$ detection threshold, as expected, because half of the sources at the threshold will be bumped upward by noise and half will be bumped downward. 
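The counting step just described can be sketched in a stripped-down form that replaces the beam-shaped injections and map filtering with idealized one-pixel sources in uncorrelated Gaussian noise (the real pipeline injects 31\arcsec\ FWHM Gaussians and runs the full extraction, so this is only an illustration of the ratio-of-counts calculation):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, m = 1.4, 3.0   # map rms (mJy) implied by the 4.2 mJy, 3 sigma threshold
n_inject = 20000      # injected sources per flux level (illustrative)

def completeness_mc(S):
    """Fraction of sources of true flux S recovered above the m*sigma
    threshold, for idealized one-pixel sources in Gaussian noise."""
    observed = S + sigma * rng.standard_normal(n_inject)
    return np.mean(observed > m * sigma)
```

At the 4.2 mJy threshold this returns approximately 0.5, matching the 50\% completeness at threshold quoted above.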
The simulations also agree with the theoretical prediction for Gaussian noise. The bias was computed by determining the distribution of measured flux densities as a function of injected flux densities. At relatively large flux densities, the bias distribution should approach a Gaussian distribution centered at the injected flux density, with $\sigma$ equal to the map rms. This is seen to be the case in Figure \ref{fig:bias} for injected flux densities $\ge7$ mJy. The figure gives the distribution of the expected observed flux densities (probability density per flux bin, normalized to an integral of unity) for a range of injected flux densities. For low injected flux densities approaching the detection threshold, the distributions become increasingly asymmetric owing to the presence of the threshold. The distributions do not drop abruptly to zero below the threshold because there are variations in the map coverage. Note that sources with true flux below the detection threshold may be detected. The average bias for a source is shown in Figure \ref{fig:bias}; this rises steeply for sources with fluxes near or below the detection threshold. The preceding discussion (in particular the agreement of the simulated bias and completeness with the Gaussian theoretical prediction) indicates that, in spite of coverage variations and correlated noise, the noise in this survey behaves substantially like uniform Gaussian noise. Comparison of the results of the map simulation method with the jitter technique also shows good agreement, indicating that the assumptions that went into the map simulation method are justified and that we have a reasonable model for the survey noise. This gives added confidence to the determination of the false detection rate, which depends only on the noise properties. 
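For reference, the Gaussian-noise expressions of equations (\ref{eq:GaussianCompleteness})--(\ref{eq:GaussianFalseDetections}), against which the simulations were compared, can be transcribed directly. The defaults $\sigma = 1.4$ mJy and $m = 3$ follow from this survey's 4.2 mJy, 3 $\sigma$ threshold; the completeness integral is evaluated in closed form with the complementary error function.

```python
import math

def completeness(S, sigma=1.4, m=3.0):
    """C(S): probability that a source of true flux S is observed above
    the m*sigma threshold (closed form of the completeness integral)."""
    return 0.5 * math.erfc((m * sigma - S) / (math.sqrt(2.0) * sigma))

def bias(s, S, sigma=1.4, m=3.0):
    """B(s, S): density of observed flux s given true flux S, renormalized
    by C(S) so that it integrates to unity above the threshold."""
    if s <= m * sigma:
        return 0.0
    gauss = math.exp(-((s - S) ** 2) / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)
    return gauss / completeness(S, sigma, m)

def false_detections(s, n_elements, sigma=1.4, m=3.0):
    """F(s): density of noise-only fluctuations above threshold, for
    n_elements independent noise elements."""
    if s <= m * sigma:
        return 0.0
    return n_elements * math.exp(-(s ** 2) / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)
```

As consistency checks, the completeness at the threshold is exactly 0.5, and the bias integrates to unity above threshold for any true flux, matching the normalization convention of \S\ \ref{subsec:Definitions}.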
\subsection{Effects of Confusion Noise on the Bias and Completeness Functions} \label{subsec:ConfusionNoise} The completeness and bias function estimates as determined in \S\S\ \ref{subsection:noisemaps} and \ref{subsec:surveycalculations} do not include the effects of confusion noise. The effect of confusion noise is illustrated by considering two extremes: instrument noise dominant over confusion noise and vice versa. When instrument noise is dominant, the bias function for this survey is correctly described by equation (\ref{eq:GaussianBias}). In the confusion-dominated limit, the bias function takes on the shape of the source count distribution, reflecting the fact that it is the underlying distribution of sources that may bias the flux of a given source. In between these two extremes, the Gaussian bias function acquires additional width and a long positive tail from the source counts distribution. This tail increases the probability that a low flux source will fluctuate above the detection threshold. Consequently, the completeness at low flux densities is increased over the case of Gaussian noise. Note that small changes in the bias tail can cause large changes in the subthreshold completeness. The case at hand falls in this in-between regime. Understanding the modification to the bias function by confusion noise is necessary for accurately estimating how confusion transforms a model source count distribution to an observed one, as in equation (\ref{eq:finalDNC}). It is difficult to precisely model the effects of confusion on bias and completeness because they depend on the source count distribution that one is trying to measure. We can estimate the size of the confusion noise present in our maps by finding the relative contributions of the noise and signal variances. The sample (per pixel) variance of the optimally filtered Lockman map in the good coverage region is found to be 2.37~mJy$^2$. 
The variance of the optimally filtered jackknife maps in the same region is 1.81~mJy$^2$, leaving 0.56~mJy$^2$ due to sources. The variance contributed by all the sources in Table \ref{table:sourcelist} is approximately 0.33~mJy$^2$, of which 0.10~mJy$^2$ is expected to be due to false detections of random noise peaks. This leaves 0.33~mJy$^2$ due to undetected sources. This represents an S/N per pixel of 0.37 in rms units; considered in quadrature with the 1.81~mJy$^2$ of the noise, it increases the noise estimates and the rms of the bias function by about 9\%. To estimate the effect of confusion noise on the survey completeness and bias, particularly in the tail, sources were injected one at a time into the real map and extracted using the source extraction algorithm, with the completeness and bias calculated as in the noise-only case. This has the effect of making only a small change in the observed distribution of pixel values, effectively preserving that distribution. No effort was made to avoid the positions of source candidates, as this would bias the procedure by failing to take into account the tail of the distribution. This test showed that the bias acquired a high flux density tail, as expected, and the completeness was increased above its Gaussian noise value. It should be emphasized, however, that this method provides an upper limit because it effectively ``double-counts'' confusion: the map into which the sources are injected is already confused. Positions of high flux in the true map may already consist of two coincident lower flux sources, and so the probability of a third source lying on top of them is not truly as high as the probability we would calculate by this procedure. The determination of the completeness and bias in this way is also limited by the statistics of only having one realization of the confusion noise.
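The variance bookkeeping above can be checked term by term; here the quoted per-pixel S/N is read as the rms of the undetected-source variance relative to the total map rms, which reproduces the 0.37 figure (that reading is our interpretation of the text).

```python
map_var       = 2.37   # mJy^2: optimally filtered Lockman map, good coverage
jackknife_var = 1.81   # mJy^2: noise-only jackknife maps
detected_var  = 0.33   # mJy^2: sources in the candidate list
false_var     = 0.10   # mJy^2: expected contribution of false detections

source_var = map_var - jackknife_var                       # 0.56: all sources
undetected_var = source_var - (detected_var - false_var)   # 0.33: below threshold
snr_per_pixel = (undetected_var / map_var) ** 0.5          # ~0.37 in rms units
noise_inflation = ((jackknife_var + undetected_var) / jackknife_var) ** 0.5 - 1.0  # ~9%
```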
Applying these new bias and completeness functions, as well as the Gaussian noise-only bias and completeness (eqs.\ [\ref{eq:GaussianCompleteness}] and [\ref{eq:GaussianBias}]), to a power-law model of the number counts (the best-fit model of \S\ \ref{subsection:modeldnc}), the change in the observed counts is of order the size of the 68\% confidence interval for Poisson errors in the observed counts. Thus, confusion noise is not wholly negligible nor does it dominate. In extracting the number counts, we ignore the confusion noise but discuss how to treat it correctly in \S\ \ref{subsec:SysEffectsNumCounts}. \subsection{Fitting a Model to the Differential Number Counts} \label{subsection:modeldnc} To extract number counts from this data set, we use equation (\ref{eq:finalDNC}) with the simulation-derived false detections and the completeness and bias of equations (\ref{eq:GaussianCompleteness}) and (\ref{eq:GaussianBias}). A model for $N'$ is also required. Because of the small number of detections, the model must have as few free parameters as possible so that the data will be able to constrain the model parameters. This pushes us away from detailed, physically motivated models and toward a simple model in combination with several, somewhat arbitrary, constraints. We use a two-parameter power-law model for $N'$ given by \begin{equation} \label{eq:dnds_model} N'(S;\vect{p}) = A \left(\frac{S_0}{S}\right)^\delta, \end{equation} where $\vect{p} = [A, \delta]$ and $S_0$ is a fixed constant (not a parameter of the model). The choice of this form for $S_0 \neq 1$ reduces the degeneracy between $A$ and $\delta$ that prevails over narrow ranges of $S$, such as in this survey. We have set $S_0 = 4$~mJy. The unaltered model of equation (\ref{eq:dnds_model}) is unsatisfactory at both high and low flux values. At low fluxes, the model diverges, requiring a cutoff on which the result depends. 
The issue of the low-flux cutoff is discussed further in \S\ \ref{subsec:SysEffectsNumCounts}. For now, we simply impose a low-flux cutoff $S_l = 1$~mJy in the integral over $S$ in equation (\ref{eq:finalDNC}). In addition, if the model is extended indefinitely to high fluxes, it may produce too many sources to be consistent with the lack of observed sources. This constraint nevertheless does not determine the shape of number counts above the highest flux observed. Thus, one must either implement a high-flux cutoff or assume something about the shape of the number counts beyond the region where they are measured. To address this, a single bin of the same width as the other bins has been added to the data at high fluxes, where the data are zero and the model nonzero; beyond $S_h = 7.4$~mJy, the model is zero. Fixing the upper cutoff as above and allowing the lower to float to its best-fit value produces $S_l = 1.3$~mJy. Two additional possibilities were also tried for a high-flux cutoff: (1) setting the model to zero beyond the highest filled bin resulted in a very shallow index ($\delta < 2$), and (2) allowing the highest bin to extend to infinity produced a very steep power law ($\delta > 10$). While both of these cases are unphysical, they illustrate the sensitivity of the power-law model to the high-flux cutoff. Thus, the constraints that have been adopted, while arbitrary, serve to restrict the range of possible models sufficiently to extract reasonable values of $[A, \delta]$. However, in light of this arbitrariness, the resulting constraint on the parameters of the power-law model must be treated with skepticism. To fit to the model, the data are first binned. The number of sources with observed flux between $s_k$ and $s_{k+1}$ is denoted by $n_k$.
We assume that the number of sources counted in any interval $ds$ follows an approximate Poisson process and therefore that each $n_k$ is a Poisson-distributed random variable that is independent of $n_j$ for $k \neq j$. The same would not be true of the cumulative counts, and so the differential counts are preferred for this analysis. The likelihood of observing the data $\{n_k\}$ if the model is $\{N_k\}$ is then \begin{equation} \label{eq:likelihood} {\cal L} = \prod_k \frac{N_k^{n_k} \exp{[-N_k]}}{n_k!} \end{equation} because it is assumed that the bins are independent. The value of the model in a given observed bin is defined as \begin{equation} \label{eq:model} N_k(\vect{p}) = \frac{1}{\Delta s} \int_{s_k}^{s_{k+1}}{\left(F(s) + \int_0^\infty{ B(s,S) C(S) N'(S;\vect{p}) \,d S}\right) \,d s}. \end{equation} The function $-\ln{\cal L}$ is minimized with respect to $\vect{p}$ to find the maximum likelihood value of $\vect{p}$. Two modifications of the likelihood equation (\ref{eq:likelihood}) were made for this analysis. The first is that a prior was applied to constrain $\delta > 2$, so that both the integral of the number counts and the integral of the total flux density remained finite for $S > 0$. Thus, \begin{eqnarray} \nonumber {\cal L'} = {\cal L} \Theta(\delta - 2). \end{eqnarray} Second, to extract confidence regions for the fitted parameters, it was necessary to normalize ${\cal L'}$, such that \begin{eqnarray} \nonumber \int{{\cal L'(\vect{p})} \,d \vect{p}} = 1. \end{eqnarray} This normalization was done by numerical evaluation of the likelihood and its integral over the region where it is appreciably nonzero (see Fig.\ \ref{fig:likelihood} below). The various components of this fit are shown in Figure \ref{fig:fit_results}. The data are shown with 68\% confidence interval error bars, based on the observed number of sources in each bin, scaled to an area of a square degree. 
The error bars were computed according to the prescription of \citet{feldman98} for small number Poisson statistics (which unifies the treatment of upper confidence limits and two-sided confidence intervals). The error bar on the highest flux density bin is an upper limit. The model is clearly consistent with the data given the error bars. (All six model bins falling within the 68\% confidence interval error bars of the data {\em may} imply that the errors have been overestimated, although this has a 10\% probability of occurring.) Examining the fit in stages, one finds that the product of the survey completeness and the best-fit number counts shows that the survey incompleteness reduces the number of sources observed at low flux densities as expected; above $\sim$7 mJy, the survey is essentially complete. The effect of the bias, however, combined with the steepness of the number counts, increases the number of sources observed in every bin substantially above that of the underlying source distribution. In fact, based on the best-fit DNC and computing over the range of fluxes observed, 67\% of real sources will have a true flux density {\em below} the detection threshold. Note that the best-fit number counts lie below the Poisson errors for the raw counts, demonstrating again that the survey bias is a nonnegligible effect. Given the maximum likelihood values of $A$ and $\delta$ (52.0 and 3.16, respectively), the cumulative source count at $S_{1.1\mathrm{mm}} > 2.75$ mJy is 192$^{+108}_{-88}$ deg$^{-2}$. This is consistent with the 1.2 mm MAMBO number count result (378$^{+136}_{-113}$ deg$^{-2}$) for the combined Lockman Hole and European Large-Area ISO Survey (ELAIS) N2 regions \citep{greve04}. Contours of the likelihood function for this fit are shown in Figure \ref{fig:likelihood}.
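The fitting machinery of equations (\ref{eq:likelihood}) and (\ref{eq:model}) can be sketched on synthetic data. For brevity the bias, completeness, and false-detection terms are omitted, and the bin edges, survey area, and injected parameters are invented; this is a toy version of the actual fit, not a reproduction of it.

```python
import numpy as np
from scipy.optimize import minimize

S0 = 4.0                            # mJy, fixed pivot of the power-law model
edges = np.arange(1.0, 8.0, 0.7)    # illustrative flux-density bin edges (mJy)
area = 0.03                         # deg^2, illustrative survey area

def expected_counts(p):
    """Expected counts per bin: area times the integral of A*(S0/S)^delta
    over each bin (bias, completeness, false detections omitted here)."""
    A, delta = p
    S_lo, S_hi = edges[:-1], edges[1:]
    return area * A * S0 ** delta * (S_hi ** (1 - delta) - S_lo ** (1 - delta)) / (1 - delta)

def neg_log_like(p):
    A, delta = p
    if A <= 0 or delta <= 2:        # prior: delta > 2, as in the text
        return np.inf
    N_k = expected_counts(p)
    # Poisson -ln L per bin, dropping the parameter-independent ln(n_k!) term
    return float(np.sum(N_k - n_obs * np.log(N_k)))

rng = np.random.default_rng(2)
n_obs = rng.poisson(expected_counts([500.0, 3.0]))   # synthetic observed counts
res = minimize(neg_log_like, x0=[300.0, 2.5], method="Nelder-Mead")
```

With ample synthetic counts the minimizer recovers the injected $(A, \delta)$ closely; in the real fit the counts are few, which is one reason the confidence region in Figure \ref{fig:likelihood} is broad.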
In calculating the likelihood, the upper and lower flux limits were assumed to be a correct model, and as such, the likelihood does not account for violations of this assumption. The shaded region was obtained by integrating the normalized likelihood ${\cal L'}$ for values ${\cal L'} > {\cal L}_{thresh}$, such that the integral was equal to 0.68. These are Bayesian errors that incorporate the prior belief that $\delta > 2$. \subsection{Difficulties and Caveats} \label{subsec:SysEffectsNumCounts} As the above discussion indicates, the extraction of number counts from this data set is subject to a number of difficulties and caveats. In addition to the small-number statistics and the difficulty in modeling the dependence of the survey bias and completeness on the confusion noise that have already been discussed, a separate discussion of the principal limitations of the preceding analysis is in order. These are the low S/N of the detections and the sensitivity of the result to the lowest flux objects assumed to contribute to the observed counts. These limitations may be overcome with a more sophisticated future analysis, as we discuss. The survey bias, combined with an underlying number counts distribution rising at low fluxes, has a strong effect on the fluxes of sources observed at low S/N. This problem is exacerbated in the presence of confusion noise, but it is present in surveys in which random noise dominates over the confusion noise as well. This point has been appreciated in the historical confusion literature. \citet{crawford70} showed how to use a maximum likelihood method to extract a power-law slope from observed flux densities, and \cite{murdoch73} extended this to the case of sources observed with Gaussian noise. Because of the divergence at low fluxes of a power law, a lower limit in flux must be imposed in order to obtain finite answers. 
The principal conclusion of \cite{murdoch73} was that the power-law slope of the number counts determined by the maximum likelihood method depends sensitively on this lower cutoff if the S/N of the sources used in the survey is less than 5, whereas above this point the slope determination, while biased, is not dependent on the lower cutoff. This sensitivity to the lower flux cutoff applies to the amplitude of the power law as well, a point clearly described by \citet{marshall85}. Although the \cite{murdoch73} result is for the rather unphysical case of a power law with an abrupt cutoff, the broader conclusion, that number counts derived from low-S/N sources depend sensitively on the assumed behavior of the underlying counts far below threshold, holds generally. This may be seen by considering the behavior of $C(S) N'$ as $S \to 0$. As long as this function is increasing, the bias will continue to push some sources of low intrinsic flux up above the detection threshold, and so the low-S/N regime will contain sources from well below threshold. [Note that this is consistent with the behavior of $B(s,S)$, since for $s$ greater than threshold, $B$ is positive for all values of $S$.] One can see from Figure \ref{fig:completeness} that the completeness drops off rather slowly for Gaussian noise (and even more slowly when confusion noise is added) and never vanishes. In fact, the product of the Gaussian completeness times any $S^{-\delta}$, $\delta > 0$, diverges as $S \to 0$, so for many otherwise reasonable number counts models, this problem will occur. Thus, in the presence of any sort of bias, whether due to confusion or noise, deriving accurate counts above threshold requires a nontrivial amount of information about the counts well below threshold if the S/N of the detections is low. Since all of our sources have S/N~$\le$~5, any constraint placed on the power-law amplitude and slope will likely depend sensitively on the lower cutoff chosen.
An analysis technique that overcomes all of the shortcomings mentioned above is the so-called fluctuation or ``$P(D)$'' analysis \citep{scheuer57,scheuer74,condon74}. This analysis matches the shape of the observed pixel value distribution (Fig.\ \ref{fig:histogram}) against the prediction for a model distribution combined with the instrument noise. It overcomes the small-number statistics by using the full map instead of only the above-threshold detections. It addresses the nonlinear aspects of equation (\ref{eq:finalDNC}) by directly including the confusion noise. Low S/N is no longer an issue, since individual sources are no longer considered, and the lowest flux considered is naturally at the confusion limit, where additional sources contribute only a mean to the observed histogram. The unchopped scan strategy of the Lockman Hole data simplifies such an analysis because of the simplicity of the effective beam. Additional effects, such as the attenuation of large angular scale structure by the atmospheric cleaning or the angular correlation of sources, may also be straightforwardly included. A paper on this analysis is in preparation. \section{Discussion} \label{section:discussion} \subsection{Comparison with Previous Number Count Results} \label{subsection:comparenumbercounts} A number of other groups have previously published number counts of submillimeter galaxies. Figure \ref{fig:number_counts} shows selected recent results. These include surveys of blank fields by SCUBA on the JCMT at 850~$\mu$m \citep{barger99, borys03, scott02}, observations of galaxies lensed by clusters, also using SCUBA at 850~$\mu$m \citep{blain99, chapman02, cowie02, smail02}, and blank-field surveys by MAMBO on the IRAM telescope \citep{greve04}. The Bolocam result is plotted as the maximum likelihood cumulative number counts (computed from the DNC described in \S\ \ref{subsection:modeldnc}), evaluated from 1 mJy to the maximum observed flux density (6.8 mJy).
The number counts in the figure are not adjusted for the wavelength differences of the surveys. The Bolocam result is in broad agreement with previous measurements; the maximum likelihood cumulative number counts are consistent with the 1200 $\mu$m measurement and below the 850 $\mu$m measurements, as expected if the same population of objects is being measured. The region of 68\% probability in parameter space has been translated to cumulative number counts and is shown by the region between the dashed curves. This region does not correspond to the naive expectation of Poisson errors based on the number of detected sources. This is due to both the strong effect of the bias and the flux cutoffs imposed on the model. Because of the bias, it is inappropriate to assume that the number of observed sources in a bin can be used as a measure of the uncertainty in the underlying number counts in that bin. The effect of assuming an upper flux cutoff is particularly evident in the figure in the rapid drop of the cumulative counts as the cutoff is approached. This causes the error bars to be artificially small, as any model is constrained to be zero beyond this point. The maximum likelihood number count model presented here, as well as its errors, depends strongly on the exact low- and high-flux cutoffs assumed for the underlying distribution and consequently cannot be treated as definitive. In addition to the caveats above, it should be borne in mind that the uncertainty of the flux bias discussed in \S\ \ref{subsection:calibration} (derived from the rms pointing error between the Bolocam galaxy candidates and coincident radio sources) introduces a systematic shift in the simple model of equation (\ref{eq:dnds_model}): the parameter $A$ changes by an amount $(1 \pm \sigma_\epsilon / \epsilon)^{-\delta}$. This gives a steep dependence of the amplitude of the number counts on the calibration error and the presumed power-law index. 
At high flux densities, where the survey is nearly complete and the effect of the bias is smallest, model-independent constraints may be obtained. In particular, the lack of any observed sources with flux density greater than 8 mJy has been used to place a 90\% upper confidence limit on the cumulative number counts above 8 mJy; this is shown by the dashed horizontal line in the figure. This constraint depends only linearly on the calibration error. Bolocam appears to be measuring near the region where the number counts, based on both the 850 and 1200 $\mu\mathrm{m}$ measurements, would be expected to turn over, but because of the limited survey area, we do not strongly constrain the number counts at the bright end of the luminosity function. \subsection{Integrated Flux Density} \label{subsec:IntegratedFluxDensity} The fraction of the FIRAS integrated far-infrared background light \citep{fixsen98} measured by this survey can be computed in several ways. Summing the flux densities of all observed sources gives 85 mJy, or 3.9\% of the FIRAS background over the survey area at 1.1 mm ($8.0 \times 10^{-22}$ W m$^{-2}$ sr$^{-1}$ Hz$^{-1}$). Subtracting out the expected mean flux of false detections gives 58 mJy, or 2.7\% of the FIRAS background. Integrating the maximum likelihood DNC between 1 and 6.8 mJy (the maximum observed) gives 276 mJy, or $\sim 13$\% of the FIRAS background. Since it seems plausible that the number counts do in fact steepen beyond the upper range of our observations, we conclude that at least $\sim$95\% of the light from submillimeter sources lies below the detection threshold of this survey and $\sim87$\% below the minimum flux derived from our number counts model. 
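The 90\% upper limit from zero detections above 8 mJy follows from elementary Poisson statistics. A minimal sketch (the 324 arcmin$^2$ area is taken from the survey description elsewhere in this paper; the conversion ignores completeness corrections, so the published limit may differ in detail):

```python
import math

def poisson_upper_limit(n_obs, confidence):
    """Upper limit on a Poisson mean given n_obs observed events.

    For n_obs = 0 this is closed form: P(0 | mu) = exp(-mu) = 1 - confidence,
    so mu = -ln(1 - confidence).
    """
    if n_obs == 0:
        return -math.log(1.0 - confidence)
    raise NotImplementedError("only the zero-detection case is sketched here")

# Zero sources observed above 8 mJy -> 90% upper limit on the expected count.
mu_max = poisson_upper_limit(0, 0.90)           # ~2.30 sources
area_arcmin2 = 324.0                            # survey area from the text
n_per_deg2 = mu_max / area_arcmin2 * 3600.0     # cumulative-counts upper limit
print(f"N(>8 mJy) < {n_per_deg2:.1f} deg^-2 at 90% confidence")
```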
\subsection{Implied Luminosities and Star Formation Rates} \label{subsection:luminosities} The flux density of a galaxy at an observed frequency, $\nu$, is related to its intrinsic luminosity, $L$, by \citep{blain02} \begin{eqnarray} \label{eqn:luminosity} S_\nu=\frac{1+z}{4 \pi D_L^2}L \frac{f_{\nu(1+z)}}{\int{f_{\nu'} \,d \nu'}}, \end{eqnarray} where $D_L$ is the luminosity distance to redshift $z$, $f_{\nu(1+z)}$ is the redshifted SED of the galaxy, and $\int{f_{\nu'} \,d \nu'}$ is the integrated rest SED. For a flat ($\Omega_k=0$) cosmology, it can be shown \citep[e.g.,][]{peebles93} that the luminosity distance is given by \begin{eqnarray} \nonumber D_L = \frac{c\ (1+z)}{H_0}\int_0^z{\frac{1}{\sqrt{\Omega_M(1+z')^3+\Omega_\Lambda}}\,d z'}. \end{eqnarray} To estimate the bolometric luminosities of the submillimeter galaxies detected by Bolocam, a template SED was constructed that assumes a blackbody emission spectrum modified by a dust emissivity term: \begin{eqnarray} \label{eqn:opticaldepth} f_\nu \propto \epsilon_\nu B_\nu(T) \propto [1-\exp{(-\tau_\nu)}] B_\nu(T), \end{eqnarray} where $B_\nu(T)$ is the Planck function evaluated at dust temperature, $T$, and frequency, $\nu$, and $\tau_\nu$ is the optical depth of the dust: \begin{eqnarray} \nonumber \tau_\nu = \left(\frac{\nu}{\nu_0}\right)^\beta. \end{eqnarray} The dust emissivity index, $\beta$, is believed to lie between 1 and 2 \citep{dunne00}. The form of equation (\ref{eqn:opticaldepth}) is commonly assumed in the literature for dusty nearby galaxies and high-redshift AGNs, including \cite{benford99}, \cite{omont01}, \cite{priddey01}, and \cite{isaak02}. This equation reduces to a simple optically thin emission spectrum, $\epsilon_\nu B_\nu(T) \sim \nu^{2+\beta}$, in the Rayleigh-Jeans limit with $\nu \ll \nu_0$, and it asymptotes to $B_\nu(T)$ at high frequencies (because an emissivity of $> 1$ is unphysical).
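The luminosity distance integral above is readily evaluated with simple quadrature. This is a sketch only; the cosmological parameters are those assumed later in this section.

```python
import math

def luminosity_distance_mpc(z, omega_m=0.3, omega_l=0.7, h0=73.0, n_steps=10000):
    """Luminosity distance in a flat (Omega_k = 0) cosmology, in Mpc:
    D_L = c (1+z)/H0 * int_0^z dz' / sqrt(Omega_M (1+z')^3 + Omega_Lambda),
    evaluated with a simple trapezoidal rule."""
    c_km_s = 299792.458
    dz = z / n_steps
    integrand = lambda zp: 1.0 / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
    integral = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, n_steps):
        integral += integrand(i * dz)
    integral *= dz
    return c_km_s * (1.0 + z) / h0 * integral

print(f"D_L(z=2.4) = {luminosity_distance_mpc(2.4)/1000.0:.1f} Gpc")
```

For $z=2.4$ this gives a luminosity distance of roughly 19 Gpc.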
Observations of luminous low-redshift galaxies (Arp 220 and Mrk 231) and high-redshift galaxies detected by deep submillimeter surveys furthermore suggest that a power law, $f_\nu \propto \nu^\alpha$, is appropriate to model the hotter components of dust on the Wien side of the spectrum \citep{blain99}. We implement such a power law at high frequencies, matched to equation (\ref{eqn:opticaldepth}) at 1.2$\nu_0$. Creating a composite SED of nearby dusty {\it IRAS} galaxies, high-redshift submillimeter galaxies, gravitationally lensed high-redshift galaxies, and high-redshift AGNs \citep[][and references therein]{blain02}, we find that parameters of $T$ = 40 K, $\nu_0$ = 3700 GHz, $\beta$ = 1.6, and $\alpha = -1.7$ provide a reasonable fit. Assuming a cosmology of $\Omega_\Lambda$ = 0.7, $\Omega_M$ = 0.3, and $h_0$ = 0.73 and a galaxy redshift of $z$ = 2.4 (the median redshift that Chapman et al.\ [2003b] derive for their sample of 10 submillimeter galaxies identified using high-resolution radio observations), equation (\ref{eqn:luminosity}) gives extreme bolometric luminosities of $L = (1.0-1.6) \times 10^{13}$ $\mathrm{L}_\odot$ for the range of flux densities detected by Bolocam. The derived luminosities are insensitive to redshift, varying by less than 25\% for $0.6 < z < 12$. Assuming dust temperatures of 30 and 50 K implies luminosities of $(3.5-5.9) \times 10^{12}$ $\mathrm{L}_\odot$ and $(1.8-3.0) \times 10^{13}$ $\mathrm{L}_\odot$, respectively. If these galaxies are lensed, their intrinsic luminosities will be lower. Observations of nearby star-forming galaxies suggest the following relation between the SFR present in a galaxy and its far-infrared luminosity: \begin{eqnarray} \nonumber \mathrm{SFR}=\epsilon \times 10^{-10} \frac{L_{60 \mu \mathrm{m}}}{\mathrm{L}_\odot}\mathrm{M_\odot}\mathrm{yr}^{-1}, \end{eqnarray} where $L_{60 \mu \mathrm{m}}$ is the 60 $\mu$m luminosity.
The value of $\epsilon$ varies in the literature from 2.1 to 6.5 \citep{scoville83, thronson86, rowan97} because of different assumptions about the duration of the starburst, different initial mass functions (IMFs), and lower mass limits. In this paper we adopt a value of $\epsilon=2.1$ from a ``cirrus'' model that combines very small grains and polycyclic aromatic hydrocarbons (PAHs) with a Salpeter IMF in a starburst of OBA stars over $2 \times 10^{6}$ yr \citep{thronson86}. Obtaining $L_{60 \mu \mathrm{m}}$ from our model SED yields large SFRs of $480-810$ $\mathrm{M_\odot}\mathrm{yr}^{-1}$. While these calculated luminosities and SFRs are sensitive to the SED model parameters, particularly $T$ and $\beta$, most recent models of local star-forming galaxies nevertheless result in dust temperatures and emissivities that imply extreme luminosities and SFRs. It is possible, however, that these extremely luminous galaxies derive some of their power from AGNs \citep[e.g.,][]{alexander03}, in which case the SFRs have been overestimated. Observations of ultraluminous infrared galaxies (ULIRGs) in the local universe ($z \lesssim 0.1$) with luminosities $> 10^{13}$ $\mathrm{L}_\odot$ show that nearly all of these galaxies possess luminous AGNs and that the dominant power source in the majority of nearby ULIRGs may be AGNs rather than star formation (Sanders 1999). Recent X-ray observations and optical spectroscopic data of $z > 1$ ultraluminous galaxies, however, indicate that in almost all cases the AGNs account for $< 20\%$ of the total bolometric output of higher-redshift galaxies \citep{alexander04}.
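The template SED and the SFR relation used in these estimates can be sketched in a few lines. This is a sketch under the representative parameters quoted above ($T=40$ K, $\beta=1.6$, $\nu_0=3700$ GHz, $\alpha=-1.7$), with the Wien-side power law attached by simple continuity at $1.2\nu_0$ (the actual fit may differ in detail); the normalization is arbitrary, so only the spectral shape and the SFR arithmetic are meaningful.

```python
import math

H_OVER_K = 4.799243e-11  # Planck constant over Boltzmann constant [K s]

def sed(nu_ghz, temp_k=40.0, beta=1.6, nu0_ghz=3700.0, alpha=-1.7):
    """Template SED f_nu (arbitrary normalization): a greybody
    [1 - exp(-(nu/nu0)^beta)] B_nu(T), joined continuously to a power law
    nu^alpha above nu_match = 1.2 nu0."""
    def greybody(nu):
        x = H_OVER_K * nu * 1e9 / temp_k
        return (1.0 - math.exp(-((nu / nu0_ghz) ** beta))) * nu ** 3 / math.expm1(x)
    nu_match = 1.2 * nu0_ghz
    if nu_ghz <= nu_match:
        return greybody(nu_ghz)
    return greybody(nu_match) * (nu_ghz / nu_match) ** alpha

def sfr_from_l60(l60_lsun, epsilon=2.1):
    """SFR in M_sun/yr from the 60 micron luminosity in L_sun."""
    return epsilon * 1e-10 * l60_lsun

# Shape check: in the Rayleigh-Jeans limit the spectral index approaches 2 + beta.
slope = (math.log(sed(20.0)) - math.log(sed(10.0))) / math.log(2.0)
print(f"low-frequency spectral index ~ {slope:.2f}")

# Back-of-envelope inversion: SFRs of 480-810 M_sun/yr with epsilon = 2.1
# correspond to L60 of roughly (2.3-3.9) x 10^12 L_sun.
l60_lo, l60_hi = 480.0 / 2.1e-10, 810.0 / 2.1e-10
print(f"implied L60 range: {l60_lo:.2e} - {l60_hi:.2e} L_sun")
```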
{\it Spitzer} observations will prove useful in investigating the incidence of AGNs versus star formation in submillimeter galaxies from the shape of the mid-infrared continuum emission; initial results confirm the high-redshift X-ray results and show a mixture of infrared-warm AGNs and cooler starburst-dominated sources \citep{egami04, frayer04}, with a smaller fraction ($\sim$25\%) of energetically important AGNs \citep{ivison04}. \section {Future Work} \label{section:futurework} Observations at shorter submillimeter wavelengths are vital to both confirm the Bolocam galaxy candidate detections and make photometric redshift and temperature estimates. Follow-up 350 $\mu$m photometry of the Bolocam-detected submillimeter galaxies with SHARC-II is planned to fill in the SEDs of these galaxies. Precise astrometry afforded by the radio identifications (as well as the 10\arcsec\ beam size of SHARC-II) will allow optical and infrared counterparts to be identified. Furthermore, {\it Spitzer} far-infrared observations combined with the Bolocam 1.1 mm galaxy survey will provide a flux density ratio that is strongly dependent on redshift for a given temperature. This is because the rest wavelength corresponding to the observed {\it Spitzer} wavelength of 70 $\mu$m is on the rapidly falling Wien side of the greybody spectrum (for a $z \sim 2$ galaxy at 40 K), and Bolocam's 1.1 mm observations are on the steep $\nu^{2+\beta}$ ($\beta\approx$1.5) modified Rayleigh-Jeans side of the SED. The ratio of $S_{1.1\mathrm{mm}}/S_{70\mu\mathrm{m}}$ is thus highly dependent on redshift, growing by a factor of 250 from $z=1$ to 5. {\it Spitzer} IRAC observations will also provide independent photometric redshift determinations from the SEDs of stellar populations of submillimeter galaxies redshifted into the near-IR. 
These combined observations, in conjunction with the radio-to-far-infrared correlation \citep{yun02}, will thus allow the temperature and redshift distributions of these submillimeter galaxies to be constrained. A detailed discussion of the SEDs and photometric redshift/temperature estimates of the Lockman Hole galaxies will follow in a companion paper (in preparation). As discussed in \S\ \ref{subsection:comparenumbercounts}, this survey does not constrain the number counts at flux densities above 7 mJy, at approximately the break point where the number counts are expected to drop sharply (based on the 850 $\mu$m observations in Fig.\ \ref{fig:number_counts}). This can be addressed with a survey covering a larger area to shallower depth. Such a survey has been started with Bolocam in the COSMOS field,\footnote{See http://www.astro.caltech.edu/cosmos.} which currently covers $\sim 1000$ arcmin$^2$. This survey should provide either a determination of, or a strong upper limit on, the 1.1 mm number counts beyond 7 mJy, and may also uncover extremely bright, interesting sources, perhaps with strong AGN components, that should be easy to follow up at other wavelengths. In addition to the wide area, the COSMOS observations have extremely uniform coverage ($<$3\% rms) and a highly cross-linked scan strategy that aids in rejecting atmospheric $1/f$ noise better than the Lockman Hole observations.
The Bolocam survey encompasses the entire 850 $\mu$m 8 mJy JCMT SCUBA and 1.2 mm IRAM MAMBO surveys to a comparable depth under the assumption of a model SED for a galaxy at $z=2.4$, with relative rms of 1\,:\,0.9\,:\,0.6$-$1.4, respectively. We have reduced the resulting data set using a custom IDL-based software pipeline, in which correlated atmospheric and instrument noise is rigorously removed via a PCA sky subtraction technique. We detect 17 galaxies at a significance of $\ge3\ \sigma$, where the map rms is $\sim$1.4 mJy. A series of simulations has allowed us to verify the robustness of the galaxy candidates. Extensive jackknife and pointing jitter tests reveal that the sources detected in this survey have a small characteristic length scale (point sources) and receive contributions from the full ensemble of observations, strongly indicating that the galaxy candidates are real. Simulations of the observations using both synthetic maps and observational data indicate that six false detections should be expected. Comparing our detections to those of other surveys (including SCUBA 850 $\mu$m, MAMBO 1.2 mm, and VLA radio observations) indicates that the majority of Bolocam sources have coincident detections in at least two other wavebands; we conclude that a majority of the Bolocam detections are real. Six of the detections are galaxies previously detected by the SCUBA 8 mJy survey. Of the remaining 11 Bolocam detections, 9 lie outside the SCUBA survey region, and we cannot search for counterparts for them. Seven of the 17 Bolocam detections have been detected by the MAMBO 1.2 mm survey, with 6 of the remaining 10 sources lying outside the MAMBO good coverage region. While both the SCUBA and MAMBO surveys detect most of the Bolocam sources in the overlap region, neither Bolocam nor SCUBA/MAMBO detects the majority of the remaining SCUBA and MAMBO sources.
A total of 65\% of the 17 Bolocam source candidates have at least one radio coincidence, although the accidental radio detection rate is high (34\%) owing to the size (31\arcsec\ FWHM) of the Bolocam beam. Furthermore, we statistically detect the aggregate average of the SCUBA and MAMBO sources below our 3 $\sigma$ detection threshold at significances of 3.3 and 4.0 $\sigma$, respectively. Further simulations enabled us to estimate the completeness and bias of this survey, which were subsequently used with the false detection rate to fit a simple power-law model of the underlying parent distribution to match the observed number count distribution. This model constrains the submillimeter counts over the flux density range $S_{1.1\mathrm{mm}}$ = 1$-$7 mJy. While the validity of this model is significantly limited by both the effects of confusion noise and the flux density cutoffs assumed for the underlying number count distribution, we find this modeled number count distribution to be consistent with previously published submillimeter galaxy number counts. Integrating the maximum likelihood differential number counts distribution between 1 and 6.8 mJy (the maximum observed flux density) yields 276 mJy in the map, or $\sim$13\% of the FIRAS integrated far-infrared background light. If the Bolocam galaxy candidates lie at redshifts $z>1$, then their inferred luminosities are $L = (1.0 - 1.6) \times 10^{13}$ $\mathrm{L}_\odot$ (assuming a dust temperature of 40 K). Further assuming that they are powered by star formation, large SFRs of $480-810$ $\mathrm{M_\odot}\mathrm{yr}^{-1}$ are implied. Multiwavelength follow-up observations of the Lockman Hole field are underway with {\it Spitzer} and SHARC-II in order to constrain the temperature/redshift distributions of these sources. \acknowledgments With alacrity, we acknowledge the support of the CSO director and staff, the support of Kathy Deniston, and helpful conversations with Andrew Blain, Steven Eales, and Min Yun. 
This work was supported in part by NSF grants AST-0098737, AST-9980846, and AST-0206158 and PPARC grants PPA/Y/S/2000/00101 and PPA/G/O/2002/00015. D. H. acknowledges the support of a PPARC Ph.D. Fellowship, S. R. G. acknowledges Caltech for the R. A. Millikan Fellowship, and G. T. L. acknowledges NASA for GSRP Fellowship NGT5-50384. \clearpage
\section{Introduction} In this paper, we consider boundedness properties of weak (sub)solutions to the following Dirichlet problem: \begin{eqnarray}\label{dp} \left\{\begin{array}{rcll} -\Div\left(Q\nabla u\right)&=&fv&\textrm{for }x\in\Omega\\ u&=&0&\textrm{for }x\in\partial\Omega \end{array}\right. \end{eqnarray} Throughout this paper, $\Omega$ is a bounded domain (i.e., an open and connected subset) of $\rn$ with $n\geq 3$, $Q=Q(x)$ is a non-negative definite, symmetric, measurable matrix with $Q\in L^{1}_{loc}(\Omega)$, $v$ is a weight (i.e., a non-negative, measurable function) on $\Omega$ such that $v\in L^1(\Omega)$, and the data function $f$ is in $ L^{1}_{loc}(\Omega)$. When $Q$ is a uniformly elliptic matrix and $v(x)=1$, it is a classical result (See Maz$^\prime$ya~\cite{MR0131054,MR0259329}, Stampacchia~\cite{MR192177} and Trudinger~\cite[Theorem~4.1]{MR369884},~\cite[Theorem~8.16]{GT}) that there is a constant $C>0$ such that if $f\in L^q(\Omega)$, then $$\|u\|_{L^\infty(\Omega)} \leq C\|f\|_{L^q(\Omega)}$$ for any non-negative weak subsolution $u\in H^1(\Omega)$ of \eqref{dp} provided $q>\frac{n}{2}$. Moreover, a counter-example shows that this bound is sharp even for the Laplacian and we cannot take $q=\frac{n}{2}$. The standard proof of this result uses the classical Sobolev inequality, $$\|\psi\|_{L^{\frac{2n}{n-2}}(\Omega)} \leq C\|\nabla \psi\|_{L^2(\Omega)},$$ valid for any $\psi\in H^1_0(\Omega)$, combined with Moser iteration. The restriction $q>\frac{n}{2}$ is naturally connected to the classical Sobolev gain factor $\sigma = \frac{n}{n-2}=\left(\frac{n}{2}\right)'$. The goal of this paper is to generalize this result. First we show that this estimate can be improved by replacing the space $L^q(\Omega)$ by an Orlicz space $L^A(\Omega)$ that lies strictly between $L^{\frac{n}{2}}(\Omega)$ and $L^q(\Omega)$ for $q>\frac{n}{2}$. For brevity, we will defer many definitions to Section~\ref{section:prelim} below. 
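To indicate where the restriction $q>\frac{n}{2}$ enters, we sketch the exponent bookkeeping in a single (heuristic) step of the classical iteration; this is only a sketch of the standard argument, not part of the proofs below.

```latex
% Heuristic: testing the equation against a power of the subsolution and
% applying H\"older's inequality with exponents $q$ and $q'$ gives, schematically,
\[
  \int_\Omega |f|\,u^{2\beta-1}\,dx
    \;\leq\; \|f\|_{L^q(\Omega)}\,\big\|u\big\|_{L^{(2\beta-1)q'}(\Omega)}^{2\beta-1},
\]
% while the Sobolev inequality improves integrability by the gain factor
% $\sigma=\frac{n}{n-2}$ at each step.  The resulting geometric growth of the
% exponents dominates the loss from H\"older precisely when
% $q'<\sigma=\left(\frac{n}{2}\right)'$, which by conjugacy is exactly the
% condition $q>\frac{n}{2}$.
```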
\begin{thm} \label{thm:classical} Let $Q$ be a uniformly elliptic matrix, and let $f\in L^A(\Omega)$, where $A(t)=t^{\frac{n}{2}}\log(e+t)^q$, $q>\frac{n}{2}$. Then there exists a constant $C=C(n,q, Q)$ such that, given any non-negative weak subsolution $u$ of \begin{align*} \left\{\begin{array}{rcll} -\Div\left(Q\nabla u\right)&=&f&\textrm{for }x\in\Omega,\\ u&=&0&\textrm{for }x\in\partial\Omega, \end{array}\right. \end{align*} we have the estimate \[ \|u\|_{L^\infty(\Omega)} \leq C\|f\|_{L^A(\Omega)}. \] \end{thm} \begin{rem} After completing this paper we learned that a somewhat more general version of Theorem~\ref{thm:classical} was proved by Cianchi~\cite[Theorem~5]{MR1721824} using very different methods. \end{rem} \medskip We will prove Theorem~\ref{thm:classical} as a special case of a more general result for solutions of~\eqref{dp} that holds for a much larger class of matrices $Q$. We can allow $Q$ to be both degenerate and singular, but must impose some restrictions on the largest and smallest eigenvalues. We encode these restrictions in two assumptions on the integrability of the largest eigenvalue and on the existence of an $L^2$ Sobolev inequality with gain. We state them together as a general hypothesis. \begin{hyp} \label{sobolev} Given the matrix $Q$ and the weight $v\in L^1(\Omega)$, assume that for some constant $k>0$, \[ |Q(x)|_{op} = \sup\{ |Q(x)\xi| : \xi \in \rn, |\xi|= 1\} \leq kv(x) \; \text{a.e.} \] Moreover, assume that there exist constants $\sigma=\sigma(n,Q, v,\Omega)>1,~C_0\geq 1$ such that for every $\psi\in Lip_0(\Omega)$ \begin{equation}\label{sob} \left(\int_\Omega | \psi(x) |^{2\sigma}~v(x)dx\right)^{\frac{1}{2\sigma}} \leq C_0\left(\int_\Omega \left|\sqrt{Q(x)}\nabla \psi(x)\right|^2~dx\right)^{\frac12}. \end{equation} \end{hyp} The first assumption, that $|Q|_{op}\leq kv$, in Hypothesis~\ref{sobolev} is necessary to prove many of the necessary properties of weak derivatives in the corresponding degenerate Sobolev space. 
The second assumption, that inequality~\eqref{sob} holds, in Hypothesis~\ref{sobolev} reduces to the classical Sobolev inequality if $Q$ is a uniformly elliptic matrix in $\Omega$ and $\sigma=\frac{n}{n-2}=(\frac{n}{2})'$. It allows us to perform the necessary De Giorgi iteration. These assumptions hold in several important special cases. If $v$ satisfies the Muckenhoupt $A_2$ condition, \[ [v]_{A_2} = \sup_B \frac{1}{|B|}\int_B v(x)\,dx \frac{1}{|B|}\int_B v(x)^{-1}\,dx < \infty, \] where the supremum is taken over all balls $B$ in $\rn$, and if $Q$ satisfies the degenerate ellipticity condition \[ \lambda v(x)|\xi|^2 \leq \langle Q(x)\xi,\xi\rangle \leq \Lambda v(x)|\xi|^2, \] where $0<\lambda\leq \Lambda<\infty$, then $|Q|_{op}\leq \Lambda v$ and~\eqref{sob} holds. (See~\cite{MR643158}.) More generally, suppose that $u$ and $v$ are a pair of weights such that $u(x)\leq v(x)$ a.e., $v$ satisfies a doubling condition, $u\in A_2$, and there exists $\sigma>1$ such that given any balls $B_1\subset B_2 \subset \Omega$, \[ \frac{r(B_1)}{r(B_2)} \bigg(\frac{v(B_1)}{v(B_2)}\bigg)^{\frac{1}{2\sigma}} \leq C \bigg(\frac{u(B_1)}{u(B_2)}\bigg)^{\frac{1}{2}}, \] and $Q$ satisfies the degenerate ellipticity condition \[ u(x)|\xi|^2 \leq \langle Q(x)\xi,\xi\rangle \leq v(x)|\xi|^2, \] then $|Q|_{op}\leq v$ and we have the Sobolev inequality \[ \left(\int_\Omega | \psi(x) |^{2\sigma}~v(x)dx\right)^{\frac{1}{2\sigma}} \leq C_0\left(\int_\Omega \left|\nabla \psi(x)\right|^2~u(x)dx\right)^{\frac12}, \] so again ~\eqref{sob} holds. (See~\cite{MR805809}.) 
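In the first special case above, the passage from the weighted Sobolev inequality for $A_2$ weights to~\eqref{sob} is a one-line computation, which we record for the reader's convenience:

```latex
% From the lower ellipticity bound $\lambda v(x)|\xi|^2 \leq \langle Q(x)\xi,\xi\rangle$
% applied to $\xi = \nabla\psi(x)$, for every $\psi \in Lip_0(\Omega)$,
\[
  \left(\int_\Omega |\psi|^{2\sigma}\,v\,dx\right)^{\frac{1}{2\sigma}}
  \;\leq\; C\left(\int_\Omega |\nabla\psi|^2\,v\,dx\right)^{\frac12}
  \;\leq\; \frac{C}{\sqrt{\lambda}}
     \left(\int_\Omega \big|\sqrt{Q}\,\nabla\psi\big|^2\,dx\right)^{\frac12},
\]
% where the first inequality is the weighted Sobolev inequality of~\cite{MR643158}.
```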
\begin{rem} In~\cite{CRR2}, the authors and Rosta proved that when $v=1$, with minor additional hypotheses the global Sobolev inequality \eqref{sob} follows from a weaker, local Sobolev inequality, \[ \left(\frac{1}{|B|}\int_B |\psi(x)|^{2\sigma}\,dx\right)^{\frac{1}{2\sigma}} \leq C \bigg[ \frac{r(B)}{|B|}\int_B |\sqrt{Q}\nabla \psi(x)|^2\,dx + \frac{1}{|B|}\int_B |\psi(x)|^2\,dx \bigg]^{\frac{1}{2}}, \] that holds for all (sufficiently small) balls $B\subset \Omega$. \end{rem} We can now state our main result, which is a generalization of Theorem~\ref{thm:classical} to degenerate elliptic operators. Again, for precise definitions see Section~\ref{section:prelim}. \begin{thm}\label{main0} Given a weight $v$ and the non-negative definite, symmetric matrix $Q$, suppose that Hypothesis~\ref{sobolev} holds for some $\sigma>1$. Let $A(t) = t^{\sigma'}\log(e+t)^q$ where $q>\sigma'$. If $f\in L^A(v;\Omega)$, then any non-negative weak subsolution ${\bf u}=(u,\nabla u)\in QH^1_0(v;\Omega)$ of \eqref{dp} satisfies \begin{equation}\label{b1} \|u\|_{L^\infty(v;\Omega)} \leq C\|f\|_{L^A(v;\Omega)}, \end{equation} where $C$ is independent of both ${\bf u}$ and $f$. \end{thm} We originally conjectured that the exponent $q$ in Theorem~\ref{main0} is sharp in general, but we were not able to prove this or find a counter-example. We then learned that Cianchi~\cite[Theorem~5]{MR1721824}, as a consequence of a more general result, showed that in the classical setting when $Q$ is uniformly elliptic and $\sigma=(\frac{n}{2})'$, we must have that $q>\frac{n}{2}-1$. We round out his result by giving a simple counter-example proving that it is sharp for the Laplacian. \begin{exe} \label{example:not-sharp-but-close} Let $n\geq 3$, and let $\Omega=B(0,1)$. 
Then there exists a function $f\in L^A(\Omega)$, where $A(t)=t^{\frac{n}{2}}\log(e+t)^q$, $q<\frac{n}{2}-1$, such that the non-negative weak solution of the Poisson equation \begin{align*} \left\{\begin{array}{rcll} -\Delta u&=&f&\textrm{for }x\in\Omega,\\ u&=&0&\textrm{for }x\in\partial\Omega, \end{array}\right. \end{align*} is unbounded. \end{exe} \begin{rem} We conjecture that the sharp exponent in Theorem~\ref{main0} is $q>\sigma'-1$. However, the bound $q>\sigma'$ appears to be intrinsic to our proof, so either our proof needs to be refined or another approach is needed. We note that the proof in~\cite{MR1721824} relies on re-arrangement estimates and so does not readily extend to the case of degenerate operators. \end{rem} \medskip Our second main result shows that inequality~\eqref{b1} can be sharpened so that the right-hand side only depends on the logarithm of the $L^A$ norm. \begin{thm}\label{main1} Given a weight $v$ and a non-negative definite, symmetric matrix $Q$, suppose that Hypothesis~\ref{sobolev} holds for some $\sigma>1$. Let $A(t) = t^{\sigma'}\log(e+t)^q$, where $q>\sigma'$. If $f\in L^A(v;\Omega)$, then any non-negative weak subsolution ${\bf u}=(u,\nabla u)\in QH^1_0(v;\Omega)$ of \eqref{dp} satisfies \begin{equation} \label{eqn:fixed} \|u\|_{L^\infty(v;\Omega)} \leq C\|f\|_{L^{\sigma'}(v;\Omega)} \left(1+\log\left(1+\frac{\|f\|_{L^A(v;\Omega)}} {\|f\|_{L^{\sigma'}(v;\Omega)}}\right)\right), \end{equation} where $C$ is independent of both ${\bf u}$ and $f$. \end{thm} Theorem~\ref{main1} generalizes the main result of Xu~\cite{X}, but we note that there is a mistake in the statement of his main result. Working in the same setting as Theorem~\ref{thm:classical}, he claims to show that \begin{equation} \label{eqn:xu} \|u\|_{L^\infty(\Omega)} \leq C\|f\|_{L^{\frac{n}{2}}(\Omega)} \left(1+\log\left(1+{\|f\|_{L^q(\Omega)}} \right)\right), \end{equation} where $q>\frac{n}{2}$. 
However, a close examination of his proof shows that he only proves this when $\|f\|_{L^{\frac{n}{2}}(\Omega)}\geq 1$, and in fact what he proves is the analog of Theorem~\ref{main1}. It is straightforward to see that \eqref{eqn:xu} cannot hold if $\|f\|_{L^{\frac{n}{2}}(\Omega)}<1$: if it did, we could fix $f$ and the corresponding solution $u$, apply the inequality to $f/N$ and $u/N$ for $N>1$ large, and then let $N\rightarrow \infty$ to conclude that $\|u\|_{L^\infty(\Omega)} \leq C\|f\|_{L^{\frac{n}{2}}(\Omega)}$, which is false in general. \begin{rem} The ratio ${\|f\|_{L^A(v;\Omega)}}/ {\|f\|_{L^{\sigma'}(v;\Omega)}}$ in Theorem~\ref{main1} measures how much bigger the Orlicz norm is than the associated Lebesgue space norm. It is similar in spirit, though not in detail, to the ``entropy bump'' conditions introduced in the study of weighted norm inequalities in harmonic analysis~\cite{MR3357767, MR3539383}. \end{rem} \medskip Our two main results are established via De Giorgi iteration on the level sets. De Giorgi's original arguments are in \cite{DeGiorgi} but more helpful descriptions are found in \cite{MR1912731} and in~\cite{Korobenko:2016ue}, where De Giorgi iteration is applied in an infinitely degenerate elliptic regime. We were unable to adapt Moser iteration to work in the context of Orlicz norms, and it remains an open question whether such an approach is possible in this setting. \medskip The remainder of the paper is organized as follows. In Section~\ref{section:prelim} we gather some preliminary results. We give a definition of Young functions and the associated Orlicz spaces, and record some useful properties. We then define weak solutions to the Dirichlet problem. This definition has to include the possibility that the matrix $Q$ can be both degenerate and singular, and we give it in terms of a degenerate Sobolev space, building upon results in~\cite{CRR1} and elsewhere.
We prove a number of properties of weak derivatives in this setting; we believe these results should be useful tools for other problems. We also prove that bounded, non-negative subsolutions of~\eqref{dp} must satisfy an exponential integrability condition. This result is a key lemma for the proof of Theorem~\ref{main1} and is modeled on a similar result due to Xu~\cite{X} in the classical setting. For completeness we include the details of the proof. In Section~\ref{section:main0} we prove Theorem~\ref{main0}; as noted above, the proof uses a version of De Giorgi iteration adapted to the scale of Orlicz spaces. This iteration argument was obtained by carefully adapting an argument due to Korobenko {\em et al.}~\cite[Section~4.2]{Korobenko:2016ue}. In Section~\ref{section:main1} we prove Theorem~\ref{main1}; our proof is a generalization of the argument in~\cite{X} and requires us to deal with a number of technical obstacles. Finally, in Section~\ref{section:counter-example} we construct Example~\ref{example:not-sharp-but-close}. \section{Preliminaries} \label{section:prelim} In this section we gather some preliminary definitions and results. We begin with some notation. The constant $n$ will always denote the dimension of the underlying space $\rn$. By $C$, $c$, etc. we will mean a constant that may change from appearance to appearance, but whose value depends only on the underlying parameters. If we want to specify this dependence, we will write, for instance, $C(n,p)$, etc. If we write $A\lesssim B$, we mean that there exists a constant $c$ such that $A\leq cB$. If $A\lesssim B$ and $B\lesssim A$, we write $A\approx B$. A weight $v$ will always be a non-negative, measurable function such that $v\in L^1(\Omega)$. Given a set $E\subset \Omega$, $v(E)=\int_E v(x)\,dx$.
Given a weight $v$, $L^p(v;\Omega)$ is the collection of all those measurable functions $g:\Omega \rightarrow \mathbb{R}$ for which $$\|g\|_p = \|g\|_{L^p(v;\Omega)} = \displaystyle\left(\int_\Omega |g(x)|^p~v(x)dx\right)^{1/p} <\infty.$$ \subsection*{Orlicz spaces} Our main hypothesis on the data function $f$ in \eqref{dp} is that it belongs to the Orlicz space $L^A(v;\Omega)$. Here we gather some essential results about these spaces, but we assume the reader has some familiarity with them. For complete information, we refer to \cite{KR,RR}. For a briefer summary, see \cite[Chapter~5]{CMP}. By a Young function we mean a function $A : [0,\infty)\rightarrow [0,\infty)$ that is continuous, convex, and strictly increasing, with $A(0)=0$ and $\frac{A(t)}{t}\rightarrow \infty$ as $t\rightarrow \infty$. Given a Young function $A$, define $L^A(v;\Omega)$ to be the Banach space of measurable functions $h:\Omega\rightarrow \mathbb{R}$ for which the Luxemburg norm $$\|h\|_A = \|h\|_{L^A(v;\Omega)} = \inf\bigg\{\lambda>0~:~\int_\Omega A\left(\frac{|h(x)|}{\lambda}\right)~v(x)dx \leq 1\bigg\}$$ is finite. Given Young functions $A,\,B$ we can compare the associated norms by appealing to a pointwise estimate. We say that $A(t)\preceq B(t)$ if there is a $t_0>0$ and a constant $c\geq 1$ depending only on $A,\,B$ so that $A(t) \leq B(ct)$ for $t\geq t_0$. For a proof of the following result, see~\cite[Theorem~13.3]{KR} or \cite[Section~5.1]{RR}. \begin{lem}\label{normcompare} Given Young functions $A,\,B$, if $A\preceq B$, then there exists a constant $C=C(A,B,v(\Omega))$ such that for every $f\in L^B(v;\Omega)$, $$\|f\|_{L^A(v;\Omega)} \leq C\|f\|_{L^B(v;\Omega)}.$$ \end{lem} Given a Young function $A$, we define the conjugate Young function, $\bar{A}$, via the pointwise formula \[ \bar{A}(t) = \sup\{ st-A(s) : s> 0 \}. \] The pair $A,\,\bar{A}$ satisfies a version of H\"older's inequality in the scale of Orlicz spaces.
If $f\in L^A(v;\Omega)$ and $g\in L^{\bar{A}}(v;\Omega)$, then $fg\in L^1(v;\Omega)$ and \begin{equation} \label{holders} \int_\Omega |f(x)g(x)|v(x)\,dx \leq 2\|f\|_A\|g\|_{\bar{A}}. \end{equation} \medskip In our main results we consider Young functions of the form \begin{equation} \label{eqn:log-bump} B(t) = t^p\log(e+t)^q, \end{equation} where $1<p,\,q<\infty$. The conjugate function and its inverse associated with these Young functions are well known: see, for instance, \cite{CMP}. We have that \begin{gather} \label{YoungB2}\bar{B}(t) \approx \disp\frac{t^{p'}}{\log(e+t)^{q(p'-1)}}, \\ \label{YoungB-1}\bar{B}^{-1}(t) \approx t^{1/p'}\log(e+t)^{\frac{q}{p}}, \end{gather} where the implicit constants depend on $p,\,q$. As a consequence of Lemma~\ref{normcompare} we have the following estimate which we will need below; the details are straightforward and are omitted. \begin{lem}\label{SCALE} Let $1\leq p_1\leq p_2<\infty$, $1\leq q_1\leq q_2<\infty$ and define \[ A(t) = t^{p_1}\log(e+t)^{q_1}, \qquad B(t) = t^{p_2}\log(e+t)^{q_2}. \] Then, given $f\in L^{B}(v;\Omega)$, $$\|f\|_{L^{p_1}(v,\Omega)} \lesssim \|f\|_{L^{A}(v;\Omega)} \lesssim \|f\|_{L^{p_2}(v;\Omega)} \lesssim \|f\|_{L^{B}(v;\Omega)}.$$ The implicit constants depend on $p_i$ and $q_i$, $i=1,\,2$, and $v(\Omega)$. \end{lem} We conclude this section with an estimate for the $L^{\bar{B}}(v;\Omega)$ norm of an indicator function $\mathbbm{1}_S$ for $S\subset\Omega$; this quantity plays an essential role in our proofs of Theorems \ref{main0} and \ref{main1}. This computation is well known, but to make clear the dependence on the constants we include its short proof. \begin{lem}\label{indicators} Let $B$ be the Young function defined by~\eqref{eqn:log-bump}. Then for any $S\subset\Omega$ with $v(S)>0$, \begin{equation}\label{indicator} \|\mathbbm{1}_S\|_{L^{\bar{B}}(v;\Omega)} \leq \disp\frac{cv(S)^{\frac{1}{p'}}} {\log(1+v(S)^{-1})^{\frac{q}{p}}}, \end{equation} where $c=c(p,q)>0$.
\end{lem} \begin{proof} Given $B$, its conjugate $\bar{B}$ satisfies~\eqref{YoungB2}. Set $$F=\left\{\lambda>0~:~ \int_\Omega \bar{B}\left(\frac{\mathbbm{1}_S(x)}{\lambda}\right)v(x)\,dx \leq 1\right\}$$ and notice that $F\neq \emptyset$. For each $\lambda \in F$, \[ v(S)~\bar{B}\left(\frac{1}{\lambda}\right) = \int_\Omega \bar{B} \left(\frac{\mathbbm{1}_S(x)}{\lambda}\right)v(x)\,dx \leq 1. \] Since $\bar{B}$ is invertible and increasing, $$\lambda \geq \left[\bar{B}^{-1}\left(\frac{1}{v(S)}\right)\right]^{-1}=m_0>0.$$ Again by the invertibility of $\bar{B}$, \[ \int_\Omega \bar{B}\left(\frac{\mathbbm{1}_S(x)}{m_0}\right)v(x)\,dx = v(S)\bar{B}(m_0^{-1}) = 1. \] Hence, $m_0\in F$, and it follows that $\|\mathbbm{1}_S\|_{L^{\bar{B}}(v;\Omega)} = m_0$. By~\eqref{YoungB-1}, $$m_0^{-1}=\bar{B}^{-1}\left(\frac{1}{v(S)}\right) \geq c(p,q)v(S)^{-\frac{1}{p'}}\log(e+v(S)^{-1})^{\frac{q}{p}},$$ and \eqref{indicator} follows. \end{proof} \subsection*{Degenerate Sobolev spaces and weak solutions} We now give a precise definition of weak (sub)solutions to the Dirichlet problem~\eqref{dp}. This question has been explored in a number of papers by ourselves and others: see~\cite{CW,CRR1,CRR2,GT,MR,MRW,R,SW2}. Here we sketch the relevant details. Given a non-negative definite, symmetric and measurable matrix function $Q$ on $\Omega$ and a weight $v \in L^1_{loc}(\Omega)$, the solution space for the Dirichlet problem is the matrix weighted Sobolev space $QH^1_0(v;\Omega)$.
This space is defined as the abstract completion (in terms of Cauchy sequences) of the space $Lip_0(\Omega)$ (i.e., Lipschitz functions with compact support in $\Omega$) with respect to the norm $$\|\psi\|_{QH^1_0(v;\Omega)} = \|\psi\|_{L^2(v;\Omega)} + \|\nabla \psi\|_{L^2_Q(\Omega)},$$ where $L^2_Q(\Omega)$ is the Banach space of $\mathbb{R}^n$ vector-valued functions ${\bf g}$ on $\Omega$ that satisfy $$\|{\bf g}\|_{L^2_Q(\Omega)} = \bigg(\int_\Omega |\sqrt{Q(x)}{\bf g}(x)|^2~dx\bigg)^{\frac{1}{2}}<\infty.$$ This norm is well defined for $\psi\in Lip_0(\Omega)$ provided $|Q|_{op}\in L_{loc}^1(\Omega)$; in particular, $QH^1_0(v;\Omega)$ is well defined if the first assumption in Hypothesis~\ref{sobolev} holds. With this definition, the Sobolev space $QH^1_0(v;\Omega)$ is a collection of equivalence classes of Cauchy sequences of $Lip_0(\Omega)$ functions. However, the spaces $L^2(v;\Omega)$ and $L^2_Q(\Omega)$ are complete: for a proof that $L^2_Q(\Omega)$ is complete, see~\cite{SW2} or~\cite{CRR1}, where it was proved that $L^p_Q(\Omega)$ is complete for $1\leq p<\infty$. Therefore, to each equivalence class $[\{\psi_j\}]$ in $QH^1_0(v;\Omega)$ we can associate a unique pair ${\bf u}=(u, {\bf g}) \in L^2(v;\Omega)\times L^2_Q(\Omega)$ whose norm is given by \begin{align*} \|{\bf u}\|_{QH^1_0(v;\Omega)} &= \|u\|_{L^2(v;\Omega)} + \|{\bf g}\|_{L^2_Q(\Omega)}\\ &=\displaystyle\lim_{j\rightarrow\infty}\left(\|\psi_j\|_{L^2(v;\Omega)} + \|\nabla \psi_j\|_{L^2_Q(\Omega)}\right). \end{align*} Conversely, given a pair $(u,{\bf g})$, we will say that it is in $QH^1_0(v;\Omega)$ if there exists a sequence $\{u_j\}_j\subset Lip_0(\Omega)$ such that $(u_j, \nabla u_j)$ converges to $(u,{\bf g})$ in $L^2(v;\Omega)\times L^2_Q(\Omega)$. Hereafter, we will denote ${\bf g}$ by $\nabla u$ since ${\bf g}$ plays the role of a weak gradient of $u$.
However, while we adopt this formal notation we want to stress that the function ${\bf g}$ need not be the weak gradient of $u$ in the sense of classical Sobolev spaces. For further details, \cite{CRR1} contains the construction of $QH^{1,p}(v;\Omega)$ for $p\geq 1$. Additionally, unweighted constructions of $QH^{1,p}_0(1;\Omega)$ are found in~\cite{CRR2,MRW} for $p\geq 1$ and in~\cite{MR,R} for $p=2$. We can extend the second assumption in Hypothesis \ref{sobolev} to functions in $QH^1_0(v;\Omega)$; this follows from the density of $Lip_0(\Omega)$ functions and we omit the proof. \begin{lem}\label{sobolev2} Suppose the Sobolev inequality of Hypothesis~\ref{sobolev} holds. Then, \begin{equation}\label{sob2} \left(\int_\Omega | w(x) |^{2\sigma}v(x)\,dx \right)^{\frac{1}{2\sigma}} \leq C_0\left(\int_\Omega \left|\sqrt{Q(x)}\nabla w\right|^2\,dx \right)^{\frac{1}{2}} \end{equation} for every ${\bf w}=(w,\nabla w)\in QH^1_0(v;\Omega)$, where $C_0$ is the same constant as in Hypothesis~\ref{sobolev}. \end{lem} \medskip We can now define weak solutions to the Dirichlet problem. \begin{defn}\label{weaksol} A pair ${\bf u} = (u,\nabla u)\in QH^1_0(v;\Omega)$ is said to be a weak solution of the Dirichlet problem \eqref{dp} if $$\int_\Omega \nabla \psi(x) \cdot Q(x)\nabla u (x) \,dx = \int_\Omega f(x)\psi(x)v(x)\,dx$$ for every $\psi\in Lip_0(\Omega)$. The pair is said to be a non-negative weak subsolution if $u(x)\geq 0$ $v$-a.e. and $$\int_\Omega \nabla \psi(x) \cdot Q(x)\nabla u (x) \,dx \leq \int_\Omega f(x)\psi(x)v(x)\,dx$$ for every non-negative $\psi\in Lip_0(\Omega)$. \end{defn} Note that if ${\bf h} = (h,\nabla h)\in QH^1_0(v;\Omega)$ with $h(x)\geq 0$ $v$-a.e., then by a standard limiting argument we may use $h$ as our test function in Definition~\ref{weaksol}. \begin{rem} The existence of weak solutions to \eqref{dp} when $v=1$ was studied in \cite{GT,MR,R}, and when $v(x)=|Q(x)|_{op}$ in~\cite{CRR1,CRR2}.
\end{rem} \subsection*{Properties of weak gradients} We now develop some useful properties of functions in the degenerate Sobolev space $QH^1_0(v;\Omega)$. All of these properties are well known in the classical case: see, for instance,~\cite{GT}. In the degenerate case, we stress that the first assumption in Hypothesis~\ref{sobolev} is critical in proving these results, and throughout this subsection we assume that $v\in L^1(\Omega)$ and $|Q|_{op}\leq kv$ a.e. Our first result shows that weak gradients are zero almost everywhere on sets of $v$-measure zero. \begin{lem}\label{meas0} Let ${\bf u}=(u,\nabla u)\in QH^1_0(v;\Omega)$ and $w\in Lip_0(\Omega)$. Then, given any set $E$ of $v$-measure zero, we have that: \begin{enumerate} \item $\|\nabla w\|_{L^2_Q(E)}=0$; \item $\|\nabla u\|_{L^2_Q(E)} = 0$; \item $\sqrt{Q(x)}\nabla u(x)=0=\sqrt{Q(x)}\nabla w(x)$ a.e. $x\in E$. \end{enumerate} \end{lem} \begin{proof} If $w\in Lip_0(\Omega)$, then $\nabla w$ is defined a.e. in $\Omega$ by the Rademacher-Stepanov theorem and is in $L^\infty$. Therefore, for a.e. $x\in \Omega$, \[|\sqrt{Q(x)}\nabla w(x)| \leq |Q(x)|_{op}^{\frac12} |\nabla w(x)| \leq k^{\frac12} v(x)^{\frac12} |\nabla w(x)|;\] hence, \[\|\nabla w\|_{L^2_Q(E)}^2 \leq k\|\nabla w\|_\infty^2 v(E) = 0,\] which proves (1). Let ${\bf u} \in QH^1_0(v;\Omega)$; then there exists a sequence $\{w_j\}_j\subset Lip_0(\Omega)$ such that $\nabla w_j \rightarrow \nabla u$ in $L^2_Q(\Omega)$. Then by the previous argument, \[\|\nabla u\|_{L^2_Q(E)} = \lim_{j\rightarrow\infty}\|\nabla w_j\|_{L^2_Q(E)} = 0,\] and so (2) holds. Finally, (3) follows immediately from (1) and (2). \end{proof} Our second result shows that non-negative truncations of functions in $QH^1_0(v;\Omega)$ are again in this space. \begin{lem} \label{lemma:main1-lemma} Let ${\bf u}=(u,\nabla u)\in QH^1_0(v;\Omega)$ and fix $r>0$. If $S(r)=\{ x\in \Omega : u(x)>r \}$, then $((u-r)_+, \mathbbm{1}_{S(r)}\nabla u) \in QH^1_0(v;\Omega)$.
\end{lem} \begin{proof} By the definition of $QH^1_0(v;\Omega)$ there exists a sequence $\{u_j\}_j$ in $Lip_0(\Omega)$ such that $u_j\rightarrow u$ in $L^2(v;\Omega)$ and $\nabla u_j \rightarrow \nabla u$ in $L^2_Q(\Omega)$. If we pass to a subsequence, we may assume that $u_j\rightarrow u$ pointwise $v$-a.e. We will first prove that $(u_j-r)_+\rightarrow (u-r)_+$ in $L^2(v;\Omega)$. Define $f_j= |(u_j-r)_+ - (u-r)_+|^2$; then $f_j\rightarrow 0$ $v$-a.e. We will show that $f_j$ converges to $0$ in $L^1(v;\Omega)$; this follows from the generalized dominated convergence theorem~\cite[p.~59]{MR1681462} if we show that there exist non-negative functions $g_j,\,g \in L^1(v; \Omega)$ such that $f_j\leq g_j$, $g_j\rightarrow g$ pointwise $v$-a.e., and $\|g_j\|_{L^1(v;\Omega)}\rightarrow \|g\|_{L^1(v;\Omega)}$ as $j\rightarrow \infty$. But we have that \[ f_j \leq 2(u_j-r)^2_+ +2(u-r)_+^2 \leq 4(|u_j|^2+r^2)+4(|u|^2+r^2) = g_j. \] Moreover, $g_j$ converges pointwise $v$-a.e. to $g=8(|u|^2+r^2)$, and $g,\,g_j\in L^1(v;\Omega)$ since $v(\Omega)<\infty$. Finally, since $u_j\rightarrow u$ in $L^2(v;\Omega)$ we get that $\|g_j\|_{L^1(v;\Omega)}\rightarrow \|g\|_{L^1(v;\Omega)}$. \smallskip Now define $S_j=\{ x\in \Omega : u_j (x)>r \}$; then $\mathbbm{1}_{S_j}\rightarrow \mathbbm{1}_{S}$ $v$-a.e. Moreover, we have that $\nabla(u_j-r)_+ = \nabla u_j \mathbbm{1}_{S_j}$ a.e. (see~\cite[Lemma~7.6]{GT}) and so $v$-a.e. By passing to another subsequence, we assume that $\nabla u_j \mathbbm{1}_{S_j}\rightarrow \nabla u \mathbbm{1}_{S}$ pointwise $v$-a.e. We claim that they converge in $L^2_Q(\Omega)$ as well. If this is the case, then we have shown that $((u_j-r)_+, \nabla(u_j-r)_+)$ is Cauchy in $QH^1_0(v;\Omega)$, and the desired conclusion follows at once.
To prove $L^2_Q(\Omega)$ convergence, note that \[ \|\nabla u_j \mathbbm{1}_{S_j}- \nabla u \mathbbm{1}_{S}\|_{L^2_Q(\Omega)} \leq \|\nabla u_j \mathbbm{1}_{S_j}- \nabla u \mathbbm{1}_{S_j}\|_{L^2_Q(\Omega)} + \|\sqrt{Q}\nabla u(\mathbbm{1}_{S_j}-\mathbbm{1}_{S})\|_{L^2(\Omega)}. \] The first term on the right-hand side is bounded by $\|\nabla u_j - \nabla u\|_{L^2_Q(\Omega)}$, which goes to $0$ as $j\rightarrow \infty$. To estimate the second term, let $E$ be the set of $x\in \Omega$ where $\mathbbm{1}_{S_j}(x)$ does not converge to $\mathbbm{1}_{S}(x)$. Then $v(E)=0$, and so by Lemma~\ref{meas0}, \[ \int_E |\sqrt{Q}\nabla u (\mathbbm{1}_{S_j}-\mathbbm{1}_{S})|^2\,dx \leq \int_E |\sqrt{Q}\nabla u|^2 \,dx = 0. \] Since $|\sqrt{Q}\nabla u (\mathbbm{1}_{S_j}-\mathbbm{1}_{S})|\leq |\sqrt{Q}\nabla u|\in L^2(\Omega)$, by the dominated convergence theorem we have that as $j\rightarrow \infty$, \[ \|\sqrt{Q}\nabla u(\mathbbm{1}_{S_j}-\mathbbm{1}_{S})\|_{L^2(\Omega)} = \|\sqrt{Q}\nabla u(\mathbbm{1}_{S_j}-\mathbbm{1}_{S})\|_{L^2(\Omega\setminus E)} \rightarrow 0. \] \end{proof} Our next lemma proves the existence of an approximating sequence of Lipschitz functions with some additional useful properties. \begin{lem} \label{specialapprox} Let ${\bf u} = (u,\nabla u)\in QH^1_0(v;\Omega)$ with $u\in L^\infty(v;\Omega)$ and $u\geq 0$ $v$-a.e. Then there exists a sequence $\{u_j\}_j\subset Lip_0(\Omega)$ such that: \begin{enumerate} \item $0\leq u_j(x)\leq \|u\|_{L^\infty(v;\Omega)}+1$ in $\Omega$; \item $u_j\rightarrow u$ $v$-a.e. and also in $L^2(v;\Omega)$; \item $\nabla u_j \rightarrow \nabla u$ in $L^2_Q(\Omega)$ and $\left|\sqrt{Q}\nabla u_j\right| \rightarrow \left|\sqrt{Q}\nabla u\right|$ pointwise a.e.; \item $\|\nabla u_j\|_{L^2_Q(\Omega)}\leq \|\nabla u\|_{L^2_Q(\Omega)}+1$ for each $j\in \mathbb{N}$.
\end{enumerate} \end{lem} \begin{proof} By the definition of $QH^1_0(v;\Omega)$ and by passing twice to a subsequence, there exists a sequence $\{z_j\}_j\subset Lip_0(\Omega)$ such that: \begin{enumerate} \item[($1^\prime$)] $z_j\rightarrow u$ both $v$-a.e. and also in $L^2(v;\Omega)$; \item[($2^\prime$)] $\nabla z_j\rightarrow \nabla u$ in $L^2_Q(\Omega)$ and $|\sqrt{Q}\nabla z_j| \rightarrow |\sqrt{Q}\nabla u|$ a.e.; \item[($3^\prime$)] $\|\nabla z_j\|_{L^2_Q(\Omega)} \leq \|\nabla u\|_{L^2_Q(\Omega)}+1.$ \end{enumerate} Now let $w_j = |z_j|$. Since $u$ is non-negative $v$-a.e. in $\Omega$, by the triangle inequality we have that \[|w_j-u| = ||z_j|-| u|| \leq |z_j-u|\] $v$-a.e. Therefore, we have that $w_j$ converges to $u$ both in $L^2(v;\Omega)$ and pointwise $v$-a.e. By the Rademacher-Stepanov theorem~\cite{MR3409135}, $\nabla w_j (x) = {\rm sgn}(z_j(x))\nabla z_j(x)$ a.e. Hence, $|\sqrt{Q} \nabla w_j(x)| = |\sqrt{Q}\nabla z_j(x)|$ a.e. and so $\|\nabla w_j\|_{L^2_Q(\Omega)}\rightarrow \|\nabla u\|_{L^2_Q(\Omega)}$ as $j\rightarrow \infty$. Thus $w_j\geq 0$ a.e. and properties ($1^\prime$)--($3^\prime$) above hold with $z_j$ replaced by $w_j$. \smallskip We now define the sequence of $Lip_0(\Omega)$ functions $\{u_j\}_j$. Set $M=\|u\|_{L^\infty(v;\Omega)}+1$ and let $\phi : [0,\infty)\rightarrow [0,\infty)$ be such that $\phi\in C^\infty$, $\phi$ is increasing, $\phi(x)=x$ if $0\leq x\leq M-\frac{1}{2}$, $\phi(x)=M$ if $x\geq M+1$, and $\phi'(x)\leq 1$. Define the $u_j$ by $u_j(x)=\phi(w_j(x))$. Then $u_j \in Lip_0(\Omega)$; moreover, $\nabla u_j(x) = \phi'(w_j(x))\nabla w_j(x)$ a.e. and so \begin{equation} \label{eqn:grad-uj-bound} |\sqrt{Q(x)}\nabla u_j(x)| \leq |\sqrt{Q(x)}\nabla w_j(x)|. \end{equation} We claim that $\{u_j\}_j$ satisfies properties (1)--(4) above. By the definition of $\phi$, $0\leq u_j \leq M$, so property (1) holds. Property (4) follows immediately from \eqref{eqn:grad-uj-bound} and property ($3^\prime$) for the $w_j$.
It remains to prove properties (2) and (3). By the choice of $M$, $u(s) \leq M-1$ for $v$-a.e. $s\in \Omega$. We also have that $w_j(s)\rightarrow u(s)$ $v$-a.e. Let $F$ be the set of all $s\in \Omega$ such that both of these hold. Then $v(\Omega\setminus F)=0$. Given $s\in F$ there exists $N>0$ such that if $j \geq N$, $w_j(s)<M-\frac{1}{2}$, and so $u_j(s)=w_j(s)$. Thus, $u_j\rightarrow u$ pointwise $v$-a.e. Since $u$ is bounded and $v(\Omega)<\infty$, by the dominated convergence theorem we also have that $u_j\rightarrow u$ in $L^2(v;\Omega)$. This proves (2). To prove (3), let $F$ be as above. For each $s\in F$, there exists $N>0$ such that for each $j\geq N$ there is a ball $B_{j,s}$ containing $s$ on which $w_j < M-\frac{1}{2}$; hence, $\nabla u_j(s) = \nabla w_j(s)$ for $j\geq N$. Now let $G$ be the set of $s\in \Omega$ such that $|\sqrt{Q(s)}\nabla w_j(s)|\rightarrow |\sqrt{Q(s)}\nabla u(s)|$; by ($2'$), $|\Omega\setminus G|=0$. Since $v\,dx$ is an absolutely continuous measure, $v(\Omega\setminus G)=0$. Let $H=F\cap G$. Then on $H$ we have that $|\sqrt{Q}\nabla u_j|\rightarrow |\sqrt{Q}\nabla u|$ pointwise. But $v(\Omega\setminus H)=0$, so by Lemma~\ref{meas0} we have that \[ \|\nabla u_j \|_{L^2_Q(\Omega\setminus H)} = 0 = \|\nabla u \|_{L^2_Q(\Omega\setminus H)}. \] This implies that $|\sqrt{Q}\nabla u_j| = 0 = |\sqrt{Q}\nabla u|$ almost everywhere on $\Omega\setminus H$. Therefore, we have that $|\sqrt{Q}\nabla u_j|\rightarrow |\sqrt{Q}\nabla u|$ pointwise a.e. Finally, to prove that $\nabla u_j \rightarrow \nabla u$ in $L^2_Q(\Omega)$ we use the generalized dominated convergence theorem as in the proof of Lemma~\ref{lemma:main1-lemma}. Let $f_j= |\sqrt{Q}(\nabla u_j-\nabla u)|^2$; then $f_j\rightarrow 0$ a.e. Further, by \eqref{eqn:grad-uj-bound} \[ f_j \leq 2|\sqrt{Q}\nabla u_j|^2 +2|\sqrt{Q}\nabla u|^2 \leq 2|\sqrt{Q}\nabla w_j|^2 +2|\sqrt{Q}\nabla u|^2 = g_j.
\] Again by ($2^\prime$), $g_j \rightarrow 4|\sqrt{Q}\nabla u|^2=g$ a.e., and since $\nabla w_j \rightarrow \nabla u$ in $L^2_Q(\Omega)$, $g_j\rightarrow g$ in $L^1(\Omega)$. Therefore, $f_j \rightarrow 0$ in $L^1(\Omega)$, which completes the proof of (3). \end{proof} The next two lemmas give the product rule and chain rule associated to pairs in $QH^1_0(v;\Omega)$. The proofs are adapted from those of similar results in~\cite{MRW}. \begin{lem}\label{prod} Let $(u,\nabla u) \in QH_0^1(v;\Omega)$ and let $\psi\in Lip_0(\Omega)$. Then we have that $(u\psi,\psi\nabla u+u\nabla \psi) \in QH^1_0(v;\Omega)$. \end{lem} \begin{proof} By the definition of $QH_0^1(v;\Omega)$ there exists a sequence $\{w_j\}\subset Lip_0(\Omega)$ such that $w_j\rightarrow u$ in $L^2(v;\Omega)$ and $\nabla w_j \rightarrow \nabla u$ in $L^2_Q(\Omega)$. But then we immediately have that \[ \|w_j\psi-u\psi\|_{L^2(v;\Omega)} \leq \|\psi \|_\infty \|w_j-u\|_{L^2(v;\Omega)}, \] and so $w_j\psi\rightarrow u\psi$ in $L^2(v;\Omega)$. Similarly, since $|Q|_{op}\leq kv$ a.e., we have that \begin{multline*} \|\nabla(w_j\psi) - (u\nabla\psi+\psi\nabla u)\|_{L^2_Q(\Omega)} \leq \|\psi \nabla w_j-\psi\nabla u\|_{L^2_Q(\Omega)} + \|w_j\nabla\psi - u\nabla \psi\|_{L^2_Q(\Omega)} \\ \leq \|\psi\|_\infty \|\nabla w_j - \nabla u \|_{L^2_Q(\Omega)} + k^{\frac12}\|\nabla \psi\|_\infty \|w_j-u\|_{L^2(v;\Omega)}. \end{multline*} Thus, $\nabla(w_j\psi)\rightarrow u\nabla\psi+\psi\nabla u$ in $L^2_Q(\Omega)$ and so $(u\psi,u\nabla \psi + \psi\nabla u) \in QH^1_0(v;\Omega)$. \end{proof} \begin{lem}\label{comp} Let $(u,\nabla u)\in QH^1_0(v;\Omega)$ with $u\geq 0$ $v$-a.e. and $u\in L^\infty(v;\Omega)$. Then, given any non-negative function $\varphi\in C^1(\mathbb{R})$ such that $\varphi(0)=0$, the pair $(\varphi(u),\varphi'(u)\nabla u)\in QH^1_0(v;\Omega)$. \end{lem} \begin{proof} Let $\{u_j\}_j\subset Lip_0(\Omega)$ be the sequence associated with $(u,\nabla u)$ given by Lemma \ref{specialapprox}.
Since $u_j$ is Lipschitz with compact support in $\Omega$ and $\varphi(0)=0$, $\psi_j=\varphi(u_j)\in Lip_0(\Omega)$. Since $u_j\rightarrow u$ $v$-a.e., the continuity of $\varphi$ implies that $\psi_j\rightarrow \varphi(u) =\psi$ $v$-a.e. By the fundamental theorem of calculus, \[ |\varphi(t)| = \left|\int_0^t \varphi'(s)ds\right| \leq \|\varphi'\|_{L^\infty([0,M])}|t| = A_0|t|\] whenever $0\leq t\leq M =\|u\|_{L^\infty(v;\Omega)}+1$. Since by assumption and property (1) of Lemma \ref{specialapprox}, $0\leq u(x),\, u_j(x) \leq M$ for $v$-a.e. $x\in \Omega$, we have that $v$-a.e., \[ |\psi_j - \psi|^2 \leq 2(|\psi_j|^2+|\psi|^2) \leq 2A_0^2(|u_j|^2 + |u|^2). \] Since $|u_j|^2 + |u|^2 \rightarrow 2|u|^2$ $v$-a.e. and in $L^1(v;\Omega)$, by the generalized Lebesgue dominated convergence theorem we get that $\psi_j\rightarrow \psi$ in $L^2(v;\Omega)$. To show the convergence of the gradients, first note that $\sqrt{Q}\nabla \psi_j = \varphi'(u_j)\sqrt{Q}\nabla u_j$ a.e. in $\Omega$ and so by the continuity of $\varphi'$ and property (3) in Lemma \ref{specialapprox} we get that $\sqrt{Q}\nabla \psi_j \rightarrow \varphi'(u)\sqrt{Q}\nabla u$ a.e. Moreover, \begin{multline*} |\sqrt{Q}\nabla \psi_j - \varphi'(u)\sqrt{Q}\nabla u|^2 \leq 2|\varphi'(u_j) \sqrt{Q} \nabla u_j|^2 + 2|\varphi'(u) \sqrt{Q} \nabla u|^2 \\ \leq 2A_0^2(|\sqrt{Q}\nabla u_j|^2 + |\sqrt{Q}\nabla u|^2). \end{multline*} The right-hand term converges to $4A_0^2|\sqrt{Q}\nabla u|^2$ both pointwise a.e. and in $L^1(\Omega)$. Therefore, we can again apply the generalized dominated convergence theorem to get that $\nabla \psi_j\rightarrow \varphi'(u)\nabla u$ in $L^2_Q(\Omega)$. We conclude that $(\varphi(u), \varphi'(u)\nabla u)\in QH^1_0(v;\Omega)$. \end{proof} \subsection*{Exponential results} In this subsection we give two results which are needed to prove Theorem~\ref{main1}. The first gives a solution to an auxiliary Dirichlet problem and is an application of the previous two lemmas.
\begin{lem}\label{AuxProb} Fix $\alpha>0$. If $(u,\nabla u)\in QH^1_0(v;\Omega)$ is a non-negative bounded weak subsolution of the Dirichlet problem \begin{eqnarray}\label{dpA} \left\{\begin{array}{rcll} -\Div\left(Q\nabla u\right)&=&fv&\textrm{for }x\in\Omega,\\ u&=&0&\textrm{for }x\in\partial\Omega, \end{array} \right. \end{eqnarray} then $(w,\nabla w) = (e^{\alpha u}-1,\alpha e^{\alpha u}\nabla u) \in QH_0^1(v;\Omega)$ is a non-negative weak subsolution of the Dirichlet problem \begin{eqnarray}\label{dpB} \left\{ \begin{array}{rcll} -\Div\left(Q\nabla w\right)&=&\alpha f(w+1)v&\textrm{for }x\in\Omega,\\ w&=&0&\textrm{for }x\in\partial\Omega. \end{array} \right. \end{eqnarray} \end{lem} \begin{proof} Fix a non-negative $\psi\in Lip_0(\Omega)$. By our assumptions on $(u,\nabla u)$ and by Lemmas~\ref{prod} and~\ref{comp} we have that both $(w,\nabla w)= (e^{\alpha u}-1,\alpha e^{\alpha u}\nabla u)$ and $(\psi (w+1), (w+1)\nabla\psi+\psi\nabla w)$ are in $QH^1_0(v;\Omega)$. Since $\nabla w = \alpha (w+1)\nabla u$ and $(u,\nabla u)$ is a non-negative weak subsolution of \eqref{dpA}, we have that \begin{align*} \int_\Omega f(w+1)\psi~vdx &\geq \int_\Omega \nabla (\psi(w+1))\cdot Q\nabla u~dx\\ & = \int_\Omega (w+1)\nabla \psi \cdot Q\nabla u~dx + \int_\Omega \psi\nabla w\cdot Q \nabla u~dx \\ &= \frac{1}{\alpha}\int_\Omega \nabla \psi \cdot Q\nabla w~dx + \frac{1}{\alpha}\int_\Omega \frac{\psi}{w+1}\,\nabla w\cdot Q\nabla w~dx\\ &\geq \frac{1}{\alpha}\int_\Omega \nabla \psi\cdot Q\nabla w~dx; \end{align*} the last inequality holds since $\psi\geq 0$, $w+1>0$, and $Q$ is non-negative definite. Since $\psi\in Lip_0(\Omega)$ is arbitrary, we conclude that $w$ is a non-negative weak subsolution of~\eqref{dpB}. \end{proof} Our second result gives the exponential integrability of bounded solutions to \eqref{dp}. A version of this result is proved in~\cite[Lemma~B]{X} for uniformly elliptic operators; a qualitative version appeared previously in~\cite[Example~4]{MR1721824}. Here we adapt the proof from \cite{X} to our more general setting. \begin{lem}\label{expint} Suppose Hypothesis~\ref{sobolev} holds.
Let $f\in L^{\sigma'}(v;\Omega)$ satisfy $\|f\|_{L^{\sigma'}(v;\Omega)}\leq 1$, and let $(u,\nabla u) \in QH_0^1(v;\Omega)$ be a bounded, non-negative weak subsolution of ~\eqref{dp}. Then, for every $\gamma \in (0,\frac{4}{C_0^2})$, with $C_0$ as in \eqref{sob}, there exists $M= M(\gamma,C_0, v(\Omega))$ such that \begin{equation}\label{expest} \int_\Omega e^{\gamma u(x)}v(x)\,dx \leq M. \end{equation} \end{lem} \begin{proof} Let $f$ and $(u,\nabla u)$ be as in the hypotheses. Define $\varphi = e^{\gamma u} -1$ and $\psi = e^\frac{\gamma u}{2}-1$ with $\gamma>0$ to be chosen below. Since $u$ is bounded, by Lemma~\ref{comp} we have that \[ (\varphi,\nabla \varphi) = ( e^{\gamma u} -1, \gamma e^{\gamma u}\nabla u), \quad (\psi,\nabla \psi) = (e^\frac{\gamma u}{2}-1, \frac{\gamma}{2}e^{\frac{\gamma u}{2}} \nabla u) \] are in $QH_0^1(v;\Omega)$. Further, we immediately have the identities $\varphi =\psi^2 + 2\psi$, $\nabla \psi = \frac{\gamma}{2}e^{\frac{\gamma u}{2}}\nabla u$, and $\nabla \varphi =2 e^{\frac{\gamma u}{2}}\nabla \psi$. If we apply the Sobolev inequality~\eqref{sob2} and use $\varphi$ as a test function in Definition~\ref{weaksol} we can estimate as follows: \begin{align*} \|\psi\|_{L^{2\sigma}(v;\Omega)}^2 &\leq C_0^2\int_\Omega \left|\sqrt{Q(x)}\nabla \psi(x)\right|^2\,dx\\ & = \frac{C_0^2\gamma}{4}\int_\Omega \nabla\varphi(x) \cdot Q(x)\nabla u(x)\,dx\\ &\leq \frac{C_0^2\gamma}{4}\int_\Omega f(x)\varphi(x) v(x)\,dx\\ &= \frac{C_0^2\gamma}{4} \left( \int_\Omega f(x)\psi(x)^2 v(x)\, dx + 2\int_\Omega f(x) \psi(x) v(x)\,dx\right).\\ \intertext{If we now apply H\"older's inequality with exponents $\sigma'$ and $\sigma$, and then with exponent $2$, we get} & \leq \frac{C_0^2\gamma}{4} \left( \|f\|_{L^{\sigma'}(v;\Omega)}\|\psi^2\|_{L^{\sigma}(v;\Omega)} + 2 \|f\|_{L^{\sigma'}(v;\Omega)}\|\psi\|_{L^{\sigma}(v;\Omega)}\right) \\ & \leq \frac{C_0^2\gamma}{4} \left( \|\psi\|_{L^{2\sigma}(v;\Omega)}^2 + 2 \|\psi\|_{L^{2\sigma}(v;\Omega)}v(\Omega)^{\frac{1}{2\sigma}}\right).
\end{align*} If we now fix $\gamma\in (0,\frac{4}{C_0^2})$, then we can re-arrange terms to get \begin{equation} \label{eqn:psi-est} \|\psi\|_{L^{2\sigma}(v;\Omega)} \leq \frac{C_0^2\gamma}{2(1-\frac{C_0^2\gamma}{4})} v(\Omega)^{\frac{1}{2\sigma}}. \end{equation} Therefore, again by H\"older's inequality and by~\eqref{eqn:psi-est} applied twice, we have that \begin{align*} \int_\Omega e^{\gamma u(x)}v(x)\,dx & = \int_\Omega \big(\psi(x)^2 +2\psi(x)\big)v(x)\,dx + v(\Omega) \\ & \leq \|\psi\|_{L^{2\sigma}(v;\Omega)}^2v(\Omega)^{\frac{1}{\sigma'}} + 2 \|\psi\|_{L^{2\sigma}(v;\Omega)}v(\Omega)^{\frac{1}{(2\sigma)'}} + v(\Omega) \\ & \leq C(\gamma,C_0)v(\Omega) \\ & = M(\gamma,C_0,v(\Omega)). \end{align*} \end{proof} \section{Proof of Theorem \ref{main0}} \label{section:main0} Fix $f \in L^A(v;\Omega)$ and let ${\bf u}=(u,\nabla u)\in QH^1_0(v;\Omega)$ be a non-negative weak subsolution of \eqref{dp}. We may assume without loss of generality that $\|f\|_{L^A(v;\Omega)} >0$ (equivalently, that $f$ is non-zero on a set $E\subset\Omega$ with $v(E)>0$); otherwise, a standard argument shows that $u=0$ $v$-almost everywhere. (Cf.~\eqref{l2est} below.) By Lemma~\ref{SCALE}, $f\in L^{\sigma'}(v;\Omega)$. For each $r>0$ define $\varphi_r = (u-r)_+$ and let $S(r)=\{x\in\Omega~:~u(x) >r\}$. Then by Lemma~\ref{lemma:main1-lemma}, $(\varphi_r,\nabla \varphi_r) = ((u-r)_+,\mathbbm{1}_{S(r)}\nabla u )\in QH^1_0(v;\Omega)$.
We now estimate as follows: by the Sobolev inequality~\eqref{sob2}, the definition of a weak subsolution with $\varphi_r$ as the test function, and H\"older's inequality, we have that \begin{multline*} \|\varphi_r\|_{L^{2\sigma}(v;\Omega)}^2 \leq C_0^2\int_{S(r)} |\sqrt{Q}\nabla \varphi_r|^2\,dx = C_0^2\int_{S(r)} \nabla \varphi_r\cdot Q\nabla \varphi_r\,dx\\ = C_0^2\int_{S(r)} \nabla \varphi_r\cdot Q\nabla u\,dx \leq C_0^2\int_{S(r)} f\varphi_r ~vdx \leq C_0^2\|f\|_{L^{(2\sigma)'}(v;S(r))}\|\varphi_r\|_{L^{2\sigma}(v;\Omega)} \end{multline*} since $\nabla u =\nabla \varphi_r$ on $S(r)$. If we divide through by $\|\varphi_r\|_{L^{2\sigma}(v;\Omega)}$, we get \begin{equation} \label{eqn:varphi-est} \|\varphi_r\|_{L^{2\sigma}(v;\Omega)} \leq C\|f\|_{L^{(2\sigma)'}(v;S(r))}. \end{equation} In order to estimate the norm on the right-hand side, recall that since $\sigma>1$ we have $(2\sigma)' < \sigma'$, so we can define the Young function $$B(t) = t^\frac{\sigma'}{(2\sigma)'}\log(e+t)^q.$$ It is immediate that $B_\sigma(t) =B(t^{(2\sigma)'})\preceq A(t)$ and so by the H\"older inequality~\eqref{holders}, a change of variables in the Luxemburg norm, and Lemmas~\ref{normcompare} and~\ref{indicators} we get \begin{align*} \|f\|_{L^{(2\sigma)'}(v;S(r))}^{(2\sigma)'} &= \int_\Omega |f|^{(2\sigma)'}~\mathbbm{1}_{S(r)}v\,dx \\ &\leq 2\|f ^{(2\sigma)'}\|_{L^B(v;\Omega)} \|\mathbbm{1}_{S(r)}\|_{L^{\bar{B}}(v;\Omega)}\\ & = 2\|f \|_{L^{B_\sigma}(v;\Omega)}^{(2\sigma)'} \|\mathbbm{1}_{S(r)}\|_{L^{\bar{B}}(v;\Omega)}\\ &\leq C\|f\|_{L^A(v;\Omega)}^{(2\sigma)'} \frac{v(S(r))^{\frac{1}{2\sigma-1}}}{\log(e+(v(S(r)))^{-1})^{q\left(\frac{(2\sigma)'}{\sigma'}\right)}}, \end{align*} where $C=C(\sigma,q,v(\Omega))$ is independent of $f$, $\varphi_r$, and ${\bf u}$. \medskip We now turn to our iteration argument. For all $s>r$, $S(s)\subset S(r)$ and, for $x\in S(s)$, $ \varphi_r(x)>s-r>0$.
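In particular, integrating the pointwise bound $\varphi_r\mathbbm{1}_{S(s)}\geq (s-r)\mathbbm{1}_{S(s)}$ yields
\[
(s-r)\,v(S(s))^{\frac{1}{2\sigma}}
= \left(\int_{S(s)} (s-r)^{2\sigma}\,v(x)\,dx\right)^{\frac{1}{2\sigma}}
\leq \left(\int_{S(s)} \varphi_r^{2\sigma}\,v(x)\,dx\right)^{\frac{1}{2\sigma}}
= \|\varphi_r\mathbbm{1}_{S(s)}\|_{L^{2\sigma}(v;\Omega)}.
\]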
Hence, if we combine the above two inequalities, we get \begin{equation}\label{iterator} v(S(s))^\frac{1}{2\sigma}(s-r) \leq \|\varphi_r \mathbbm{1}_{S(s)}\|_{L^{2\sigma}(v;\Omega)} \leq C\|f\|_{L^A(v;\Omega)} ~\displaystyle \frac{v(S(r))^\frac{1}{2\sigma}} {\log(e+(v(S(r)))^{-1})^\frac{q}{\sigma'}}. \end{equation} Define $r_0 = \tau_0\|f\|_{L^A(v;\Omega)}$ with $\tau_0$ to be chosen below. Our goal is to find $\tau_0$ sufficiently large so that $v(S(r_0))=0$, as this immediately implies that \[ \|u\|_{L^\infty(v;\Omega)} \leq \tau_0\|f\|_{L^A(v;\Omega)}, \] which is what we want to prove. To do this, we will use an iteration argument based on De Giorgi iteration. For each $k\in\mathbb{N}$ set \begin{equation}\label{ck} C_k = r_0 (1-(k+1)^{-\epsilon}) \end{equation} where $\epsilon>0$ will be chosen below, and let $C_0=C_1/2$. The sequence $\{C_k\}_{k=0}^\infty$ increases to $r_0$ and by an estimate using the mean-value theorem we have that for each $k\in\mathbb{N}$, \begin{equation}\label{Ckdist} C_{k+1}-C_k \geq \displaystyle\frac{\epsilon ~r_0}{(k+2)^{1+\epsilon}}. \end{equation} If we set $s=C_{k+1}$ and $r=C_k$ in inequality~\eqref{iterator}, and write $\mu_k = v(S(C_k))$, we get \begin{equation} \label{eqn:mu-est} \mu_{k+1} \leq \left[\displaystyle\frac{C(k+2)^{1+\epsilon}} {\epsilon\tau_0}\right]^{2\sigma} \displaystyle\frac{\mu_k} {\log(e+\mu_k^{-1})^{\frac{2q\sigma}{\sigma'}}} \end{equation} for each $k\in\mathbb{N}$. Since the sets $S(C_k)$ decrease and $v(\Omega)<\infty$, by continuity from above $\mu_k$ converges to $v(\{x\in\Omega : u(x)\geq r_0\})\geq v(S(r_0))$, so to complete the proof it suffices to prove that $\mu_k\rightarrow 0$. Let $m_k = \log(\mu_k^{-1})$. We will show that $m_k\rightarrow \infty$ as $k\rightarrow \infty$, which is equivalent to the desired limit. To do so, we will show that we can choose $\epsilon$ and $\tau_0$ such that $m_0\geq 2$ and \begin{equation} \label{eqn:best-est} m_{k} \geq m_0 + k \end{equation} for all $k\in\mathbb{N}\cup\{0\}$. Fix $\epsilon=\frac{q}{\sigma'}-1>0$.
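The convergence mechanism can be sanity-checked numerically. The sketch below (our own illustration; $\sigma=2$, $q=3$, $C=1$, and $\tau_0=1000$ are placeholder values, not the constants of the proof) iterates, as an equality, the recursion obtained by taking logarithms in \eqref{eqn:mu-est} and replacing $\log(e+\mu_k^{-1})$ by its lower bound $m_k$.

```python
import math

# Placeholder constants (assumptions for illustration only): sigma = 2 gives
# sigma' = 2; q = 3 > sigma'; C = 1 and tau0 = 1000 stand in for the
# structural constants of the iteration.
sigma, q, C, tau0 = 2.0, 3.0, 1.0, 1000.0
sigma_p = sigma / (sigma - 1.0)   # conjugate exponent sigma'
eps = q / sigma_p - 1.0           # eps = q/sigma' - 1

def lower_bound_sequence(m0, steps):
    """Iterate m_{k+1} = 2*sigma*log(eps*tau0/C)
       + (2*sigma*q/sigma') * log(m_k/(k+2)) + m_k, starting from m_0."""
    m = [m0]
    for k in range(steps):
        m.append(2 * sigma * math.log(eps * tau0 / C)
                 + (2 * sigma * q / sigma_p) * math.log(m[k] / (k + 2))
                 + m[k])
    return m
```

Starting from $m_0=2$ (i.e. $\mu_0<e^{-2}$), the sequence grows at least linearly in $k$, which is exactly the behavior asserted in \eqref{eqn:best-est}.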
Since $2\sigma(1+\epsilon) = \frac{2\sigma q}{\sigma'}$, if we take logarithms and re-arrange terms, inequality~\eqref{eqn:mu-est} becomes, for $k\in {\mathbb N}$, \begin{equation}\label{goodest} m_{k+1} \geq 2\sigma\log\left(\displaystyle\frac{\epsilon\tau_0}{C}\right) + \frac{2\sigma q}{\sigma'}\log\left(\displaystyle\frac{m_k}{k+2}\right) +m_k. \end{equation} The first step is to ensure that $m_0\geq 2$ by an appropriate choice of $\tau_0>0$. If we argue as we did to prove~\eqref{eqn:varphi-est} using $u$ as the test function in the definition of a weak subsolution, we get \begin{equation}\label{l2est} \|u\|_{L^{2\sigma}(v;\Omega)} \leq C\|f\|_{L^{(2\sigma)'}(v;\Omega)}. \end{equation} If we estimate the right-hand side using H\"older's inequality and Lemma~\ref{SCALE}, we get \[ \|f\|_{L^{(2\sigma)'}(v;\Omega)} \leq Cv(\Omega)^{\frac{1}{2\sigma}} \|f\|_{L^A(v;\Omega)}, \] where the constant $C$ is independent of $f$ and ${\bf u}$. For each $x\in S(C_0)$ we have that $2u(x)/C_1>1$, so by H\"older's inequality and the above two estimates, \begin{multline*} v(S(C_0)) \leq \frac{2}{C_1}\int_{S(C_0)} uv\,dx \leq \frac{2}{C_1} \|u\|_{L^{2\sigma}(v;\Omega)} v(S(C_0))^{\frac{1}{(2\sigma)'} } \\ \leq \frac{2C}{C_1} v(\Omega)^{\frac{1}{2\sigma}}\|f\|_{L^A(v;\Omega)} v(S(C_0))^{\frac{1}{(2\sigma)'}} \leq \frac{2C v(\Omega)^{\frac{1}{2\sigma}}}{\tau_0(1-2^{-\epsilon})} v(S(C_0))^{\frac{1}{(2\sigma)'}}. \end{multline*} If we re-arrange terms, we get \begin{equation*} v(S(C_0)) \leq \left(\frac{C}{\tau_0(1-2^{-\epsilon})}\right)^{2\sigma}, \end{equation*} where again the constant $C$ is independent of $f$ and $\bf u$. Now choose $\tau_0>0$ so that \begin{equation}\label{tau_0res} \mu_0=v(S(C_0)) < e^{-2}, \; \text{ and } \; \tau_0\geq \max\bigg\{ \frac{2^{\epsilon+1}eC}{2^\epsilon-1}, \frac{eC}{\epsilon}\bigg\}, \end{equation} where $C$ is as in \eqref{iterator}. Note that $\tau_0$ is independent of ${\bf u}$ and $f$, and the first inequality implies that $m_0\geq 2$.
Inequality~\eqref{eqn:best-est} holds trivially when $k=0$; for the sake of clarity we also show directly that $m_1\geq m_0+1$. Since \eqref{goodest} holds only for $k\geq 1$, we instead use \eqref{iterator} directly. If we set $s=C_1$ and $r=C_0$ we find $$\frac{C_1}{2}\mu_1^\frac{1}{2\sigma} \leq C\|f\|_{L^A(v;\Omega)} \frac{\mu_0^{\frac{1}{2\sigma}}}{\log(e+\mu_0^{-1})^\frac{q}{\sigma'}}.$$ If we use the definition of $C_1$ and recall that $m_j=\log(\mu_j^{-1})$, we get \begin{multline*} m_1 \geq 2\sigma\log\left( \frac{(2^{\epsilon}-1)\tau_0}{2^{\epsilon+1}C}\right) +m_0 +2q(\sigma-1)\log(m_0)\\ \geq \log\left( \frac{(2^{\epsilon}-1)\tau_0}{2^{\epsilon+1}eC}\right) +m_0 +1 \geq m_0+1; \end{multline*} the second inequality follows since $m_0\geq 1$, and the third by our choice of $\tau_0$. Now suppose that $m_j \geq m_0 + j $ for some $j\in\mathbb{N}$. Since $m_j\geq m_0+j\geq 2+j$, \eqref{goodest} and \eqref{tau_0res} together show that \begin{multline*} m_{j+1} \geq 2\sigma\log\left(\frac{\epsilon\tau_0}{C}\right) + \frac{2\sigma q}{\sigma'}\log\left(\frac{m_j}{2+j}\right) + m_0 + j \\ \geq \log\left(\frac{\epsilon\tau_0}{eC}\right) + m_0 + j + 1 \geq m_0+j+1, \end{multline*} where the second inequality holds because the logarithmic term is non-negative. Hence, by induction we have that inequality~\eqref{eqn:best-est} holds for all $k$, and this completes our proof. \section{Proof of Theorem \ref{main1}} \label{section:main1} Our proof requires one technical lemma. \begin{lem}\label{Gamma} Given $\sigma>1$, there exist constants $b\in (\sigma,2\sigma)$, $\bar{b}\in((2\sigma)',\sigma')$, and $p>1$ such that \begin{equation} \label{eqn:gamma1} \frac{1}{b}+\frac{1}{\bar{b}}+\frac{1}{p} = 1, \end{equation} and \begin{equation} \label{eqn:gamma2} \Gamma = \frac{2\sigma}{\bar{b}}\left(\frac{\sigma'-\bar{b}}{\sigma'} +\frac{2\sigma - b}{2\sigma}\right)= 1. \end{equation} \end{lem} \begin{proof} We will first show that we can choose $b$ and $\bar{b}$ so that \eqref{eqn:gamma2} holds, and then show that we can refine our choice so that~\eqref{eqn:gamma1} holds as well.
Set $b = 2\sigma(1-\beta)$ and $\bar{b} = (1+\beta)(2\sigma)'$, where $0<\beta < \min( \frac{1}{2}, \frac{\sigma'-(2\sigma)'}{(2\sigma)'})$ will be determined below. With this restriction on $\beta$ it is immediate that $b$ and $\bar{b}$ lie in the specified intervals. Moreover, if we insert these values into the definition of $\Gamma$, we get \begin{multline*} \Gamma = \frac{2\sigma}{(1+\beta)(2\sigma)'} \bigg( \frac{\sigma'-(1+\beta)(2\sigma)'}{\sigma'} + \frac{2\sigma -2\sigma(1-\beta)}{2\sigma}\bigg) \\ =\frac{2\sigma}{(1+\beta)(2\sigma)'} \bigg((1+\beta)\bigg(1- \frac{(2\sigma)'}{\sigma'}\bigg)\bigg) = 2\sigma\bigg(\frac{1}{(2\sigma)'}-\frac{1}{\sigma'}\bigg) = 1. \end{multline*} This gives~\eqref{eqn:gamma2}. To show that we can choose $p>1$ and $\beta$ so that~\eqref{eqn:gamma1} holds, note that \[ \frac{1}{b} + \frac{1}{\bar{b}} = \frac{1}{2\sigma(1-\beta)} + \frac{2\sigma-1}{2\sigma(1+\beta)} = \frac{1+\beta + 2\sigma - 1 -2\beta\sigma+\beta}{2\sigma(1-\beta^2)} = \frac{\sigma-\beta\sigma + \beta}{\sigma(1-\beta^2)}. \] Thus, $\frac{1}{b}+\frac{1}{\bar{b}} < 1$ exactly when $0<\beta < \frac{1}{\sigma'}$. Hence, if we choose $\beta$ sufficiently small we can find $p>1$ such that ~\eqref{eqn:gamma1} holds. \end{proof} \begin{rem} In the proof of Lemma~\ref{Gamma}, the range of possible values for $\beta$ shrinks as the dimension increases. In the classical case, $\sigma'=\frac{n}{2}$, and this value is generally a lower bound on $\sigma'$ in the more degenerate settings. \end{rem} \medskip \begin{proof}[Proof of Theorem~\ref{main1}] Let ${\bf u}=(u,\nabla u)\in QH_0^1(v;\Omega)$ be a non-negative weak subsolution of \eqref{dp}. By the homogeneity of equation~\eqref{dp} and inequality~\eqref{eqn:fixed}, to prove this result it will suffice to assume that $\|f\|_{L^{\sigma'}(v;\Omega)}= 1$ and prove that \begin{equation} \label{eqn:hom} \|u\|_{L^\infty(v;\Omega)} \leq C[ 1+ \log(1+\|f\|_{L^A(v;\Omega)})]. 
\end{equation} To prove \eqref{eqn:hom} we will apply an iteration argument very similar to that in the proof of Theorem~\ref{main0}, but to the solution of an auxiliary equation which we now define. Given that $\|f\|_{L^{\sigma'}(v;\Omega)}= 1$, and since by Theorem~\ref{main0} $u$ is bounded in $\Omega$, we can apply Lemma \ref{expint} and fix $\gamma\in(0,\frac{4}{C_0^2})$ such that \begin{equation}\label{wexpint} \int_\Omega e^{\gamma u(x)}v(x)\,dx \leq M(\gamma,C_0,v(\Omega)) = M. \end{equation} Define $h= e^{\gamma u /p}$ (where $p>1$ will be determined below) and let $w = h-1$. By Lemma~\ref{AuxProb}, $(w,\frac{\gamma}{p}h\nabla u)\in QH^1_0(v;\Omega)$ is a non-negative weak subsolution of \begin{eqnarray}\label{dpB} \left\{ \begin{array}{rcll} -\Div\left(Q\nabla w\right)&=&\alpha fhv&\textrm{for }x\in\Omega,\\ w&=&0&\textrm{for }x\in\partial\Omega. \end{array} \right. \end{eqnarray} For each $r>0$, let $\varphi_r = (w-r)_+$ and $S(r) = \{ x\in \Omega~:~ w(x)>r\}$. By Lemma~\ref{lemma:main1-lemma}, $(\varphi_r,\nabla \varphi_r)\in QH^{1}_0(v;\Omega)$. By Lemma~\ref{Gamma}, there exist $\bar{b}\in ((2\sigma)',\sigma'),\,b\in (\sigma,2\sigma)$, and $p>1$ such that~\eqref{eqn:gamma1} holds.
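Before proceeding, the parameter choice in Lemma~\ref{Gamma} is easy to verify numerically. The following is an illustrative sketch only (the values of $\sigma$ and $\beta$ are placeholders, with $\beta$ taken below the thresholds identified in the proof); it confirms that $\Gamma=1$ and that room is left for an exponent $p>1$:

```python
# Sanity check of Lemma "Gamma" (illustrative; sigma and beta below are
# placeholder values, not constants from the proof).
def conj(s):
    # Hoelder conjugate: s' = s / (s - 1)
    return s / (s - 1.0)

sigma = 1.5
# beta must be below 1/2, (sigma' - (2 sigma)')/(2 sigma)', and 1/sigma'
beta = 0.5 * min(0.5, (conj(sigma) - conj(2 * sigma)) / conj(2 * sigma),
                 1.0 / conj(sigma))

b = 2 * sigma * (1 - beta)              # b in (sigma, 2*sigma)
bbar = (1 + beta) * conj(2 * sigma)     # bbar in ((2*sigma)', sigma')
p = 1.0 / (1.0 - 1.0 / b - 1.0 / bbar)  # solves 1/b + 1/bbar + 1/p = 1

Gamma = (2 * sigma / bbar) * ((conj(sigma) - bbar) / conj(sigma)
                              + (2 * sigma - b) / (2 * sigma))
print(round(Gamma, 12), p > 1)
```

Varying $\sigma>1$ and admissible $\beta$ leaves $\Gamma=1$ unchanged, in line with the algebra in the proof of the lemma.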
We can now argue as we did in the proof of Theorem~\ref{main0} with $\varphi_r$ as a test function, and then apply H\"older's inequality twice to get \begin{align}\label{2-10} \|\varphi_r\|_{L^{2\sigma}(v;\Omega)}^2 &\leq C\int_{S(r)} \nabla \varphi_r Q\nabla\varphi_r~dx \nonumber \\ & = C\int_{S(r)} \nabla \varphi_r Q \nabla w~dx \nonumber \\ & \leq C\int_{S(r)} f\varphi_r h ~vdx \nonumber \\ & \leq C\|f {\mathbbm 1}_{S(r)}\|_{L^{\bar{b}}(v;\Omega)} \|\varphi_r\|_{L^{{b}}(v;\Omega)}\|h\|_{L^{p}(v;\Omega)} \nonumber \\ & \leq C\|f {\mathbbm 1}_{S(r)}\|_{L^{\bar{b}}(v;\Omega)} \|\varphi_r\|_{L^{2\sigma}(v;\Omega)} v(S(r))^{\frac{2\sigma-b}{2\sigma}}; \end{align} the last inequality follows since $b<2\sigma$ and since by~\eqref{wexpint}, $h\in L^p(v;\Omega)$ with a constant independent of $\bf u$ and $f$. Now define the Young function $B(t) = t^\frac{\sigma'}{\bar{b}}\log(e+t)^q$ and note that $B(|t|^{\bar{b}})\preceq A(t)$. Therefore, arguing as before, by Lemma~\ref{indicators} and \eqref{2-10} we have that \[ \|\varphi_r\|_{2\sigma} \leq C\|f\|_A \frac{v(S(r))^{\frac{1}{\bar{b}(\sigma'/\bar{b})'}+\frac{2\sigma - b}{2\sigma\bar{b}}}} {\log(e+v(S(r))^{-1})^\frac{q}{\sigma'}} = C\|f\|_A \frac{v(S(r))^{\frac{\sigma'-\bar{b}}{\bar{b}\sigma'}+\frac{2\sigma - b} {2\sigma\bar{b}}}}{\log(e+v(S(r))^{-1})^\frac{q}{\sigma'}}. \] We can now argue as we did in the proof of Theorem~\ref{main0} to get that for all $s>r$, \[ v(S(s)) \leq \left(\frac{C\|f\|_A}{(s-r)}\right)^{2\sigma} \frac{v(S(r))^{\frac{2\sigma}{\bar{b}}\left(\frac{\sigma'-\bar{b}}{\sigma'}+\frac{2\sigma - b}{2\sigma}\right)}}{\log(e+v(S(r))^{-1})^\frac{2q\sigma}{\sigma'}} = \left(\frac{C\|f\|_A}{(s-r)}\right)^{2\sigma} \frac{v(S(r))}{\log(e+v(S(r))^{-1})^\frac{2q\sigma}{\sigma'}}; \] the last equality holds by~\eqref{eqn:gamma2}.
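The estimate above drives the same iteration on the quantities $m_k=-\log v(S(C_k))$ as in the proof of Theorem~\ref{main0}. A small numerical sketch (every constant here — $C$, $\tau_0$, $\sigma$, $q$, and the starting value $m_0$ — is an illustrative placeholder chosen to satisfy the constraints used in the text) shows the recursion forcing $m_k \geq m_0 + k$:

```python
import math

# Illustrative simulation (not part of the proof) of the iteration
#   m_{k+1} >= 2*sigma*log(eps*tau0/C)
#              + (2*sigma*q/sigma')*log(m_k/(k+2)) + m_k
# with placeholder constants chosen as in the text.
sigma = 1.5
sigmap = sigma / (sigma - 1)            # sigma'
q = 4.5                                 # any q > sigma', so eps > 0
eps = q / sigmap - 1
C = 1.0                                 # placeholder for the constant C
tau0 = max(math.e * C / (1 - 2 ** (-eps)), math.e * C / eps)

m = [2.0]                               # m_0 >= 2
for k in range(50):
    m.append(2 * sigma * math.log(eps * tau0 / C)
             + (2 * sigma * q / sigmap) * math.log(m[k] / (k + 2))
             + m[k])

print(all(m[k] >= m[0] + k for k in range(len(m))))
```

With these choices each step adds at least $2\sigma\log(\epsilon\tau_0/C)\geq 1$, mirroring the inductive argument.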
We continue as in the proof of Theorem \ref{main0} and define $\epsilon = \frac{q}{\sigma'}-1>0$, $C_k$, $k\geq 0$, as in \eqref{ck}, and $m_k = -\log(v(S(C_k)))$ to again get the iteration inequality \begin{equation} \label{eqn:iterate-again} m_{k+1} \geq 2\sigma \log\left(\frac{\epsilon \tau_0}{C}\right) + \frac{2\sigma q}{\sigma'}\log\left(\frac{m_k}{k+2} \right) + m_k. \end{equation} We will again prove that we can choose the parameter $\tau_0$ such that $m_0>1$ and for every $k\in\mathbb{N} \cup \{0\}$, \begin{equation} \label{eqn:final-iteration} m_k \geq m_0 + k. \end{equation} Assume for the moment that~\eqref{eqn:final-iteration} holds. Then arguing as before we have that $\|w\|_\infty \leq \tau_0\|f\|_A$: that is, $$e^{c\|u\|_\infty} \leq \tau_0(\|f\|_A +1),$$ which in turn implies that~\eqref{eqn:hom} holds as desired. \medskip Therefore, to complete the proof we need to show that~\eqref{eqn:final-iteration} holds. The proof is almost identical to the proof of~\eqref{eqn:best-est}: the only difference is in the choice of $m_0$, which we now describe. We first estimate as we did for inequality~\eqref{2-10}: \begin{multline*} \|w\|^2_{L^b(v;\Omega)} \leq \|w\|^2_{L^{2\sigma}(v;\Omega)}v(\Omega)^{\frac{2\sigma-b}{\sigma}} \leq C\int_\Omega fwh\,vdx\; v(\Omega)^{\frac{2\sigma-b}{\sigma}} \\ \leq C\|f\|_{L^{\bar{b}}(v;\Omega) }\|w\|_{L^b(v;\Omega)} \|h\|_{L^p(v;\Omega)} v(\Omega)^{\frac{2\sigma-b}{\sigma}} \leq C\|f\|_{L^{\bar{b}}(v;\Omega)} \|w\|_{L^b(v;\Omega)} v(\Omega)^{\frac{2\sigma-b}{\sigma}}, \end{multline*} where the last inequality holds since $h\in L^p(v;\Omega)$ with norm bounded by a constant. Furthermore, by H\"older's inequality and Lemma~\ref{SCALE}, \[ \|f\|_{L^{\bar{b}}(v;\Omega)}^{\bar{b}} \leq \|f\|_{L^{\sigma'}(v;\Omega)}^{\bar{b}} v(\Omega)^{\frac{1}{(\sigma'/\bar{b})'}} \leq \|f\|_{L^{A}(v;\Omega)}^{\bar{b}} v(\Omega)^{\frac{1}{(\sigma'/\bar{b})'}}. \] Since $C_0 = C_1/2$, for every $x\in S(C_0)$ we have $\frac{2w(x)}{C_1} > 1$.
Thus, combining the above inequalities, we get \begin{multline*} v(S(C_0)) \leq \frac{2}{C_1}\int_{S(C_0)} w ~vdx \leq \frac{2}{C_1}\|w\|_{L^b(v;\Omega)} v(S(C_0))^{\frac{1}{\bar{b}}} \\ \leq \frac{2C}{C_1} \|f\|_{L^{A}(v;\Omega)} v(S(C_0))^{\frac{1}{\bar{b}}} v(\Omega)^{\frac{1}{\bar{b}(\sigma'/\bar{b})'}+\frac{2\sigma-b}{\sigma}} = \frac{C}{\tau_0(1-2^{-\epsilon})} v(S(C_0))^{\frac{1}{\bar{b}}}. \end{multline*} Hence, $$v(S(C_0)) \leq \left(\frac{C}{\tau_0(1-2^{-\epsilon})}\right)^{b},$$ and so we can choose $\tau_0>0$ independent of both ${\bf u}$ and $f$ such that \[ \mu_0=v(S(C_0)) < e^{-2}, \quad \tau_0\geq \max\bigg\{ \frac{eC}{1-2^{-\epsilon}}, \frac{eC}{\epsilon}\bigg\}, \] where $C$ is as in \eqref{eqn:iterate-again}. We may now proceed exactly as in the proof of~\eqref{eqn:best-est} to get that~\eqref{eqn:final-iteration} holds. This completes our proof. \end{proof} \section{Theorem~\ref{main0} is almost sharp} \label{section:counter-example} In this section we construct Example~\ref{example:not-sharp-but-close}, which shows that Theorem \ref{main0} is almost sharp in the case of the Laplacian. Our example is intuitively straightforward. Let our domain $\Omega\subset \mathbb{R}^n$, $n\geq 3$, be the unit ball $B=B(0,1)$, and define \[ f(x) = |x|^{-2}\log(e+|x|^{-1})^{-1}. \] Let $A(t) = t^{\frac{n}{2}}\log(e+t)^q$. We will show that $f\in L^A(B)$ if and only if $q<\frac{n}{2}-1$. Moreover, we claim that, at least formally, if $u$ is the solution of $\Delta u = f$ on $B$, then $u(0)=\infty$. For if we use the well-known fact that the Green's function for the unit ball is $c_n|x|^{2-n}$, then \[ u(0) = c_n \int_B |x|^{-n} \log(e+|x|^{-1})^{-1}\,dx = \infty. \] To make this argument rigorous we must justify our use of Green's formula, which requires that the function $f$ be continuous on $B$. To overcome this, we give an approximation argument and show that the inequality \[ \|u\|_{L^\infty(B)} \leq C \|f\|_{L^A(B)} \] cannot hold with a uniform constant.
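The divergence of the radial integral defining $u(0)$ can also be seen numerically. In the sketch below (the constant $c_n$ and the surface measure of the sphere are dropped — simplifications of this illustration, not part of the argument), the substitution $u=\log(1/r)$ turns the integral over the annulus $2^{-k}\leq |x|<1$ into $\int_0^{k\log 2}du/\log(e+e^u)$, which grows without bound, roughly like $\log k$:

```python
import math

# Illustration of the divergence of the radial integral for u(0):
# with u = log(1/r),
#   int_{2^{-k} <= |x| < 1} |x|^{-n} log(e+|x|^{-1})^{-1} dx
#   ~ int_0^{k log 2} du / log(e + e^u)   (constants dropped),
# which grows without bound as k -> infinity.
def truncated_integral(k, steps=20000):
    # midpoint rule on [0, k log 2]
    upper = k * math.log(2)
    h = upper / steps
    return sum(h / math.log(math.e + math.exp((i + 0.5) * h))
               for i in range(steps))

vals = [truncated_integral(k) for k in (4, 16, 64, 256)]
print(all(b > a for a, b in zip(vals, vals[1:])))
```

Each quadrupling of $k$ adds roughly the same amount to the integral, consistent with the $\log\log(1/r)$ divergence.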
For each $k\geq 1$, let $\chi_k$ be a continuous, non-negative, radial function such that $\chi_k(x)=0$ if $|x|\leq 2^{-k-1}$, and $\chi_k(x)=1$ if $2^{-k} \leq |x| <1$. Define $f_k=\chi_k f$. Each $f_k$ is continuous, and if $u_k$ is the solution to the Dirichlet problem \[ \begin{cases} \Delta u_k = f_k & x\in B, \\ u_k = 0 & x\in \partial B, \end{cases} \] then at the origin it is given by \[ u_k(0) = c_n\int_B |x|^{2-n} f_k(x)\,dx \geq c_n \int_{2^{-k} \leq |x|<1} |x|^{-n} \log(e+|x|^{-1})^{-1}\,dx. \] It is immediate that $u_k(0) \rightarrow \infty$ as $k\rightarrow\infty$. Since by monotonicity of the norm, $\|f_k\|_{L^A(B)} \leq \|f\|_{L^A(B)}$, we have that the inequality \[ u_k(0) \leq \|u_k\|_{L^\infty(B)} \leq C\|f_k\|_{L^A(B)}\leq C\|f\|_{L^A(B)} \] cannot hold with a uniform constant if $f\in L^A(B)$. Therefore, to complete the proof, it will suffice to show $f\in L^A(B)$ if and only if $q<\frac{n}{2}-1$. By the definition of the Luxemburg norm, it will suffice to show that $A(f) \in L^1(B)$. But this is straightforward: \begin{align*} A(f(x)) & = f(x)^{\frac{n}{2}} \log(e+f(x))^q \\ & = |x|^{-n}\log(e+|x|^{-1})^{-\frac{n}{2}} \log(e+|x|^{-2}\log(e+|x|^{-1})^{-1})^{q} \\ & \approx |x|^{-n}\log(e+|x|^{-1})^{-\frac{n}{2}} \log(e+|x|^{-1})^{q}, \end{align*} where the implicit constant only depends on $q$. Thus, $A(f)\in L^1(B)$ if and only if $\frac{n}{2}-q>1$, or equivalently, $q<\frac{n}{2}-1$. \bibliographystyle{plain}
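Finally, the threshold $q<\frac{n}{2}-1$ itself can be cross-checked numerically. The sketch below (the dimension $n=4$ and the two test values of $q$ are arbitrary choices for this illustration; midpoint quadrature) uses the same substitution $u=\log(1/r)$, under which integrability of $A(f)$ over $B$ reduces to convergence of $\int_0^\infty \log(e+e^u)^{q-n/2}\,du$: the tail contribution is small below the threshold and keeps growing above it:

```python
import math

# Cross-check (illustrative; n = 4 chosen arbitrarily) of the threshold
# q < n/2 - 1: after u = log(1/r), integrability of A(f) over B reduces
# to convergence of int_0^infty log(e + e^u)^(q - n/2) du,
# i.e. to the condition q - n/2 < -1.
def partial_integral(expnt, upper, steps=100000):
    # midpoint rule for int_0^upper log(e + e^u)^expnt du
    h = upper / steps
    return sum(h * math.log(math.e + math.exp((i + 0.5) * h)) ** expnt
               for i in range(steps))

n = 4
good_q, bad_q = 0.5, 1.5     # threshold is n/2 - 1 = 1
conv_incr = (partial_integral(good_q - n / 2, 400)
             - partial_integral(good_q - n / 2, 200))
div_incr = (partial_integral(bad_q - n / 2, 400)
            - partial_integral(bad_q - n / 2, 200))
print(conv_incr < 0.1, div_incr > 5)
```

Below the threshold the increment from doubling the truncation is tiny, while above it the partial integrals grow like a power of the truncation height.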
\section*{Introduction} Anomalous scaling is probably the central problem of the theory of turbulence. In 1941 Kolmogorov formulated his famous theory of developed turbulence \cite{Kol}, where the scaling behavior of different correlation functions of the turbulent velocity was predicted. Experimentally one observes deviations from the scaling exponents, proposed by Kolmogorov \cite{61GSM,62BT,91MS}. It is recognized that the deviations are related to rare strong fluctuations making the main contribution to the correlation functions \cite{53Bat,MY,Frish}. This phenomenon, which is usually called intermittency, is the most striking peculiarity of developed turbulence. One of the classical objects in the theory of turbulence is a passive scalar advected by a fluid. The role of the passive scalar can be played by temperature or by the density of pollutants. Correlation functions of the scalar in a turbulent flow possess a scaling behavior that was established by Obukhov and Corrsin \cite{Obukh,Corrs} in the frame of a theory analogous to that of Kolmogorov. Intermittency enforces deviations from the Obukhov-Corrsin exponents that appear to be even stronger than the deviations from the Kolmogorov exponents for the correlation functions of the velocity \cite{84AHGA,90MS,91Sre,91HY}. Unfortunately, a consistent theory of turbulence describing anomalous scaling has not been constructed yet. This accounts for the difficulties associated with the strong coupling inherent to developed turbulence. This is the reason for attempts to examine the intermittency phenomenon in the framework of different simplified models. The most popular model used for this purpose is Kraichnan's model of passive scalar advection \cite{68Kra-a}, where the advecting velocity is believed to be short correlated in time and have Gaussian distribution. That allows one to examine the statistics of the passive scalar in more detail.
The scalar in the Kraichnan model exhibits strong intermittency even if it is absent in the advecting velocity field. This was proved both theoretically \cite{95SS,95GK,95CFKLb,95CF,96BGK,96SS,96BG,97BFL} and numerically \cite{95KYC,97FGLP,98FMV}. In the theoretical works the equation for the $n$-point correlation function $F_n$ was solved assuming that different parameters, such as $\zeta_2$, $2-\zeta_2$, or ${d}^{-1}$, are small (recall, that $\zeta_2$ is the exponent of the second-order correlation function of the passive scalar and ${d}$ is the dimensionality of space). The order of the correlation functions that can be examined in the framework of the methods of the noted papers is bounded from above, which does not allow one to imagine the whole dependence of $\zeta_n$ on $n$. For that it would be enough to get the asymptotic behavior of $\zeta_n$ at $n\gg 1$. There have been several attempts to find the scaling of the correlation functions for larger $n$. In the work by Kraichnan \cite{closure} a closure was assumed, enabling one to find $\zeta_n$ for any $n$. An alternative scheme was proposed in \cite{97Yak}. An attempt to solve the problem at large $n$ was made in \cite{97Chert}, where an $n$-independent asymptotic behavior was found. In the present work we develop a technique based on the path-integral representation of the dynamical correlation functions of classical fields \cite{73MSR,76Dom,76Jan}. We use an idea, formulated in \cite{96FKLM}, that is related to the possibility of exploiting the saddle-point approximation in the path integral at large $n$. The saddle-point conditions are integro-differential equations describing an object that, in analogy to the quantum field theory, we call an instanton. The instantonic method was already successfully used in some contexts. Results concerning Burgers turbulence, conventional Navier-Stokes turbulence, and modifications of the Kraichnan model were obtained with the help of this method in the Refs.
\cite{96GM,97BFKL,97FL,98BF}. The formalism presented in this paper enables one to find correlation functions of the passive scalar for arbitrary $n\gg1$ provided ${d}\zeta_2\gg1$. The paper is organized as follows. In Sec. \ref{model} we formulate the Kraichnan model, introduce notation, and write down the standard path integral representation for the correlation functions. This basic representation turns out to be unsuitable for the saddle-point approximation; therefore, we reformulate the problem in Sec. \ref{lagrange}. Passing to new variables that are Lagrangian separations, we get a path integral that already admits the use of the saddle-point approximation. In Sec. \ref{instant} we consider the instantonic equations for the case of the structure functions. We solve these equations in the limit ${d}\zeta_2\gg1$, which enables us to find the anomalous scaling and estimate the $n$ dependence of $S_n$. The main results of the work are presented in Sec. \ref{results} and discussed in Conclusion. Details of calculations are given in Appendixes. \section{Kraichnan Model} \label{model} Advection of a passive scalar $\theta$ by a velocity field ${\bbox v}$ is described by the equation \begin{equation} \partial_t\theta+{\bbox v}\nabla\theta -\kappa\nabla^2\theta=\phi \,, \label{adv} \end{equation} where $\kappa$ is the diffusion coefficient and $\phi$ is the source of the passive scalar (say, if $\theta$ corresponds to fluctuations of temperature, then $\phi$ represents the power of heaters). In a turbulent flow, ${\bbox v}$ is a random function of time and space coordinates. The source $\phi$ is also assumed to be a random function. Then passive scalar correlation functions are determined by the statistics of ${\bbox v}$ and $\phi$. 
Usually, one is interested in simultaneous correlation functions $F_n=\left\langle\theta({\bbox r}_1)\cdots\theta({\bbox r}_n)\right\rangle$ since a large-scale velocity destroys temporal correlations in the Eulerian frame, whereas simultaneous objects are not influenced by it. It is convenient to examine the anomalous scaling in terms of the structure functions \begin{equation} S_n(r)=\left\langle|\theta({\bbox r}/2) -\theta(-{\bbox r}/2)|^{n}\right\rangle \,. \label{str} \end{equation} One expects a universal behavior of the structure functions in the convective interval of scales $r_d\ll r \ll L$, where $r_d$ is the scale where the diffusivity becomes relevant and $L$ is the correlation length of the scalar source $\phi$. Namely, one observes a scaling dependence on $r$: \begin{equation} S_n(r)\propto r^{\zeta_n} \,. \label{zeta} \end{equation} In the frame of the Obukhov theory \cite{Obukh,Corrs} $\zeta_n=(n/2)\zeta_2$. Therefore, the differences $(n/2)\zeta_2-\zeta_n$, which are usually called anomalous exponents, characterize the anomalous scaling. One can write an estimate \begin{equation} S_n(r)\sim A_n\left[S_2(r)\right]^{n/2}\left(\frac{L}{r}\right) ^{(n/2)\zeta_2-\zeta_n} \,, \label{inter} \end{equation} where $A_n$ is an $n$-dependent factor. Note that Eq. (\ref{inter}) implies that the structure functions in the convective interval do not depend on the diffusion length $r_d$. The intermittency leads to the conclusion that values of the structure functions should be much larger than their naive Obukhov estimations \cite{Frish}. Therefore $(n/2)\zeta_2-\zeta_n>0$ and we conclude that these are the anomalous exponents that reflect the intermittency. \subsection{Formulation of the problem} In the Kraichnan model both ${\bbox v}$ and $\phi$ are assumed to be independent random functions, $\delta$ correlated in time and described by Gaussian statistics homogeneous in space. 
Therefore, statistical properties of the fields are entirely characterized by the pair correlation functions \begin{eqnarray} && \langle\phi(t_1,{\bbox r}_1) \phi(t_2,{\bbox r}_2)\rangle =\chi(r_{12}) \delta(t_1-t_2) \,, \quad \chi(0)=P_2 \,, \nonumber\\ && \langle v_\alpha(t_1,{\bbox r}_1)v_\beta(t_2,{\bbox r}_2)\rangle ={\cal V}_{\alpha\beta}({\bbox r}_1-{\bbox r}_2)\delta(t_1-t_2)\,. \nonumber\end{eqnarray} Here, $\chi(r)$ is a smooth function decaying on the scale $L$ which is the pumping length. The constant $P_2$ has the meaning of the pumping rate of $\theta^2$. The tensor ${\cal V}_{\alpha\beta}(r)$ has a characteristic scale $L_v$, which has the meaning of the pumping length of the velocity. We will assume that $L_v\gg L$. Since $r\ll L$ in the convective interval, we will need ${\cal V}_{\alpha\beta}$ only at $r\ll L_v$, where one can write \begin{equation} {\cal V}_{\alpha\beta}({\bbox r})={\cal V}_0\delta_{\alpha\beta} -{\cal K}_{\alpha\beta}({\bbox r})\,. \label{bl2}\end{equation} The quantity ${\cal V}_0$ is an $r$-independent constant that is the main contribution to the velocity correlation function on scales less than the velocity pumping length $L_v$. Nevertheless, besides ${\cal V}_0$, we should also keep a small $r$-dependent correction ${\cal K}$, since ${\cal V}_0$ corresponds to advection homogeneous in space and therefore does not contribute to simultaneous correlation functions of $\theta$. The velocity correlation function is assumed to possess some scaling properties. Namely, ${\cal K}({\bbox r})\propto r^{2-\gamma}$, where the exponent $\gamma$ characterizes the roughening degree of the velocity field. The field is smooth in space at $\gamma=0$ and is extremely irregular at $\gamma=2$. We will treat an arbitrary $\gamma$ satisfying the inequality $0<\gamma<2$. 
The tensorial structure of ${\cal K}_{\alpha\beta}$ is determined by the incompressibility condition ${\rm div}\,{\bbox v}=0$, implied in the Kraichnan model \begin{equation} {\cal K}_{\alpha\beta}({\bbox r})=\frac{D}{d} r^{-\gamma}\left[\frac{2-\gamma}{{d}-1} (r^2\delta_{\alpha\beta}-r_\alpha r_\beta) +r^2\delta_{\alpha\beta}\right] \,. \label{eq2} \end{equation} Here $d$ is the dimensionality of space and $D$ is a constant characterizing the strength of velocity fluctuations. One assumes that the fluctuations are strong enough to ensure the large value of the Peclet number, that is, \begin{equation} DL^{2-\gamma}\gg\kappa \,. \label{mo3} \end{equation} The inequality (\ref{mo3}) ensures the existence of the convective interval of scales since it can be rewritten as $r_d\ll L$, where $r_d$ is the diffusive length \begin{equation} r_d^{2-\gamma}\sim\kappa/D \,. \label{mo2} \end{equation} The assumption of the Gaussian nature and zero correlation time for the fields ${\bbox v}$ and $\phi$ allows one to derive a closed partial differential equation for the $n$th-order correlation function $F_n$ of $\theta$ \cite{68Kra-a,94SS,95CFKLb}. For the simultaneous pair correlation function $F_2(r_{12})=\langle\theta(t,{\bbox r}_1)\theta(t,{\bbox r}_2)\rangle$ one can solve the equation and find the explicit expression for $F_2$. In the convective interval \cite{68Kra-a} \begin{equation} S_2(r)=2[F_2(0)-F_2(r)]\sim \frac{P_2}{D} r^\gamma \,. \label{pair} \end{equation} Comparing Eq. (\ref{pair}) with Eq. (\ref{zeta}), one concludes that the exponent $\gamma$ introduced by Eq. (\ref{eq2}) directly determines the scaling of the second-order structure function $\zeta_2=\gamma$. However, for $n>2$ the equations for $F_n$ are too complicated to be integrated exactly. In \cite{95GK,95CFKLb,95CF,96BGK} the equations were analyzed in the limits $2-\gamma\ll1$ and ${d}\gamma\gg 1$, where the statistics of the passive scalar is close to Gaussian. 
The analysis led to an anomalous scaling that can be expressed in terms of the exponents $\zeta_n$ of the structure functions (\ref{str}) and (\ref{zeta}) \begin{equation} \zeta_n=\frac{n\gamma}{2}-\frac{2-\gamma}{2({d}+2)}n(n-2) \,. \label{pert} \end{equation} This expression covers both limit cases $2-\gamma\ll1$ and ${d}\gamma\gg1$. The first term on the right-hand side of Eq. (\ref{pert}) represents the normal scaling whereas the second one is just the anomalous scaling exponent. The calculations leading to Eq. (\ref{pert}) are correct if the anomalous contribution is much smaller than the normal one, which implies the inequality \begin{equation} n\ll \frac{{d}\gamma}{2-\gamma} \,. \label{pert1} \end{equation} Below we will develop a different approach to the problem. It will allow us to find the exponents $\zeta_n$ [Eq. (\ref{zeta})] of the structure correlation functions (\ref{str}) for any order $n \gg 1$ under the same additional condition ${d}\gamma\gg1$ as in \cite{95CFKLb,95CF}. \subsection{Path integral} Generally, the statistics of classical fields in the presence of random forces can be examined with the help of the field technique formulated in \cite{73MSR,76Dom,76Jan}. In the framework of the technique, correlation functions are calculated as path integrals with the weight $\exp(i{\cal I})$, where ${\cal I}$ is the effective action related to dynamical equations for the fields. For the passive scalar in the Kraichnan model the effective action is \begin{eqnarray} && i{\cal I}_\theta=i\int {\rm d}t\,{\rm d}{\bbox r}\, \left[p\partial_t\theta+p{\bbox v\nabla}\theta +\kappa\nabla p\nabla\theta\right] \nonumber \\ && -\frac{1}{2}\int {\rm d}t\,{\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\, \chi\left(|{\bbox r}_1-{\bbox r}_2|\right) p(t,{\bbox r}_1)p(t,{\bbox r}_2)\,, \label{bl1} \end{eqnarray} where $p$ is an auxiliary field conjugated to $\theta$. The first term in the effective action (\ref{bl1}) is directly related to the left-hand side of Eq. (\ref{adv}). 
The term quadratic in $p$ in Eq. (\ref{bl1}) appears as a result of averaging over the statistics of the pumping $\phi$. Simultaneous correlation functions of $\theta$ can be represented as functional derivatives of the generating functional \begin{equation} {\cal Z}(\lambda)=\left\langle\exp\left[i\int {\rm d}{\bbox r} \lambda({\bbox r})\theta(t=0,{\bbox r})\right]\right\rangle \,, \label{zlam}\end{equation} where angular brackets designate averaging over the statistics of $\phi$ and ${\bbox v}$. With the help of the action (\ref{bl1}) the generating functional can be rewritten as the path integral \begin{eqnarray} && {\cal Z}(\lambda)= \int{\cal D}\theta\,{\cal D}p\,{\cal D}{\bbox v}\, \exp\biggl[-{\cal F}({\bbox v}) \nonumber \\ && +i{\cal I}_\theta+i\int {\rm d}{\bbox r}\, \lambda({\bbox r})\theta(t=0,{\bbox r})\biggr] \,. \label{bl3} \end{eqnarray} Here ${\cal F}({\bbox v})$ determines the statistics of the velocity field. Since we assume the Gaussian nature of the statistics, ${\cal F}({\bbox v})$ is a functional of second order over ${\bbox v}$ with the kernel determined by the pair correlation function (\ref{bl2}). Knowing ${\cal Z}(\lambda)$ one can restore the probability distribution function (PDF) of $\theta$. It is convenient to treat the PDF of a particular object \begin{equation} \vartheta=\int {\rm d}{\bbox r}\, \beta({\bbox r})\theta(t=0,{\bbox r}) \,, \label{bb14} \end{equation} with a given function $\beta({\bbox r})$. For example, the set of the structure functions (\ref{str}) can be assembled into the PDF of the scalar difference in two points $\theta({\bbox r}/2)-\theta(-{\bbox r}/2)$ which is the object (\ref{bb14}) with $\beta({\bbox r}_1)=\delta({\bbox r}_1-{\bbox r}/2) -\delta({\bbox r}_1+{\bbox r}/2)$. The PDF of $\vartheta$ is written as \begin{equation} {\cal P}(\vartheta)= \int_{-\infty}^{\infty}\frac{{\rm d} y}{2\pi}\exp(-iy\vartheta) {\cal Z}[y\beta({\bbox r})] \,.
\label{bb15} \end{equation} Moments of $\vartheta$ are then expressed as \begin{equation} \langle|\vartheta|^n\rangle =\int_{-\infty}^{\infty} {\rm d}\vartheta\,|\vartheta|^n {\cal P}(\vartheta) \,. \label{mo1} \end{equation} We will be interested in the high-order correlation functions of $\vartheta$ or, in other words, we consider the limit $n\gg 1$. This is equivalent to examining the large $\vartheta$ tail of the PDF (\ref{bb15}). One could expect \cite{96FKLM} that the tail can be calculated in the saddle-point approximation since there is a large parameter $\vartheta$ in the corresponding path integral. Unfortunately, direct application of the method to the integral (\ref{bb15}) or to the moments (\ref{mo1}) does not lead to success. To recognize the reason, let us consider the transformation of the variables \cite{96FKLM} (see also \cite{97FL}) \begin{eqnarray} && {\bbox v}\to X {\bbox v}\,,\ p\to Xp\,,\ t\to X^{-1}t\,,\ y\to Xy\,,\ \kappa\to X\kappa \,. \nonumber \end{eqnarray} One can check that under this transformation all the terms in the square brackets in the right-hand side of Eq. (\ref{bl3}) acquire the factor $X$, which means that in the saddle-point approximation $\ln\left[{\cal Z}(y\beta)\right]=y f(y/\kappa)$ with some unknown function $f$. On the other hand, we expect that correlation functions of the scalar itself (but not of its gradient, for example) do not depend on the diffusivity and the results of the works \cite{95SS,95GK,95CFKLb,95CF,96BGK,96BG,97BFL,94SS} confirm the expectation. Then, at small $\kappa$ the function $f$ remains a $\kappa$-independent constant and we obtain \begin{equation} \ln{\cal Z}(y\beta)\propto|y| \,. \label{tr3} \end{equation} Unfortunately, Eq. (\ref{tr3}) does not help to restore ${\cal P}(\vartheta)$ since after substituting it into Eq. (\ref{bb15}) we realize that the characteristic value of $y$ in the integral can be estimated as $y\sim\vartheta^{-1}$. 
Therefore, at large $\vartheta$ the main contribution to the integral is determined by the region where Eq. (\ref{tr3}) does not work. We conclude that the naive instantonic approach to the problem fails. The reason is that for the instanton the velocity field is fixed (does not fluctuate) in time and space. Obviously, a saddle-point solution is anisotropic because of the incompressibility condition ${\rm div}\,{\bbox v}=0$. Fluctuations related to smooth variations of the anisotropy axis in time and space are strong and destroy the saddle-point approximation for the tail of the PDF ${\cal P}(\vartheta)$ or for the high moments of $\vartheta$. Thus we should transform the problem to more adequate variables, where fluctuations of the velocity are partly taken into account. This is the only chance to construct an instanton with weak fluctuations on its background. This is the goal of the next section. \section{Lagrange Formulation} \label{lagrange} As we mentioned above, the diffusivity $\kappa$ does not enter the result for the structure functions. Therefore, we will assume $\kappa=0$ in all the following calculations. However, one should be careful since in this case it is impossible to deal with point objects. To provide a regularization, we should assume that the characteristic scales of the function $\beta$ in Eq. (\ref{bb14}) are larger than $r_d$. In addition, the scales are to be much smaller than $L$ since we are going to examine correlation functions in the convective interval. In the diffusionless case the left-hand side of Eq. (\ref{adv}) describes the field $\theta$ moving together with the fluid. Then it is natural to pass into the Lagrangian frame where the process is trivial. For that purpose we introduce Lagrangian trajectories ${\bbox\varrho}(t)$ that obey the equation \begin{equation} \partial_t{\bbox\varrho}={\bbox v}(t,{\bbox\varrho}) \,. 
\label{bl4} \end{equation} We will label the trajectories by the positions of fluid particles at $t=0$: ${\bbox\varrho}(t=0)={\bbox r}$. Equation (\ref{adv}) (where $\kappa$ is omitted) can easily be solved in terms of the Lagrangian trajectories \begin{eqnarray} && \theta(0,{\bbox r})=\int_{-\infty}^0\!{\rm d}t\, \phi\left[t,{\bbox\varrho}(t,{\bbox r})\right] \,. \label{grf}\end{eqnarray} Since we are interested in the field $\theta$ at $t=0$, due to causality the integration is performed over negative time. Therefore, Eq. (\ref{bl4}) should be solved backwards in time. A simultaneous $n$th-order correlation function of $\theta$ can be written as the product of $n$ integrals (\ref{grf}), averaged over the statistics of ${\bbox v}$ and $\phi$. In this representation, averaging over the pumping is very simple. For example, the two-point correlation function is \begin{eqnarray} && F_2=\int_{-\infty}^0\!{\rm d}t\, \left\langle\chi\left(R_{12}\right)\right\rangle_v \,, \label{F2} \\ && {R}_{12}(t)\equiv{R}(t,{\bbox r}_1,{\bbox r}_2)= |{\bbox\varrho}(t,{\bbox r}_1) -{\bbox\varrho}(t,{\bbox r}_2)| \,. \label{la30}\end{eqnarray} The angular brackets $\langle\rangle_v$ in Eq. (\ref{F2}) denote averaging over the statistics of ${\bbox v}$ only, since the statistics of $\phi$ is already accounted for there. Similar formulas can be written for correlation functions of higher orders. Once this is done, one can assemble them into the generating functional (\ref{zlam}) \begin{eqnarray} {\cal Z}(\lambda)=\left\langle\exp\left\{- \frac{1}{2}\int {\rm d}t{\rm d}{\bbox r}_{1}{\rm d}{\bbox r}_{2}\, \chi\left(R_{12}\right) \lambda_1\lambda_2\right\}\right\rangle_v \,, \label{zlv} \end{eqnarray} where $\lambda_{1,2}=\lambda({\bbox r}_{1,2})$. Calculating the moments of the object (\ref{bb14}) in accordance with Eqs.
(\ref{bb15}) and (\ref{mo1}) we get \begin{eqnarray} && \langle|\vartheta|^n\rangle=\int\frac{{\rm d}y\,{\rm d}\vartheta}{2\pi} \left\langle\exp\left(-{\cal F}_\lambda -iy\vartheta +n\ln|\vartheta|\right)\right\rangle_v \,, \label{bb86} \\ && {\cal F}_\lambda= \frac{y^2}{2}\int {\rm d}t\,{\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\, \chi(R_{12})\beta({\bbox r}_1)\beta({\bbox r}_2) \,. \label{bb76} \end{eqnarray} At this point, we would like to stress the close connection between the statistics of the passive scalar and that of Lagrangian trajectories \cite{Les}, which can be seen from Eq. (\ref{zlv}). \subsection{Statistics of Lagrangian separations} Equations (\ref{zlv}) and (\ref{bb86}) show that the correlation functions we are interested in are expressed via the average of $\exp(-{\cal F}_\lambda)$ over the velocity. Note that ${\cal F}_\lambda$ [Eq. (\ref{bb76})] depends only on the absolute values $R_{12}(t)$ of Lagrangian differences (\ref{la30}). Therefore, instead of averaging over the statistics of ${\bbox v}$ one could find the answer by averaging over the statistics of the Lagrangian separations $R_{12}$. Due to zero correlation time of the velocity field, the statistical properties of the field $R_{12}$ appear to be relatively simple. To establish the statistics of $R_{12}$ we start from the relation \begin{equation} \gamma^{-1}\partial_t R_{12}^{\gamma}=\zeta_{12}\equiv R_{12}^{\gamma-2}R_{12\alpha} (v_{1\alpha}-v_{2\alpha}) \,, \label{la31} \end{equation} following from Eqs. (\ref{bl4}) and (\ref{la30}). As shown in Appendix \ref{rich}, the average value of $\zeta_{12}$ is nonzero: \begin{equation} \langle\zeta_{12}\rangle=-D \,. \label{aa35} \end{equation} Next, exploiting the expression (\ref{bl2}) for the velocity correlation function, one can find the irreducible pair correlation function \begin{equation} \langle\langle\zeta_{12}(t_1)\zeta_{34}(t_2)\rangle\rangle =\frac{2D}{{d}}Q_{12,34}\delta(t_1-t_2)\,.
\label{la33} \end{equation} The explicit expression for the function $Q$ is rather cumbersome \end{multicols} \begin{eqnarray} && Q_{12,34}=\frac{{d}+1-\gamma}{4({d}-1)} R_{12}^{\gamma-2}R_{34}^{\gamma-2} \left(R_{23}^{2-\gamma}+R_{14}^{2-\gamma} -R_{13}^{2-\gamma}-R_{24}^{2-\gamma}\right) \left(R_{23}^{2}+R_{14}^{2}-R_{13}^{2}-R_{24}^{2}\right) \nonumber \\ && -\frac{2-\gamma}{8({d}-1)} R_{12}^{\gamma-2}R_{34}^{\gamma-2}\biggl\{ \frac{1}{R_{13}^\gamma} \left(R_{12}^{2}+R_{13}^{2}-R_{23}^{2}\right) \left(R_{13}^{2}+R_{34}^{2}-R_{14}^{2}\right) +\frac{1}{R_{23}^\gamma} \left(R_{12}^{2}+R_{23}^{2}-R_{13}^{2}\right) \left(R_{34}^{2}+R_{23}^{2}-R_{24}^{2}\right) \nonumber \\ && +\frac{1}{R_{14}^\gamma} \left(R_{12}^{2}+R_{14}^{2}-R_{24}^{2}\right) \left(R_{14}^{2}+R_{34}^{2}-R_{13}^{2}\right) +\frac{1}{R_{24}^\gamma} \left(R_{12}^{2}+R_{24}^{2}-R_{14}^{2}\right) \left(R_{34}^{2}+R_{24}^{2}-R_{23}^{2}\right) \biggr\} \,. \label{la34} \end{eqnarray} \begin{multicols}{2} \noindent It can be found from the definition of $\zeta_{12}$ [Eq. (\ref{la31})], formula (\ref{eq2}) and relations such as \begin{eqnarray}&& {\bbox R}_{12}{\bbox R}_{13} =\frac{1}{2}(R_{12}^2+R_{13}^2-R_{23}^2) \,, \nonumber \\ && {\bbox R}_{12}{\bbox R}_{34} =\frac{1}{2}(R_{14}^2+R_{23}^2-R_{13}^2-R_{24}^2)\,. \nonumber \end{eqnarray} In the spirit of the conventional procedure \cite{73MSR,76Dom,76Jan}, one can assert that any average over the statistics of $R_{12}$ can be found as the path integral over $R_{12}$ and over an auxiliary field $m_{12}\equiv m(t,{\bbox r}_1,{\bbox r}_2)$ with the weight \begin{eqnarray} \left\langle\exp\left[i \int {\rm d}t\, {\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\, \left(m_{12}\gamma^{-1}\partial_t R_{12}^\gamma -m_{12}\zeta_{12}\right)\right]\right\rangle_v \,, \nonumber \end{eqnarray} where angular brackets mean averaging over the statistics of the velocity. Since $\zeta_{12}$ is $\delta$ correlated in time, the average can be expressed in terms of Eqs. 
(\ref{aa35}) and (\ref{la33}) only. The result is $\exp\left(i{\cal I}_R\right)$, where \begin{eqnarray} && i{\cal I}_R=i \int\limits_{-\infty}^0 {\rm d}t\, \int {\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\, m_{12}(\gamma^{-1}\partial_t R_{12}^\gamma+D) \nonumber \\ && -\frac{D}{{d}}\int\limits_{-\infty}^0 {\rm d}t\, \int {\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\, {\rm d}{\bbox r}_3\,{\rm d}{\bbox r}_4\,Q_{12,34}m_{12}m_{34} \,. \label{la36} \end{eqnarray} Now, instead of Eq. (\ref{bb86}) we can write \begin{eqnarray} && \langle|\vartheta|^n\rangle=\int\frac{{\rm d}y\,{\rm d}\vartheta}{2\pi} \int {\cal D}R\,{\cal D}m\, e^{i{\cal I}_R-{\cal F}_\lambda -iy\vartheta+n\ln|\vartheta|} \,. \label{bb57} \end{eqnarray} The integration in Eq. (\ref{bb57}) is performed over functions of $t$, ${\bbox r}_1$, and ${\bbox r}_2$ with some boundary conditions imposed on them. The condition for the field $R_{12}$ follows from ${\bbox\varrho}(0)={\bbox r}$ and reads \begin{equation} R_{12}(t=0)=|{\bbox r}_1-{\bbox r}_2| \,. \label{term} \end{equation} The boundary condition for the field $m_{12}$ should be $m_{12}(-\infty)=0$, since we deal with free integration over $R_{12}$ in the remote past. Note that due to the definition (\ref{la30}) the triangle inequalities \begin{equation} R_{12}+R_{23}>R_{13} \label{trian} \end{equation} should be satisfied for any three points. Actually, the inequalities are constraints that should be imposed on the field $R_{12}$ when integrating in Eq. (\ref{bb57}). \subsection{General instantonic equations} \label{genin} In the preceding subsection we derived the formula (\ref{bb57}) for $\langle|\vartheta|^n\rangle$. Its calculation is equivalent to solving a nonlinear field theory, a task that appears infeasible in general. We are going to calculate the integral (\ref{bb57}) in the saddle-point approximation, treating the number $n$ as a large parameter. To be consistent, one should keep in mind the constraints (\ref{trian}) when carrying out this procedure.
Unfortunately, it is very hard to take them into account explicitly. We will ignore the constraints, which is correct under the following conditions. First, the inequalities (\ref{trian}) should be valid in the instantonic solution. Second, fluctuations on the background of the instanton should be weak (this is also the applicability condition of the instantonic formalism itself). We argue in Appendixes \ref{triangle} and \ref{fluctu} that those conditions are satisfied if \begin{equation} {d}\gamma\gg1 \,. \label{ineq} \end{equation} Note also that under the condition (\ref{ineq}) fluctuations of a Lagrangian separation near its average value are weak (see Appendix \ref{simul}). The inequality (\ref{ineq}) will be implied below. Thus we obtain from the integral (\ref{bb57}) in the saddle-point approximation \begin{equation} \langle|\vartheta|^n\rangle\sim \left.\exp\left(i{\cal I}_R-{\cal F}_\lambda-iy\vartheta +n\ln|\vartheta|\right)\right|_{\rm inst} \,. \label{as20} \end{equation} Here one should substitute the solutions of the instantonic equations, which are the extremum conditions for the argument of the exponent on the right-hand side of Eq. (\ref{bb57}). Variation over $m_{12}$ and $R_{12}$ gives the instantonic equations \begin{eqnarray} && i(\gamma^{-1}\partial_t R_{12}^\gamma+D) =2\frac{D}{{d}}\int {\rm d}{\bbox r}_3\, {\rm d}{\bbox r}_4\,Q_{12,34}m_{34}\,, \label{la37} \\ && iR_{12}^{\gamma-1}\partial_tm_{12} \!+\!\frac{D}{{d}}\int {\rm d}{\bbox r}_3\,{\rm d}{\bbox r}_4\, \left\{2\frac{\partial Q_{12,34}}{\partial R_{12}}m_{12}m_{34}\right. \nonumber \\ && \left.+4\frac{\partial Q_{13,24}}{\partial R_{12}}m_{13}m_{24}\right\} =-\frac{y^2}{2}\chi'(R_{12})\beta({\bbox r}_1)\beta({\bbox r}_2) \,. \label{la38} \end{eqnarray} The extremum conditions over $y$ and $\vartheta$ read \begin{eqnarray} && \vartheta=iy\int {\rm d}t\,{\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\,\chi(R_{12}) \beta({\bbox r}_1)\beta({\bbox r}_2)\,, \label{aa38} \\&& iy=n/\vartheta \,.
\label{qq2} \end{eqnarray} Note that only Eqs. (\ref{la37}) and (\ref{la38}) are true dynamical equations, carrying the information about the dynamics of the flow, whereas Eqs. (\ref{aa38}) and (\ref{qq2}) are constraints imposed on the instantonic solution. One needs to add to Eqs. (\ref{la37}) and (\ref{la38}) some boundary conditions. The value of the field $R_{12}$ is fixed at $t=0$ by Eq. (\ref{term}). As for the field $m_{12}$, we already noted that it should tend to zero when $t\to-\infty$. It can be understood as the extremum condition that appears after variation of the effective action over the boundary value of $R_{12}$ in the remote past. One can easily establish the asymptotic behavior of $R_{12}$ at $|t|\to\infty$. There the field $R_{12}$ grows and loses its dependence on ${\bbox r}_{1,2}$. The field $m_{12}$ tends to its vacuum zero value at $|t|\to\infty$. Therefore, at large $|t|$ the term with $m_{12}$ in Eq. (\ref{la37}) can be omitted and we find \begin{equation} R^\gamma\approx\gamma D|t| \,. \label{aa40} \end{equation} The expression (\ref{aa40}) is nothing but the Richardson law for the divergence of Lagrangian trajectories \cite{26Rich}. Let us stress that now it holds on the classical (mean-field) level, without taking into account fluctuations on the background of the instanton. To clarify this point, notice that if the velocity field is a deterministic function of time and space (as it is for the naive instanton discussed above) then the Richardson law cannot be valid for all the Lagrangian trajectories. In our instanton we have eliminated the velocity field, which resulted in the emergence of the Richardson law. Note that the triangle inequalities (\ref{trian}) are obviously satisfied both for (\ref{term}) and for the asymptotic behavior (\ref{aa40}). The expression for the action appearing in Eq.
(\ref{bb57}) is \begin{eqnarray}&& i{\cal I}\equiv i{\cal I}_R\!-\!{\cal F}_\lambda \!=\!i\int {\rm d}t{\rm d}{\bbox r}_1{\rm d}{\bbox r}_2\, \gamma^{-1}m_{12}\partial_t R_{12}^\gamma \!-\! E \,, \label{aa60} \\ && E=\frac{y^2}{2}\!\int {\rm d}{\bbox r}_1{\rm d}{\bbox r}_2 \chi(R_{12})\beta({\bbox r}_1)\beta({\bbox r}_2) -iD\int {\rm d}{\bbox r}_1{\rm d}{\bbox r}_2 m_{12} \nonumber \\ && +\frac{D}{{d}}\int {\rm d}{\bbox r}_1\,{\rm d}{\bbox r}_2\, {\rm d}{\bbox r}_3\,{\rm d}{\bbox r}_4\,Q_{12,34}m_{12}m_{34} \,. \label{la39} \end{eqnarray} We see from Eq. (\ref{aa60}) that the quantity $E$ plays the role of the Hamiltonian function of the system, while Eqs. (\ref{la37}) and (\ref{la38}) are canonical equations corresponding to the Hamiltonian function. Since $E$ does not explicitly depend on time $t$, its value (which can be called energy) is conserved. Actually the energy is zero on the instantonic solution since at $t\to-\infty$ we have $m_{12}\to0$ and $R_{12}\to\infty$. Note that since the Hamiltonian (\ref{la39}) explicitly depends on the coordinates via $\beta$, there is no momentum conservation law. Before proceeding to the solution of the instanton equations, let us make a remark concerning fluctuations on the background of the instanton. In the linear approximation over the fluctuations we obtain an estimate for the typical fluctuation of $R^\gamma$ \begin{equation} \left(\delta R^\gamma\right)^2 \sim \gamma DR^\gamma |t|{d}^{-1} \,. \label{fl2} \end{equation} Note that the fluctuations of $R$ tend to zero when $t\to0$ since $R_{12}$ is fixed at $t=0$. Comparing the estimate (\ref{fl2}) with Eq. (\ref{aa40}) we obtain \begin{equation} {\left(\delta R^\gamma\right)^2}/ {R^{2\gamma}}\sim{d}^{-1} \,. \label{fl3} \end{equation} We conclude that the fluctuations on the background of our instanton are weak provided ${d}\gg1$. The above evaluations are rough and need a more accurate analysis (see Appendix \ref{fluctu}). 
Nevertheless, they show that the Richardson behavior (\ref{aa40}) inherent in our instanton suppresses fluctuations on its background. The system (\ref{la37}) and (\ref{la38}) consists of two nonlinear integro-differential equations with boundary conditions imposed on opposite sides of the time interval, that is, at $t=0$ for $R_{12}$ and at $t=-\infty$ for $m_{12}$. Therefore, in the general case it is very difficult to solve the instanton equations. Nevertheless, one can hope that for some particular objects the system of equations can be reduced to a simpler form allowing a complete solution. This hope comes true for the structure functions. \section{Instanton for Structure Functions} \label{instant} Using the general scheme developed in Sec. \ref{lagrange} we will examine the expressions for the structure functions (\ref{str}) at large $n$. In other words, we will be interested in the statistics of the passive scalar difference taken at points separated by the distance ${\bbox r}$. Since the diffusivity is neglected, we cannot examine the difference $\theta({\bbox r}/2)-\theta(-{\bbox r}/2)$ itself. Nevertheless, we can treat the statistics of the differences averaged over separations near ${\bbox r}$. So we should consider the object (\ref{bb14}) with \begin{equation} \beta({\bbox r}_1)= \delta_\Lambda\left({\bbox r}_1-\frac{{\bbox r}}{2}\right) -\delta_\Lambda\left({\bbox r}_1+\frac{{\bbox r}}{2}\right) \,. \label{bl23} \end{equation} Here $\delta_\Lambda({\bbox r})$ is a function of width $\Lambda^{-1}\gg r_d$ satisfying the condition $\int {\rm d}{\bbox r}\,\delta_\Lambda({\bbox r})=1$, which can be called a smeared $\delta$ function. Then we can write \begin{equation} S_n\approx\langle|\vartheta|^n\rangle \sim \left.\exp\left(i{\cal I}-n +n\ln|\vartheta|\right)\right|_{\rm inst} \,, \label{basic} \end{equation} where we used Eq. (\ref{as20}) and substituted Eq. (\ref{qq2}).
\subsection{Reduction} \label{reduc} Now we turn to the instantonic equations (\ref{la37}) and (\ref{la38}). Let us observe that, since the source on the right-hand side of Eq. (\ref{la38}) is proportional to $\beta({\bbox r}_{1})\beta({\bbox r}_{2})$, the field $m_{12}$ can be approximated as \begin{eqnarray} && m_{12}=\frac{i m_+}{2}\left\{ \delta_\Lambda\left({\bbox r}_1-\frac{{\bbox r}}{2}\right) \delta_\Lambda\left({\bbox r}_2-\frac{{\bbox r}}{2}\right)\right. \nonumber \\ && +\left.\delta_\Lambda\left({\bbox r}_1+\frac{{\bbox r}}{2}\right) \delta_\Lambda\left({\bbox r}_2+\frac{{\bbox r}}{2}\right)\right\} \nonumber \\ && -\frac{im_-}{2}\left\{ \delta_\Lambda\left({\bbox r}_1-\frac{{\bbox r}}{2}\right) \delta_\Lambda\left({\bbox r}_2+\frac{{\bbox r}}{2}\right)\right. \nonumber \\ && +\left.\delta_\Lambda\left({\bbox r}_1+\frac{{\bbox r}}{2}\right) \delta_\Lambda\left({\bbox r}_2-\frac{{\bbox r}}{2}\right)\right\} \,, \label{la44} \end{eqnarray} where $m_\pm$ are functions of time only. In writing it, we implicitly assumed that the field $R_{12}$ is smooth near the points $\pm{\bbox r}/2$. Then the relations (\ref{aa38}) and (\ref{qq2}) give \begin{eqnarray} \vartheta^2=2n\int\limits_{-\infty}^0 {\rm d}t\, \left\{\chi\left( R_+\right)- \chi\left( R_-\right)\right\} \,, \label{qq3} \end{eqnarray} where we introduced \begin{eqnarray} && R_+(t)=R(t,{\bbox r}/2,{\bbox r}/2)\,,\ R_-(t)=R(t,{\bbox r}/2,-{\bbox r}/2)\,. \label{la45} \end{eqnarray} Substituting the expression (\ref{la44}) into Eqs. (\ref{la37}) and (\ref{la38}) we obtain a closed system of ordinary differential equations for $m_\pm$ and $R_\pm$. It is convenient to proceed in terms of the effective action. Substituting Eq. (\ref{la44}) into Eq.
(\ref{aa60}) we get \begin{eqnarray} && i{\cal I}=\int_{-\infty}^0 \!\!dt\left[\gamma^{-1}\left( m_-\partial_tR_-^\gamma-m_+\partial_tR_+^\gamma\right)-E\right] \,, \label{lagr} \\ && E=y^2\left\{\chi\left(R_+\right)- \chi\left(R_-\right)\right\}+D(m_+-m_-) \nonumber \\ && -\frac{D(2-\gamma)}{4{d}({d}-1)}\left\{ m_-^2\varphi_1+2 m_- m_+\varphi_2+m_+^2\varphi_3 \right\} \,. \label{qq1} \end{eqnarray} Here we introduced the notation \begin{eqnarray} && \varphi_1=\frac{4({d}+1-\gamma)}{2-\gamma}R_-^{2\gamma-4} \left[R_-^{2-\gamma}-R_+^{2-\gamma} \right]\left[R_-^{2}-R_+^{2}\right] \nonumber \\ && -R_-^{2\gamma-4}\left[R_+^{4-\gamma} +\frac{(2R_-^{2}-R_+^{2})^2}{R_-^\gamma}\right] \,, \label{varphi} \\ && \varphi_2=R_+^\gamma\left[ \frac{R_+^{2-\gamma}}{R_-^{2-\gamma}}+ \frac{2R_-^2-R_+^2}{R_-^2}\right], \ \varphi_3=-R_+^\gamma\left[ 1+\frac{R_+^\gamma}{R_-^\gamma}\right] \,. \nonumber \end{eqnarray} Since the effective action (\ref{lagr}) depends only on the functions $m_\pm(t)$ and $R_\pm(t)$, one can obtain the system of ordinary differential equations for the functions as extremum conditions of the action. The boundary conditions for the equations are $R_+=0$ and $R_-=r$ at $t=0$ [see Eq. (\ref{term})] and $m_\pm\to0$ at $t\to-\infty$. Resolution of the system allows one to find $m_\pm$ and $R_\pm$ as functions of time. Once they are known, it is possible to restore the function $R_{12}$ in the whole space from Eq. (\ref{la37}). The problem is discussed in Appendix \ref{triangle}. There we argue that the function $R_{12}$ is indeed smooth in space, which justifies the procedure described. Since we accept Eq. (\ref{ineq}), ${d}\gg1$. Using this inequality, one can keep in the functions (\ref{varphi}) only the terms of leading order in ${d}$. This means that one can neglect in Eq. (\ref{varphi}) the second contribution to $\varphi_1$ in comparison to the first one, and also $\varphi_2$ and $\varphi_3$ in comparison to $\varphi_1$.
This procedure is potentially dangerous. We will show that due to the smallness of $r/L$, the intervals where $R_--R_+\ll R_-$ play an important role. Then, we see that it is the difference of $R_\pm$ that enters the first term in $\varphi_1$, while the others do not contain this smallness. Therefore, we observe cancellations that could lead to a competition between ${d}$ and $L/r$ (the latter parameter is considered as the largest in the problem). To check this possibility, we performed calculations keeping all the terms in Eq. (\ref{varphi}). The calculations are sketched in Appendix \ref{corr}. They show that in the final expressions only combinations of $\varphi_{1,2,3}$ containing the same cancellations are of importance. This proves the legitimacy of the procedure. Omitting $\varphi_{2,3}$ in the expression (\ref{qq1}) and then varying the action (\ref{lagr}) over $m_+$, we get a trivial equation for $R_+$, \begin{equation} \gamma^{-1}\partial_t R_+^\gamma=-D \,. \label{nn1} \end{equation} Its solution, satisfying the boundary condition $R_+(0)=0$, is simply \begin{equation} R_+^\gamma=\gamma D|t| \,. \label{nn2} \end{equation} To examine the behavior of $R_-$ it is convenient to pass to the new variables \begin{equation} R_+=Le^\xi\,, \quad R_-^\gamma=R_+^\gamma(1+v)\,, \quad \mu=m_-R_+^\gamma \,. \label{nn3} \end{equation} As time $t$ goes from $0$ to $-\infty$, the variable $\xi$ runs from $-\infty$ to $+\infty$ and $v$ runs from $+\infty$ to $0$. The latter is clear from the asymptotic behavior $R_-^\gamma\approx R_+^\gamma=\gamma D|t|$ at $t\to-\infty$. The relation (\ref{qq3}) in terms of the new variables is \begin{equation} \vartheta^2=2n\frac{L^\gamma}{D} \int\limits_{-\infty}^{+\infty} {\rm d}\xi\, e^{\gamma\xi}\left[ \chi\left(R_+\right)-\chi\left(R_-\right)\right] \,. \label{nnk} \end{equation} Recall that the energy $E$ entering the action (\ref{lagr}) is an integral of motion whose value is equal to zero.
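Equations (\ref{nn1}) and (\ref{nn2}) are easy to check numerically: integrating $\gamma^{-1}\partial_t R_+^\gamma=-D$ backward in time from $R_+(0)=0$ reproduces the Richardson-type growth $R_+=(\gamma D|t|)^{1/\gamma}$. A minimal sketch, with illustrative values of $\gamma$ and $D$:

```python
gamma, D = 0.8, 1.5       # illustrative parameters

# integrate gamma^{-1} d(R_+^gamma)/dt = -D backward in time from R_+(0) = 0;
# y stands for R_+^gamma, and going backward in t it grows linearly
y, t, dt = 0.0, 0.0, 1e-3
for _ in range(10000):
    t -= dt
    y += gamma * D * dt
R_plus = y ** (1.0 / gamma)
exact = (gamma * D * abs(t)) ** (1.0 / gamma)    # Eq. (nn2)
print(R_plus, exact)      # the two values agree
```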
Thus we can perform the standard procedure of excluding a degree of freedom in a canonical system. Equating the expression (\ref{qq1}) to zero, we can express $m_+$ in terms of $\mu$, $v$, and $\xi$. Substituting the result into Eq. (\ref{lagr}), we get \begin{eqnarray} && -i{\cal I}=\int\limits_{-\infty}^{+\infty} {\rm d}\xi\, (\gamma^{-1}\mu\partial_\xi v-H) \,, \label{nn4} \\ && H\!=\!-\mu v\!+\!\frac{\mu^2}{{d}}\phi(v) \!+\!\frac{|y|^2 L^\gamma}{D}\left[ \chi\left(R_+\right)\!-\!\chi\left(R_-\right) \right] e^{\gamma\xi} \,, \label{ham} \\ && \phi\!=\!(1\!+\!v)^{2-4/\gamma} \left[(1\!+\!v)^{2/\gamma-1}\!-\!1 \right]\left[(1\!+\!v)^{2/\gamma}\!-\!1\right] \,. \label{phi} \end{eqnarray} Here we kept only the leading contributions in ${d}$. In Eq. (\ref{ham}) we set $y^2=-|y|^2$ since, as follows from Eqs. (\ref{qq2}) and (\ref{qq3}), $y$ is an imaginary number. Extremum conditions for the action (\ref{nn4}) read \begin{equation} \gamma^{-1}\frac{{\rm d}v}{{\rm d}\xi}=\frac{\partial H}{\partial\mu} \,, \qquad \gamma^{-1}\frac{{\rm d}\mu}{{\rm d}\xi}=-\frac{\partial H}{\partial v} \,, \label{nn7} \end{equation} which are canonical equations for the variables $\mu$ and $v$ in the `time' $\xi$. Of course, Eqs. (\ref{nn7}) could be obtained directly from the extremum conditions for the action (\ref{lagr}). To conclude, we reformulated the problem as follows: find such a value of $y$ that the solution of Eqs. (\ref{nn7}) with the given $y$, being substituted into Eq. (\ref{nnk}), reproduces the correct value of $\vartheta=-in/y$. Below we discuss the first and most difficult part of the program, that is, the solution of the system (\ref{nn7}). Though it cannot be integrated exactly, we can solve the system approximately by asymptotic matching, which is enough to determine the structure functions $S_n$. \subsection{General structure of the instanton} \label{gener} The evolution of $R_-$ in `time' $\xi$ can be divided into three stages.
During the first stage, starting at $\xi=-\infty$, both $R_+$ and $R_-$ are much less than $L$ and it is possible to substitute both $\chi(R_+)$ and $\chi(R_-)$ by $\chi(0)$. Then the last term in Eq. (\ref{ham}) is equal to zero. During the second stage $R_\pm\sim L$ and the last term in Eq. (\ref{ham}) is of importance. During the final stage, where $R_+\approx R_-\gg L$, one can again neglect the last term in Eq. (\ref{ham}). Note that only the second stage contributes to $\vartheta^2$, which can be seen from Eq. (\ref{nnk}). Since the Hamiltonian $H$ [Eq. (\ref{ham})] does not explicitly depend on `time' $\xi$ during the first and the third stages, its value is conserved there. Actually, the value of $H$ is equal to zero during the third stage since $\mu\to0$ and $R_+\approx R_-\gg L$ at $\xi\to+\infty$. On the other hand, during the first stage the value $H_1$ of the Hamiltonian function $H$ is nonzero. Therefore, during the second stage the value of $H$ diminishes and should finally reach zero when the trivial third stage starts. The value of $H_1$ as a function of $n$ has to be established from the matching of the stages. Now we are going to solve Eq. (\ref{nn7}) for the first stage. Resolving the equation $H=H_1$ in terms of $\mu$ we get \begin{eqnarray} && \mu=\frac{{d}}{2\phi}(v-G) \,, \label{nn8} \\ && G(v)=\pm\sqrt{v^2+\frac{4H_1\phi}{{d}}} \,. \label{nn9} \end{eqnarray} Then we find from Eq. (\ref{nn7}) \begin{equation} \gamma^{-1}\frac{{\rm d}v}{{\rm d}\xi}=-G(v) \,. \label{vvv} \end{equation} At $\xi\to-\infty$ (that is at small $|t|$) the function $v$ should decrease with increasing $\xi$ since $R_-\approx r$ and $R_+$ increases. To ensure the negative value of ${\rm d}v/{\rm d}\xi$ in Eq. (\ref{vvv}) one should take the positive sign of the square root in Eq. (\ref{nn9}), which leads to a positive value of $G$. 
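The algebra behind Eqs. (\ref{nn8})--(\ref{vvv}) can be checked directly: during the first stage the pumping term in Eq. (\ref{ham}) vanishes, so $H=-\mu v+\mu^2\phi(v)/{d}$ is quadratic in $\mu$; solving $H=H_1$ gives Eq. (\ref{nn8}), and then $\partial H/\partial\mu=-v+2\mu\phi/{d}=-G$, which is Eq. (\ref{vvv}). A sketch with illustrative parameter values:

```python
import math

gamma, d, H1, v = 1.0, 100.0, -5.0, 0.5   # illustrative; here H1 > H_c = -12.5

def phi(v):
    # Eq. (phi)
    return ((1 + v) ** (2 - 4 / gamma)
            * ((1 + v) ** (2 / gamma - 1) - 1)
            * ((1 + v) ** (2 / gamma) - 1))

def H(mu, v):
    # first-stage Hamiltonian: the pumping term chi(R_+) - chi(R_-) vanishes there
    return -mu * v + mu * mu * phi(v) / d

G = math.sqrt(v * v + 4 * H1 * phi(v) / d)   # Eq. (nn9), positive branch
mu = d / (2 * phi(v)) * (v - G)              # Eq. (nn8)
h = 1e-6
dH_dmu = (H(mu + h, v) - H(mu - h, v)) / (2 * h)
print(H(mu, v) - H1, dH_dmu + G)   # both differences vanish
```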
The sign of $G$ can change if, during the evolution, $G$ passes through zero, which corresponds to the presence of a reverse point in the dependence of $v$ on $\xi$. Equation (\ref{vvv}) enables one to find $v$ as a function of $\xi$. Let us integrate the equation over $\xi$ from $-\infty$ to some value. Then we get \begin{eqnarray} && \int^v_\infty {\rm d}x\left[\frac{1}{x}-\frac{1}{G(x)}\right]= \ln\left[\frac{v R_+^\gamma}{r^{\gamma}}\right] \,. \label{fv0} \end{eqnarray} To avoid difficulties related to infinite values of $\xi$ and $v$ at the initial point, we subtracted from $G^{-1}$ its asymptote $G^{-1}(x)\approx 1/x$ at large $x$. This enforces the convergence of the integral (\ref{fv0}) at large $x$. The constant of integration in Eq. (\ref{fv0}) was established from the limit $v\to\infty$: since the integral on the left-hand side of Eq. (\ref{fv0}) tends to zero as $v$ increases, the right-hand side of Eq. (\ref{fv0}) should also tend to zero. This requirement is ensured by the $r$-dependent factor in Eq. (\ref{fv0}) since $R_+^\gamma\approx r^\gamma/v$ at $v\to\infty$, as follows from the boundary condition $R_-(0)=r$ and Eq. (\ref{nn3}). The left-hand side of Eq. (\ref{fv0}) should be viewed as a contour integral, which determines its value in the case of non-monotonic behavior of $v$ as a function of $\xi$. Equation (\ref{fv0}) allows us to establish a relation for the parameters characterizing the first stage. Let us consider the integral over the whole first stage. Then we should substitute $v=v_*$ in Eq. (\ref{fv0}), where $v_*$ is the value of $v$ at the end of the first stage. The initial substage (where $v\gtrsim 1$) gives a constant of order unity in the integral on the left-hand side of Eq. (\ref{fv0}) since $G(x)\approx x$ there. We neglect this contribution, substituting $v\sim 1$ as the lower limit in the integral. Then the integral $\int {\rm d}x/x$ produces just $\ln v$, which is canceled by the corresponding term on the right-hand side.
Next, at the boundary between the first and the second stages $R_-\sim L$ since the pumping enters the game there. Therefore, with logarithmic accuracy one can write \begin{eqnarray} && -\int^{v_*}_1\,\frac{{\rm d}x}{G(x)}= \gamma\ln\left(\frac{L}{r}\right) \,. \label{log}\end{eqnarray} We see that there is a large parameter $L/r$ in the argument of the logarithm on the right-hand side of Eq. (\ref{log}). An analysis shows that due to this large parameter there are only two possibilities to satisfy the relation (\ref{log}). Both of them are related to zeros of the function $G$ because only near the points where $G$ is small can the integral reach a large value. The first possibility is realized when $G$ is zero only at $v=0$. In this case $v_*\ll1$ and $v$ is a monotonically decreasing function. The second possibility is that $G$ has a zero at some point $v=v_r$. That is just the reverse point where the derivative ${\rm d}v/{\rm d}\xi$ changes its sign; see Eq. (\ref{vvv}). Then, the integral on the left-hand side of Eq. (\ref{log}) is determined by the vicinity of this point since $G$ is small there. The choice between the possibilities depends on the value of $H_1$. If $H_1>H_c$, where \begin{eqnarray} && H_c=-\frac{{d}\gamma^2}{8(2-\gamma)} \,, \label{kn1} \end{eqnarray} then $G$ cannot be zero (except for the point $v=0$). Then the integral on the left-hand side of Eq. (\ref{log}) reaches its large value at $v\ll1$. Substituting into Eq. (\ref{nn9}) the asymptotic expression \begin{eqnarray}&& \phi\approx\frac{2(2-\gamma)}{\gamma^2}v^2 \,, \label{mk1}\end{eqnarray} valid at $v\ll1$, we can calculate the integral on the left-hand side of Eq. (\ref{log}) with logarithmic accuracy and find \begin{eqnarray} \ln v_*={\gamma\sqrt{1-H_1/H_c}}\, \ln\left(\frac{r}{L}\right) \,. \label{aaf}\end{eqnarray} We see that due to $r\ll L$, indeed $v_*\ll1$. In the opposite case $H_1<H_c$ the situation is more complicated.
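The square root in Eq. (\ref{aaf}) originates from the small-$v$ form of $G$: substituting (\ref{mk1}) into (\ref{nn9}) and using (\ref{kn1}) gives $G\approx v\sqrt{1-H_1/H_c}$, so that $-\int {\rm d}x/G\approx-\ln v_*/\sqrt{1-H_1/H_c}$ and Eq. (\ref{log}) yields (\ref{aaf}). A numerical sketch of these two asymptotic relations (the parameter values are illustrative):

```python
import math

gamma, d = 0.8, 50.0                         # illustrative
Hc = -d * gamma ** 2 / (8 * (2 - gamma))     # Eq. (kn1)
H1 = 0.5 * Hc                                # some H1 > H_c

def phi(v):
    # Eq. (phi)
    return ((1 + v) ** (2 - 4 / gamma)
            * ((1 + v) ** (2 / gamma - 1) - 1)
            * ((1 + v) ** (2 / gamma) - 1))

v = 1e-4
phi_asym = 2 * (2 - gamma) / gamma ** 2 * v * v          # Eq. (mk1)
G2_over_v2 = 1 + 4 * H1 * phi(v) / (d * v * v)           # G^2/v^2 from Eq. (nn9)
print(phi(v) / phi_asym, G2_over_v2 / (1 - H1 / Hc))     # both ratios close to 1
```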
From the asymptotic expression \begin{equation} G^2(v)\approx\left[ -\frac{8(2-\gamma)}{{d}\gamma^2}(H_c-H_1)+ \frac{4-\gamma}{2\gamma}v\right]v^2 \,, \label{kn2}\end{equation} valid at $v\ll1$, we see that $G$ is zero at $v=v_r$, where \begin{equation} v_r=\frac{16(2-\gamma)}{{d}\gamma(4-\gamma)}(H_c-H_1) \,. \label{kn3}\end{equation} It is just the reverse point where the derivative ${\rm d}v/{\rm d}\xi$ changes its sign. Therefore, the sign of $G$ is positive during the initial part of the first period and negative during the final one. Thus we should take the upper sign in Eq. (\ref{nn9}) for the first part and the lower sign for the second part. The main contribution to the left-hand side of Eq. (\ref{log}) is determined by the region near the reverse point, $v-v_r\sim v_r$, where we can use the expression (\ref{kn2}). The explicit integration gives \begin{equation} \sqrt{\frac{{d}}{2(2-\gamma)}}\frac{\pi}{\sqrt{H_c-H_1}} =\gamma\ln\frac{L}{r} \,. \label{kn5}\end{equation} Since the logarithm is large, $H_1$ is close to $H_c$ and hence $v_r\ll1$, as we implicitly assumed in the expression (\ref{kn2}). Note that Eq. (\ref{kn5}) does not fix the value of $v_*$, in contrast to the case $H_1>H_c$. Now, we should extract additional relations that, along with Eq. (\ref{aaf}) or Eq. (\ref{kn5}), will fix the instantonic solution and determine the final answer for the structure functions. This can be done by establishing the evolution during the second stage and by its subsequent matching with the first stage. Unfortunately, the procedure is rather lengthy and is specific to each particular case. We present the calculations in Appendix \ref{solv}. \subsection{Expressions for structure functions} \label{results} Based on the reasoning given in the preceding subsection and on the calculations described in Appendix \ref{solv}, one can establish expressions for the structure functions from the relation (\ref{basic}).
Here we enumerate the basic results, referring the reader interested in technical details to Appendix \ref{solv}. The case $H_1>H_c$ is realized if $n<n_c$ (see Appendix \ref{intermt}), where \begin{equation} n_c=\frac{{d}\gamma}{2(2-\gamma)} \,. \label{crit} \end{equation} Calculating the action ${\cal I}$ and $\vartheta$ (see Appendix \ref{intermt}) and substituting the result into Eq. (\ref{basic}), we obtain \begin{eqnarray} && S_n\sim\left(\frac{n}{\gamma}\frac{P_2 C_1}{D} L^\gamma\right)^{n/2}\left(\frac{r}{L}\right)^{\zeta_n} \,, \label{mk11} \\ && \zeta_n=\frac{n\gamma}{2}-\frac{(2-\gamma)n^2}{2{d}} \,. \label{mk12} \end{eqnarray} The quantity $C_1$ in the expression (\ref{mk11}) is a constant of order unity, whose value depends on the shape of $\chi$ (that is, on the details of the pumping) and is consequently nonuniversal. Note that the $r$-independent factor in Eq. (\ref{mk11}) is determined by the single-point root-mean-square value of the passive scalar \begin{equation} \theta_{\rm rms}^2\sim \frac{P_2}{D\gamma} L^\gamma \,. \label{rms} \end{equation} Comparing the expression (\ref{mk12}) with Eq. (\ref{pert}), we see that they coincide under the conditions $n\gg1$ and ${d}\gg1$ that were implied in our derivation. Surprisingly, the $n$ dependence of $\zeta_n$ given by Eq. (\ref{pert}) is correct not only in the limit (\ref{pert1}) (that is, for $n\ll n_c$), but up to $n=n_c$, which is the boundary value for Eqs. (\ref{mk11}) and (\ref{mk12}). A detailed consideration of the case $H_1<H_c$ is presented in Appendix \ref{remote}. It shows that this possibility is realized at $n>n_c$. Then the scaling exponents $\zeta_n$ appear to be $n$ independent and equal to the value \begin{equation} \zeta_c=\frac{{d}\gamma^2}{8(2-\gamma)}. \label{zc} \end{equation} The $n$-dependent numerical factors in $S_n$ can be found in two limits: $n-n_c\ll n_c$ and $n\gg n_c$.
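A simple consistency check on Eqs. (\ref{mk12}), (\ref{crit}), and (\ref{zc}): the quadratic branch $\zeta_n=n\gamma/2-(2-\gamma)n^2/(2{d})$ attains its maximum exactly at $n=n_c$, where its value equals the saturation level $\zeta_c$, so $\zeta_n$ is continuous and has a continuous first derivative at the critical order:

```python
checks = []
for gamma, d in [(0.5, 30.0), (1.0, 100.0), (1.5, 12.0)]:
    n_c = d * gamma / (2 * (2 - gamma))                            # Eq. (crit)
    zeta_nc = n_c * gamma / 2 - (2 - gamma) * n_c ** 2 / (2 * d)   # Eq. (mk12) at n = n_c
    zeta_c = d * gamma ** 2 / (8 * (2 - gamma))                    # Eq. (zc)
    slope = gamma / 2 - (2 - gamma) * n_c / d                      # d(zeta_n)/dn at n = n_c
    checks.append((zeta_nc - zeta_c, slope))
print(checks)   # all entries vanish
```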
The former case is discussed below, while in the latter case one can obtain (see Appendix \ref{remote}) \begin{eqnarray} && S_n\sim\left(\frac{n}{\gamma}\frac{P_2 C_2}{D} L^\gamma\right)^{n/2}\left(\frac{r}{L}\right)^{\zeta_c} \,. \label{kkn11} \end{eqnarray} The quantity $C_2$ in Eq. (\ref{kkn11}) is again a non-universal constant of order unity. The expression (\ref{kkn11}) corresponds to the factorized Gaussian PDF \begin{equation} {\cal P}(\vartheta)\sim \left(\frac{r}{L}\right)^{\zeta_c} \exp\left(-\frac{\gamma D\vartheta^2}{2C_2P_2L^\gamma}\right)\,. \label{remo} \end{equation} Let us stress that when calculating $S_n\approx\langle|\vartheta|^n\rangle$ with the help of the PDF (\ref{remo}), the characteristic $\vartheta$ is of the order of the single-point root-mean-square value of the passive scalar (\ref{rms}) and the relatively small value of the result (\ref{kkn11}) compared to a single-point value is ensured only by the small $r$-dependent factor in Eq. (\ref{remo}). In Appendix \ref{special} we establish the inequality \begin{equation} \ln\frac{n}{d}<\gamma\ln\frac{L}{r} \,, \label{jkn12} \end{equation} which restricts the region where expression (\ref{kkn11}) is correct. For larger $n$ the character of the PDF essentially changes and it tends to a single-point PDF that is similar to Eq. (\ref{remo}) but does not contain the $r$-dependent factor. Note that the cases $\gamma\ll1$ and $2-\gamma\ll1$ need a special analysis which is performed in Appendix \ref{special}. The answer (\ref{kkn11}) should be slightly corrected in the case $\gamma\ll1$ and keeps its form at $2-\gamma\ll 1$. We can treat the structure function $S_n$ as a continuous function of $n$. Then the vicinity of the critical value $n=n_c$ requires a separate consideration which is presented in Appendix \ref{critic}. The main peculiarity that appears in the expressions for the structure functions is a critical dependence on $n$. 
The expression for the structure functions can be written as \begin{equation} S_n\sim\left[\frac{(n-n_c)^2}{\gamma n_c}\frac{P_2 C_\pm}{D} L^\gamma\right]^{n_c/2}\left(\frac{r}{L}\right)^{\zeta_n} \,, \label{strcr} \end{equation} which implies the condition $|n-n_c|\ll n_c$. The factors $C_\pm$ are non-universal constants of order unity which are different for the cases $n<n_c$ and $n>n_c$. The exponents $\zeta_n$ in the expression (\ref{strcr}) are determined by Eq. (\ref{mk12}) if $n<n_c$ and $\zeta_n=\zeta_c$ [Eq. (\ref{zc})] if $n>n_c$. In the consideration above we assumed that $r/L$ is the smallest parameter of our theory. However, if $n\to n_c$, then $|n_c-n|$ starts to compete with $r/L$, and at small enough $|n_c-n|$ the consideration presented in Appendix \ref{critic} is inapplicable. The criterion that determines the validity of Eq. (\ref{strcr}) is established in Appendix \ref{critic}: \begin{equation} \gamma\ln\frac{L}{r}\gg\frac{n_c}{|n-n_c|} \,. \label{crit1} \end{equation} We see that the first factor in Eq. (\ref{strcr}) possesses critical behavior proportional to $|n-n_c|^{n_c}$, which is saturated in the narrow vicinity of $n=n_c$ where the condition (\ref{crit1}) is violated. To avoid a misunderstanding, let us stress that despite the critical behavior, $S_n$ remains a monotonically increasing function of $n$ at a fixed $L/r$. This is obvious for $n>n_c$, whereas for $n<n_c$ it is due to the stronger $n$ dependence of the second ($r$-dependent) factor in Eq. (\ref{strcr}), which is guaranteed by the inequality (\ref{crit1}). We presented the results of an analysis based on the saddle-point approximation. Taking into account fluctuations on the background of our instanton could, in principle, change the results; in particular, the value of $\zeta_n$ could increase. Therefore, one should estimate the role of the fluctuations. The corresponding analysis is presented in Appendix \ref{fluctu}.
It shows that under the condition (\ref{ineq}) fluctuation effects are weak and cannot essentially change the results obtained. \section*{Conclusion} We have performed an investigation of the structure functions in the Kraichnan model in the framework of the instantonic formalism. Though our approach is correct only for large dimensionalities of space, we observe a nontrivial picture, some peculiarities of which could be realized in a wider context. Below we discuss the results obtained. We have established the $n$ dependence of the scaling exponents, which are determined by the expression (\ref{mk12}) for $n<n_c$ and remain equal to the constant (\ref{zc}) for $n>n_c$, where $n_c$ is defined by Eq. (\ref{crit}). Our results contradict the schemes proposed in \cite{closure,97Yak}. The value (\ref{zc}) is different from and smaller than the constant obtained in \cite{97Chert}, which can indeed be regarded as an estimate from above. For $n\ll n_c$ our expression coincides with the answer obtained perturbatively \cite{95CFKLb,95CF} at large ${d}$. Surprisingly, the quadratic dependence of $\zeta_n$ on $n$ persists up to $n=n_c$. Such an $n$ dependence of $\zeta_n$ is well known from the so-called log-normal distribution proposed by Kolmogorov \cite{62Kol}. The expressions (\ref{mk11}) and (\ref{kkn11}) reveal combinatoric prefactors in $S_n$ that are characteristic of a Gaussian distribution. A natural explanation can be found in terms of the zero-mode ideology \cite{95SS,95GK,95CFKLb,95CF,96BGK,98BGK}. We know that for $n>2$ the main contribution to the structure function $S_n$ in the convective interval is related to zero modes of the equation for the $n$th-order correlation function of the passive scalar.
The exponents of the modes are determined by this equation (and could be very sensitive to the value of $n$), whereas the numerical coefficients in front of the modes (which determine their contribution to $S_n$) have to be extracted from matching at the pumping scale, where the statistics of the passive scalar is nearly Gaussian. This explains the combinatoric prefactors in Eqs. (\ref{mk11}) and (\ref{kkn11}). Probably the most striking feature of our results is the unusual behavior of $S_n$ (treated as continuous functions of $n$) near $n=n_c$, which is determined by the expression (\ref{strcr}). Now we briefly discuss the interpretation of our results. The log-normal answer (\ref{mk11}) and (\ref{mk12}) can be obtained if we accept that for the large fluctuations giving the main contribution to the structure function $S_n$, the pumping is inessential and the fluctuation is smooth on the scale $r$. Then one obtains from Eq. (\ref{adv}) the equation for the passive scalar difference taken at the separation ${\bbox r}$, \begin{equation} \partial_t\ln(\Delta\theta)=-{\bbox v}\cdot{\bbox r}/r^2 \,, \nonumber\end{equation} where we substituted $\nabla\theta$ by $(\Delta\theta)/r$. We immediately get from this equation log-normal statistics for $\Delta\theta$, as a consequence of the central limit theorem. The saturation at $n>n_c$ can be explained by the presence of quasidiscontinuous structures in the field $\theta$ making the main contribution to the high-order correlation functions of $\theta$. Note also the similar non-analytic behavior of $\zeta_n$ in Burgers turbulence \cite{Frish}, which is explained by the presence of shocks in the velocity field. Although formally our scheme is applicable only in the limit $d\gamma\gg1$, one can hope that the main features of our results persist for arbitrary values of the parameters. This hope is supported by \cite{98FMVa}, where a saturation of $\zeta_n$ was observed in numerical simulations of the Kraichnan model at ${d}=3$.
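The quadratic-in-$n$ exponents of a log-normal law can be checked numerically: if $\ln\Delta\theta\sim N(\mu,\sigma^2)$, then $E[(\Delta\theta)^n]=\exp(n\mu+n^2\sigma^2/2)$, an exponent quadratic in $n$ as discussed above. A minimal sketch, with illustrative values of $\mu$ and $\sigma$ (not quantities from this paper):

```python
import numpy as np

# If ln(Delta theta) ~ N(mu, sigma^2), then E[(Delta theta)^n]
# = exp(n*mu + n^2*sigma^2 / 2): a quadratic-in-n exponent, the
# log-normal shape discussed above.  mu, sigma are illustrative.
mu, sigma = -1.0, 0.5

def lognormal_moment(n, mu, sigma, npts=200_001, width=12.0):
    """E[exp(n X)] for X ~ N(mu, sigma^2), by quadrature on a wide grid."""
    x = np.linspace(mu - width * sigma, mu + width * sigma, npts)
    gauss = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(np.exp(n * x) * gauss) * (x[1] - x[0]))

for n in (1, 2, 4, 8):
    exact = np.exp(n * mu + n ** 2 * sigma ** 2 / 2)
    assert abs(lognormal_moment(n, mu, sigma) / exact - 1) < 1e-4
```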
\acknowledgements We are grateful to G. Falkovich for valuable remarks, and to M. Chertkov, R. Kraichnan, D. Khmelnitskii, and M. Vergassola for useful discussions. E. B. acknowledges support by the Joseph Meyerhoff Foundation and a grant from the Israel Science Foundation. V. L. acknowledges support from the Einstein Center, from the Minerva Center for Nonlinear Physics at the Weizmann Institute, and from the ENS-Landau Institute Twinning Programme. \end{multicols}
\section{Introduction}\label{intro} Stochastic approximation algorithms are sequential non-parametric methods for finding a zero or minimum of a function in the situation where only noisy observations of the function values are available. Two time-scale stochastic approximation algorithms represent one of the most general subclasses of stochastic approximation methods. These algorithms consist of two coupled recursions updated with different step sizes, one considerably smaller than the other; this separation of time scales is what facilitates their convergence. Two time-scale stochastic approximation algorithms \cite{borkartt} have successfully been applied to several complex problems arising in the areas of reinforcement learning, signal processing and admission control in communication networks. There are many reinforcement learning applications (precisely those where parameterization of the value function is implemented) where non-additive Markov noise is present in one or both iterates, thus requiring the current two time-scale framework to be extended to include Markov noise (for example, in \cite[p.~5]{sutton2} it is mentioned that in order to generalize the analysis to Markov noise, the theory of two time-scale stochastic approximation needs to include the latter). Here we present a more general framework of two time-scale stochastic approximation with ``controlled'' Markov noise, i.e., the noise is not simply Markov; rather, it is driven by the iterates and an additional control process as well. We analyze the asymptotic behaviour of our framework by relating it to limiting differential inclusions in both timescales that are defined in terms of the ergodic occupation measures associated with the controlled Markov processes.
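The two coupled recursions with separated step sizes can be sketched numerically. A minimal toy example, with drift functions and step sizes of our own choosing (not part of the framework analyzed in this paper): the fast iterate $w_n$ tracks $\lambda(\theta)=\theta$, after which the slow iterate $\theta_n$ follows the o.d.e. $\dot{\theta}=-\theta$ to the attractor $0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal two time-scale recursion sketch (illustrative choices, not the
# general framework of the paper): the fast iterate w tracks
# lambda(theta) = theta (the root of g(theta, .)), and the slow iterate
# theta then follows the ODE theta' = -theta toward the attractor 0.
theta, w = 5.0, -3.0
N = 200_000
for n in range(N):
    a = 1.0 / (1 + n)          # slow step size a(n)
    b = 1.0 / (1 + n) ** 0.6   # fast step size b(n); a(n)/b(n) -> 0
    h = -w                     # slow drift, uses the fast iterate
    g = theta - w              # fast drift, zero at w = lambda(theta) = theta
    theta += a * (h + 0.1 * rng.standard_normal())  # martingale noise
    w += b * (g + 0.1 * rng.standard_normal())

assert abs(w - theta) < 0.05   # fast iterate has tracked lambda(theta)
assert abs(theta) < 0.05       # slow iterate has reached the attractor 0
```

Note that $\sum_n a(n)=\sum_n b(n)=\infty$, $\sum_n (a(n)^2+b(n)^2)<\infty$ and $a(n)/b(n)\to 0$, matching the standard step-size conditions.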
Next, using these results for the special case of our framework where the random processes are irreducible Markov chains, we present a solution to the off-policy convergence problem for temporal difference learning with linear function approximation. While the off-policy convergence problem for reinforcement learning (RL) with linear function approximation has been one of the most interesting problems, there are very few solutions available in the current literature. One such work \cite{yu} shows the convergence of the least squares temporal difference learning algorithm with eligibility traces (LSTD($\lambda$)) as well as the TD($\lambda$) algorithm. While the LSTD methods are not feasible when the dimension of the feature vector is large, off-policy TD($\lambda$) is shown to converge only when the eligibility trace parameter $\lambda \in [0,1]$ is very close to 1. Another recent work \cite{yu_new} proves weak convergence of several emphatic temporal difference learning algorithms, which are also designed to solve the off-policy convergence problem. In \cite{sutton1,sutton,maeith} the gradient temporal difference learning (GTD) algorithms were proposed to solve this problem. However, the authors make the assumption that the data is available in the ``off-policy'' setting (i.e., the off-policy issue is incorporated into the data rather than in the algorithm) whereas, in reality, one has only the ``on-policy'' Markov trajectory corresponding to a given behaviour policy and we are interested in designing an online learning algorithm. We use one of the algorithms from \cite{maeith}, called TDC with ``importance-weighting'', which takes the ``on-policy'' data as input, and show its convergence using the results we develop. Our convergence analysis can also be extended for the same algorithm with eligibility traces for a sufficiently large range of values of $\lambda$.
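For intuition only, here is a minimal sketch of importance weighting on ``on-policy'' data. This is plain tabular off-policy TD(0) with the importance-sampling ratio $\rho=\pi(a|s)/b(a|s)$, not the TDC algorithm of \cite{maeith}; the two-state MDP, the policies, and the step sizes are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch (not the paper's TDC algorithm): tabular off-policy
# TD(0) driven by "on-policy" data from a behaviour policy, corrected by
# the importance-sampling ratio rho = pi(a|s) / b(a|s).  The 2-state MDP,
# the policies and the step sizes are toy choices of ours.
gamma = 0.5
pi_R, b_R = 0.9, 0.5          # target / behaviour probability of action "R"
V = np.zeros(2)
s = 0
for n in range(300_000):
    took_R = rng.random() < b_R               # action sampled from behaviour policy
    rho = (pi_R / b_R) if took_R else ((1 - pi_R) / (1 - b_R))
    s_next = 1 - s if took_R else s           # "R" flips the state, "L" stays
    r = 1.0 if s_next == 1 else 0.0
    alpha = 200.0 / (1000.0 + n)
    V[s] += alpha * rho * (r + gamma * V[s_next] - V[s])
    s = s_next

# True V^pi from the Bellman equation (I - gamma * P_pi) V = r_pi
P_pi = np.array([[0.1, 0.9], [0.9, 0.1]])
r_pi = P_pi[:, 1]                             # expected reward = P(next state is 1)
V_true = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
assert np.max(np.abs(V - V_true)) < 0.2
```

In the tabular case the importance-weighted expected update is the Bellman operator of the target policy, so the iterates converge to $V^\pi$ even though all data comes from the behaviour policy.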
Our results can be used to provide a convergence analysis for reinforcement learning algorithms such as those in \cite{mannor} for which convergence proofs have not been provided. To the best of our knowledge, in related works such as \cite{Tadic, konda, konda_actor, tadic_new}, two time-scale stochastic approximation algorithms with iterate-dependent non-additive Markov noise are analyzed. In all of them the Markov noise in the recursion is handled using the classic Poisson equation based approach of \cite{benveniste, metivier}, which has been applied to the asymptotic analysis of many algorithms used in machine learning, system identification, signal processing, image analysis and automatic control. However, we show that our method also works if there is an additional control process and if the underlying Markov process has non-unique stationary distributions. Further, the mentioned application does not require strong assumptions such as aperiodicity for the underlying Markov chain, which is a sufficient condition needed by the Poisson equation based approach \cite{adam, Tadic}. Additionally, our assumptions are quite different from the assumptions made in the mentioned literature, and we give a detailed comparison in Section \ref{defn}. \begin{comment} However, the assumptions made there are not verifiable as the problem lies in the way non-additive Markov noise is handled. For example, consider the single time-scale controlled Markov noise recursion: \begin{equation} \theta_{n+1} = \theta_n + a(n)F(\theta_n, \eta_{n+1}),\nonumber \end{equation} where $\{\eta_n\}$ is controlled Markov with the control process being $\{\theta_n\}$. It is assumed that there exists an $\tilde{F}(.,.)$ and $f(\cdot)$ such that \begin{equation} \tilde{F}(\theta_n,\eta_{n+1}) - E[\tilde{F}(\theta_n,\eta_{n+1})\nobreak|\mathcal{F}_n] = F(\theta_n,\eta_{n+1})- f(\theta_n)\nonumber \end{equation} where $\mathcal{F}_n = \sigma(\theta_m, \eta_{m}, m \leq n)$.
The above iteration can then be cast in the usual stochastic approximation setting with $M_{n+1} = F(\theta_n, \eta_{n+1})-f(\theta_n)$ being the martingale difference sequence with filtration $\mathcal{F}_n = \sigma(\theta_m, \eta_{m}, m \leq n)$. However, it is not clear how to find such an $f$ and $\tilde{F}(.,.)$ in general. Thus, the Markov noise problem is only done away with by recasting the same in the usual stochastic approximation framework by imposing extra assumptions. \end{comment} The organization of the paper is as follows: Section \ref{sec_def} formally defines the problem and provides background and assumptions. Section \ref{mres} shows the main results. Section \ref{relax} discusses how one of our assumptions of Section \ref{sec_def} can be relaxed. Section \ref{app} presents an application of our results to the off-policy convergence problem for temporal difference learning with linear function approximation. Finally, we conclude by providing some future research directions. \section{Background, Problem Definition, and Assumptions} \label{sec_def} In the following we describe the preliminaries and notation used in our proofs. Most of the definitions and notation are from \cite{benaim,borkar1,Aubin}. \subsection{\textit{Definition and Notation}} Let $F$ denote a set-valued function mapping each point $\theta \in \mathbb{R}^m$ to a set $F(\theta) \subset \mathbb{R}^m$. $F$ is called a \textit{Marchaud map} if the following hold: \begin{enumerate}[label=(\roman*)] \item $F$ is \textit{upper-semicontinuous} in the sense that if $\theta_n \to \theta$ and $w_n \to w$ with $w_n \in F(\theta_n)$ for all $n\geq 1$, then $w \in F(\theta)$. In other words, the graph of $F$, defined as $\{(\theta,w): w \in F(\theta)\}$, is closed. \item $F(\theta)$ is a non-empty compact convex subset of $\mathbb{R}^m$ for all $\theta \in \mathbb{R}^m$.
\item $\exists c >0$ such that for all $\theta \in \mathbb{R}^m$, \begin{equation} \sup_{z\in F(\theta)} \|z\| \leq c(1+\|\theta\|),\nonumber \end{equation} where $\|.\|$ denotes any norm on $\mathbb{R}^m$. \end{enumerate} \textit{A solution for the differential inclusion (d.i.)} \begin{equation} \label{diffin} \dot{\theta}(t) \in F(\theta(t)) \end{equation} with initial point $ \theta_0 \in \mathbb{R}^m$ is an absolutely continuous (on compacts) mapping $\theta : \mathbb{R} \to \mathbb{R}^m$ such that $\theta(0) =\theta_0$ and \begin{equation} \dot{\theta}(t) \in F(\theta(t))\nonumber \end{equation} for almost every $t \in \mathbb{R}$. If $F$ is a Marchaud map, it is well-known that (\ref{diffin}) has solutions (possibly non-unique) through every initial point. The differential inclusion (\ref{diffin}) induces a \textit{set-valued dynamical system} $\{\Phi_t\}_{t\in \mathbb{R}}$ defined by \begin{equation} \Phi_t(\theta_0) = \{\theta(t) : \theta(\cdot) \mbox{ is a solution to (\ref{diffin}) with $\theta(0) =\theta_0$}\}.\nonumber \end{equation} \indent Consider the autonomous ordinary differential equation (o.d.e.) \begin{equation} \label{ode1} \dot{\theta}(t)=h(\theta(t)), \end{equation} where $h$ is Lipschitz continuous. One can write (\ref{ode1}) in the format of (\ref{diffin}) by taking $F(\theta)=\{h(\theta)\}$. It is well-known that (\ref{ode1}) is well-posed, i.e., it has a \textit{unique solution} for every initial point. Hence the set-valued dynamical system induced by the o.d.e. or \textit{flow} is $\{\Phi_t\}_{t\in \mathbb{R}}$ with \begin{equation} \Phi_t(\theta_0) = \{\theta(t)\},\nonumber \end{equation} where $\theta(\cdot)$ is the solution to (\ref{ode1}) with $\theta(0) =\theta_0$. It is also well-known that $\Phi_t(.)$ is a \textit{continuous function} for all $t \in \mathbb{R}$. 
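A solution of the d.i. (\ref{diffin}) can be approximated by an explicit Euler scheme that selects one element of $F$ at each step. A toy sketch with $F(x)=\{-\operatorname{sign}(x)\}$ for $x\neq 0$ and $F(0)=[-1,1]$ (our example, not one from the paper); this $F$ is a Marchaud map, and every solution reaches $0$ in finite time and stays there.

```python
# Explicit Euler sketch for the differential inclusion  x'(t) in F(x(t))
# with F(x) = {-1} for x > 0, {+1} for x < 0 and [-1, 1] at x = 0
# (a toy example).  At each step one element of F(x) is selected.
def euler_di(x0, dt=1e-3, T=3.0):
    x, traj = x0, [x0]
    for _ in range(int(T / dt)):
        if x > 0:
            v = -1.0
        elif x < 0:
            v = 1.0
        else:
            v = 0.0          # any selection from [-1, 1]; 0 keeps x at rest
        x += dt * v
        traj.append(x)
    return traj

traj = euler_di(1.5)
# After time |x0| = 1.5 the discrete trajectory has reached the attractor
# {0} and chatters within one step size of it.
assert all(abs(x) <= 1e-3 + 1e-12 for x in traj[2000:])
```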
\\ \indent A set $A \subset \mathbb{R}^m$ is said to be \textit{invariant} (for $F$) if for all $\theta_0\in A$ there exists a solution $\theta(\cdot)$ of (\ref{diffin}) with $\theta(0) = \theta_0$ such that $\theta(\mathbb{R}) \subset A$. \\ \indent Given a set $A \subset \mathbb{R}^m$ and $\theta'',w''\in A$, we write $\theta''\hookrightarrow_A w''$ if for every $\epsilon > 0$ and $T>0$ $\exists n \in \mathbb{N}$, solutions $\theta_1(\cdot), \dots, \theta_n(\cdot)$ to (\ref{diffin}) and real numbers $t_1, t_2, \dots, t_n$ greater than $T$ such that \begin{enumerate}[label=(\roman*)] \item $\theta_i(s) \in A$ for all $0 \leq s \leq t_i$ and for all $i=1, \dots, n,$ \item $\|\theta_i(t_i) - \theta_{i+1}(0)\| \leq \epsilon$ for all $i=1, \dots, n-1,$ \item $\|\theta_1(0) - \theta''\| \leq \epsilon$ and $\|\theta_n(t_n) - w''\| \leq \epsilon.$ \end{enumerate} The sequence $(\theta_1(\cdot), \dots, \theta_n(\cdot))$ is called an $(\epsilon, T)$ chain (in $A$ from $\theta''$ to $w''$) for $F$. A set $A \subset \mathbb{R}^m$ is said to be \textit{internally chain transitive}, provided that $A$ is compact and $\theta'' \hookrightarrow_A w''$ for all $\theta'',w''\in A$. It can be proved that in the above case, $A$ is an invariant set. \\ \indent A compact invariant set $A$ is called an \textit{attractor} for $\Phi$, provided that there is a neighbourhood $U$ of $A$ (i.e., for the induced topology) with the property that $d(\Phi_t(\theta''), A) \to 0$ as $t\to \infty$ \textit{uniformly} in $\theta'' \in U$. Here $d(X, Y) = \sup_{\theta'' \in X}\inf_{w'' \in Y}\|\theta''-w''\|$ for $X,Y \subset \mathbb{R}^m.$ Such a $U$ is called a \textit{fundamental neighbourhood} of the attractor $A$. \textit{An attractor of a well-posed o.d.e.} is an attractor for the set-valued dynamical system induced by the o.d.e. 
\\ \indent The set \begin{equation} \omega_\Phi(\theta'') = \bigcap_{t \geq 0} \overline{\Phi_{[t, \infty)}(\theta'')} \nonumber \end{equation} is called the \textit{$\omega$-limit} set of a point $\theta'' \in \mathbb{R}^m$. If $A$ is a set, then \begin{equation} B(A) = \{\theta'' \in \mathbb{R}^m : \omega_\Phi(\theta'') \subset A\} \nonumber \end{equation} denotes its \textit{basin of attraction}. A \textit{global attractor} for $\Phi$ is an attractor $A$ whose basin of attraction is all of $\mathbb{R}^m$. The following lemma will be useful in our proofs; see \cite{benaim} for a proof. \begin{lem} \label{ga} Suppose $\Phi$ has a global attractor $A$. Then every internally chain transitive set lies in $A$. \end{lem} We also require another result, which will be useful in applying our results to the RL application mentioned above. Before stating it we recall some definitions from Appendix 11.2.3 of \cite{borkar1}: \\ \indent A point $\theta^*\in \mathbb{R}^m$ is called \textit{Lyapunov stable} for the o.d.e. (\ref{ode1}) if for all $\epsilon >0$, there exists a $\delta >0$ such that every trajectory of (\ref{ode1}) initiated in the $\delta$-neighbourhood of $\theta^*$ remains in its $\epsilon$-neighbourhood. $\theta^*$ is called \textit{globally asymptotically stable} if $\theta^*$ is Lyapunov stable and \textit{all} trajectories of the o.d.e. converge to it. \begin{lem} \label{ga2} Consider the autonomous o.d.e. $\dot{\theta}(t)=h(\theta(t))$ where $h$ is Lipschitz continuous. Let $\theta^*$ be globally asymptotically stable. Then $\theta^*$ is the global attractor of the o.d.e. \end{lem} \begin{pf} We refer the readers to Lemma~1 of \cite[Chapter 3]{borkar1} for a proof. \end{pf} We end this subsection with a notation which will be used frequently in the convergence statements in the following sections.
\begin{defn} For function $\theta(.)$ defined on $[0, \infty)$, the notation ``$\theta(t) \to A$ as $t \to \infty$'' means that $\cap_{t \geq 0} \overline{\{\theta(s):s \geq t\}} \subset A$. Similar definition applies for a sequence $\{\theta_n\}$. \end{defn} \begin{comment} \begin{lem} \label{markov} Let $U_n$ be an ergodic Markov Chain with stationary distribution $\mathbf{\pi}$ and $V_n$ be a sequence of random variables such that $V_{n} | U_n = q(.|.)$ with all the entries $q(.|.) > 0$. Then $(U_n, V_{n})$ is an ergodic Markov chain. \end{lem} \begin{pf} \begin{align} &P(U_{n+1} = u_{n+1}, V_{n+1} = v_{n+1} | U_{0} = u_{0}, V_{0} = v_{0}, \dots, U_{n} = u_{n}, V_{n} = v_{n})\nonumber\\ &= P(U_{n+1} = u_{n+1} | U_{0} = u_{0}, V_{0} = v_{0}, \dots, U_{n} = u_{n}, V_{n} = v_{n})\nonumber\\ & \cdot P(V_{n+1} = v_{n+1} | U_{0} = u_{0}, V_{0} = v_{0}, \dots, U_{n} = u_{n}, V_{n} = v_{n}, U_{n+1} = u_{n+1})\nonumber\\ &=P(U_{n+1} = u_{n+1}|U_{n} = u_{n}, V_{n} = v_{n})P(V_{n+1} = v_{n+1} | U_{n+1} = u_{n+1}, U_{n} = u_{n}, V_{n} = v_{n})\nonumber\\ &= P(U_{n+1} = u_{n+1}, V_{n+1} = v_{n+1} | U_{n} = u_{n}, V_{n} = v_{n}) \end{align} \end{pf} Hence $(U_n, V_n)$ is a Markov chain. Clearly, $\forall i, j$ \begin{align} &P(U_{n} = i, V_{n} = j | U_{0} = i', V_{0} = j')\nonumber\\ &=P(U_{n} = i | U_{0} = i', V_{0} = j') P(V_{n} = j | U_{n} = i)\nonumber\\ &=P(U_{n} = i | U_{0} = i')q(j|i) > 0 \end{align} when $P(U_{n} = i | U_{0} = i') >0$. Hence $(U_n, V_n)$ is irreducible. Similarly, it can be shown to be aperiodic. Note that $(U_n, V_n)$ has an invariant distribution $\mathbf{r}$ such that $r_{ij} = \pi_i q(j|i)$. Hence, it is positive recurrent, ergodic and the unique invariant distribution is $\mathbf{r}$. 
\end{comment} \subsection{\textit{Problem Definition}} \label{defn} Our goal is to perform an asymptotic analysis of the following coupled recursions: \begin{eqnarray} \theta_{n+1} &= \theta_n + a(n)\left[h(\theta_n, w_n, Z^{(1)}_n) + M^{(1)}_{n+1}\right],\label{eqn1}\\ w_{n+1} &= w_n + b(n)\left[g(\theta_n, w_n, Z^{(2)}_n) + M^{(2)}_{n+1}\right],\label{eqn2} \end{eqnarray} where $\theta_n \in \mathbb{R}^d, w_n \in \mathbb{R}^k, n\geq 0$ and $\{Z^{(i)}_n\}, \{M^{(i)}_{n}\}, i=1, 2$ are random processes that we describe below. \\ \indent We make the following assumptions: \begin{enumerate}[label=\textbf{(A\arabic*)}] \item $\{Z^{(i)}_n\}$ takes values in a compact metric space $S^{(i)}, i=1,2$. Additionally, the processes $\{Z^{(i)}_n\}, i = 1, 2$ are controlled Markov processes that are controlled by three different control processes: the iterate sequences $\{\theta_m\}, \{w_m\}$ and a random process $\{A^{(i)}_n\}$ taking values in a compact metric space $U^{(i)}$ respectively with their individual dynamics specified by \begin{equation} P(Z^{(i)}_{n+1} \in B^{(i)} |Z^{(i)}_m, A^{(i)}_m, \theta_m, w_m, m\leq n) = \int_{B^{(i)}} p^{(i)}(dy|Z^{(i)}_n, A^{(i)}_n, \theta_n, w_n), n\geq 0, \nonumber \end{equation} for $B^{(i)}$ Borel in $S^{(i)}, i = 1, 2,$ respectively. \begin{rmk} In this context one should note that \cite{benveniste, metivier} require the Markov process to take values in a normed Polish space. \end{rmk} \begin{rmk} In \cite{borkar} it is assumed that the state space where the controlled Markov Process takes values is Polish. This space is then compactified using the fact that a Polish space can be homeomorphically embedded into a dense subset of a compact metric space. The vector field $h(.,.) : \mathbb{R}^d \times S \to \mathbb{R}^d$ is considered bounded when the first component lies in a compact set. 
This would, however, require a continuous extension of $h': \Bbb R^d \times \phi(S) \to \Bbb R^d$ defined by $h'(x,s') = h(x,\phi^{-1}(s'))$ to $\Bbb R^d \times \overline{\phi(S)}$. Here $\phi(\cdot)$ is the homeomorphism defined by $\phi(s) = (\rho(s, s_1), \rho(s, s_2), \dots) \in [0,1]^{\infty}$, and $\{s_i\}$ and $\rho$ are a countable dense subset and a metric of the Polish space, respectively. A sufficient condition for the above is for $h'$ to be uniformly continuous \cite[Ex:13, p.~99]{Rudin}. However, this is hard to verify. This is the main motivation for us to take the range of the Markov process to be compact in our problem. However, there are other reasons for taking a compact state space, which will become clear in the proofs of this section and the next. \end{rmk} \item $h : \mathbb{R}^{d+k} \times S^{(1)} \to \mathbb{R}^d$ is jointly continuous as well as Lipschitz in its first two arguments uniformly w.r.t. the third. The latter condition means that \begin{equation} \forall z^{(1)} \in S^{(1)}, \|h(\theta, w, z^{(1)}) - h(\theta', w', z^{(1)})\| \leq L^{(1)}(\|\theta-\theta'\| + \|w - w'\|).\nonumber \end{equation} The same is true for $g$, with Lipschitz constant $L^{(2)}$. Note that the Lipschitz constant $L^{(i)}$ does not depend on $z^{(i)}$ for $i=1,2$. \begin{rmk} We later relax the uniformity of the Lipschitz constant w.r.t. the Markov process state space by putting suitable moment assumptions on the Markov process. \end{rmk} \begin{comment} \item $g : \mathbb{R}^{d+k} \times S^{(2)} \to \mathbb{R}^k$ is jointly continuous as well as Lipschitz in its first two arguments uniformly w.r.t the third. The latter condition similarly means that \begin{equation} \forall z^{(2)} \in S^{(2)}, \|g(\theta, w, z^{(2)}) - g(\theta', w', z^{(2)})\| \leq L^{(2)}(\|\theta-\theta'\| + \|w - w'\|).\nonumber \end{equation} Note that the Lipschitz constant $L^{(2)}$ does not depend on $z^{(2)}$.
\end{comment} \item $\{M^{(i)}_n\}, i=1, 2$ are martingale difference sequences w.r.t. the increasing $\sigma$-fields \begin{equation} \mathcal{F}_n = \sigma(\theta_m, w_m, M^{(i)}_{m}, Z^{(i)}_m, m \leq n, i = 1, 2), n \geq 0,\nonumber \end{equation} satisfying \begin{equation} E[\|M^{(i)}_{n+1}\|^2|\mathcal{F}_n] \leq K(1 + \|\theta_n\|^2 + \|w_n\|^2), i = 1, 2,\nonumber \end{equation} for $n \geq 0$ and a given constant $K>0$. \item The stepsizes $\{a(n)\}, \{b(n)\}$ are positive scalars satisfying \begin{equation} \sum_n a(n) = \sum_n b(n) = \infty, \sum_{n}(a(n)^2 + b(n)^2) < \infty, \frac{a(n)}{b(n)} \to 0.\nonumber \end{equation} Moreover, $a(n), b(n), n \geq 0$ are non-increasing. Before stating the assumption on the transition kernels $p^{(i)}, i=1, 2$ we need to define a metric on the space of probability measures $\mathcal{P}(S)$. Here we mention the definitions and main theorems on spaces of probability measures that we use in our proofs (details can be found in Chapter 2 of \cite{borkar2}). The metric, denoted by $d$, is defined as \begin{equation} d(\mu, \nu) = \sum_{j} 2^{-j}|\int f_j d\mu - \int f_j d\nu|, \mu, \nu \in \mathcal{P}(S),\nonumber \end{equation} where $\{f_j\}$ is a countable dense subset of the unit ball of $C(S)$. Then the following are equivalent: \begin{enumerate}[label=(\roman*)] \item $d(\mu_n, \mu) \to 0,$ \item for all bounded $f$ in $C(S)$, \begin{equation} \int_{S} fd\mu_n \to \int_{S} f d\mu, \nonumber \end{equation} \item for all $f$ bounded and uniformly continuous, \begin{equation} \int_{S} fd\mu_n \to \int_{S} fd\mu.\nonumber \end{equation} \end{enumerate} Hence we see that $d(\mu_n, \mu) \to 0$ iff $\int_{S} f_jd\mu_n \to \int_{S} f_jd\mu$ for all $j$. Any such sequence of functions $\{f_j\}$ is called a convergence determining class in $\mathcal{P}(S)$. Sometimes we also denote $d(\mu_n, \mu) \to 0$ using the notation $\mu_n \Rightarrow \mu$.
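A small numerical sketch of the metric $d$ for point masses on $S=[0,1]$: here the family $f_j(x)=\cos(jx)$, truncated at finitely many terms, stands in for a countable dense family in the unit ball of $C(S)$ (an illustrative choice, not the one used in the paper).

```python
import numpy as np

# Sketch of d(mu, nu) = sum_j 2^{-j} | int f_j dmu - int f_j dnu | for
# point masses on S = [0, 1].  The truncated family f_j(x) = cos(j*x) is
# an illustrative stand-in for a countable dense subset of the unit ball
# of C(S).
def d_point_masses(x, y, J=40):
    return sum(2.0 ** (-j) * abs(np.cos(j * x) - np.cos(j * y))
               for j in range(1, J + 1))

# mu_n = delta_{1/2 + 1/n} converges weakly to mu = delta_{1/2},
# and d(mu_n, mu) <= 2/n since |cos(j x) - cos(j y)| <= j |x - y|.
dists = [d_point_masses(0.5 + 1.0 / n, 0.5) for n in (10, 100, 1000)]
assert dists[0] > dists[1] > dists[2]
assert dists[2] < 0.01
```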
\\ \indent Also, we recall the characterization of relative compactness in $\mathcal{P}(S)$, which relies on the definition of tightness. $\mathcal{A}\subset\mathcal{P}(S)$ is a tight set if for any $\epsilon >0$, there exists a compact $K_\epsilon \subset S$ such that $\mu(K_\epsilon) > 1-\epsilon$ for all $\mu \in \mathcal{A}$. Clearly, if $S$ is compact then any $\mathcal{A}\subset\mathcal{P}(S)$ is tight. By Prohorov's theorem, $\mathcal{A}\subset\mathcal{P}(S)$ is relatively compact if and only if it is tight. \\ \indent With the above definitions we assume the following: \item The map $S^{(i)} \times U^{(i)} \times \mathbb{R}^{d+k} \ni (z^{(i)}, a^{(i)}, \theta, w) \to p^{(i)}(dy|z^{(i)}, a^{(i)}, \theta,w) \in \mathcal{P}(S^{(i)})$ is continuous. \begin{rmk} \textbf{(A5)} is much simpler than the assumptions on the $n$-step transition kernel in \cite[Part II, Chap. 2, Theorem 6]{benveniste}. \end{rmk} Additionally, unlike \cite[p.~140, line 13]{borkar}, we do not require the extra assumption that the continuity in the $\theta$ variable of $p(dy|z,a,\theta)$ be uniform on compacts w.r.t. the other variables. For $\theta_n = \theta, w_n = w$ for all $n$ with a fixed deterministic $(\theta, w) \in \mathbb{R}^{d+k}$ and under any stationary randomized control $\pi^{(i)}$, it follows from Lemma 2.1 and Lemma 3.1 of \cite{borkar} that the time-homogeneous Markov processes $Z^{(i)}_n, i=1, 2$ have (possibly non-unique) invariant distributions $\Psi^{(i)}_{\theta,w,\pi^{(i)}}, i = 1, 2$.
Now, it is well-known that the ergodic occupation measure defined as \begin{equation} \Psi^{(i)}_{\theta, w, \pi^{(i)}}(dz, da):= \Psi^{(i)}_{\theta,w,\pi^{(i)}}(dz) \pi^{(i)}(z, da) \in \mathcal{P}(S^{(i)} \times U^{(i)}) \nonumber \end{equation} satisfies the following: \begin{equation} \label{eqn3} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_{\theta, w, \pi^{(i)}}(dz, U^{(i)}) = \int_{S^{(i)}\times U^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)\Psi^{(i)}_{\theta, w, \pi^{(i)}}(dz, da) \end{equation} for all $f^{(i)} \in C_b(S^{(i)})$. \end{enumerate} We denote by $D^{(i)}(\theta,w), i=1,2$ the set of all such ergodic occupation measures for the prescribed $\theta$ and $w$. In the following we prove some properties of the map $(\theta,w) \to D^{(i)}(\theta,w)$. \begin{lem} \label{lemma1} For all $(\theta,w)$, $D^{(i)}(\theta,w)$ is convex and compact. \end{lem} \begin{pf} The proof follows directly from \textbf{(A1)}, \textbf{(A5)} and (\ref{eqn3}). \begin{comment} Let $\Psi^{(i)}_{\theta,w,\pi^{(i)}}(j) \in D^{(i)}(\theta,w), j=1,\dots n$ and $\Psi^{(i)}_{\theta,w,\pi^{(i)}} = \sum_{j=1}^{n}\alpha_j \Psi^{(i)}_{\theta,w,\pi^{(i)}}(j), \linebreak \alpha_j \geq 0$ for all $j$ and $\sum_{j=1}^n\alpha_j=1$. Clearly, $\Psi^{(i)}_{\theta,w,\pi^{(i)}}\in \mathcal{P}(S^{(i)}\times U^{(i)})$. Now, from (\ref{eqn3}) we get, \begin{equation} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_{\theta,w,\pi^{(i)}}(j)(dz, U^{(i)}) = \int_{S^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)\Psi^{(i)}_{\theta,w,\pi^{(i)}}(j)(dz,da)\nonumber \end{equation} for $j=1,\dots,n$.
Now, \begin{align} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_{\theta,w,\pi^{(i)}}(dz, U^{(i)}) & = \sum_{j=1}^{n}\alpha_j \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_{\theta,w,\pi^{(i)}}(j)(dz, U^{(i)})\nonumber\\ & = \sum_{j=1}^{n}\alpha_j \int_{S^{(i)}\times U^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)\Psi^{(i)}_{\theta,w,\pi^{(i)}}(j)(dz,da) \nonumber\\ & = \int_{S^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)(\sum_{j=1}^{n}\alpha_j\Psi^{(i)}_{\theta,w,\pi^{(i)}}(j))(dz,da) \nonumber\\ & = \int_{S^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)\Psi^{(i)}_{\theta,w,\pi^{(i)}}(dz,da).\nonumber \end{align} Hence, the map is convex. Next, we prove that the map is closed. It is sufficient to prove that (\ref{eqn3}) is closed under convergence in $\mathcal{P}(S^{(i)}\times U^{(i)})$. Let $D^{(i)}(\theta,w) \ni \Psi^{(i)}_{\theta,w,\pi^{(i)}}(n) \Rightarrow \Psi^{(i)}_{\theta,w,\pi^{(i)}}\in \mathcal{P}(S^{(i)}\times U^{(i)})$. We now show that $\Psi^{(i)}_{\theta,w,\pi^{(i)}} \in D^{(i)}(\theta,w)$: \begin{align} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_{\theta,w,\pi^{(i)}}(dz,U^{(i)}) &= \lim_{n\to \infty} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_{\theta,w,\pi^{(i)}}(n)(dz,U^{(i)})\nonumber\\ &= \lim_{n\to \infty} \int_{S^{(i)}\times U^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)\Psi^{(i)}_{\theta,w,\pi^{(i)}}(n)(dz,da).\nonumber \end{align} It is enough to prove that $g^{(i)}(z,a)= \int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w) \in C_b(S^{(i)}\times U^{(i)})$. Boundedness follows from the fact that $f^{(i)}(\cdot)$ is bounded. Let $(z_n,a_n) \to (z,a)$. Then $p^{(i)}(dy|z_n,a_n, \theta, w) \Rightarrow p^{(i)}(dy|z,a, \theta, w) \in \mathcal{P}(S^{(i)}\times U^{(i)})$. Then using the implication $(i)\Rightarrow (ii)$ in (\ref{equiv}) we get that $g^{(i)}(z_n,a_n) \to g^{(i)}(z,a)$ as $n \to \infty$. \end{comment} \end{pf} \begin{comment} \begin{lem} The map $(\theta,w) \to D^{(i)}(\theta,w)$ is compact. 
\end{lem} \begin{pf} First we prove that $\mathcal{P}(S^{(i)}\times U^{(i)})$ is compact, i.e., sequentially compact (as the space is a metric space). This follows from the fact that any sequence of probability measures in $\mathcal{P}(S^{(i)}\times U^{(i)})$ is tight due to the compactness of $S^{(i)}$. Because $S^{(i)}$ is compact, it is separable. Then from an application of Prohorov's theorem, it follows that $\mathcal{P}(S^{(i)}\times U^{(i)})$ is compact. Now Lemma~\ref{lemma1} shows that the map is closed and a closed subset of a compact set is compact. \end{pf} \end{comment} \begin{lem} \label{upsem} The map $(\theta,w) \to D^{(i)}(\theta,w)$ is upper-semi-continuous. \end{lem} \begin{pf} Let $\theta_n \to \theta, w_n \to w$ and $\Psi^{(i)}_n \Rightarrow \Psi^{(i)} \in \mathcal{P}(S^{(i)} \times U^{(i)})$ such that $\Psi^{(i)}_n \in D^{(i)}(\theta_n, w_n)$. Let $g^{(i)}_n(z,a) = \int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta_n, w_n)$ and $g^{(i)}(z,a) = \int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w).$ From (\ref{eqn3}) we get that \begin{align} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}(dz,U^{(i)}) &= \lim_{n\to \infty}\int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}_n(dz,U^{(i)})\nonumber\\ &= \lim_{n\to \infty}\int_{S^{(i)}\times U^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta_n, w_n)\Psi^{(i)}_n(dz,da)\nonumber\\ &= \lim_{n\to \infty}\int_{S^{(i)}\times U^{(i)}}g^{(i)}_n(z,a)\Psi^{(i)}_n(dz,da).\nonumber \end{align} Now, $p^{(i)}(dy|z,a, \theta_n, w_n) \Rightarrow p^{(i)}(dy|z,a, \theta, w)$ implies $g^{(i)}_n(\cdot, \cdot) \to g^{(i)}(\cdot, \cdot)$ pointwise. We prove that the convergence is indeed uniform. It is enough to prove that this sequence of functions is equicontinuous. Then along with pointwise convergence it will imply uniform convergence on compacts \cite[p.~168, Ex: 16]{Rudin}. This is also a place where \textbf{(A1)} is used. 
\\ \indent Define $g' : S^{(i)} \times U^{(i)} \times \mathbb{R}^{d+k} \to \mathbb{R}$ by $g'(z',a', \theta',w') = \int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a', \theta', w')$. Then $g'$ is continuous. Let $A= S^{(i)} \times U^{(i)} \times (\{\theta_n\} \cup \theta) \times (\{w_n\} \cup w)$. So, $A$ is compact and $g'|_{A}$ is uniformly continuous. This implies that for all $\epsilon >0$, there exists $\delta >0 $ such that if $\rho'(s_1, s_2) < \delta, \mu'(a_1, a_2) < \delta, \|\theta_1-\theta_2\| < \delta, \|w_1-w_2\| < \delta,$ then $|g'(s_1, a_1, \theta_1, w_1) - g'(s_2, a_2, \theta_2, w_2)|< \epsilon$ where $s_1, s_2 \in S^{(i)}, a_1, a_2 \in U^{(i)}, \theta_1, \theta_2 \in (\{\theta_n\} \cup \theta), w_1, w_2 \in(\{w_n\} \cup w)$ and $\rho'$ and $\mu'$ denote the metrics in $S^{(i)}$ and $U^{(i)}$ respectively. Now use this same $\delta$ for the $\{g^{(i)}_n(\cdot, \cdot)\}$ to get for all $n$ the following for $\rho'(z_1, z_2) < \delta, \mu'(a_1, a_2) < \delta$: \begin{align} |g^{(i)}_n(z_1,a_1) - g^{(i)}_n(z_2,a_2)| = |g'(z_1,a_1, \theta_n,w_n) - g'(z_2,a_2, \theta_n,w_n)| < \epsilon.\nonumber \end{align} Hence $\{g^{(i)}_n(\cdot, \cdot)\}$ is equicontinuous. For large $n$, $\sup_{(z,a) \in S^{(i)} \times U^{(i)}}|g^{(i)}_n(z,a) - g^{(i)}(z,a)| < \epsilon/2$ because of uniform convergence of $\{g^{(i)}_n(\cdot, \cdot)\}$, hence $\int_{S^{(i)}\times U^{(i)}}|g^{(i)}_n(z,a) - g^{(i)}(z,a)|\Psi^{(i)}_n(dz,da) < \epsilon/2$. 
Now (for $n$ large), \begin{align} \label{limit} &|\int_{S^{(i)}\times U^{(i)}}g^{(i)}_n(z,a)\Psi^{(i)}_n(dz,da) - \int_{S^{(i)}\times U^{(i)}}g^{(i)}(z,a)\Psi^{(i)}(dz,da)|\nonumber\\ &= |\int_{S^{(i)}\times U^{(i)}}[g^{(i)}_n(z,a) - g^{(i)}(z,a)] \Psi^{(i)}_n(dz,da) + \int_{S^{(i)}\times U^{(i)}}g^{(i)}(z,a)\Psi^{(i)}_n(dz,da) - \int_{S^{(i)}\times U^{(i)}}g^{(i)}(z,a)\Psi^{(i)}(dz,da)|\nonumber\\ &< \epsilon/2 + |\int_{S^{(i)}\times U^{(i)}}g^{(i)}(z,a)\Psi^{(i)}_n(dz,da) - \int_{S^{(i)}\times U^{(i)}}g^{(i)}(z,a)\Psi^{(i)}(dz,da)|\nonumber\\ &< \epsilon. \end{align} The last inequality follows from the fact that $\Psi^{(i)}_n \Rightarrow \Psi^{(i)}$. Hence from (\ref{limit}) we get \begin{align} \int_{S^{(i)}}f^{(i)}(z) \Psi^{(i)}(dz, U^{(i)}) = \int_{S^{(i)}\times U^{(i)}}\int_{S^{(i)}}f^{(i)}(y)p^{(i)}(dy|z,a, \theta, w)\Psi^{(i)}(dz,da)\nonumber \end{align} proving that the map is upper-semi-continuous. \end{pf} Define $\tilde{g}(\theta, w, \nu) = \int g(\theta,w,z)\nu(dz, U^{(2)})$ for $\nu \in \mathcal{P}(S^{(2)}\times U^{(2)})$ and $\hat{g}(\theta,w) = \{\tilde{g}(\theta, w, \nu): \nu \in D^{(2)}(\theta, w)\}.$ \begin{lem} \label{march} $\hat{g}(\cdot, \cdot)$ is a Marchaud map. \end{lem} \begin{pf} \begin{enumerate}[label=(\roman*)] \item Convexity and compactness follow trivially from the same for the map $(\theta,w) \to D^{(2)}(\theta, w)$. \item \begin{align} &\|\tilde{g}(\theta, w, \nu)\|\nonumber\\ &= \|\int g(\theta,w,z)\nu(dz,U^{(2)})\|\nonumber\\ &\leq \int\|g(\theta,w,z)\|\nu(dz,U^{(2)})\nonumber\\ &\leq \int \big(L^{(2)}(\|\theta\|+ \|w\|)+\|g(0,0,z)\|\big)\nu(dz,U^{(2)})\nonumber\\ &\leq \max \big(L^{(2)}, \max_{z\in S^{(2)}}\|g(0,0,z)\|\big)(1+\|\theta\|+\|w\|).\nonumber \end{align} Here the constant $c := \max \big(L^{(2)}, \max_{z\in S^{(2)}}\|g(0,0,z)\|\big) > 0$ is finite since $S^{(2)}$ is compact and $g$ is continuous, and it does not depend on $\nu$. The above is true for all $\tilde{g}(\theta,w, \nu) \in \hat{g}(\theta,w), \nu \in D^{(2)}(\theta, w)$.
\item Let $(\theta_n,w_n) \to (\theta,w), \tilde{g}(\theta_n, w_n, \nu_n) \to m, \nu_n \in D^{(2)}(\theta_n, w_n)$. Now, $\{\nu_n\}$ is tight, hence it has a convergent subsequence $\{\nu_{n_k}\}$ with limit $\nu$. Then, using arguments similar to those in the proof of Lemma \ref{upsem}, one can show that $m=\tilde{g}(\theta, w, \nu)$, whereas $\nu \in D^{(2)}(\theta, w)$ follows directly from the upper-semi-continuity of the map $(\theta,w) \to D^{(2)}(\theta,w)$. \end{enumerate} \end{pf} Note that the map $\hat{h}(\cdot,\cdot)$ can be defined similarly and can be shown to be a Marchaud map using the exact same technique. \subsection{\textit{Other assumptions needed for two time-scale convergence analysis}} \label{assump} We now list the other assumptions required for the two time-scale convergence analysis: \begin{enumerate}[label=\textbf{(A\arabic*)}] \setcounter{enumi}{5} \item for all $\theta \in \mathbb{R}^d$, the differential inclusion \begin{equation} \label{fast} \dot{w}(t) \in \hat{g}(\theta,w(t)) \end{equation} has a singleton global attractor $\lambda(\theta)$, where $\lambda : \mathbb{R}^d \to \mathbb{R}^k$ is a Lipschitz map with constant $K$. Additionally, there exists a continuous function $V: \mathbb{R}^{d+k} \to [0,\infty)$ satisfying the hypothesis of Corollary 3.28 of \cite{benaim} with $\Lambda = \{(\theta,\lambda(\theta)):\theta \in \mathbb{R}^d\}$. This is the most important assumption, as it links the fast and slow iterates. \item Stability of the iterates: $\sup_n(\|\theta_n\| + \|w_n\|) < \infty$ a.s.
\end{enumerate} Let $\bar{\theta}(t), t\geq 0$, be the continuous, piecewise linear trajectory defined by $\bar{\theta}(t(n))=\theta_n, n\geq 0$, with linear interpolation on each interval $[t(n), t(n+1))$, i.e., \begin{equation} \bar{\theta}(t) = \theta_n + (\theta_{n+1} - \theta_n)\frac{t-t(n)}{t(n+1)-t(n)}, t \in [t(n), t(n+1)).\nonumber \end{equation} The following theorem is our main result: \begin{thm}[Slower timescale result] \label{thm} Under assumptions \textbf{(A1)-(A7)}, \begin{equation} (\theta_n, w_n) \to \cup_{\theta^* \in A_0}(\theta^*, \lambda(\theta^*)) \mbox{ a.s. as $n \to \infty$},\nonumber \end{equation} \end{thm} where $A_0 = \cap_{t\geq 0}\overline{\{\bar{\theta}(s): s\geq t\}}$ is almost surely an internally chain transitive set of the differential inclusion \begin{equation} \label{slower_di} \dot{\theta}(t) \in \hat{h}(\theta(t)), \end{equation} where $\hat{h}(\theta)=\{\tilde{h}(\theta,\lambda(\theta),\nu) : \nu \in D^{(1)}(\theta, \lambda(\theta))\}$. We call (\ref{fast}) and (\ref{slower_di}) the faster and slower d.i.\ respectively, corresponding to the faster and slower recursions. \begin{cor} \label{main_col} Under the additional assumption that the inclusion \begin{equation} \dot{\theta}(t)\in \hat{h}(\theta(t)), \nonumber \end{equation} has a global attractor set $A_1$, \begin{equation} (\theta_n, w_n) \to \cup_{\theta^* \in A_1}(\theta^*, \lambda(\theta^*)) \mbox{ a.s. as $n \to \infty$}.\nonumber \end{equation} \end{cor} \begin{rmk} In the case where the set $D^{(2)}(\theta,w)$ is a singleton, we can relax \textbf{(A6)} to allow local attractors. The relaxed assumption is the following: \begin{enumerate}[label=\textbf{(A\arabic*)'}] \setcounter{enumi}{5} \item The function $\hat{g}(\theta, w) = \int g(\theta, w, z)\Gamma^{(2)}_{\theta,w}(dz)$ is Lipschitz continuous, where $\Gamma^{(2)}_{\theta,w}$ is the unique element of $D^{(2)}(\theta,w)$.
Further, for all $\theta \in \mathbb{R}^d$, the o.d.e. \begin{equation} \label{cpledode} \dot{w}(t) = \hat{g}(\theta, w(t)) \end{equation} has an asymptotically stable equilibrium $\lambda(\theta)$ with domain of attraction $G_\theta$, where $\lambda : \mathbb{R}^d \to \mathbb{R}^k$ is a Lipschitz map with constant $K$. Also, assume that $\bigcap_{\theta \in \mathbb{R}^d} G_\theta$ is non-empty. Moreover, the function $V': G \to [0,\infty)$ defined by $V'(\theta,w) = V_\theta(w)$ is continuously differentiable, where $V_\theta(.)$ is the Lyapunov function (for the definition see \cite[Chapter 11.2.3]{borkar1}) for the o.d.e. (\ref{cpledode}) with $\lambda(\theta)$ as its attractor, and $G=\bigcup_{\theta \in \mathbb{R}^d} \left(\{\theta\} \times G_\theta\right)$. This extra condition is needed so that the set graph($\lambda$):=$\{(\theta,\lambda(\theta)): \theta \in \mathbb{R}^d\}$ becomes an asymptotically stable set of the coupled o.d.e. \begin{equation} \dot{w}(t) = \hat{g}(\theta(t),w(t)), \quad \dot{\theta}(t) = 0.\nonumber \end{equation} \end{enumerate} Note that \textbf{(A6)'} allows multiple attractors (at least one of them has to be a point; the others can be sets) for the faster o.d.e.\ for every $\theta$. The statement of Theorem \ref{thm} is then modified as follows: \begin{thm}[Slower timescale result when $\lambda(\theta)$ is a local attractor] \label{thm_local} Under assumptions \textbf{(A1)-(A5), (A6)'} and \textbf{(A7)}, on the event ``$\{w_n\}$ belongs to a compact subset $B$ (depending on the sample point) of $\bigcap_{\theta \in \mathbb R^d} G_\theta$ \textbf{eventually}'', \begin{equation} (\theta_n, w_n) \to \cup_{\theta^* \in A_0}(\theta^*, \lambda(\theta^*)) \mbox{ a.s. as $n \to \infty$}.\nonumber \end{equation} \end{thm} The requirement on $\{w_n\}$ is much stronger than in the usual local attractor statement of the Kushner-Clark lemma \cite[Section II.C]{metivier}, which requires the iterates to enter a compact set in the domain of attraction of the local attractor \textit{infinitely often} only. The reason for imposing this strong assumption is that graph($\lambda$) is not a subset of any compact set in $\mathbb{R}^{d+k}$, and hence the usual tracking-lemma arguments do not go through directly. One has to relate the limit set of the coupled iterate $(\theta_n, w_n)$ to graph($\lambda$) (see the proof of Lemma \ref{fast_res2}). \end{rmk} We present the proof of our main results in the next section. \section{Main Results} \label{mres} We first discuss an extension of the single time-scale controlled Markov noise framework of \cite{borkar} under our assumptions to prove our main results. Note that the results of \cite{borkar} assume that the state space of the controlled Markov process is Polish, which may impose additional conditions that are hard to verify. In this section, in addition to proving our two time-scale results, we prove many of the results in \cite{borkar} (which were only stated there) assuming the state space to be compact. We begin by describing the intuition behind the proof techniques in \cite{borkar}. \\ \indent The space $C([0, \infty); \mathbb{R}^d)$ of continuous functions from $[0,\infty)$ to $\mathbb{R}^d$ is topologized with the coarsest topology such that the map that takes any $f \in C([0, \infty); \mathbb{R}^d)$ to its restriction to $[0,T]$, viewed as an element of the space $C([0, T]; \mathbb{R}^d)$, is continuous for all $T>0$. In other words, $f_n \to f$ in this space iff $f_n|_{[0,T]} \to f|_{[0,T]}$ uniformly for all $T>0$. The other notations used below are the same as those in \cite{borkar,borkar1}. We present a few for easy reference.
\\ \indent Consider the single time-scale stochastic approximation recursion with controlled Markov noise: \begin{equation} \label{cont_mar} x_{n+1} = x_n + a(n)\left[h(x_n, Y_n) + M_{n+1}\right]. \end{equation} Define time instants $t(0)=0, t(n)=\sum_{m=0}^{n-1} a(m), n\geq 1$. Let $\bar{x}(t), t\geq 0$ be the continuous, piecewise linear trajectory defined by $\bar{x}(t(n))=x_n, n\geq 0$, with linear interpolation on each interval $[t(n), t(n+1))$, i.e., \begin{equation} \bar{x}(t) = x_n + (x_{n+1} - x_n)\frac{t-t(n)}{t(n+1)-t(n)}, t \in [t(n), t(n+1)).\nonumber \end{equation} Now, define $\tilde{h}(x,\nu)=\int h(x,z)\nu(dz,U)$ for $\nu \in P(S \times U)$. Let $\mu(t), t\geq 0$ be the random process defined by $\mu(t)=\delta_{(Y_n,Z_n)}$ for $t \in [t(n), t(n+1)), n\geq 0$, where $\delta_{(y,a)}$ is the Dirac measure corresponding to $(y,a)$. Consider the non-autonomous o.d.e. \begin{equation} \label{auto} \dot{x}(t) = \tilde{h}(x(t), \mu(t)). \end{equation} Let $x^s(t), t\geq s$, denote the solution to (\ref{auto}) with $x^s(s)=\bar{x}(s)$, for $s\geq0$. Note that $x^s(t), t\in [s, s+T]$ and $x^s(t), t\geq s$ can be viewed as elements of $C([0, T]; \mathbb{R}^d)$ and $C([0, \infty); \mathbb{R}^d)$ respectively. With this abuse of notation, it is easy to see that $\{x^s(.)|_{[s, s+T]}, s\geq 0\}$ is a pointwise bounded and equicontinuous family of functions in $C([0, T]; \mathbb{R}^d)~\forall T >0$. By the Arzel\`a-Ascoli theorem, it is relatively compact. From Lemma~2.2 of \cite{borkar} one can see that for all $s(n)\uparrow \infty, \{\bar{x}(s(n)+.)|_{[s(n), s(n)+T]}, n\geq 1\}$ has a limit point in $C([0, T]; \mathbb{R}^d)~\forall T >0$. With the above topology for $C([0, \infty); \mathbb{R}^d)$, $\{x^s(.), s\geq 0\}$ is also relatively compact in $C([0, \infty); \mathbb{R}^d)$ and for all $s(n)\uparrow \infty, \{\bar{x}(s(n)+.), n\geq 1\}$ has a limit point in $C([0, \infty); \mathbb{R}^d)$.
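As a purely illustrative aside, the constructions above are easy to instantiate numerically. The following Python sketch (all choices hypothetical: $h(x,y) = -x + y$, a symmetric two-state Markov noise $Y_n$, bounded martingale differences, and steps $a(n) = 1/(n+1)$) builds the time instants $t(n)$, runs recursion (\ref{cont_mar}), and evaluates the interpolated trajectory $\bar{x}(t)$.

```python
import random

def run_interpolated_trajectory(n_steps=2000, seed=0):
    """Toy instance of x_{n+1} = x_n + a(n)[h(x_n, Y_n) + M_{n+1}] with
    h(x, y) = -x + y, a two-state Markov chain Y_n, and a(n) = 1/(n+1).
    All of these choices are hypothetical illustrations."""
    rng = random.Random(seed)
    a = [1.0 / (n + 1) for n in range(n_steps)]
    # time instants t(0) = 0, t(n) = sum_{m < n} a(m)
    t = [0.0]
    for n in range(n_steps):
        t.append(t[-1] + a[n])
    x, y = [0.0], 0
    for n in range(n_steps):
        if rng.random() < 0.5:           # symmetric two-state Markov noise
            y = 1 - y
        m = rng.uniform(-0.01, 0.01)     # bounded martingale-difference noise
        x.append(x[-1] + a[n] * ((-x[-1] + y) + m))

    def x_bar(s):
        """Piecewise-linear interpolation with x_bar(t(n)) = x_n."""
        for n in range(n_steps):
            if t[n] <= s < t[n + 1]:
                return x[n] + (x[n + 1] - x[n]) * (s - t[n]) / (t[n + 1] - t[n])
        return x[-1]

    return t, x, x_bar
```

Since the stationary mean of this $Y_n$ is $1/2$, the averaged o.d.e. for this toy instance is $\dot{x} = -x + 1/2$, and the late iterates hover around $1/2$.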
\\ \indent One can write from (\ref{cont_mar}) the following: \begin{equation} \bar{x}(u(n)+t) = \bar{x}(u(n)) + \int_{0}^{t}h(\bar{x}(u(n)+\tau), \nu(u(n)+\tau))d\tau + W^n(t),\nonumber \end{equation} where $u(n)\uparrow \infty, \bar{x}(u(n)+.) \to \tilde{x}(\cdot), \nu(t) = (Y_n,Z_n)$ for $t \in [t(n), t(n+1)), n\geq 0$ and $W^n(t) = W(t+u(n)) - W(u(n)), W(t) = W_n + (W_{n+1} - W_n)\frac{t-t(n)}{t(n+1)- t(n)}, W_n=\sum_{k=0}^{n-1}a(k)M_{k+1}, n\geq 0$. From here one cannot directly take limits on both sides, as finding limit points of $\nu(s+.)$ as $s \to \infty$ is not meaningful. Now, $h(x,y)=\int h(x,z)\delta_{(y,a)}(dz \times U)$. Hence by defining $\tilde{h}(x,\rho)=\int h(x,z)\rho(dz)$ and $\mu(t) = \delta_{\nu(t)}$ one can write the above as \begin{equation} \label{mu} \bar{x}(u(n)+t) = \bar{x}(u(n)) + \int_{0}^{t}\tilde{h}(\bar{x}(u(n)+\tau), \mu(u(n)+\tau))d\tau + W^n(t). \end{equation} The advantage is that the space $\mathcal{U}$ of measurable functions from $[0, \infty)$ to $\mathcal{P}(S \times U)$ is compact metrizable, so subsequential limits exist. Note that $\mu(\cdot)$ is not itself a member of $\mathcal{U}$; rather, one needs to fix a sample point $\omega$, whence $\mu(\cdot,\omega) \in \mathcal{U}$. For ease of understanding, we abuse the terminology and talk about the limit points $\tilde{\mu}(\cdot)$ of $\mu(s+.)$. \\ \indent From (\ref{mu}) one can infer that the limit $\tilde{x}(\cdot)$ of $\bar{x}(u(n)+.)$ satisfies the o.d.e. $\dot{x}(t) = \tilde{h}(x(t), \mu(t))$ with $\mu(\cdot)$ replaced by $\tilde{\mu}(\cdot)$. Here each $\tilde{\mu}(t), t \geq 0$ in $\tilde{\mu}(\cdot)$ is generated through a different limiting process, each one associated with the compact metrizable space $U_t$ = space of measurable functions from $[0,t]$ to $\mathcal{P}(S \times U)$. This is problematic if we want to further explore the process $\tilde{\mu}(\cdot)$ and convert the non-autonomous o.d.e. into an autonomous one.
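The gain from passing to measure-valued paths is an averaging one, and it can be illustrated numerically. In the hypothetical Python sketch below, a two-state Markov chain flips state with probability $0.3$: the path itself keeps oscillating and has no limit, but averages of a bounded function under its empirical occupation measure over a growing window settle at the stationary expectation ($1/2$ here), which is the discrete analogue of evaluating $\tilde{h}(x,\rho)$ at a limiting measure $\rho$.

```python
import random

def occupation_average(f, chain_step, window, seed=0):
    """Average f under the empirical occupation measure of a Markov chain
    over `window` steps: (1/window) * sum_n f(Y_n)."""
    rng = random.Random(seed)
    y, total = 0, 0.0
    for _ in range(window):
        y = chain_step(y, rng)
        total += f(y)
    return total / window

def flip(y, rng):
    """Hypothetical two-state chain on {0, 1}: flip with probability 0.3.
    Its stationary distribution is uniform, so the stationary mean of y is 1/2."""
    return 1 - y if rng.random() < 0.3 else y

short = occupation_average(lambda y: y, flip, window=50)
long_run = occupation_average(lambda y: y, flip, window=50000)
```

Over the long window the occupation average is close to $1/2$, while no pointwise limit of the chain exists.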
\\ \indent Hence the main result is proved using an auxiliary lemma \cite[Lemma~2.3]{borkar} in addition to the tracking lemma (Lemma~2.2 of \cite{borkar}). Let $u(n(k)) \uparrow \infty$ be such that $\bar{x}(u(n(k))+.) \to \tilde{x}(\cdot)$ and $\mu(u(n(k))+.) \to \tilde{\mu}(\cdot)$; then using Lemma~2.2 of \cite{borkar} one can show that $x^{u(n(k))}(\cdot) \to \tilde{x}(\cdot)$. The auxiliary lemma then shows that the o.d.e. trajectory $x^{u(n(k))}(\cdot)$ associated with $\mu(u(n(k))+.)$ tracks (in the limit) the o.d.e. trajectory associated with $\tilde{\mu}(\cdot)$. Hence Lemma~2.3 of \cite{borkar} links the two limiting processes $\tilde{x}(\cdot)$ and $\tilde{\mu}(\cdot)$ in some sense. Note that Lemma~2.3 of \cite{borkar} involves only the o.d.e. trajectories, not the interpolated trajectory of the algorithm. Consider the iteration \begin{equation} \label{eps2} \theta_{n+1} = \theta_n + a(n)\left[h(\theta_n, Y_n) + \epsilon_n + M_{n+1}\right], \end{equation} where $\epsilon_n \to 0$ and the remaining notation is the same as in \cite{borkar}. Specifically, $\{Y_n\}$ is the controlled Markov process driven by $\{\theta_n\}$ and $M_{n+1}, n\geq 0$ is a martingale difference sequence. Let $\bar{\theta}(t), t\geq 0$ be the continuous, piecewise linear trajectory of (\ref{eps2}) defined by $\bar{\theta}(t(n))=\theta_n, n\geq 0$, with linear interpolation on each interval $[t(n), t(n+1))$. Also, let $\theta^s(t), t\geq s$, denote the solution to (\ref{auto}) with $\theta^s(s)=\bar{\theta}(s)$, for $s\geq0$. \\ \indent The convergence analysis of (\ref{eps2}) requires some changes in Lemmas~2.2 and 3.1 of \cite{borkar}. The modified versions are precisely the following two lemmas. \begin{lem} \label{track_fast} For any $T >0$, $\sup_{t\in [s,s+T]}\|\bar{\theta}(t) - \theta^s(t)\| \to 0,$ a.s. as $s\to \infty$. \end{lem} \begin{pf} The proof follows from Lemma~2.2 of \cite{borkar} and Remark~3 thereof (p.~144).
\begin{comment} Let $t(n+m)$ be in $[t(n), t(n) + T]$. Then by construction, \begin{equation} \bar{\theta}(t(n+m)) = \bar{\theta}(t(n)) + \sum_{k=0}^{m-1} a(n+k)h(\bar{\theta}(t(n+k)), Y_{n+k})+ \sum_{k=0}^{m-1}a(n+k)\epsilon_{n+k} + \delta_{n, n+m},\nonumber \end{equation} where $\delta_{n, n+m}=\zeta_{n+m}- \zeta_{n}$ with $\zeta_{n} = \sum_{m=0}^{n-1}a(m)M_{m+1}, n \geq 1$. Then the proof goes along the same lines as Lemma~2.2 of \cite{borkar} except that there is an extra term in the R.H.S of the below inequality. \begin{align} \|\bar{\theta}(t(n+m)) - \theta^{t(n)}(t(n+m))\|\nonumber\\ &\leq L\sum_{k=0}^{m-1}a(n+k)\|\bar{\theta}(t(n+k)) - \theta^{t(n)}(t(n+k))\|\nonumber\\ & + C_TL\sum_{k\geq0} a(n+k)^2 + \sum_{k=0}^{m-1}a(n+k)\|\epsilon_{n+k}\|\nonumber\\ & +\sup_{k\geq0}\|\delta_{n,n+k}\|\nonumber\\ &\leq L\sum_{k=0}^{m-1}a(n+k)\|\bar{\theta}(t(n+k)) - \theta^{t(n)}(t(n+k))\|\nonumber\\ & + C_TL\sum_{k\geq0} a(n+k)^2 + T\sup_{k\geq 0}\|\epsilon_{n+k}\|\nonumber\\ & +\sup_{k\geq0}\|\delta_{n,n+k}\|,\nonumber\mbox{ a.s.} \end{align} Define \begin{equation} K_{T,n}=C_TL\sum_{k\geq0} a(n+k)^2 + T\sup_{k\geq 0}\|\epsilon_{n+k}\|+\sup_{k\geq0}\|\delta_{n,n+k}\|.\nonumber \end{equation} So, $K_{T,n} \to 0$ a.s. as $n \to \infty$ and the rest of the proof follows tracking lemma (Lemma~2.1 of \cite[Chapter 2]{borkar1}) of the usual stochastic approximation framework. \end{comment} \end{pf} Now, $\mu$ can be viewed as a random variable taking values in $\mathcal{U}$ = the space of measurable functions from $[0,\infty)$ to $\mathcal{P}(S \times U)$. This space is topologized with the coarsest topology such that the map \begin{equation} \nu(\cdot) \in \mathcal{U} \to \int_{0}^{T} g(t) \int fd\nu(t)dt \in \mathbb{R}\nonumber \end{equation} is continuous for all $f \in C(S), T>0, g \in L_2[0,T]$. Note that $\mathcal{U}$ is compact metrizable. 
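To make this topology concrete, the hypothetical Python sketch below evaluates the defining test functionals $\nu(\cdot) \mapsto \int_0^T g(t)\int f\,d\nu(t)\,dt$ by a Riemann sum. Dirac-valued paths that oscillate ever faster between two atoms converge to nothing pointwise, yet their test functionals approach those of the constant mixture path; this is exactly the mode in which subsequential limits of $\mu(s+.)$ are taken.

```python
def functional(nu, f, g, T=1.0, grid=20000):
    """Riemann-sum evaluation of the test map nu(.) -> int_0^T g(t) int f dnu(t) dt,
    where nu(t) is a finite point-mass measure given as a list of (atom, weight)."""
    dt = T / grid
    total = 0.0
    for k in range(grid):
        t = (k + 0.5) * dt
        total += g(t) * sum(w * f(z) for z, w in nu(t)) * dt
    return total

def dirac_path(n):
    """Dirac-valued path oscillating between the atoms +1 and -1 at speed n."""
    return lambda t: [(1.0, 1.0)] if int(n * t) % 2 == 0 else [(-1.0, 1.0)]

# Constant path equal to the mixture (delta_{+1} + delta_{-1}) / 2.
mixture = lambda t: [(1.0, 0.5), (-1.0, 0.5)]
```

With $f(z)=z$ and $g \equiv 1$, `functional(dirac_path(500), f, g)` is already close to `functional(mixture, f, g)` $= 0$, even though `dirac_path(500)` does not equal `mixture` at any $t$.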
\begin{lem} \label{eps} Almost surely, every limit point of $(\mu(s+.), \bar{\theta}(s+.))$ as $s\to \infty$ is of the form $(\tilde{\mu}(\cdot), \tilde{\theta}(\cdot))$ where $\tilde{\mu}(\cdot)$ satisfies $\tilde{\mu}(t) \in D(\tilde{\theta}(t))$ a.e. $t$. \end{lem} \begin{pf} Suppose that $u(n)\uparrow \infty$, $\mu(u(n)+.) \to \tilde{\mu}(\cdot)$ and $\bar{\theta}(u(n)+.) \to \tilde{\theta}(\cdot)$. Let $\{f_i\}$ be a countable dense subset of the unit ball of $C(S)$, hence a separating class, i.e., $\forall i, \int f_i d\mu = \int f_i d\nu$ implies $\mu=\nu$. For each $i$, \begin{equation} \zeta^i_n = \sum_{m=1}^{n-1}a(m)(f_i(Y_{m+1}) - \int f_i(y)p(dy|Y_m,Z_m, \theta_m)), n \geq 1, \nonumber \end{equation} is a zero-mean martingale with respect to the filtration $\mathcal{F}_n = \sigma(\theta_m, Y_m, Z_m, m\leq n)$. Moreover, it is a square integrable martingale, since the $f_i$'s are bounded and each $\zeta^i_n$ is a finite sum. Its quadratic variation process \begin{equation} A_{n}=\sum_{m=0}^{n-1}a(m)^2E[(f_i(Y_{m+1}) - \int f_i(y)p(dy|Y_m,Z_m, \theta_m))^2|\mathcal{F}_m] + E[(\zeta^i_0)^2]\nonumber \end{equation} is almost surely convergent. By the martingale convergence theorem, $\zeta^i_n, n\geq 0$ converges a.s. for all $i$. As before, let $\tau(n,t)=\min\{m \geq n: t(m) \geq t(n)+t\}$ for $t\geq0, n\geq0$. Then as $n\to\infty$, \begin{equation} \sum_{m=n}^{\tau(n,t)} a(m)(f_i(Y_{m+1})-\int f_i(y)p(dy|Y_m,Z_m,\theta_m))\to 0,\mbox{ a.s.}\nonumber \end{equation} for $t >0$.
By our choice of $\{f_i\}$ and the fact that $\{a(n)\}$ is an eventually non-increasing sequence (the latter property is used only here and in Lemma \ref{slowmu}), we have \begin{equation} \sum_{m=n}^{\tau(n,t)}(a(m) - a(m+1))f_i(Y_{m+1}) \to 0,\mbox{ a.s.}\nonumber \end{equation} From the foregoing, \begin{equation} \sum_{m=n}^{\tau(n,t)} (a(m+1)f_i(Y_{m+1})-a(m)\int f_i(y)p(dy|Y_m,Z_m,\theta_m))\to 0,\mbox{ a.s.}\nonumber \end{equation} for all $t >0$, which implies \begin{equation} \sum_{m=n}^{\tau(n,t)} a(m)(f_i(Y_{m})-\int f_i(y)p(dy|Y_m,Z_m,\theta_m))\to 0,\mbox{ a.s.}\nonumber \end{equation} for all $t >0$ due to the fact that $a(n) \to 0$ and the $f_i(\cdot)$ are bounded. This implies \begin{equation} \int_{t(n)}^{t(n)+t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\hat{\theta}(s)))\mu(s,dzda))ds \to 0,\mbox{ a.s.}\nonumber \end{equation} and that in turn implies \begin{equation} \int_{u(n)}^{u(n)+t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\hat{\theta}(s)))\mu(s,dzda))ds \to 0,\mbox{ a.s.}\nonumber \end{equation} (this is true because $a(n)\to 0$ and $f_i(\cdot)$ is bounded) where $\hat{\theta}(s) = \theta_n$ when $s \in [t(n), t(n+1))$ for $n\geq 0$. Now, one can claim from the above that \begin{equation} \int_{u(n)}^{u(n)+t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\bar{\theta}(s)))\mu(s,dzda))ds \to 0,\mbox{ a.s.}\nonumber \end{equation} This is due to the fact that the map $S \times U \times \mathbb{R}^{d} \ni (z,a,\theta) \to \int f(y)p(dy|z,a,\theta)$ is continuous and hence uniformly continuous on the compact set $A = S \times U \times M$ where $M$ is the compact set s.t. $\theta_n\in M$ for all $n$. Here we also use the fact that $\|\bar{\theta}(s) - \theta_m\|=\|h(\theta_m, Y_m) + \epsilon_m + M_{m+1}\|(s-t(m)) \to 0$ for $s\in [t(m), t(m+1))$, as the first two terms inside the norm on the right-hand side are bounded and $a(m)M_{m+1} \to 0$ a.s.
The above convergence is equivalent to \begin{equation} \int_{0}^{t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\bar{\theta}(s+u(n))))\mu(s+u(n),dzda))ds \to 0,\mbox{ a.s.}\nonumber \end{equation} Fix a sample point in the probability one set on which the convergence above holds for all $i$. Then the convergence above leads to \begin{equation} \label{conjun1} \int_{0}^{t}\int(f_i(z) - \int f_i(y)p(dy|z,a, \tilde{\theta}(s)))\tilde{\mu}(s, dzda)ds =0~\forall i. \end{equation} Here we use one part of the proof of Lemma~2.3 of \cite{borkar}, namely that if $\mu^n(\cdot) \to \mu^{\infty}(\cdot) \in \mathcal{U}$ then for any $t>0$, \begin{equation} \int_{0}^{t} \int \tilde{f}(s,z,a)\mu^n(s,dzda)ds - \int_{0}^{t} \int \tilde{f}(s,z,a)\mu^{\infty}(s,dzda)ds \to 0,\nonumber \end{equation} for all $\tilde{f} \in C([0,t] \times S \times U)$, together with the fact that $\tilde{f}_n(s,z,a) = \int f_i(y)p(dy|z,a,\bar{\theta}(s+u(n)))$ converges uniformly to $\tilde{f}(s,z,a) = \int f_i(y)p(dy|z,a,\tilde{\theta}(s))$. To prove the latter, define $g:C([0,t]) \times [0,t] \times S \times U \to \mathbb{R}$ by $g(\theta(\cdot), s,z,a) = \int f_i(y)p(dy|z,a, \theta(s))$. To see that $g$ is continuous we need to check that if $\theta_n(\cdot) \to \theta(\cdot)$ uniformly and $s(n) \to s$, then $\theta_n(s(n)) \to \theta(s)$. This is because $\|\theta_n(s(n)) - \theta(s)\| = \|\theta_n(s(n)) - \theta(s(n)) + \theta(s(n)) - \theta(s)\| \leq \|\theta_n(s(n)) - \theta(s(n))\| + \|\theta(s(n)) - \theta(s)\|$. The first and second terms go to zero due to the uniform convergence of $\theta_n(\cdot), n\geq 0$ and the continuity of $\theta(\cdot)$ respectively. Let $A = \{\bar{\theta}(u(n)+.)|_{[u(n),u(n)+t]}, n\geq 1\} \cup \{\tilde{\theta}(\cdot)|_{[0,t]}\}$. $A$ is compact, as it is the union of a convergent sequence of functions and its limit. So, $g|_{(A \times [0,t]\times S \times U)}$ is uniformly continuous.
Then using the same arguments as in Lemma~\ref{upsem} we can show equicontinuity of $\{\tilde{f}_n(.,.)\}$, which yields uniform convergence and thereby (\ref{conjun1}). An application of Lebesgue's theorem in conjunction with (\ref{conjun1}) shows that \begin{equation} \int (f_i(z) - \int f_i(y)p(dy|z,a,\tilde{\theta}(t)))\tilde{\mu}(t, dzda) = 0~\forall i\nonumber \end{equation} for a.e. $t$. By our choice of $\{f_i\}$, this leads to \begin{equation} \tilde{\mu}(t, dy \times U) = \int p(dy|z,a,\tilde{\theta}(t))\tilde{\mu}(t, dzda)\nonumber \end{equation} for a.e. $t$. Therefore the conclusion follows by disintegrating this measure as the product of its marginal on $S$ and the regular conditional law on $U$ (\cite[p.~140]{borkar}). \end{pf} \begin{rmk} Note that the above invariant distribution does not come ``naturally''; rather, it arises from the assumption made to match the natural timescale intuition for the controlled Markov noise component, i.e., that the slower iterate should see the average effect of the Markov component. \end{rmk} The proof of the following lemma is unchanged from its original version, so we state it for completeness and refer the reader to Lemma~2.3 of \cite{borkar} for its proof. \begin{lem} \label{eps3} Let $\mu^n(\cdot) \to \mu^{\infty}(\cdot) \in \mathcal{U}$. Let $\theta^n(\cdot), n=1, 2, \dots, \infty$ denote solutions to (\ref{auto}) corresponding to the case where $\mu(\cdot)$ is replaced by $\mu^n(\cdot)$, for $n=1,2,\dots, \infty$. Suppose $\theta^n(0) \to \theta^{\infty}(0)$. Then \begin{equation} \lim_{n \to \infty} \sup_{t\in [0, T]}\|\theta^n(t) - \theta^{\infty}(t)\| = 0 \nonumber \end{equation} for every $T >0$. \end{lem} \begin{lem} Almost surely, $\{\theta_n\}$ converges to an internally chain transitive set of the differential inclusion \begin{equation} \label{inc} \dot{\theta}(t) \in \hat{h}(\theta(t)), \end{equation} where $\hat{h}(\theta)=\{\tilde{h}(\theta,\nu) : \nu \in D(\theta)\}$.
\end{lem} \begin{pf} Lemma~\ref{eps3} shows that every limit point $(\tilde{\mu}(\cdot), \tilde{\theta}(\cdot))$ of $(\mu(s+.),\bar{\theta}(s+.))$ as $s\to \infty$ is such that $\tilde{\theta}(\cdot)$ satisfies (\ref{auto}) with $\mu(\cdot) = \tilde{\mu}(\cdot)$. Hence, $\tilde{\theta}(\cdot)$ is absolutely continuous. Moreover, using Lemma~\ref{eps}, one can see that it satisfies (\ref{inc}) a.e. $t$, hence is a solution to the differential inclusion (\ref{inc}). This completes the proof. \end{pf} \begin{lem}[Faster timescale result] $(\theta_n, w_n) \to \{(\theta, \lambda(\theta)) : \theta \in \mathbb{R}^d\}$ a.s. \end{lem} \begin{pf} We first rewrite (\ref{eqn1}) as \begin{equation} \theta_{n+1} = \theta_n + b(n)\left[\epsilon_n + M^{(3)}_{n+1}\right],\nonumber \end{equation} where $\epsilon_n = \frac{a(n)}{b(n)}h(\theta_n, w_n, Z^{(1)}_n)\to 0$ as $n\to \infty$ a.s. and $M^{(3)}_{n+1} = \frac{a(n)}{b(n)} M^{(1)}_{n+1}$ for $n\geq 0$. Let $\alpha_n=(\theta_n, w_n), \alpha=(\theta,w) \in \mathbb{R}^{d+k}, G(\alpha,z)=(0, g(\alpha,z)), \epsilon'_n=(\epsilon_n, 0), M^{(4)}_{n+1}= (M^{(3)}_{n+1}, M^{(2)}_{n+1})$. Then one can write (\ref{eqn1}) and (\ref{eqn2}) in the framework of (\ref{eps2}) as \begin{equation} \label{stability} \alpha_{n+1} = \alpha_n + b(n)\left[G(\alpha_n,Z^{(2)}_n) + \epsilon'_n + M^{(4)}_{n+1}\right], \end{equation} with $\epsilon'_n \to 0$ as $n \to \infty$. Hence, by the previous lemma, $\alpha_n, n\geq 0$ converges almost surely to an internally chain transitive set of the differential inclusion \begin{equation} \dot{\alpha}(t) \in \hat{G}(\alpha(t)),\nonumber \end{equation} where $\hat{G}(\alpha) = \{\tilde{G}(\alpha, \nu) : \nu \in D^{(2)}(\theta,w)\}$ with $\tilde{G}(\alpha,\nu)=(0,\tilde{g}(\theta,w,\nu))$.
In other words, $(\theta_n, w_n), n\geq 0$ converges to an internally chain transitive set of the differential inclusion \begin{equation} \label{coupled_di} \dot{w}(t) \in \hat{g}(\theta(t), w(t)), \quad \dot{\theta}(t) = 0. \end{equation} The rest follows from the second part of \textbf{(A6)}. \end{pf} \begin{rmk} Under the conditions mentioned in Remark 4, the above faster timescale result is modified as follows: \begin{lem}[Faster timescale result when $\lambda(\theta)$ is a local attractor] \label{fast_res2} Under assumptions \textbf{(A1) - (A5), (A6)'} and \textbf{(A7)}, on the event ``$\{w_n\}$ belongs to a compact subset $B$ (depending on the sample point) of $\bigcap_{\theta \in \mathbb R^d} G_\theta$ \textit{eventually}'', \begin{equation} (\theta_n, w_n) \to \{(\theta, \lambda(\theta)) : \theta \in \mathbb{R}^d\} \mbox{~~a.s.} \nonumber \end{equation} \end{lem} \begin{pf} Fix a sample point $\omega$. The proof follows from these observations: \begin{enumerate} \item continuity of the flow of the coupled o.d.e. with respect to the initial point, \item $\sup_n \|\theta_n\| = M_1 < \infty$, \item the fact that the set graph($\lambda$) is Lyapunov stable ($V'(.)$ as mentioned in \textbf{(A6)'} will be a Lyapunov function for this set), and \item the fact that $\bigcap_{t\geq 0} \overline{\{\bar{\alpha}(s): s \geq t\}}$ is an internally chain transitive set of the coupled o.d.e. \begin{equation} \label{copuled_ode} \dot{w}(t) = \hat{g}(\theta(t),w(t)), \quad \dot{\theta}(t) = 0, \end{equation} where $\bar{\alpha}(.)$ is the interpolated trajectory of the coupled iterate $\{\alpha_n\}$.
\end{enumerate} As $\{\theta: \|\theta\| \leq M_1\} \times B \subset \bigcup_{\theta \in \mathbb{R}^d} \left(\{\theta\} \times G_\theta\right)$, the first three observations show that for all $\epsilon>0$, there exists a $T_\epsilon >0$ such that any o.d.e. trajectory for (\ref{copuled_ode}) with starting point in the compact set $\{\theta: \|\theta\| \leq M_1\} \times B$ reaches the $\epsilon$-neighbourhood of graph($\lambda$) after time $T_\epsilon$. Further, \begin{equation} \bigcap_{t\geq 0} \overline{\{\bar{\alpha}(s): s \geq t\}} \subset \{\theta: \|\theta\| \leq M_1\} \times B. \nonumber \end{equation} Then one can use the last observation by choosing $T > T_{\epsilon}$ to show the required convergence to the set graph($\lambda$). \end{pf} \end{rmk} \begin{rmk} One interesting question in this context is to analyze whether one can extend the single timescale local attractor convergence statements to the two timescale setting under some \textit{verifiable conditions}. More specifically, if there is a global attractor $A_1$ for \begin{equation} \dot{\theta}(t) \in \hat{h}(\theta(t)), \nonumber \end{equation} then can one provide verifiable conditions to show \begin{equation} P [(\theta_n , w_n ) \to \cup_{\theta \in A_1} (\theta , \lambda(\theta))] > 0? \nonumber \end{equation} Here $\lambda(\theta)$ is a local attractor as mentioned in \textbf{(A6)'}. There are two ways in which this could possibly be tried: \begin{enumerate} \item Use Theorem \ref{thm_local}, where we show that on the event ``$\{w_n\}$ belongs to a compact subset $B$ (depending on the sample point) of $\bigcap_{\theta \in \mathbb R^d} G_\theta$ \textit{eventually}'', \begin{equation} (\theta_n, w_n) \to \cup_{\theta^* \in A_1}(\theta^*, \lambda(\theta^*)) \mbox{ a.s. as $n \to \infty$,}\nonumber \end{equation} which is an extension of the Kushner-Clark lemma to the two timescale case.
Therefore the task would be to impose verifiable assumptions under which $P$($\{w_n\}$ belongs to a compact subset $B$ (depending on the sample point) of $\bigcap_{\theta \in \mathbb R^d} G_\theta$ ``eventually'') $>$ 0. In a stochastic approximation scenario it is not immediately clear how one could impose verifiable assumptions that make such a probabilistic statement true. \item The second approach would be to extend the analysis of \cite{benaimode, benaim} to the two timescale case. In our opinion this is very hard, as that analysis is based on the notion of attractor introduced by Benaim et al., whereas the coupled o.d.e. (\ref{copuled_ode}), which tracks the coupled iterate $(\theta_n,w_n)$ (so that the interpolated trajectory of the coupled iterate is an asymptotic pseudo-trajectory \cite{benaimode} for (\ref{copuled_ode})), has no attractor. The reason is that one cannot obtain a fundamental neighbourhood for sets like $\cup_{\theta \in A_1} (\theta, \lambda(\theta))$, as the $\theta$ component remains constant along any trajectory of the above coupled o.d.e. \end{enumerate} Thus it is not immediately clear how this question can be addressed, and this will be an interesting future direction. \end{rmk} From the faster timescale results we get $\|w_n - \lambda(\theta_n)\| \to 0$ a.s., i.e., $\{w_n\}$ asymptotically tracks $\{\lambda(\theta_n)\}$ a.s. \\ \indent Now, consider the non-autonomous o.d.e. \begin{equation} \label{slow} \dot{\theta}(t) = \tilde{h}(\theta(t),\lambda(\theta(t)),\mu(t)), \end{equation} where $\mu(t) = \delta_{(Z^{(1)}_n,A^{(1)}_n)}$ when $t \in [t(n), t(n+1))$ for $n\geq 0$ and $\tilde{h}(\theta,w,\nu)=\int h(\theta,w,z) \nu(dz, U^{(1)})$. Let $\theta^s(t), t\geq s$ denote the solution to (\ref{slow}) with $\theta^s(s) = \bar{\theta}(s)$, for $s \geq 0$. Then \begin{lem} \label{track_slow} For any $T >0, \sup_{t\in [s,s+T]}\|\bar{\theta}(t) - \theta^s(t)\| \to 0,$ a.s. as $s \to \infty$.
\end{lem} \begin{pf} The slower recursion corresponds to \begin{equation} \theta_{n+1} = \theta_n + a(n)\left[h(\theta_n, w_n, Z^{(1)}_n) + M^{(1)}_{n+1}\right].\nonumber \end{equation} Let $t(n+m) \in [t(n), t(n) + T]$ and let $[t] =\max\{t(k) : t(k) \leq t\}$. Then by construction, \begin{align} \bar{\theta}(t(n+m)) &= \bar{\theta}(t(n)) + \sum_{k=0}^{m-1} a(n+k)h(\bar{\theta}(t(n+k)), w_{n+k}, Z^{(1)}_{n+k}) + \delta_{n, n+m} \nonumber\\ &= \bar{\theta}(t(n)) + \sum_{k=0}^{m-1} a(n+k)h(\bar{\theta}(t(n+k)), \lambda(\bar{\theta}(t(n+k))), Z^{(1)}_{n+k})\nonumber\\ &+\sum_{k=0}^{m-1} a(n+k)(h(\bar{\theta}(t(n+k)), w_{n+k}, Z^{(1)}_{n+k})- h(\bar{\theta}(t(n+k)), \lambda(\theta_{n+k}), Z^{(1)}_{n+k}))\nonumber\\ &+ \delta_{n, n+m},\nonumber \end{align} where $\delta_{n, n+m}=\zeta_{n+m}- \zeta_{n}$ with $\zeta_{n} = \sum_{m=0}^{n-1}a(m)M^{(1)}_{m+1}, n \geq 1$. Similarly, \begin{align} \theta^{t(n)}(t(n+m)) &= \bar{\theta}(t(n)) + \int_{t(n)}^{t(n+m)} \tilde{h}(\theta^{t(n)}(t), \lambda(\theta^{t(n)}(t)), \mu(t))dt\nonumber\\ &= \bar{\theta}(t(n)) + \sum_{k=0}^{m-1} a(n+k)h(\theta^{t(n)}(t(n+k)), \lambda(\theta^{t(n)}(t(n+k))), Z^{(1)}_{n+k})\nonumber\\ &+ \int_{t(n)}^{t(n+m)} (h(\theta^{t(n)}(t), \lambda(\theta^{t(n)}(t)), \mu(t)) - h(\theta^{t(n)}([t]), \lambda(\theta^{t(n)}([t])), \mu([t])))dt.\nonumber \end{align} Let $t(n) \leq t \leq t(n+m)$.
Now, if $0 \leq k \leq (m-1)$ and $t \in (t(n+k), t(n+k+1)],$ \begin{align} \|\theta^{t(n)}(t)\| &\leq \|\bar{\theta}(t(n))\| + \|\int_{t(n)}^{t} \tilde{h}(\theta^{t(n)}(\tau), \lambda(\theta^{t(n)}(\tau)), \mu(\tau))d\tau\|\nonumber\\ &\leq \|\theta_n\| + \sum_{l=0}^{k-1} \int_{t(n+l)}^{t(n+l+1)} (\|h(0,0,Z^{(1)}_{n+l})\|+ L^{(1)}(\|\lambda(0)\|+(K+1)\|\theta^{t(n)}(\tau)\|))d\tau\nonumber\\ & +\int_{t(n+k)}^t(\|h(0,0,Z^{(1)}_{n+k})\|+ L^{(1)}(\|\lambda(0)\|+(K+1)\|\theta^{t(n)}(\tau)\|))d\tau\nonumber\\ &\leq C_0 + (M+L^{(1)}\|\lambda(0)\|)T + L^{(1)}(K+1)\int_{t(n)}^{t}\|\theta^{t(n)}(\tau)\|d\tau,\nonumber \end{align} where $C_0 = \sup_n \|\theta_n\| < \infty$ and $M = \sup_{z \in S^{(1)}}\|h(0,0,z)\|$. By Gronwall's inequality, it follows that \begin{equation} \|\theta^{t(n)}(t)\| \leq (C_0 + (M+L^{(1)}\|\lambda(0)\|)T)e^{L^{(1)}(K+1)T}.\nonumber \end{equation} Moreover, \begin{align} \|\theta^{t(n)}(t) - \theta^{t(n)}(t(n+k))\| &\leq \int_{t(n+k)}^t\|h(\theta^{t(n)}(s), \lambda(\theta^{t(n)}(s)), Z^{(1)}_{n+k})\|ds\nonumber\\ &\leq (\|h(0,0,Z^{(1)}_{n+k})\|+ L^{(1)}\|\lambda(0)\|)(t-t(n+k))\nonumber\\ &+L^{(1)}(K+1)\int_{t(n+k)}^t\|\theta^{t(n)}(s)\|ds\nonumber\\ &\leq C_Ta(n+k),\nonumber \end{align} where $C_T=(M+ L^{(1)}\|\lambda(0)\|) + L^{(1)}(K+1)(C_0 + (M+L^{(1)}\|\lambda(0)\|)T)e^{L^{(1)}(K+1)T}.$ Thus, with $L=L^{(1)}(K+1)$, \begin{align} &\|\int_{t(n)}^{t(n+m)} (h(\theta^{t(n)}(t), \lambda(\theta^{t(n)}(t)), \mu(t)) - h(\theta^{t(n)}([t]), \lambda(\theta^{t(n)}([t])), \mu([t])))dt\|\nonumber\\ &\leq \sum_{k=0}^{m-1}\int_{t(n+k)}^{t(n+k+1)}\|h(\theta^{t(n)}(t), \lambda(\theta^{t(n)}(t)), Z^{(1)}_{n+k}) - h(\theta^{t(n)}([t]), \lambda(\theta^{t(n)}([t])), Z^{(1)}_{n+k})\|dt\nonumber\\ &\leq L\sum_{k=0}^{m-1}\int_{t(n+k)}^{t(n+k+1)}\|\theta^{t(n)}(t) - \theta^{t(n)}(t(n+k))\|dt\nonumber\\ &\leq C_TL\sum_{k=0}^{m-1}a(n+k)^2\nonumber\\ &\leq C_TL\sum_{k=0}^{\infty}a(n+k)^2 \to 0 \mbox{~as $n \to \infty$}.\nonumber \end{align} Hence \begin{align}
\|\bar{\theta}(t(n+m))-\theta^{t(n)}(t(n+m))\| &\leq L\sum_{k=0}^{m-1}a(n+k)\|\bar{\theta}(t(n+k)) - \theta^{t(n)}(t(n+k))\|\nonumber\\ & + C_TL\sum_{k=0}^{\infty}a(n+k)^2 + \sup_{k\geq 0}\|\delta_{n,n+k}\|\nonumber\\ & + L^{(1)}\sum_{k=0}^{m-1}a(n+k)\|w_{n+k} -\lambda(\theta_{n+k})\|\nonumber\\ &\leq L\sum_{k=0}^{m-1}a(n+k)\|\bar{\theta}(t(n+k)) - \theta^{t(n)}(t(n+k))\|\nonumber\\ & + C_TL\sum_{k=0}^{\infty}a(n+k)^2 + \sup_{k\geq 0}\|\delta_{n,n+k}\|\nonumber\\ & + L^{(1)}T\sup_{k\geq 0} \|w_{n+k} - \lambda(\theta_{n+k})\|,\nonumber \mbox{ a.s.} \end{align} Define \begin{equation} K_{T,n} = C_TL\sum_{k=0}^{\infty}a(n+k)^2 + \sup_{k\geq 0}\|\delta_{n,n+k}\| + L^{(1)}T\sup_{k\geq 0} \|w_{n+k} - \lambda(\theta_{n+k})\|. \nonumber \end{equation} Note that $K_{T,n} \to 0$ a.s. as $n \to \infty$. The remainder of the proof follows exactly as in the tracking lemma; see Lemma 1, Chapter 2 of \cite{borkar1}. \end{pf} \begin{lem} \label{ode} Suppose $\mu^n(\cdot) \to \mu^{\infty}(\cdot) \in U^{(1)}$. Let $\theta^n(\cdot), n=1, 2, \dots, \infty$, denote the solutions to (\ref{slow}) corresponding to the case where $\mu(\cdot)$ is replaced by $\mu^n(\cdot)$. Suppose $\theta^n(0) \to \theta^{\infty}(0)$. Then \begin{equation} \lim_{n \to \infty} \sup_{t\in [0, T]}\|\theta^n(t) - \theta^{\infty}(t)\| = 0 \nonumber \end{equation} for every $T >0$. \end{lem} \begin{pf} It is shown in Lemma~2.3 of \cite{borkar} that \begin{equation} \int_{0}^{t}\int \tilde{f}(s,z)\mu^{n}(s,dz)ds - \int_{0}^{t}\int \tilde{f}(s,z)\mu^{\infty}(s,dz)ds \to 0 \nonumber \end{equation} for any $\tilde{f} \in C([0,T]\times S)$. Using this, one can see that \begin{equation} \|\int_{0}^{t} (\tilde{h}(\theta^{\infty}(s),\lambda(\theta^{\infty}(s)), \mu^n(s)) - \tilde{h}(\theta^{\infty}(s), \lambda(\theta^{\infty}(s)), \mu^{\infty}(s)))ds \| \to 0.\nonumber \end{equation} This follows because $\lambda$ is continuous and $h$ is jointly continuous in its arguments.
As functions of $t$, the integrals on the left form an equicontinuous, pointwise bounded family. By the Arzel\`a-Ascoli theorem, this convergence must in fact be uniform for $t$ in a compact set. Now for $t>0$, \begin{align} &\|\theta^n(t)-\theta^{\infty}(t)\|\nonumber\\ &\leq \|\theta^n(0) - \theta^{\infty}(0)\| + \int_{0}^{t} \|\tilde{h}(\theta^n(s), \lambda(\theta^n(s)),\mu^n(s)) - \tilde{h}(\theta^{\infty}(s), \lambda(\theta^{\infty}(s)), \mu^{\infty}(s))\|ds\nonumber\\ &\leq \|\theta^n(0) - \theta^{\infty}(0)\| + \int_{0}^{t} (\|\tilde{h}(\theta^n(s), \lambda(\theta^n(s)),\mu^n(s)) - \tilde{h}(\theta^{\infty}(s), \lambda(\theta^{\infty}(s)), \mu^{n}(s))\|)ds\nonumber\\ &+ \int_{0}^{t} (\|\tilde{h}(\theta^{\infty}(s),\lambda(\theta^{\infty}(s)), \mu^{n}(s)) - \tilde{h}(\theta^{\infty}(s),\lambda(\theta^{\infty}(s)), \mu^{\infty}(s))\|)ds.\nonumber \end{align} Now, using the fact that $\lambda$ is Lipschitz with constant $K$, the remaining part of the proof follows in the same manner as Lemma~2.3 of \cite{borkar}. \end{pf} Note that Lemma~\ref{ode} shows that every limit point $(\tilde{\mu}(\cdot), \tilde{\theta}(\cdot))$ of $(\mu(s+.),\bar{\theta}(s+.))$ as $s\to \infty$ is such that $\tilde{\theta}(\cdot)$ satisfies (\ref{slow}) with $\mu(\cdot) = \tilde{\mu}(\cdot)$. \begin{lem} \label{slowmu} Almost surely, every limit point of $(\mu(s+.),\bar{\theta}(s+.))$ as $s \to \infty$ is of the form $(\tilde{\mu}(\cdot), \tilde{\theta}(\cdot))$, where $\tilde{\mu}(\cdot)$ satisfies $\tilde{\mu}(t) \in D^{(1)}(\tilde{\theta}(t), \lambda(\tilde{\theta}(t)))$. \end{lem} \begin{pf} Suppose that $u(n)\uparrow \infty$, $\mu(u(n)+.) \to \tilde{\mu}(\cdot)$ and $\bar{\theta}(u(n)+.) \to \tilde{\theta}(\cdot)$. Let $\{f_i\}$ be a countable dense subset of the unit ball of $C(S)$; it is then a separating class, i.e., $\int f_i d\mu = \int f_i d\nu$ for all $i$ implies $\mu=\nu$.
For each $i$, \begin{equation} \zeta^i_n = \sum_{m=1}^{n-1}a(m)(f_i(Z^{(1)}_{m+1}) - \int f_i(y)p(dy|Z^{(1)}_m, A^{(1)}_m, \theta_m, w_m)),\nonumber \end{equation} is a zero-mean martingale w.r.t. the increasing $\sigma$-fields $\mathcal{F}_n = \sigma(\theta_m, w_m, Z^{(1)}_m,A^{(1)}_m, m\leq n), n\geq 1$. Moreover, it is square-integrable since the $f_i$'s are bounded and each $\zeta^i_n$ is a finite sum. Its quadratic variation process \begin{equation} A_n=\sum_{m=0}^{n-1}a(m)^2 E[(f_i(Z^{(1)}_{m+1}) - \int f_i(y)p(dy|Z^{(1)}_m, A^{(1)}_m, \theta_m, w_m))^2|\mathcal{F}_m] + E[(\zeta^i_0)^2]\nonumber \end{equation} is almost surely convergent. By the martingale convergence theorem, $\{\zeta^i_n\}$ converges a.s. Let $\tau(n,t)=\min\{m \geq n: t(m) \geq t(n)+t\}$ for $t\geq0, n\geq0$. Then as $n\to\infty$, \begin{equation} \sum_{m=n}^{\tau(n,t)} a(m)(f_i(Z^{(1)}_{m+1})-\int f_i(y)p(dy|Z^{(1)}_m, A^{(1)}_m,\theta_m, w_m))\to 0,\mbox{ a.s.,}\nonumber \end{equation} for $t >0$. By our choice of $\{f_i\}$ and the fact that $\{a(n)\}$ are eventually non-increasing, \begin{equation} \sum_{m=n}^{\tau(n,t)}(a(m) - a(m+1))f_i(Z^{(1)}_{m+1}) \to 0,\mbox{ a.s.}\nonumber \end{equation} Combining the two limits above (a summation by parts, using $a(n) \to 0$ and the boundedness of the $f_i$'s to control the boundary terms), \begin{equation} \sum_{m=n}^{\tau(n,t)} a(m)(f_i(Z^{(1)}_m)-\int f_i(y)p(dy|Z^{(1)}_m, A^{(1)}_m,\theta_m,w_m))\to 0,\mbox{ a.s.,}\nonumber \end{equation} which implies \begin{equation} \int_{t(n)}^{t(n)+t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\hat{\theta}(s),\hat{w}(s)))\mu(s,dzda))ds \to 0,\mbox{ a.s.}\nonumber \end{equation} Recall that $u(n)$ is a general sequence, not necessarily of the form $t(n)$. Therefore \begin{equation} \int_{u(n)}^{u(n)+t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\hat{\theta}(s),\hat{w}(s)))\mu(s,dzda))ds \to 0,\mbox{ a.s.,}\nonumber \end{equation} (this follows from the fact that $a(n)\to 0$ and the $f_i$'s are bounded) where $\hat{\theta}(s) = \theta_n$ and $\hat{w}(s) = w_n$ when $s \in [t(n), t(n+1)), n\geq 0$.
Now, one can claim from the above that \begin{equation} \int_{u(n)}^{u(n)+t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\bar{\theta}(s), \lambda(\bar{\theta}(s))))\mu(s,dzda))ds \to 0,\mbox{ a.s.}\nonumber \end{equation} This is due to the fact that the map $S^{(1)} \times U^{(1)} \times \mathbb{R}^{d+k} \ni (z,a,\theta,w) \to \int f_i(y)p(dy|z,a,\theta,w)$ is continuous and hence uniformly continuous on the compact set $A = \linebreak S^{(1)} \times U^{(1)} \times M_1 \times M_2$ where $M_1$ is the compact set s.t. $\theta_n \in M_1$ for all $n$ and $M_2=\linebreak \{w : \|w\| \leq \max(\sup\|w_n\|, K')\}$ where $K'$ is the bound for the compact set $\lambda(M_1)$. Here we also use the fact that $\|w_m - \lambda(\bar{\theta}(s))\|\to 0$ for $s\in [t(m), t(m+1))$, as $\lambda$ is Lipschitz and $\|w_m -\lambda(\theta_m)\| \to 0$. The above convergence is equivalent to \begin{equation} \int_{0}^{t}(\int(f_i(z) - \int f_i(y)p(dy|z,a,\bar{\theta}(s+u(n)), \lambda(\bar{\theta}(s+u(n)))))\mu(s+u(n),dzda))ds \to 0\mbox{ a.s.}\nonumber \end{equation} Fix a sample point in the probability one set on which the convergence above holds for all $i$. Then the convergence above leads to \begin{equation} \label{conjun} \int_{0}^{t}(\int(f_i(z) - \int f_i(y)p(dy|z,a, \tilde{\theta}(s), \lambda(\tilde{\theta}(s))))\tilde{\mu}(s, dzda))ds =0~\forall i. \end{equation} To show this, we use a part of the proof of Lemma~2.3 of \cite{borkar}: if $\mu^n(\cdot) \to \mu^{\infty}(\cdot) \in \mathcal{U}$ then for any $t$, \begin{equation} \int_{0}^{t} \int \tilde{f}(s,z,a)\mu^n(s,dzda)ds - \int_{0}^{t} \int \tilde{f}(s,z,a)\mu^{\infty}(s,dzda)ds \to 0\nonumber \end{equation} for all $\tilde{f} \in C([0,t] \times S^{(1)} \times U^{(1)})$.
In addition, we make use of the fact that $\tilde{f}_n(s,z,a) = \linebreak \int f_i(y)p(dy|z,a,\bar{\theta}(s+u(n)), \lambda(\bar{\theta}(s+u(n))))$ converges uniformly to $\tilde{f}(s,z,a) = \int f_i(y)p(dy|z,a,\tilde{\theta}(s), \lambda(\tilde{\theta}(s)))$. To prove this, define $g :C([0,t]) \times [0,t] \times S^{(1)} \times U^{(1)} \to \mathbb{R}$ by $g(\theta(\cdot), s,z,a) = \int f_i(y)p(dy|z,a, \theta(s),\lambda(\theta(s)))$. Let $A' = \{\bar{\theta}(u(n)+.)|_{[u(n),u(n)+t]}, n \geq 1\} \cup \{\tilde{\theta}(\cdot)|_{[0,t]}\}$. Using the same argument as in Lemma~\ref{eps} and \textbf{(A6)}, i.e., $\lambda$ is Lipschitz (the latter helps to claim that if $\theta_n(\cdot) \to \theta(\cdot)$ uniformly then $\lambda(\theta_n(\cdot)) \to \lambda(\theta(\cdot))$ uniformly), it can be seen that $g$ is continuous. Moreover, $A'$ is compact, being the union of a convergent sequence of functions and its limit. So, $g|_{(A'\times [0,t] \times S^{(1)} \times U^{(1)})}$ is uniformly continuous. Then a similar argument as in Lemma~\ref{upsem} shows equicontinuity of $\{\tilde{f}_n(.,.)\}$, which yields uniform convergence and thereby (\ref{conjun}). An application of Lebesgue's theorem in conjunction with (\ref{conjun}) shows that \begin{equation} \int (f_i(z) - \int f_i(y)p(dy|z,a,\tilde{\theta}(t), \lambda(\tilde{\theta}(t))))\tilde{\mu}(t, dzda) = 0~\forall i\nonumber \end{equation} for a.e. $t$. By our choice of $\{f_i\}$, this leads to \begin{equation} \tilde{\mu}(t, dy \times U^{(1)}) = \int p(dy|z,a,\tilde{\theta}(t), \lambda(\tilde{\theta}(t)))\tilde{\mu}(t, dzda),\nonumber \end{equation} a.e. $t$. \end{pf} Lemma~\ref{ode} shows that every limit point $(\tilde{\mu}(\cdot), \tilde{\theta}(\cdot))$ of $(\mu(s+.),\bar{\theta}(s+.))$ as $s\to \infty$ is such that $\tilde{\theta}(\cdot)$ satisfies (\ref{slow}) with $\mu(\cdot) = \tilde{\mu}(\cdot)$. Hence, $\tilde{\theta}(\cdot)$ is absolutely continuous.
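The conclusion of Lemma~\ref{slowmu} is an invariance condition: the state marginal of $\tilde{\mu}(t)$ is stationary for the transition kernel frozen at $(\tilde{\theta}(t), \lambda(\tilde{\theta}(t)))$. For a finite uncontrolled chain this reduces to the familiar equation $\mu = \mu P$; the following sketch (with a hypothetical two-state kernel) illustrates the identity being asserted:

```python
# Hypothetical frozen transition kernel p(.|z) for a two-state chain
# (row z lists the probabilities of moving to each state y).
P = [[0.9, 0.1],
     [0.4, 0.6]]

# For a two-state chain the invariant distribution has the closed form
# mu(0) = p(0|1) / (p(1|0) + p(0|1)).
denom = P[0][1] + P[1][0]
mu = [P[1][0] / denom, P[0][1] / denom]

# Invariance check, mirroring the conclusion of the lemma:
# mu(y) = sum_z p(y|z) mu(z) for each state y.
for y in range(2):
    assert abs(mu[y] - sum(mu[z] * P[z][y] for z in range(2))) < 1e-12
```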
Moreover, using Lemma~\ref{slowmu}, one can see that it satisfies (\ref{slower_di}) for a.e. $t$, hence is a solution to the differential inclusion (\ref{slower_di}). \\ \indent \begin{pf}[Proof of Theorems \ref{thm} and \ref{thm_local}] From the previous three lemmas it is easy to see that $A_0 = \cap_{t\geq 0}\overline{\{\bar{\theta}(s): s\geq t\}}$ is almost surely an internally chain transitive set of (\ref{slower_di}). \end{pf} \begin{pf}[Proof of Corollary \ref{main_col}] Follows directly from Theorem \ref{thm} and Lemma~\ref{ga}. \end{pf} \section{Discussion on the assumptions: Relaxation of (A2)} \label{relax} We discuss relaxing the requirement that the Lipschitz constant of the vector field be uniform w.r.t. the state of the controlled Markov process. The modified assumption here is \begin{enumerate}[label=\textbf{(A\arabic*)'}] \setcounter{enumi}{1} \item $h : \mathbb{R}^{d+k} \times S^{(1)} \to \mathbb{R}^d$ is jointly continuous, and Lipschitz in its first two arguments for each fixed value of the third, with a Lipschitz constant that depends on that value. The latter condition means that \begin{equation} \forall z^{(1)} \in S^{(1)}, \|h(\theta, w, z^{(1)}) - h(\theta', w', z^{(1)})\| \leq L^{(1)}(z^{(1)})(\|\theta-\theta'\| + \|w - w'\|).\nonumber \end{equation} A similar condition holds for $g$, where the Lipschitz constant is $L^{(2)}: S^{(2)} \to \mathbb{R}^+$. \end{enumerate} Note that this allows $L^{(i)}(\cdot)$ to be an unbounded measurable function (necessarily discontinuous, as $S^{(i)}$ is compact by \textbf{(A1)}). The straightforward way to handle this is to additionally assume the following: \begin{enumerate}[label=\textbf{(A\arabic*)}] \setcounter{enumi}{7} \item $\sup_n L^{(i)}(Z^{(i)}_n) < \infty$ a.s. \end{enumerate} still allowing $L^{(i)}(\cdot)$ to be an unbounded function. As all our proofs in Section \ref{mres} are shown for every sample point of a probability 1 set, our proofs will go through.
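As a quick numerical illustration of \textbf{(A8)} (a hypothetical sketch, not part of the formal argument): for a transient random walk on $\mathbb{Z}$ with up-probability $p>1/2$ and $L(n)=\big(\frac{1-p}{p}\big)^n$, each sample path of $L$ stays bounded, since the path-wise supremum is attained at the walk's minimum position, but the bound varies from path to path, so no deterministic bound exists.

```python
import random

random.seed(0)
p = 0.75                      # upward drift: the walk is transient to +infinity
ratio = (1 - p) / p           # L(n) = ratio**n blows up along negative excursions

def pathwise_sup_L(steps=10_000):
    z = min_z = 0
    for _ in range(steps):
        z += 1 if random.random() < p else -1
        min_z = min(min_z, z)
    # sup_n L(Z_n) along this path is attained at the minimum position reached
    return ratio ** min_z

sups = [pathwise_sup_L() for _ in range(200)]
assert all(s < float("inf") for s in sups)   # finite along every path, as in (A8)
assert min(sups) < max(sups)                 # ...but the bound is path-dependent
```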
In the following we give such an example for the case where the Markov process is uncontrolled. It is enough to consider examples with locally compact $S^{(i)}$ (because then we can take the standard one-point compactification and define $L^{(i)}$ arbitrarily at the extra point). Let $S^{(i)}=\Bbb Z$ and let $Z^{(i)}_n, n \geq 0$ be the Markov chain on $\Bbb Z$ starting at $0$ with transition probabilities $p(n,n+1)=p$ and $p(n,n-1)=1-p$. We assume $1/2 < p < 1$. Let $L^{(i)}(n) = \big( \frac{1-p}{p} \big)^n$. Note that $Z^{(i)}_n,n \geq 0$ is a transient Markov chain with $Z^{(i)}_n \to +\infty$ a.s. From this it follows that $\inf_n Z^{(i)}_n>-\infty$, and thus $\sup_n L^{(i)}(Z^{(i)}_n)< \infty$ almost surely. It follows that $(L^{(i)}(Z^{(i)}_n))_{n \in \Bbb N}$ is a bounded sequence with probability $1$, but this bound is clearly not deterministic, since for every $k$ there is a non-zero probability that the sample path reaches $-k$. However, in the following we discuss the idea of using moment assumptions to analyze convergence in the single-timescale controlled Markov noise framework of \cite{borkar}. We show that the iterates (\ref{eps2}) (with $\epsilon_n=0$) converge to an internally chain transitive set of the o.d.e. (\ref{auto}). For this we prove Lemma \ref{track_fast} under the following assumptions: For all $T >2, i=1,2$, \begin{enumerate}[label=\textbf{(S\arabic*)}] \item The controlled Markov process $Y_n$ as described in \cite{borkar} takes values in a compact metric space. \item For all $n>0$, $0< a(n) \leq 1$, $\sum_n a(n) = \infty$, $\sum_n a(n)^2 < \infty$ and $a(n+1)\leq a(n), n\geq 0$. \item $h : \mathbb{R}^{d} \times S \to \mathbb{R}^d$ is Lipschitz in its first argument for each fixed value of the second, i.e., \begin{equation} \forall z \in S, \|h(\theta, z) - h(\theta', z)\| \leq L(z)(\|\theta-\theta'\|).\nonumber \end{equation} \item Let $\phi(n,T) = \max\{m: a(n) + a(n+1) + \dots + a(n+m) \leq T\}$ (the bounds below may depend on $T$).
Then \begin{equation} \sup_n E\left[\left(\sup_{0 \leq m \leq \phi(n,T)}L(Y_{n+m})\right)^{16}\right] < \infty. \nonumber \end{equation} \item \begin{equation} \sup_n E\left[e^{8\sum_{m=0} ^ {\phi(n,T)} a(n+m) L(Y_{n+m})}\right] < \infty. \nonumber \end{equation} Note that \textbf{(S4)} and \textbf{(S5)} are trivially satisfied in the case when $L(z)= L$ for all $z \in S$, i.e., the case of Section \ref{sec_def}. \begin{rmk} As long as one can prove Lemma \ref{track_fast} for all $T >2$ it will hold for all $T>0$; thus one can combine \textbf{(S4)} and \textbf{(S5)} into the following assumption: \begin{equation} \sup_n E\left[e^{8T\sup_{0 \leq m \leq \phi(n,T)} L(Y_{n+m})}\right] < \infty. \nonumber \end{equation} As an instance where such an assumption can be verified, consider the Markov process of \cite[Eqn. (3.4)]{metivier} defined by \begin{equation} Y_{n+1}= A(\theta_n) Y_n + B(\theta_n) W_{n+1} \nonumber \end{equation} where $A(\theta), B(\theta), \theta \in \mathbb{R}^d$, are $k \times k$-matrices and $(W_n)_{n\geq 0}$ are independent and identically distributed $\mathbb{R}^k$-valued random variables. Assume that the following conditions hold for all $x,y \in S$: \begin{enumerate} \item $L(Y_n)$ is a non-decreasing sequence. \item For $r>0, R>0$, \begin{equation} \sup_{\|\theta\| \leq R} e^{rL(A(\theta) x + B(\theta) y)} \leq L_R {\alpha_R}^r e^{r L(x)} + M_R e^{C_R L(y)} \nonumber \end{equation} for some $C_R, M_R, L_R >0$ and $\alpha_R < 1$. \end{enumerate}Then \begin{align} \nonumber &E\left[e^{rL(Y_n)}|Y_{n-1} = x, \theta_{n-1} = \theta\right] \\ \nonumber &\leq \int e^{rL\left(A(\theta)x + B(\theta) y\right)} \mu_n (dy) \\ \nonumber &\leq L_R {\alpha_R}^r e^{rL(x)} + M_R E\left[e^{C_R L(W_n)}\right] \\ \nonumber &=L_R{\alpha_R}^r e^{r L(x)} + K_R,\nonumber \end{align} with $K_R = M_R E\left[e^{C_R L(W_n)}\right]$ (this follows from the fact that the $W_n$ are i.i.d., if we assume that $E\left[e^{C_R L(W_1)}\right] < \infty$).
Choosing $r$ large enough that $\beta_R := L_R{\alpha_R}^r < 1$ (possible since $\alpha_R < 1$), one can show that \begin{equation} E\left[e^{rL(Y_n)}|Y_{n-1} = x, \theta_{n-1} = \theta\right] \leq \beta_R e^{r L(x)}+ K_R. \nonumber \end{equation} Using the above, for large $r$ \begin{align} E\left[e^{r L(Y_n)}\right] = E\left[E\left[e^{r L(Y_n)}| Y_{n-1}, \theta_{n-1}\right]\right] \leq \beta_R E\left[e^{rL(Y_{n-1})}\right] + K_R, \nonumber \end{align} which shows that \begin{equation} \sup_n E\left[e^{rL(Y_n)}\right] < \infty.\nonumber \end{equation} Choosing $r > 8T$, \begin{equation} \sup_n E\left[e^{8TL(Y_n)}\right] < \infty.\nonumber \end{equation} Note that this is a much weaker assumption than \textbf{(A8)}. \end{rmk} \item The noise sequence $M_n, n \geq 0$ (which need not be a martingale difference sequence) satisfies the following condition: \begin{equation} \sup_n E\left[\left(\sum_{m=0} ^ {\phi(n,T)} \|M_{n+m+1}\|\right)^4\right] < \infty. \nonumber \end{equation} \item $\sup_n \|\theta_n\| < \infty$. \end{enumerate} With the above assumptions we prove the following tracking lemma: \begin{lem} \label{track} For any $T >0$, $\sup_{t\in [s,s+T]}\|\bar{\theta}(t) - \theta^s(t)\| \to 0$ as $s \to \infty$, a.s. \end{lem} \begin{pf} Let $t(n) \leq t \leq t(n+m)$. Now, if $0 \leq k \leq (m-1)$ and $t \in (t(n+k), t(n+k+1)],$ \begin{align} \|\theta^{t(n)}(t)\| &\leq \|\bar{\theta}(t(n))\| + \|\int_{t(n)}^{t} \tilde{h}(\theta^{t(n)}(\tau), \mu(\tau))d\tau\|\nonumber\\ &\leq \|\theta_n\| + \sum_{l=0}^{k-1} \int_{t(n+l)}^{t(n+l+1)} (\|h(0,Y_{n+l})\|+ L(Y_{n+l})\|\theta^{t(n)}(\tau)\|)d\tau\nonumber\\ & +\int_{t(n+k)}^t(\|h(0,Y_{n+k})\|+ L(Y_{n+k})\|\theta^{t(n)}(\tau)\|)d\tau\nonumber\\ &\leq C_0 + MT+ \int_{t(n)}^{t} L(Y(\tau)) \|\theta^{t(n)}(\tau)\|d\tau \nonumber \end{align} where $Y(\tau) = Y_n$ if $\tau \in [t(n),t(n+1))$. Then it follows from an application of Gronwall's inequality that \begin{equation} \|\theta^{t(n)}(t)\| \leq C e^{\int_{t(n)}^{t} L(Y(\tau)) d\tau} \mbox{~~a.e.
$t$}\nonumber \end{equation} where $C=C_0 + MT$. Next, \begin{align} \|\theta^{t(n)}(t) - \theta^{t(n)}(t(n+k))\| &\leq \int_{t(n+k)}^t\|h(\theta^{t(n)}(s), Y_{n+k})\|ds\nonumber\\ &\leq \|h(0,Y_{n+k})\|(t-t(n+k))+ L(Y_{n+k})\int_{t(n+k)}^t\|\theta^{t(n)}(s)\|ds\nonumber\\ &\leq M a(n+k) + C L(Y_{n+k})\int_{t(n+k)}^te^{\int_{t(n)}^s L(Y(\tau)) d\tau} ds.\nonumber \end{align} Then \begin{align} &\|\int_{t(n)}^{t(n+m)} (h(\theta^{t(n)}(t), \mu(t)) - h(\theta^{t(n)}([t]), \mu([t])))dt\|\nonumber\\ &\leq \sum_{k=0}^{m-1}\int_{t(n+k)}^{t(n+k+1)}\|h(\theta^{t(n)}(t), Y_{n+k}) - h(\theta^{t(n)}([t]), Y_{n+k})\|dt\nonumber\\ &\leq \sum_{k=0}^{m-1}L(Y_{n+k}) \int_{t(n+k)}^{t(n+k+1)} \|\theta^{t(n)}(t) - \theta^{t(n)}(t(n+k))\|dt\nonumber\\ &\leq \sum_{k=0}^{m-1} c_k\nonumber \end{align} where \begin{equation} c_k = L(Y_{n+k})a(n+k)^2\left[M + CL(Y_{n+k}) e^{\sum_{i=0}^{k} a(n+i) L(Y_{n+i})}\right].\nonumber \end{equation} \begin{align} \|\bar{\theta}(t(n+m))-\theta^{t(n)}(t(n+m))\| &\leq \sum_{k=0}^{m-1}L(Y_{n+k})a(n+k)\|\bar{\theta}(t(n+k)) - \theta^{t(n)}(t(n+k))\|\nonumber\\ & + \sum_{k=0}^{m-1} c_k + \|\delta_{n,n+m}\|,\nonumber \end{align} where $\delta_{n,n+m}=\sum_{k=n}^{n+m-1}a(k)M_{k+1}$. Therefore using the discrete Gronwall inequality we get \begin{equation} \|\bar{\theta}(t(n+m))-\theta^{t(n)}(t(n+m))\| \leq r(m,n) e^{\sum_{k=0}^{m-1} a(n+k) L(Y_{n+k})}\nonumber \end{equation} where $r(m,n) = \sum_{k=0}^{m-1} (c_k + a(n+k) \|M_{n+k+1}\|)$. Now, for some $\lambda \in [0,1]$, \begin{align} &\|\theta^{t(n)}(t) - \bar{\theta}(t)\| \nonumber \\ &\leq (1-\lambda) \|\theta^{t(n)}(t(n+m+1)) - \bar{\theta}(t(n+m+1))\| +\lambda \|\theta^{t(n)}(t(n+m))-\bar{\theta}(t(n+m))\| \nonumber \\ & + \max(\lambda, 1- \lambda) \int_{t(n+m)}^{t(n+m+1)} \|\tilde{h}(\theta^{t(n)}(s),\mu(s))\|ds \nonumber \\ &\leq r(m+1,n) e^{\sum_{k=0}^{m} a(n+k) L(Y_{n+k})} + a(n+m)\left[M + C L(Y_{n+m}) e^{\sum_{k=0}^{m} a(n+k) L(Y_{n+k})}\right]\nonumber.
\end{align} Therefore \begin{align} \rho(n,T):= \sup_{t \in [t(n),t(n) + T]} \|\theta^{t(n)}(t) - \bar{\theta}(t)\| &\leq r(\phi(n,T+1),n) e^{\sum_{k=0}^{\phi(n,T)} a(n+k) L(Y_{n+k})} \nonumber \\ &+ a(n)\left[M + C \sup_{0\leq m \leq \phi(n,T)} L(Y_{n+m}) e^{\sum_{k=0}^{\phi(n,T)} a(n+k) L(Y_{n+k})}\right].\nonumber \end{align} Now, to prove the a.s. convergence of the quantity on the left-hand side as $n \to \infty$, we have, using the Cauchy-Schwarz inequality: \begin{align} \sum_{n=1}^{\infty} E[{\rho(n,T)}^2] \leq & 2K_T \sum_{n=1}^{\infty}\left(E\left[\left(r(\phi(n,T+1),n)\right)^4\right]\right)^{1/2} + 4M^2\sum_{n=0}^{\infty}a(n)^2 + \nonumber \\ & 4C^2\sum_{n=1}^{\infty} a(n)^2 E\left[\left(\sup_{0\leq m \leq \phi(n,T)} L(Y_{n+m})\right)^2 e^{2\sum_{k=0}^{\phi(n,T)} a(n+k) L(Y_{n+k})}\right],\nonumber \end{align} where $K_T = \sqrt{\sup_n E[e^{4\sum_{k=0}^{\phi(n,T)} a(n+k) L(Y_{n+k})}]}$, which depends only on $T$ due to \textbf{(S5)}. Now, the third term on the R.H.S. is clearly finite from assumptions \textbf{(S4)} and \textbf{(S5)}. Now we analyze the first term, i.e. \begin{align} \label{num} \sum_{n=1}^{\infty}\left(E\left[{r(\phi(n,T+1),n)}^4\right]\right)^{1/2} \leq & 2\sqrt{2}\sum_{n=1}^{\infty}\left(E\left[\left(\sum_{k=0}^{\phi(n,T)} c_k\right)^4\right]\right)^{1/2} \nonumber \\ & + 2\sqrt{2}\sum_{n=1}^{\infty} \left(E\left[\left(\sum_{k=0}^{\phi(n,T)} a(n+k) \|M_{n+k+1}\|\right)^4\right]\right)^{1/2}.
\end{align} Next we analyze the first term on the R.H.S. of (\ref{num}), again using the Cauchy-Schwarz inequality: \begin{align} &\sum_{n=1}^{\infty}\left(E\left[\left(\sum_{k=0}^{\phi(n,T)} c_k\right)^4\right]\right)^{1/2} \nonumber \\ &\leq 8 M^2 \sum_{n=1}^{\infty} \phi(n,T)^2 a(n)^4 \left(E\left[\left(\sup_{0 \leq k \leq \phi(n,T)} L(Y_{n+k})\right)^4\right]\right)^{1/2}+ \nonumber \\ & 8 C^2 \sum_{n=1}^{\infty} \phi(n,T)^2 a(n)^4 \left(E\left[\left(\sup_{0 \leq k \leq \phi(n,T)} L(Y_{n+k})\right)^8 e^{4\sum_{i=0}^{\phi(n,T)} a(n+i) L(Y_{n+i})}\right]\right)^{1/2}.\nonumber \end{align} Therefore the R.H.S. will be finite if we can show that $\sum_{n=1}^{\infty} \phi(n,T)^2 a(n)^4$ is finite. For the common step-size sequence $a(n) =\frac{1}{n}$, $\phi(n,T)= O(n)$, so the above series clearly converges. One can make the series converge for all $a(n)= \frac{1}{n^k}$ with $\frac{1}{2} < k \leq 1$ by imposing higher-moment assumptions in \textbf{(S4)} and \textbf{(S5)}. In the above we have used the following inequality repeatedly for non-negative random variables $X$ and $Y$: \begin{equation} \sqrt{E\left[\left(X+Y\right)^{2^n}\right]} \leq 2^{\frac{2^n-1}{2}}\left[\sqrt{E[X^{2^n}]} + \sqrt{E[Y^{2^n}]}\right]\nonumber \end{equation} with $n \in \mathbb{N}$. Now, \begin{align} \sum_{n=1}^{\infty} \left(E\left[\left(\sum_{k=0}^{\phi(n,T)} a(n+k) \|M_{n+k+1}\|\right)^4\right]\right)^{1/2} \nonumber \\ \leq \sum_{n=1}^{\infty} a(n)^2\left(E\left[\left(\sum_{k=0} ^ {\phi(n,T)} \|M_{n+k+1}\|\right)^4\right]\right)^{1/2}\nonumber \end{align} which is finite under assumption \textbf{(S6)} and the fact that the $a(n)$ are non-increasing. \end{pf} \begin{comment} We can use the above results for the individual case of faster and slower timescale with $\phi(n,T)$, Lipschitz constant and Markov process determined in \textbf{(A11)-A(14)} according to the faster and slower timescale parameters respectively.
This is justified due to the fact that the evolution of the process $Z^{(i)}_n$ and the martingale difference noise depends on both the faster and slower timescale iterate. Therefore if the dependency is functional then the assumptions may be verifiable. Additionally, need to make the following extra assumption for the slower timescale proof to go through \setcounter{enumi}{14} \item \begin{equation} \sup_n E\left[\left(\sum_{m=0} ^ {\phi(n,T)} \|w_{n+m} - \lambda(\theta_{n+m})\| \right)^{8}\right] < \infty \nonumber \end{equation} \end{enumerate} \end{comment} \begin{comment} \subsection{Sufficient condition for \textbf{(A8)}} \label{stability} In the following we give sufficient conditions for \textbf{(A8)} in case $D^{(i)}(\theta,w)$ is singleton for every $\theta,w$: \begin{enumerate} \item $g_{\infty}(\theta,w) = \lim_{c \uparrow \infty} \frac{\tilde{g}(c\theta, cw,\eta^{(2)}(c\theta,cw)}{c})$ exists uniformly on compacts \item The o.d.e $\dot{w}(t) = g_{\infty}(\theta,w(t))$ is well-posed and has $\lambda_{\infty}(\theta)$ as the unique globally asymptotically stable equilibrium with $\lambda_{\infty}:\mathbb{R}^d \to \mathbb{R}^k$ is a Lipschitz map. Further, $0 \leq \|\lambda_{\infty}(0)\| < 1/4$ \item $h_{\infty}(\theta) = \lim_{c \uparrow \infty} \frac{\tilde{h}(c\theta, c\lambda_{\infty}(\theta),\eta^{(1)}(c\theta,c\lambda_{\infty}(\theta))}{c}$ exists uniformly on compacts \item The o.d.e $\dot{\theta}(t) = h_{\infty}(\theta(t))$ is well posed and has the origin as the unique globally asymptotically stable equilibrium. \end{enumerate} \proof{Proof Outline} The proof is based on the scaling of iterates and is a straightforward combination of the stability proof given in Chap. 3 of \cite{borkar1}, Chap. 6.3 \cite[Theorem 9]{borkar1} and the proofs of Section \ref{mres}. 
In the following we state the main lemmas: \begin{lem} There exists $T>0$ such that for all initial conditions $w$ on the unit sphere of $\lambda_{\infty}(0)$ \begin{equation} \|\phi_{\infty}(t,w) - \lambda_{\infty}(0)\| < 1/8 \end{equation} \end{lem} where $\phi_{\infty}(t,w)$ is the trajectory of the o.d.e $\dot{w}(t) = g_{\infty}(0,w(t))$ with the initial point $w$. \begin{lem} There exist $c_0 >0$ and $T>0$ such that for all initial conditions $w$ on the unit sphere of $\lambda_{\infty}(0)$, \begin{equation} \|\phi_c(t,w)-\lambda_{\infty}(0)\| < 1/4 \end{equation} for $t \in [T,T+1]$ and $c > c_0$. \end{lem} Here $\phi_c(t,w)$ stands for the trajectory of the o.d.e $\dot{w}(t) = \frac{\tilde{g}(0, cw(t),\eta^{(2)}(0,cw(t))}{c})$ with the initial point $w$. Let $r(n):=\max(\|\bar{w}(T_n)\|, \|\bar{\theta}(T_n)\|, 1)$. Let $\theta^s_n, w^s_n$ denote the corresponding scaled iterates as in \cite[Chap. 3]{borkar1} \textbf{Faster timescale result for scaled trajectory} \begin{lem} \begin{equation} (\theta^s_n, w^s_n)\to \{(\theta, \lambda_{\infty}(\theta)): \theta \in \mathbb{R}^d\} \mbox{a.s.} \end{equation} \end{lem} which implies that $\|w^s_n - \lambda_{\infty}(\theta^s_n)\| \to 0$ a.s. Using this the slower timescale convergence result for the scaled trajectory is straightforward. \begin{rmk} The reason of scaling both $\theta, w$ is the fact that one can prove the convergence of scaled faster timescale iterate only through proving convergence of the coupled iterates $(\theta^s_n,w^s_n)$. \end{rmk} Following the similar lines as in the proof of Theorem 7 of \cite[Chap. 3]{borkar1} one can show that for $r(n) > c_0$ and $n$ sufficiently large, \begin{equation} \frac{\|\bar{w}(T_{n+1})\|}{\|\bar{w}(T_{n})\|} < 1/2 \end{equation} and \begin{equation} \frac{\|\bar{\theta}(T_{n+1})\|}{\|\bar{\theta}(T_{n})\|} < 1/2 \end{equation} which proves that the iterates are stable. 
\end{comment} \section{Application : Off-policy temporal difference learning with linear function approximation} \label{app} In this section, we present an application of our results in the setting of off-policy temporal difference learning with linear function approximation. In this framework, one needs to estimate the value function for a target policy $\pi$ given the continuing evolution of the underlying MDP (with finite state and action spaces $S$ and $A$ respectively, specified by the expected reward $r(\cdot,\cdot,\cdot)$ and the transition probability kernel $p(\cdot|\cdot,\cdot)$) under a behaviour policy $\pi_b$ with $\pi \neq \pi_b$. The authors of \cite{sutton1,sutton,maeith} have proposed two approaches to solve the problem: \begin{enumerate}[label=(\roman*)] \item Sub-sampling: In this approach, only the transitions relevant to the deterministic target policy are kept, and the rest of the data from the given ``on-policy'' trajectory is discarded. We use the triplet $(S,R,S')$ to represent (current state, reward, next state). Therefore one has ``off-policy'' data $(X'_n,R_n,W_n), n\geq 0$ where $E[R_{n}|X'_{n}=s,W_{n}=s'] = r(s,a,s')$, $P(W_{n} =s'|X'_{n}=s) = p(s'|s,a)$ with $\pi(s) =a$, $\pi$ being the target policy and $X'_n, n\geq 0$ is a random process generated by sampling the ``on-policy'' trajectory at increasing stopping times. \item Importance-weighting: In this approach, unlike sub-sampling, all the data from the given ``on-policy'' trajectory is used. One advantage of this method is that both the behaviour and target policies may be randomized, unlike the sub-sampling scenario, where only a deterministic target policy can be used. \end{enumerate} They then introduce gradient temporal difference learning (GTD) algorithms \cite{sutton1,sutton,maeith} for both approaches. \\ \indent Currently, all GTD algorithms assume that data is available in the ``off-policy'' setting, i.e.
of the form $(X'_n,R_n,W_n),n\geq0$ where $\{X'_n\}$ are i.i.d, $E[R_{n}|X'_{n}=s,W_{n}=s'] = r(s,a,s')$ and $P(W_{n} =s'|X'_{n}=s) = p(s'|s,a)$ with $\pi(s) =a$, $\pi$ being the deterministic target policy. Additionally, the distribution of $\{X'_n\}$ is assumed to be sampled according to the stationary distribution of the Markov chain corresponding to the behaviour policy. However, such data cannot be generated from sub-sampling given only the ``on-policy'' trajectory. The reason is that a Markov chain sampled at increasing stopping times cannot be i.i.d. In the following, we show how gradient temporal-difference learning along with importance weighting can be used to solve the off-policy convergence problem stated above for TD when only the ``on-policy'' trajectory is available. \subsection{\textit{Problem Definition}} Suppose we are given an on-policy trajectory $(X_{n},A_n, R_{n},X_{n+1}), n\geq 0$ where $\{X_n\}$ is a time-homogeneous irreducible Markov chain with unique stationary distribution $\nu$ and generated from a behavior policy $\pi_b \neq \pi$. Here the quadruplet $(S,A,R,S')$ represents (current state, action, reward, next state). Also, assume that $\pi_b(a|s) > 0$ for all $s \in S, a \in A$. 
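The importance-weighting ratio that drives everything below is $\rho(s,a) = \pi(a|s)/\pi_b(a|s)$, well defined precisely because $\pi_b(a|s) > 0$. A minimal sketch (with hypothetical two-action policies) of the key identity $E_{a \sim \pi_b(\cdot|s)}[\rho(s,a)f(a)] = E_{a \sim \pi(\cdot|s)}[f(a)]$, which lets expectations under the behaviour policy stand in for expectations under the target policy:

```python
# Hypothetical randomized target and behaviour policies over two actions
# in some fixed state s; pi_b must put positive mass on every action.
pi   = {"a0": 0.9, "a1": 0.1}    # target policy pi(.|s)
pi_b = {"a0": 0.5, "a1": 0.5}    # behaviour policy pi_b(.|s)

# Importance weight rho(s, a) = pi(a|s) / pi_b(a|s).
rho = {a: pi[a] / pi_b[a] for a in pi}

# Key identity: E_{a ~ pi_b}[rho(s, a) f(a)] = E_{a ~ pi}[f(a)].
# Taking f = 1 gives E_{pi_b}[rho] = 1.
mean_rho = sum(pi_b[a] * rho[a] for a in pi)
assert abs(mean_rho - 1.0) < 1e-12
```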
We need to find the solution $\theta^*$ for the following: \begin{align} \label{fixpoint} 0&=\sum_{s,a,s'}\nu(s)\pi(a|s)p(s'|s,a)\delta(\theta;s,a,s')\phi(s)\nonumber\\ &=E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)]\nonumber\\ &= b - A\theta, \end{align} where \begin{enumerate}[label=(\roman*)] \item $\theta \in \mathbb{R}^d$ is the parameter for the value function, \item $\phi: S\to \mathbb{R}^d$ is a vector of state features, \item $X \sim \nu$, \item $0<\gamma < 1$ is the discount factor, \item $E[R_n|X_n=s,X_{n+1}=s'] = \sum_{a\in A}\pi_b(a|s)r(s,a,s')$, \item $P(X_{n+1}=s'|X_n=s) = \sum_{a\in A}\pi_b(a|s) p(s'|s,a)$, \item $\delta(\theta; s,a,s')= r(s,a,s') + \gamma \theta^T\phi(s') - \theta^T\phi(s)$ is the temporal difference term with expected reward, \item $\rho_{X,A_n} = \frac{\pi(A_n | X)}{\pi_b(A_n|X)}$, \item $\delta_{X,R_n,X_{n+1}}(\theta)=R_n + \gamma \theta^T\phi(X_{n+1}) - \theta^T\phi(X)$ is the online temporal difference, \item $A=E[\rho_{X,A_n}\phi(X)(\phi(X) -\gamma\phi(X_{n+1}))^T]$, \item $b=E[\rho_{X,A_n}R_n\phi(X)]$. \end{enumerate} Hence the desired approximate value function under the target policy $\pi$ is $V_{\pi}^*={\theta^*}^T\phi$. Let $V_\theta = {\theta}^T\phi$. It is well known \cite{maeith} that $\theta^*$ satisfies the projected fixed point equation, namely \begin{equation} V_{\theta}= \Pi_{\mathcal{G},\nu}T^{\pi}V_{\theta},\nonumber \end{equation} where \begin{equation} \Pi_{\mathcal{G}, \nu}\hat{V} = \arg\min_{f \in \mathcal{G}} (\|\hat{V} - f\|_{\nu}),\nonumber \end{equation} with $\mathcal{G} = \{V_{\theta} | \theta \in \mathbb{R}^d\}$ and the Bellman operator \begin{equation} T^{\pi}V_\theta(i) = \sum_{j \in S} \sum_{a\in A}\pi(a|i)p(j|i, a)\left[\gamma V_\theta(j) + r(i, a, j)\right]. \nonumber \end{equation} Therefore, to find $\theta^*$, the idea is to minimize the mean square projected Bellman error $J(\theta)= \|V_{\theta} - \Pi_{\mathcal{G},\nu}T^{\pi}V_{\theta}\|^2_{\nu}$ using stochastic gradient descent.
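Since $A$ and $b$ in (\ref{fixpoint}) are ordinary expectations under the stationary behaviour of the chain, the TD solution is simply $\theta^* = A^{-1}b$ whenever $A$ is non-singular. A minimal numerical sketch (all transition probabilities, policies, rewards and features below are hypothetical; features are scalar, so $A$ and $b$ are scalars):

```python
# Hypothetical toy instance of the linear fixed point b - A*theta = 0:
# scalar features (d = 1), two states, two actions; all numbers are made up.
gamma = 0.9
phi   = [1.0, 2.0]                        # feature phi(s)
p     = [[[0.8, 0.2], [0.3, 0.7]],        # p[s][a][s2] = p(s2 | s, a)
         [[0.5, 0.5], [0.1, 0.9]]]
pi    = [[0.9, 0.1], [0.2, 0.8]]          # target policy pi(a|s)
pi_b  = [[0.5, 0.5], [0.5, 0.5]]          # behaviour policy pi_b(a|s)

# Stationary distribution nu of the behaviour chain P_b(s2|s).
Pb = [[sum(pi_b[s][a] * p[s][a][s2] for a in range(2)) for s2 in range(2)]
      for s in range(2)]
nu0 = Pb[1][0] / (Pb[0][1] + Pb[1][0])    # two-state closed form
nu = [nu0, 1.0 - nu0]

# A = E[rho * phi(X) * (phi(X) - gamma*phi(X'))], b = E[rho * R * phi(X)],
# with a constant expected reward r(s, a, s') = 1 for simplicity.
A = b = 0.0
for s in range(2):
    for a in range(2):
        rho = pi[s][a] / pi_b[s][a]       # importance weight
        for s2 in range(2):
            w = nu[s] * pi_b[s][a] * p[s][a][s2]
            A += w * rho * phi[s] * (phi[s] - gamma * phi[s2])
            b += w * rho * 1.0 * phi[s]

theta_star = b / A                         # the TD solution of b - A*theta = 0
assert abs(b - A * theta_star) < 1e-9
```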
It can be shown that the expression for the gradient contains a product of multiple expectations. Such a framework can be modelled by two-timescale stochastic approximation, where one iterate stores the quasi-stationary estimates of some of the expectations and the other is used for sampling. \subsection{\textit{The TDC Algorithm with importance-weighting}} We consider the TDC (Temporal Difference with Correction) algorithm with importance-weighting from Sections 4.2 and 5.2 of \cite{maeith}. The gradient in this case can be shown to satisfy \begin{align} -\frac{1}{2}\nabla J(\theta)&=E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)] - \gamma E[\rho_{X,A_n}\phi(X_{n+1})\phi(X)^T]w(\theta),\nonumber\\ w(\theta) &= E[\phi(X)\phi(X)^T]^{-1}E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)].\nonumber \end{align}Define $\phi_n = \phi(X_n)$, $\phi'_n = \phi(X_{n+1})$, $\delta_n(\theta) = \delta_{X_n, R_n, X_{n+1}}(\theta)$ and $\rho_n=\rho_{X_n,A_n}$. Therefore the associated iterations in this algorithm are: \begin{equation} \label{tdc} \begin{split} \theta_{n+1} &= \theta_n + a(n) \rho_n\left[\delta_{n}(\theta_n)\phi_n - \gamma \phi'_{n}\phi_n^Tw_n\right],\\ w_{n+1} &= w_n + b(n) \left[(\rho_n\delta_{n}(\theta_n) - \phi_n^Tw_n)\phi_n\right], \end{split} \end{equation} \\ \indent with $\{a(n)\}, \{b(n)\}$ satisfying \textbf{(A4)}. \subsection{\textit{Convergence Proof}} \begin{thm}[Convergence of TDC with importance-weighting] \label{th2} Consider the iterations (\ref{tdc}) of the TDC algorithm. Assume the following: \begin{enumerate}[label=(\roman*)] \item $\{a(n)\}, \{b(n)\}$ satisfy \textbf{(A4)}. \item $\{(X_n,R_n,X_{n+1}), n\geq0\}$ is such that $\{X_n\}$ is a time-homogeneous finite state irreducible Markov chain generated from the behavior policy $\pi_b$ with unique stationary distribution $\nu$.
$E[R_{n}|X_{n}=s,X_{n+1}=s'] = \sum_{a\in A} \pi_b(a|s)r(s,a,s')$ and $P(X_{n+1} =s'|X_{n}=s) = \sum_{a\in A}\pi_b(a|s)p(s'|s,a)$, where $\pi_b$ is the behavior policy and $\pi \neq \pi_b$. Also, $E[R_n^2 | X_n, X_{n+1}] < \infty$ for all $n$ almost surely, and \item $C=E[\phi(X)\phi(X)^T]$ and $A=E[\rho_{X,A_n}\phi(X)(\phi(X) -\gamma\phi(X_{n+1}))^T]$ are non-singular, where $X \sim \nu$. \item $\pi_b(a|s) > 0$ for all $s \in S, a \in A$. \item $\sup_n(\|\theta_n\| + \|w_n\|) < \infty$ w.p. 1. \end{enumerate} Then the parameter vector $\theta_n$ converges with probability one as $n \to \infty$ to the TD(0) solution $\theta^*$ of (\ref{fixpoint}). \end{thm} \begin{pf} The iterations (\ref{tdc}) can be cast into the framework of Section \ref{defn} with \begin{enumerate}[label=(\roman*)] \item $Z^{(i)}_n = X_{n-1}$, \item $h(\theta,w,z) = E[(\rho_n(\delta_{n}(\theta)\phi_n-\gamma \phi'_{n}\phi_n^Tw))|X_{n-1}=z,\theta_n=\theta,w_n=w]$, \item $g(\theta,w,z)=E[((\rho_n\delta_{n}(\theta) - \phi_n^Tw)\phi_n)|X_{n-1}=z,\theta_n=\theta,w_n=w]$, \item $M^{(1)}_{n+1}=\rho_n(\delta_{n}(\theta_n)\phi_n - \gamma \phi'_{n}\phi_n^Tw_n)-E[\rho_n(\delta_{n}(\theta_n)\phi_n - \gamma \phi'_{n}\phi_n^Tw_n)|X_{n-1}, \theta_n, w_n]$, \item $M^{(2)}_{n+1}=(\rho_n\delta_{n}(\theta_n) - \phi_n^Tw_n)\phi_n - E[(\rho_n\delta_{n}(\theta_n) - {\phi_n}^T w_n)\phi_n|X_{n-1}, \theta_n, w_n]$, \item $\mathcal{F}_n = \sigma(\theta_m, w_m, R_{m-1}, X_{m-1}, A_{m-1}, m \leq n), n \geq 0$. \end{enumerate} Note that in (ii) and (iii) we can define $h$ and $g$ independently of $n$ due to the time-homogeneity of $\{X_n\}$. \\ \indent Now, we verify the assumptions \textbf{(A1)-(A7)} (mentioned in Sections \ref{defn} and \ref{assump}) for our application: \begin{enumerate}[label=(\roman*)] \item \textbf{(A1)}: $Z^{(i)}_n$, $i=1,2$, take values in a compact metric space for all $n$, as $\{X_n\}$ is a finite state Markov chain.
\item \textbf{(A5)}: Continuity of the transition kernel follows trivially from the fact that we have a finite state MDP. \begin{rmk} In fact, we do not have to verify this assumption in the special case when the Markov chain is uncontrolled and has a unique stationary distribution. The reason is that in such a case \textbf{(A5)} will be used only in the proof of Lemma \ref{lemma1}. However, if the Markov chain has a unique stationary distribution, Lemma \ref{lemma1} follows trivially. \end{rmk} \item \textbf{(A2)} \begin{enumerate}[label=(\alph*)] \item \begin{align} &\|h(\theta, w,z) - h(\theta',w',z)\|\nonumber\\ &=\|E[\rho_n(\theta-\theta')^T(\gamma \phi(X_{n+1}) - \phi(X_n))\phi(X_n) - \gamma \rho_n \phi(X_{n+1})\phi(X_n)^T(w-w')|X_{n-1}=z]\|\nonumber\\ &\leq L(2\|\theta-\theta'\|M^2 + \|w-w'\|M^2)\nonumber, \end{align} where $M=\max_{s\in S}\|\phi(s)\|$ with $S$ being the state space of the MDP and $L=\max_{(s,a)\in (S\times A)}\frac{\pi(a|s)}{\pi_b(a|s)}$. Hence $h$ is Lipschitz continuous in the first two arguments uniformly w.r.t.\ the third. In the last inequality above, we use the Cauchy-Schwarz inequality. \item As with the case of $h$, $g$ can be shown to be Lipschitz continuous in the first two arguments uniformly w.r.t.\ the third. \item Joint continuity of $h$ and $g$ follows from (iii)(a) and (b) respectively as well as the finiteness of $S$. \end{enumerate} \item \textbf{(A3)}: Clearly, $\{M_{n+1}^{(i)}\}, i=1,2$, are martingale difference sequences w.r.t.\ the increasing $\sigma$-fields $\mathcal{F}_n$. Note that $E[\|M_{n+1}^{(i)}\|^2 | \mathcal{F}_n] \leq K(1 + \|\theta_n\|^2 + \|w_n\|^2)$ a.s., $n\geq 0$, since $E[R_n^2 | X_n, X_{n+1}] < \infty$ for all $n$ almost surely and $S$ is finite. \item \textbf{(A4)}: This follows from condition (i) in the statement of Theorem \ref{th2}. \end{enumerate} Now, one can see that the faster o.d.e.
becomes \begin{equation} \dot{w}(t)=E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)] - E[\phi(X)\phi(X)^T]w(t).\nonumber \end{equation} Clearly, $C^{-1}E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)]$ is the globally asymptotically stable equilibrium of this o.d.e. Moreover, $V'(\theta,w) = \frac{1}{2} \|Cw - E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)]\|^2$ is continuously differentiable. Additionally, $\lambda(\theta)=C^{-1}E[\rho_{X,A_n}\delta_{X,R_n,X_{n+1}}(\theta)\phi(X)]$ is Lipschitz continuous in $\theta$, verifying \textbf{(A6')}. For the slower o.d.e., the global attractor is $A^{-1}E[\rho_{X,A_n}R_n\phi(X)]$, verifying the additional assumption in Corollary \ref{main_col}. The attractor set here is a singleton. Also, \textbf{(A7)} is condition (v) in the statement of Theorem \ref{th2}. Therefore the assumptions $(\mathbf{A1}) - (\mathbf{A5}), (\mathbf{A6'}), (\mathbf{A7})$ are verified. The claim then follows from Corollary \ref{main_col}. \end{pf} \begin{comment} \begin{rmk} Our results hold true only when $\{X_n\}$ is time-homogeneous. The reason is that the iteration \begin{equation} \theta_{n+1} = \theta_{n} + a(n)F(\theta_{n}, X_{n+1})\nonumber \end{equation} can be framed into \begin{equation} \theta_{n+1} = \theta_{n} + a(n)\left[h(\theta_{n},X_{n}) + M_{n+1}\right] \nonumber \end{equation} only when $\{X_n\}$ being time homo \end{rmk} \end{comment} \begin{rmk} The reason for using the two time-scale framework for the TDC algorithm is to make sure that the o.d.e.s have globally asymptotically stable equilibria. \end{rmk} \begin{rmk} Because the gradient is a product of two expectations, the scheme is a ``pseudo''-gradient descent, which helps to find the global minimum here. \end{rmk} \begin{rmk} Here we assume the stability of the iterates (\ref{tdc}).
Certain sufficient conditions have been sketched for showing stability of single timescale stochastic recursions with controlled Markov noise \cite[p.~75, Theorem 9]{borkar1}. This subsequently needs to be extended to the case of two time-scale recursions. Another way to ensure boundedness of the iterates is to use a projection operator. However, projection may introduce spurious fixed points on the boundary of the projection region, and finding a globally asymptotically stable equilibrium of a projected o.d.e. is hard. Therefore we do not use projection in our algorithm. \end{rmk} \begin{rmk} Convergence analysis for TDC with importance weighting along with eligibility traces (cf. \cite[p.~74]{maeith}, where it is called GTD($\lambda$)) can be done similarly using our results. The main advantage is that it works for $\lambda < \frac{1}{L\gamma}$ ($\lambda\in [0,1]$ being the eligibility function), whereas the analysis in \cite{yu} is shown only for $\lambda$ very close to 1. \end{rmk} \begin{rmk} One can analyze this algorithm when the state space is infinite by imposing assumptions on $\phi$ as well as the target and behavior policies. \end{rmk} \section{Conclusion} We presented a general framework for two time-scale stochastic approximation with controlled Markov noise. Moreover, using a special case of our results, i.e., when the random process is a finite state irreducible time-homogeneous Markov chain (hence having a unique stationary distribution) that is uncontrolled (i.e., does not depend on the iterates), we provided a rigorous proof of convergence for an off-policy temporal difference learning algorithm with linear function approximation, which is also extendible to eligibility traces (for a sufficiently large range of $\lambda$), under the assumption that only the ``on-policy'' trajectory for a behavior policy is available. This has previously not been done to our knowledge.
\section*{Acknowledgments} The authors want to thank Csaba Szepesv\'{a}ri for some useful discussions on the literature of off-policy learning. Our work was partly supported by the Robert Bosch Centre for Cyber-Physical Systems, Indian Institute of Science, Bangalore.
\section*{Applications} \subsection*{Groupoids} We denote by $\mathrm{Gpd}$ the $1$-category of small groupoids and by $\mathrm{Gpd}_2$ the $\infty$-category associated to the $(2,1)$-category of groupoids in which the $2$-morphisms are natural transformations. The category $\mathrm{Gpd}$ admits a simplicial model structure in which the equivalences are equivalences of categories and the cofibrations are functors that are injective on the set of objects. In this model structure all objects are cofibrant and fibrant, compare \cite{Casacuberta}. Furthermore, if we denote by $\mathrm{Gpd}^\omega$ the full subcategory on groupoids with at most countably many morphisms, then $\mathrm{Gpd}^\omega$ inherits the structure of a cofibration category. The following lemma is a well-known fact, but we had difficulties finding a clear reference for it, so we state it as an extra lemma. \begin{Lemma}\label{lemma3} The canonical map $\mathrm{N}\mathrm{Gpd}[w^{-1}] \to \mathrm{Gpd}_2$ is an equivalence of $\infty$-categories. \end{Lemma} \begin{proof} This follows from the description of the $\infty$-category associated to a simplicial model category, see \cite[Theorem 1.3.4.20]{LurieHA}, as being the homotopy coherent nerve of the simplicial category of cofibrant and fibrant objects. \end{proof} \begin{Cor}\label{groupoids1} Let $\c$ be an $\infty$-category. Then the canonical map $\mathrm{N} c\:\!\mathrm{Gpd} \to \mathrm{Gpd}_2 $ induces an equivalence \begin{tikzeq*} \matrix[diagram] { |(2)| \mathrm{Fun}(\mathrm{Gpd}_2,\c) & |(i)| \mathrm{Fun}^w(\mathrm{N} c\:\!\mathrm{Gpd},\c) \\ }; \draw[->] (2) to node[above] {$\simeq$} (i); \end{tikzeq*} where the superscript $w$ refers to functors that send equivalences of groupoids to equivalences in $\c$. \end{Cor} \begin{proof} Since the canonical map $\mathrm{N}\mathrm{Gpd}[w^{-1}] \to \mathrm{Gpd}_2$ is an equivalence by \cref{lemma3}, this is a direct application of \cref{main theorem}.
\end{proof} The following corollary of Proposition \ref{prop} implies that in the approach to assembly maps discussed in \cite[section 2]{DavisLueck} one can directly restrict to functors from groupoids to spectra that are only defined for maps of groupoids that are injective on objects. This resolves the issues illustrated in \cite[Remark 2.3]{DavisLueck}. \begin{Cor}\label{groupoidscor2} Let $\mathrm{Sp}$ be any of the categories of spectra. Then every functor $F: c\:\!\mathrm{Gpd} \to \mathrm{Sp}$ which sends equivalences of groupoids to weak equivalences in $\mathrm{Sp}$ extends uniquely (in the sense of Proposition \ref{prop}) to a functor $\widehat{F}: \mathrm{Gpd} \to \mathrm{Sp}$ which also sends weak equivalences of groupoids to weak equivalences of spectra. \end{Cor} \begin{Rmk} The statements of \cref{groupoids1} and \cref{groupoidscor2} remain true if we replace $\mathrm{Gpd}$ by $\mathrm{Gpd}^\omega$. Furthermore \cref{groupoidscor2} does not depend on the exact choice of model category of spectra as long as it is Quillen equivalent to a combinatorial model category. \end{Rmk} Next we want to demonstrate how to apply these results by \emph{functorially} constructing $C^*$-algebras and topological $K$-theory spectra associated to groupoids. This discussion is similar to the one given in \cite[section 3]{Joachim2} but we use our main theorem to obtain full functoriality instead of an explicit construction. \begin{Def}\label{groupoid algebra} Let $\mathcal{G}$ be a groupoid. We let $\mathbb{C}\mathcal{G}$ be the $\mathbb{C}$-linearization of the set of morphisms of $\mathcal{G}$. This is a $\mathbb{C}$-algebra by linearization of the multiplication on morphisms given by \[ f\cdot g = \begin{cases} f\circ g & \text{ if $f$ and $g$ are composable} \\ 0 & \text{ else.} \end{cases} \] We remark that $\mathbb{C}\mathcal{G}$ is unital if and only if the set of objects of $\mathcal{G}$ is finite. 
Then we complete $\mathbb{C}\mathcal{G}$ in a universal way, like for the full group $C^*$-algebra, to obtain a $C^*$-algebra $C^*\mathcal{G}$. More precisely, the norm is given by the supremum over all norms of representations of $\mathbb{C}\mathcal{G}$ on a separable Hilbert space. This is isomorphic to the $C^*$-algebra associated to the maximal groupoid $C^*$-category of \cite[Definition 3.16]{DellAmbrogio2} using the construction $\mathcal{C} \mapsto A_\mathcal{C}$ of \cite[section 3]{Joachim2}. \end{Def} The association $\mathcal{G} \mapsto C^*\mathcal{G}$ is functorial for cofibrations of groupoids but not for general morphisms, since it can happen that morphisms are not composable in a groupoid but become composable after applying a functor, compare the remark in \cite[page 214]{DavisLueck}. We observe that the $C^*$-algebra $C^*\mathcal{G}$ is separable provided $\mathcal{G} \in \mathrm{Gpd}^\omega$. \begin{Lemma}\label{morita invariance} Let $F\colon \mathcal{G}_1 \to \mathcal{G}_2$ be an acyclic cofibration of groupoids. Then the induced morphism \[ C^*F \colon C^*\mathcal{G}_1 \to C^*\mathcal{G}_2 \] is a $\mathrm{KK}$-equivalence. \end{Lemma} \begin{proof} The $C^*$-algebra associated to a groupoid is the product of the $C^*$-algebras associated to each connected component. Thus we may assume that $\mathcal{G}_1$ (and thus $\mathcal{G}_2$) is connected. Let $x \in \mathcal{G}_1$ be an object. We let $G_1 = \mathrm{End}(x)$ and $G_2 = \mathrm{End}(Fx)$ be the endomorphism groups and notice that, since $F$ is an equivalence, it induces an isomorphism $G_1 \cong G_2$. Then we consider the diagram \begin{tikzeq*} \matrix[diagram] { |(1c)| C^*\mathcal{G}_1 & |(2c)| C^*\mathcal{G}_2 \\ |(1)| C^*G_1 & |(2)| C^*G_2 \\ }; \draw[->] (1c) to node[above] {$C^*F$} (2c); \draw[->] (1) to node[below] {$\cong$} (2); \draw[->] (1) to (1c); \draw[->] (2) to (2c); \end{tikzeq*} in which the lower horizontal arrow is an isomorphism.
Thus it suffices to prove the lemma in the special case where $F$ is the inclusion of the endomorphisms of an object $x$ of a connected groupoid $\mathcal{G}$. This can be done in the abstract setting of corner algebras. For this, suppose $A$ is a $C^*$-algebra and $p \in A$ is a projection. It is called full if $ApA$ is dense in $A$. The algebra $pAp$ is called the corner algebra of $p$ in $A$. It is called a full corner if $p$ is a full projection. We write $i_p$ for the inclusion $pAp \subset A$. Given a projection $p$, the module $pA$ is an imprimitivity $pAp-\overline{ApA}$ bimodule, see e.g. \cite[Example 3.6]{Raeburn}. Thus if $p$ is full, then $pA$ gives rise to an invertible element $[pA,i_p,0]= \mathcal{F}(p) \in \mathrm{KK}(pAp,A)$. In this KK-group we have an equality \begin{align*} \mathcal{F}(p) & = [\,pA,i_p,0] + [(1-p)A,0,0] \\ & = [\,pA\oplus(1-p)A,i_p,0] \\ &= [A,i_p,0] = [i_p] , \end{align*} in other words, the inclusion $pAp \to A$ of a corner algebra associated to a full projection is a KK-equivalence. To come back to our situation, let us suppose $\mathcal{G}$ is a groupoid, $x \in \mathcal{G}$ is an object and let us denote its endomorphism group by $G = \mathrm{End}(x)$. We can consider the element $p = \mathrm{id}_x \in C^*\mathcal{G}$ which is clearly a projection. Its corner algebra is given by \[ p\cdot C^*\mathcal{G} \cdot p \cong C^*G.\] If $\mathcal{G}$ is connected, every morphism in $\mathcal{G}$ may be factored through $\mathrm{id}_x$, and thus $p$ is full. Hence it follows that the inclusion $C^*G \to C^*\mathcal{G}$ is an embedding of a full corner algebra. Thus by the general theory this inclusion is a KK-equivalence, which proves the lemma. \end{proof} Let us denote by $\mathrm{KK}_\infty$ the $\infty$-category given by the localization of the category $\mathrm{C^*Alg}$ of separable $C^*$-algebras at the KK-equivalences, see e.g.
\cite[Definition 3.2]{LandNikolaus}. In formulas we have $\mathrm{KK}_\infty := \mathrm{N}\mathrm{C^*Alg}[w^{-1}]$ where $w$ denotes the class of KK-equivalences. The homotopy category of $\mathrm{KK}_\infty$ is Kasparov's KK-category of $C^*$-algebras. \begin{Cor}\label{groupoids2} There exists a functor \[ \mathrm{Gpd}_2^\omega \to \mathrm{KK}_\infty \] which on objects sends a groupoid $\mathcal{G}$ to the full groupoid $C^*$-algebra $C^*\mathcal{G}$. \end{Cor} \begin{Rmk} We notice that the $(2,1)$-category $\mathrm{Orb}^\omega$ consisting of (countable) groups, group homomorphisms, and conjugations is the full subcategory of the $(2,1)$-category of (countable) groupoids on connected groupoids and hence along this inclusion we also obtain a functor \[ \mathrm{Orb}^\omega \to \mathrm{KK}_\infty \] which on objects sends a group to its full group $C^*$-algebra. This will be used in \cite{LandNikolaus} to compare the $L$-theoretic Farrell-Jones conjecture and the Baum-Connes conjecture. \end{Rmk} \begin{proof}[Proof of \cref{groupoids2}] By \cref{groupoids1} and the remark after \cref{groupoidscor2}, we have an equivalence \[ \mathrm{Fun}^w(\mathrm{N} c\:\!\mathrm{Gpd}^\omega,\mathrm{KK}_\infty) \simeq \mathrm{Fun}(\mathrm{Gpd}_2^\omega,\mathrm{KK}_\infty) \] and thus it suffices to construct a functor \[ c\:\!\mathrm{Gpd}^\omega \to \mathrm{C^*Alg} \] which has the \emph{property} that it sends equivalences of groupoids to $\mathrm{KK}$-equivalences. We have established in \cref{morita invariance} that the functor of \cref{groupoid algebra} satisfies this property. \end{proof} \begin{Rmk} In \cite[Proposition 3.7]{LandNikolaus} it is shown that the topological $K$-theory functor \[ K\colon \mathrm{N}\mathrm{C^*Alg} \to \mathrm{Sp} \] factors over $\mathrm{KK}_\infty$, in fact becomes corepresentable there. It thus follows from \cref{groupoids2} that there is a functor sending a groupoid to the topological $K$-theory spectrum of its $C^*$-algebra. 
\end{Rmk} \section*{The proof of \cref{main theorem}} In this section we will prove \cref{main theorem}. Recall that we consider a cofibration category $(\c,w\c,c\:\!\c)$ and aim to compare the $\infty$-categories associated to the relative categories $(\c,w\c)$ and $(c\:\!\c,wc\:\!\c)$. As our model of the homotopy theory of $(\infty,1)$-categories we will use the \emph{complete Segal spaces} of Rezk, see \cite{Rezk}. This homotopy theory is modelled by the Rezk model structure on the category of bisimplicial sets in which fibrant objects are the complete Segal spaces. The model structure is constructed as a Bousfield localization of the Reedy model structure and hence every levelwise weak equivalence of bisimplicial sets is a Rezk equivalence, i.e.\ an equivalence of $\infty$-categories. The $\infty$-category associated to a relative category $(\mathscr{D},w\mathscr{D})$ is modelled by the \emph{classification diagram} $\mathrm{N}^\mathrm{R}\mathscr{D}$ of Rezk, which is given by \[ (\mathrm{N}^\mathrm{R}\mathscr{D})_k = \mathrm{N} w(\mathscr{D}^{[k]}),\] where the weak equivalences in $\mathscr{D}^{[k]}$ are levelwise weak equivalences, compare \cite[section 3.3]{Rezk} and \cite[Theorem 3.8]{Mazel-Gee}. See also the MathOverflow post \cite{Cisinksi2}. The classification diagram is not fibrant in the Rezk model structure, but it is levelwise equivalent to a fibrant object if $\mathscr{D}$ is a cofibration category. Recall that we stated \cref{main theorem} under the following assumption on the cofibration category $\c$. \begin{Def}\label{good-cylinders} A cofibration category $\c$ has \emph{good cylinders} if it has a cylinder functor $I$ such that for every cofibration $X \rightarrowtail Y$ the induced morphism $I X \sqcup_{X \sqcup X} (Y \sqcup Y) \to I Y$ is a cofibration.
\end{Def} For example any cofibration category arising from a monoidal model category (or a model category enriched over a monoidal model category) has good cylinders, since they are given by tensoring with a chosen interval object. \begin{Thm}\label{cC-C} If $\c$ has good cylinders, then the inclusion $c\:\!\c \to \c$ induces a levelwise weak equivalence of the classification diagrams $\mathrm{N}^\mathrm{R} c\:\!\c \to \mathrm{N}^\mathrm{R}\c$. \end{Thm} For the proof we will need a series of auxiliary definitions and lemmas. Let us first fix some notation. If $J$ is a category, then $\hat{J}$ denotes $J$ considered as a relative category with all morphisms as weak equivalences. If $J$ is any relative category, then $\c^J$ stands for the cofibration category of all relative diagrams $J \to \c$ with levelwise weak equivalences and cofibrations. If $J$ is any relative direct category, then $\c^J_\mathrm{R}$ stands for the cofibration category of all relative Reedy cofibrant diagrams $J \to \c$ with levelwise weak equivalences and Reedy cofibrations. See \cite[Theorem 9.3.8]{Radulescu-Banu} for the construction of these cofibration categories. 
\begin{Def} A subcategory $g\c$ of a cofibration category $\c$ is said to be \emph{good} if \begin{itemize} \item all cofibrations are in $g\c$; \item the morphisms of $g\c$ are stable under pushouts along cofibrations; \item $\c$ has functorial factorizations that preserve $g\c$ in the sense that if \begin{tikzeq*} \matrix[diagram] { |(A0)| A_0 & |(B0)| B_0 \\ |(A1)| A_1 & |(B1)| B_1 \\ }; \draw[->] (A0) to (B0); \draw[->] (A1) to (B1); \draw[->] (A0) to (A1); \draw[->] (B0) to (B1); \end{tikzeq*} is a square in $\c$ such that both vertical morphisms are in $g\c$ and \begin{tikzeq*} \matrix[diagram] { |(A0)| A_0 & |(tB0)| \tilde{B}_0 & |(B0)| B_0 \\ |(A1)| A_1 & |(tB1)| \tilde{B}_1 & |(B1)| B_1 \\ }; \draw[cof] (A0) to (tB0); \draw[cof] (A1) to (tB1); \draw[->] (tB0) to node[above] {$\mathrel\sim$} (B0); \draw[->] (tB1) to node[below] {$\mathrel\sim$} (B1); \draw[->] (A0) to (A1); \draw[->] (tB0) to (tB1); \draw[->] (B0) to (B1); \end{tikzeq*} is the resulting factorization, then the induced morphism $A_1 \sqcup_{A_0} \tilde{B}_0 \to \tilde{B}_1$ is also in $g\c$. (In particular, so is $\tilde{B}_0 \to \tilde{B}_1$ by the second condition.) \end{itemize} \end{Def} Now suppose that $\c$ is a cofibration category with a good subcategory $g\c$.
We let $W\c$ be the bisimplicial set whose $(m,n)$-bisimplices are all diagrams in $\c$ of the form \begin{tikzeq*} \matrix[diagram] { |(00)| X_{0,0} & |(01)| X_{0,1} & |(0l)| \ldots & |(0n)| X_{0,n} \\ |(10)| X_{1,0} & |(11)| X_{1,1} & |(1l)| \ldots & |(1n)| X_{1,n} \\ |(l0)| \vdots & |(l1)| \vdots & & |(ln)| \vdots \\ |(m0)| X_{m,0} & |(m1)| X_{m,1} & |(ml)| \ldots & |(mn)| X_{m,n} \text{,} \\ }; \draw[cof] (00) to node[above] {$\mathrel\sim$} (01); \draw[cof] (01) to node[above] {$\mathrel\sim$} (0l); \draw[cof] (0l) to node[above] {$\mathrel\sim$} (0n); \draw[cof] (10) to node[above] {$\mathrel\sim$} (11); \draw[cof] (11) to node[above] {$\mathrel\sim$} (1l); \draw[cof] (1l) to node[above] {$\mathrel\sim$} (1n); \draw[cof] (m0) to node[below] {$\mathrel\sim$} (m1); \draw[cof] (m1) to node[below] {$\mathrel\sim$} (ml); \draw[cof] (ml) to node[below] {$\mathrel\sim$} (mn); \draw[->] (00) to node[left] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (10); \draw[->] (10) to node[left] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (l0); \draw[->] (l0) to node[left] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (m0); \draw[->] (01) to node[left] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (11); \draw[->] (11) to node[left] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (l1); \draw[->] (l1) to node[left] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (m1); \draw[->] (0n) to node[right] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (1n); \draw[->] (1n) to node[right] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (ln); \draw[->] (ln) to node[right] {$\mathrel\sim$} node[below right = 0.25 cm and 0.02 cm] {\tiny$g$} (mn); \end{tikzeq*} i.e.\ diagrams $\hat{[m]} \times \hat{[n]} \to \c$ where all horizontal morphisms are cofibrations and all vertical morphisms are in $g\c$. 
In other words, $W\c$ is the nerve of a double category with the same objects as $\c$, whose horizontal morphisms are acyclic cofibrations, vertical morphisms are weak equivalences in $g\c$, and double morphisms are just commutative squares. \begin{Lemma} The bisimplicial set $W\c$ is vertically homotopically constant, i.e.\ every simplicial operator $[n] \to [n']$ induces a weak homotopy equivalence $(W\c)_{*,n'} \to (W\c)_{*,n}$. \end{Lemma} \begin{proof} Note that $(W\c)_{*,n} = \mathrm{N} \tilde{\c}_n$ where $\tilde{\c}_n$ is a category whose objects are diagrams $\hat{[n]} \to c\:\!\c$ and whose morphisms are weak equivalences with all components in $g\c$. It is enough to consider the case $n' = 0$, i.e.\ to show that the constant functor $\operatorname{const} \colon \tilde{\c}_0 \to \tilde{\c}_n$ is a homotopy equivalence. The evaluation at $n$ functor $\operatorname{ev}_n \colon \tilde{\c}_n \to \tilde{\c}_0$ satisfies $\operatorname{ev}_n \operatorname{const} = \mathrm{id}_{\tilde{\c}_0}$. Moreover, the structure maps of every diagram $X \in \tilde{\c}_n$ form a natural weak equivalence $X \to \operatorname{const} \operatorname{ev}_n X$ since every cofibration is in $g\c$. \end{proof} \begin{Lemma}\label{W-horizontal} The bisimplicial set $W\c$ is horizontally homotopically constant, i.e.\ every simplicial operator $[m] \to [m']$ induces a weak homotopy equivalence $(W\c)_{m',*} \to (W\c)_{m,*}$. \end{Lemma} \begin{proof} Note that $(W\c)_{m,*} = \mathrm{N} \bar{\c}_m$ where $\bar{\c}_m$ is a category whose objects are diagrams $\hat{[m]} \to g\c$ and whose morphisms are acyclic levelwise cofibrations. Again, it is enough to consider the case $m' = 0$ and to show that the constant functor $\operatorname{const} \colon \bar{\c}_0 \to \bar{\c}_m$ and the evaluation at $m$ functor $\operatorname{ev}_m \colon \bar{\c}_m \to \bar{\c}_0$ form a homotopy equivalence. We have $\operatorname{ev}_m \operatorname{const} = \mathrm{id}_{\bar{\c}_0}$.
Moreover, given any object $X \in \bar{\c}_m$ and $i \in [m]$ we consider the composite weak equivalence $X_i \stackrel{\we}{\to} X_m$. We combine it with the identity $X_m \to X_m$ and factor functorially the resulting morphism $X_i \sqcup X_m \to X_m$ as $X_i \sqcup X_m \rightarrowtail \tilde{X}_i \stackrel{\we}{\to} X_m$. In the square \begin{tikzeq*} \matrix[diagram] { |(mi)| X_m \sqcup X_i & |(i)| X_m \\ |(m1)| X_m \sqcup X_{i+1} & |(1)| X_m \\ }; \draw[->] (mi) to (i); \draw[->] (m1) to (1); \draw[->] (i) to (1); \draw[->] (mi) to (m1); \end{tikzeq*} both vertical morphisms are in $g\c$ (since $g\c$ is closed under pushouts). Thus the induced morphism $\tilde{X}_i \to \tilde{X}_{i+1}$ is in $g\c$. Moreover, we obtain acyclic cofibrations $X_i \stackrel{\we}{\cto} \tilde{X}_i$ and $X_m \stackrel{\we}{\cto} \tilde{X}_i$ that constitute a zig-zag of natural weak equivalences connecting $\operatorname{const} \operatorname{ev}_m$ and $\mathrm{id}_{\bar{\c}_m}$. \end{proof} \begin{Lemma}\label{W-diagonal} The inclusion $\mathrm{N} wc\:\!\c \to \mathrm{N} wg\c$ is a weak homotopy equivalence. \end{Lemma} \begin{proof} Observe that the $0$th row and the $0$th column of $W\c$ are $\mathrm{N} wg\c$ and $\mathrm{N} wc\:\!\c$ respectively. Since $W\c$ is homotopically constant in both directions, it follows from \cite[Proposition IV.1.7]{Goerss-Jardine} that we have weak equivalences \begin{tikzeq*} \matrix[diagram] { |(g)| \mathrm{N} wg\c & |(d)| \mathrm{diag} W\c & |(c)| \mathrm{N} wc\:\!\c \text{.} \\ }; \draw[->] (g) to node[above] {$\mathrel\sim$} (d); \draw[->] (c) to node[above] {$\mathrel\sim$} (d); \end{tikzeq*} Moreover, the restrictions along the diagonal inclusions $[m] \to [m] \times [m]$ induce a simplicial map $\mathrm{diag} W\c \to \mathrm{N} wg\c$ whose composites with the two maps above are the identity on $\mathrm{N} wg\c$ and the inclusion $\mathrm{N} wc\:\!\c \to \mathrm{N} wg\c$. Hence the latter is a weak equivalence by 2-out-of-3. 
\end{proof} Next we establish that under specific circumstances certain subcategories of $\c$ are good. \begin{Lemma}\label{levelwise-good} Let $\c$ be a cofibration category. \begin{enumerate} \item If $\c$ has functorial factorizations, then $\c$ itself is a good subcategory. \item If $\c$ has good cylinders, then $c\:\!\c$ is a good subcategory of $\c$. \item If $c\:\!\c$ is a good subcategory of $\c$, then the subcategory of levelwise cofibrations is a good subcategory of $\c^{[k]}_\mathrm{R}$ for all $k$. \end{enumerate} \end{Lemma} \begin{proof} \leavevmode \begin{enumerate} \item This is trivially true. \item We will show that the standard mapping cylinder factorization makes $c\:\!\c$ into a good subcategory. Let \begin{tikzeq*} \matrix[diagram] { |(A0)| A_0 & |(B0)| B_0 \\ |(A1)| A_1 & |(B1)| B_1 \\ }; \draw[->] (A0) to (B0); \draw[->] (A1) to (B1); \draw[cof] (A0) to (A1); \draw[cof] (B0) to (B1); \end{tikzeq*} be a square where both vertical morphisms are cofibrations. The mapping cylinder of $A_i \to B_i$ is constructed as $I A_i \sqcup_{A_i \sqcup A_i} (A_i \sqcup B_i)$. We need to show that the morphism induced by the square \begin{tikzeq*} \matrix[diagram] { |(A0)| A_0 & |(I0)| I A_0 \sqcup_{A_0 \sqcup A_0} (A_0 \sqcup B_0) \\ |(A1)| A_1 & |(I1)| I A_1 \sqcup_{A_1 \sqcup A_1} (A_1 \sqcup B_1) \\ }; \draw[->] (A0) to (I0); \draw[->] (A1) to (I1); \draw[->] (I0) to (I1); \draw[->] (A0) to (A1); \end{tikzeq*} is a cofibration.
This morphism coincides with \begin{tikzeq*} \matrix[diagram] { |(0)| I A_0 \sqcup_{A_0 \sqcup A_0} (A_1 \sqcup B_0) & |(1)| I A_1 \sqcup_{A_1 \sqcup A_1} (A_1 \sqcup B_1) \\ }; \draw[->] (0) to (1); \end{tikzeq*} which factors as \begin{tikzeq*} \matrix[diagram] { |(0)| I A_0 \sqcup_{A_0 \sqcup A_0} (A_1 \sqcup B_0) & |(1)| I A_0 \sqcup_{A_0 \sqcup A_0} (A_1 \sqcup B_1) & |(2)| I A_1 \sqcup_{A_1 \sqcup A_1} (A_1 \sqcup B_1) \text{.} \\ }; \draw[->] (0) to (1); \draw[->] (1) to (2); \end{tikzeq*} The first morphism is a pushout of $A_1 \sqcup B_0 \to A_1 \sqcup B_1$, which is a cofibration since $B_0 \to B_1$ is. The second morphism is a pushout of $I A_0 \sqcup_{A_0 \sqcup A_0} (A_1 \sqcup A_1) \to I A_1$, which is a cofibration since $A_0 \to A_1$ is and $\c$ has good cylinders. \item Clearly, every Reedy cofibration is a levelwise cofibration and levelwise cofibrations are stable under pushouts along cofibrations. Consider a diagram \begin{tikzeq*} \matrix[diagram] { |(A0)| A_0 & |(tB0)| \tilde{B}_0 & |(B0)| B_0 \\ |(A1)| A_1 & |(tB1)| \tilde{B}_1 & |(B1)| B_1 \\ }; \draw[cof] (A0) to (tB0); \draw[cof] (A1) to (tB1); \draw[->] (tB0) to node[above] {$\mathrel\sim$} (B0); \draw[->] (tB1) to node[below] {$\mathrel\sim$} (B1); \draw[->] (A0) to (A1); \draw[->] (tB0) to (tB1); \draw[->] (B0) to (B1); \end{tikzeq*} in $\c^{[m]}_\mathrm{R}$ where $\tilde{B}_0$ and $\tilde{B}_1$ are obtained by the standard Reedy factorization induced by the given functorial factorization in $\c$. Assuming that $A_0 \to A_1$ and $B_0 \to B_1$ are levelwise cofibrations, we need to check that $A_{1,i} \sqcup_{A_{0,i}} \tilde{B}_{0,i} \to \tilde{B}_{1,i}$ is a cofibration for every $i \in [m]$. For $i = 0$, this follows directly from the assumption that $c\:\!\c$ is a good subcategory of $\c$. The Reedy factorization is constructed by induction over $[m]$, so assume that the conclusion is already known for $i < m$.
The factorization at level $i+1$ arises as \begin{tikzeq*} \matrix[diagram] { |(A0)| A_{0,i+1} \sqcup_{A_{0,i}} \tilde{B}_{0,i} & |(tB0)| \tilde{B}_{0,i+1} & |(B0)| B_{0,i+1} \\ |(A1)| A_{1,i+1} \sqcup_{A_{1,i}} \tilde{B}_{1,i} & |(tB1)| \tilde{B}_{1,i+1} & |(B1)| B_{1,i+1} \\ }; \draw[cof] (A0) to (tB0); \draw[cof] (A1) to (tB1); \draw[->] (tB0) to node[above] {$\mathrel\sim$} (B0); \draw[->] (tB1) to node[below] {$\mathrel\sim$} (B1); \draw[->] (A0) to (A1); \draw[->] (tB0) to (tB1); \draw[->] (B0) to (B1); \end{tikzeq*} where the left square comes from the diagram \begin{tikzeq*} \matrix[diagram,row sep=2em,column sep=2em] { |(A0i)| A_{0,i} & & |(B0i)| \tilde{B}_{0,i} &[1em] \\ & |(A01)| A_{0,i+1} & & |(P0)| \bullet & & |(B01)| \tilde{B}_{0,i+1} \\ |(A1i)| A_{1,i} & & |(B1i)| \tilde{B}_{1,i} & \\ & |(A11)| A_{1,i+1} & & |(P1)| \bullet & & |(B11)| \tilde{B}_{1,i+1} \\ }; \draw[->] (B01) to (B11); \draw[->] (P0) to (B01); \draw[->] (P1) to (B11); \draw[->] (A0i) to (B0i); \draw[->] (A1i) to (B1i); \draw[->] (A0i) to (A1i); \draw[->] (B0i) to (B1i); \draw[->,over] (A01) to (P0); \draw[->] (A11) to (P1); \draw[->,over] (A01) to (A11); \draw[->] (P0) to (P1); \draw[->] (A0i) to (A01); \draw[->] (B0i) to (P0); \draw[->] (A1i) to (A11); \draw[->] (B1i) to (P1); \end{tikzeq*} where the bullets stand for the pushouts above. The conclusion we need to obtain amounts to the composite of the two squares in the front being a Reedy cofibration when seen as a morphism from left to right. The right square is a Reedy cofibration since $c\:\!\c$ is a good subcategory of $\c$ and so is the left one since it is a pushout of the back square which is a Reedy cofibration by the inductive hypothesis. \qedhere \end{enumerate} \end{proof} \begin{Lemma}\label{Reedy-levelwise} The inclusion $\mathrm{N} w(\c^{[k]}_\mathrm{R}) \to \mathrm{N} w(\c^{[k]})$ is a weak homotopy equivalence. 
\end{Lemma} \begin{proof} Functorial factorization induces a functor in the opposite direction as well as natural weak equivalences connecting both composites with identities. \end{proof} \begin{proof}[Proof of \cref{cC-C}] Recall that we want to show that $\mathrm{N} w((c\:\!\c)^{[k]}) \to \mathrm{N} w(\c^{[k]})$ is a weak equivalence for all $k$. In the diagram \begin{tikzeq*} \matrix[diagram] { & |(wcR)| \mathrm{N} wc\:\!(\c^{[k]}_\mathrm{R}) & |(wR)| \mathrm{N} w(\c^{[k]}_\mathrm{R}) \\ |(w-c)| \mathrm{N} w((c\:\!\c)^{[k]}) & |(wc)| \mathrm{N} wc\:\!(\c^{[k]}) & |(w)| \mathrm{N} w(\c^{[k]}) \\ }; \draw[->] (wcR) to node[above] {\ding{172}} (wR); \draw[->] (wc) to node[below] {\ding{173}} (w); \draw[->] (w-c) to (wc); \draw[->] (wcR) to node[above left] {\ding{174}} (w-c); \draw[->] (wcR) to (wc); \draw[->] (wR) to node[right] {\ding{175}} (w); \end{tikzeq*} the indicated maps are weak equivalences. The map \ding{172} is a weak equivalence by \cref{W-diagonal} applied to $\c^{[k]}_\mathrm{R}$ with itself as a good subcategory and so is \ding{173} by the same argument applied to $\c^{[k]}$. The map \ding{174} is a weak equivalence by \cref{W-diagonal} applied to $\c^{[k]}_\mathrm{R}$ with the good subcategory of levelwise cofibrations, which is indeed good by \cref{levelwise-good}. Finally, \ding{175} is a weak equivalence by \cref{Reedy-levelwise}. Hence by 2-out-of-3, the bottom composite is also a weak equivalence as required. \end{proof} \begin{ackn} The first author was supported by Wolfgang L\"uck's ERC Advanced Grant ``KL2MG-interactions'' (no.\ 662400) granted by the European Research Council. \end{ackn} \bibliographystyle{amsalpha}
\section{Introduction} Dust coagulation is an important astrophysical process, particularly in the case of planet formation. To form a planet, micron-sized dust particles have to collide and stick to form ever larger dust aggregates. The main problem with this picture is what is known as the "meter-size barrier": once particles reach decimeter to meter sizes, they acquire relative velocities that are too large to allow hit-and-stick growth. Instead, collisions at these speeds, which are on the order of tens of meters per second, lead to the destruction of the colliding aggregates. This is often called the "fragmentation barrier". Moreover, already at millimeter sizes, we encounter the "nonsticking problem", often called the "bouncing barrier": while the aggregates do not fragment upon collision at these sizes, they do not stick together either, meaning that the growth stalls. All these barriers pose a major problem for our understanding of planet formation. In recent years, however, several new ideas have appeared in the literature to overcome these problems. For example, the sweep-up growth scenario has been proposed as a solution to the growth-barrier issue \citep{2012A&A...540A..73W}. In this scenario, a few boulders can grow large by colliding with numerous smaller pebbles through the fragmentation with mass transfer process \citep{2005Icar..178..253W,2009MNRAS.393.1584T,2013A&A...559A.123M}. The growth of the pebbles themselves is suppressed by bouncing and/or fragmentation, while a few larger aggregates, or seeds, might form if the distribution of impact velocities due to stochastic turbulence is taken into account \citep{2012A&A...544L..16W, 2013ApJ...764..146G,2013ApJ...774L...4M}. Thanks to the low-velocity tail of the distribution, some grains can be "lucky" enough to avoid any destructive collisions and undergo only low-velocity sticking collisions, breaking through the growth barriers.
This scenario enables the formation of planetary embryos while still keeping the disk dusty, which is consistent with observations. Key to the trustworthiness of the conclusions derived from numerical models is the reliability of the codes and algorithms used. The problem of coagulation is extremely complex and nonlinear, and with the exception of some very simple coagulation kernels, no analytic solutions exist. So how do we know if the results of our codes are indeed correct? One way is to treat the problem with at least two distinct methods and compare the results. Over the years, different approaches have been developed to study the dust coagulation problem. Besides numerous semi-analytic models, two main numerical approaches are used nowadays: direct numerical integration of the Smoluchowski equation and various Monte Carlo codes. The former is the traditional approach, which has been used in different versions by \citet{1980Icar...44..172W, 1981Icar...45..517N, 1989Icar...77..330W, 1990Icar...83..205O, 2005A&A...434..971D, 2005ApJ...625..414T, 2006ApJ...640.1099N, 2008A&A...480..859B, 2009ApJ...707.1247O, 2010A&A...513A..79B, 2012ApJ...753..119C} and many others. This approach is often used when comparing dust coagulation models to observations, as it allows us to model the dust evolution in the global disk over very long timescales. The Monte Carlo approach is based on work by \citet{1975MNRAS.170..541G} and is used in one form by \cite{2007A&A...461..215O} and in another by \citet{2008A&A...489..931Z}, subsequently used by \citet{2012A&A...537A.125J,2013A&A...552A.137R} and \citet{2013A&A...556A..37D}. The Monte Carlo approach is useful for testing different coagulation models and for including additional properties of dust particles, such as the internal grain structure. It is also better suited for use along with hydrodynamic grid codes.
The two methods are usually benchmarked using analytical solutions of the coagulation equation that are available for three idealized growth kernels (see, e.g., \citealt{1990Icar...83..205O}, \citealt{1990Icar...88..336W}, and \citealt{2000Icar..143...74L}). However, these kernels do not necessarily represent any realistic growth scenario in the protoplanetary disk. In this work, we perform an explicit comparison between the two approaches for the first time. In this comparison, we focus on the sweep-up growth scenario, which is challenging to model for both of the methods. In particular, it was already asserted by \citet{2012A&A...548C...1W} that an artificial breakthrough may occur when a low mass resolution is used in the Smoluchowski method. We study this issue in more detail, and we show that not only high resolution but also a careful treatment of interactions in bins with low particle number densities is needed to avoid nonphysical growth. Until now, the sweep-up growth triggered by "lucky" growth was modeled using the Smoluchowski code only. Using a two-dimensional Monte Carlo code, \citet{2013A&A...556A..37D} showed that sweep-up growth can occur at the inner edge of a dead zone, although in that case it was triggered by the radial transport of big bodies grown inside the dead zone. In this paper, we implement a relative velocity distribution in the Monte Carlo code and directly compare the results of the two approaches. We also present some of their major features and differences. This paper is organized as follows: we describe both of our numerical models in Sect.\ \ref{sub:model}. In Sect.\ \ref{sub:comp}, we compare results obtained with both codes. We discuss issues related to numerical convergence of both methods in Sect.\ \ref{sub:res}. We summarize our findings in Sect.\ \ref{sub:last}.
\section{The numerical models}\label{sub:model} We study the Smoluchowski approach using the code developed by \citet{2008A&A...480..859B} and \citet{2010A&A...513A..79B}, along with an impact velocity distribution implemented as described in \citet{2012A&A...544L..16W}. In this code, we let the dust-grain number density $n(m,r,z)$ be a function of the grain mass $m$, the distance to the star $r$, and the height above the midplane $z$, expressed in number of particles per unit volume per unit mass. The dust evolution can then be solved by integration. However, discretization of the problem is necessary in the integration process, which can lead to significant numerical diffusion in mass space, because having a finite number of grid points means that particle collisions do not necessarily lead to particle masses $m_{\rm p}$ that directly correspond to one of the logarithmically spaced sampling points. The approach by \citet{2008A&A...480..859B} was to implement an algorithm that distributes the mass of the resulting particle into the two adjacent mass bins corresponding to grid points ${\rm i}$ and ${\rm i}+1$, $m_{\rm i} < m_{\rm p} < m_{\rm i+1}$, according to \begin{equation}\label{epsilon} \epsilon = \frac{m_{\rm p} - m_{\rm i}}{m_{{\rm i}+1}-m_{\rm i}}, \end{equation} where $\epsilon \cdot m_{\rm p}$ is put into mass bin $m_{\rm i+1}$, and $(1-\epsilon)\cdot m_{\rm p}$ is put into mass bin $m_{\rm i}$. This algorithm is based on the work of \citet{1969JAtS...26.1060K} and is adopted by most of the modern dust coagulation codes. This approach, however, leads to some numerical diffusion, as $m_{\rm i+1} > m_{\rm p}$ means that mass is inserted into a mass bin that corresponds to a larger mass than the mass of the physical particle created in the collision. If the spacing between the mass bins is too coarse, this leads to a significant artificial speed-up of the growth rate \citep{1990Icar...83..205O}.
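The redistribution step of Eq.\ \ref{epsilon} takes only a few lines; the following is a minimal sketch in Python (the grid construction and the function name are our illustration, not taken from the actual code):

```python
import numpy as np

def distribute_mass(m_grid, m_p, mass):
    """Linearly split the mass of a new particle of mass m_p between the
    adjacent grid points m_i <= m_p < m_{i+1}: a fraction eps goes to bin
    i+1 and (1 - eps) to bin i, so total mass is conserved exactly."""
    i = np.searchsorted(m_grid, m_p) - 1              # bin with m_i <= m_p
    eps = (m_p - m_grid[i]) / (m_grid[i + 1] - m_grid[i])
    added = np.zeros_like(m_grid)
    added[i] = (1.0 - eps) * mass
    added[i + 1] = eps * mass
    return added

# coarse logarithmic grid: 3 bins per mass decade from 1e-12 g to 1e2 g
m_grid = np.logspace(-12, 2, 3 * 14 + 1)
added = distribute_mass(m_grid, m_p=2.0e-5, mass=2.0e-5)
assert np.isclose(added.sum(), 2.0e-5)                # mass conservation
```

Note that the fraction $\epsilon$ of the mass ends up in a bin representing particles heavier than $m_{\rm p}$; on a coarse grid this is precisely the numerical diffusion responsible for the artificial growth speed-up discussed above.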
We show that the same effect strongly affects the number of seeds formed in the "lucky growth" scenario; however, the problem is more severe and requires a careful approach to low number density regions in this case. In the Monte Carlo method, we use the representative particles approach described by \citet{2008A&A...489..931Z} and implemented by \citet{2013A&A...556A..37D}. We have slightly modified the method to match the vertical treatment of the Smoluchowski code (described by \citealt{2008A&A...480..859B}). Instead of performing an explicit vertical advection, we redistribute the particles according to a Gaussian distribution with a width: \begin{equation}\label{hdust} H_{\rm{dust}} = H_{\rm{gas}} \left[ 1 + \frac{\min{\left(0.5,\rm{St}\right)\left(1+\rm{St}^2\right)}}{\alpha}\right]^{-\frac{1}{2}}, \end{equation} where $H_{\rm{gas}}$ is the pressure scale height of the gas, $\rm{St}$ is the particle's Stokes number and $\alpha$ is the turbulence strength parameter. In this way, we account for the reduction in the collision rate between small and large grains due to their different vertical settling. This occurs because small particles are more strongly affected by the turbulent diffusion, and, thus, their density in the midplane of the disk is lower than in the case of large particles. One of the assumptions of the representative particle approach that we implement is that one representative particle, or swarm, represents a constant amount of mass, which is equal to \begin{equation}\label{mswarm} M_{\rm{swarm}} = M_{\rm{tot}} / N_{\rm{swarms}}, \end{equation} where $M_{\rm{tot}}$ is the total mass of dust present in the computational domain and $N_{\rm{swarms}}$ is the number of swarms used. In other words, the total mass of dust $M_{\rm{tot}}$ is divided into $N_{\rm{swarms}}$ equal-mass units. 
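As a minimal sketch (assuming nothing beyond the two formulas above; the function names and example numbers are ours, purely for illustration), Eqs.\ \ref{hdust} and \ref{mswarm} can be written as:

```python
import numpy as np

def dust_scale_height(H_gas, St, alpha):
    """Settling-reduced dust scale height: grains with small Stokes
    numbers follow the gas, larger grains settle toward the midplane."""
    return H_gas * (1.0 + min(0.5, St) * (1.0 + St**2) / alpha) ** -0.5

def swarm_mass(m_tot, n_swarms):
    """Mass carried by one representative particle (swarm)."""
    return m_tot / n_swarms

# tightly coupled dust stays at the gas scale height ...
assert dust_scale_height(1.0, St=1e-8, alpha=1e-2) > 0.999
# ... while St = 1 pebbles settle into a thin midplane layer
assert dust_scale_height(1.0, St=1.0, alpha=1e-2) < 0.1

# each swarm of physical particle mass m_i stands for N_i = M_swarm / m_i
# particles; a lone body of total mass below M_swarm cannot be represented
M_swarm = swarm_mass(m_tot=1.0, n_swarms=10_000)
assert np.isclose(M_swarm / 1e-12, 1e8)
```

The last two lines encode the resolution limit discussed in the text: lowering $M_{\rm swarm}$ (by raising $N_{\rm swarms}$) is the only way this scheme can resolve rare massive bodies.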
Each of these units represents physical particles of mass $m_i$ ($m_i \ll M_{\rm{swarm}}$), but the number of these particles $N_i$ has to be such that $N_i \cdot m_i = M_{\rm{swarm}}$. The algorithm fails if, for example, a physical coagulation kernel would lead to the formation of only one massive particle with mass $m_{\rm p}$, while keeping all the other particles small. If $m_{\rm p}<M_{\rm{swarm}}$, the single big particle cannot be resolved, because it involves less mass than the smallest available unit $M_{\rm{swarm}}$. Increasing the number of swarms, $N_{\rm{swarms}}$, lowers $M_{\rm{swarm}}$ and thus improves the mass resolution of the method; however, the computation time increases quadratically with the number of swarms used. The dynamic mass range of the representative particle approach is therefore limited by the number of swarms used. This issue can be overcome by implementing a more advanced algorithm, such as the "distribution method" proposed by \citet{2008ApJ...684.1291O}, which involves continuously adjusting $M_{\rm{swarm}}$ by splitting and merging the swarms. This method was later used in the context of accretion among planetesimals \citep{2010Icar..210..507O}, allowing the Monte Carlo method to resolve a runaway coagulation kernel. However, the method has not yet been tested with velocity distributions or the complicated collision models that are needed to break through the growth barriers. \section{Comparison of results obtained with both methods}\label{sub:comp} Because our Monte Carlo method is not capable of the same dynamic mass range as the Smoluchowski method, to directly compare the two methods we choose a setup where the particle breakthrough (i.e.\ where the "lucky" particles can start to grow by mass transfer) occurs at a relatively high particle number density, which is possible to resolve with our representative particle approach (see the previous section). For the disk model, we use the minimum-mass extrasolar nebula \citep{2013MNRAS.431.3444C} at 1\ AU.
The gas surface density is $\Sigma_{\rm{gas}}=9900$~g~cm$^{-2}$, and the temperature is $T=280$~K. We assume a turbulence of $\alpha=10^{-2}$ and a standard dust-to-gas ratio of $10^{-2}$. We take the relative velocities between the dust grains driven by Brownian motion and turbulence into account by calculating the root-mean-square impact velocity $v_{\rm{rms}}$ from the formulas derived by \citet{2007A&A...466..413O}, and we assume a Maxwellian distribution of the impact velocity. We consider sticking, fragmentation, and mass transfer as possible outcomes of a collision, which we refer to as the SF$+$MT model \citep{2010A&A...513A..56G, 2012A&A...544L..16W}. The collision outcome is determined by taking the sticking and fragmentation$\slash$mass transfer probabilities $P_{\rm{s}}(v_{\rm{rms}})$ and $P_{\rm{f\slash mt}}(v_{\rm{rms}})$ into account, which are calculated analogously to \citet{2012A&A...544L..16W}. We take the fragmentation threshold velocity to be $v_{\rm{f}}=50$ cm~s$^{-1}$. If a collision should lead to fragmentation but the mass ratio of the colliding particles is $m_1/m_2>20$ ($m_1 > m_2$), we assume projectile fragmentation with mass transfer with a 10$\%$ efficiency; that is, the larger particle gains 10$\%$ of the mass of the smaller one during the event. Realistic values of the collision parameters are poorly constrained, as discussed in Sect.~\ref{sub:last}. Implementing the same setup in both of the codes, we perform a number of runs, varying the numerical resolution by around one order of magnitude. In the Smoluchowski code, we use from 3 to 40 mass bins per decade. In the Monte Carlo code, we use from 12,000 to 120,000 representative particles, and we repeat each run ten times with different random seeds. The Monte Carlo method relies on random numbers used to determine which particles participate in the subsequent collisions and to calculate collision time steps \citep{2008A&A...489..931Z}.
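The per-collision logic of the SF$+$MT model described above can be sketched as follows. This is our own hedged illustration: the actual codes work with the outcome probabilities $P_{\rm s}$ and $P_{\rm f\slash mt}$ of \citet{2012A&A...544L..16W}, whereas here we simply draw a single impact speed from the Maxwellian and apply the thresholds; all names and numbers are ours:

```python
import numpy as np

rng = np.random.default_rng(42)

V_FRAG = 50.0     # fragmentation threshold velocity [cm/s]
MT_RATIO = 20.0   # mass ratio above which fragmentation becomes mass transfer
MT_EFF = 0.1      # fraction of the projectile mass gained by the target

def sample_maxwellian(v_rms):
    """Draw one impact speed from a Maxwellian with the given rms value:
    each Cartesian component is Gaussian with sigma = v_rms / sqrt(3)."""
    return np.linalg.norm(rng.normal(0.0, v_rms / np.sqrt(3.0), size=3))

def collide(m1, m2, v_rms):
    """Return ('stick' | 'fragment' | 'mass_transfer', new target mass)."""
    v = sample_maxwellian(v_rms)
    big, small = max(m1, m2), min(m1, m2)
    if v < V_FRAG:                  # low-velocity tail: "lucky" sticking
        return "stick", m1 + m2
    if big / small > MT_RATIO:      # projectile fragments, target still grows
        return "mass_transfer", big + MT_EFF * small
    return "fragment", None        # similar-sized grains destroy each other

outcomes = {collide(1.0, 1.0e-3, v_rms=100.0)[0] for _ in range(2000)}
# at v_rms = 2 v_f and mass ratio 1000, both lucky sticking and mass
# transfer occur, but never mutual fragmentation
assert outcomes == {"stick", "mass_transfer"}
```

The low-velocity tail of the Maxwellian is exactly what lets a small fraction of collisions stick even when $v_{\rm rms}$ exceeds $v_{\rm f}$, which is the seed of the breakthrough studied below.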
Thus, outcomes of the Monte Carlo runs performed with different random seeds vary despite using the same setup. In the velocity distribution case, this effect is even stronger, meaning that multiple runs with different random seeds are necessary. As a result, our high resolution Monte Carlo models need a few days on an 8-core 3.1 GHz AMD machine. For comparison, the Smoluchowski models with 40 bins per mass decade take about one hour on a single core processor. \begin{figure} \centering \includegraphics[width=\hsize]{MMEN.pdf} \caption{Comparison of the mass distribution evolution obtained using our Smoluchowski and Monte Carlo codes. In a standard case, when an insurmountable fragmentation barrier is present, the two methods agree perfectly (three upper panels); however, when breakthrough is possible, we encounter some differences between the results obtained with the two approaches (three bottom panels). For the Smoluchowski code, the presented results were obtained with a resolution of 30 mass bins per decade. For the Monte Carlo code, the results are averaged from the simulations using $120,000$ particles. The "check points", m$_1$ and m$_2$, which are indicated with the dotted lines, are used to quantitatively compare dust growth timescales in Sect.\ \ref{sub:timescales}.} \label{fig:SFMT} \end{figure} Figure~\ref{fig:SFMT} shows the time evolution of the dust mass distribution obtained with both of the codes. For the Smoluchowski code, we display results obtained with the resolution of 30 bins per mass decade. For the Monte Carlo code, we display averaged results, along with their scatter, of our highest resolution simulations with 120,000 particles that additionally showed a breakthrough before 100 yrs; otherwise, the figure would become indecipherable due to high noise.
The early evolution, as seen in the upper panels of Fig.\ \ref{fig:SFMT}, is very similar in both codes. The point at which the dust distribution hits the fragmentation barrier, where the growth of the majority of particles is hindered by fragmentation occurring when the impact velocities exceed the fragmentation threshold velocity $v_{\rm{f}}$, is identical ($m \cong 10^{-2}$~g, $t \cong 20$~yrs). If fragmentation with mass transfer collisions are not included and there is no possibility of growth beyond the fragmentation barrier, the steady state is represented by the third panel of Fig.\ \ref{fig:SFMT}, and the two methods agree perfectly. However, when we include the breakthrough possibility, we encounter some differences between the two approaches, which are visible at later stages of the evolution. Both of the codes reveal that some of the particles can grow beyond the fragmentation barrier thanks to the low-velocity sticking collisions. The growth is generally very quick, and meter-sized bodies are formed within $<$1000 yrs. However, the breakthrough generally occurs later in the Monte Carlo code, and the population of big grains lacks the characteristic waves seen in the distribution obtained from the Smoluchowski code, as can be seen in the bottom panel of Fig.\ \ref{fig:SFMT}. The differences in the late stages of evolution are caused by the restricted dynamic mass range of our representative particle approach that was discussed in Sect.\ \ref{sub:model}. We further discuss issues related to resolution dependencies of both methods in the following section. \section{Resolution dependence}\label{sub:res} The results obtained with both codes exhibit resolution dependence. In this section, we discuss specific issues connected to the numerical convergence of both the Smoluchowski and Monte Carlo methods.
\subsection{Growth timescales}\label{sub:timescales} To quantitatively compare the dust growth obtained in both codes, we establish "check points" m$_1$ and m$_2$, as marked with the dotted lines in Fig.\ \ref{fig:SFMT}. We arbitrarily choose m$_1 = 2\cdot10^{-5}$~g and m$_2 = 1$~g, and mark the time at which the peak of the mass distribution reaches a mass corresponding to one of the check points. Figures \ref{fig:m1times} and \ref{fig:m2times} show the results obtained for m$_1$ and m$_2$, respectively. We show the times of crossing the "check points" as a function of the number of bins used in the Smoluchowski code and of the number of representative particles used in the Monte Carlo method. We note that the scaling of the x-axes of these figures is arbitrary, as there is no direct way to relate the number of bins in the Smoluchowski method to the number of representative particles in the Monte Carlo method. Mass m$_1$ is located before the fragmentation barrier. This part of the evolution corresponds to a standard growth scenario that does not include the "lucky" breakthrough possibility and is well resolved by both of the codes. In the case of the Monte Carlo code, the time of crossing m$_1$ depends very weakly on the number of particles used. The difference between individual runs, as marked by the shaded region in Fig. \ref{fig:m1times}, is also very small. The Smoluchowski code exhibits a much stronger resolution dependence. In the case of our lowest resolution of 3 bins per mass decade, the growth is more than four times faster than with 40 bins. The results obtained with a resolution of 40 bins per mass decade converge to a value that is consistent with the one given by the Monte Carlo code. The critical resolution agrees with the findings of \citet{2000Icar..143...74L} and \citet{2009ApJ...707.1247O}.
\begin{figure} \centering \includegraphics[width=\hsize]{m1times.pdf} \caption{Time at which the peak of the mass distribution reaches the mass m$_1 = 2\cdot10^{-5}$~g for both the Smoluchowski and Monte Carlo code. The result changes with mass resolution, given by the number of bins per mass decade in the case of the Smoluchowski code and the number of particles used in the Monte Carlo code. The scatter of results obtained in different runs with the same number of particles in the Monte Carlo code is marked by the shaded region around an averaged dependence. The Monte Carlo approach does not exhibit a strong resolution dependence. In contrast, the Smoluchowski algorithm overestimates the growth rate by a factor of a few when we do not use enough mass bins.} \label{fig:m1times} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{m2times.pdf} \caption{Same as Fig.\ \ref{fig:m1times} but for the "check point" m$_2=1$~g. The resolution dependence of the Smoluchowski method is the same as for m$_1$, but it changes significantly in the case of the Monte Carlo code. The Monte Carlo code tends to underestimate the breakthrough possibility when used with a low number of particles due to the limited dynamic mass range.} \label{fig:m2times} \end{figure} The reason for the high diffusion of the Smoluchowski method is the linearity of the algorithm described in Eq.\ \ref{epsilon}, which is a necessity for solvers that employ implicit integration schemes to overcome the numerical stiffness of the equations (see \citealt{2008A&A...480..859B} for further details). Explicit solvers, on the other hand, are capable of implementing higher order mass distribution schemes, which lowers the numerical diffusion. We find that steady states that arise when a bouncing or fragmentation barrier is met and that have no possibility of breaking through are significantly less dependent on mass resolution.
Thus, the conclusions of most of the papers that did not include breakthrough are not affected, even if a lower resolution was used. As can be seen in Fig.\ \ref{fig:m2times}, the situation changes significantly in the case of the second check point m$_2$. This point is located beyond the fragmentation barrier and the breakthrough point. Due to the limited dynamic mass range, the resolution dependence of the Monte Carlo code is much stronger here. The dispersion of results obtained in individual runs is much higher and can reach an order of magnitude. Generally, the fewer particles we use, the longer we have to wait for the breakthrough. In the case of the Smoluchowski code, the resolution dependence is the same as for m$_1$, meaning that the lower the resolution we use, the faster the breakthrough occurs. This is consistent with the findings of \citet{2013ApJ...764..146G}. The results obtained with both of the codes roughly converge for our highest accuracy. Both of the methods become computationally inefficient when used with even higher resolutions. The different resolution dependence seen in Fig.\ \ref{fig:m2times} is a result of a fundamental difference between the two approaches. In a real protoplanetary disk, the number of physical particles is so high that even when the breakthrough probability is low, some particles will be "lucky" and overcome the fragmentation barrier quickly. However, the number of representative particles is restricted in the Monte Carlo code, so the breakthrough probability is additionally reduced. In contrast, the breakthrough is resolved much more easily in the Smoluchowski code because it deals with number densities instead of discrete particles. However, this introduces another problem, which is discussed in the following section.
\subsection{Breakthrough probability} \label{subsec:breakposs} The "lucky growth" scenario has introduced a new issue concerning how numerical diffusion affects the global dust evolution in the Smoluchowski method. In this section, we discuss this issue and introduce a way to limit its effect by adding a modulation function to the coagulation algorithm that suppresses the interactions with mass bins containing unrealistically low particle numbers. As discussed above, when velocity distributions are introduced, the collision barriers are naturally smeared out. Because a particle that is more massive than the grains at the mass distribution peak must be "lucky" and has to grow by only interacting with other particles in low-velocity collisions, it becomes necessary to accurately resolve the high-mass tail of the distribution. Otherwise, if not all sticking events are resolved properly, the slope of the tail becomes incorrect, creating artificially large mass ratios between the luckiest grains and those in the peak. As an example of this, we can consider the extreme case where the entire high-mass tail is represented by a single mass bin $m_i$. If two grains in the peak undergo a single sticking event, forming a particle of mass $m \ll m_{i+1}$, some mass will still be put into the mass bin ${i+1}$, even though the particles would need to undergo several consecutive sticking events to reach a mass $m_{i+1}$ in reality. Such a badly resolved large-particle tail could cause an artificial breakthrough of growth barriers, as unrealistically large particles can form and continue to grow by the sweep-up process where no such particles would form in a better resolved case. To show this issue clearly, we perform additional simulations with the Smoluchowski code using a critical mass transfer ratio, that is, the mass ratio above which a fragmenting collision leads to mass transfer (see Sect.\ \ref{sub:comp}), of 500.
The point of breakthrough then occurs at a very low dust density, which is impossible to resolve with our implementation of the Monte Carlo method, as the particles that should break through would involve less mass than the mass of a single swarm in our simulation (see also Sect.\ \ref{sub:model}). We have therefore performed a resolution study with the Smoluchowski code only. Simulations were run with resolutions between 3 and 40 bins per decade of mass, with the results shown in Fig.~\ref{fig:Smoluchresol}. As can be seen, the SF$+$MT case is extremely sensitive to the mass resolution. Between the highest and the lowest resolutions, the point of breakthrough differs by more than 25 orders of magnitude in surface density. For resolutions above 30 bins per decade, the point of breakthrough would have occurred at a density lower than the density corresponding to 1 particle in an annulus of 0.1 AU width. \begin{figure} \centering \includegraphics[width=\hsize]{resstudy.pdf} \caption{The effect of mass resolution in the case of the SF$+$MT setup with a critical mass ratio of 500 for the Smoluchowski code. The resolutions range from 3 to 40 bins per mass decade, and the mass distribution is shown after 100 yrs of evolution. The point of breakthrough is very sensitive to the resolution used, and the breakthrough happens only in the low resolution cases, which is clearly nonphysical. The dashed lines mark the density corresponding to 1 and $10^6$ particles of a given mass in an annulus of 0.1 AU width.} \label{fig:Smoluchresol} \end{figure} To correctly simulate the lowest dust densities, it becomes necessary to include a modulation function, $f_{\rm mod}$, that limits the interactions between mass bins that have particle numbers that are too low. The reasoning behind this is the following. For breakthrough to occur, we need at least one real particle within the simulation domain that is large enough to trigger a sweep-up.
Because the Smoluchowski code deals with number densities in a single point in space, the limiting particle density becomes somewhat arbitrary and relies on choosing a reasonable physical domain size. The modulation function works in a similar way to the active bin method introduced by \citet{2000Icar..143...74L}. However, instead of speeding up the code by deactivating low-populated mass bins, it is a continuous method that prevents the growth of bins with unrealistically low particle numbers. The collision frequency between particle species $i$ and $j$ can then be written as \begin{equation}\label{intrates} f_{ij} = n_i n_j K_{ij} \cdot f_{\rm mod}, \end{equation} where $n_i$ and $n_j$ are the number densities and $K_{ij}$ is the coagulation kernel. We choose $f_{\rm mod} = \exp{\left( -1/N_i -1/N_j \right)}$, where $N_i$ and $N_j$ are the numbers of particles in a given surface of the disk, which we take to be a 0.1~AU wide annulus at 1~AU. The result of this is that mass is still allowed to be put into a mass bin holding fewer than one particle, by coagulation, for example, but the mass inside is unable to coagulate further until the bin contains sufficient mass. For our setup, the modulation function $f_{\rm mod}$ suppresses the breakthrough in the case of resolutions higher than 30 bins per mass decade. The breakthrough that occurs in the runs with lower resolution is a result of the numerical diffusion introduced by the mass distribution algorithm (Eq.\ \ref{epsilon}), which is discussed in Sect.~\ref{sub:model}. This can be seen from the sharp cut-off at masses between $10^{-2}-10^{-1}$ g for the highest resolutions. We stress that this particular resolution dependency varies very strongly between setups, and it is necessary to confirm numerical convergence individually. We note that even a high resolution run can lead to a nonphysical breakthrough if $f_{\rm mod}$ is not included.
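A minimal sketch of the modulated collision frequency of Eq.\ \ref{intrates} (the function name and the example numbers are our illustration):

```python
import numpy as np

def modulated_rate(n_i, n_j, K_ij, N_i, N_j):
    """Collision frequency f_ij = n_i * n_j * K_ij * f_mod with
    f_mod = exp(-1/N_i - 1/N_j): bins holding many particles in the
    reference annulus are unaffected, nearly empty bins are frozen."""
    f_mod = np.exp(-1.0 / N_i - 1.0 / N_j)
    return n_i * n_j * K_ij * f_mod

# well-populated bins: f_mod ~ 1, the rate is essentially unmodified
assert modulated_rate(1.0, 1.0, 1.0, N_i=1e6, N_j=1e6) > 0.999
# a bin holding "half a particle" is suppressed by roughly exp(-2)
half = modulated_rate(1.0, 1.0, 1.0, N_i=0.5, N_j=1e6)
assert 0.13 < half < 0.14
```

Mass can still flow into a suppressed bin through coagulation from below, but that bin's own collisions stay effectively switched off until it accumulates on the order of one physical particle.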
In such a case, breakthrough would initially occur at densities corresponding to less than one particle in an annulus of 0.1 AU width, but the density of the high-mass tail could increase over the threshold density during further evolution. Thus, the combination of sufficient numerical resolution and the modulation function is necessary to ultimately confirm the possibility of forming planetesimals in the breakthrough scenario under given conditions. In this section, we clarified the issue of overly coarse mass resolution found by \citet{2012A&A...548C...1W}. Although the lower resolutions commonly used in previous papers might work well for less extreme cases, which did not include the possibility of breakthrough, an artificial growth might occur for problems of the kind discussed here. Thus, we stress that careful convergence tests are necessary to confirm the breakthrough possibility. We find that the convergence depends strongly on the strength of the growth barrier, and therefore, a separate convergence study is needed for each setup. As was shown in the previous section, the Monte Carlo code underestimates the breakthrough chance with low particle numbers, in contrast to the Smoluchowski code, and, thus, the time at which breakthrough happens increases. If the Monte Carlo code is used with an insufficient number of particles, the breakthrough might be completely suppressed. This happens when the mass that should be involved in the breakthrough is lower than $M_{\rm{swarm}}$, the mass represented by a single swarm, which limits the dynamic range of the Monte Carlo method (see Sect.~\ref{sub:model}). The minimum number of representative particles required depends on the breakthrough mechanism. Here, the breakthrough is driven by the distribution of impact velocities, which smears out the fragmentation barrier by changing the slope of the high-mass end of the mass distribution.
The breakthrough is only possible if the largest particles are more massive than the particles in the mass distribution peak by a factor set by the critical mass ratio. Thus, the number of breakthrough particles is determined by the slope of the mass distribution function and the value of the critical mass transfer ratio. The steeper the slope and the higher the critical mass ratio, the fewer particles can break through. However, it was shown by \citet{2013A&A...556A..37D} that the breakthrough can also be driven by radial mixing of dust aggregates between regions of different grain sizes. In such a case, the number of breakthrough particles follows different constraints than those discussed here. \section{Discussion and conclusions}\label{sub:last} The issue of the numerical resolution of Smoluchowski codes has long been discussed in the context of dust coagulation in protoplanetary disks \citep{1990Icar...83..205O, 1990Icar...88..336W,2000Icar..143...74L}. These codes used different algorithms that do not necessarily result in resolution dependencies similar to those of the implicit integration scheme with the linear mass distribution algorithm that we use. In this paper, we extended the prior studies by implementing the possibility of breaking through growth barriers with impact velocity distributions and by including a direct comparison to the Monte Carlo algorithm with the representative particle approach. The two methods had never before been explicitly compared. Our work showed that the two methods give consistent results when applied to standard coagulation problems. However, we find that modeling the recently discovered planetesimal formation via "lucky growth" is much more challenging. Although the results obtained with the two methods converge for sufficient resolution, the approaches are fundamentally different, and their limitations have to be kept in mind when building scientific models.
In agreement with previous studies, we find that simulations with the Smoluchowski code require a sufficiently high mass resolution to avoid an artificial speed-up of the growth rate. This problem arises from the numerical diffusion introduced by our implementation of the algorithm that describes how the resulting mass is distributed after a collision event. Additionally, we show that introducing the modulation function, which prevents interactions between mass bins containing less than one physical particle, is necessary to study dust coagulation at low number densities. In Sect.~\ref{subsec:breakposs}, we showed that these numerical issues can change both the quantitative and the qualitative result in the case of the breakthrough scenario. The Monte Carlo approach applied to the breakthrough scenario, in which only a few "lucky" particles break through the growth barriers, results in high noise. In the presented tests, the individual runs show breakthrough times that differ by orders of magnitude, and their averaged value depends on the mass resolution. However, contrary to the Smoluchowski approach, the Monte Carlo approach does not suffer from a strong resolution dependence when the dust aggregates grow with a relatively narrow size distribution, which is usually the case for small-aggregate evolution when neither breakthrough nor runaway growth is possible. Monte Carlo methods are generally computationally expensive, and they require numerous runs with different random seeds to reduce the noise. The convergence of each of the methods can be very different for every setup. We cannot present a general recipe for the minimum resolution required to study dust growth, because it can vary enormously from case to case. Thus, it is important to run resolution tests for every new physical model until convergence of the results is obtained.
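As an illustration of what such a benchmark looks like, the constant coagulation kernel admits the closed-form solution $n_k(t)=(t/2)^{k-1}/(1+t/2)^{k+1}$ for monodisperse initial conditions, against which any discretization can be checked. The sketch below uses a simple explicit Euler step on integer masses; it is not the implicit scheme or the linear mass distribution algorithm used in this work, and the step size, mass cutoff, and names are our own choices:

```python
def evolve_constant_kernel(kmax=50, dt=1e-3, t_end=1.0):
    """Explicit-Euler integration of the discrete Smoluchowski equation
    dn_k/dt = (1/2) sum_{i+j=k} n_i n_j - n_k sum_j n_j
    with constant kernel K_ij = 1 and monodisperse start n_1(0) = 1."""
    n = [0.0] * (kmax + 1)          # n[k] = number density of mass-k particles
    n[1] = 1.0
    for _ in range(int(round(t_end / dt))):
        total = sum(n[1:])
        dn = [0.0] * (kmax + 1)
        for k in range(2, kmax + 1):
            gain = 0.5 * sum(n[i] * n[k - i] for i in range(1, k))
            dn[k] = gain - n[k] * total
        dn[1] = -n[1] * total       # monomers are only consumed
        for k in range(1, kmax + 1):
            n[k] += dt * dn[k]
    return n

def analytic(k, t):
    """Exact constant-kernel solution for monodisperse initial conditions."""
    return (t / 2.0) ** (k - 1) / (1.0 + t / 2.0) ** (k + 1)

n = evolve_constant_kernel()
for k in (1, 2, 5):
    print(k, n[k], analytic(k, 1.0))
```

Repeating the run with a halved `dt` or a larger `kmax` and comparing against `analytic` is the essence of the convergence test advocated above; for realistic kernels no closed form exists, and convergence must be established empirically.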
Since we find that the numerical convergence is sensitive to the collision model parameters, it is important to use realistic values; however, these are poorly constrained, as the amount of data available from laboratory experiments is limited, and the experiments are still not necessarily reproduced by direct numerical simulations. Laboratory experiments show that bouncing collisions start to occur at velocities between 0.1 and 10 cm~s$^{-1}$ \citep{2013Icar..225...75K}, while numerical simulations claim that such collisions occur only rarely, if ever \citep{2011ApJ...737...36W, 2013A&A...551A..65S}. Fragmentation is also hotly debated, as laboratory experiments find fragmentation to occur at velocities as low as a few cm~s$^{-1}$ in collisions between 5~cm grains \citep{2012ApJ...758...35S} and up to a few m~s$^{-1}$ for mm-sized grains \citep{Lammel:mLrgVib9}. Numerical simulations, on the other hand, predict significantly higher threshold velocities for fragmentation, ranging between 1 and 12 m~s$^{-1}$ for 6 to 10~cm-sized grains \citep{2011ApJ...737...36W, 2013MNRAS.435.2371M}. The value of the fragmentation threshold velocity determines the maximum particle size that can be reached before the breakthrough happens. Mass transfer experiments are even more uncertain, with the mass transfer efficiency ranging between 0 and 60\% \citep{2009MNRAS.393.1584T, 2010ApJ...725.1242K, 2011ApJ...736...34B,2013A&A...559A.123M}, and the critical mass ratio essentially unexplored in the laboratory and explored only for small ratios numerically \citep{2013MNRAS.435.2371M,2013A&A...559A..62W}. In all of these cases, additional material and collisional properties, such as porosity, composition, structure, and impact angle, also greatly influence the outcome, which adds to the uncertainty.
As discussed in this work, the critical mass transfer ratio might need to be significantly lower than estimated in prior studies, owing to the relatively high numerical resolution that Smoluchowski solvers require to accurately represent the high-mass tail. The collision models used in sweep-up modeling attempt to simplify the very complex physics of collisions between dust agglomerates. There have been a few attempts at more rigorous models \citep{2010A&A...513A..57Z, 2012A&A...540A..73W} that consolidate the recent progress in laboratory and numerical collision experiments \citep{2007ApJ...661..320W,2008ARA&A..46...21B,2010A&A...513A..56G}, but these are still not necessarily more correct than the simple models that narrow the modeling down to only a few key parameters \citep{2012A&A...544L..16W, 2013ApJ...764..146G, 2013ApJ...774L...4M, 2013A&A...556A..37D}. As the critical mass ratio plays a crucial role in determining whether breakthrough is possible, constraining its realistic values should be a priority for future laboratory studies. We find the breakthrough scenario to be more sensitive to resolution issues than the other problems we have tested, in which dust growth is completely halted by bouncing or fragmentation. At the same time, this scenario is of particular importance for current planetesimal formation theory. Regardless of the method used, modeling of "lucky growth" requires considerable computational effort, and restricting the resolution to make the models more efficient can lead to serious numerical artifacts: nonphysical breakthrough in the case of the Smoluchowski approach and a lack of breakthrough in the case of the Monte Carlo approach. To conclude, we list features that characterize the two main coagulation methods. One should keep these in mind when deciding which method to use for a particular scientific application.
The Smoluchowski equation solver with implicit integration scheme
\begin{itemize}
\item is capable of simulating dust evolution over long timescales (even at high resolutions, 0D simulations are finished within minutes),
\item resolves equilibrium states well, as the implicit integration scheme allows for very large time-steps once the solution approaches steady-state \citep{2010A&A...513A..79B},
\item has a very high dynamical range that allows phenomena involving a single physical particle of tiny mass (compared to the total dust mass) to be resolved, which makes it ideal for breakthrough studies \citep{2012A&A...540A..73W} and for producing synthetic observations, as the opacity is dominated by small particles, which may contain only a small fraction of the dust mass,
\item is slowed down by a factor of $\mathcal{O}(n^3)$, where $n$ is the number of mass bins, for each additional dust property beyond mass \citep{2012A&A...540A..73W}, although numerical tricks exist to circumvent this (see, e.g., \citealt{2009ApJ...707.1247O}),
\item suffers from high numerical diffusion that affects both the growth timescale and the breakthrough likelihood. The effect on the growth timescale can be benchmarked against the analytical kernels (e.g.\ \citealt{1990Icar...83..205O}), but in the breakthrough case it depends strongly on the strength of the barrier.
\end{itemize} The Monte Carlo coagulation algorithm with equal-mass representative particles
\begin{itemize}
\item makes it easy to implement additional particle properties, such as porosity \citep{2008A&A...489..931Z, 2010A&A...513A..57Z}, because the computation time does not depend significantly on the number of properties that are evolved, but only on the number of collisions performed,
\item is straightforward to extend to additional spatial dimensions \citep{2011A&A...534A..73Z, 2013A&A...556A..37D}, as the representative particles can be treated as Lagrangian tracer bodies,
\item can be used along with hydrodynamic grid codes \citep{2012A&A...537A.125J},
\item experiences no numerical diffusion of the mass function in general, so there is no danger of encountering an artificial speed-up of the growth,
\item has difficulty resolving features that comprise a low fraction of the total mass, which makes it less useful for modeling breakthrough of the growth barriers or runaway growth, although the algorithm can be developed to overcome this issue \citep{2008ApJ...684.1291O},
\item makes it hard to model evolution over long timescales in general, because it is impossible to use extremely long time-steps, as every collision needs to be resolved.
\end{itemize}
\begin{acknowledgements} We thank the anonymous referee as well as Chris Ormel, Andras Zsom, Satoshi Okuzumi, Til Birnstiel, Sebastian Stammler and Sebastian Lorek for useful comments that helped to improve the manuscript. J.D. was partially supported by the Innovation Fund FRONTIER of the Heidelberg University. F.W. was funded by the Deutsche Forschungsgemeinschaft within the Forschergruppe 759 "The Formation of Planets: The Critical First Growth Phase". J.D.
would also like to acknowledge the use of the computing resources provided by bwGRiD (http://www.bw-grid.de), member of the German D-Grid initiative, funded by the Ministry for Education and Research (Bundesministerium f\"{u}r Bildung und Forschung) and the Ministry for Science, Research and Arts Baden-W\"{u}rttemberg (Ministerium f\"{u}r Wissenschaft, Forschung und Kunst Baden-W\"{u}rttemberg). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} There are abundant indications of a remarkable mechanism of superconductivity in cuprate high temperature superconductors, known as the interlayer tunneling mechanism\cite{Anderson1}. At its simplest, the theory is that the confinement kinetic energy of the electrons in the normal state is converted into the superconducting binding energy. From the uncertainty principle, confinement implies a kinetic energy of order $\hbar^2/2m d^2$, where $d$ is the separation between the planes. It is as though the electrons were confined in a deep potential well in the direction perpendicular to the CuO-planes, the $c$-axis\cite{foot1}. There is considerable experimental support for this idea, although theoretical controversy persists. The very concept of confinement of the motion of the electrons is at odds with the time-honored notion of a Fermi liquid. At issue is the concept of the orthogonality catastrophe in a non-Fermi liquid, which posits that motion along the $c$-axis is accompanied by the overlap of many-particle wave functions of $N$ electrons, which vanishes as $N\to \infty$. In the absence of controlled non-perturbative methods to treat this inherently non-perturbative phenomenon, little has been settled. It is therefore necessary to present the arguments from the point of view of general principles. Consider the question posed in the title of this paper: ``Do electrons change their $c$-axis kinetic energy upon entering the superconducting state?" It is useful to expand on the precise meaning of this question. In a BCS superconductor, the kinetic energy of the superconducting state is {\em greater} than that of the normal state. The reason is that the normal state is a Fermi liquid in which the kinetic energy is diagonal, the happiest possible situation from the point of view of the kinetic energy. Therefore, any change of state must necessarily increase the kinetic energy. This increase is, however, overwhelmed by the gain in the potential energy.
Thus, a BCS superconductor becomes a superconductor despite the increase in the kinetic energy. In contrast, if we consider the transition to a superconducting state from a state in which the kinetic energy is not diagonal, the driving mechanism can be the saving in the kinetic energy. The interlayer mechanism capitalizes on the possibility that the $c$-axis kinetic energy is frustrated in a non-Fermi liquid. The question we ask is whether or not this frustration is relieved in the superconducting state, and whether or not the phenomenology of the cuprates supports this theory. How should we view the crossover from two to three dimensions in cuprate superconductors? In particular, what is the role of the fluctuations of the phase of the superconducting order parameter? It will be shown that the issues involving phase fluctuations are separate from the issues involving the microscopic superfluid stiffness. A striking characteristic of the interlayer mechanism is that the coupling between the layers can significantly enhance this stiffness, which is nearly impossible in a conventional BCS superconductor. The phase fluctuations should, however, be similar to those in a conventional superconductor. The prospect of unifying the concepts of phase fluctuations with the concepts of interlayer tunneling then becomes apparent. How can we test the change of the $c$-axis kinetic energy? This question is answered using a powerful sum rule for the $c$-axis conductivity, which, it will be shown, leads to the resolution of an apparent paradox posed by the optical measurements in these materials over the years. The paradox has been that the $c$-axis penetration depth estimated from the change in the kinetic energy alone is apparently the same as that obtained ignoring this change. \section{Superconductivity as a 2$D$ to 3$D$ crossover phenomenon} Superconductivity in cuprates can be viewed as a dimensional crossover between two (2$D$) and three (3$D$) dimensions.
This is an experimental fact. While the charge transport perpendicular to the CuO-planes in the normal state is indicative of an insulator, there is perfect coherence in the superconducting state. Dimensional crossovers are known even for classical statistical mechanical problems involving phase transitions, or for quantum statistical mechanical problems that can be effectively viewed in terms of order parameters with only classical fluctuations at finite temperatures. What, then, is different here? To answer this question, it is necessary to probe it more carefully. Phase transitions in classical statistical mechanics are independent of the kinetic energy; only the potential energy is relevant. The superconducting transition in BCS superconductors can be described by a classical complex order parameter theory, namely the Ginzburg-Landau theory; quantum mechanics determines merely the parameters of this model. Thus, the kinetic energy cannot play an explicit role in this phase transition. Low dimensional superconductors are known to exhibit considerable fluctuation effects at finite temperatures that are entirely classical in nature. In dimensions less than or equal to two, the fluctuations are so severe that the order parameter vanishes. In two dimensions, a topological phase transition to a superconducting state takes place, but with a vanishing order parameter\cite{Kosterlitz}. In the low temperature state, there is a finite superfluid density as determined from the current response, but no long range order. Imagine now that two dimensional planes are stacked to form a three dimensional superconductor.
Conventionally, this is described by the Lawrence-Doniach (LD) model\cite{LD}, which consists of the free energy functional \begin{eqnarray} {\cal F}&=&\sum_n\int d^2x \bigg[\alpha |\psi_n|^2+{1\over 2}\beta|\psi_n|^4\nonumber\\ &+&{\hbar^2\over 2m_{\rm ab}}\left|\nabla\psi_n\right|^2+{\hbar^2\over 2m_cd^2}\left|\psi_n-\psi_{n+1}\right|^2\bigg]\label{LDF}, \end{eqnarray} where $\alpha$, $\beta$, $m_{\rm ab}$, and $m_c$ are parameters that are in general temperature dependent. The order parameter in the plane labeled $n$, $\psi_n(x,y)$, is a function of the $2D$ coordinates $x$ and $y$. The bending energy in the $ab$-plane is expressed in terms of a gradient energy, but the energy in the perpendicular direction is written in its discrete form. This is correct, because although the coherence length in the planes is frequently much larger than the lattice spacing, it is not so in the perpendicular direction, and therefore the continuum limit cannot be taken in this direction. The minimization of this functional determines the order parameter in mean field theory, but to incorporate fluctuations it is necessary to integrate over all possible order parameter configurations in the partition function. This is emphatically a classical model\cite{foot2}. What determines the coupling between the layers? It is argued that this is due to the Josephson effect\cite{Tinkham}. Assume for the moment that the magnitude of the order parameter is independent of $n$, $\psi_n=|\psi|e^{i\phi_n}$. Then, since $|\psi_n-\psi_{n+1}|^2=2|\psi|^2[1-\cos(\phi_n-\phi_{n+1})]$, the last term in Eq.~(\ref{LDF}) is \begin{equation} {\hbar^2|\psi|^2\over m_cd^2}\left[1-\cos(\phi_n-\phi_{n+1})\right]\ge 0. \label{cosphi} \end{equation} This coupling can represent the Josephson effect only close to $T_c$, where the Josephson coupling energy is indeed proportional to the square of the order parameter in a conventional superconductor, while it is only proportional to the magnitude of the order parameter as $T\to 0$\cite{Chak1}.
This is not terribly disturbing because the Ginzburg-Landau functional is only supposed to be valid close to $T_c$. But it must be remembered that there is {\em no} LD model at low temperatures for conventional superconductors, which is a frequently misunderstood point\cite{Leggett}. In contrast, the Josephson effect between two superconductors with non-Fermi liquid normal states can be recast in the language of the LD model\cite{Chak1}. In mean field theory, the free energy functional is minimized by setting the order parameter to be the same everywhere, that is, both its magnitude and phase. If we apply this theory to Eq.~(\ref{LDF}), we are back to uncoupled layers and no enhancement of the mean field transition temperature, $T_c^0$. Sometimes, an enhancement is claimed, which is merely the result of considering the functional in Eq.~(\ref{LDF}) in which the coupling between the layers is taken to be $-(\psi_n^*\psi_{n+1}+ {\rm c. c.})$ instead of $|\psi_n-\psi_{n+1}|^2$. For a conventional superconductor this is incorrect, because the Josephson energy in that case is proportional to $[1-\cos(\phi_n-\phi_{n+1})]$ instead of $-\cos(\phi_n-\phi_{n+1})$. The only way to enhance $T_c^0$ would be to change the parameters of the LD model appropriately. This is difficult to achieve in a BCS superconductor, because the density of states is changed very little by the small hopping matrix elements of the electrons between the layers of a highly anisotropic superconductor. So, what does the coupling between the layers do? It can suppress phase fluctuations by coupling the phases of the layers to raise the true $T_c$ closer to $T_c^0$. Note that, in general, the true $T_c$ is less than $T_c^0$. Thus, a dimensional crossover is driven by suppressing phase fluctuations, and unless the individual two dimensional layers have a high $T_c^0$, we gain little by suppressing phase fluctuations. 
Of course, phase coherence will be established in all three directions, and those properties that depend on this coherence will certainly be affected. The actual increase of the true transition temperature due to interlayer coupling can be easily estimated from the $XY$-model\cite{Kosterlitz}. A prominent feature of the interlayer tunneling theory is that $T_c^0$ can be enhanced by the coupling between the layers. Phrased in the language of the LD model, it means that the parameters of this model can be changed substantially. The phase fluctuations should, however, be similar to those in a conventional superconductor\cite{Kivelson}. \section{$c$-axis conductivity sum rule} For simplicity, consider a model\cite{Andersen1} in which the microscopic Hamiltonian expressing hopping of electrons along the $c$-axis is \begin{equation} H_c=-t_{\perp}\sum_{jl,s}c^{\dagger}_{jl,s}c_{jl+1,s}+{\rm h. c.}, \end{equation} where the label $j$ refers to the sites of the two-dimensional plane, $l$ refers to the layer index, and $s$ refers to spin; $c^{\dagger}_{jl,s}$ is the electron creation operator. In this section, I focus only on single layer materials for which all CuO-planes are equivalent, such as LSCO, Tl2201, Hg1201, and Bi2201. In the next section, I shall also touch upon multilayer materials. One can now derive a sum rule\cite{Shastry}. First, the frequency and the wavevector dependent $c$-axis conductivity can be written as \begin{equation} \sigma^c(q_c,\omega,T)=-{1\over Ad}\left({ed\over \hbar}\right)^2 \frac{\langle -H_c(T)\rangle-\Lambda^c_{\rm ret}(q_c,\omega,T)}{i(\omega+i\delta)}, \end{equation} where $q_c$ is the momentum transfer perpendicular to the plane, $A$ is the two-dimensional area and $d$ is the separation between the layers. The retarded current-current commutator is $ \Lambda^c_{\rm ret}(l,t,T)=-i\theta(t) \langle [j_{H}^c(l,t),j_{H}^c(0,0)]\rangle. 
$ The paramagnetic current operator is defined by $ j^c(l)=it_{\perp}\sum_{j,s}(c^{\dagger}_{jl,s}c_{jl+1,s}- {\rm h. c.}), $ and the corresponding Heisenberg operator, $j^c_{H}$, is defined with respect to the full interacting Hamiltonian. The averages refer to the thermal averages and $ \langle H_c(T)\rangle = -t_{\perp}\sum_{j,s}\langle c^{\dagger}_{jl,s}c_{jl+1,s}+{\rm h. c.}\rangle. $ For optical conductivity, one may set $q_c=0$, and then, noting that the retarded current-current commutator is analytic in the upper half of the complex $\omega$-plane, we arrive at the $c$-axis conductivity sum rule \begin{equation} \int_{-\infty}^{\infty}d\omega\ {\rm Re}\ \sigma^c(\omega,T)={\pi e^2 d^2\over \hbar^2 Ad}\langle -H_c(T)\rangle, \label{srule} \end{equation} which is a variant of the well-known $f$-sum rule\cite{Martin}. Note that it is necessary that the integral runs between the limits $-\infty$ and $\infty$ to arrive at this sum rule. There are a number of noteworthy points. \begin{itemize} \item It was argued by Kohn\cite{Kohn} that the $f$-sum rule does not hold in a metal, because the unbounded position operator is not a valid Hermitian operator. Indeed, all derivations of this sum rule involving the position operator in an extended system do look suspicious. However, the $f$-sum rule {\em is} satisfied\cite{Shastry}. The reason is that the $f$-sum rule can be derived by introducing the exponential operator $e^{i{\bf q\cdot x}}$ and then taking the limit ${\bf q}\to 0$. Of course, there is no such sum rule if the interaction itself is velocity dependent. \item On occasions, this sum rule is written with finite limits, assumed to be set by some interband gap. This is incorrect\cite{foot3}. \item The right hand side of the sum rule is the average of the single particle hopping Hamiltonian. This may be deceptive, because the average is taken in the fully interacting system: it is the true interacting kinetic energy. \item The sum rule is satisfied at any temperature $T$.
\item The absence of Galilean invariance on a lattice allows the charge carrying effective mass to vary with temperature and interaction. In the continuum limit, such that $d\to 0$ with $t_{\perp}d^2$ held fixed, the right hand side of Eq.~(\ref{srule}) is ${\pi n e^2\over m}$, where ${\hbar^2\over 2m} = t_{\perp}d^2$, and $n$ is the density of electrons in the planes. In this limit, interactions cannot renormalize the effective mass because the current operator commutes with the Hamiltonian. \end{itemize} We shall now put this sum rule to good use. For a superconductor, we can write quite generally \begin{equation} {\rm Re}\ \sigma^{cs}(\omega,T)=D_c(T)\delta(\omega)+{\rm Re}\ \sigma^{cs}_{\rm reg}(\omega,T). \end{equation} The first term signifies the lossless flow of electrons in the superconducting state, while the second is the regular (nonsingular) part of the optical conductivity. The normal state optical conductivity is nonsingular, so the sum rule can be cast into a more useful form: \begin{eqnarray} D_c(T)&=&\int_0^{\infty}d\omega \bigg[{\rm Re}\ \sigma^{cn}(\omega,T)-{\rm Re}\ \sigma^{cs}_{\rm reg}(\omega,T)\bigg]\nonumber\\ &+&{\pi e^2 d^2\over 2 Ad \hbar^2}\bigg[\langle -H_c(T)\rangle_s-\langle -H_c(T)\rangle_n\bigg]. \label{srule2} \end{eqnarray} If the $c$-axis kinetic energy is unchanged between the normal and the superconducting states, as it should be in a conventional layered superconductor, we recover a variant of the Ferrell-Glover-Tinkham sum rule\cite{Tinkham}. The missing area between the $c$-axis conductivities of the normal and the superconducting states is proportional to the $c$-axis superfluid density. Frequently, the sum rule in Eq.~(\ref{srule2}) is not meaningfully applied to high temperature superconductors. Instead of the true sum rule in Eq.~(\ref{srule2}), the following missing area is considered: \begin{equation} D_c'(T)=\int_0^{\infty}d\omega [{\rm Re}\ \sigma^{cn}(\omega,T_c)-{\rm Re}\ \sigma^{cs}_{\rm reg}(\omega,T)].
\label{Dcprime} \end{equation} Under what conditions can $D_c'(T)$ be related to the true $c$-axis penetration depth? It must be assumed that the $c$-axis kinetic energy is the same for the normal and the superconducting states and independent of temperature, with the implicit assumption that the normal state conductivity would change very little for all temperatures $T\le T_c$ if superconductivity could be suppressed. For a conventional superconductor, these assumptions are justified, but not for cuprates. First, the change in the $c$-axis kinetic energy is strikingly evident. Second, the $c$-axis resistivity is generically semiconducting and strongly temperature dependent, at least in the underdoped and optimally doped regimes\cite{Batlogg}. This temperature dependence should persist if superconductivity could be suppressed, say by applying a magnetic field, and therefore the equality of the conductivity at $T_c$ and those at $T\le T_c$ cannot be assumed. For LSCO, this has been demonstrated experimentally\cite{Ando}. Even the $ab$-plane resistivity was found to be insulating and temperature dependent once superconductivity was suppressed by applying strong magnetic fields. These experiments bring into question theories that are based on the assumption that the $T=0$ state is metallic\cite{Graf}. Therefore, we conclude that the consideration of $D_c'(T)$ presupposes an answer to the interesting question ``Do electrons change their $c$-axis kinetic energy upon entering the superconducting state?" On general grounds, there is little we can say about ${\rm Re}\ \sigma^{cn}(\omega,T)$ for $T\le T_c$. How do we overcome this impasse? To answer this question, consider the sum rule at zero temperature, which can be restated as \begin{equation} D_c(0)\ge {\pi e^2 d^2\over 2 Ad\hbar^2}\left[\langle -H_c(0)\rangle_s-\langle -H_c(0)\rangle_n\right]. \end{equation} I have assumed that the integral in Eq.~(\ref{srule2}) is positive definite.
This could be a strict inequality, although I cannot find a rigorous argument. One can see, however, that at very high frequencies the two conductivities should approach each other, and, at low frequencies, $\sigma^{cn}_{\rm reg}(\omega,T=0)\ge \sigma^{cs}_{\rm reg}(\omega,T=0)$, if the superconducting state is at least partially gapped. If the experiments of Ando {\em et al.}\cite{Ando} are taken as an indication, the system, at $T=0$, is insulating along the $c$-axis. It is plausible, therefore, that the integral in Eq.~(\ref{srule2}) is smaller than what one would have guessed for metallic conduction along the $c$-axis. This is because the frequency dependent $c$-axis conductivity in a non-Fermi liquid is expected to vanish as a power law, in contrast to the Drude behavior. If this is indeed true, we can make the approximation \begin{equation} D_c(0)\approx{\pi e^2 d^2\over 2 Ad\hbar^2}\left[\langle -H_c(0)\rangle_s-\langle -H_c(0)\rangle_n\right]. \end{equation} Defining $n_s^c(0)$ by $ D_c(0)={\pi n_s^c(0) e^2\over 2m}, $ and $\delta T$ by $\delta T=\left[\langle -H_c(0)\rangle_s-\langle -H_c(0)\rangle_n\right]$, we get the simple equation \begin{equation} {\hbar^2 n_s^c(0)\over m d^2}\approx{\delta T\over Ad}. \label{ILT} \end{equation} The precise definition of the mass, $m$, is irrelevant because the penetration depth depends only on $D_c(0)$, that is, only on the combination ${n_s^c(0)e^2\over m}$. The left hand side of Eq.~(\ref{ILT}) is of the order of the confinement kinetic energy of a particle in a one-dimensional potential well of width $d$, consistent with the uncertainty principle. The $c$-axis penetration depth is given by\cite{foot6} \begin{equation} {1\over\lambda_c^2(0)}={8D_c(0)\over c^2}, \end{equation} where $c$ is the velocity of light. Therefore, it satisfies the inequality \begin{equation} \lambda_c(0)\le {\hbar c\over e d}{1\over\sqrt{4\pi(\delta T/Ad)}}.
\end{equation} If we replace $(\delta T/Ad)$ by the condensation energy $U$ of the electrons per unit cell per CuO-layer, including both spin orientations, and replace the inequality by the equality, we arrive at the approximate expression \begin{equation} \lambda_c(0)\approx {\hbar c\over e d}{1\over\sqrt{4\pi U}}\label{lambdac}, \end{equation} which is twice as large as that of Anderson\cite{Anderson2}. While Anderson equates the condensation energy to the Josephson coupling energy, $E_c$, I have equated it to the change in the kinetic energy. I believe that this is more appropriate because there can be situations in which the condensation energy of the superconductor is not derived from the change in the kinetic energy, but $E_c$ is finite---conventional Josephson effect, for example. Anderson\cite{Anderson2} has observed that $\lambda_c$ calculated from the procedure outlined above agrees well with the measured values. Actually, my expression for $\lambda_c$ in Eq.~(\ref{lambdac}) is a factor of 2 larger, but this may not be significant at this time, given the uncertainties involved in extracting the condensation energy from the measured specific heat\cite{Loram}. In LSCO, the $c$-axis reflectivity exhibits a striking plasma edge in the superconducting state whose position is readily determined\cite{Uchida}. As there is little ambiguity in the measured background dielectric constant, which is approximately 25, the penetration depth can be easily read off from the plasma edge. In contrast, the analysis based on the missing area, if not properly carried out, will be flawed\cite{foot5}. For all doping, the agreement found by Anderson is good. It is also reassuring to note that the penetration depth measured from the plasma edge is in agreement with the microwave measurements\cite{Shibauchi}. For the single layer Hg1201, the condensation energy is not known from experiments. Anderson estimated it from the assumption that it is proportional to $T_c^2$. 
This yields a penetration depth in good agreement with experiments\cite{Cooper}. It must be remembered, however, that this estimate is subject to a greater uncertainty. The fly in the ointment is the measurement of Moler {\em et al.}\cite{KAM} in the single layer Tl2201. The measured penetration depth is almost a factor of 20 too large\cite{Anderson2}. Given the similarities between Tl2201 and Hg1201, this is surprising. However, the $c$-axis resistivity of Tl2201 is very anomalous; not only does it not show insulating behavior, but it is linear in its temperature dependence; the magnitude of the resistivity near $T_c$ is, however, enormous. In addition, the material chemistry of Tl2201 is quite curious. The optimally doped materials contain significant interstitial oxygen defects between the two TlO planes, but more surprisingly, they also contain sizable Cu substitution at the Tl site\cite{Jorgensen}. It may be that there are metallic shorts connecting the CuO planes. Thus, it is unclear whether this measurement reflects the true penetration depth of this material. The material chemistry of Hg1201 appears to be somewhat different\cite{Jorgensen}. It is interesting that the same sum rule can be turned on its ear to argue that conventional explanations of $\lambda_c$ are implausible. In Fermi liquid based theories, the change in the $c$-axis kinetic energy must be zero. The penetration depth is then \begin{equation} \lambda_c(0)={c\over \left(8\int_0^{\infty}d\omega [{\rm Re}\ \sigma^{cn}(\omega,0)-{\rm Re}\ \sigma^{cs}_{\rm reg}(\omega,0)]\right)^{1/2}}. \end{equation} As argued above, the integral in the denominator on the right-hand side is likely to be small. Consequently, the penetration depth obtained from this formula is likely to be too large to agree with experiments in LSCO and Hg1201. Note that this sum rule argument is independent of any microscopic details.
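A rough numerical sanity check of Eq.~(\ref{lambdac}) can be coded in a few lines. This is only a sketch: the interlayer spacing, the in-plane lattice constant, and the condensation energy per cell per layer used below are assumed, illustrative values for an LSCO-like material, not numbers quoted in this paper.

```python
import math

# Fundamental constants in Gaussian (cgs) units
HBAR = 1.0546e-27   # erg s
C    = 2.9979e10    # cm/s
E_CH = 4.8032e-10   # electron charge, esu
K_B  = 1.3807e-16   # erg/K

def lambda_c(d, U):
    """Eq. (lambdac): lambda_c(0) = (hbar*c/(e*d)) / sqrt(4*pi*U),
    with U the condensation energy per unit volume (erg/cm^3)."""
    return (HBAR * C / (E_CH * d)) / math.sqrt(4.0 * math.pi * U)

# Assumed, illustrative LSCO-like numbers:
d      = 6.6e-8            # interlayer spacing, cm
a_cell = 3.8e-8            # in-plane lattice constant, cm
u_cell = 1.0 * K_B         # ~1 K of condensation energy per cell per layer
U      = u_cell / (a_cell**2 * d)   # energy density, erg/cm^3

lam = lambda_c(d, U)       # ~2e-4 cm, i.e. a couple of microns
```

A condensation energy of the order of a kelvin per cell per layer thus lands $\lambda_c(0)$ in the micron range, the ballpark relevant to the comparison with experiments discussed above.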
\section{Interlayer enhancement of the mean field transition temperature} In this section we return to the enhancement of $T_c^0$ in multilayer materials to compare against the observed systematics, providing further support for the theory. The inequality derived in the previous section needs only small modifications. The idea is simple. The coupling between the layers was set by the energy scale ${\hbar^2 n_s^c(0)\over m d^2}$, but now we have to distinguish between the coupling between the close layers and the coupling between the distant layers. Let us define them to be $g_{\perp}$ and $g_{\perp}'$, respectively. Strictly speaking this is a simplification, because the tunneling matrix elements between the various layers in a unit cell and those in the neighboring cell are not all the same. Imagine that not only the true transition temperature, including fluctuations, but even the $T_c^0$ of an individual layer is very small. I now show that if the coupling between the layers is included, the increase in $T_c^0$ is negligible in BCS theory. In contrast, the interlayer mechanism leads to a striking enhancement of $T_c^0$. The mean field equation for single layer materials, due to interlayer coupling, is \begin{equation} 2g_{\perp}'\chi_{\rm in-plane}(T_c^0)=1, \label{1layer} \end{equation} where $\chi_{\rm in-plane}$ is the in-plane pair susceptibility. For an $n$-layer material, $n\ge 2$, the mean field equation is \begin{equation} {2(n-1)g_{\perp}+2ng_{\perp}'\over n}\chi_{\rm in-plane}(T_c^0)=1. \label{2layer} \end{equation} We must now determine $\chi_{\rm in-plane}(T)$. Our knowledge of the scale of the coupling energy is insufficient for this purpose\cite{foot4}. To apply the mean field argument, it is necessary to know the nature of the coupling between the planes. In particular, it is necessary to know whether the Josephson pair tunneling Hamiltonian is diagonal in the parallel momentum.
One of the striking aspects of the interlayer tunneling theory is that it is approximately diagonal in the parallel momentum\cite{Anderson1,Chakravarty2}. So, the $\chi_{\rm in-plane}$ to be substituted in these mean field equations must correspond to the momentum for which this susceptibility is the largest. To see the striking difference caused by this assumption, it is enough to consider the BCS pair susceptibility. If the coupling Hamiltonian is not diagonal in the parallel momentum, it is the momentum-integrated pair susceptibility that is relevant, which, however, is only logarithmically divergent because \begin{eqnarray} \chi_{\rm BCS}(T)&=&N(0)\int_0^{\omega_c}{d\varepsilon\over \varepsilon}\tanh({\varepsilon\over 2T}), \nonumber \\ &=&N(0)\ln(1.14\omega_c/2T), \end{eqnarray} where $N(0)$ is the density of states at the Fermi energy and $\omega_c$ is a cutoff of the order of the Debye energy. When substituted in Eqs. (\ref{1layer}, \ref{2layer}), the enhancement of $T_c$ is negligible because $N(0)g'_{\perp}$ and $N(0)g_{\perp}$ are small compared to unity. In contrast, if the Josephson pair tunneling is diagonal in the parallel momentum, it is the maximum susceptibility on the Fermi surface that is relevant. As $\varepsilon\to 0$, \begin{equation} \chi_{\rm BCS}(\varepsilon\to 0,T)={1\over 2T}, \end{equation} which diverges faster as $T\to 0$. When this susceptibility is substituted into Eqs. (\ref{1layer}) and (\ref{2layer}), it gives rise to far greater enhancements of the transition temperature. It can be shown from simple models of a non-Fermi liquid that the largest in-plane pair susceptibility (not momentum integrated) remains of the same order\cite{Yin}, that is, it is $a/T$, for $T$ close to $T_c^0$, where $a$ is a number of order unity.
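Equations (\ref{1layer}) and (\ref{2layer}), with $\chi_{\rm in-plane}\sim a/T$, fix how the enhancement grows with the number of layers: the coefficient of the $g_{\perp}$ term in $T_{cn}^0-T_{c1}^0$ is $2(n-1)/n$. A minimal sketch (the couplings themselves are left symbolic):

```python
from fractions import Fraction

def tc_enhancement_coeff(n):
    """Coefficient of the g_perp term in T_cn^0 - T_c1^0 for an n-layer
    material, read off from Eq. (2layer): 2(n-1)/n."""
    return Fraction(2 * (n - 1), n)

coeffs = [tc_enhancement_coeff(n) for n in (1, 2, 3, 4)]
# n = 1, 2, 3, 4 give 0, 1, 4/3, 3/2; the coefficient tends to 2 as n grows
```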
Then, the mean field transition temperatures $T_{c1}^0$, $T_{c2}^0$, $T_{c3}^0$, $T_{c4}^0$, \ldots, for one, two, three, four, \ldots layer materials are given by $ T_{c1}^0={2g'_{\perp}\over a}$, $T_{c2}^0=T_{c1}^0+{g_{\perp}\over a}$, $T_{c3}^0=T_{c1}^0+{4\over 3}{g_{\perp}\over a}$, $T_{c4}^0=T_{c1}^0+{3\over 2}{g_{\perp}\over a}$, etc.; the sequence $1, 4/3, 3/2, \ldots$ converges to 2. This pattern of systematic enhancement of the transition temperatures of the multilayer materials is in accord with experiments. \section{Conclusion} The purpose of this paper has been to present some very general arguments in favor of the interlayer tunneling theory. There are two significant outcomes of the present paper. The first concerns the nature of the 2$D$ to 3$D$ crossover in the superconducting state. It was shown that the interlayer tunneling mechanism can enhance the microscopic superfluid stiffness in a way that is not possible in conventional theories. This stiffness sets the scale at which the amplitude of the order parameter forms. On general grounds, it is difficult to settle whether or not phase fluctuations are important; experimental evidence on this question appears to be mixed. If, however, they are assumed to be present, as has been argued by Emery and Kivelson\cite{Kivelson}, they can be included by combining interlayer tunneling theory with the Lawrence-Doniach model. The pseudogap observed in the underdoped materials in that case would be the superconducting gap calculated within the interlayer tunneling theory, while the true $T_c$ will be determined by the phase fluctuations. The second outcome of the present paper is the resolution of the paradoxical interpretations of the $c$-axis optical measurements, universally evident in the literature.
On the one hand, it appeared that one could obtain the correct estimates of the $c$-axis penetration depths only from the change in the kinetic energy of the electrons as they enter the superconducting state\cite{Anderson2}; on the other hand, the same results were apparently obtained from the $c$-axis conductivity sum rule ignoring the change in the kinetic energy\cite{Uchida}. The resolution is that, until now, the sum rule has not been meaningfully applied to high temperature superconductors. The paradox disappears with the correct interpretation of the sum rule. The evidence for the change in the electrons' kinetic energy, an essential element of the interlayer tunneling theory, appears to be strong in LSCO, reasonably convincing in Hg1201, and nonexistent in Tl2201 on the basis of the recent measurements\cite{KAM}. In regard to Tl2201, important materials questions remain. Future measurements of the $c$-axis optical conductivity and the penetration depth in both single and multilayer materials will be valuable. In particular, I would like to suggest that these experiments be carried out on the single layer Bi2201, which in many respects is as anomalous as its high-$T_c$ cousins. If possible, the optical measurements should be carried out in the presence of a magnetic field necessary to suppress superconductivity. In this low $T_c$ material, the required magnetic field should be considerably smaller than in the experiments of Ando {\em et al.}\cite{Ando} Moreover, the normal state can be followed and measured more precisely down to lower temperatures. \section{Acknowledgement} I thank P. W. Anderson, D. J. Scalapino, and especially D. Basov, S. Kivelson, and K. A. Moler for discussions. This work was supported by a grant from the National Science Foundation: DMR-9531575.
\section{Introduction} If a satellite is located at a planetocentric position $\mbox{{\boldmath $\vec r$}}$, it generates a tidal bulge that either advances or retards the satellite motion, depending on the interrelation between the planetary spin rate $\omega_p$ and the tangential part of the satellite's velocity $\mbox{{\boldmath $\vec v$}}$ divided by $r\equiv|\mbox{{\boldmath $\vec r$}}|$. It is convenient to imagine (as in Fig.~\ref{fig-JGR.eps}) that the bulge emerges beneath a fictitious satellite located at \begin{eqnarray} \mbox{{\boldmath $\vec r$}}_f\,=\,\mbox{{\boldmath $\vec r$}}\,+\,\mbox{\boldmath $\vec {\boldmath{\,f}}$}\;\;\;, \label{1} \label{401} \end{eqnarray} \begin{figure} \includegraphics[height=.42\textheight, ]{fig-JGR.eps} \caption{\small A planet and a tide-raising moon. This picture illustrates the case of a satellite located below the synchronous orbit, so that its mean motion $\,n\,$ exceeds the planet's spin rate $\,\omega_p\,$, and the tidal bulge is lagging. The angular lag defined as $\,\delta\,\equiv\, {\textstyle{|\mbox{{\boldmath $\vec f$}}|}}/{\textstyle{r}}\,=\,\frac{\textstyle\Delta t}{ \textstyle{r}}\,|\,\mbox{{\boldmath $\vec \omega$}}_p \times\mbox{{\boldmath $\vec r$}}\;-\;\mbox{{\boldmath $\vec v$}}\,|\,$ will, generally, differ from the absolute value of the angle $\,\delta_1 \,$ subtended at the planet's centre between the directions to the satellite and the bulge. Since in our study we consider an example with a small eccentricity and inclination, we make no distinction between $\,\delta\,$ and $\,|\delta_1|\,$. } \label{fig-JGR.eps} \end{figure} ~\\ \noindent where the position lag $\,\mbox{\boldmath $\vec {\boldmath{\,f}}$}\,$ is given by \begin{eqnarray} {\vec{\mbox{\it\textbf{f}}}}\;=\;\Delta t\;\left(\;\mbox{{\boldmath $\vec \omega$}}_p\times \mbox{{\boldmath $\vec r$}}\;-\;\mbox{{\boldmath $\vec v$}}\;\right)\;\;\;.
\label{2} \label{402} \end{eqnarray} $\Delta t\,$ is the time lag between the real and fictitious tide-generating satellites, and the inclination and eccentricity of the satellite are assumed sufficiently small. The fictitious satellite is merely a way of illustrating the time lag between the tide-raising potential and the distortion of the body. This concept implies no new physics, and is but a convenient metaphor employed to convey that at each instant of time the dynamical tide is modeled with a static tide where all the time-dependent variables are shifted back by $\,\Delta t\,$, i.e., (a) the moon is rotated back by $\,\mbox{{\boldmath $\vec v$}}\,\Delta t\,$, and (b) the attitude of the planet is rotated back by $\,\mbox{{\boldmath $\vec \omega$}}_p\,\Delta t\,$. From the viewpoint of a planet-based observer, this means that a dynamical response to a satellite located at $\,\mbox{{\boldmath $\vec r$}}\,$ is modeled with a static response to a satellite located at $\,\mbox{{\boldmath $\vec r$}}_f\,\equiv\,\mbox{{\boldmath $\vec r$}}\,-\, \Delta t\,(\mbox{{\boldmath $\vec v$}}\,-\,\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}})\,$. In this paper, we intend to dwell on geophysical issues -- the frequency-dependence of the attenuation rate and its consequences. Hence, to avoid unnecessary mathematical complications, in the subsequent illustrative example we shall restrict ourselves to the simple case of a tide-raising satellite on a near-equatorial near-circular orbit.
In this approximation, the velocity of the satellite relative to the surface is \begin{eqnarray} |\,\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}~-~\mbox{{\boldmath $\vec v$}}\,|~=\;r\;|\,\omega_p\,-\,n\,|\;\;\;, \label{4} \label{404} \end{eqnarray} the principal tidal frequency is \begin{eqnarray} \chi\;=\;2\;|\,\omega_p\,-\,n\,| \;\;\;\;, \label{5} \label{405} \end{eqnarray} and the angular lag is \begin{eqnarray} \delta\;=\;\frac{\Delta t}{r}\;|\,\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}\;-\; \mbox{{\boldmath $\vec v$}}\,|\;=\;\frac{\Delta t}{2}\;\chi\;\;\;, \label{6} \label{406} \end{eqnarray} $n\,$ being the satellite's mean motion, and $\,\mbox{{\boldmath $\vec \omega$}}_p\,$ being the planet's spin rate. The factor of two emerges in (\ref{5}) since the moon causes two elevations on the opposite sides of the planet. It will also be assumed that $\;\chi \Delta t\,\ll \,1\;$, for which reason we shall neglect the second-order difference between the expression (\ref{6}) and the angle subtended at the planet's centre between the moon and the tidal bulge (rigorously speaking, the sine of the subtended angle is equal to $\;\mbox{{\boldmath $\vec f$}}\, \times\,\mbox{{\boldmath $\vec r$}}/r^2\;$). The starting point of all tidal models is that each elementary volume of the planet is subject to a tide-raising potential, which in general is not periodic but can be expanded into a sum of periodic terms. Within the linear approximation introduced by Love, the tidal perturbations of the potential yield a linear response of the shape and linear variations of the stress.
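To fix ideas, Eqs. (\ref{404})--(\ref{406}) are easily evaluated for a Mars-Phobos-like configuration. The rotation and orbital periods below are rounded, and the time lag $\Delta t$ is a made-up illustrative number, not a fitted value:

```python
import math

def mean_motion(period_seconds):
    # angular rate corresponding to a given period
    return 2.0 * math.pi / period_seconds

# Rounded, illustrative periods:
omega_p = mean_motion(24.62 * 3600.0)  # planetary spin rate, rad/s
n       = mean_motion(7.66 * 3600.0)   # satellite mean motion, rad/s

chi   = 2.0 * abs(omega_p - n)         # principal tidal frequency, Eq. (405)
dt    = 40.0                           # assumed time lag Delta t, s
delta = 0.5 * dt * chi                 # angular lag, Eq. (406)
```

For these numbers $\chi\Delta t\approx 10^{-2}\ll 1$, so the small-lag assumption made above is comfortably satisfied.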
In extension of the linearity approximation, it is always implied that the overall dissipation inside the planet may be represented as a sum of attenuation rates corresponding to each periodic disturbance: \begin{eqnarray} \langle\,\dot{E}\;\rangle\;=\;\sum_{i}\;\langle\, \dot{E}(\chi_{\textstyle{_i}})\;\rangle \label{407} \end{eqnarray} where, at each frequency $\,\chi_i\,$, \begin{eqnarray} \langle\,\dot{E}(\chi_{\textstyle{_i}})~\rangle~=~-~2~\chi_{\textstyle{_i}}~ \frac{\,\langle\,E(\chi_{\textstyle{_i}})~\rangle\,}{Q(\chi_{\textstyle{_i}}) }~=\,\;-\;\chi_{\textstyle{_i}}\;\frac{\,E_{_{peak}}(\chi_{\textstyle{_i}}) \,}{Q(\chi_{\textstyle{_i}})}\;\;\;, \label{408} \end{eqnarray} $\langle\,.\,.\,.\,\rangle~$ standing for averaging over a flexure cycle, $\,E(\chi_{\textstyle{_i}})\,$ denoting the energy of deformation at the frequency $\,\chi_{\textstyle{_i}}\,$, and $Q(\chi_{\textstyle{_i}})\,$ being the quality factor of the material at this frequency. Introduced empirically as a means to figleaf our lack of knowledge of the attenuation process in its full complexity, the notion of $\,Q\,$ has proven to be practical due to its smooth and universal dependence upon the frequency and temperature. An alternative to the employment of the empirical $\,Q\,$ factors would be comprehensive modeling of dissipation using a solution of the equations of motion, given a rheological description of the mantle (Mitrovica \& Peltier 1992; Hanyk, Matyska \& Yuen 1998, 2000; Moore \& Schubert 2000). Though of much value to a geophysicist, this comprehensive approach may be avoided in astronomy, where only the final outcome, the frequency dependence of $\,Q\,$, is important. Fortunately, this dependence is already available from observations. In this paper we shall restrict ourselves to the simple case of an equatorial or near-equatorial satellite describing a circular or near-circular orbit.
Under these circumstances only the principal tidal frequency (\ref{405}) will matter. \section{The quality factor $Q$ and the geometric lag angle $\delta $.} During tidal flexure, the energy attenuation through friction is, as ever, accompanied by a phase shift between the action and the response. The tidal quality factor is interconnected with the phase lag $\,\epsilon$ and the angular lag $\delta\,$ via \begin{eqnarray} Q^{-1}\;=\;\tan \epsilon\;=\;\tan 2\delta \label{420} \end{eqnarray} or, for small lag angles, \begin{eqnarray} Q^{-1}\;\approx\;\epsilon\;=\;2\;\delta\;\;\;.~~ \label{421} \end{eqnarray} The doubling of the lag is a nontrivial issue. Many authors erroneously state that $\,Q^{-1}\,$ is equal simply to the tangent of the lag, with the factor of two omitted. For example, Rainey \& Aharonson (2006) assume that $\,Q^{-1}\,$ is equal to the tangent of the geometric lag. As a result, they arrive at a value of $\,Q\,$ that is about twice as large as those obtained by the other teams. In Bills et al. (2005), one letter, $\,\gamma\,$, is used to denote two different angles. Prior to equation (24) in that paper, $\,\gamma\,$ signifies the {\emph{geometric}} lag (in our notations, $\,\delta_1\,$). Further, in their equations (24) and (25), Bills et al. employ the notation $\,\gamma\,$ to denote the {\emph{phase}} lag (in our notations, $\,\epsilon\,$, which happens to be equal to $\,2\,\delta_1\,$). With this crucial caveat, Bills' equation $\,Q\,=\,1/\tan\gamma\,$ is correct. This inaccuracy in notations has not prevented Bills et al. (2005) from arriving at a reasonable value of the Martian quality factor, $\,85.58\pm0.37\,$. (A more recent study by Lainey et al. (2007) has given a comparable value of $\,79.91 \pm 0.69\,$.) In the Appendix, we offer a simple illustrative calculation, which explains whence this factor of two stems.
Formulae (\ref{420} - \ref{421}) look reasonable: the higher the quality factor, the lower the damping rate and, accordingly, the smaller the lag. Far less satisfactory are the frequency dependences that ensue from asserting $\,\delta\,$ to be either constant or linear in frequency: the approach taken by Gerstenkorn (1955), MacDonald (1964), and Kaula (1964) implies that $\,Q\,\sim\,\chi^{0}\,$, while the theory of Singer (1968) and Mignard (1979, 1980) yields $\,Q\,\sim\,\chi^{-1}\,$, neither option being in agreement with the geophysical data. \section{Dissipation in the mantle.} \subsection{Generalities} Back in the 60s and 70s of the past century, when the science of low-frequency seismological measurements was still under development, it was widely thought that at long time scales the quality factor of the mantle is proportional to the inverse of the frequency. This fallacy proliferated into planetary astronomy where it was received most warmly, because the law $\,Q\,\sim\,1/\chi\,$ turned out to be the only model for which the linear decomposition of the tide gives a set of bulges displaced from the direction to the satellite by the same angle. Any other frequency dependence $\,Q(\chi)\,$ entails a superposition of bulges corresponding to the separate frequencies, each bulge being displaced by its own angle. This is the reason why the scaling law $\,Q\,\sim\,1/\chi\,$, long disproved and abandoned in geophysics (at least, for the frequency band of our concern), still remains a pet model in celestial mechanics of the Solar system. Over the past twenty years, considerable progress has been achieved in the low-frequency seismological measurements, both in the lab and in the field.
Due to an impressive collective effort undertaken by several teams, it is now a firmly established fact that {\it{for frequencies down to about $1\;$yr$^{-1}$ the quality factor of the mantle is proportional to the frequency to the power of a {\textbf{positive}} fraction}} $\,\alpha\,$. This dependence holds for all rocks within a remarkably broad band of frequencies: from several MHz down to about $\;1\;$yr$^{-1}\,$. At timescales longer than $\,1$ yr, all the way to the Maxwell time (about $\,100 $~yr), attenuation in the mantle is dominated by viscosity, so that the quality factor is, for all minerals, well approximated by $\,{\eta}\chi/M\,$, where $\eta$ and $M$ are the shear viscosity and the shear elastic modulus of the mineral. Although the values of both the viscosity coefficients and elastic moduli greatly vary for different minerals and are sensitive to the temperature, the overall quality factor of the mantle at such long timescales is still linear in frequency. At present there is no consensus in the seismological community in regard to the time scales exceeding the Maxwell time. One viewpoint (incompatible with the Maxwell model) is that the linear law $\,Q\,\sim\,\chi\,$ extends all the way down to the zero-frequency limit (Karato 2007).
An alternative point of view (prompted by the Maxwell model) is that at scales longer than the Maxwell time we return to the inverse-frequency law $\,Q\,\sim\,1/\chi\,$.\\ ~\\ All in all, we have:\\ \begin{eqnarray} \mbox{For}\;\;\;10^{7}\;\mbox{Hz}\;>\;\chi\;>\;1\;\mbox{yr}^{-1}:\;\;\; Q\,\sim\,\chi^{\alpha}\;,\;\;\mbox{with}\;\;\alpha\,=\,0.2\,-\,0.4\;\;\;(0.2\;\mbox{for partial melts})\;. \label{one} \label{422} \end{eqnarray} \begin{eqnarray} \mbox{For}\;\;\;1\;\mbox{yr}^{-1}\;>\;\chi\;>\;10^{-2}\;\mbox{yr}^{-1}:\;\;\;Q\,\sim\,\chi\;. \label{two} \label{423} \end{eqnarray} \begin{eqnarray} \mbox{For}\;\;\;10^{-2}\;\mbox{yr}^{-1}\;>\;\chi:\;\;\;\mbox{arguably, it is still}\;\;Q\,\sim\,\chi\;.\;\;\;(\mbox{Or maybe}\;\;Q\,\sim\,1/\chi\;?) \label{three} \label{424} \end{eqnarray} Fortunately, in practical calculations of tides {\emph{in planets}} one never has to transcend the Maxwell time scales, so the controversy remaining in (\ref{three}) bears no relevance to our subject. We leave for a future study the case of synchronous satellites, the unique case of the Pluto-Charon resonance, and the binary asteroids locked in the same resonance. Thus we shall also avoid the frequency band addressed in (\ref{two}), and shall be interested solely in the frequency range described in (\ref{one}). It is important to emphasise that the positive-power scaling law (\ref{one}) is well proven not only for samples in the lab but also for vast seismological basins and, therefore, is universal. Hence, this law may be extended to the tidal friction; the validity of this extension will be discussed below in subsection 3.4.1. Below we provide an extremely condensed review of the published data whence the scaling law (\ref{one}) was derived by the geophysicists.
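The three regimes (\ref{one})--(\ref{three}) amount to a piecewise frequency dependence; the toy function below records only the exponents, which is all that matters for the subsequent argument (the unresolved sub-Maxwell regime is returned as undefined):

```python
SECONDS_PER_YEAR = 3.156e7

def q_exponent(chi_hz, alpha=0.3):
    """Exponent beta in Q ~ chi**beta for the regimes of Eqs. (422)-(424);
    alpha = 0.2-0.4 in the seismic band."""
    chi_per_yr = chi_hz * SECONDS_PER_YEAR
    if chi_per_yr > 1.0:        # seismic band, up to ~10^7 Hz
        return alpha
    if chi_per_yr > 1.0e-2:     # viscous band, down to the Maxwell time
        return 1.0
    return None                 # beyond the Maxwell time: controversial
```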
The list of sources will be incomplete, but a full picture can be obtained through the further references contained in the works to be quoted below. For a detailed treatment, see Chapter 11 of the book by Karato (2007) that contains a systematic introduction into the theory of and experiments on attenuation in the mantle. \subsection{Circumstantial evidence: attenuation in minerals.\\ Laboratory measurements and some theory} Even before the subtleties of solid-state mechanics with or without melt are brought up, the positive sign of the power $\,\alpha\,$ in the dependence $\,Q\, \sim\,\chi^{\alpha}\,$ may be anticipated on qualitative physical grounds. For a damped oscillator obeying $\;\ddot{z}\,+\,2\,\beta\,\dot{z}\,+\,\chi^2\,z\,=\,0\; \,$, the quality factor is equal to $\,\chi/(2\beta)\,$, i.e., $\,Q\,\sim\,\chi\,$. Solid-state phenomena causing attenuation in the mantle may be divided into three groups: the point-defect mechanisms, the dislocation mechanisms, and the grain-boundary ones. Among the point-defect mechanisms, most important is the transient diffusional creep, i.e., plastic flow of vacancies, and therefore of atoms, from one grain boundary to another. The flow is called into being by the fact that vacancies (as well as the other point defects) have different energies at grain boundaries of different orientation relative to the applied shear stress. This anelasticity mechanism is wont to obey the power law $\,Q\,\sim\,\chi^{\alpha}\,$ with $\,\alpha\,\approx\,0.5\,$. Anelasticity caused by dislocation mechanisms is governed by the viscosity law $\,Q\,\sim\,\chi\,$ valid for sufficiently low frequencies (or sufficiently high temperatures), i.e., when the viscous motion of dislocations is not restrained by the elastic restoring stress. (At higher frequencies and/or lower temperatures, the restoring force ``pins" the defects.
This leads to the law $\,Q\,\sim\,(\textstyle{1\,+\,\tau^2\chi^2})\tau^{-1}\chi^{-1}\,$, parameter $\,\tau\,$ being the relaxation time whose values considerably vary among different mechanisms belonging to this group. As the mantle is warm and viscous, we may ignore this caveat.) The grain-boundary mechanisms, too, are governed by the law $\,Q\sim\chi^{\alpha}$, though with a lower exponent: $\,\alpha\approx 0.2\,-\,0.3\,$. This behaviour gradually changes to the viscous mode ($\alpha=1$) at higher temperatures and/or at lower frequencies, i.e., when the elastic restoring stress reduces. We see that in all cases the quality factor of minerals should grow with frequency. Accordingly, laboratory measurements confirm that, within the geophysically interesting band of $\,\chi\,$, the quality factor behaves as $\,Q\,\sim\, \chi^{\alpha}\,$ with $\,\alpha\,=\,0.2\,-\,0.4\;$. Such measurements have been described in Karato \& Spetzler (1990) and Karato (1998). Similar results were reported in the works by the team of I. Jackson -- see, for example, the paper by Tan et al. (1997), where numerous earlier publications by that group are also mentioned. In aggregates with partial melt the frequency dependence of $\,Q\,$ keeps the same form, with $\,\alpha\,$ leaning toward $\,0.2\,$ -- see, for example, Fontaine et al. (2005) and references therein. \subsection{Direct evidence: attenuation in the mantle.\\ Measurements on seismological basins} As we are interested in the attenuation of tides, we should be prepared to face the possible existence of mechanisms that may show themselves over very large geological structures but not in small samples explored in the lab. No matter whether such mechanisms exist or not, we would find it safer to require that the positive-power scaling law $\,Q\,\sim\,\chi^{\alpha}\,$, even though well proven in the lab, must be propped up by direct seismological evidence gathered over vast zones of the mantle.
Fortunately, such data are available, and for the frequency range of our interest these data conform well with the lab results. The low-frequency measurements, performed by different teams over various basins of the Earth's upper mantle, agree on the pivotal fact: the seismological quality factor scales as the frequency to the power of a {\emph{positive}} fraction $\,\alpha\;$ -- see, for example, Mitchell (1995), Stachnik et al. (2004), Shito et al. (2004), and further references given in these sources.\footnote{~So far, Figure 11 in Flanagan \& Wiens (1998) is the only experimental account we know of, which only partially complies with the other teams' results. The figure contains two plots depicting the frequency dependencies of $\,1/Q_{shear}\,$ and $\,1/Q_{compress}\,$. While the behaviour of both parameters remains conventional down to $\,10^{-1}\,$Hz, the shear attenuation surprisingly goes down when the frequency decreases to $\,10^{-3}\,$Hz. Later, one of the authors wrote to us that ``{\emph{Both P and S wave attenuation becomes greater at low frequencies. The trend towards lower attenuation at the lowest frequencies in Fig. 11 is not well substantiated.}}" (D. Wiens, private communication) Hence, the consensus on (\ref{one}) stays.} \subsection{Consequences for the tides} \subsubsection{Tidal dissipation vs seismic dissipation} For terrestrial planets, the frequency-dependence of the $\,Q\,$ factor of bodily tides is similar to the frequency-dependence (\ref{one} - \ref{two}) of the seismological $\,Q\,$ factor. This premise is based on the fact that the tidal attenuation in the mantle is taking place, much like the seismic attenuation, mainly due to the mantle's rigidity. This is a nontrivial fact because, in distinction from earthquakes, the damping of tides takes place due both to the nonrigidity and to the self-gravity of the planet.
Modeling the planet with a homogeneous sphere of density $\,\rho\,$, rigidity $\,\mu\,$, surface gravity $\,\mbox{g}\,$, and radius $\,R\,$, Goldreich (1963) managed to separate the nonrigidity-caused and self-gravity-caused inputs into the overall tidal attenuation. His expression for the tidal quality factor has the form \begin{eqnarray} Q\;=\;Q_o\left(\;1\;+\;\frac{2}{19}\;\frac{\mbox{g}\;\rho\;R}{\mu}\;\right)\;\;\;, \label{Goldreich} \end{eqnarray} $Q_o\,$ being the value that the quality factor would assume were self-gravity absent. To get an idea of how significant the self-gravity-produced input could be, let us insert there the mass and radius of Mars and the rigidity of the Martian mantle. For the Earth's mantle, $\,\mu\,=\,65\,\div\, 80\,$GPa. Judging by the absence of volcanic activity over the past hundred(s) of millions of years of Mars' history, the temperature of the Martian upper mantle is (to say the least) not higher than that of the terrestrial one. Therefore we may safely approximate the Martian $\,\mu\,$ with the upper limit for the rigidity of the terrestrial mantle: $\,\mu\,=\,10^{11}\,$Pa. All in all, the relative contribution from self-gravity will look as \begin{eqnarray} \frac{2}{19}\;\frac{\mbox{g}\;\rho\;R}{\mu}\;=\;\frac{6}{76\,\pi}\;\frac{\gamma\;M^2}{\mu\;R^4}\;\approx\;\frac{1}{40}\;\frac{\left(6.7\times 10^{-11}\;\mbox{m}^3\,\mbox{kg}^{-1}\,\mbox{s}^{-2}\right)\left(6.4\times 10^{23}\;\mbox{kg}\right)^2}{\left(10^{11}\;\mbox{Pa}\right)\left(3.4\times 10^{6}\;\mbox{m}\right)^4}\;\approx\;5.2\times 10^{-2}\;\;,\;\;\;\; \end{eqnarray} $\gamma$ denoting the gravity constant. This very conservative estimate shows that self-gravitation contributes, at most, several percent to the overall count of energy losses due to tides.
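The arithmetic of this estimate is easily verified with the very numbers quoted above:

```python
import math

GAMMA = 6.7e-11   # gravity constant, m^3 kg^-1 s^-2
M     = 6.4e23    # mass of Mars, kg
MU    = 1.0e11    # assumed rigidity of the Martian mantle, Pa
R     = 3.4e6     # radius of Mars, m

# (2/19) g*rho*R/mu rewritten as (6/(76*pi)) * gamma*M^2 / (mu*R^4):
correction = (6.0 / (76.0 * math.pi)) * GAMMA * M**2 / (MU * R**4)
# evaluates to about 5.2e-2, i.e. a few percent
```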
This is the reason why we extend to the tidal $\,Q\,$ the frequency-dependence law measured for the seismic quality factor. \subsubsection{Dissipation in the planet vs dissipation in the satellite} A special situation is tidal relaxation toward the state where one body shows the same side to another. Numerous satellites show, up to librations, the same face to their primaries. Among the planets, Pluto does this to Charon. Such a complete locking is typical also for binary asteroids. A gradual approach toward the synchronous orbit involves ever-decreasing frequencies, eventually falling outside the limits of equation (\ref{two}) and thus the bounds of the present discussion. Mathematically, this situation may still be tackled by means of (\ref{one}) until the tidal frequency $\,\chi\,$ decreases to $\;1\,$yr$^{-1}$, and then by means of (\ref{two}) while $\,\chi\,$ remains above the inverse Maxwell time of the planet's material. Whether the latter law can be extended to longer time scales remains an open issue of a generic nature that is not related to a specific model of tides or to a particular frequency dependence of $\,Q\;$. The generic problem is whether we may use the concept of the quality factor beyond the Maxwell time at all, or should instead employ, beginning from some low $\,\chi\,$, a comprehensive hydrodynamical model. In the current work, we address solely the satellite-generated tides on the planet. The input from the planet-caused tides on the satellite will be considered elsewhere. The case of Pluto will not be studied here either. Nor shall we address binary asteroids. (Since at present most asteroids are presumed to be loosely connected, and since we do not expect the dependencies (\ref{one} - \ref{two}) to hold for such aggregates, our theory should not, without some alterations, be applied to such binaries.)
Thus, since we are talking only about dissipation inside the planet, and are not addressing the exceptional Pluto-Charon case, we may safely assume the tidal frequency to always exceed $\;1\,$yr$^{-1}$. Thence (\ref{one}) will yield, for a typical satellite: \begin{eqnarray} {\left.~~~~~~~~~\right.} Q\,\sim\,\chi^{\alpha}~~~,~~~~\mbox{with}~~\alpha~=~0.2~-~0.4~~~. \label{425} \end{eqnarray} Accordingly, (\ref{421}) will entail: \begin{eqnarray} {\left.~~~~~~~~~\right.} \delta~\sim~\chi^{-\alpha}~~~,~~~~\mbox{with}~~\alpha~=~0.2~-~0.4~~~. \label{426} \end{eqnarray} Another special situation is a satellite {\emph{crossing}} a synchronous orbit. At the moment of crossing, the principal tidal frequency $\,\chi\,= \,2\,|\omega_p\,-\,n|\,$ vanishes. Since (\ref{421}) and (\ref{one}) yield $\,Q\,\approx\,(2\delta)^{-1}\,$ and $\,Q\,\sim\,\chi^{\alpha}\,$, we get $\,\delta\,\sim\,\chi^{-\alpha}\,$ with a positive $\,\alpha\,$. Uncritical employment of these formulae would then suggest that at this instant the lag $\,\delta\,$ grows without bound, a clearly nonsensical result. The quandary is resolved through the observation that the bulge is lagging not only in its position but also in its height, for which reason the dissipation rate remains finite (Efroimsky 2007). Since in this paper we shall not consider crossing of or approach to synchronous orbits, and since the example we aim at is Phobos, we shall not go deeper into this matter here. \subsection{A thermodynamical aside: the frequency and the temperature} In the beginning of the preceding subsection we already mentioned that though the tidal $Q$ differs from the seismic one, both depend upon the frequency in the same way, because this dependence is determined by the same physical mechanisms. This pertains also to the temperature dependence, which, for a fundamental physical reason, combines with the frequency dependence into a single function.
As explained from basic physical principles by Karato (2007, 1998), the frequency and temperature dependencies of $\,Q\,$ are inseparably connected. The quality factor can, despite its frequency dependence, be dimensionless only if it is a function not just of the frequency {\emph{per se}} but of a dimensionless product of the frequency by the typical time of defect displacement. This time depends exponentially on the activation energy $A^*$, whence the resulting function is \begin{eqnarray} Q\;\sim\;\left[\,\chi\;\exp(A^*/RT)\,\right]^{\alpha}\;\;\;. \label{427} \end{eqnarray} For most minerals of the upper mantle, $A^*$ lies within the limits of $360-540$ kJ mol$^{-1}$. For example, for dry olivine it is about $520$ kJ mol$^{-1}$. Thus, through formulae (\ref{427}) and (\ref{421}), the cooling rate of the planet plays a role in the orbital evolution of satellites: the lower the temperature, the higher the quality factor and, thereby, the smaller the lag $\, \delta\,$. For the sake of a crude estimate, assume that most of the tidal attenuation is taking place in some layer, for which an average temperature $T$ and an average activation energy $A^*$ may be introduced. Then from (\ref{427}) we have: $\,\Delta Q/{Q}\,\approx\,-\,\alpha\,A^*\,\Delta T/(R\,T^{\,2})\,$. For a reasonable choice of values $\,\alpha=0.3\,$ and $\,A^*=\,5.4\,\times \,10^5\,$J/mol, a drop of the temperature from $\,T_o=2000\,$K by $\,200\,$K will result in $\,\Delta Q/Q\,\approx\,1\,$. So a $\,10\,\%\,$ decrease of the temperature can result in an about $\,100\,\%\,$ growth of the quality factor. Below we shall concentrate on the frequency dependence solely.
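A numerical sketch of this estimate, with the values quoted above (here `Rgas` is the gas constant):

```python
import math

alpha = 0.3
A     = 5.4e5      # activation energy, J/mol
Rgas  = 8.314      # gas constant, J/(mol K)
T0    = 2000.0     # initial temperature, K
dT    = -200.0     # temperature drop, K

# Linearized estimate:  dQ/Q ~ -alpha * A * dT / (Rgas * T^2)
dQ_over_Q = -alpha * A * dT / (Rgas * T0**2)

# Exact ratio Q(T0+dT)/Q(T0) following Q ~ exp(alpha * A / (Rgas * T)):
ratio = math.exp(alpha * A / Rgas * (1 / (T0 + dT) - 1 / T0))

print(dQ_over_Q, ratio)
```

The linearized estimate gives $\Delta Q/Q \approx 0.97$, i.e. the quoted $\approx 1$; note that the full exponential actually grows by a factor of about three over this temperature drop, so the linearization is itself conservative.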
\section{Formulae} The tidal potential perturbation acting on the tide-raising satellite is \begin{eqnarray} U(\delta_1)\;=~\frac{A_2}{r_{\textstyle{_f}}^5\;r^5}\;\left(\,3\;\left(\mbox{{\boldmath $\vec r$}}_f \cdot\mbox{{\boldmath $\vec r$}}\right)^2\;-\;\mbox{{\boldmath $\vec r$}}_f^{\;2}\;\mbox{{\boldmath $\vec r$}}^{\;2}\,\right)\;+ \;\frac{A_3}{r_{\textstyle{_f}}^6\;r^6}\;\left(\,5\;\left(\mbox{{\boldmath $\vec r$}}_f \cdot\mbox{{\boldmath $\vec r$}}\right)^2\;-\;3\;\mbox{{\boldmath $\vec r$}}_f^{\;2}\;\mbox{{\boldmath $\vec r$}}^{\;2}\,\right) \;+\;\ldots~~ \label{16} \end{eqnarray} where $\,r\,\equiv\,|\mbox{{\boldmath $\vec r$}}|\,$ and $\,r_f\,\equiv\,|\mbox{{\boldmath $\vec r$}}_f|\,$, while the constants are given by \begin{eqnarray} A_2\;\equiv\;\frac{k_2\,G\,m\,R^5}{2}\;\;\;\;,\;\;\;\;\;\;A_3\;\equiv\; \frac{k_3\,G\,m\,R^7}{2}\;\;\;\;,\;\;\;\;\;\;\ldots\;\;\;\;, \label{17} \end{eqnarray} $k_n\,$ being the Love numbers. (For derivation of (\ref{16}) see, for example, MacDonald (1964) and literature cited therein.) Three caveats will be appropriate at this point. First, to (\ref{16}) we should add the potential due to the tidal distortion of the moon by the planet. That input contributes mainly to the radial component of the tidal force exerted on the moon, and entails a decrease in eccentricity and semi-major axis (MacDonald 1964). Here we omit this term, since our goal is to clarify the frequency dependence of the lag. Second, we acknowledge that in many realistic situations the $\,k_3\,$ term and sometimes even the $\,k_4\,$ term are relevant (Bills et al. 2005). Intending to include these inputs in our subsequent work, here we shall restrict our consideration to the leading term only.
Hence the ensuing formula for the tidal force will read:\footnote{~Be mindful that $\;O(\,{\mbox{{\boldmath $\vec f$}}}^{\;\;{2}}/r^2)\;=\;O(\,\delta^2)\,\ll\,1\;$.} \begin{eqnarray} \nonumber \mbox{{\boldmath $\vec{\cal{F}}$}}\;=\;-\;\frac{3\;k_2\;G\;m^2\;R^5}{r^{7}}\;\left[\;\frac{\mbox{{\boldmath $\vec r$}}}{r}\;- \;\frac{\vec{\mbox{\it\textbf{f}}}}{r}\;-\;2\;\frac{\mbox{{\boldmath $\vec r$}}}{r}\; \frac{\,\mbox{{\boldmath $\vec r$}}\cdot{\vec{\mbox{\it\textbf{f}} }}\,}{r^2}\;+\;O(\,{\mbox{{\boldmath $\vec f$}}}^{\;\;{2}}/r^2) \right]\;+\;O(k_3Gm^2R^7/r^{9})\;\;\;\\ \nonumber\\ \approx\;-\;\frac{3\;k_2\;G\;m^2\;R^5}{r^{10}}\;\left[\;{\mbox{{\boldmath $\vec r$}}}\;{r^2}\;- \;{\vec{\mbox{\it\textbf{f}}}}\;{r^2}\;-\;2\;{\mbox{{\boldmath $\vec r$}}}\;\left(\,\mbox{{\boldmath $\vec r$}} \cdot{\vec{\mbox{\it\textbf{f}}}}\,\right)\;\right]\;\;\;, \label{18} \end{eqnarray} The third important caveat is that in our further exploitation of this formula we shall take into account the frequency-dependence of the lag $\,\mbox{\boldmath $\vec {\boldmath{\,f}}$}\,$, but not of the parameter $\,k_2\,$. While the dependence $\,\mbox{\boldmath $\vec {\boldmath{\,f}}$}(\chi)\,$ will be derived through the interconnection of $\,\mbox{\boldmath $\vec {\boldmath{\,f}}$}\,$ with $\,\delta(\chi)\,$ and therefore with $\,Q(\chi)\,$, the value of $\,k_2\,$ will be held constant. That the latter is acceptable can be proven through the following formula obtained by Darwin (1908) under the assumption of the planet being a Maxwell body (see also Correia \& Laskar 2003): \begin{eqnarray} \nonumber k_2(\chi)\;=\;k_{\textstyle{_{fluid}}}\;\,\sqrt{\;\frac{1\;+\;\chi^2\;\eta^2/\mu^2}{1\;+\; \left(\,\chi^2\;\eta^2/\mu^2\,\right)\;\left(\,1\;+\;19\;\mu/(2\,g\,\rho\,R)\,\right)^2}\;} \label{k_2}\;\;\;. \end{eqnarray} Here $\,k_{\textstyle{_{fluid}}}\,$ is the so-called fluid Love number.
This is the value that $\,k_2\,$ would have assumed had the planet consisted of a perfect fluid with the same mass distribution as the actual planet. Notations $\,\mu\,$, $\,\rho\,$, $\,\mbox{g}\,$, and $\,R\,$ stand for the rigidity, mean density, surface gravity, and radius of the planet. For these parameters, we shall keep using the estimates from subsection 3.4. The letter $\,\eta\,$ signifies the viscosity. Up to an order or two of magnitude, its value may be approximated, for a terrestrial planet's mantle, with $\,10^{22}\,$kg/(m$\cdot$ s). This will yield: $\,\chi^2\;\eta^2/\mu^2\,=\,(\,\chi\,\cdot\,10^{11}\,\mbox{s}\,)^2\,$, wherefrom we see that in all realistic situations pertaining to terrestrial planets the frequency-dependence in Darwin's formula will cancel out. Thus we shall neglect the frequency-dependence of the Love number $\,k_2\,$ (but shall at the same time take into account the frequency-dependence of $\,Q\,$, for it will induce frequency-dependence of all three lags). The interconnection between the position, time, and angular lags, \begin{eqnarray} \delta\;\equiv\,\;\frac{|{\vec{\mbox{\it\textbf{f}}}} \,|}{r}\,\;=\;\Delta t\;\; \frac{1}{r}\;|\;\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}\;-\;\mbox{{\boldmath $\vec v$}}\;|\;\,=\;\frac{\Delta t}{2} \;\chi~~~, \label{19} \end{eqnarray} can be equivalently rewritten as: \begin{eqnarray} {\vec{\mbox{\it\textbf{f}}}}~=~{\bf{\hat{f}}}\;r\;\delta\;=\;r\;\frac{\Delta t}{2} \;\chi\;{\bf{\hat{f}}}\;\;\;, \label{20} \end{eqnarray} where \begin{eqnarray} {\bf{\hat{f}}}\;=\;\frac{~\mbox{{\boldmath $\vec \omega$}}_p \times \mbox{{\boldmath $\vec r$}}\,-\,\mbox{{\boldmath $\vec v$}}}{~|\, \mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}\,-\,\mbox{{\boldmath $\vec v$}}\,|} \label{21} \end{eqnarray} is a unit vector pointing in the lag direction.
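The asserted frequency-flatness of Darwin's formula is easy to see by evaluating $k_2(\chi)/k_{fluid}$ at the two ends of the relevant frequency range; this is a sketch with Mars-like values of $\mu$, $\eta$, $\mbox{g}$, $\rho$, and $R$ assumed as above:

```python
import math

mu  = 1.0e11          # rigidity, Pa
eta = 1.0e22          # viscosity, kg/(m s); note eta/mu = 1e11 s
g, rho, R = 3.7, 3.9e3, 3.4e6   # surface gravity, mean density, radius of Mars

def k2_over_kfluid(chi):
    # Darwin's formula, normalized by the fluid Love number
    x2 = (chi * eta / mu)**2
    b2 = (1 + 19 * mu / (2 * g * rho * R))**2
    return math.sqrt((1 + x2) / (1 + x2 * b2))

chi_yr  = 1.0 / 3.16e7            # ~ 1/yr, in s^-1
chi_day = 2 * math.pi / 86400.0   # ~ diurnal tidal frequency, s^-1

# Both values sit on the high-frequency plateau  1 / (1 + 19 mu / (2 g rho R)):
plateau = 1 / (1 + 19 * mu / (2 * g * rho * R))
print(k2_over_kfluid(chi_yr), k2_over_kfluid(chi_day), plateau)
```

Since $\chi\,\eta/\mu \gg 1$ already at $\chi = 1\,$yr$^{-1}$, the $\chi$-dependence cancels between numerator and denominator, which is the statement in the text.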
Be mindful that we assume the inclination and eccentricity to be small, wherefore the ratio \begin{eqnarray} \nonumber \frac{|{\vec{\mbox{\it\textbf{f}}}} \,|}{r}\,\;=\;\Delta t\;\; \frac{1}{r}\;|\;\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}\;-\;\mbox{{\boldmath $\vec v$}}\;| \end{eqnarray} is simply the tangential angular lag, i.e., the geometric angle subtended at the primary's centre between the moon and the bulge. In the general case of a finite inclination and/or eccentricity, all our formulae will remain in force, but the lag $\, \delta\,$ will no longer have the meaning of the subtended angle. At this point, it would be convenient to introduce a dimensional integral parameter $\,{\cal{E}}\,$ describing the overall tidal attenuation rate in the planet. The power scaling law mentioned in section 3 may be expressed as \begin{eqnarray} Q\,=\,{\cal E}^{\alpha}\,\chi^{\alpha}~~~,~~~ \label{Q} \end{eqnarray} where $\,{\cal E}^{\alpha}\,$ is simply the dimensional factor emerging in the relation $\,Q\,\sim\,\chi^{\alpha}\,$. As mentioned in subsection 3.5, cooling of the planet should become a part of long-term orbital calculations. It enters these calculations through evolution of this parameter $\,{\cal{E}}\,$. Under the assumption that most of the tidal dissipation is taking place in some layer, for which an average temperature $T$ and an average activation energy $A^*$ may be defined, (\ref{427}) yields: \begin{eqnarray} \nonumber {\cal E}\;=\;{\cal E}_o\;\exp\left[\,\frac{A^*}{R}\; \left(\,\frac{1}{T}\;-\;\frac{1}{T_o}\;\right)\;\right]\;\;\;, \end{eqnarray} $T_o\,$ being the temperature of the layer at some fiducial epoch.
The physical meaning of the integral parameter $\,{\cal{E}}\,$ is transparent: if the planet were assembled of a homogeneous medium, with a uniform temperature distribution, and if attenuation in this medium were caused by one particular physical mechanism, then $\,{\cal{E}}\,$ would be a relaxation time scale associated with this mechanism (say, the time of defect displacement). For a realistic planet, $\,{\cal{E}}\,$ may be interpreted as a relaxation time averaged (in the sense of $\,Q\,=\,{\cal E}^{\alpha}\,\chi^{\alpha}\,$) over the planet's layers and over the various damping mechanisms acting within these layers. As (\ref{one}) entails $\;\delta\,\approx\,1/(2\,Q)\,=\,(1/2)\, {\cal{E}}^{-\alpha}\,\chi^{-\alpha}\;$, then (\ref{19}) necessitates for the position lag: \begin{eqnarray} {\vec{\mbox{\it\textbf{f}}}}~=~r\;\delta\;{\bf{\hat{f}}}\;=\;\frac{1}{2}\; r\;{\cal E}^{-\alpha}\;\chi^{-\alpha}\;{\bf{\hat{f}}}\;\;\;, \label{23} \end{eqnarray} and for the time lag: \begin{eqnarray} {\Delta t}\;=\;{\cal E}^{-\alpha}\;\chi^{-(\alpha +1)}\;\;\;, \label{22} \end{eqnarray} ${\cal E}\,$ being the planet's integral parameter introduced above, and $\,\chi\,$ being a known function (\ref{5}) of the orbital variables. 
Putting everything together, we arrive at \begin{eqnarray} {\vec{\mbox{\it\textbf{f}}}}~=\,\frac{1}{2}\,\left({\cal{E}}{\chi}\right)^{-\alpha}\,a\; \,\frac{1\,-\,e^2}{1\,+\,e\;\cos\nu}\;\,\frac{~\mbox{{\boldmath $\vec \omega$}}_p \times\mbox{{\boldmath $\vec r$}}\,-\,\mbox{{\boldmath $\vec v$}}~}{~| \,\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}\,-\,\mbox{{\boldmath $\vec v$}}\,|~}~=~\frac{1}{2}\,\left({\cal{E}}{\chi} \right)^{-\alpha}\,a\;\,\frac{~\mbox{{\boldmath $\vec \omega$}}_p \times\mbox{{\boldmath $\vec r$}}\,-\,\mbox{{\boldmath $\vec v$}}~}{~| \,\mbox{{\boldmath $\vec \omega$}}_p\times\mbox{{\boldmath $\vec r$}}\,-\,\mbox{{\boldmath $\vec v$}}\,|~}\;+\;O(e)\;\;,~~ \label{24} \end{eqnarray} where \begin{eqnarray} \chi\,\equiv\;2\;|\omega_p\,-\;n|\;\;.\;\; \label{25} \end{eqnarray} The time lag is, according to (\ref{22}): \begin{eqnarray} \Delta t= {\cal{E}}\;\left(\,2\;{\cal{E}}\;|\omega_p\,-\;n|\, \right)^{\textstyle{^{-\,(\alpha+1)}}}.~ \label{39} \end{eqnarray} Formulae (\ref{39}), (\ref{24}), and (\ref{18}) are sufficient to both compute the orbit evolution and trace the variations of the time lag. \section{The example of Phobos' fall to Mars} As an illustrative example, let us consider how the realistic dependence $\,Q(\chi)\,$ alters the lifetime left for Phobos. We shall neglect the fact that Phobos is close to its Roche limit, and may be destroyed by tides prior to its fall. We shall also restrict the dynamical interactions between Phobos and Mars to a two-body problem disturbed solely by the tides raised by Phobos on Mars. Thus we shall omit all the other perturbations, such as the Martian non-sphericity and precession, or the pull exerted upon Phobos by the Sun, the planets, and Deimos. If, along with these simplifications, we assume the eccentricity and inclination to be small, then we shall be able to describe the evolution of the semi-major axis by means of the following equation (Kaula 1964, p.
677, formula 41): \begin{equation} \frac{da}{dt}\;=\;-\;\frac{3\;k_2\;R^5\;G\,m}{Q\;\sqrt{G\,(M_o\,+\,m)}\; a^{11/2}}\;\;\;, \label{eq:Kaula} \end{equation} with $\,M_o\,$ and $\,m\,$ denoting the masses of Mars and Phobos. This equation can be solved analytically, provided the quality factor $\,Q\,$ is set constant (as in Kaula 1964). The solution is: \begin{equation} a(t)\;=\;\left(\;-\;\frac{39\;k_2\;R^5\;G\,m}{2\;Q\sqrt{G\,(M_o\,+\,m)}}\;t \;+\;a_o^{13/2}\right)^{2/13} \label{solution} \end{equation} $a_o\,\equiv\,a(t)|_{\textstyle{_{t=0}}}\,$ being the initial value of Phobos' semi-major axis. Unfortunately, neither our model (wherein $\,Q\,$ is given by (\ref{Q})$\,$) nor the Singer-Mignard model (with $\,Q\,$ scaling as $\,1/\chi\;$) admits such an easy analytical solution. This compels us to rely on numerics. In a (quasi)inertial frame centered at Mars, the equation of motion reads: \begin{equation} \frac{d^2\mbox{{\boldmath $\vec r$}}}{dt^2}\;=\;-\;\frac{G\,(M_o\,+\,m)\;\mbox{{\boldmath $\vec r$}}}{r^3}\;-\; \frac{3\;k_2\;G\,(M_o\,+\,m)\;m\;R^5}{r^{10}\;M_o} \;\left[\;-\;{\vec{\mbox{\it\textbf{f}}}}\;r^2\;-\;2\;\mbox{{\boldmath $\vec r$}}\; \left(\;\mbox{{\boldmath $\vec r$}}\cdot{\vec{\mbox{\it\textbf{f}} }}\,\right)\; \right]\;\;\;, \label{eq:phobosXYZ} \end{equation} Naturally, its right-hand side consists of the principal, two-body contribution $\;-\,{G(M_o+m)\mbox{{\boldmath $\vec r$}}}/{r^3}\,$ and the disturbing tidal force given by (\ref{18}). It should be noted, however, that we did not bring in here all the terms from (\ref{18}). Following Mignard (1980), we retain in (\ref{eq:phobosXYZ}) only the perturbing terms dependent on $\,{\vec{\mbox{\it\textbf{f}}}}\,$. The other perturbing term (the first term on the right-hand side of (\ref{18})$\,$) is missing in (\ref{eq:phobosXYZ}), because it provides no secular input into the semi-major axis' evolution. For a proof of this fact see Appendix A.3 below.
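A small numerical sketch confirming that the closed-form solution (\ref{solution}) indeed satisfies (\ref{eq:Kaula}); the Love number, masses, and initial semi-major axis below are illustrative placeholders, not fitted values:

```python
import math

G   = 6.674e-11      # m^3 kg^-1 s^-2
M_o = 6.42e23        # Mars, kg
m   = 1.07e16        # Phobos, kg (illustrative)
R   = 3.39e6         # Mars radius, m
k2  = 0.15           # illustrative Love number
Q   = 80.0           # constant quality factor
a0  = 9.38e6         # initial semi-major axis, m

# da/dt = -C a^(-11/2),  with  C = 3 k2 R^5 G m / (Q sqrt(G (M_o + m)))
C = 3 * k2 * R**5 * G * m / (Q * math.sqrt(G * (M_o + m)))

def a(t):
    # the closed-form solution: a^(13/2) decreases linearly in time
    return (a0**6.5 - 6.5 * C * t)**(2.0 / 13.0)

t, h = 1.0e13, 1.0e6                       # epoch and step, seconds
dadt_fd = (a(t + h) - a(t - h)) / (2 * h)  # finite-difference derivative
dadt_an = -C * a(t)**(-5.5)                # right-hand side of the ODE
print(dadt_fd, dadt_an)
```

The factor $39/2$ in (\ref{solution}) is just $(13/2)\times 3$, which the separation of variables above makes explicit.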
Phobos' orbital motion obeys the planetary equations in the Euler-Gauss form. Under the assumption of $\,{\it{i}}\,$ and $\,e\,$ being small, the problem conveniently reduces to one equation: \begin{equation} \frac{da}{dt}=\frac{2S}{n}\;\;\;, \label{eq:gauss} \end{equation} where $\,S\,$ is the tidal acceleration given by the second term of (\ref{eq:phobosXYZ}) projected onto the tangential direction of the satellite motion. This direction is defined by the unit vector $\;{(\mbox{{\boldmath $\vec H$}}\times\mbox{{\boldmath $\vec r$}})}/{|\mbox{{\boldmath $\vec H$}}\times\mbox{{\boldmath $\vec r$}}|}\;$, where $\,\mbox{{\boldmath $\vec H$}}\,$ denotes the angular-momentum vector. Thence the said projection reads: \begin{eqnarray} S\;=\;-\;\frac{3\;k_2\;G\;(M_o\,+\,m)\;m\;R^5}{r^{10}\;M_o}\;\left[\;-\;r^2 \;\Delta t\;\left(\;\mbox{{\boldmath $\vec \omega$}}_p\,\times\,\mbox{{\boldmath $\vec r$}}\;-\;\mbox{{\boldmath $\vec v$}}\;\right)\;+\; 2\;\mbox{{\boldmath $\vec r$}}\;\Delta t\;(\mbox{{\boldmath $\vec r$}}\,\cdot\,\mbox{{\boldmath $\vec v$}})\,\right]\,\cdot\,\frac{(\mbox{{\boldmath $\vec H$}} \times\mbox{{\boldmath $\vec r$}})}{|\mbox{{\boldmath $\vec H$}}\times\mbox{{\boldmath $\vec r$}}|}\;\;\;.\;\;\;\;\; \label{projection} \end{eqnarray} In Mignard (1980) the appropriate expression is given with a wrong sign. This is likely to be a misprint, because the subsequent formulae in his paper are correct.
Under the assumption of $\,{\it{i}}\,$ and $\,e\,$ being negligibly small, $\,(\mbox{{\boldmath $\vec H$}}\,\times\,\mbox{{\boldmath $\vec r$}})\cdot(\,\mbox{{\boldmath $\vec \omega$}}_p\,\times\,\mbox{{\boldmath $\vec r$}}\,)\,$ can be approximated with $\,n\,a^4\,\omega_p\,$, whereafter (\ref{projection}) gets simplified to \begin{equation} S\;=\;-\;\frac{3\;k_2\;R^5\;G\;(M_o\,+\,m)\;m\;\Delta t}{M_o\;a^7}\;\left(n\;-\; \omega_p\right)\;\;\;, \label{eq:S} \end{equation} substitution whereof into (\ref{eq:gauss}) entails the following equation to integrate: \begin{equation} \frac{da}{dt}\;=\;-\;\frac{6\;k_2\;R^5\;n\;m\;\Delta t}{M_o\;a^4}\;\left(n\;-\; \omega_p\right)\;\;\;. \label{eq:Gauss-Simplify} \end{equation} Our computational scheme was based on the numerical integrator RA15 offered by Everhart (1985). The initial value of $\,a\,$, as well as the values of all the other physical parameters entering (\ref{eq:Gauss-Simplify}), were borrowed from Lainey et al. (2007). These included an estimate of $\,0.6644\,$min for the present-day time lag $\,\Delta t\,$. Four numerical simulations were carried out. One of these implemented the Singer-Mignard model with a tidal-frequency-independent $\,\Delta t\,$. The other three integrations were performed for the realistic frequency-dependence (\ref{39}), with $\,\alpha\,=\,0.2\,$, $0.3$ and $0.4$. To find the integral parameter $\,{\cal E}\,$ emerging in (\ref{22}), we used the present-day values of $\,\Delta t\,$ and $\,n\,$. The resulting values of $\,{\cal E}\,$ were found to be $\,1201\,\times\,10^5\,$day$\;$rad$^{-1}$, $\,81028\,$day$\;$rad$^{-1}$, and $\,2104\,$day$\;$rad$^{-1}$ for $\,\alpha\,=\,0.2\,$, $0.3\,$ and $\,0.4\,$, respectively. We did not take into account the strong temperature-dependence of $\,{\cal E}\,$, leaving this interesting topic for discussion elsewhere.
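The calibration of $\,{\cal E}\,$ can be sketched as follows; the present-day Phobos orbital period of $7.654\,$h and Martian rotation period of $24.623\,$h are assumed here, and with them the inversion of (\ref{22}) reproduces the quoted values of $\,{\cal E}\,$ to within about one percent:

```python
import math

# Assumed present-day data: Phobos' orbital period and Mars' rotation period
n       = 2 * math.pi / (7.654 / 24.0)    # mean motion, rad/day
omega_p = 2 * math.pi / (24.623 / 24.0)   # spin rate of Mars, rad/day
chi     = 2 * abs(omega_p - n)            # principal tidal frequency, rad/day
Dt      = 0.6644 / (60 * 24)              # present-day time lag, days

# Inverting  Delta-t = E^(-alpha) chi^(-(alpha+1))  for E:
E = {alpha: (chi**(-(alpha + 1)) / Dt)**(1 / alpha) for alpha in (0.2, 0.3, 0.4)}
for alpha, val in sorted(E.items()):
    print(f"alpha = {alpha}:  E = {val:.4g} day/rad")
```

Note how steeply $\,{\cal E}\,$ depends on $\alpha$: the exponent $1/\alpha$ amplifies the inversion, which is why the three quoted values span five orders of magnitude.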
Simultaneous numerical integration of equations (\ref{39}) and (\ref{eq:Gauss-Simplify}) results in the plots presented in Figures \ref{Efroimsky-Lainey-fig1.eps} and \ref{Efroimsky-Lainey-fig2.eps}. The first of these pictures shows the evolution of Phobos' semi-major axis from its present value until the satellite crashes on Mars, having descended about $\,6000\,$ km. The leftmost curve reproduces the known result that, according to the Singer-Mignard model with a constant $\,\Delta t\,$, Phobos should fall on Mars in about $\,29\,$ Myr.\footnote{~In the paragraph after his formula (18), Mignard (1981) states that ``Phobos will end its life in about $\,36\,$ million years". Mignard arrived at that number by using an old estimate of $\,20\,$ deg/cyr$^2$ for the initial tidal acceleration. Later studies, like for example Jacobson et al. (1989) and Lainey et al. (2007), have shown that this value should be increased to $\,25.4\,$ deg/cyr$^2$. It is for this reason that our simulation based on the Singer-Mignard model gives not $\,36\,$ but only $\,29\,$ Myr for Phobos' remaining lifetime.} The next curve was obtained not numerically but analytically. It depicts the analytical solution (\ref{solution}) available for the Gerstenkorn-MacDonald-Kaula model with a constant $\,Q\,$, and demonstrates that this model predicts for Phobos a longer lifetime, $\,38\,$ Myr. The three curves on the right were obtained by numerical integration of (\ref{39}) and (\ref{eq:Gauss-Simplify}). They correspond to the realistic rheology with $\,\alpha\,$ equal to $\,0.2\,$, $\,0.3\,$ and $\,0.4\,$. It can be seen that within the realistic model Phobos is expected to survive for about $\,40\,-\,43\,$ Myr, dependent upon the actual value of $\,\alpha\,$ of the Martian mantle. This is about $\,15\,$ Myr longer than within the Singer-Mignard model widely accepted hitherto.
The difference between the three scenarios shown on Figure \ref{Efroimsky-Lainey-fig1.eps} stems from the different rate of evolution of the lag $\,\Delta t\,$ in the three theories addressed. Within the Singer-Mignard formalism, $\,\Delta t\,$ stays unchanged through the descent. As can be seen from formula (\ref{39}), this is equivalent to setting $\,\alpha\,=\,-\,1\,$, an assertion not supported by geophysical data. Within the Gerstenkorn-MacDonald-Kaula model, the time lag is subject to a gradual decrease described by the formula \begin{equation} \Delta t\;=\;\frac{\arctan(1/Q)}{2\;|n\,-\,\omega_p|} \label{dt} \end{equation} under the assumption that $\,Q\,$ is constant and is equal to its present-day value $\,Q\,=\,79.91\,$ determined in (Lainey et al. 2007). Comparison of (\ref{dt}) with (\ref{39}) reminds us of the simple fact that, in terms of our model, Gerstenkorn-MacDonald-Kaula's theory corresponds to the choice of $\,\alpha\,=\,0\,$, a choice which is closer to the realistic rheology than the Singer-Mignard model. In the realistic model, $\,\alpha\,$ is positive and assumes a value of about $\,0.2\,-\,0.4\,$. As a result, the time lag is gradually decreasing. However this decrease looks different from that in Gerstenkorn-MacDonald-Kaula's model -- for their comparison see Figure~\ref{Efroimsky-Lainey-fig2.eps}. \begin{figure} \includegraphics[width=12.1cm,angle=-90]{Efroimsky-Lainey-fig1_colour_.eps} \caption{\small ~~Evolution of Phobos' semi-major axis, as predicted by different models. The lines (from left to right) correspond to the Singer-Mignard model, to the Gerstenkorn-MacDonald-Kaula model, and to the realistic rheology with $\,\alpha\,=\,0.2\,$, with $\,\alpha\,=\,0.3\,$, and with $\,\alpha\,=\,0.4\,$, respectively.} \label{Efroimsky-Lainey-fig1.eps} \end{figure} \begin{figure} \includegraphics[width=12.1cm,angle=-90]{Efroimsky-Lainey-fig2_colour_.eps} \caption{\small ~~Evolution of $\Delta t$ computed for three different models.
The horizontal green line corresponds to the Singer-Mignard model wherein $\,\Delta t\,$ is set to be constant. The red line corresponds to the Gerstenkorn-MacDonald-Kaula model. The dark-blue, violet, and light-blue lines correspond to the realistic rheological models with $\,\alpha\,=\,0.2\;$, with $\,\alpha\,=\,0.3\,$, and with $\,\alpha\,=\,0.4\,$, respectively.} \label{Efroimsky-Lainey-fig2.eps} \end{figure} \section{Conclusions} As the tidal angular lag $\,\delta\,$ is inversely proportional to the tidal $\,Q\,$ factor, the actual frequency-dependence of both $\,\delta\,$ and $\,\Delta t\,$ is unambiguously defined by the frequency-dependence of $\,Q\,$. While in the Gerstenkorn-MacDonald-Kaula theory of tides the geometric lag is assumed frequency-independent, in the Singer-Mignard theory it is the time lag that is spared of frequency dependence. However, neither of these two choices conforms to the geophysical data. We introduce a realistic tidal model, which permits the quality factor and, therefore, both the angular lag $\,\delta\,$ and the time lag $\,\Delta t\,$ to depend on the tidal frequency $\,\chi\,$. The quality factor is wont, according to numerous studies, to obey the law $\,Q\,\sim\, \chi^{\alpha}\,$, where $\,\alpha\,$ lies within $\,0.2\,-\,0.4\,$. This makes the time lag $\,\Delta t\,$ not a constant but a function (\ref{22}) of the principal tidal frequency and, through (\ref{39}), of the orbital elements of the satellite. The same pertains to the angular lag $\delta$. Using these tidal-frequency dependencies for the time and angular lags, along with the recently updated values of the Martian parameters, we explored the future of Phobos, taking into account only the tides raised by Phobos on Mars, but not those caused by Mars on Phobos. Our integration shows that Phobos will fall on Mars in $\,40\,-\,43\,$ Myr from now. This is up to $\,50\,\%\,$ longer than the estimate stemming from the Singer-Mignard model employed in the past.
This demonstrates that the currently accepted time scales of dynamical evolution, deduced from old tidal models, should be reexamined using the actual frequency dependence of the lags. ~\\ {\underline{\textbf{\Large{Acknowledgments}}}}\\ ~\\ ME would like to deeply thank Peter Goldreich, Francis Nimmo, William Moore, and S. Fred Singer for their helpful comments and recommendations. VL wishes to gratefully acknowledge his fruitful conversation with Attilio Rivoldini concerning dissipation in Mars. The authors' very special gratitude goes to Shun-ichiro Karato whose consultations on the theory and phenomenology of the quality factor were crucially important for the project. ~\\ {\underline{\textbf{\Large{Appendix.}}}}\\ ~\\ The goal of this Appendix is threefold. First, we remind the reader why in the first approximation the quality factor is inversely proportional to the phase lag. Second, we explain why the phase lag is twice the geometric lag angle, as in formulae (\ref{420} - \ref{421}) above. While a comprehensive mathematical derivation of this fact can be found elsewhere (see the unnumbered formula between equations (29) and (30) on p. 673 in Kaula 1964), here we illustrate this counterintuitive result by using the simplest setting. Third, we justify our neglect of the first term in (\ref{18}). ~\\ ~\\ \noindent {\textbf{{A.1.~~The case of a near-circular near-equatorial orbit.}}}\\ ~\\ Consider the simple case of an equatorial moon on a circular orbit. At each point of the planet, the tidal potential produced by this moon will read \begin{eqnarray} W\;=\;W_o\;\cos \chi t\;\;\;, \label{A3} \label{469} \end{eqnarray} the tidal frequency being given by \begin{eqnarray} \chi\,=\,2~|n\;-\;\omega_p|~~~.~~~ \label{470} \end{eqnarray} Let $\,\mbox{g}\,$ denote the free-fall acceleration.
An element of the planet's volume lying beneath the satellite's trajectory will then experience a vertical elevation of \begin{eqnarray} \zeta\;=\;\frac{W_o}{\mbox{g}}\;\cos (\chi t\;-\;2\delta)\;\;\;. \label{A4} \label{471} \end{eqnarray} Accordingly, the vertical velocity of this element of the planet's volume will amount to \begin{eqnarray} u\;=\;\dot{\zeta}\;=\;-\;\chi\;\frac{W_o}{\mbox{g}}\;\sin (\chi t \;-\;2\;\delta)\;=\;-\;\chi\;\frac{W_o}{\mbox{g}}\;\left(\sin \chi t\;\cos 2\delta\;-\;\cos \chi t\; \sin 2\delta\right)\;\;.\;\; \label{A5} \label{472} \end{eqnarray} The expression for the velocity has such a simple form because in this case the instantaneous frequency $\,\chi\,$ is constant. The satellite generates two bulges -- on the facing and opposite sides of the planet -- so each point of the surface is uplifted twice through a cycle. This entails the factor of two in the expression (\ref{470}) for the frequency. The phase in (\ref{A4}), too, is doubled, though the necessity of this is less evident.\footnote{~Let $\,x\,$ signify a position along the equatorial circumference of the planet. In the absence of lag, the radial elevation at a point $\,x\,$ would be: \begin{eqnarray} \nonumber \zeta\;=\;\frac{W_o}{\mbox{g}}\;\cos k(x\;-\;v\;t)\;\;\;,\;\;\;\;\;\;v\,=\,R\,\sigma\;\;\;, \end{eqnarray} $v\,$ being the velocity of the satellite's projection on the ground, $\,R\,$ being the planet's radius, and $\sigma$ being simply $\,|n-\omega_p|\,$ because we are dealing with a circular equatorial orbit. The value of $\,k\,$ must satisfy \begin{eqnarray} \nonumber k\;v\;=\;2\;\sigma\;\;\;,\;\mbox{i.e.,}\;\;\;\;k\;v\;=\;\chi\;\;\;, \end{eqnarray} to make sure that at each $\,x\,$ the ground elevates twice per orbital cycle. The above two formulae yield: \begin{eqnarray} \nonumber k\;R\;=\;2\;\;\;.
\end{eqnarray} In the presence of lag, all above stays in force, except that the formula for radial elevation will read: \begin{eqnarray} \nonumber \zeta\;=\;\frac{W_o}{\mbox{g}}\;\cos k(x\;-\;v\;t\;+\;D)\;\;\;,\;\;\;\; \mbox{where}\;\;\;\; D\;=\;R\;\delta\;\;\;, \end{eqnarray} $D\,$ being the linear lag, and $\,\delta\,$ being the angular one. Since $\,k\,v\,=\,2\,$, we get: \begin{eqnarray} \nonumber \cos \left[\,k\;(x\;-\;v\;t\;+\;R\;\delta_{_1})\,\right]\;=\;\cos \left[\,k\;x \;-\;k\;v\;t\;+\;k\;R\;\delta\,\right]\;=\;\cos \left[\,k\;x\;-\;(k\;v\;t\;-\;2\;\delta)\,\right]\;\;\;, \end{eqnarray} so that, at some fixed point (say, at $\,x\,=\,0\,$) the elevation becomes: \begin{eqnarray} \nonumber \zeta(t)\;=\;\frac{W_o}{\mbox{g}}\;\cos (k\;v\;t\;-\;2\;\delta)\;\;\;. \end{eqnarray} We see that, while the geometric lag is $\;\delta\,$, the phase lag is double thereof.} The energy dissipated over a time cycle $\,T\,=\,2\pi/\chi\,$, per unit mass, will, in neglect of horizontal displacements, be \begin{eqnarray} \nonumber \Delta E_{_{cycle}} &=& \int^{T}_{0}u\left(-\,\frac{\partial W}{ \partial r}\right)dt= \,-\left(-\,\chi \frac{W_o}{\mbox{g}}\right)\,\frac{\partial W_o}{ \partial r}\int^{t=T}_{t=0}\cos \chi t\,\left(\sin \chi t\, \cos 2\delta\,-\,\cos \chi t\, \sin 2\delta\right)dt\\ \nonumber\\ \nonumber\\ &=&\,-\;\chi\;\frac{W_o}{\mbox{g}}\;\frac{\partial W_o}{\partial r}\;\sin 2\delta \;\frac{1}{\chi}\;\int^{\chi t\,=\,2\pi}_{\chi t\,=\,0}\;\cos^2 \chi t\;\;d(\chi t)\;=\;-\;\frac{W_o}{\mbox{g}}\;\frac{\partial W_o}{\partial r}\;\pi\;\sin2\delta \;\;,\;\;\;~~~~~~~~~~~~~~~~~~~~ \label{A6} \label{} \end{eqnarray} while the peak energy stored in the system during the cycle will read: \begin{eqnarray} \nonumber E_{_{peak}}&=&\int^{T/4}_{0} u \left(-\,\frac{\partial W}{ \partial r}\right)dt = \,-\left(-\,\chi\,\frac{W_o}{\mbox{g}}\right)\frac{\partial W_o }{\partial r}\int^{t=T/4}_{t=0}\cos \chi t\,\left(\sin \chi t\,\cos 2\delta\,-\,\cos \chi t\,\sin 
2\delta\right)dt\\ \nonumber\\ \nonumber\\ \nonumber &=&\;2\;\sigma\;\frac{W_o}{\mbox{g}}\;\frac{\partial W_o}{\partial r}\;\left[\; \frac{\cos 2\delta}{\chi}\;\int^{\chi t\,=\,\pi/2}_{\chi t\,=\,0} \;\cos \chi t\;\sin \chi t\;\;d(\chi t)\;-\;\frac{\sin 2\delta }{\chi}\;\int^{\chi t\,=\,\pi/2}_{\chi t\,=\,0}\;\cos^2 \chi t \;\;d(\chi t)\;\right]\\ \nonumber\\ \nonumber\\ &=&\;\frac{W_o}{\mbox{g}}\;\frac{\partial W_o}{\partial r}\;\left[\;\frac{1}{2} \;\cos 2\delta\;-\;\frac{\pi}{4}\;\sin 2\delta\;\right]~~~ \label{A7} \end{eqnarray} whence \pagebreak \begin{eqnarray} Q^{-1}\;=\;\frac{-\;\Delta E_{_{cycle}}}{2\,\pi\,E_{_{peak}}}\;=\;\frac{1}{2\,\pi} \;\,\frac{\pi\;\sin 2\delta}{~\frac{\textstyle 1}{\textstyle 2}\;\cos2\delta\;-\; \frac{\textstyle\pi}{\textstyle 4}\;\sin 2 \delta}\;\approx\;\tan 2 \delta\;\;\;. \label{A8} \end{eqnarray} The above formulae were written down in neglect of horizontal displacements, an approximation justified below in the language of continuum mechanics. ~\\ ~\\ {\textbf{{A.2.~~On the validity of our neglect of the horizontal displacements}}}\\ In our above derivation of the interrelation between $\,Q\,$ and $\,\delta\,$, we greatly simplified the situation, taking into account only the vertical displacement of the planetary surface, in response to the satellite's pull. Here we shall demonstrate that this approximation is legitimate, at least in the case when the planet is modeled with an incompressible and homogeneous medium.
As a starting point, recall that the tidal attenuation rate within a tidally distorted planet is well approximated with the work performed on it by the satellite\footnote{~A small share of this work is being spent for decelerating the planet rotation.}: \begin{eqnarray} \dot{E}\;=\;-\;\int\,\rho\;\mbox{{\boldmath $\vec V$}}\,\cdot\,\nabla W\;d^3x \label{A9} \end{eqnarray} where $\,\rho\,,\;\mbox{{\boldmath $\vec V$}}\,$ and $\,W\,$ are the density, velocity, and tidal potential inside the planet. To simplify this expression, we shall employ the equality \begin{eqnarray} \rho\,\mbox{{\boldmath $\vec V$}}\,\cdot\,\nabla W\,=\,\nabla\,\cdot\,(\rho\;\mbox{{\boldmath $\vec V$}}\;W)\;-\;W\;\mbox{{\boldmath $\vec V$}}\cdot \nabla\rho\;-\;\rho\;W\;\nabla\,\cdot\,\mbox{{\boldmath $\vec V$}}\,=\,\nabla\,\cdot\,(\rho\; \mbox{{\boldmath $\vec V$}}\;W)\;-\;W\;\mbox{{\boldmath $\vec V$}}\cdot\nabla\rho\;+\;W\; \frac{\partial\rho}{\partial t}\;\;.\;\;\; \label{} \end{eqnarray} For a homogeneous and incompressible primary, both the $\,\mbox{{\boldmath $\vec V$}}\;W \nabla \rho\,$ and $\,\partial\rho/\partial t\,$ terms are nil, wherefrom \begin{eqnarray} \dot{E}\;=\;-\;\oint\rho\;W\;\mbox{{\boldmath $\vec V$}}\,\cdot\,{\vec{\bf{n}}}\;dS\;\;\;, \label{} \end{eqnarray} ${\vec{\bf{n}}}\,$ being the outward normal to the surface of the planet. We immediately see that, within the hydrodynamical model, it is only the radial elevation rate that matters. Now write the potential as $\;W\,=\,W_o\,\cos(\chi\,t)\;$. Since the response is delayed by $\, \Delta t\,$, the surface-inequality rate will evolve as $\; \mbox{{\boldmath $\vec V$}}\cdot{\vec{\bf{n}}}\,\sim\,\sin\left[\,\chi\,(\,t\,-\,\Delta t\,)\,\right] \;$. All the rest will then be as in subsection A.1 above.
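The first equality employed above is, at heart, the product rule for $\nabla\cdot(\rho\,\mbox{{\boldmath $\vec V$}}\,W)$; it can be checked symbolically. A quick sketch (assuming sympy, with arbitrary scalar fields standing in for $\rho$, $W$ and the velocity components):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
rho = sp.Function('rho')(*X)
W = sp.Function('W')(*X)
V = [sp.Function('V%s' % c)(*X) for c in 'xyz']

grad = lambda f: [sp.diff(f, xi) for xi in X]
div = lambda F: sum(sp.diff(Fi, xi) for Fi, xi in zip(F, X))
dot = lambda A, B: sum(a*b for a, b in zip(A, B))

# rho V.grad W = div(rho V W) - W V.grad rho - rho W div V  (product rule)
lhs = rho*dot(V, grad(W))
rhs = div([rho*Vi*W for Vi in V]) - W*dot(V, grad(rho)) - rho*W*div(V)
print(sp.simplify(lhs - rhs))   # 0
```

The passage to the $\partial\rho/\partial t$ form then uses the continuity equation, and both extra terms drop for a homogeneous incompressible medium, as stated in the text.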
~\\ {\textbf{{A.3.~~On the validity of our neglect of the nondissipative tidal potential}}}\\ The right-hand side of equation (\ref{eq:phobosXYZ}) consists of the principal part, $\;-\,{G\,(M_o\,+\,m)\,\mbox{{\boldmath $\vec r$}}}/{r^3}\,$, and tidal perturbation terms. These are the second and third terms from the right-hand side of (\ref{18}), terms that bear a dependence on $\,\mbox{{\boldmath $\vec f$}}\,$ and, therefore, on $\,\Delta t\,$. The first term from the right-hand side of (\ref{18}) lacks such a dependence and, therefore, is omitted in (\ref{eq:phobosXYZ}). The term was dropped because it would provide no secular input into the history of the semi-major axis. Here we shall provide a proof of this statement. The omitted term corresponds to a potential (Mignard 1980): \begin{eqnarray} U_0\;=\;\frac{k_2\;(M_o\,+\,m)\;G\;m\;R^5}{2\;M_o\;r^6}\;\;\;. \end{eqnarray} From the physical standpoint, $\,U_0\,$ models the effect of the tidal bulges, assuming their direction to coincide with that toward the tide-raising satellite. This potential entails no angular-momentum exchange, and therefore yields no secular effect on the semi-major axis. To prove this, let us decompose this potential into a series over the powers of $\,e\,$. This will require deriving the expression for $\,\left( {\textstyle{a}}/{\textstyle{r}}\right)^6\,$.
Starting out with the well-known development \begin{eqnarray} \nonumber \frac{a}{r}&=&1+\sum_{p=1}^\infty 2J_p(pe)\cos(pM)\\ \nonumber\\ \nonumber &=&1+e\cos(M)+e^2\cos(2M)+\left(\frac{9}{8}\cos(3M)-\frac{1}{8}\cos(M)\right)e^3\\ \nonumber\\ &+&\left(-\frac{1}{3}\cos(2M)+\frac{4}{3}\cos(4M)\right)e^4+\left(-\frac{81}{128} \cos(3M)+\frac{1}{192}\cos(M)+\frac{625}{384}\cos(5M)\right)e^5 \nonumber\\ \nonumber\\ &+&\left(\frac{81}{40}\cos(6M)+\frac{1}{24}\cos(2M)-\frac{16}{15}\cos(4M)\right) e^6+...~~~, \end{eqnarray} one can arrive at the following expansion: \begin{eqnarray} \nonumber \left(\frac{a}{r}\right)^6&=&1+6e\cos(M)+\left(\frac{15}{2}+\frac{27}{2}\cos(2M) \right)e^2+\left(\frac{107}{4}\cos(3M)+\frac{117}{4}\cos(M)\right)e^3\\ \nonumber\\ \nonumber &+&\left(\frac{197}{4}\cos(4M)+\frac{101}{2}\cos(2M)+\frac{105}{4}\right)e^4 \\ \nonumber\\ \nonumber &+&\left(\frac{5157}{64}\cos(3M)+\frac{5529}{64}\cos(5M)+\frac{2721}{32}\cos(M) \right)e^5\\ \nonumber\\ &+&\left(\frac{4839}{40}\cos(4M)+129\cos(2M)+\frac{525}{8}+\frac{732}{5}\cos(6M) \right)e^6+... ~~~, \end{eqnarray} whose average over the mean anomaly reads: \begin{eqnarray} \langle\;\,\left(\frac{a}{r}\right)^6\;\rangle&=&1+\frac{15}{2}e^2+\frac{105}{4} e^4+\frac{525}{8}e^6+...~~~. \end{eqnarray} Hence, the averaged potential will become: \begin{eqnarray} \langle\,U_0\,\rangle=\frac{k_2(M_o+m)GmR^5}{2M_o a^6}\left(1+\frac{15}{2}e^2+ \frac{105}{4}e^4+\frac{525}{8}e^6+...\right)~~~. \end{eqnarray} In the case of Phobos, the terms of order $\,O(e^2)\,$ may, in the first approximation, be neglected. This means that out of the six Lagrange-type planetary equations the first five will, in the first order of $\,e\,$, stay unperturbed, and therefore the elements $\,a\,,\,e\,,\omega\,,\,{\it{i}}\,,\,\Omega \,$ will, in the first order over $\,e\,$, remain unchanged. The Lagrange equation for the longitude will be the only one influenced by $\,U_0\,$.
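The mean-anomaly average can be checked independently: with $r = a\,(1 - e\cos E)$ and $dM = (r/a)\,dE$, one has $\langle\,(a/r)^6\,\rangle = \frac{1}{2\pi}\int_0^{2\pi}(1 - e\cos E)^{-5}\,dE$, which can be expanded and integrated term by term. A symbolic sketch (assuming sympy):

```python
import sympy as sp

e, E = sp.symbols('e E')

# <(a/r)^6> over the mean anomaly M: with r = a(1 - e cos E) and
# dM = (r/a) dE, the average equals (1/2pi) Int_0^{2pi} (1 - e cos E)^(-5) dE.
integrand = sp.series((1 - e*sp.cos(E))**(-5), e, 0, 7).removeO()
avg = sp.expand(sp.integrate(integrand, (E, 0, 2*sp.pi))/(2*sp.pi))

expected = 1 + sp.Rational(15, 2)*e**2 + sp.Rational(105, 4)*e**4 \
             + sp.Rational(525, 8)*e**6
print(sp.simplify(avg - expected))   # 0
```

This reproduces the coefficients $\frac{15}{2}$, $\frac{105}{4}$ and $\frac{525}{8}$ of the averaged expansion through order $e^6$.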
That equation will assume the form: \begin{eqnarray} \frac{dL}{dt}\;=\;n\;-\;\frac{2}{n\;a}\;\,\frac{\partial\;\langle\,U_0\,\rangle}{ \partial \,a}\;\;\;, \end{eqnarray} which gives \begin{eqnarray} L\;=\;n\;t\;+\;\frac{6\;k_2\;(M_o\,+\,m)\;G\;m\;R^5}{M_o\;n\;a^8}\;t\;\;\;. \end{eqnarray} We see that, in the first order of $\,e\,$, the only secular effect stemming from the potential $\,U_0\,$ is a linear-in-time evolution of the longitude.
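The coefficient of the secular term follows from a one-line differentiation of $\,\langle\,U_0\,\rangle\,$; a symbolic sketch (assuming sympy):

```python
import sympy as sp

k2, G, Mo, m, R, a, n = sp.symbols('k_2 G M_o m R a n', positive=True)

# Averaged potential to zeroth order in e (the e^2 and higher terms
# are dropped, as in the text)
U0 = k2*(Mo + m)*G*m*R**5/(2*Mo*a**6)

# Lagrange-type equation for the mean longitude
dLdt = n - (2/(n*a))*sp.diff(U0, a)

drift = sp.simplify(dLdt - n)
expected = 6*k2*(Mo + m)*G*m*R**5/(Mo*n*a**8)
print(sp.simplify(drift - expected))   # 0
```

Integrating the constant drift over time yields exactly the linear term in the expression for $L$ above.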
\section{Introduction and background} There are many approaches to the quantization of gravity--without matter couplings--in 3 (2 space, 1 time) dimensions. We shall start with the Einstein action with nonzero cosmological constant \begin{equation} I_{\hbox{\scriptsize \it Ein}} = \int\!d^3x \sqrt{-{}^{\scriptscriptstyle(3)}\!g}\> ({}^{\scriptscriptstyle(3)}\!R - 2\Lambda) \ . \label{bb1} \end{equation} In the first-order formalism (see Refs.~\refcite{Achu}--\refcite{Witt} and Refs.~\refcite{NR0}--\refcite{NRZ}) this action is written as \begin{equation} I_{\hbox{\scriptsize \it Ein}} = \int (d\omega^{ab}-{\omega^a}_d \wedge\omega^{db} +{\frac{\Lambda}{3}} e^a\wedge e^b)\wedge e^c\,\epsilon_{abc} , \qquad a,b,c=0,1,2, \label{b2} \end{equation} where the triad $e^a$ is related to the metric through \begin{equation} g_{\mu \nu} = {e^a}_{\mu} {e^b}_{\nu} \eta_{ab} , \label{bc0} \end{equation} and the (2+1)-dimensional Ricci curvature and torsion are \begin{equation} R^{ab} = d\omega^{ab} - \omega^{ac}\wedge\omega_{c}{}^{b} ,\quad R^{a} = de^a -\omega^{ab}\wedge e_b \ . \label{bc2} \end{equation} For $\Lambda\ne0$, this action can be written (up to a total derivative) in the Chern-Simons form \begin{equation} I_{\hbox{\scriptsize CS}} = - \frac{\alpha}{4} \int(d\omega^{AB}-\frac{2}{3}\omega^A{}_E\wedge\omega^{EB}) \wedge\omega^{CD} \epsilon_{ABCD} ,\qquad A,B,C,D = 0,1,2,3, \label{b3} \end{equation} with an (anti-)de Sitter spin connection $\omega^{AB}$ \begin{equation} {\omega^A}_B=\left( \begin{array}{cc} \omega^a{}_b& \frac{k e^a}{\alpha}\\[1ex] -\frac{e^b}{\alpha} & 0 \end{array} \right) , \label{bc1} \end{equation} where the tangent space metric is $\eta_{AB}=(-1,1,1,k)$, and $k$ is the sign of $\Lambda$, with $\Lambda = k\alpha^{-2}$. In Eq. \rref{b3} the Levi-Civita density is $\epsilon_{abc3}=-\epsilon_{abc}$, and in \rref{bc1} the triads appear as $e^a=\alpha\omega^{a3}$.
The corresponding curvature two-form $R^{AB}=d\omega^{AB}-\omega^{AC}\wedge\omega_C{}^B$ has components $R^{ab}+\Lambda e^a \wedge e^b$ and $R^{a3}=\frac{R^a}{\alpha}$, and the field equations derived from the action \rref{b3} are simply $R^{AB}=0$, implying that the torsion vanishes everywhere and that the curvature $R^{ab}$ is constant. This can alternatively be seen from the (2+1)-dimensional splitting of spacetime, where the action \rref{b3} decomposes as \begin{equation} I_{\hbox{\scriptsize CS}} = \frac{\alpha}{4} \int\!dt\int\!d^2x\,\epsilon^{ij}\epsilon_{ABCD}\, (\omega^{CD}{}_j\,{\dot{\omega}}^{AB}{}_i-\omega^{AB}{}_0 R^{CD}{}_{ij}) \label{b5} \end{equation} (with $\epsilon^{0ij} = -\epsilon^{ij}$), from which the constraints are \begin{equation} R^{AB}{}_{ij}=0. \label{b8} \end{equation} The constraints \rref{b8} imply that the (anti-)de Sitter connection $\omega^{AB}{}_i$ is flat. It can therefore be written locally in terms of an $\hbox{SO}(3,1)$- ($\Lambda > 0$) or $\hbox{SO}(2,2)$-valued ($\Lambda < 0$) zero-form $\psi^{AB}$ as $d\psi^{AB}=\omega^{AC}\, \psi_C{}^B$. It is actually more convenient to use the spinor groups $\hbox{SL}(2,\mathbb{R})\otimes \hbox{SL}(2,\mathbb{R})$ (for $\hbox{SO}(2,2)$) and $\hbox{SL}(2,\mathbb{C})$ (for $\hbox{SO}(3,1)$). (Details of the spinor group decomposition can be found in Ref.~\refcite{NRZ}.) Define the one-form \begin{equation} \Delta(x) = \Delta_{i}(x)dx^{i} = \frac{1}{4}\omega^{AB}(x)\gamma_{AB} \label{bc4} \end{equation} where $\gamma_{AB} = \frac{1}{2} \left[\gamma_A,\gamma_B \right]$ and the $\gamma_A$ are Dirac matrices. Eq. \rref{b8} now implies that $d\Delta-\Delta\wedge\Delta=0$. The corresponding local or ``pure gauge'' expression for $\Delta$ is \begin{equation} dS(x)=\Delta(x)S(x) , \label{bc5} \end{equation} where the $S$ are multivalued $\hbox{SL}(2,\mathbb{R})$ or $\hbox{SL}(2,\mathbb{C})$ matrices.
The above discussion means that the $\hbox{SO}(3,1)$ or $\hbox{SO}(2,2)$ - valued $\psi^{AB}$, or the $\hbox{SL}(2,\mathbb{R})$ or $\hbox{SL}(2,\mathbb{C})$ matrices $S$ can be interpreted as holonomies, when the connections $\omega^{AB}$ or $\Delta$ are integrated along closed paths (loops) on the two--dimensional surface $\Sigma$. The flatness of the connection $\Delta$ implies that each $S[\gamma]$ depends only on the homotopy class of $\gamma$. Further, the matrices $S$ are not gauge invariant, but are gauge covariant i.e. under a gauge transformation (a change of base point), they transform by conjugation. The Einstein action can be used to gain further information about these holonomies. For example, the Poisson brackets of the $\omega^{AB}$ can be read off from \rref{b5}: on a $t=\hbox{const.}$ surface $\Sigma$, \begin{equation} \{\omega^{AB}{}_{i}(x),\omega^{CD}{}_{j}(y)\} =\frac{k}{2\alpha}\epsilon_{ij} \epsilon^{ABCD} \delta^{2}(x-y). \label{b6} \end{equation} and the spinor version is \begin{eqnarray} \{\Delta_{i}^{\pm}(x), \Delta_{j}^{\pm}(y)\} &=& \pm \frac{i}{2\alpha\sqrt{k}}\epsilon_{ij}\sigma^{m}\otimes\sigma^{m}\delta^{2}(x-y) \nonumber\\ \{\Delta_{i}^{+}(x), \Delta_{j}^{-}(y)\} &=& 0 , \label {d1} \end{eqnarray} where the $\sigma^{m}$ are Pauli matrices, the $\pm$ refer to the decomposition of the $4\times 4$ representations of $\Delta(x)$, $S(x)$ into $2\times 2$ irreducible parts (see Ref.~\refcite{NRZ}) and $\sqrt k$ means $+1$ for $k=1$ and $+i$ for $k=-1$. The Poisson brackets Eqs. \rref{b6} and \rref{d1}, when integrated along loops $\gamma, \sigma$ (with $\gamma,\sigma\!\in\!\pi_{1}(\Sigma,x_0)$) yield the Poisson brackets of the components of the holonomies $\psi^{AB}$ \begin{equation} \{{\psi^{AB}}_{\gamma}, {\psi^{CD}}_{\sigma}\} = -k \epsilon^{ABCD}. \label{p1} \end{equation} and a similar (complicated) expression for the $S^\pm$. 
The matrices $S^{\pm}[\gamma]$ thus furnish a representation of $\pi_{1}(\Sigma,x_0)$ in $\hbox{SL}(2,\mathbb{R})$ or $\hbox{SL}(2,\mathbb{C})$. Under a gauge transformation the $S^\pm$ transform by conjugation, so their traces provide an (overcomplete) set of gauge-invariant Wilson loop variables. The classical Poisson brackets for these trace variables were calculated by hand for the genus $1$ and genus $2$ cases, and then generalized and quantized in Ref.~\refcite{NR1}. A closely related quantum algebra was calculated in Ref.~\refcite{ch} using the technique of "fat graphs". The classical Poisson bracket algebra also appears (see Ref. \refcite{ug}) in the study of Stokes matrices (monodromy data) which relate the solutions of matrix differential equations. For genus 1 the Poisson algebra is \begin{equation} \{R_1^{\pm},R_2^{\pm}\}=\mp\frac{i}{4\alpha\sqrt k}(R_{12}^{\pm}- R_1^{\pm}R_2^{\pm}) \quad \hbox{\it and cyclical permutations}, \label{b7} \end{equation} where $R^{\pm}= \frac{1}{2} \hbox{Tr}S^{\pm}$. Here the subscripts $1$ and $2$ refer to the two independent intersecting circumferences $\gamma_1$, $\gamma_2$ on $\Sigma$ with intersection number $+1$,\footnote{Paths with intersection number 0, $\pm$ 1 are sufficient to characterize the holonomy algebra for genus $1$. For $g>1$, one must in general consider paths with two or more intersections, for which the brackets \rref{b7} are more complicated; see Refs.~\refcite{NR3,NR4}.} while the third traced holonomy, $R^\pm_{12}$, corresponds to the path $\gamma_1\cdot\gamma_2$, which has intersection number $-1$ with $\gamma_1$ and $+1$ with $\gamma_2$. Classically, the six traced holonomies $R^\pm_{1,2,12}$ provide an overcomplete description of the spacetime geometry of $\mathbb{R}\!\times\!T^2$. 
Consider the cubic polynomials \begin{eqnarray} F^{\pm} &=& 1-(R_1^{\pm})^2-(R_2^{\pm})^2-(R_{12}^{\pm})^2 + 2 R_1^{\pm}R_2^{\pm}R_{12}^{\pm} \nonumber\\ &=& \frac{1}{4}\, \hbox{Tr}\left(I - S^{\pm}[\gamma_1]S^{\pm}[\gamma_2]S^{\pm}[\gamma_1^{-1}]S^{\pm}[\gamma_2^{-1}]\right), \label{b9} \end{eqnarray} where the last equality follows from the identity $$ A + A^{-1} = I\,\hbox{Tr}A $$ for $2 \times 2$ matrices $A$ with determinant $1$. These polynomials have vanishing Poisson brackets with all of the traces $R_a^{\pm}$, and are cyclically symmetric in the $R_a^{\pm}$. The $F^\pm$ vanish classically by the $\hbox{SL}(2,\mathbb{R})$ or $\hbox{SL}(2,\mathbb{C})$ Mandelstam identities, which can be viewed as the application of the fundamental relation of $\pi_{1}$ of the torus \begin{equation} \gamma_1^{\vphantom{-1}}\cdot\gamma_2^{\vphantom{-1}}\cdot\gamma_1^{-1}\cdot\gamma_2^{-1} = {\mathbb{I}} \label{gp} \end{equation} to the representations $S^{\pm}$ occurring in the last line of \rref{b9}. In this approach, the constraints have been solved exactly. There is no Hamiltonian, and no time development. This formalism describes either initial data for some (unspecified) choice of time, or the time-independent spacetime geometry. We can quantize the classical algebra \rref{b7} by first replacing the classical Poisson brackets $\{\,,\,\}$ with commutators $[\,,\,]$, with the rule \begin{equation} [x,y]= xy-yx =i \hbar \{x,y\} ; \end{equation} and second, on the right hand side (r.h.s.) of \rref{b7}, replacing the product with the symmetrized product, \begin{equation} xy \to \frac{1}{2} (xy +yx) . \end{equation} The resulting operator algebra is given by \begin{equation} \hat R_1^{\pm}\hat R_2^{\pm}e^{\pm i \theta} - \hat R_2^{\pm}\hat R_1^{\pm} e^{\mp i \theta}= \pm 2i\sin\theta\, \hat R_{12}^{\pm} \quad \hbox{\it and cyclical permutations} \label{za} \end{equation} with $\tan\theta= {i \sqrt k\hbar} /{8\alpha}$.
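The trace identity behind \rref{b9} is the classical Fricke relation for $\hbox{SL}(2)$ matrices, $\hbox{Tr}(S_1S_2S_1^{-1}S_2^{-1}) = \hbox{Tr}^2S_1+\hbox{Tr}^2S_2+\hbox{Tr}^2(S_1S_2)-\hbox{Tr}S_1\,\hbox{Tr}S_2\,\hbox{Tr}(S_1S_2)-2$; in terms of $R_a=\frac{1}{2}\hbox{Tr}\,S_a$ it gives $F=\frac{1}{4}\hbox{Tr}(I-S_1S_2S_1^{-1}S_2^{-1})$, which fixes the normalization. A numerical spot-check with random $\hbox{SL}(2,\mathbb{R})$ pairs (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2():
    """Random real 2x2 matrix rescaled to unit determinant."""
    while True:
        M = rng.normal(size=(2, 2))
        if np.linalg.det(M) > 0.1:      # keep well-conditioned, positive det
            return M/np.sqrt(np.linalg.det(M))

for _ in range(100):
    S1, S2 = random_sl2(), random_sl2()
    R1, R2 = 0.5*np.trace(S1), 0.5*np.trace(S2)
    R12 = 0.5*np.trace(S1 @ S2)
    F = 1 - R1**2 - R2**2 - R12**2 + 2*R1*R2*R12
    comm = S1 @ S2 @ np.linalg.inv(S1) @ np.linalg.inv(S2)
    assert np.isclose(F, 0.25*np.trace(np.eye(2) - comm))
print("Fricke identity verified on 100 random SL(2,R) pairs")
```

In particular, $F$ vanishes exactly when the commutator holonomy is the identity, i.e. when the torus relation \rref{gp} is represented faithfully.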
Note that for $\Lambda\!<\!0$, $k=-1$, and $\theta$ is real, while for $\Lambda\!>\!0$, $k=1$, and $\theta$ is pure imaginary. The algebra \rref{za} is not a Lie algebra, but it is related to the Lie algebra of the quantum group $\hbox{SU}(2)_q$ (see Refs.~\refcite{NRZ,su}), where $q=\exp{4i\theta}$, and where the cyclically invariant $q$-Casimir is the quantum analog of the cubic polynomial \rref{b9}, \begin{equation} \hat F^{\pm}(\theta) = {\cos}^2\theta- e^{\pm 2i\theta} \left( (\hat R_1^\pm)^2+ (\hat R_{12}^\pm)^2\right) -e^{\mp 2i\theta} (\hat R_2^\pm)^2 + 2e^{\pm i\theta}\cos\theta \hat R_1^\pm \hat R_2^\pm \hat R_{12}^\pm . \end{equation} For $g>1$ it was shown in Ref.~\refcite{gav} that the algebra calculated in Ref.~\refcite{NR1} is isomorphic to a non--standard deformation of $\hbox{SO}(2g+2)_q$. The representations of the algebra \rref{za} have been studied e.g. in Ref.~\refcite{NRZ}. Here we choose to represent each $(\pm)$ copy of the $\hat R_a^{\pm}$ as \begin{equation} \hat R_a = \frac{1}{2} (\hat A_a + \hat A_a^{-1}) \end{equation} where, from \rref{za}, the $\hat A_a$ must satisfy (here we discuss the $(+)$ algebra; the $(-)$ algebra has $q^{-1}$ rather than $q$) \begin{equation} \hat A_1 \hat A_2 = q \hat A_2 \hat A_1 \quad \hbox{\it and cyclical permutations} \label{fund} \end{equation} Relations of the type \rref{fund} are called quantum plane relations, or $q$--commutators, or Weyl pair relations. Returning to the {\it untraced matrices} $S$, one notes that, writing them in diagonal form as \begin{equation} S(\gamma_i) = U_i = \left(\begin{array}{clcr}A_i & 0\\0 & A_i^{-1}\end{array}\right) \quad i=1,2 \end{equation} it follows that the (now {\it quantum}) matrices $\hat U_1, \hat U_2$ must satisfy, {\it by both matrix and operator multiplication}, the $q$--commutation relation \begin{equation} \hat U_1 \hat U_2 = q \hat U_2 \hat U_1 \label{fund2} \end{equation} i.e. they form a matrix--valued Weyl pair.
Equation \rref{fund2} can be understood as a deformation of Eq. \rref{gp}. The present authors decided to study the quantum matrices $\hat U_1, \hat U_2$ which satisfy \rref{fund2}. Consider the diagonal representation \begin{equation} \hat U_i = \left(\begin{array}{clcr}e^{{\hat r_i}}& 0 \\0& e^{-{\hat r_i}}\end{array}\right) = e^{{\hat r_i}\sigma_3} \quad i=1,2 \label{hol} \end{equation} where $\sigma_3$ is a Pauli matrix. From the identity \begin{equation} e^{\hat X} e^{\hat Y}= e^{\hat Y} e^{\hat X} e^{[ \hat X, \hat Y ]}, \label{bch} \end{equation} valid when $[ \hat X, \hat Y ]$ is a $c$--number, it follows that the quantum parameters $\hat r_1, \hat r_2$ (also used in Ref.~\refcite{cn}) satisfy the commutator \begin{equation} [\hat r_1, \hat r_2] = - \frac{i\hbar \sqrt{-\Lambda}}{4}. \label{comm} \end{equation} We note that in order to make the connection with $2+1$--dimensional gravity it is necessary to consider {\it both} $\hbox{SL}(2,\mathbb{R})$ sectors. The mathematical properties of just one sector have been studied in Ref.~\refcite{NP3}. In Section \ref{qmp} a brief review of quantum matrices is given, whereas Section \ref{hom} discusses quantum holonomy matrices for homotopic paths, and shows how they are related by the signed area between the two paths. Section \ref{go} uses these concepts to quantize a classical bracket due to Goldman (Ref.~\refcite{gol}), thus obtaining commutators between intersecting loops on surfaces. \section{Quantum Matrix Pairs\label{qmp}} Quantum matrix pairs - namely the quantum matrices $\hat U_1, \hat U_2$ which satisfy \rref{fund2} - may, as mathematical objects, be thought of as a simultaneous generalization of two familiar notions of ``quantum mathematics'', namely the quantum plane and quantum groups.
Briefly, the quantum plane is described by two non-commuting coordinates $x$ and $y$ satisfying the relation \begin{equation} x y = q y x \label{qplane} \end{equation} whereas, for an example of a quantum group, consider the $2\times 2$ matrices of the form \begin{equation} U=\left( \begin{array}{cc}a &b\\ c&d\end{array} \right) \label{qmatrix} \end{equation} with non-commuting entries satisfying \begin{eqnarray} ab=qba; \quad &ac=qca;\quad &ad-da=(q-q^{-1})bc; \nonumber\\ bc=cb; \quad &bd=qdb; \quad &cd=qdc. \label{qgrel} \end{eqnarray} A good description of matrices of the type \rref{qmatrix} whose entries satisfy \rref{qgrel} can be found in Ref.~\refcite{Vokos}, but for our purposes perhaps the most important property is that the matrix $U^n$ is another matrix {\it of the same type}, with $q$ substituted by $q^n$. These two concepts, the quantum plane and quantum groups, are not unrelated. Consider the column vector whose entries are the non-commuting coordinates $x$ and $y$. It can be checked that the components of the new column vector \begin{equation} \left(\begin{array}{c}{x^{\prime}}\\{y^{\prime}}\end{array}\right) = U \left(\begin{array}{c}{x}\\{y}\end{array}\right) \end{equation} also satisfy \begin{equation} x^{\prime} y^{\prime} = q y^{\prime} x^{\prime} . \end{equation} Quantum matrices in both the diagonal and upper--triangular sectors satisfying the fundamental relation \rref{fund2} have been studied in Refs.~\refcite{NP1,NP2}. They combine the preservation of internal relations under multiplication, a quantum-group-like feature, with the fundamental $q$-commutation relation which holds between the two matrices. The non-trivial internal commutation relations arose in the following way: in the upper-triangular sector, it was found that trivial internal commutation relations for each matrix were not compatible with the fundamental relation \rref{fund}, in that the resulting products no longer had commuting entries.
However it was possible to determine patterns of non-trivial internal relations which are preserved under matrix multiplication. For example, consider the pair of matrices satisfying \rref{fund2} \begin{equation} U_i=\left( \begin{array}{cc} \alpha_i &\beta_i\\ 0&{\alpha_i}^{-1}\end{array} \right), \,\,\,\,i=1,2 \label{Ui} \end{equation} It can be checked that, apart from the mutual relations \begin{equation} \alpha_1\alpha_2=q\alpha_2\alpha_1, \quad \alpha_1\beta_2=q\beta_2{\alpha_1}^{-1}, \quad \alpha_2\beta_1=q^{-1}\beta_1{\alpha_2}^{-1}. \label{mut} \end{equation} which guarantee \rref{fund2}, the entries must also satisfy the following internal relations \begin{equation} \alpha_i\beta_i=\beta_i{\alpha_i}^{-1},\quad i=1,2. \label{int} \end{equation} In Ref.~\refcite{NP2} it was shown that indeed products of powers of these matrices have the same structure of internal relations, and also taking two different products gives rise to new quantum matrix pairs of the same type. However, the internal relations \rref{int} differ in structure from the relations \rref{qgrel}, and, moreover, do not simplify in the limit $q\rightarrow 1$, which distinguishes them from e.g. Majid's braided matrices (see Ref. \refcite{maj}). \section{Homotopy and signed area\label{hom}} Consider quantum holonomy matrices simultaneously conjugated into diagonal form (conjugating both matrices by the same matrix $S\in SL(2,\mathbb{R})$ ) (see Eq. \rref{hol} which for convenience is repeated here) $$ \hat U_i=\left(\begin{array}{clcr}e^{{\hat r_i}}& 0 \\0& e^{- {\hat r_i}} \end{array}\right), \quad i=1,2. $$ where the $\hat r_i, i = 1,2$ satisfy \rref{comm}. They can be thought of as arising from constant connections $\hat A$ as in Ref.~\refcite{mik} \begin{equation} \hat U_i = \exp \int_{\gamma_i} \hat A, \quad \hat A = (\hat r_1 dx + \hat r_2 dy) \left(\begin{array}{clcr} 1& 0 \\0& -1 \end{array}\right). 
\label{hol1} \end{equation} where $x,y$ are coordinates, with period 1, on the torus $T^2= {\mathbb{R}^2}_{(x,y)}/\mathbb{Z}^2$, and $y$ is constant along $\gamma_1$ and $x$ is constant along $\gamma_2$. We have investigated constant matrix--valued connections which generalize the connections \rref{hol1}, and applied them to a much larger class of loops, extending the assignments $\gamma_1\mapsto U_1,\,\gamma_2 \mapsto U_2$ by using the quantum connection \rref{hol1} in the diagonal case. The larger class of loops is represented by piecewise linear (PL) paths between integer points in $\mathbb{R}^2$, using a representation of $T^2$ as $\mathbb{R}^2/\mathbb{Z}^2$. We show that the matrices for homotopic paths are related by a phase expressed in terms of the signed area between the paths. This leads to a definition of a $q$--deformed representation of the fundamental group, where signed area phases relate the quantum matrices assigned to homotopic loops. Consider piecewise-linear (PL) paths on the plane $\mathbb{R}^2$ starting at the origin $(0,0)$ and ending at an integer point $(m,n), \, m,n\in \mathbb{Z}$. Under the identification $T^2=\mathbb{R}^2/\mathbb{Z}^2$, these paths give rise to closed loops on $T^2$. The integers $m$ and $n$ are the winding numbers of the loop in the $\gamma_1$ and $\gamma_2$ directions respectively, and two loops on $T^2$ are homotopic to each other if and only if the corresponding paths in $\mathbb{R}^2$ end at the same point $(m,n)$. Suppose a PL path $p$ consists of $N$ straight segments $p_1, \dots, p_N$. Any such segment $p_i$ may be translated to start at the origin and end at $(m,n)\in \mathbb{R}^2$ (here we use the fact that the connection $A$ is invariant under spatial translations).
Then we assign to each segment $p_i$ the quantum matrix \begin{equation} U_{(m,n)} = \exp \int_{p_i} A = \exp \left((mr_1 + nr_2)\sigma_3 \right) = \left(\begin{array}{cc} e^{mr_1+nr_2}& 0 \\0& e^{-mr_1-nr_2} \end{array}\right)\label{phi} \end{equation} where $\sigma_3 = \left(\begin{array}{clcr} 1& 0 \\0& -1 \end{array}\right)$, and to the path $p$ the product matrix \begin{equation} p\mapsto U_p := \prod _{i=1}^N \exp \int_{p_i} A. \label{Up} \end{equation} This assignment is obviously multiplicative under multiplication of paths, $(p,p')\mapsto p\circ p'$, which corresponds to translating $p'$ to start at the endpoint of $p$ and concatenating. Now consider the {\it straight} path from $(0,0)$ to $(m,n)$. For example, with $U_1 = U_{(1,0)}$, $U_2 = U_{(0,1)}$ these correctly obey the fundamental relation \rref{fund2}. Using Eq. \rref{bch}, this generalizes to arbitrary straight paths: \begin{equation} U_{(m,n)} U_{(s,t)} = q^{mt-ns} U_{(s,t)} U_{(m,n)}, \label{uu1} \end{equation} where $U_{(m,n)}$ is given by Eq. \rref{phi}. Equation \rref{uu1} expresses the relation between the quantum matrices assigned to the two paths going from $(0,0)$ to $(m+s,n+t)$ in two different ways around the parallelogram generated by $(m,n)$ and $(s,t)$. It is also straightforward to show a triangle equation \begin{equation} U_{(m,n)} U_{(s,t)} = q^{(mt-ns)/2} U_{(m+s,n+t)}, \label{tri} \end{equation} which can be derived from the identity $$ e^{\hat X} e^{\hat Y}= e^{\hat X + \hat Y} e^{\frac{[\hat X, \hat Y ]}{2}}, $$ which follows from \rref{bch}. Note that in both cases the exponent of $q$ relating the two homotopic paths is equal to the {\em signed area} between the path $p$ on the left hand side (l.h.s.) and the path $p'$ on the r.h.s., i.e. equal to the area between $p$ and $p'$ when the PL loop consisting of $p$ followed by the inverse of $p'$ is oriented anticlockwise, and equal to minus the area between $p$ and $p'$ when it is oriented clockwise.
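At a root of unity $q = e^{2\pi i/N}$ the relations \rref{fund2}, \rref{uu1} and \rref{tri} admit a concrete finite-dimensional realization by the standard clock and shift matrices, with the Weyl-ordered assignment $U_{(m,n)} = q^{-mn/2}\,C^m S^n$ standing in for the diagonal operators \rref{hol}. A numerical sketch (assuming numpy):

```python
import numpy as np

N = 5                               # q is then a primitive N-th root of unity
qh = np.exp(1j*np.pi/N)             # q^(1/2)
q = qh**2

C = np.diag(q**np.arange(N))        # clock matrix
S = np.roll(np.eye(N), 1, axis=0)   # shift matrix; together C S = q S C

def U(m, n):
    """Weyl-ordered quantum matrix for the straight path (m, n)."""
    return qh**(-m*n)*(np.linalg.matrix_power(C, m) @
                       np.linalg.matrix_power(S, n))

# Fundamental relation (fund2): U1 U2 = q U2 U1
assert np.allclose(U(1, 0) @ U(0, 1), q*U(0, 1) @ U(1, 0))

m, n, s, t = 2, 1, 1, -1            # sample pair of straight paths

# Parallelogram relation (uu1): exponent = signed area m t - n s
assert np.allclose(U(m, n) @ U(s, t), q**(m*t - n*s)*U(s, t) @ U(m, n))

# Triangle relation (tri): exponent = half the signed area
assert np.allclose(U(m, n) @ U(s, t), qh**(m*t - n*s)*U(m + s, n + t))
print("Weyl-pair relations verified for N =", N)
```

The $q^{-mn/2}$ prefactor is exactly the triangle phase \rref{tri} between the two-leg path $(m,0)$ followed by $(0,n)$ and the straight diagonal $(m,n)$.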
The signed area for the parallelogram is given by $\det \left(\begin{array}{cc} m& s\\n& t \end{array}\right) =mt-ns$ and for the triangle by $\frac{1}{2} (mt-ns)$. The discussion can be generalized to arbitrary non-self-intersecting PL paths $p$ and $p'$ which connect $(0,0)$ to the same integer point $(m,n)$ in $\mathbb{R}^2$. These two paths may intersect each other several times, either transversally, or when they coincide along a shared segment. Together they bound a finite number of finite regions in the $xy$-plane. Now choose a triangulation of a compact region of $\mathbb{R}^2$ containing and compatible with the paths $p,\,p'$, in the sense that each segment of the paths is made up of one or more edges of the triangulation. We take all the triangles in the triangulation to be positively oriented, in the sense that their boundary is oriented anticlockwise in $\mathbb{R}^2$. Since $p$ and $p'$ are homotopic, they are homologous, and because $H_2$ of the plane is trivial, there is a unique $2$-chain $c(p,p')$ such that $\partial c(p,p')=p-p'$. Let this chain be given by \begin{equation} c(p,p') = \sum_{\alpha \in R}n_\alpha t_\alpha, \label{2chain} \end{equation} where $t_\alpha$ is a triangle of the triangulation indexed by $\alpha$ in the index set $R$, and $n_\alpha = \pm 1$ or $0$. Note that only triangles from the finite regions enclosed by $p$ and $p'$ can belong to the support of the $2$-chain, and that the coefficient of any two triangles in the same finite region is the same. The {\em signed area} between $p$ and $p'$ is \begin{equation} S(p,p') = \sum_{\alpha \in R} n_\alpha A(t_\alpha), \label{spp}\end{equation} where $A(t_\alpha)$ is the area of the triangle $t_\alpha$. This is clearly independent of the choice of triangulation of $\mathbb{R}^2$ compatible with $p,\,p'$, since the sum of the areas of the triangles inside each enclosed region is the area of that region, whatever the triangulation.
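For concrete PL paths the signed area \rref{spp} is conveniently computed by the shoelace formula applied to the closed loop consisting of $p$ followed by $p'$ reversed; a small pure-Python sketch:

```python
def signed_area(p, pp):
    """Signed area between PL paths p and pp sharing both endpoints:
    shoelace formula on the loop 'p followed by pp reversed'.
    Positive when that loop runs anticlockwise."""
    loop = list(p) + list(reversed(pp))[1:]
    s = 0.0
    for (x1, y1), (x2, y2) in zip(loop, loop[1:] + loop[:1]):
        s += x1*y2 - x2*y1
    return s/2

m, n, s, t = 2, 1, 1, 3
p  = [(0, 0), (m, n), (m + s, n + t)]        # (m,n) first, then (s,t)
pp = [(0, 0), (s, t), (m + s, n + t)]        # (s,t) first, then (m,n)

print(signed_area(p, pp))                         # m*t - n*s = 5.0
print(signed_area(p, [(0, 0), (m + s, n + t)]))   # (m*t - n*s)/2 = 2.5
```

The two printed values reproduce the parallelogram and triangle areas $mt-ns$ and $\frac{1}{2}(mt-ns)$ quoted above.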
It follows that \begin{equation} U_p = q^{S(p,p')}U_{p'}. \end{equation} \section{Goldman bracket\label{go}} There is a classical bracket due to Goldman (Ref.~\refcite{gol}) for functions $T(\gamma)={\rm tr}\, U_\gamma$ defined on homotopy classes of loops $\gamma$, which for $U_\gamma \in SL(2,\mathbb{R})$ is: \begin{equation} \{T(\gamma_1), T(\gamma_2)\} = \sum_{S \in \gamma_1 \sharp \gamma_2} \epsilon(\gamma_1,\gamma_2,S)(T(\gamma_1S\gamma_2) - T(\gamma_1S\gamma_2^{-1})). \label{gold} \end{equation} Here $\gamma_1 \sharp \gamma_2$ denotes the set of (transversal) intersection points of $\gamma_1$ and $\gamma_2$, and $\epsilon(\gamma_1,\gamma_2,S)$ is the intersection index for the intersection point $S$. $\gamma_1S\gamma_2$ and $\gamma_1S\gamma_2^{-1}$ denote loops which are {\it rerouted} at the intersection point $S$. In the following we show how Eq.~\rref{gold} may be quantized using the concept of area phases for homotopic paths which was outlined in Section \ref{hom}. In order to study intersections of ``straight'' loops, represented in $\mathbb{R}^2$ by straight paths between $(0,0)$ and integer points $(m,n)$, consider their reduction to a fundamental domain of $\mathbb{R}^2$, namely the square with vertices $(0,0), (1,0), (1,1), (0,1)$. Here are two examples of fundamental reduction. Figure \ref{21} shows a path in the first quadrant, namely the path $(2,1)$, and its reduction to the fundamental domain \begin{figure}[hbtp] \centering \includegraphics[height=2cm]{21a.eps} \hspace{2cm} \includegraphics[height=2cm]{21br.eps} \caption{The path $(2,1)$ and its fundamental reduction} \label{21} \end{figure} \noindent whereas in other quadrants fundamentally reduced paths start at other vertices (not $(0,0)$). For example, in the second quadrant the path $(-1,2)$ will (in the fundamental domain) start at $(1,0)$ and end at $(0,1)$, as shown in Figure \ref{-12}.
\begin{figure}[hbtp] \centering \includegraphics[width=2cm]{12a.eps} \hspace{2cm} \includegraphics[width=2cm]{12b.eps} \caption{The path $(-1,2)$ and its fundamental reduction} \label{-12} \end{figure} When the path $(m,n)$ is a multiple of another integer path, we say it is {\it reducible}. Otherwise it is irreducible. It should be clear that two paths intersect at points where their fundamental reductions intersect. We need consider only transversal intersections, namely those where the respective tangent vectors are not collinear. For intersecting paths of multiplicity $1$, their intersection number at that point is $+1$ if the angle from the first tangent vector to the second is between $0$ and $180$ degrees, and $-1$ if between $180$ and $360$ degrees. For paths of multiplicity greater than $1$, the intersection number is multiplied by the multiplicities of the paths involved. Denote the intersection number between two paths $p_1$ and $p_2$ at $P$ (or $P, Q, R$ if more than one) by $\epsilon(p_1,p_2,P)$. The total intersection number for two paths is the sum of the intersection numbers for all the intersection points, denoted $\epsilon(p_1,p_2)$. Here are three simple (and not so simple) examples (in the fundamental domain) of single and multiple intersections. \begin{enumerate} \item If $p_1= (1,0)$ and $p_2=(0,1)$ there is a single intersection\\ at $(0,0)$ with $\epsilon=+1$. \hfill\includegraphics[width=2cm]{ga1.eps} \item If $p_1= (2,1)$ and $p_2=(0,1)$ there are two intersections,\\ at $P=(0,0)$ and $Q=(0,\frac{1}{2})$, each with $\epsilon=+1$. The total \\ intersection number is $\epsilon=+2$. \hfill\includegraphics[width=2cm]{ga3.eps} \item If $p_1= (1,2)$ and $p_2=(2,1)$ there are three intersections,\\ at $P=(0,0)$, $Q=(\frac{2}{3},\frac{1}{3})$ and $R=(\frac{1}{3},\frac{2}{3})$ (see figure), each with\\ $\epsilon=-1$.
The total intersection number is $\epsilon=-3$ (the point\\ $S=(1,1)$ does not contribute since it coincides with the point $P$) \hfill\includegraphics[width=2cm]{ga4.eps} \end{enumerate} It should be noted that \begin{enumerate} \item all intersections between a given pair of straight paths have the same sign, since in this representation their tangent vectors have constant direction along the loops. \item the total intersection number between $p_1=(m,n)$ and $p_2=(s,t)$ is the determinant \begin{equation} \epsilon(p_1,p_2) = \left |\begin{array}{clcr}m&n\\s&t\end{array} \right | = mt - ns \label{detint} \end{equation} since the total intersection number is invariant under deformation, i.e. homotopy \begin{eqnarray} \epsilon((m,n),(s,t)) & = & \epsilon((m,0)+(0,n), (s,0)+(0,t)) \nonumber \\ & = & \epsilon((m,0),(0,t)) + \epsilon((0,n),(s,0)) \nonumber \\ & = & mt-ns. \end{eqnarray} \leftline{Relation \rref{detint} is easily checked for the above examples.} \end{enumerate} Now consider two straight paths $p_1$ and $p_2$ intersecting at the point $P$. Their positive and negative reroutings are denoted $p_1Pp_2$ and $p_1Pp_2^{-1}$ respectively, where $p_2^{-1}= (-s,-t)$ if $p_2=(s,t)$. These reroutings are defined as follows: starting at the basepoint follow $p_1$ to $P$, continue on $p_2$ (or $p_2^{-1}$) back to $P$, then finish along $p_1$. Note that, in accordance with the above rule, if the intersection point $P$ is the basepoint itself, the reroutings $p_1Pp_2$ and $p_1Pp_2^{-1}$ start by following $p_2$ (or $p_2^{-1}$) from the basepoint back to itself, and then follow $p_1$ from the basepoint back to itself. Here we show the reroutings ($p_1Pp_2$ and $p_1Pp_2^{-1}$ respectively, and at the various intersection points $P, Q, R$ if more than one) for the three previous examples, using the non--reduced paths which here are more convenient.
\begin{enumerate} \item $P=(0,0), p_1 = (1,0), p_2 = (0,1)$ \hfill \includegraphics[width=2cm]{1a.eps} \hspace{0.5cm} \includegraphics[width=2.2cm]{1b.eps} \item (a)~$P=(0,0), p_1 = (2,1), p_2 = (0,1)$ \hfill \includegraphics[width=2cm]{3pa.eps} \hspace{0.5cm} \includegraphics[width=2cm]{3pb.eps} \vspace{.5cm} \noindent (b)~$Q=(0,\frac{1}{2}), p_1 = (2,1), p_2 = (0,1)$ \hfill \includegraphics[width=2cm]{3qa.eps} \hspace{0.5cm} \includegraphics[width=2cm]{3qb.eps} \item (a)~$P=(0,0), p_1 = (1,2), p_2 = (2,1)$ \hfill \includegraphics[width=2.5cm]{4pa.eps} \hspace{0.5cm} \includegraphics[width=2cm]{4pb.eps} \vspace{.5cm} \noindent (b)~$Q=(\frac{2}{3},\frac{1}{3}), p_1 = (1,2), p_2 = (2,1)$ \hfill \includegraphics[width=2.5cm]{4qa.eps} \hspace{0.5cm} \includegraphics[width=2cm]{4qb.eps} \vspace{.5cm} \noindent (c)~$R=(\frac{1}{3},\frac{2}{3}), p_1 = (1,2), p_2 = (2,1)$ \hfill \includegraphics[width=2.5cm]{4ra.eps} \hspace{0.5cm} \includegraphics[width=2cm]{4rb.eps} \end{enumerate} In each of the above examples, and for each intersection point $P, Q, R$, it is clear that $p_1Pp_2\sim (m+s,n+t)$ and $p_1Pp_2^{-1}\sim(m-s,n-t)$. To return to the bracket \rref{gold}, we assign classical functions to the straight paths $(m,n)$ as follows \begin{equation} T(m,n) = e^{mr_1 + nr_2} + e^{- mr_1 - nr_2}, \label{tch} \end{equation} i.e. $T(m,n) = {\rm tr}~~ U_{(m,n)}$ where $U_{(m,n)}$ is of the form \rref{phi} with $r_1,\,r_2$ classical parameters. 
With $r_1$ and $r_2$ regarded as canonically conjugate classical variables, it follows that the Poisson bracket between these functions for two paths $(m,n)$ and $(s,t)$ is \begin{equation} \{T(m,n), T(s,t)\} = (mt-ns)(T(m+s,n+t) - T(m-s,n-t))\{r_1,r_2\}\,. \label{pbt} \end{equation} Equation \rref{pbt} may be regarded as a particular case of the Goldman bracket \rref{gold} (up to setting $\{r_1,r_2\} = 1$), since $(m,n)$ and $(s,t)$ have total intersection index $mt-ns$, and the rerouted paths $p_1Qp_2$ and $p_1Qp_2^{-1}$, where $p_1=(m,n)$ and $p_2=(s,t)$, are all homotopic to $(m+s,n+t)$ and $(m-s,n-t)$ respectively. The bracket \rref{pbt} is easily quantized using the triangle identity (see \rref{tri}) \begin{equation} e^{mr_1 + nr_2} e^{sr_1 + tr_2} = q^{(mt-ns)/2} e^{(m+s)r_1 + (n+t)r_2} \end{equation} and the result is the commutator \begin{equation} [ T(m,n), T(s,t)] = (q^{\frac{(mt-ns)}{2}} - q^{-\frac{(mt-ns)}{2}}) (T(m+s,n+t) - T(m-s,n-t)). \label{tcomm} \end{equation} The antisymmetry of \rref{tcomm} is evident (from \rref{tch} $T(m,n)=T(-m,-n)$). It can be checked that \rref{tcomm} satisfies the Jacobi identity, and that the classical limit, namely $q \to 1, \hbar \to 0$, of the commutator \rref{tcomm}, given by $$ \{\,,\,\} = \lim_{\hbar \to 0}\frac{[\,,\,]}{i \hbar} $$ is precisely \rref{pbt}. Alternatively, there is a different version of equation \rref{tcomm} which treats each intersection point individually, and uses rerouted paths homotopic to ``straight line'' paths as discussed previously, since we have already seen in Section \ref{hom} that homotopic paths no longer have the same quantum matrix assigned to them, but only the same matrix up to a phase. Thus for an arbitrary PL path $p$ from $(0,0)$ to $(m,n)$, set \begin{equation} T(p) = q^{S(p,(m,n))} T(m,n). \label{tfactor} \end{equation} The factor appearing in \rref{tfactor} is the same as that relating the quantum matrices $U_p$ and $U_{(m,n)}$, where $(m,n)$ is the straight path.
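The commutator \rref{tcomm} can also be verified numerically in a finite-dimensional model. The sketch below (plain Python; the dimension $N=7$ and the clock--shift realization are illustrative assumptions of ours, not the construction used in the text) represents $W(m,n)=q^{-mn/2}\,U^mV^n$ by $N\times N$ clock and shift matrices with $q=e^{2\pi i/N}$, for which the composition rule $W(m,n)W(s,t)=q^{(mt-ns)/2}W(m+s,n+t)$ holds exactly, sets $T(m,n)=W(m,n)+W(-m,-n)$, and checks the commutator for $p_1=(1,2)$, $p_2=(2,1)$, where $mt-ns=-3$.

```python
import cmath

N = 7                                   # illustrative dimension (our choice)
q = cmath.exp(2j * cmath.pi / N)        # q = e^{2 pi i / N}
qh = cmath.exp(1j * cmath.pi / N)       # q^{1/2}

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def W(m, n):
    # W(m,n) = q^{-mn/2} U^m V^n, with U the clock matrix (U e_j = q^j e_j)
    # and V the shift matrix (V e_j = e_{j+1}); the (i,j) entry is nonzero
    # exactly when i = j + n (mod N), where it equals q^{-mn/2} q^{m i}.
    return [[qh ** (-m * n) * q ** (m * i) if i % N == (j + n) % N else 0.0
             for j in range(N)] for i in range(N)]

def T(m, n):                            # T(m,n) = W(m,n) + W(-m,-n)
    A, B = W(m, n), W(-m, -n)
    return [[A[i][j] + B[i][j] for j in range(N)] for i in range(N)]

def comm(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(N)] for i in range(N)]

# [T(1,2), T(2,1)] should equal (q^{-3/2} - q^{3/2}) (T(3,3) - T(-1,1)),
# since mt - ns = 1*1 - 2*2 = -3.
lhs = comm(T(1, 2), T(2, 1))
coef = qh ** (-3) - qh ** 3
P, M = T(3, 3), T(-1, 1)
rhs = [[coef * (P[i][j] - M[i][j]) for j in range(N)] for i in range(N)]

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(N) for j in range(N))
assert err < 1e-9
```

Since the derivation of \rref{tcomm} uses only the composition rule, any matrices satisfying that rule reproduce the commutator exactly, so the check is independent of the value of $N$.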
We will show how to rewrite \rref{tcomm} in terms of the rerouted paths for example 3, i.e. $p_1=(1,2),\,p_2=(2,1)$. From \rref{tcomm} \begin{equation} [ T(1,2), T(2,1)] = (q^{-3/2} - q^{3/2})(T(3,3) - T(-1,1)). \label{tcommexp} \end{equation} The intersections occur at the points $P, R, Q$ (in that order, counting along $p_1$) as shown in Figure \ref{p20a}. For the positively rerouted paths we have \begin{eqnarray} T((1,2)P(2,1)) & = & T((2,1)(1,2))=q^{3/2} T(3,3) \label{P}\\ T((1,2)R(2,1))& = & q^{-1}T((1,2)P(2,1)) \label{R}\\ T((1,2)Q(2,1)) & = & q^{-1} T((1,2)R(2,1)) \label{Q} \end{eqnarray} and for the negative reroutings \begin{eqnarray} T((1,2)P(-2,-1)) & = & T((-2,-1)(1,2))=q^{-3/2} T(-1,1) \label{P-}\\ T((1,2)R(-2,-1))& = & q T((1,2)P(-2,-1)) \label{R-}\\ T((1,2)Q(-2,-1)) & = & q T((1,2)R(-2,-1)) \label{Q-}. \end{eqnarray} The factors appearing in equations \rref{R}, \rref{Q}, \rref{R-} and \rref{Q-} (the reroutings at $R$ and $Q$) are shown in Figure \ref{p20a}, where it is clear that each large parallelogram is divided into three equal parallelograms, each of unit area. The factors in \rref{P} and \rref{P-} (the reroutings at $P$) come from the triangle equation \rref{tri}, and are shown in Figure \ref{trigs}, where the triangles have signed area $+\frac{3}{2}$ and $-\frac{3}{2}$ respectively. \begin{figure}[hbpt] \centering \includegraphics[height=3cm]{p20a.eps} \hspace{2cm} \includegraphics[height=3cm]{p20b.eps} \caption{ The reroutings $(1,2)S(2,1)$ and $(1,2)S(-2,-1)$ for $S=P,R,Q$ } \label{p20a} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[height=3cm]{33.eps} \hspace{2cm} \includegraphics[height=3cm]{11.eps} \caption{Factors for the reroutings $(1,2)P(2,1)$ and $(1,2)P(-2,-1)$} \label{trigs} \end{figure} Now equation \rref{tcommexp} can be rewritten in the form: \begin{equation} [T(1,2),T(2,1)] =\sum_{S=P,R,Q} (q^{-1} -1) T((1,2)S(2,1)) + (q-1) T((1,2)S(-2,-1)). 
\label{qgoldexp} \end{equation} In the general case, for $p_1=(m,n)$ and $p_2=(s,t)$ with $mt-ns\neq 0$, we postulate that \begin{equation} [T(p_1), T(p_2)] = \sum_{ Q \in p_1 \sharp p_2} (q^{\epsilon(p_1,p_2,Q)} - 1)T(p_1Qp_2) + (q^{-\epsilon(p_1,p_2,Q)} - 1)T(p_1Qp_2^{-1}) \label{qgold} \end{equation} quantizes the Goldman bracket \rref{gold}. We have proved equation \rref{qgold} as follows: first assume that both $p_1$ and $p_2$ are irreducible, i.e. not multiples of other integer paths, and study the reroutings $p_1 Qp_2$ at $Q$. They are paths similar to those of Figure \ref{p20a}, namely following $p_1$ to $Q$, then rerouting along a path {\it parallel} to $p_2$, then finishing along a path {\it parallel} to $p_1$. The reroutings along $p_2$ must clearly pass through an integer point inside the parallelogram formed by $p_1$ and $p_2$ (apart from when the intersection point is the origin). They also clearly pass through only one integer point since $p_2$ is irreducible. Consider two adjacent lines inside the parallelogram parallel to $p_2$ and passing through integer points. The area of each parallelogram between them is $1$. Consider for instance one of the middle parallelograms in Figure \ref{p20a} (whose area we saw previously was $1$ as the three parallelograms are clearly of equal area and the area of the large parallelogram is $3$). This is the same area as that of a parallelogram with vertices at integer points, as can be shown, for example, by cutting it into two pieces along the line between (1,1) and (2,2), then regluing them together into a parallelogram with vertices at (1,1), (2,2), (3,2) and (4,3), as indicated in Figure \ref{pp20a}. 
This latter area is equal to $1$ from Pick's theorem \cite{pick} which states that the area $A(P)$ of a lattice polygon $P$ is \begin{equation} A(P) = I(P) + B(P)/2 - 1, \label{pick} \end{equation} where $I(P)$ is the number of interior lattice points and $B(P)$ is the number of boundary points (for the parallelogram in the example $I(P)=0$ since the lines parallel to $p_2$ are adjacent, and $B(P)=4$ from the integer points at the four vertices, so $A(P) = 0+4/2-1=1$.) Therefore in general the parallelogram determined by $p_1$ and $p_2$, whose total area is $A= |mt-ns|$, is divided up into $A$ smaller parallelograms of equal area by lines parallel to $p_2$ passing through the interior integer points of the parallelogram. The fact that the total area is equal to the number of internal integer points $+1$ is again a consequence of Pick's theorem. \begin{figure}[hbpt] \centering \includegraphics[height=4cm]{pp20a.eps} \caption{ The area of the middle parallelogram is $1$} \label{pp20a} \end{figure} We can now calculate the first term (the positive reroutings shown for the example in Figure \ref{p20a}) in the sum on the r.h.s. of \rref{qgold}, using equation \rref{tfactor}, and show that it is equal to the first term on the r.h.s. of \rref{tcomm}. Consider first the case $\epsilon(p_1,p_2,Q) = -1$. The rerouting at the origin satisfies, using the triangle equation \rref{tri}, \[ T(p_1\, (0,0)\, p_2) = q^{A/2} T(m+s,n+t), \] where the area of the parallelogram determined by $p_1,\,p_2$ is $A=-(mt-ns)$. The next rerouted path adjacent to $p_1\, (0,0)\, p_2$, rerouted at $Q_1$ say (in the example $Q_1=R$) satisfies \[ T(p_1\, Q_1\, p_2) = q^{-1}T(p_1\, (0,0)\, p_2) \] since we have shown that the signed area between the paths is $-1$. Similarly each successive adjacent path rerouted at $Q_2, Q_3, \dots$ satisfies \begin{equation} T(p_1\, Q_i\, p_2) = q^{-1}T(p_1\, Q_{i-1}\, p_2). \end{equation} with $Q_0$ the origin $(0,0)$. 
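The lattice-point counts used here are easy to confirm by brute force. The following sketch (plain Python; the variable names are ours) verifies Pick's formula \rref{pick} for the parallelogram spanned by $p_1=(1,2)$ and $p_2=(2,1)$ of the running example: the area is $|mt-ns|=3$, the boundary contains only the four vertices since both sides are irreducible, and there are exactly two interior lattice points.

```python
from fractions import Fraction
from math import gcd

u, v = (1, 2), (2, 1)                 # p_1 and p_2 of the running example
det = u[0] * v[1] - u[1] * v[0]       # mt - ns = -3
A = abs(det)                          # area of the parallelogram

# Boundary lattice points: each edge (m,n) carries gcd(m,n) lattice
# segments, so B = 2(gcd(u) + gcd(v)); irreducible sides give B = 4.
B = 2 * (gcd(*u) + gcd(*v))

# Interior lattice points: p = a*u + b*v with 0 < a < 1 and 0 < b < 1,
# solved exactly by Cramer's rule over the rationals.
xs = [0, u[0], v[0], u[0] + v[0]]
ys = [0, u[1], v[1], u[1] + v[1]]
I = 0
for x in range(min(xs), max(xs) + 1):
    for y in range(min(ys), max(ys) + 1):
        a = Fraction(x * v[1] - y * v[0], det)
        b = Fraction(u[0] * y - u[1] * x, det)
        if 0 < a < 1 and 0 < b < 1:
            I += 1

assert A == I + B // 2 - 1            # Pick: A = I + B/2 - 1
assert I == A - 1                     # irreducible sides: B = 4
```

The interior points found are $(1,1)$ and $(2,2)$, in agreement with the subdivision of the large parallelogram into three unit parallelograms.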
It follows that \begin{eqnarray} \lefteqn{\sum_{Q \in p_1 \sharp p_2}(q^{-1}-1)T(p_1 Q p_2)}\nonumber \\ &=& (q^{-1}-1)q^{A/2}(1+q^{-1} + \dots + q^{-(A-1)})T(m+s,n+t) \nonumber \\ &=& (q^{-1}-1)q^{A/2}\frac{1-q^{-A}}{1-q^{-1}}T(m+s,n+t)\nonumber \\ &=& (q^{-A/2} - q^{A/2})T(m+s,n+t)\nonumber \\ &=& (q^{(mt-ns)/2} - q^{-(mt-ns)/2})T(m+s,n+t). \label{ep+} \end{eqnarray} When $\epsilon(p_1,p_2,Q)=+1$ the calculation is identical to \rref{ep+} but with $q$ rather than $q^{-1}$, and with the area of the triangle now equal to $A/2$, where $A=mt-ns$, namely \begin{eqnarray} \lefteqn{\sum_{ Q \in p_1 \sharp p_2}(q-1) T(p_1\, Q\, p_2)} \nonumber \\ &=& (q-1)q^{-A/2}(1+q^{1} + \dots + q^{A-1}) T(m+s,n+t)\nonumber \\ &=& (q^{A/2} - q^{-A/2}) T(m+s,n+t)\nonumber \\ &=& (q^{(mt-ns)/2} - q^{-(mt-ns)/2}) T(m+s,n+t). \label{ep-} \end{eqnarray} Diagrammatically this corresponds to dividing up the first parallelogram in Figure \ref{p20a} by lines passing through the integer points in the interior, but parallel to $(1,2)$, as opposed to $(2,1)$. In an entirely analogous way the second terms (the negative reroutings) on the r.h.s. of \rref{tcomm} and \rref{qgold} can be shown to be equal; the second figure of Figure \ref{p20a} can be used as a guide\footnote {The antisymmetry of \rref{qgold} can be checked for our example $p_1=(1,2),p_2=(2,1)$, both irreducible, by noting that the intersections occur at the same points (but in a different order, namely $P, Q, R$).}. When $p_1$ is reducible, i.e. $p_1 =c(m',n'), \, c\in \mathbb{N}, m', n' \in \mathbb{Z}$, and $p_2$ is irreducible, formula \rref{qgold} applies exactly as for the irreducible case, since there are $c$ times as many rerouted paths compared to the case when $p_1=(m',n')$. An example is $p_1=(2,0),\, p_2=(1,2)$, where the first term on the r.h.s. of \rref{tcomm} is equal to the first term on the r.h.s.
of \rref{qgold}: \begin{eqnarray} (q^2-q^{-2}) T(3,2) & = & (q-1)q^{-2} (1+q+q^2 + q^3) T(3,2)\nonumber \\ &=& \sum_{ Q \in p_1 \sharp p_2} (q-1) T(p_1Q p_2). \end{eqnarray} There are four rerouted paths in the final summation, rerouting at $(0,0)$, $(1/2,0)$, $(1,0)$ and $(3/2,0)$ along $p_1$. If $p_2$ is reducible we must use multiple intersection numbers in \rref{qgold}, i.e. not simply $\pm 1$. Suppose $p_1=(m,n)$ and $p_2=(s,t) = c(s',t')$, $ c\in \mathbb{N}, s', t' \in \mathbb{Z}$. Then for example the first term on the r.h.s. of \rref{qgold} with $mt-ns>0$ is \begin{eqnarray} \lefteqn{(q^{(mt-ns)/2} - q^{-(mt-ns)/2}) T(m+s,n+t)} \nonumber \\ &=& (q^{c(mt'-ns')}- 1) q^{-(mt-ns)/2} T(m+s,n+t) \nonumber\\ &=& (q^c-1) q^{-(mt-ns)/2} (1 + q^c + \dots + q^{c(mt'-ns'-1)}) T(m+s,n+t)\nonumber \\ &=& \sum_{ Q \in p_1 \sharp p_2} (q^c-1) T(p_1Q p_2). \label{multint} \end{eqnarray} The factor $(q^c-1)$ is the {\it quantum} multiple intersection number at the $mt'-ns'$ intersection points. The calculation can be regarded as doing equation \rref{ep-} backwards and substituting $q$ by $q^c$ and $mt-ns$ by $mt'-ns'$. An example is $p_1=(2,1),\, p_2= (0,2)$, for which double intersections occur along $p_1$ at the origin and at $(1,1/2)$. From \rref{multint} the first term on the r.h.s. of \rref{qgold} is \begin{eqnarray} (q^2-q^{-2}) T(2,3) & = & (q^2-1)q^{-2} (1+q^2) T(2,3) \nonumber \\ &=& (q^2-1) (T(p_1\, (0,0)\, p_2) + T(p_1\, (1,1/2)\, p_2)). \end{eqnarray} \section{Conclusions} There are some surprising features of the quantum geometry that emerge from the use of a constant quantum connection. The phase factor appearing in the fundamental relation \rref{fund2} has a geometrical origin as the signed area phase relating two integer PL paths, corresponding to two different loops on the torus. This leads to a natural concept of $q$-deformed surface group representations.
It follows that the classical correspondence between flat connections (local geometry) and holonomies, i.e. group homomorphisms from $\pi_1$ to $G$ (non-local geometry) has a natural quantum counterpart. The signed area phases also appear in a quantum version \rref{qgold} of a classical bracket \rref{gold} due to Goldman (Ref.~\refcite{gol}), where classical intersection numbers $\pm \epsilon(p_1,p_2,Q)$ are replaced by quantum single and multiple intersection numbers $(q^{\pm \epsilon(p_1,p_2,Q)}-1)$. The quantum bracket for homotopy classes represented by straight lines \rref{tcomm} is easily checked since all the reroutings are homotopic. However the r.h.s. of the bracket \rref{qgold} may be expressed in terms of rerouted paths using the signed area phases and a far subtler picture emerges. It is not difficult to show that the Jacobi identity holds for the commutator for straight paths \rref{tcomm} since the r.h.s. may also be expressed in terms of straight paths, with suitable phases. It must also hold for \rref{qgold} since the two brackets are equivalent. We have checked it explicitly for a number of arbitrary PL paths, without identifying homotopic paths. It should also be possible to treat higher genus surfaces (of genus $g$) in a similar fashion by introducing the same constant quantum connection on a domain in the $xy$ plane bounded by a $4g$--gon with the edges suitably identified \cite{hil}. One could then define holonomies of PL loops on this domain and study their behaviour under intersections, as studied here for $g=1$. In fact this treatment is ideal for $g>1$ since there intersections with $\epsilon \ge 2$ are necessary (see e.g. Ref.~\refcite{NR1}).
\section*{Acknowledgments} This work was supported by the Istituto Nazionale di Fisica Nucleare (INFN) of Italy, Iniziativa Specifica FI41, the Italian Ministero dell'Universit\`a e della Ricerca Scientifica e Tecnologica (MIUR), and the {\em Programa Operacional Ci\^{e}ncia e Inova\c{c}\~{a}o 2010}, project number POCI/MAT/60352/2004, financed by the {\em Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia} (FCT) and cofinanced by the European Community fund FEDER.
\section{Introduction}\label{intro} According to B.-Y. Chen \cite{Chen} an immersion $\varphi:M^m\hookrightarrow{\mathbb R}^n$ is called {\it biharmonic} if \begin{equation}\label{def-chen} \Delta H =(\Delta H_1,\ldots, \Delta H_n)=0 \,\, , \end{equation} where $H=(H_1,\ldots,H_n)$ is the mean curvature vector field and $\Delta$ denotes the Beltrami-Laplace operator on $M$ (our sign convention is such that $\Delta h=-h''$ when $h$ is a function of one real variable). It follows from Beltrami's equation $$ m\, H=-(\Delta\, \varphi_1,\ldots,\Delta\, \varphi_n) $$ that the biharmonicity condition is equivalent to $$ \Delta^{2}\, \varphi=(\Delta^{2}\, \varphi_1,\ldots,\Delta^{2}\, \varphi_n)=0\,\,, $$ which justifies the previous definition of biharmonic immersions. The study of biharmonic immersions in ${\mathbb R}^n$ can be set in a more general variational, Riemannian geometric context. More precisely, we recall that a smooth map $\varphi:(M,g)\to(N,h)$ is a {\it harmonic map} if it is a critical point of the {\em energy} functional \begin{equation}\label{energia} E(\varphi)=\frac{1}{2}\int_{M}\,|d\varphi|^2\,dv_g \,\, , \end{equation} whose Euler-Lagrange equation is $\tau(\varphi)={\trace} \, \nabla d \varphi =0$. A natural generalization of harmonic maps are the so-called {\it biharmonic maps}: these maps are the critical points of the bienergy functional (as suggested by Eells--Lemaire \cite{EL83}) \begin{equation}\label{bienergia} E_2(\varphi)=\frac{1}{2}\int_{M}\,|\tau (\varphi)|^2\,dv_g \,\, . \end{equation} In \cite{Jiang} G.~Jiang showed that the Euler-Lagrange equation associated to $E_2(\varphi)$ is given by $\tau_2(\varphi) =0$, where the {\it bitension field} $\tau_2(\varphi)$ is \begin{equation}\label{bitensionfield} \tau_2(\varphi) = - \Delta \tau(\varphi)- \trace R^N(d \varphi, \tau(\varphi)) d \varphi\,\,. 
\end{equation} An immersed submanifold into a Riemannian manifold $(N,h)$ is called a {\it biharmonic submanifold} if the immersion is a biharmonic map. In particular, minimal immersions are trivially biharmonic, so that we call {\it proper} biharmonic any biharmonic immersion which is not minimal. We observe that when $N={\mathbb R}^n$ the curvature term in \eqref{bitensionfield} vanishes and $\tau_2(\varphi) =0$ is equivalent to \eqref{def-chen}. Thus the definition of biharmonic immersed submanifolds extends the original definition of Chen. \\ In the case of hypersurfaces, biharmonicity can be expressed by means of the following general result (see \cite{BMO13,C84,LM08,O10,O02}): \begin{theorem}\label{th: bih subm N} Let $\varphi:M^{n-1}\to N^{n}$ be an isometric immersion with mean curvature vector field $H=(f/(n-1))\,\eta$. Then $\varphi$ is biharmonic if and only if the normal and the tangent components of $\tau_2(\varphi)$ vanish, i.e., \begin{subequations} \begin{equation}\label{eq: caract_bih_normal} \Delta {f}+f |A|^2- f\ricci^N(\eta,\eta)=0 \end{equation} and \begin{eqnarray}\label{eq: caract_bih_tangent} 2 A(\grad f)+ f \grad f-2 f \ricci^N(\eta)^{\top}=0 \end{eqnarray} \end{subequations} respectively, where $A$ is the shape operator and $\ricci^N(\eta)^{\top}$ is the tangent component of the Ricci tensor field of $N$ in the direction of the unit normal vector field $\eta$ of $M$ in $N$. \end{theorem} We shall work in the framework of equivariant differential geometry, so let $N$ be a Riemannian manifold and $I(N)$ its full isometry group. It is well-known (see \cite{MST}) that $I(N)$ is a Lie group which acts differentiably on $N$. 
A Lie subgroup $G$ of $I(N)$ is called an {\it isometry group} of $N$ and, following \cite{HL}, we recall that its {\it cohomogeneity} is defined as the codimension in $N$ of the maximal dimensional orbits, also called the {\it principal} orbits (of course, all the orbits are homogeneous spaces, since they are of the type $G \, / \,H$, where $H$ is the stabilizer). The \emph{cohomogeneity of a G-invariant submanifold} $M$ of $N$ is defined as the dimension of $M$ minus the dimension of the principal orbits. In this paper we shall be interested in the case that $N$ is the Euclidean space ${\mathbb R}^n$ and the cohomogeneity of $G$ is two, so that \emph{$G-$invariant hypersurfaces are cohomogeneity one submanifolds}. Isometry groups of ${\mathbb R}^n$ of this type have been fully classified in \cite{HL}. In particular, Hsiang and Lawson, developing the work of various famous authors, including Cartan and Weyl, divided the cohomogeneity two isometry groups $G$ acting on ${\mathbb R}^n$ into five types according to the geometric shape of their orbit space $Q={\mathbb R}^n \, / \,G$, which is a linear cone in ${\mathbb R}^2$ of angle $\pi \slash d$, $d=1,\,2,\,3,\,4,\,6$ respectively. In this context, a $G-$invariant hypersurface in ${\mathbb R}^n$ can be completely described by means of its profile curve $\gamma$ into the orbit space $Q$. In particular, it turns out that a $G-$invariant hypersurface is a biharmonic submanifold if and only if the curve $\gamma$ satisfies a certain system of ordinary differential equations (see Proposition~\ref{equazioniridottedibiarmonicity} in Section~\ref{equiv-diff-geom-section} below for details). This approach, which uses the symmetries arising from the group action to reduce a PDE problem to a system of ODEs, has been very fruitful in various contexts: construction of harmonic maps, and counterexamples for Bernstein-type problems for minimal and CMC immersions (see, for example, \cite{BDG,ER,Hsiang}).
In general, reduction to an ODE has been a valuable tool because it has helped to produce new solutions. By contrast, in our case what we obtain is a nonexistence result in the direction of the still open Chen conjecture (see \cite{Chen04,Chen} and \cite[chapter~7]{Chen-book}): {\it biharmonic submanifolds into ${\mathbb R}^n$ are minimal}. Chen's conjecture is still open even for biharmonic hypersurfaces in ${\mathbb R}^n$ though, by a result of Dimitric (see \cite{dimitric}), we know that any biharmonic hypersurface in ${\mathbb R}^n$ with at most two distinct principal curvatures is minimal. Other partial results for low dimensions state that biharmonic hypersurfaces with at most three distinct principal curvatures in ${\mathbb R}^4$ or in ${\mathbb R}^5$ are necessarily minimal (see \cite{Defever, HasVla95}) and very recently Fu (see \cite{Fu2014}) extended Dimitric's result by proving that any biharmonic hypersurface in ${\mathbb R}^n$ with at most three distinct principal curvatures is minimal. We can now state our main result: \begin{theorem}\label{Main-theorem} Let $G$ be a cohomogeneity two group of isometries acting on ${\mathbb R}^n$ ($n\geq3$). Then any $G-$invariant biharmonic hypersurface in ${\mathbb R}^n$ is minimal. \end{theorem} \begin{remark}The family of $G$-invariant hypersurfaces in Theorem~\ref{Main-theorem} is ample and geometrically significant (see Table~\ref{table} below). The case $d=1$ (i.e., the case of the classical rotational hypersurfaces) is a special instance of the above cited result of \cite{dimitric}. The next case, i.e., $d=2$ and $G=SO(p)\times SO(q)$, with $p+q=n$, was proved in \cite{MOR-AMPA}. For these reasons, the interest of Theorem~\ref{Main-theorem} lies in the three remaining types of $G$-actions, i.e., $d=3,\,4$ and $6$, for which the number of distinct principal curvatures is $4,\,5$ and $7$ respectively.
\end{remark} The paper is organized as follows: in the next section we recall several basic facts from equivariant differential geometry. We believe that this short outline could be useful also for other applications in similar contexts. In the final section we provide the details of the proof of Theorem~\ref{Main-theorem}. \section{Basic equivariant differential geometry for cohomogeneity two $G$-actions on ${\mathbb R}^n$}\label{equiv-diff-geom-section} The details, together with some historical references, concerning the results of this section can be found in \cite{BackDoCarmoHsiang, HL, Pedrosa}. Let $G$ be a cohomogeneity two group of isometries acting on ${\mathbb R}^n$. As we mentioned in the introduction, these groups are well-understood and classified since they correspond to the \emph{isotropy representations of symmetric spaces of rank two}. For the sake of completeness, we also wish to point out the deep connection between this branch of the theory of Lie groups and the geometric properties of isoparametric functions on the Euclidean sphere (see, for example, \cite{Munzner,Thor}). The following linear functions $w_{(d,i)}$ will play a key role: \begin{equation}\label{definizionew_{(d,i)}} w_{(d,i)}(x,y)= x\, \sin (i\,\pi \slash \,d)\,-\, y\,\cos (i\,\pi \slash \,d)\,\, , \end{equation} where $d$ is an integer which can be equal to $1,\,2,\,3,\,4$ or $6$, and $i$ is another integer such that $0 \leq i \leq (d-1)$. The orbit space $Q={\mathbb R}^n\, / \,G$ can be identified with a linear cone of angle $(\pi \slash \,d)$ in ${\mathbb R}^2$ described by \begin{equation}\label{descrizionediQ} Q=\left \{\, (x,y) \,\in \, {\mathbb R}^2 \,\, : \,\, y \geq 0 \,\,{\rm and}\,\,x\, \sin (\pi \slash \,d)\,-\, y\,\cos (\pi \slash \,d)\geq 0 \,\right \} \,\,, \end{equation} where the possible cases for $d$ are $1,\,2,\,3,\,4$ or $6$. 
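The chamber structure can be illustrated with a short numerical check (plain Python; the group-closure routine is an ad hoc sketch of ours): the reflections in the two boundary lines of $Q$, which lie at angles $0$ and $\pi\slash d$, compose to the rotation by $2\pi\slash d$ and hence generate a dihedral group of order $2d$ for each admissible value of $d$.

```python
from math import cos, sin, pi

def refl(theta):
    # Reflection across the line through the origin at angle theta.
    return ((cos(2 * theta), sin(2 * theta)),
            (sin(2 * theta), -cos(2 * theta)))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def key(A):
    # Round entries so that numerically equal matrices compare equal.
    return tuple(round(x, 9) for row in A for x in row)

for d in (1, 2, 3, 4, 6):
    gens = [refl(0.0), refl(pi / d)]   # reflections in the two walls of Q
    group = {key(g): g for g in gens}
    while True:                        # naive closure under multiplication
        new = {}
        for A in list(group.values()):
            for B in gens:
                C = mul(A, B)
                if key(C) not in group:
                    new[key(C)] = C
        if not new:
            break
        group.update(new)
    assert len(group) == 2 * d         # dihedral group of order 2d
```

The admissible angles are exactly those for which this reflection group is finite, in accordance with the five types of the classification.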
We also point out that $Q={\mathbb R}^2\, / \,W$, where $W=N(H,G)\, / \, H$ is the Weyl group, which acts on ${\mathbb R}^2$ by reflections with respect to the lines defined by $w_{(d,i)}(x,y)=0$. For this reason, the orbit space $Q$ is also called the \emph{Weyl chamber}. The orbital distance metric $g_Q$ on $Q$ (i.e., the metric which makes the projection map $\Pi \,\, : {\mathbb R}^n \rightarrow Q$ a Riemannian submersion) is flat: \begin{equation}\label{metricadiQ} g_Q=dx^2+dy^2 \end{equation} and any horizontal lift of a tangent vector to $Q$ meets any $G-$orbit perpendicularly. Let $\xi=(x,y)$ be an interior point of $Q$. We denote by $V(\xi)$ the volume of the principal orbit $\Pi^{-1}(\xi)$. The function $V(\xi)$ is called the \emph{volume function} and contains most of the information required to carry out the computation of the second fundamental form $A$ associated to a $G-$invariant hypersurface. More precisely, it turns out that $V(x,y)$ is always a homogeneous polynomial which, for each fixed type $d$, can be expressed (up to a multiplicative constant) in terms of the linear functions \eqref{definizionew_{(d,i)}} in the following form: \begin{equation}\label{genericafunzionevolume} V^2(x,y)= \prod_{i=0}^{(d-1)}\, \left [\,w_{(d,i)}(x,y)\,\right ]^{2\,m_i} \,\, , \end{equation} where the $m_i$'s are positive integers (the cases which can occur are listed in Table~\ref{table}, which can be derived from an analogous table given in \cite{Hsiang,HL}). \begin{table}[h!]
\centering \scriptsize{ \begin{tabular}{|c|c|c|c|c|} \hline &&&&\\ $G$ & Action & $\dim$ Euclidean Space & $d$ & Multiplicities \\ &&&&\\ \hline &&&& \\ $S\mathcal{O}(n-1)$&$1+\rho_{(n-1)}$&$n\geq3$&1&$m_0=(n-2),\,m_1=1$\\ &&&& \\ \hline &&&& \\ $S\mathcal{O}(p)\times S\mathcal{O}(q)$&$\rho_p+\rho_{q}$&$n=(p+q)\geq4$&2&$m_0=(q-1),\,m_1=(p-1)$\\ &&&& \\ \hline &&&&\\ $S\mathcal{O}(3)$ &$S^2_{\rho_{3}}-1$ & $n=5$ & 3 & $m_0=m_1=m_2=1$ \\ &&&&\\ \hline &&&&\\ $SU(3)$ &$Ad$ & $n=8$ & 3 & $m_0=m_1=m_2=2$ \\ &&&&\\ \hline &&&&\\ $Sp(3)$ & $\Lambda^2 \, \nu_{3}-1$& $ n=14$ & 3 & $m_0=m_1=m_2=4$ \\ &&&&\\ \hline &&&&\\ $F_4$ & $\begin{array}{l} 1 \\ \circ \hspace{-1.3mm}- \hspace{-1.3mm}\circ \hspace{-1.4mm}=\hspace{-1.4mm} \circ\hspace{-1.3mm} -\hspace{-1.3mm} \circ \end{array} $& $ n=26 $& 3 & $m_0=m_1=m_2=8$ \\ &&&&\\ \hline &&&&\\ $S\mathcal{O}(5)$ & $Ad$& $ n=10$ & 4 & $m_0=m_1=m_2=m_3=2$ \\ &&&&\\ \hline &&&&\\ $S\mathcal{O}(2)\times S\mathcal{O}(m)$ &$\rho_2 \otimes \rho_m $& $ n=2m \,\geq 6$ & 4 & $m_0=m_2=(m-2), \, m_1=m_3=1$ \\ &&&&\\ \hline &&&&\\ $S \left ( U(2)\times U(m) \right )$ &$[\mu_2 \otimes _{{\mathbb C}}\mu_m]_{{\mathbb R}}$ & $ n=4m \,\geq 8$ & 4 & $m_0=m_2=(2m-3), \, m_1=m_3=2$ \\ &&&&\\ \hline &&&&\\ $Sp(2) \times Sp(m)$&$ \nu_2 \otimes _{{\mathbb H}}\nu_m^*$& $ n=8m \,\geq 16$ & 4 & $m_0=m_2=(4m-5), \, m_1=m_3=4$ \\ &&&&\\ \hline &&&& \\ $U(5)$ &$[\Lambda^2 \mu_5]_{{\mathbb R}}$&$n=20$&$4$&$m_0=m_2=5, \, m_1=m_3=4$\\ &&&& \\ \hline &&&&\\ $ U(1)\times Spin(10)$ &$[\mu_1 \otimes _{{\mathbb C}}\Delta_1^+]_{{\mathbb R}}$ & $ n=32$ & 4 & $m_0=m_2=9, \, m_1=m_3=6$ \\ &&&&\\ \hline &&&&\\ $G_2$ & $Ad$& $ n=14$ & 6 & $m_0=m_1= \dots =m_5=2$ \\ &&&&\\ \hline &&&&\\ $S\mathcal{O}(4)$ &$\begin{array}{l} 1 \hspace{1.3mm}3\\ \circ\hspace{-1.3mm} - \hspace{-1.3mm} \circ \end{array} $ & $ n=8$ & 6 & $m_0=m_1= \dots =m_5=1$ \\ &&&&\\ \hline \end{tabular} } $$ \, $$ $$ \, $$ \caption{Cohomogeneity two $G$-actions on ${\mathbb R}^n$ (see \cite{Hsiang,HL}) 
(\textbf{Note}: the volume function is given in \eqref{genericafunzionevolume}, the number of distinct principal curvatures of $ \Sigma_{\gamma}$ is $(d+1)$).}\label{table} \end{table} Next, we observe that any cohomogeneity one $G-$invariant immersion into ${\mathbb R}^n$ can be described by means of what we call its \emph{profile curve} $\gamma(s)=(x(s),\,y(s))$ in the orbit space $Q$. More precisely, the $G-$invariant hypersurface corresponding to a profile curve $\gamma$ is $\Sigma_{\gamma}= \Pi^{-1}(\gamma)$. For convenience, we shall always assume that \begin{equation}\label{sascissacurvilinea} \dot{x}^2+\dot{y}^2 =1 \,\, , \end{equation} and also, to fix orientation, that the unit normal $\eta$ to the hypersurface $\Sigma_{\gamma}$ projects down in $Q$ to \begin{equation}\label{unitnormal} d\Pi(\eta)=\nu =-\,\dot{y}\,\frac{\partial}{\partial x}+ \dot{x}\,\frac{\partial}{\partial y}\,\, . \end{equation} Now, suppose that $\Sigma_{\gamma}$ is a $G-$invariant hypersurface into ${\mathbb R}^n$ of type $d$ ($d=1,\,2,\,3,\,4$ or $6$), with associated volume function given by \eqref{genericafunzionevolume}. Then $\Sigma_{\gamma}$ possesses $(d+1)$ distinct principal curvatures given by: \begin{eqnarray}\label{curvatureprincipali} k_i &=&-\, \frac{1}{2} \, \frac{d}{d \, \nu}\, \ln \left[w_{(d,i)}(x,y) \right]^2\nonumber\\ &=& -\, \frac{1}{2}\, \nu\left(\ln \left[w_{(d,i)}(x,y) \right]^2\right)\,\, , \quad i=0,\, \ldots,\, (d-1) \,\,, \end{eqnarray} each of them with multiplicity equal to $m_i$, and \begin{equation}\label{ultimacurvaturaprincipale} k_d = \ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y} \,\, \end{equation} with multiplicity equal to one.
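The two curvature formulas above can be sanity-checked numerically. The sketch below (plain Python; the sample point, tangent direction and radius are arbitrary choices of ours) compares the logarithmic-derivative form \eqref{curvatureprincipali}, computed through the partial derivatives of $w_{(d,i)}$ along $\nu=(-\dot{y},\dot{x})$, with the equivalent ratio $w_{(d,i)}(\dot{y},-\dot{x})\slash w_{(d,i)}(x,y)$ derived in the text, and evaluates \eqref{ultimacurvaturaprincipale} on a unit-speed circular profile of radius $R$, for which $k_d=1\slash R$.

```python
from math import sin, cos, pi, isclose

def w(d, i, x, y):
    a = i * pi / d
    return x * sin(a) - y * cos(a)

# Sample interior point of Q for d = 4 and a sample unit tangent vector.
d = 4
x, y = 2.0, 0.5
t = 0.7
xd, yd = cos(t), sin(t)                   # unit speed: xd^2 + yd^2 = 1

for i in range(d):
    a = i * pi / d
    # Logarithmic-derivative form: k_i = -(1/2) nu(ln w^2) = -nu(w)/w,
    # with nu(w) = -yd * dw/dx + xd * dw/dy.
    nu_w = -yd * sin(a) + xd * (-cos(a))
    k_log = -nu_w / w(d, i, x, y)
    # Ratio form: k_i = w(yd, -xd) / w(x, y).
    k_ratio = w(d, i, yd, -xd) / w(d, i, x, y)
    assert isclose(k_log, k_ratio)

# k_d for a circle of radius R traversed at unit speed:
# x = R cos(s/R), y = R sin(s/R) gives k_d = (d2y/ds2)(dx/ds) - (d2x/ds2)(dy/ds) = 1/R.
R, s = 3.0, 1.2
xd, yd = -sin(s / R), cos(s / R)
xdd, ydd = -cos(s / R) / R, -sin(s / R) / R
assert isclose(ydd * xd - xdd * yd, 1.0 / R)
```

The first assertion only confirms the elementary simplification carried out in the text below; the second confirms that $k_d$ is the signed curvature of the unit-speed profile curve.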
For future use we also observe that, using \eqref{definizionew_{(d,i)}} and \eqref{unitnormal}, \eqref{curvatureprincipali} gives \begin{eqnarray}\label{curvatureprincipali-bis} k_i &=& \frac{\dot{y} \sin (i\,\pi \slash \,d)+\dot{x} \cos (i\,\pi \slash \,d)}{w_{(d,i)}(x,y)}\nonumber\\ &=&\frac{w_{(d,i)}(\dot{y},-\dot{x})}{w_{(d,i)}(x,y)}\,\, , \quad i=0,\, \ldots,\, (d-1) \,\,. \end{eqnarray} In particular, it follows that the terms $f$ and $|A|^2$ in \eqref{eq: caract_bih_normal} and \eqref{eq: caract_bih_tangent} are given by \begin{equation}\label{espressionedif} f=(\, \ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y}\,)+\sum_{i=0}^{(d-1)}\,m_i\, k_i \end{equation} and \begin{equation}\label{espressionediA^2} |A|^2 = (\, \ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y}\,)^2+ \sum_{i=0}^{(d-1)}\,m_i\, (k_i)^2 \end{equation} respectively, where the explicit expressions for the $k_i$'s are those given in \eqref{curvatureprincipali-bis}. We shall need to compute the gradient and the Laplacian of $f$: to this purpose, we use on $\Sigma_{\gamma}$ a local system of coordinates of type $\{u_1,\ldots,u_{n-2},s\}$, where $u=\{u_1,\ldots,u_{n-2}\}$ are local coordinates of a principal orbit. In particular, we observe that, with respect to these local coordinates, the induced metric $g$ satisfies: \begin{equation}\label{inducedmetric1} \det g = \psi(u)\, V^2(x(s),y(s))\,, \end{equation} where $\psi$ is a positive function on the principal orbit and \begin{equation}\label{inducedmetric2} g_{(n-1),k}=\delta_{(n-1),k}\,,\quad k=1,\ldots,\,(n-1)\,\, . \end{equation} Now, since $f$ depends only on $s$, it follows immediately that \begin{equation}\label{gradientef} \grad f = \dot {f}(s) \,\frac{\partial}{\partial s} \,\,.
\end{equation} Next, by using \eqref{inducedmetric1} and \eqref{inducedmetric2} in $$ \Delta f = -\,\frac{1}{\sqrt{\det g}} \,\frac{\partial}{\partial v_i}\left( g^{ij}\,\sqrt{\det g}\, \frac{\partial f}{\partial v_j}\right)\,,\quad v_i=u_i, i=1,\ldots,\,(n-2),\;\; v_{(n-1)}=s\,\,, $$ we obtain \begin{equation}\label{gradienteelaplaciano} \Delta f =- \ddot{f}- \frac{1}{2}\,\left(\frac{d}{d \, s}\, \ln V^2\right ) \, \dot{f}\,\, . \end{equation} We can summarize this discussion in the following proposition which, taking into account that the Ricci tensor field of ${\mathbb R}^n$ vanishes, follows by direct substitution of \eqref{espressionedif}, \eqref{espressionediA^2}, \eqref{gradientef} and \eqref{gradienteelaplaciano} into \eqref{eq: caract_bih_normal} and \eqref{eq: caract_bih_tangent}: \begin{proposition}\label{equazioniridottedibiarmonicity} Let $\Sigma_{\gamma}$ be a $G-$invariant hypersurface into ${\mathbb R}^n$ of type $d$ ($d=1,\,2,\,3,\,4$ or $6$), with associated volume function given by \eqref{genericafunzionevolume}. Then $\Sigma_{\gamma}$ is a biharmonic hypersurface if and only if \begin{equation}\label{bitensionecomponentenormale} \ddot{f}+ \frac{1}{2}\,\left(\frac{d}{d \, s}\, \ln V^2\right ) \, \dot{f}\, - \Big[(\, \ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y}\,)^2+ \sum_{i=0}^{(d-1)}\,m_i\, (k_i)^2\Big]\,f =0 \end{equation} and \begin{equation}\label{bitensionecomponentetangente} \dot{f}\, (f+2\,(\, \ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y}\,)) =0 \,\, . \end{equation} \end{proposition} \begin{remark}\label{re:linearprifile} For future use, here we consider briefly the case where the profile curve $\gamma(s)=(x(s),y(s))$ in $Q$ satisfies $y=m x$, $m\in{\mathbb R}$. Therefore, we can assume that $\gamma$ is parametrized by \begin{equation}\label{parametrizzazione} \gamma (s)= \big( \,s \, \cos \sigma,\, s \, \sin \sigma\, \big ) \,\,, \end{equation} where $\sigma$ is a constant in the interval $(0,\,\pi \slash d)$. 
In this case, we have \begin{equation}\label{condizioneminimalita} f=-\,\frac{1}{s}\,\,\sum_{i=0}^{(d-1)} \, m_i \, \cot \left (\sigma -\,\frac{i\pi}{d} \right ) \,\, . \end{equation} In particular, by using \eqref{parametrizzazione} and \eqref{condizioneminimalita} into \eqref{bitensionecomponentetangente}, it is immediate to deduce that a curve of this type gives rise to a solution if and only if $f \equiv 0$. In other words, for this family of curves, the associated $\Sigma_{\gamma}$ is biharmonic if and only if it is minimal. \end{remark} \begin{example}\label{esempio} Equations \eqref{bitensionecomponentenormale} and \eqref{bitensionecomponentetangente} express, respectively, the vanishing of the normal and of the tangential component of the bitension field. In order to help the reader, we now compute explicitly the various principal curvatures and the biharmonicity conditions in one specific instance. More precisely, suppose that $\Sigma_{\gamma}$ is a $G-$invariant immersion into ${\mathbb R}^n$ of type $d=4$, with $G=U(5)$, $n=20$ (see Table~\ref{table}). In this case we have: $$ w_{(4,0)}(x,y)= -y \,\, , \quad w_{(4,1)}(x,y)= \frac{\sqrt 2}{2}\,(x-y)\, , $$ $$ w_{(4,2)}(x,y)= x \,\, , \quad w_{(4,3)}(x,y)= \frac{\sqrt 2}{2}\,(x+y)\,, $$ with $m_0=m_2=5$, $m_1=m_3=4$ (note that $1+\sum_{i=0}^{(d-1)}m_i= \dim (\Sigma_\gamma)=(n-1)=19$). Then we have: $$ \begin{aligned} k_0& =-\, \frac{1}{2} \, \frac{d}{d \, \nu}\, \ln \left[w_{(4,0)}(x,y)\right ]^2 \\ &= -\, \frac{1}{w_{(4,0)}(x,y)} \left (- \dot{y}\,\frac{\partial}{\partial x}w_{(4,0)}(x,y) + \dot{x}\,\frac{\partial}{\partial y}w_{(4,0)}(x,y)\right ) = -\,\frac{\dot x}{y} \,\, , \end{aligned} $$ and, similarly, $$ k_1=\frac{\dot{x}+\dot{y}}{x-y},\,\, \quad k_2= \frac{\dot y}{x}, \,\, \quad k_3=\frac{-\dot{x}+\dot{y}}{x+y} \,\, . $$ Moreover, up to an irrelevant multiplicative constant, $$ V^2(x,y)= \prod_{i=0}^{3}\, \left [\,w_{(4,i)}(x,y)\,\right ]^{2\,m_i}= (xy)^{10} \, (x^2-y^2)^8 \,\, . 
$$ Therefore, taking into account \eqref{espressionedif}, \eqref{espressionediA^2} and \eqref{gradienteelaplaciano} the biharmonicity equations \eqref{bitensionecomponentenormale} and \eqref{bitensionecomponentetangente} become \begin{equation}\label{bitensionecomponentenormaleesempio} \ddot{f}+\,\left( 5\,\frac{\dot{x}}{x}+5\,\frac{\dot{y}}{y}+4\,\frac{\dot{x}+\dot{y}}{x+y}+4\,\frac{\dot{x}-\dot{y}}{x-y} \right ) \, \dot{f}\, - |A|^2\,f =0 \end{equation} and \begin{equation}\label{bitensionecomponentetangenteesempio} \dot{f}\, (f+2\,(\ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y})) =0 \end{equation} respectively, where \begin{equation}\label{f-esempio} f= (\ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y})-5\, \frac{\dot x}{y}+4\,\frac{(\dot{x}+\dot{y})}{(x-y)}+5\, \frac{\dot y}{x}+4\,\frac{(-\dot{x}+\dot{y})}{(x+y)} \end{equation} and \begin{equation}\label{A^2esempio} |A|^2= (\ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y})^2+\,5\, \left(\frac{\dot x}{y}\right)^2+4\,\left(\frac{\dot{x}+\dot{y}}{x-y}\right)^2+5\, \left(\frac{\dot y}{x}\right)^2 +4 \,\left( \frac{-\dot{x}+\dot{y}}{x+y}\right )^2 \,\,. \end{equation} \end{example} \section{Proof of the main Theorem}\label{proofofthemaintheorem} \begin{proof} Now we are in the right position to prove Theorem~\ref{Main-theorem}. The method of proof uses ideas introduced in \cite{HasVla95} and also used in \cite{MOR-AMPA}, where the case $d=2$ was proved. It is enough to show that $\Sigma_{\gamma}$ is a CMC immersion because direct inspection of \eqref{bitensionecomponentenormale} shows that, if we have a solution with $f$ equal to a constant, then necessarily $f \equiv 0$ so that the immersion is minimal. So, let us assume that $\Sigma_{\gamma}$ is not CMC. 
Then there exists a real open interval $I$ where $\dot{f}(s) \neq 0$ for all $s\in I$, and equation \eqref{bitensionecomponentetangente} is equivalent to \begin{equation}\label{bitensionecomponentetangentebis} f+2\,(\ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y}) =0 \end{equation} on $I$. Now, in order to provide a unified proof which includes all the cases (i.e., $d=1,\,2,\,3,\,4,\,6$), it is convenient to introduce the following function: \begin{equation}\label{definizionedellafunzioneR} R(x,\,y,\,\dot{x},\,\dot{y})=\sum_{i=0}^{(d-1)}\,m_i\, k_i \,\, , \end{equation} where the $k_i$'s are those given in \eqref{curvatureprincipali-bis}. In particular, we observe from \eqref{espressionedif} that \begin{equation}\label{legametra-f-e-R} f= R+(\ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y})\,\, . \end{equation} Next, from equation \eqref{bitensionecomponentetangentebis} we deduce that \begin{equation}\label{legametra-f-e-Rbis} \ddot{y}\,\dot {x}\,-\,\ddot{x}\,\dot {y}\,=\,-\,\frac{R}{3} \end{equation} on $I$. Therefore, multiplying both sides of \eqref{legametra-f-e-Rbis} by $\dot{x}$ and using \eqref{sascissacurvilinea}, we easily obtain \begin{equation}\label{eq-explicity2} \ddot{y}= -\, \frac{R}{3}\,\,\dot{x} \,\,. \end{equation} In the same way, multiplying by $\dot{y}$, we also have \begin{equation}\label{eq-explicitx2} \ddot{x}= \frac{R}{3}\,\,\dot{y} \,\,. \end{equation} We also observe that, using \eqref{eq-explicity2} and \eqref{eq-explicitx2}, we can rewrite the relationship between $f$ and $R$ as follows: \begin{equation}\label{eq-explicitf} f=\frac{2}{3} \, R\,\,.
\end{equation} Now, using \eqref{legametra-f-e-Rbis}$-$\eqref{eq-explicitf}, which come from the tangent component \eqref{bitensionecomponentetangente}, we claim that the normal component \eqref{bitensionecomponentenormale} can be written in the following equivalent form: \begin{equation}\label{eq-normal-explicit} A_0(x,y)\, \dot{x}^3 + A_1(x,y) \,\dot{x}^2\, \dot{y} + A_2(x,y)\, \dot{x}\, \dot{y}^2 +A_3(x,y)\, \dot{y}^3=0\,\,, \end{equation} where the $A_j(x,y)$'s ($j=0,\, \dots,\,3$) are homogeneous polynomials of degree $3(d-1)$. In order to verify this claim, one must simply carry out an explicit computation. More precisely, first, by using \eqref{legametra-f-e-Rbis} and \eqref{eq-explicitf}, we rewrite equation \eqref{bitensionecomponentenormale} in terms of $R$ as follows: \begin{equation}\label{bitensionecomponentenormaleinterminidiR} \ddot{R}+ \frac{1}{2}\,\left(\frac{d}{d \, s}\, \ln V^2\right ) \, \dot{R}\, - \Big[(1 \slash 9)\,R^2+ \sum_{i=0}^{(d-1)}\,m_i\, (k_i)^2\Big]\,R =0 \,\,. \end{equation} Next, we define the following homogeneous polynomial of degree $d$ in the variables $x$, $y$: \begin{equation}\label{definizionediQd} Q_d =\prod_{i=0}^{(d-1)}\, w_{(d,i)}(x,y) \,\, \end{equation} and also observe that \begin{equation}\label{ddsw} \frac{d}{ds} w_{(d,i)}(x,y)=w_{(d,i)}(\dot{x},\dot{y})\,\, . \end{equation} Now, we analyse the single terms which appear in \eqref{bitensionecomponentenormaleinterminidiR}. 
Using \eqref{ddsw} and \eqref{genericafunzionevolume} a direct computation gives \begin{equation}\label{terminedivolume} \frac{1}{2}\,\left(\frac{d}{d \, s}\, \ln V^2\right ) =\sum_{i=0}^{(d-1)} m_i \frac{w_{(d,i)}(\dot{x},\dot{y})}{w_{(d,i)}(x,y)}= \frac{T_{1,(d-1)}\dot{x}+T_{2,(d-1)}\dot{y}}{Q_d} \,\, , \end{equation} where $T_{1,(d-1)}$ and $T_{2,(d-1)}$ are the homogeneous polynomials of degree $(d-1)$ in the variables $x,\,y$ given by $$ T_{1,(d-1)}=\sum_{i=0}^{(d-1)} m_i \frac{Q_d}{w_{(d,i)}(x,y)} \sin(i\pi/d)\,\,, $$ $$ T_{2,(d-1)}=-\sum_{i=0}^{(d-1)} m_i \frac{Q_d}{w_{(d,i)}(x,y)} \cos(i\pi/d)\,\,. $$ Next, using \eqref{curvatureprincipali-bis} in \eqref{definizionedellafunzioneR} we obtain \begin{equation}\label{calcolodiR} R= \frac{-T_{2,(d-1)}\dot{x}+T_{1,(d-1)}\dot{y}}{Q_d}\,\, . \end{equation} Taking the square of \eqref{calcolodiR} and using again \eqref{curvatureprincipali-bis}, we also find \begin{equation}\label{calcolodiA^2} (1 \slash 9)\,R^2+ \sum_{i=0}^{(d-1)}\,m_i\, (k_i)^2= \frac{T_{3,(2d-2)}\dot{x}^2+T_{4,(2d-2)}\dot{x}\dot{y}+T_{5,(2d-2)}\dot{y}^2}{Q_d^2}\,\,, \end{equation} where $T_{3,(2d-2)}$, $T_{4,(2d-2)}$ and $T_{5,(2d-2)}$ are the homogeneous polynomials of degree $(2d-2)$ in the variables $x,\,y$ given by $$ T_{3,(2d-2)}=\frac{1}{9}\,T_{2,(d-1)}^2+\sum_{i=0}^{(d-1)} m_i \frac{Q_d^2}{[w_{(d,i)}(x,y)]^2} \cos^2(i\pi/d) $$ $$ T_{4,(2d-2)}=-\frac{2}{9}\, T_{1,(d-1)}\,T_{2,(d-1)}+2\sum_{i=0}^{(d-1)} m_i \frac{Q_d^2}{[w_{(d,i)}(x,y)]^2} \sin(i\pi/d) \cos(i\pi/d) $$ $$ T_{5,(2d-2)}=\frac{1}{9}\,T_{1,(d-1)}^2+\sum_{i=0}^{(d-1)} m_i \frac{Q_d^2}{[w_{(d,i)}(x,y)]^2} \sin^2(i\pi/d)\,\,. 
$$ Now, taking the first and the second derivatives of \eqref{calcolodiR} and using \eqref{eq-explicity2} and \eqref{eq-explicitx2}, a direct computation shows that \begin{equation}\label{calcolodiRpunto} \dot{R}= \frac{T_{6,(2d-2)}\dot{x}^2+T_{7,(2d-2)}\dot{x}\dot{y}+T_{8,(2d-2)}\dot{y}^2}{Q_d^2}\,\, , \end{equation} \begin{equation}\label{calcolodiRduepunti} \ddot{R}= \frac{T_{9,(3d-3)}\dot{x}^3+T_{10,(3d-3)}\dot{x}^2\dot{y}+T_{11,(3d-3)}\dot{x}\dot{y}^2+T_{12,(3d-3)}\dot{y}^3}{Q_d^3}\,\, , \end{equation} where $T_{6,(2d-2)},T_{7,(2d-2)},T_{8,(2d-2)}$ are homogeneous polynomials of degree $(2d-2)$ in the variables $x,\,y$, while $T_{9,(3d-3)},T_{10,(3d-3)},T_{11,(3d-3)},T_{12,(3d-3)}$ are homogeneous polynomials of degree $(3d-3)$ in the variables $x,\,y$. Finally, using \eqref{terminedivolume}$-$\eqref{calcolodiRduepunti} into \eqref{bitensionecomponentenormaleinterminidiR}, one obtains (up to a denominator $Q_d^3$) that the claimed equation \eqref{eq-normal-explicit} holds (note that the explicit expressions for the $A_j(x,y)$'s play no active role in the sequel, so we omit further details on this point). Next, for a fixed $s_0\in I$, we put $x_0=x(s_0)$. Since $\dot{x}^2+\dot{y}^2=1$, we can assume that $\dot{x}(s_0)\neq 0$ and we can express $y$ as a function of $x$, $y=y(x)$, with $x\in(x_0-\varepsilon,x_0+\varepsilon)$, and write \begin{equation}\label{eq-non-ex-1} \dot{y}=\frac{dy}{dx}\, \dot{x}\,\,. \end{equation} From $\dot{x}^2+\dot{y}^2=1$ we also obtain \begin{equation}\label{eq-non-ex-2} \dot{x}^2=\dfrac{1}{1+\left(\dfrac{dy}{dx}\right)^2}\,\,. \end{equation} For future use, combining \eqref{eq-non-ex-1} and \eqref{eq-non-ex-2}, we also observe that \begin{equation}\label{eq-non-ex-2-bis} \dot{x}\dot{y}=\dfrac{1}{1+\left(\dfrac{dy}{dx}\right)^2}\, \dfrac{dy}{dx}\,\,. 
\end{equation} Multiplying \eqref{eq-normal-explicit} by $\dot{x}$ and substituting \eqref{eq-non-ex-2} and \eqref{eq-non-ex-2-bis} we find that, up to a multiplicative factor $1/\left(1+\left({dy}/{dx}\right)^2\right)^2$, \eqref{eq-normal-explicit} becomes equivalent to \begin{equation}\label{eq-non-ex-5} A_3(x,y) \left(\dfrac{dy}{dx}\right)^3+A_2(x,y)\left(\dfrac{dy}{dx}\right)^2+A_1(x,y) \left(\dfrac{dy}{dx}\right)+A_0(x,y)=0\, \, . \end{equation} Next, differentiating \eqref{eq-non-ex-1} with respect to $s$, a straightforward computation leads us to \begin{equation}\label{eq-non-ex-3} \ddot{y}=\dfrac{1}{\left(1+\left(\dfrac{dy}{dx}\right)^2\right)^2}\, \dfrac{d^2y}{dx^2}\,\,. \end{equation} Next, we substitute \eqref{eq-explicity2} into \eqref{eq-non-ex-3}: a computation, which takes into account \eqref{calcolodiR}, \eqref{eq-non-ex-2} and \eqref{eq-non-ex-2-bis}, shows that the expression for the second derivative of $y$ with respect to $x$ is given by: \begin{equation}\label{eq-non-ex-4} \begin{aligned} \frac{d^2y}{dx^2}=& -\frac{1}{3} \, \left(1+\left(\dfrac{dy}{dx}\right)^2\right)^2\, R\, \dot{x}\\ =& -\frac{1}{3} \, \left(1+\left(\dfrac{dy}{dx}\right)^2\right)^2\, \left(\frac{-T_{2,(d-1)}\dot{x}+T_{1,(d-1)}\dot{y}}{Q_d}\right)\, \dot{x}\\ =& \frac{1}{3}\, \left( 1+\left(\dfrac{dy}{dx}\right)^2\right)\, \left (\frac{ T_{2,(d-1)}-T_{1,(d-1)}\,(dy \slash dx) }{Q_d} \right )\,\,.
\end{aligned} \end{equation} Next, taking the derivative of \eqref{eq-non-ex-5} with respect to $x$, that is $d/dx$, and using \eqref{eq-non-ex-4}, we obtain, up to $1/Q_d$, the following equation: \begin{eqnarray}\label{eq-non-ex-6} C_5(x,y) \left(\dfrac{dy}{dx}\right)^5&+&C_4 (x,y) \left(\dfrac{dy}{dx}\right)^4 \\ \nonumber +\,\, C_3(x,y)\left(\dfrac{dy}{dx}\right)^3&+&C_2(x,y) \left(\dfrac{dy}{dx}\right)^2+\,C_1(x,y)\left(\dfrac{dy}{dx}\right)+\, C_0(x,y)=0\,\,, \end{eqnarray} where the $C_j(x,y)$'s ($j=0,\, \dots,\,5$) are homogeneous polynomials of degree $4(d-1)$ which are related to the $A_j(x,y)$'s as follows: \begin{equation}\label{definizionedeipolinomiC} \left \{\begin{array}{l} C_0=Q_d \cdot {\partial A_{0}}/{\partial x}+(1/3)\,A_1 \cdot T_{2,(d-1)} \\ C_1=Q_d \cdot ({\partial A_{1}}/{\partial x}+{\partial A_{0}}/{\partial y})+(2/3)\,A_2 \cdot T_{2,(d-1)}-(1/3)\, A_1 \cdot T_{1,(d-1)} \\ C_2=Q_d \cdot ({\partial A_{2}}/{\partial x}+{\partial A_{1}}/{\partial y})+A_3 \cdot T_{2,(d-1)}-(2/3)\,A_2 \cdot T_{1,(d-1)}+(1/3)\,A_1 \cdot T_{2,(d-1)} \\ C_3=Q_d \cdot ({\partial A_{3}}/{\partial x}+{\partial A_{2}}/{\partial y})-A_3 \cdot T_{1,(d-1)}+(2/3)\,A_2 \cdot T_{2,(d-1)}-(1/3)\, A_1 \cdot T_{1,(d-1)}\\ C_4=Q_d \cdot {\partial A_{3}}/{\partial y}+A_3 \cdot T_{2,(d-1)}-(2/3)\,A_2\cdot T_{1,(d-1)} \\ C_5=-\,A_3 \cdot T_{1,(d-1)} \,\,, \end{array} \right . \end{equation} where in this computation we have used $$ \frac{d A_i}{dx}=\frac{\partial A_{i}}{\partial x}+\frac{\partial A_{i}}{\partial y } \dfrac{dy}{dx}\,,\quad i=0,\ldots,3\,. $$ Now we are in the right position to end the proof: for any arbitrarily fixed $x_1\in(x_0-\varepsilon,x_0+\varepsilon)$, setting $y_1=y(x_1)$, \eqref{eq-non-ex-5} and \eqref{eq-non-ex-6} can be thought of as two polynomial equations in $dy/dx$, with coefficients given, respectively, by $A_j(x_1,y_1),\, j=0,\ldots,3$ and $C_j(x_1,y_1),\, j=0,\ldots,5$, which have the common solution $(dy/dx)(x_1)$. 
Using standard arguments of algebraic geometry (\cite{HasVla95}), this implies that the resultant of the two polynomials is zero for any $x_1\in(x_0-\varepsilon,x_0+\varepsilon)$. Now, since the coefficients $A_j(x,y)$ and $C_j(x,y)$ are {\it homogeneous polynomials} (of degree $3(d-1)$ and $4(d-1)$ respectively), it turns out that the resultant is itself a {\it homogeneous polynomial} of degree $3(d-1)\cdot 5+ 4(d-1)\cdot 3=27(d-1)$. Then we can divide the resultant by $x^{27(d-1)}$ and, putting $z=y/x$, we obtain a polynomial equation in $z$ with constant coefficients. Since $z$ is continuous and can take only the finitely many values given by the roots of this polynomial, it must be constant; that is, $y=mx$, $m\in{\mathbb R}$. But any such solution is biharmonic if and only if it is minimal (see Remark~\ref{re:linearprifile}): a contradiction with the hypothesis that $\Sigma_{\gamma}$ is not CMC. \end{proof} \begin{remark} We think that it is important to stress the fact that the method of proof works because the $G$-invariance is sufficient to guarantee that the biharmonicity conditions \eqref{bitensionecomponentenormale} and \eqref{bitensionecomponentetangente}, when expressed in the form \eqref{eq-non-ex-5} and \eqref{eq-non-ex-6}, have coefficients given by \emph{homogeneous} polynomials. In the direction of proving the Chen conjecture, we also point out that the incompatibility between \eqref{bitensionecomponentenormale} and \eqref{bitensionecomponentetangente} appears to be of a local nature. \end{remark} \begin{example} For the sake of completeness, we report here the explicit expressions of the relevant homogeneous polynomials which appear in Example~\ref{esempio}. In this case we have: $$ A_1(x,y)=A_2(y,x)=8 \left(-25 x^2 y^7+540 x^4 y^5-118 x^6 y^3+65 x^8 y-10 y^9\right)\,, $$ $$ A_0(x,y)=A_3(y,x)=775 x^7 y^2-363 x^5 y^4+1237 x^3 y^6+130 x y^8-275 x^9\,. $$ \end{example}
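As an independent numerical sanity check of the closed-form curvatures obtained in Example~\ref{esempio} (our own sketch, not part of the paper; the unit-speed sample point is an arbitrary choice), one can compare the general formula $k_i=w_{(4,i)}(\dot y,-\dot x)/w_{(4,i)}(x,y)$ of \eqref{curvatureprincipali-bis} with the explicit expressions $k_0=-\dot x/y$, $k_1=(\dot x+\dot y)/(x-y)$, $k_2=\dot y/x$, $k_3=(-\dot x+\dot y)/(x+y)$, and verify that $V^2$ equals $(xy)^{10}(x^2-y^2)^8$ up to the multiplicative constant:

```python
import math

# the four weight functions w_{(4,i)} of the U(5) example (d = 4)
def w(i, a, b):
    return [-b, math.sqrt(2) / 2 * (a - b), a, math.sqrt(2) / 2 * (a + b)][i]

def k(i, x, y, xd, yd):
    # principal curvature k_i = w_{(4,i)}(ydot, -xdot) / w_{(4,i)}(x, y)
    return w(i, yd, -xd) / w(i, x, y)

# arbitrary sample point with xdot^2 + ydot^2 = 1
x, y, xd, yd = 2.0, 0.5, 0.6, 0.8
closed_form = [-xd / y, (xd + yd) / (x - y), yd / x, (-xd + yd) / (x + y)]
for i in range(4):
    assert abs(k(i, x, y, xd, yd) - closed_form[i]) < 1e-12

# V^2 = prod_i w_{(4,i)}^{2 m_i} equals (xy)^10 (x^2-y^2)^8 up to the constant 1/256
V2 = 1.0
for i, m in enumerate([5, 4, 5, 4]):
    V2 *= w(i, x, y) ** (2 * m)
assert abs(V2 - (x * y) ** 10 * (x ** 2 - y ** 2) ** 8 / 256) < 1e-6
print("curvature formulas of Example 1 verified at the sample point")
```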
\section{\label{sec:Introduction}Introduction} The study of Brownian motion goes back to Robert Brown's observations of the fertilization process in flowers \cite{brown}. One of the first to describe Brownian motion mathematically was Thorvald N. Thiele, in 1880, in a paper on the method of least squares. At that time the nature of Brownian motion was uncertain, so there were many open questions about the interaction of the particles with their environment. In 1900, Louis Bachelier applied Brownian motion in his PhD thesis to stock and option market fluctuations \cite{librobrown}. The study of Brownian motion was continued by Albert Einstein, who discussed it from the point of view of the molecular kinetic theory of heat \cite{Einstein}. It is worth mentioning that Einstein was unaware of the previous work on the subject, and he gave the first mathematical description of the Brownian motion of a free particle. Later, Smoluchowski \cite{Smoluchowski} brought his own solution to the problem and attracted the attention of physicists to it. In 1908, Langevin \cite{langevin} obtained the same result as Einstein using a macroscopic description based on Newton's second law, {\it i.e.}, a dynamical model based on a second-order differential equation with a stochastic term. He said that his approach was ``infinitely simpler'' than the one proposed by Einstein. Since the pioneering work of Langevin, in which Brownian motion is modeled by means of a stochastic term in the mathematical model, many papers have been devoted to the description of this phenomenon \cite{Smoluchowski}, and the characteristic features of this behavior have been identified. Following the method of Van Hove \cite{VanHove}, Prigogine and Grigolini \cite{Prigogine,Grigolini} believed that Brownian motion can be derived from deterministic Hamiltonian models of classical mechanics.
This theory, based on Hamiltonian models of classical mechanics and the Liouville equation, involves serious difficulties, such as the enormous number of degrees of freedom and the long time scales relative to the duration of the process. The first deterministic model of Brownian motion was not proposed until Tref\'an {\it et al.} \cite{Trefan}. It is important to mention that two approaches can be distinguished: in the first, the dynamical model is based on a stochastic term; in the second there is no stochastic term, {\it i.e.}, the Brownian motion is deterministic. The idea of deterministic Brownian motion has been discussed for hydrodynamic systems and chemical reactors with oscillatory behavior, where the dynamics is completely deterministic and is usually referred to as ``microscopic chaos''. In this sense, Tref\'an {\it et al.} \cite{Trefan} and Huerta-Cuellar {\it et al.} \cite{huerta} have proposed Brownian motion generators based on the Langevin equation. The idea of Tref\'an {\it et al.} consists of replacing the stochastic term by a discrete dynamical system which generates pseudo-random numbers; the process drives a Brownian particle and has statistical properties that differ markedly from the standard assumption of Gaussian statistics, because the discrete dynamical system has a ``U-shaped'' probability distribution. The approach of Huerta-Cuellar {\it et al.} is in the same spirit as that of Tref\'an {\it et al.}, but the stochastic term is now controlled by a jerky equation, {\it i.e.}, by adding an additional degree of freedom to the Langevin equation it is possible to transform it into a system of three linear differential equations without a stochastic term; this process displays Gaussian-like probability distributions of the system variables, which differ from the standard assumption. Based on this approach, in this work we use fractional derivatives and obtain a deterministic Brownian motion with greatly improved Gaussian probability distributions of all the variables of the system.
The Langevin equation has been used in many areas, such as modeling evacuation processes \cite{Kosinski}, photoelectron counting \cite{Wodkiewicz}, analyzing the stock market \cite{Bouchaud}, studying fluid suspensions \cite{Hinch}, deuteron-cluster dynamics \cite{Takahashi}, protein dynamics \cite{Schluttig}, self-organization in complex systems \cite{Fraaije}, etc. For other applications of the Langevin equation in physical chemistry and electrical engineering, one can refer to Ref.~\cite{Coffey}. The classical study of Brownian motion via the Langevin equation rests on the hypothesis that the process is Markovian, {\it i.e.}, the random forces modeled by the stochastic term are independent, so the process has no memory. Although the Langevin equation plays an important role in many fields, there are still behaviors, such as anomalous diffusion (superdiffusion and subdiffusion), power laws, and long-range interactions, that the Langevin equation cannot describe well. Therefore, various fractional Langevin equations were proposed in Refs.~\cite{Coffey,Kobelev,Mainardi}. In this way the fractional Langevin equation can capture the aforementioned features that the ordinary Langevin equation cannot. Brownian motion is characterized by specific properties, such as a linear growth in time of the mean square displacement, Gaussian probability distributions of the system variables, and an exponential decay in time of the positional autocorrelation function. The correlation properties of a signal can be characterized by the detrended fluctuation analysis (DFA) technique developed by Peng {\it et al.} \cite{Peng}. In this paper a deterministic fractional model to generate Brownian motion, based on the Langevin equation and a jerky equation, is proposed in the same spirit as Ref.~\cite{huerta}; its behavior is characterized by time series analysis via DFA.
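Since DFA plays a central role in the characterization below, the following self-contained sketch (our illustration, not the authors' code; the window sizes and series length are arbitrary choices) estimates the DFA scaling exponent of a signal: an exponent near $0.5$ indicates uncorrelated noise, while an integrated (Brownian-like) signal yields an exponent near $1.5$.

```python
import numpy as np

def dfa_exponent(series, scales):
    """First-order DFA: slope of log F(s) versus log s."""
    profile = np.cumsum(series - np.mean(series))  # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        sq = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(sq)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
noise = rng.standard_normal(20000)
scales = [16, 32, 64, 128, 256]
print(dfa_exponent(noise, scales))             # close to 0.5 (white noise)
print(dfa_exponent(np.cumsum(noise), scales))  # close to 1.5 (Brownian-like)
```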
This paper is organized as follows: in Section 2, some basic concepts of the theory of fractional differential equations are introduced, such as fractional differential operators, stability theory for fractional systems, and a numerical method to solve fractional systems. In Section 3, the proposed model, based on a jerk equation and fractional derivatives, is presented together with its stability analysis. Section 4 contains the numerical results obtained with the proposed model and the statistical properties of the time series, analyzed by means of DFA to confirm the Brownian behavior; the maximum Lyapunov exponent is also computed. Finally, conclusions are drawn in Section 5. \section{\label{secc:cfrac}Basic concepts in fractional calculus} Fractional derivatives and integrals are generalizations of their integer-order counterparts. Nevertheless, in the literature one can find many different definitions of fractional derivatives, see Refs.~\cite{1,2,3,4}, the Riemann-Liouville and Caputo definitions being the most widely used \cite{1}. The Riemann-Liouville fractional derivative is defined as: \begin{equation} D_a^\alpha f(x)=\frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\int_a^x\frac{f(t)}{(x-t)^{\alpha-n+1}}dt, \end{equation} and the Caputo definition is given by: \begin{equation} D_a^\alpha f(x)=\frac{1}{\Gamma(n-\alpha)}\int_a^x\frac{f^{(n)}(t)}{(x-t)^{\alpha-n+1}}dt, \end{equation} with $n=\lceil{\alpha}\rceil$, where $\Gamma$ is the Gamma function, defined as: \begin{equation} \Gamma(z)=\int_0^\infty t^{z-1} e^{-t}dt. \end{equation} In fractional order systems the stability region depends on the derivative order $\alpha$, as depicted in Fig. 1 of Ref.~\cite{neto}. It is important to note that the stability of an equilibrium point can be controlled by means of the derivative order $\alpha$; for example, a hyperbolic saddle equilibrium point of an integer-order system can be transformed into a stable equilibrium point by changing the derivative order $\alpha$ of the system.
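To make the Caputo definition concrete, the following sketch (our illustration; the midpoint quadrature and the test function are arbitrary choices) evaluates $D^\alpha_0 f$ for $f(t)=t^2$ with $0<\alpha<1$, where the known closed form is $D^\alpha_0\, t^2 = \frac{2}{\Gamma(3-\alpha)}\,t^{2-\alpha}$:

```python
import math

def caputo(fprime, t, alpha, n=100000):
    """Caputo derivative of order 0 < alpha < 1 at time t:
    (1/Gamma(1-alpha)) * integral_0^t f'(z) (t-z)^(-alpha) dz,
    computed with the midpoint rule, which avoids the endpoint singularity."""
    h = t / n
    acc = sum(fprime((j + 0.5) * h) * (t - (j + 0.5) * h) ** (-alpha)
              for j in range(n))
    return acc * h / math.gamma(1 - alpha)

alpha, t = 0.5, 1.0
numeric = caputo(lambda z: 2 * z, t, alpha)           # f(t) = t^2, so f'(z) = 2z
exact = 2 * t ** (2 - alpha) / math.gamma(3 - alpha)  # known closed form
print(numeric, exact)  # the two values agree to about two decimal places
```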
This is an important consideration for designing a mathematical model of Brownian motion because we are interested in unstable dynamics. A general commensurate fractional order time invariant system is described as follows: \begin{equation} D^{n_k}_0x(t)=f(t,x(t),D^{n_1}_0x(t),D^{n_2}_0x(t),\cdots,D^{n_{k-1}}_0x(t)), \label{eq:general} \end{equation} subject to initial conditions \[ x^{(j)}(0)=x_0^{(j)}, \text{ with } j=0,1,\dots, \lceil{n_k}\rceil -1, \] where $n_1,n_2,\ldots, n_k\in {\mathbb Q}$, such that $n_k>n_{k-1}> \cdots >n_1>0$, $n_j-n_{j-1}\leq 1$ for all $j=2,3,\ldots,k$ and $0<n_1 \leq 1$. Let $M$ be the least common multiple of the denominators of $n_1,n_2,\ldots,n_k$ and set $\alpha=1/M$ and $N=Mn_k$. Then, according to Theorem 8.1 of Ref.~\cite{1}, equation (\ref{eq:general}) is equivalent to the following system of equations: \begin{eqnarray} D^\alpha_0x_0(t)&=& x_1(t), \nonumber\\ D^\alpha_0x_1(t)&=& x_2(t), \nonumber\\ &\vdots&\\ D^\alpha_0x_{N-2}(t)&=& x_{N-1}(t), \nonumber\\ D^\alpha_0x_{N-1}(t)&=& f(t,x_0(t),x_{n_1/\alpha}(t),\cdots,x_{n_{k-1}/\alpha}(t)), \nonumber \end{eqnarray} with initial conditions \[ x_j(0)=\left\{ \begin{array}{ll} x_0^{(j/M)}& \text{ if } j/M\in {\mathbb N} \cup \{0\},\\ 0 & \text{otherwise.} \end{array} \right. \] In the linear time-invariant case, the system can be expressed in matrix form as follows: \begin{equation} \frac{d^\alpha {\bf x}(t)}{dt^{\alpha}}=A{\bf x}, \label{eq:lineal} \end{equation} \noindent where ${\bf x}\in {\mathbb R}^n$ is the state vector, $A \in {\mathbb R}^{n \times n}$ is a linear operator, and $\alpha$ is the fractional commensurate derivative order, $0<\alpha<1$. The stability of this kind of system is stated as follows:\\ \begin{enumerate} \item [-] \emph{\bf Asymptotically stable:} The system (\ref{eq:lineal}) is asymptotically stable if and only if $|\arg(\lambda)| >\frac{\alpha\pi}{2}$ for all eigenvalues $\lambda$ of the matrix $A$.
In this case, the solution $x(t) \to 0$ as $t\to \infty$.\\ \item [-] \emph{\bf Stable:} The system (\ref{eq:lineal}) is stable if and only if $|\arg(\lambda)| \geq \frac{\alpha\pi}{2}$ for all eigenvalues $\lambda$ of the matrix $A$, provided that any critical eigenvalue, i.e., one satisfying $|\arg(\lambda)| = \frac{\alpha\pi}{2}$, has geometric multiplicity one. \end{enumerate} We are interested in unstable dynamics in order to generate Brownian motion, so the system is required to have at least one eigenvalue in the unstable region, {\it i.e.}, the system (\ref{eq:lineal}) is unstable if and only if $|\arg(\lambda)| < \frac{\alpha\pi}{2}$ for at least one eigenvalue $\lambda$ of the matrix $A$. Additionally, the system given by \eqref{eq:lineal}, with equilibrium point at the origin, can be generalized to an affine linear system as follows: \begin{equation} \frac{d^\alpha {\bf x}(t)}{dt^{\alpha}}=A{\bf x}+B, \label{eq:affinelineal} \end{equation} where $B\in {\mathbb R}^n$ is a constant vector and $A \in {\mathbb R}^{n \times n}$ is a nonsingular linear operator. The equilibrium point $p\equiv(x_1^*,x_2^*,\cdots,x_N^*)^T=-A^{-1}B$ of a general commensurate fractional order affine linear system (\ref{eq:affinelineal}), with fractional order $0 < \alpha < 1$, is a saddle equilibrium point if the eigenvalues $\lambda_1, \lambda_2, \ldots,\lambda_\kappa,\lambda_{\kappa+1},\ldots,\lambda_n $ of its Jacobian matrix evaluated at the equilibrium point fulfill the following condition: \begin{equation} \begin{array}{ll} |\arg(\lambda_i)|>\frac{\alpha\pi}{2}& \;\text{with}\; i=1,2,\ldots,\kappa,\\ |\arg(\lambda_i)|<\frac{\alpha\pi}{2}& \;\text{with}\; i=\kappa+1,\kappa+2,\ldots,n. \end{array} \label{saddlepoint} \end{equation} Note that we are interested in working with unstable systems, {\it i.e.}, systems that do not fulfill the local asymptotic stability condition \begin{equation} \min|\arg(\lambda_i)|>\frac{\alpha\pi}{2}, \text{ for}\; i=1,2,\ldots,n.
\label{critest} \end{equation} \subsection{Numerical method to solve fractional differential equations} There are no methods that provide the exact analytical solution of an arbitrary fractional differential equation, in contrast with the integer-order case; therefore it is necessary to use numerical methods. The Adams-Bashforth-Moulton (ABM) method, a predictor-corrector scheme, was reported in Ref.~\cite{Diethelmmet} and is used here to obtain the time evolution of fractional systems. The algorithm is a generalization of the classical Adams-Bashforth-Moulton integrator that is well known for the numerical solution of first-order problems such as switching systems \cite{neto}. The method is well understood and has been proven to be efficient in many practical applications \cite{Ford,183libro}. Consider a fractional differential equation of the form \eqref{eq:general}, written as: \begin{equation} \begin{matrix} D^{\alpha}x(t)=f(t,x(t)),~~~~~~0\leq t\leq T;\\ x^{(k)}(0)=x_0^{(k)},~~~~k=0,1,\dots,n-1. \end{matrix} \label{eq:metodo} \end{equation} We assume the function $f$ is such that a unique solution exists on some interval $[0,T]$, and that we are working on a uniform grid $\{ t_j = jh : j = 0,1, \dots ,N \}$ with some integer $N$ and $h = T/N$.
The solution of (\ref{eq:metodo}) is given by an integral equation of Volterra type, \begin{equation} x(t)=\sum_{k=0}^{\lceil \alpha\rceil -1}x_0^{(k)}\frac{t^k}{k!}+\frac{1}{\Gamma(\alpha)}\int_0^t(t-z)^{\alpha-1}f(z,x(z))dz, \end{equation} which the ABM scheme discretizes as \begin{equation} x_{k+1}=\sum_{j=0}^{\lceil \alpha\rceil -1}x_0^{(j)}\frac{t_{k+1}^j}{j!}+\frac{1}{\Gamma(\alpha)}\Big(\sum_{j=0}^ka_{j,k+1}f(t_j,x_j)+a_{k+1,k+1}f(t_{k+1},x_{k+1}^P)\Big), \end{equation} \noindent where \begin{equation} \begin{array}{l} a_{j,k+1}=\\ \\ \left\{ \begin{array}{ll} \displaystyle \frac{h^\alpha}{\alpha(\alpha+1)}(k^{\alpha+1}-(k-\alpha)(k+1)^\alpha), & \text{if}~~ j=0;\\ \displaystyle\frac{h^\alpha}{\alpha(\alpha+1)}((k-j+2)^{\alpha+1}+(k-j)^{\alpha+1} & \\ -2(k-j+1)^{\alpha+1}),& \text{if}~~ 1\leq j\leq k;\\ \displaystyle\frac{h^\alpha}{\alpha(\alpha+1)}, & \text{if}~~ j=k+1. \end{array} \right. \end{array} \end{equation} \noindent The predictor is given by \begin{equation} x_{k+1}^P=\sum_{j=0}^{\lceil \alpha\rceil -1}x_0^{(j)}\frac{t_{k+1}^j}{j!}+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^k b_{j,k+1}f(t_j,x_j), \end{equation} \noindent with \begin{equation} b_{j,k+1}=\frac{h^\alpha}{\alpha}((k+1-j)^\alpha-(k-j)^\alpha). \end{equation} The error of this approximation is given by \begin{equation} \max_{j=0,1,\dots,N}|x(t_j)-x_h(t_j)|=O(h^p). \end{equation} \section{Deterministic model to generate Brownian motion} Brownian motion originates from a particle suspended in a fluid. The motion of the particle is due to collisions with the molecules of the fluid, each collision changing the particle velocity by a small amount. Under normal conditions the suspended particle suffers about $10^{21}$ collisions per second, so the accumulated effect is considerable. Each of these collisions is determined by the preceding events, being produced by the physical interactions in the system.
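A compact implementation of the predictor-corrector scheme of the previous subsection, for the scalar case $0<\alpha\le 1$, is sketched below (our code; the test problem $D^\alpha x=-x$ is an arbitrary choice). For $\alpha=1$ the weights reduce to the classical scheme (Euler predictor, trapezoidal corrector), so the result can be checked against $x(t)=e^{-t}$.

```python
import math

def abm_fractional(f, x0, alpha, T, N):
    """Predictor-corrector ABM scheme for D^alpha x = f(t, x), 0 < alpha <= 1."""
    h = T / N
    x = [float(x0)]
    fs = []                          # cached values f(t_j, x_j)
    ga1 = math.gamma(alpha + 1)      # = alpha * Gamma(alpha)
    ga2 = math.gamma(alpha + 2)      # = alpha * (alpha + 1) * Gamma(alpha)
    for k in range(N):
        fs.append(f(k * h, x[k]))
        # predictor x^P_{k+1}, weights b_{j,k+1}
        xp = x0 + h ** alpha / ga1 * sum(
            ((k + 1 - j) ** alpha - (k - j) ** alpha) * fs[j] for j in range(k + 1))
        # corrector, weights a_{j,k+1}
        s = (k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha) * fs[0]
        s += sum(((k - j + 2) ** (alpha + 1) + (k - j) ** (alpha + 1)
                  - 2 * (k - j + 1) ** (alpha + 1)) * fs[j] for j in range(1, k + 1))
        s += f((k + 1) * h, xp)
        x.append(x0 + h ** alpha / ga2 * s)
    return x

sol = abm_fractional(lambda t, x: -x, 1.0, 1.0, 1.0, 200)
print(sol[-1], math.exp(-1.0))  # for alpha = 1 the scheme reproduces exp(-t)
```

Note that, unlike classical one-step integrators, every step sums over the full history of $f$ values, reflecting the memory of the fractional operator.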
Since each collision can be thought of as producing a kink in the path of the particle, one cannot hope to follow the path in any detail; the details of the path are infinitely fine. The Brownian particle thus makes a fluctuating movement, and stochastic models of Brownian motion follow the average motion of a particle rather than any particular path. The stochastic theory of Brownian motion of a free particle (in the absence of an external field of force) is generally governed by the Langevin equation \begin{equation} \begin{array}{l} \frac{d x}{dt}=v,\\ \frac{d v}{dt}=-\gamma \frac{d x}{dt} +A_f(t), \end{array} \label{eq:langevin} \end{equation} \noindent where $x$ denotes the particle position and $v$ its velocity. According to this equation, the influence of the surrounding medium on the particle motion can be split into two parts. The first term, $-\gamma \frac{d x}{dt}$, stands for the friction applied to the particle; the friction term is assumed to follow Stokes' law, by which a spherical particle of radius $a$ and mass $m$ is decelerated at the rate $6\pi a \sigma v/m$. Hence, the friction coefficient is given by \begin{equation} \gamma=6\pi a \sigma/m, \end{equation} \noindent where $\sigma$ denotes the viscosity of the surrounding fluid. The second term, $A_f(t)$, is the fluctuating acceleration which provides the stochastic character of the motion; it depends on the fluctuating force $F_f(t)$ as $A_f(t)=F_f(t)/m$, where $m$ is the particle mass. Two principal assumptions are made about the stochastic term $A_f(t)$ in order to produce Brownian motion: \begin{enumerate} \item [-]$A_f(t)$ is independent of $x$ and $v$. \item [-]$A_f(t)$ varies extremely fast compared with the variation of $v$. \end{enumerate} The latter assumption implies that there exists a time interval $\Delta t$ during which the variations in $v$ are very small.
Alternatively, we may say that although $v(t)$ and $v(t+\Delta t)$ are expected to differ by a negligible amount, no correlation between $A_f(t)$ and $A_f(t+\Delta t)$ exists, since it is a stochastic term. Based on fractional calculus, various fractional Langevin equations have been proposed to generate Brownian motion, see Refs.~\cite{Coffey,Kobelev,Mainardi}. These models differ from the usual Langevin equation in that the time derivatives are replaced by fractional derivatives of order $\alpha$, \begin{equation} \frac{d^\alpha v}{dt^\alpha}=-\gamma \frac{d^\alpha x}{dt^\alpha} +A_f(t), \label{eq:langevinfrac} \end{equation} \noindent where $x$ denotes the particle position and $v$ its velocity. We can rewrite the fractional Langevin equation (\ref{eq:langevinfrac}) as two fractional differential equations by a change of variables, \begin{equation} \begin{array}{lll} D^\alpha_0x&=& v, \\ D^\alpha_0v&=& -\gamma v+ A_f(t), \end{array} \label{sislanfrac} \end{equation} \noindent where $D^\alpha_0$ is the Caputo derivative operator. In order to obtain a deterministic model of Brownian motion, an additional degree of freedom is added to the system (\ref{sislanfrac}) so as to avoid the stochastic term, replacing the fluctuating acceleration $A_f(t)$. We exchange the stochastic term $A_f$ for a new variable $z$, defined by a third differential equation, as reported in Ref.~\cite{huerta}. The proposed variable $z$ acts as a fluctuating acceleration and produces deterministic dynamical motion without any stochastic term, while the behavior presents the statistical features of Brownian motion, as shown in previous work \cite{Trefan}. However, in our model the fluctuating acceleration depends directly on the position, velocity and acceleration through the jerky equation involved \cite{33huerta}.
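For comparison with the deterministic construction that follows, the classical stochastic Langevin pair (\ref{eq:langevin}) can be integrated with a simple Euler-Maruyama step. In the sketch below, $A_f(t)$ is modelled as Gaussian white noise; the noise strength and the function interface are illustrative assumptions, not parameters of the paper.

```python
import math
import random

def langevin_euler_maruyama(gamma, dt, n_steps, noise=1.0, seed=0):
    """Euler-Maruyama integration of dx/dt = v, dv/dt = -gamma*v + A_f(t),
    with A_f(t) taken as Gaussian white noise of strength `noise`
    (an illustrative choice -- the deterministic model replaces this term)."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    path = [x]
    kick = noise * math.sqrt(dt)        # white-noise increment scaling
    for _ in range(n_steps):
        v += -gamma * v * dt + kick * rng.gauss(0.0, 1.0)
        x += v * dt
        path.append(x)
    return path
```

Averaging $x^2$ over many such paths reproduces the linear growth of the mean square displacement; the deterministic model replaces the random kick by the variable $z$.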
When a particle moves in a fluid, friction and collisions with other particles in the environment necessarily produce changes in the velocity and acceleration of the motion; all these changes are accounted for by the jerky equation. Without loss of generality, we define our approach based on unstable dissipative systems (UDS) \cite{34huerta,35huerta} as follows: \begin{equation} \begin{array}{lll} D^{\alpha}{x}&=&v, \\ D^{\alpha}{v}&=&-\gamma v+ z,\\ D^{\alpha}{z}&=&-a_1 x -a_2 v -a_3 z -a_4(x), \end{array} \label{eq:modeloprop} \end{equation} \noindent where $a_{i}\in {\mathbb R}$, $i= 1,2,3$, are constant parameters and $a_4(x)\in {\mathbb R}$ acts as a piecewise-constant function, {\it i.e.}, a step function.\\ \begin{figure} \centering \includegraphics[width=0.70\textwidth]{./SW.eps} \caption{Projection of the switching surfaces (SW) onto the $(x,v)$ plane (blue lines); the red dot depicts a Brownian particle that moves along the dimension $x$. The SW delimit the potential regions, and each crossing by the particle represents a change of the potential acting on the particle.} \label{fig:SW} \end{figure} The first two equations of the fractional Langevin system \eqref{sislanfrac} are derived from the Langevin equation (\ref{eq:langevin}) with one small change: the stochastic term is replaced by a deterministic term, in the same spirit as \cite{34huerta,35huerta}. Now we construct switching surfaces (SW), see Figure \ref{fig:SW}. Without loss of generality, the SW are defined as planes perpendicular to the $x$ axis, so that domains are defined between these SWs, which act as the edges of each domain. In real systems, the SW can be seen as a multi-well potential with short fluctuation escape times, where each domain delimited by the SWs preserves its unstable behavior according to the linear part of the system. The parameter $a_4$ is defined as \begin{equation} a_4(x)=c_1\, round(x/c_2), \end{equation} where $c_1, c_2\in {\mathbb R}$ are constants.
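The right-hand side of the model (\ref{eq:modeloprop}) is straightforward to code. The sketch below implements the half-away-from-zero rounding used for the switching surfaces and the parameter values quoted in the numerical-results section; the function names are ours.

```python
import math

def round_half_away(x):
    # the round(x) used for the switching surfaces: ceil(x - 1/2) for x < 0,
    # floor(x + 1/2) for x >= 0 (not Python's built-in banker's rounding)
    return math.ceil(x - 0.5) if x < 0 else math.floor(x + 0.5)

def uds_rhs(t, state, gamma=7e-5, a1=1.5, a2=1.2, a3=0.1, c1=0.9, c2=0.6):
    """Right-hand side (x', v', z') of the fractional UDS model; the default
    parameter values are those used in the numerical-results section."""
    x, v, z = state
    a4 = c1 * round_half_away(x / c2)   # piecewise-constant step term a_4(x)
    return (v, -gamma * v + z, -a1 * x - a2 * v - a3 * z - a4)
```

Feeding `uds_rhs` component-wise to a fractional predictor-corrector integrator with $\alpha>\alpha_c$ produces the trajectories analyzed in the numerical results.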
Here, the $round(x)$ function is used to simplify the generation of the SW. It is defined as follows: \begin{equation}\label{ec_round_def} round(x)=\left\{% \begin{array}{l} \lceil x-1/2 \rceil, \text{ for } x<0;\\ \lfloor x+1/2 \rfloor, \text{ for } x \geq 0. \end{array}% \right. \end{equation} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.7\textwidth]{./retratobrown8248.eps}} \subfigure[]{\includegraphics[width=0.7\textwidth]{./retratobrown95.eps}} \caption{Solutions of the system (\ref{eq:modeloprop}) in phase space for different $\alpha$ values: (a) $\alpha=0.8248$, (b) $\alpha=0.9500$.} \label{fig:difalpha} \end{figure} \section{Numerical results} In this section, we numerically investigate the long-term behavior of the solutions of equation (\ref{eq:modeloprop}) by considering different derivative orders and the following parameter values: $\gamma=7\times 10^{-5};~~a_1 = 1.5; ~~a_2 = 1.2; ~~a_3 = 0.1;~~ c_1 = 0.9$ and $c_2 = 0.6$. It is worth mentioning that these parameter values are the same as those used to generate Brownian behavior with an integer-order system in Ref.~\cite{huerta}. To study the Brownian motion generated by Eq.~(\ref{eq:modeloprop}), we fix the parameter values and explore different derivative orders ($\alpha$ values). The stability of fractional-order systems is governed by Eq.~(\ref{critest}), so the local behavior near the equilibrium point is determined by the Jacobian of the system (\ref{eq:modeloprop}), which has the spectrum $\Lambda=\{ \lambda_1= -0.8304, \lambda_2= 0.3651 + 1.2935i, \lambda_3= 0.3651 - 1.2935i\}$. This spectrum determines the critical value of the derivative order, $\alpha_c\approx 0.8249$, below which the system (\ref{eq:modeloprop}) is stable: \begin{equation} \alpha<\frac{2}{\pi} \min_i|\arg(\lambda_i)|\approx 0.8249.
\end{equation} In accordance with the above, in order to keep the system (\ref{eq:modeloprop}) unstable so that it generates oscillatory behavior, we consider $\alpha>\alpha_c$. Figure \ref{fig:difalpha}~(a) shows a solution of the system (\ref{eq:modeloprop}) for $\alpha=0.8248$ and initial condition $(x_0,v_0,z_0)^T=(1.0,1.0,1.0)^T$; this derivative order results in stable behavior because $\alpha<\alpha_c$. On the other hand, Figure \ref{fig:difalpha}~(b) shows a solution of the system (\ref{eq:modeloprop}) for $\alpha=0.95$ and the same initial condition; this derivative order results in unstable behavior because $\alpha>\alpha_c$. Numerical simulations are performed using the Adams-Bashforth-Moulton algorithm, exploring different $\alpha$ values. Figure \ref{fig:Brown95} shows a time series of the particle position for $\alpha=0.95$, in which the characteristic behavior of Brownian motion can be clearly seen. The trajectory is determined by the initial conditions and the parameter values mentioned above. Since there are many steps of short duration and few steps of long duration, the predicted short-time behavior of the mean square displacement is observed. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{./Brown95.eps} \caption{Time series $x$ of deterministic Brownian motion generated by the proposed model \eqref{eq:modeloprop} with $\alpha=0.95$.} \label{fig:Brown95} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{./desplafracc.eps} \includegraphics[width=0.95\textwidth]{./disx.eps} \includegraphics[width=0.95\textwidth]{./disv.eps} \includegraphics[width=0.95\textwidth]{./disz.eps} \caption{Statistical properties of the system (\ref{eq:modeloprop}). (a) shows a linear growth of the mean square displacement.
Probability densities of the motion, estimated by normalized histograms (dotted blue curves), for (b) displacement, (c) velocity and (d) acceleration, compared with theoretical Gaussian distributions (dotted red curves).} \label{fig:dis} \end{figure} Figure \ref{fig:dis} shows the statistical properties obtained from the time series of the system (\ref{eq:modeloprop}). Figure~\ref{fig:dis}~(a) displays the predicted linear growth in time of the mean square displacement, in agreement with traditional Brownian motion. Figures~\ref{fig:dis}~(b),~(c) and~(d) show the probability distributions of the particle displacement, velocity and acceleration, respectively, together with zero-mean Gaussian distributions; the distributions of the motion generated by our system are closer to Gaussian than those obtained with the integer-order system of Ref.~\cite{huerta}. As is well known, strong sensitivity to initial conditions is an essential characteristic of chaos. It is also essential in Brownian motion, and both behaviors can be characterized by a positive leading Lyapunov exponent. Even so, the dynamics are completely different: Brownian motion, like noise, does not form an attractor in phase space, {\it i.e.}, its trajectories are unbounded. Brownian and noise trajectories tend to infinity, while chaotic dynamics generate an attractor localized within a certain region of phase space, so chaotic trajectories are bounded. Figure \ref{fig:lyap} shows the maximum Lyapunov exponent of the proposed system, computed following Ref.~\cite{rosenstein}, confirming the noise-like character of the Brownian motion.
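The critical derivative order quoted above can be checked directly from the spectrum $\Lambda$ of the Jacobian; the following short sketch evaluates the stability bound $\alpha<\frac{2}{\pi}\min_i|\arg(\lambda_i)|$ of Eq.~(\ref{critest}).

```python
import cmath
import math

def critical_order(eigenvalues):
    """Critical Caputo order alpha_c = (2/pi) * min_i |arg(lambda_i)|;
    the equilibrium is stable for alpha < alpha_c."""
    return (2.0 / math.pi) * min(abs(cmath.phase(lam)) for lam in eigenvalues)

# spectrum of the Jacobian of the proposed system
spectrum = [-0.8304, 0.3651 + 1.2935j, 0.3651 - 1.2935j]
alpha_c = critical_order(spectrum)   # approx. 0.8249
```

The bound is set by the unstable complex pair, since the negative real eigenvalue has $|\arg(\lambda_1)|=\pi$; this confirms that $\alpha=0.95$ lies in the unstable regime while $\alpha=0.8248$ does not.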
\begin{figure}[h] \centering \includegraphics[width=0.70\textwidth]{./lyapfracc.eps} \caption{Maximum Lyapunov exponent obtained for the proposed system (\ref{eq:modeloprop}) with $\alpha=0.95$.} \label{fig:lyap} \end{figure} Finally, the correlation properties of the signals generated by \eqref{eq:modeloprop} are characterized by detrended fluctuation analysis (DFA), developed by Peng {\it et al.} \cite{Peng}. Since the DFA technique helps to verify that the proposed system generates Brownian motion, we apply the DFA evaluation method to the time series obtained with the parameter values used above. DFA is an important tool for the detection of long-range autocorrelations in time series with non-stationarities. It is based on random walk theory and consists of a scaling analysis. The main advantages of DFA over many other methods are that it allows the detection of long-range correlations of a signal embedded in seemingly non-stationary time series, and that it avoids the spurious detection of apparent long-range correlations that are an artifact of non-stationarity. The DFA procedure consists of the following steps: \begin{enumerate} \item [*]Compute the time series mean $\bar{x}$. \item [*]The time series (of total length $N$) is first integrated, \begin{equation} y(k)=\sum_{i=1}^k[x(i)-\bar{x}]. \end{equation} \item [*]The integrated time series is divided into boxes of equal length $n$. The local trend in each box is obtained by a least-squares fit and removed. \item [*]The root-mean-square fluctuation of the integrated and detrended time series is calculated by \begin{equation} f(n)=\sqrt{\frac{1}{N}\sum_{k=1}^N[y(k)-y_n(k)]^2}. \end{equation} \item [*]The fluctuations can be characterized by a scaling exponent $\eta$, the slope of the line relating $\log f(n)$ to $\log n$, \begin{equation} f(n)\sim n^\eta.
\end{equation} \end{enumerate} When the scaling exponent $\eta > 0.5$, three distinct regimes can be defined as follows.\\ \begin{enumerate} \item [1]If $\eta \approx 1$, DFA indicates $1/f$ noise. \item [2]If $\eta > 1$, DFA indicates non-stationary or unbounded behavior. \item [3]If $\eta \approx 1.5$, DFA indicates Brownian motion or Brownian noise. \end{enumerate} The scaling law with $\eta= 1.5401$ revealed by the DFA and shown in Figure \ref{fig:dfa} confirms the Brownian behavior. \begin{figure}[h] \centering \includegraphics[width=0.70\textwidth]{./dfafrac.eps} \caption{$\eta \approx 1.5$ obtained by DFA indicates the Brownian character of the observed motion.} \label{fig:dfa} \end{figure} \section{\label{sec:Conclusions}Concluding remarks} A fractional deterministic model that generates Brownian motion has been presented, obtained by considering a fractional derivative order and the jerk equation instead of the stochastic process and integer derivative order of the Langevin equation. These changes modify the Langevin equation by adding an additional degree of freedom, so a three-dimensional model was obtained. By considering a fractional derivative order, the statistical properties were improved compared to the integer-order case. The new variable introduced into the system, defined by a third differential equation, has a Gaussian probability density, which was confirmed by numerical simulations. The statistical analysis of the time series obtained with the proposed model displayed the typical characteristics of Brownian motion, namely a linear growth of the mean square displacement and Gaussian probability density distributions for displacement, velocity and acceleration. Furthermore, the Brownian behavior was confirmed by an approximately 1.5 power-law scaling of the fluctuations. These results show that the time series obtained from the proposed model fulfill the main characteristics of Brownian motion generated via stochastic processes.
Based on these results, obtained using unstable dissipative systems, the methodology presented in this work could be used to construct models with external force fields or models of anomalous diffusion. A further direction is the development of adequately realistic models based on experimental time series, in order to reproduce Brownian motion in real systems. \section{Acknowledgements} H.E.G.V is a doctoral fellow of the CONACYT in the Graduate Program on control and dynamical systems at DMAp-IPICYT. \section*{References}
\section{Introduction} Subdwarf B stars (sdBs) are core-helium burning stars with thin hydrogen envelopes. They are situated between the main sequence and the white dwarf cooling track at the blueward extension of the horizontal branch, the so-called Extreme or Extended Horizontal Branch \citep{ heb1984, heb1986,saf1994}. Subdwarf B stars have colours and spectral characteristics corresponding to those of a B star, but the Balmer lines are abnormally broad for the colour compared with population I main-sequence B stars due to their high surface gravities ($\log g \simeq 5.0 - 6.0$). Subdwarf B stars have a typical mass of 0.5$\mathrm{\,M_{\odot}}$ \citep{heb1984} and can be found in all Galactic populations. They are thought to be the dominant source for the UV-upturn in early type galaxies \citep{fer1991, bro2000}. A fraction of sdBs pulsate \citep{cha1996, kil1997}, giving great opportunities to derive fundamental parameters (e.g. the stellar mass) and study their internal structure in detail \citep{gre2003, fon2008, ost2009, ost2010}. Subdwarf B stars are also suggested to be very useful as age indicators using evolutionary population synthesis \citep{bro1997}, or as distance indicators \citep{kil1999}. Since a large fraction of sdBs are members of binary systems \citep{max2001} and because they are intrinsically bright and ubiquitous, they are therefore an ideal population in which to study binary star evolution. For a comprehensive review on hot subdwarf stars we refer the reader to \citet{heb2009}. \citet{han2003} describe in detail the formation and evolution of sdBs by using binary population synthesis models. They find that sdBs form via five main evolutionary channels: the first and second common envelope channels, the first and second stable Roche lobe overflow channels and the helium white dwarf merger channel. This last channel is the only one that results in the formation of single sdBs. 
They find that the contribution of the second Roche lobe overflow channel is not significant, leaving only three channels to form sdB binaries. Each of these three binary formation channels predicts a different orbital period distribution for the population of sdBs. The binaries formed through the first common envelope channel should display orbital periods between 0.5 and $\sim$40 days and the companions to the sdBs will be main sequence stars. Binaries formed via the second common envelope channel are expected to have white dwarf companions and their range of orbital periods will be wider, extending further into the short periods but not to long periods. Note as well that these common envelope phases are not very well understood. \citet{nel2000} have concluded from the observed double white dwarf population that its outcome may not always be a strong reduction of the orbital separation. Finally, sdB binaries formed through the first stable Roche lobe overflow channel will have main sequence companions and will display orbital periods between 0.5 and $\sim$2000 days. \citet{han2003} conclude that their set 2 of simulations is the model that best describes the observed sample of short period sdB binaries \citep{mor2003}. In this particular model (and also in 9 out of the 12 models they describe) the majority of sdB binaries, between 60 and 70 percent of the total, are formed via the first stable Roche lobe overflow channel. At the same time, this is the channel most affected by observational selection effects decreasing the number of observable sdB binaries formed through this channel. The first of these effects is that sdBs with companions brighter than the sdB itself will not be identified as sdBs at all. The second is an observational limitation: radial velocity variations of a long orbital period system are smaller and take longer to determine than those of a short-period one.
Despite extensive observational work, not a single system has been found in this long-period regime (first stable RLOF channel), whereas at present $\sim$100 sdB binaries with short periods are confirmed (\citealt{gei2011} and \citealt{cop2011}). The orbital periods are mostly below 1 day, with a median period of 0.61 days. However, in this work we will report on PG\,1018--047, the first truly long-period subdwarf B binary. While this may be the product of binary evolution as suggested by the \citet{han2002, han2003} models, we will find that it could also be the remnant of a hierarchical triple, as recently outlined by \citet{CW2011}. Our target, PG\,1018--047, has an apparent visual (Str\"omgren) magnitude $m_{y} = 13.32$ and was discovered as an ultraviolet-excess stellar object in the Palomar-Green Survey \citep{GSL1986}. It was subsequently observed by \citet{max2001} to check for radial velocity variations. Although weak spectral features from a late-type companion are visible in the spectrum, \citet{max2001} did not find any significant radial velocity shifts using their variability criteria. This led to the conclusion that PG\,1018--047 is probably not a binary with a short orbital period. The presence of the companion in the spectrum prompted continued follow-up of the system in order to determine how such a binary could have formed in the first place. We present the results after more than a decade of monitoring.\\ \section{Observations and reduction} We have observed PG\,1018--047 spectroscopically with several different instrument setups over a period of ten years. In Table \ref{tbl:obs} we summarize the observing dates, the setup used in each case, the wavelength range covered and the number of spectra obtained during each epoch.
The data were obtained using the Isaac Newton Telescope (INT), William Herschel Telescope (WHT)\footnote{Both the INT and WHT belong to the Isaac Newton Group of Telescopes (ING).} and Nordic Optical Telescope (NOT) on the Island of La Palma, the Radcliffe telescope at the South African Astronomical Observatory (SAAO) and the Hobby-Eberly Telescope (HET) located at the McDonald Observatory in Texas. The different instrument setups were as follows. For the \textbf{INT-R}ed spectra the intermediate dispersion spectrograph (IDS) was used. It is a long-slit spectrograph mounted on the Cassegrain Focal Station of the INT. The 500\,mm camera together with the high resolution R1200R grating and a windowed Tek5 CCD centered on $\lambda=6560\,\mathrm{\AA}$ covered the $H_{\alpha}$ region. A 1\,arcsec slit was used. \textbf{INT-B}lue: The INT with the IDS, equipped with the 235 camera, the R1200B grating and a windowed EEV10 CCD was used to obtain these blue spectra. The 2002-2007 and 2008-2009 spectra were centered on $\lambda=4348\,\mathrm{\AA}$ and $\lambda=4505\,\mathrm{\AA}$, respectively, covering as many Balmer lines to the blue as possible, including $H_{\beta}$ at 4861.327\,\AA. For all exposures a 1\,arcsec slit was used. The \textbf{SAAO} data were obtained using the Radcliffe 1.9\,m telescope together with the grating spectrograph plus the SITe back-illuminated CCD. Grating 4, with 1200 grooves per millimeter, was used to obtain spectra covering $H_{\gamma}$ and $H_{\beta}$ with a dispersion of 0.5\,\AA/pix and a resolution of 1\,$\mathrm{\AA}$ at 4600\,\AA. The slit width varied from 1.2 to 1.5\,arcsec depending on the seeing. \textbf{WHT-R}ed: The WHT was equipped with the double arm Intermediate dispersion Spectrograph and Imaging System (ISIS). The R1200R grating and the Red+ CCD were used to obtain the red spectra centered on $\lambda=6560\,\mathrm{\AA}$ (2007 data) and on $\lambda=6521\,\mathrm{\AA}$ (2009 data).
A slit width of 1.2\,arcsec was used for the 2007 observations and a 1\,arcsec slit for the 2009 ones. The setup \textbf{WHT-B}lue denotes WHT data obtained using the ISIS spectrograph with the R600B grating and the blue EEV10. The grating was centered on $\lambda=4388\,\mathrm{\AA}$ with a 1\,arcsec slit (2006 observations), on $\lambda=4500\,\mathrm{\AA}$ with a 1.5\,arcsec slit (2007 data), $\lambda=4339\,\mathrm{\AA}$ with a slit width of 0.62\,arcsec (2008) and on $\lambda=4349\,\mathrm{\AA}$ with a 1.04\,arcsec slit during the 2009 observations. \begin{table*} \centering \begin{minipage}{160mm} \caption{Journal of observations. Observers: P. F. L. Maxted (P. M.), T. Augustijn (T. A.), T. R. Marsh (T. M.), Luisa Morales-Rueda (L. M.), G. Nelemans (G. N.), C. Copperwheat (C. C.), R.A. Wade (R. W.) and M. A. Stark (M. S.). }\label{tbl:obs} \centering \begin{tabular}{lccccc}\hline\hline Date & Setup & $\lambda$ region & \# Spectra & Mean dispersion & Observer(s) \\ & & & & ($\rm\AA/pixel$) & \\\hline 11 - 19/04/00 & INT-R & $H_{\alpha}$ & 4 & 0.39 & P. M. \\ 08 - 13/03/01 & INT-R & $H_{\alpha}$ & 11 & 0.39 & P. M. \\ 01 - 07/05/01 & INT-R & $H_{\alpha}$ & 9 & 0.39 & P. M. \\ 26 - 30/03/02 & SAAO & Blue & 6 & 0.49 & T. M. \\ 25 - 27/04/02 & INT-B & Blue & 6 & 0.48 & T. A. \& T. M. \\ 09 - 16/04/03 & INT-B & Blue & 27 & 0.48 & T. M. \\ 30/03 - 05/04/04 & SAAO & Blue & 5 & 0.49 & L. M. \\ 23 - 24/06/05 & SAAO & Blue & 2 & 0.50 & L. M. \\ 06/02/06 & WHT-R & $H_{\alpha}$ & 2 & 0.22 & G. N. \\ & WHT-B & Blue & 2 & 0.44 & G. N. \\ 09/03/07 & WHT-B & Blue & 4 & 0.44 & service \\ 27/03 - 07/04/07 & INT-B & Blue & 11 & 0.48 & T. M. \\ 29 - 31/03/07 & WHT-R & $H_{\alpha}$ & 5 & 0.25 & G. N. \\ & WHT-B & Blue & 5 & 0.44 & G. N. \\ 21 - 22/03/08 & INT-B & Blue & 4 & 0.48 & C. C. \\ 01/05/08 & WHT-R & $H_{\alpha}$ & 2 & 0.49 & P. M. \\ & WHT-B & Blue & 2 & 0.44 & P. M. \\ 11/03/09 & INT-B & Blue & 2 & 0.48 & C. C. \\ 30/04/09 & WHT-R & $H_{\alpha}$ & 4 & 0.25 & T. M. 
\\ & WHT-B & Blue & 4 & 0.44 & T. M. \\ 03/04/10 & NOT & $H_{\alpha}$ + Blue & 1 & 0.03 & service \\ 06/12/07 - 23/03/10 & HET & $H_{\alpha}$ + Blue & 7 & 0.12 & R. W. \& M. S. \\\hline\hline \end{tabular} \end{minipage} \end{table*} The \textbf{NOT} data were taken using the 2.56\,m Nordic Optical Telescope. The FIES (Fibre-fed Echelle Spectrograph) highest resolution fiber (the 1.3 arcsec fibre offering a spectral resolution of R = 67000) covered the entire spectral range 3700 - 7300 $\mathrm{\AA}$ without gaps in a single fixed setting. The final setup, \textbf{HET}, refers to data from the bench-mounted echelle fibre-fed High Resolution Spectrograph, mounted on the 9.2m Hobby-Eberly Telescope operated in its R = 15000 resolution mode. In the ``2x3'' on-chip binning mode that was used, the dispersion was about 6 km/s per binned output pixel. A 2-arcsecond optical fibre was used for the stellar target, and two additional fibers were used to record the sky spectrum. A cross-dispersing grating with 600 grooves per millimeter was used, centering $\lambda \approx 5822\,\mathrm{\AA}$ at the boundary between the ``blue'' and ``red'' CCDs. The useful wavelength coverage extended from 4810$\,\mathrm{\AA}$ to 6760$\,\mathrm{\AA}$. To reduce the spectra from the INT, WHT and SAAO, standard Starlink routines were used. Flatfields were taken to correct for the pixel-to-pixel variations in the CCD and the bias correction was carried out by using the overscan region in each CCD frame. The objects were extracted with the optimal extraction algorithm of \citet{mar1989}. CuAr+CuNe arc spectra were taken before and after each target spectrum or after each set of two spectra at the target's position to calibrate these in wavelength.
Fourth order polynomials were computed to fit the lines in the arcs and the solutions were used for the calibration of the corresponding spectra.\\ The NOT data were reduced with the automatic data reduction software package FIEStool\footnote{Developed by Eric Stempels (http://www.not.iac.es/instru-ments/fies/fiestool/FIEStool.html)}, which makes use of the IRAF and NumArray packages via a Python interface. After preprocessing, the raw frames were debiased and subsequently divided by a 2D normalized flatfield, correcting for the shape of each spectral order. Next the science spectra were extracted using the optimal extraction algorithm from \citet{hor1986} and corrected for the blaze shape. The wavelength calibration was done from a ThAr lamp spectrum taken right before the science data. For the HET spectra we used standard IRAF tasks, organized using ``pipeline'' scripts, to process the images and extract the spectra. Observations of PG\,1018--047 were taken as pairs of 750-second exposures. Each pair was combined within IRAF, using the ``crreject'' option in the task ``imcombine'' to reduce cosmic-ray contamination; additional rejection of cosmic-ray artifacts was done later by hand. As a result of this observational effort, we have a total of 125 spectra of PG\,1018--047. \section{Results} \subsection{The optical spectrum of PG\,1018--047} \begin{figure*} \centering \begin{minipage}{160mm} \centering \epsfig{file=Fig1.eps, width=6.35cm, angle=270} \caption{The mean spectrum of PG\,1018--047 for the region covering H$_\beta$ to H$_\iota$ (left) and the region around H$_\alpha$ (right). Plotted below the mean spectrum (shifted down for clarity) is a model spectrum for an sdB star (see text for details). Lines from the K-dwarf are clearly seen in the red part, but the lines in the blue are from the sdB. }\label{fig:optspec} \end{minipage} \end{figure*} The normalised optical spectrum of PG\,1018--047 is shown in Figure \ref{fig:optspec}.
It is obtained by averaging over a number of the ING/SAAO spectra taken at the same orbital phase, calculated using our best orbital solution from section \ref{sec:orbsol}. In order to normalise the spectra, we fitted third order polynomials to the regions free from absorption lines and divided the spectra by these fits. The mean spectrum shows several metal lines in the region between H$_\beta$ and H$_\gamma$. However, the \ion{He}{i} line at 4472\,\AA, which is typical of sdB stars, is absent. Instead, the \ion{Si}{iii} triplet (4553, 4568, 4575\,\AA) is the strongest feature. Several lines in the region between 4600 and 4700\,\AA\ can be identified with \ion{O}{ii} lines and possibly also \ion{N}{ii}. We made a fit to the mean spectrum of PG\,1018--047 using the LTE model grids of Heber et al.~(2000), with explicit metals of solar composition and abundances depleted by 0.0, 0.5, 1.0, 1.5 and 2.0 dex relative to solar. A reasonable fit is achieved with $T_{\rm eff}$\,=\,30500\,$\pm$\,200 K, log\,$g$\,=\,5.50\,$\pm$\,0.02, and with the N and O abundance 1/10 of the solar value. In Figure~\ref{fig:optspec}, a synthetic spectrum with these parameters, broadened to match the resolution of the observed spectrum, is shown. The model spectrum contains helium at a fraction log\,N(He)/N(H) = -3.0, but even this is clearly too much. In order to make the helium lines fit the observed spectrum, the model must be depleted to log\,N(He)/N(H) $<$ -\,4. It is also clear that the abundances of the various elements are quite far from solar composition relative to each other, which is not unusual for sdBs (Heber et al.~2000). Note that the K-star contributes some light also in the blue part of the spectrum, but not enough to make any of its lines clearly visible there. However, the contribution to the continuum might still be sufficient to affect the fitting procedure. In the H$_\alpha$ region, by contrast, metal lines from the K-star are clearly seen.
Our high S/N mean spectrum has too low resolution to reliably infer the abundances of the individual components, and our high resolution spectra have insufficient S/N. \begin{table*} \centering \begin{minipage}{160mm} \caption{Narrow absorption lines present in the optical spectrum (Figure \ref{fig:optspec}). The identification was done using the mid- and high-resolution spectral library of \citet{mon1997} and also \citet{ral2010}. The lines with a * were included in the template to determine the radial velocities of the secondary (see section \ref{sec:sec}). For reference, we have added the wavelengths for the Balmer lines we adopted in our analysis as well.}\label{tbl:seclines} \centering \begin{tabular}{clclclcl} \hline\hline $\lambda$ (\AA) & Element & $\lambda$ (\AA) & Element & $\lambda$ (\AA) & Element & $\lambda$ (\AA) & Element \\\hline 4153.30 & \ion{O}{ii} & 4596.17 & \ion{O}{ii} & 6337.28* & & 6609.05* & \\ 4164.79 & \ion{Fe}{iii}, blend & 4630.54 & \ion{N}{ii} & 6359.45* & & 6614.42* & \\ 4189.80 & \ion{O}{ii} & 4639.70 & \ion{O}{ii}, blend & 6363.71* & & 6637.09* & \\ 4253.59 & \ion{S}{iii} + \ion{O}{ii} & 4642.26\,\, & \ion{O}{ii}, \ion{N}{ii} & 6394.45* & & 6644.39* & \\ 4276.74 & \ion{O}{ii}, blend & 4649.14\,\, & \ion{O}{ii} & 6400.43* & & 6664.37* & \\ 4414.91 & \ion{O}{ii} & 4661.04\,\, & \ion{O}{ii} & 6408.96* & & 6678.97* & \\ 4416.58 & \ion{O}{ii} & 4676.23\,\, & \ion{O}{ii} & 6412.37* & & & \\ 4442.49 & \ion{O}{ii} & 4700.31\,\, & \ion{O}{ii}, blend & 6422.05* & & 4101.735 & $H_{\delta}$ \\ 4552.65 & \ion{Si}{iii} & 4705.44\,\, & \ion{O}{ii} & 6431.44* & & 4340.465 & $H_{\gamma}$ \\ 4567.87 & \ion{Si}{iii} & 4710.04\,\, & \ion{O}{ii} & 6439.70* & & 4861.327 & H$_{\beta}$ \\ 4574.78 & \ion{Si}{iii} & & & 6450.51* & & 6562.800 & H$_{\alpha}$ \\ 4590.57 & \ion{O}{ii} & 6335.83* & & 6456.49* & & & \\ \hline\hline \end{tabular} \end{minipage} \end{table*} \subsection{The orbit of PG\,1018--047}\label{sec:orbsol} \subsubsection{Radial velocity 
measurements}\label{sec:radvelmeas} The radial velocities (RVs) of the INT, WHT, SAAO and NOT spectra were determined following the procedure described by \citet{mor2003}, i.e. least squares fitting of a line profile model. This line profile model was built up from three Gaussians per Balmer line with different widths and depths, but with a common central wavelength position which varies between the spectra. The parameters of the Gaussians were optimized by comparing the model to the normalized average spectrum over all observations; see \citet{mmm2000c} for further details of this procedure. For the blue spectra we fitted $H_{\beta}$, $H_{\gamma}$\ and $H_{\delta}$\ simultaneously, whereas for the red spectra only the $H_{\alpha}$\ line can be fitted. The RV of the NOT spectrum was determined using a model containing $H_{\alpha}$, $H_{\beta}$\ and $H_{\gamma}$\ (see also Table \ref{tbl:seclines}). The radial velocities from the 7 HET spectra were obtained from the $H_{\beta}$\ absorption line using simple Gaussian fitting to the core of the line within the IRAF ``splot'' task. A list of the measured radial velocities and their uncertainties is given in Table \ref{tbl:radvel}. \begin{table*} \centering \begin{minipage}{160mm} \caption{The 125 radial velocity measurements for PG\,1018--047 with their formal errors from the least squares fitting routine.
}\label{tbl:radvel} \centering \begin{threeparttable} \begin{tabular*}{14cm}{p{2cm}p{2cm}p{2cm}p{2cm}p{2cm}p{2cm}}\hline\hline HJD & RV & HJD & RV & HJD & RV \\ -2450000 & (km/s) & -2450000 & (km/s) & -2450000 & (km/s) \\\hline 1646.47135 & 31.3 $\pm$ 4.4 &2741.52795 & 49.8 $\pm$ 3.0 &4189.54710 & 52.1 $\pm$ 1.4 \\ 1646.47650 & 20.5 $\pm$ 4.3 &2741.53909 & 53.3 $\pm$ 3.1 &4189.54712 & 42.9 $\pm$ 1.0 \\ 1654.46074 & 24.7 $\pm$ 3.8 &2741.54857 & 48.8 $\pm$ 3.4 &4190.40980 & 51.9 $\pm$ 1.0 \\ 1654.46780 & 32.1 $\pm$ 3.7 &2741.55807 & 48.2 $\pm$ 4.3 &4190.40982 & 46.0 $\pm$ 0.7 \\ 1977.48220 & 49.6 $\pm$ 5.3 &2743.42040 & 50.4 $\pm$ 4.5 &4190.42420 & 53.5 $\pm$ 1.0 \\ 1978.62533 & 55.2 $\pm$ 3.8 &2743.43451 & 60.3 $\pm$ 4.5 &4190.42422 & 50.7 $\pm$ 0.7 \\ 1979.54044 & 45.0 $\pm$ 3.2 &2743.52128 & 52.2 $\pm$ 6.3 &4191.59677 & 52.1 $\pm$ 1.4 \\ 1979.59958 & 65.5 $\pm$ 3.3 &2744.35895 & 54.3 $\pm$ 3.5 &4191.59677 & 44.1 $\pm$ 1.0 \\ 1979.60655 & 54.0 $\pm$ 3.3 &2744.37308 & 49.7 $\pm$ 2.8 &4193.37192 & 51.4 $\pm$ 7.8 \\ 1982.51489 & 52.2 $\pm$ 3.9 &2744.39186 & 49.4 $\pm$ 2.8 &4195.35718 & 41.6 $\pm$ 1.9 \\ 1982.52185 & 54.6 $\pm$ 4.1 &2744.40598 & 51.4 $\pm$ 2.7 &4195.37121 & 36.4 $\pm$ 1.7 \\ 1982.55421 & 57.8 $\pm$ 4.0 &2744.42529 & 53.9 $\pm$ 2.6 &4195.61447\tnote{$\spadesuit$} & 35.5 $\pm$ 2.5 \\ 1982.56118 & 52.6 $\pm$ 3.9 &2744.43941 & 52.2 $\pm$ 2.6 &4195.62457\tnote{$\spadesuit$} & 38.1 $\pm$ 4.5 \\ 1982.60059 & 57.5 $\pm$ 3.9 &2745.37982 & 48.8 $\pm$ 3.5 &4197.51456 & 43.8 $\pm$ 1.8 \\ 1982.60755 & 53.2 $\pm$ 4.0 &2745.39394 & 45.3 $\pm$ 3.1 &4197.52858 & 43.6 $\pm$ 1.9 \\ 2031.36769 & 62.1 $\pm$ 8.3 &2745.40987 & 51.4 $\pm$ 2.8 &4441.00811 & 39.7 $\pm$ 2.0 \\ 2031.37466 & 41.4 $\pm$ 8.4 &2745.42399 & 51.9 $\pm$ 2.7 &4469.93158 & 36.0 $\pm$ 2.0 \\ 2031.38641 & 49.5 $\pm$ 4.4 &2746.44131 & 54.6 $\pm$ 2.5 &4502.85094 & 33.5 $\pm$ 2.0 \\% HET 3 2032.48243 & 54.9 $\pm$ 3.0 &2746.45544 & 51.0 $\pm$ 2.5 &4547.37872 & 33.0 $\pm$ 1.7 \\ 2032.49633 & 53.6 $\pm$ 3.5 
&2746.47412 & 55.4 $\pm$ 2.2 &4547.40665 & 31.1 $\pm$ 1.6 \\ 2033.39672 & 53.8 $\pm$ 4.1 &2746.48824 & 44.8 $\pm$ 2.3 &4547.62144 & 29.6 $\pm$ 2.0 \\ 2033.40369 & 53.8 $\pm$ 4.3 &3095.38227 & 26.8 $\pm$ 6.7 &4547.64844 & 32.4 $\pm$ 2.2 \\ 2037.45584 & 54.1 $\pm$ 3.8 &3095.40336 & 19.0 $\pm$ 6.6 &4550.71405 & 26.7 $\pm$ 2.0 \\% HET 4 2037.46282 & 50.9 $\pm$ 3.5 &3098.33704 & 12.9 $\pm$ 9.8 &4562.68846 & 29.6 $\pm$ 2.0 \\% HET 5 2360.36683\tnote{*} & -33.8 $\pm$ 25.0 &3101.31890 & 18.8 $\pm$ 9.2 &4588.35123 & 29.7 $\pm$ 2.5 \\ 2360.37745\tnote{*} & -4.6 $\pm$ 22.9 &3101.32956 & 19.7 $\pm$ 8.7 &4588.35125\tnote{$\clubsuit$} & 34.2 $\pm$ 2.6 \\ 2362.36641\tnote{*} & 17.5 $\pm$ 19.3 &3545.21953 & 51.4 $\pm$ 7.2 &4588.35646\tnote{$\clubsuit$} & 22.8 $\pm$ 2.6 \\ 2362.38049\tnote{*} & 5.4 $\pm$ 20.3 &3546.20756 & 52.6 $\pm$ 5.8 &4588.35648 & 29.5 $\pm$ 2.4 \\ 2364.39275 & 23.1 $\pm$ 6.0 &3772.55466 & 35.6 $\pm$ 0.8 &4902.42320 & 34.8 $\pm$ 4.5 \\ 2364.40338 & 17.9 $\pm$ 6.7 &3772.55467 & 26.7 $\pm$ 1.2 &4902.44418 & 36.6 $\pm$ 4.9 \\ 2390.51113 & 34.6 $\pm$ 17.1 &3772.68564\tnote{$\bullet $} & 15.3 $\pm$ 0.9 & 4952.38360 & 55.2 $\pm$ 2.8 \\ 2390.52179 & 32.8 $\pm$ 9.0 &3772.68565 & 34.6 $\pm$ 1.4 &4952.38359 & 54.3 $\pm$ 5.1 \\ 2391.37270 & 30.1 $\pm$ 4.3 &4169.43214 & 43.8 $\pm$ 1.2 &4952.39761 & 53.6 $\pm$ 3.9 \\ 2391.38337 & 32.7 $\pm$ 3.6 &4169.44290 & 47.3 $\pm$ 1.4 &4952.39762 & 49.6 $\pm$ 2.2 \\ 2392.36580 & 28.3 $\pm$ 2.1 &4169.45704 & 56.5 $\pm$ 2.0 &4952.41400 & 58.4 $\pm$ 1.8 \\ 2392.37993 & 24.5 $\pm$ 2.0 &4169.46774 & 52.6 $\pm$ 6.1 &4952.41401 & 51.8 $\pm$ 2.9 \\ 2739.51074 & 45.0 $\pm$ 3.6 &4186.54793 & 38.0 $\pm$ 3.6 &4952.42802 & 51.1 $\pm$ 2.3 \\ 2739.52029 & 46.7 $\pm$ 3.8 &4186.55810 & 40.1 $\pm$ 11.8 &4952.42802 & 58.4 $\pm$ 1.5 \\ 2740.44514 & 49.8 $\pm$ 3.2 &4188.41925 & 39.9 $\pm$ 3.0 &5202.92761 & 36.5 $\pm$ 2.0 \\% HET 6 2740.45465 & 51.9 $\pm$ 3.0 &4188.43328 & 42.1 $\pm$ 2.9 &5278.71693 & 32.8 $\pm$ 2.0 \\% HET 7 2741.50896 & 49.7 $\pm$ 3.1 
&4189.53608 & 49.9 $\pm$ 1.5 &5295.96094 & 35.0 $\pm$ 1.6 \\ 2741.51846 & 49.4 $\pm$ 3.1 &4189.53609 & 42.6 $\pm$ 1.0 & & \\\hline\hline \end{tabular*} \begin{tablenotes} \item[*] \small{The SAAO science frames belonging to these 4 radial velocities were taken under very marginal observing conditions. The resulting unreliable RVs are therefore not considered in the remaining analysis.} \item[$\bullet$] \small{This is a discrepant WHT data point, taken during service observations in 2006. It is not in line with the other data taken on the same night and is therefore given no weight in the remaining analysis.} \item[$\spadesuit$] \small{For the blue WHT science frames belonging to these two RVs, only a single arc frame was available, taken before the first of the two subsequent observations.} \item[$\clubsuit$] \small{These two WHT observations in the $H_{\beta}$--$H_{\delta}$ region had no flat fields or bias frames available.} \end{tablenotes} \end{threeparttable} \end{minipage} \end{table*} \subsubsection{Orbital parameters}\label{sec:orbpar} Once the radial velocities for all spectra were known, we used the floating mean periodogram \citep{CMB1999}, a generalization of the well-known Lomb-Scargle periodogram \citep{lom1976, sca1982}, to determine the most probable frequencies (periods) present in the data. The method consists of fitting the radial velocity data with a model composed of a sinusoid plus a constant of the form: \begin{equation} v = \gamma + K\sin[2\pi f(t-t_{0})],\label{eq:orbit} \end{equation} with $f$ the frequency and $t$ the time of observation. This means we assume the binary system to have a circular orbit with semi-amplitude $K$ and a systemic velocity $\gamma$. For each frequency ($f = 1/P$) we performed least squares fitting of the data, solving for $\gamma$ and $K$ simultaneously using singular value decomposition \citep{pre2002}.
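A minimal sketch of this floating mean fit: rewriting $K\sin[2\pi f(t-t_{0})]$ as a sine/cosine pair makes the model linear in its coefficients, so $\gamma$ and the amplitudes follow from a single weighted linear least squares solve per trial frequency (\texttt{numpy.linalg.lstsq} is SVD-based). The data in the usage example are synthetic, not the measured RVs:

```python
import numpy as np

def floating_mean_periodogram(t, v, verr, freqs):
    """Chi-squared of the best fit v = gamma + A sin(2 pi f t) + B cos(2 pi f t)
    at each trial frequency; gamma, A and B are obtained by weighted
    linear least squares (SVD under the hood).  K = sqrt(A^2 + B^2)."""
    chi2 = np.empty_like(freqs)
    w = 1.0 / verr
    for k, f in enumerate(freqs):
        M = np.column_stack([np.ones_like(t),
                             np.sin(2.0 * np.pi * f * t),
                             np.cos(2.0 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(M * w[:, None], v * w, rcond=None)
        chi2[k] = np.sum(((v - M @ coef) / verr) ** 2)
    return chi2

# Synthetic demonstration: recover a 760 d period from noisy data.
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 3000.0, 120)
v = 38.2 + 13.0 * np.sin(2.0 * np.pi * t / 760.0) + 2.0 * rng.standard_normal(120)
verr = 2.0 * np.ones(120)
periods = np.linspace(400.0, 1200.0, 801)
chi2 = floating_mean_periodogram(t, v, verr, 1.0 / periods)
best_period = periods[np.argmin(chi2)]
```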
In this way we can obtain the $\chi^{2}$ statistic of the model as a function of frequency, or in other words the periodogram. In our initial period determination, the $\chi^{2}$ values turned out to be larger than expected given the number of data points, indicating that there must have been an additional, unaccounted-for source of uncertainty, most likely due to systematic effects, intrinsic variability of the star or slit-filling errors (see also \citealt{mor2003}). Such errors are unlikely to be correlated with either the orbit or the statistical errors we have estimated. To allow for this, we computed the level of systematic uncertainty ($\sigma$) per telescope that, when added in quadrature to our error estimates, gives a reduced $\chi^{2}\approx 1$ for each of the INT and WHT subsets relative to the preliminary fit. We increased the errors by $\sigma_{\rm{INT}}=\,$2.15 km s$^{-1}$ and $\sigma_{\rm{WHT}}=\,$3.4 km s$^{-1}$ for the INT and WHT data, respectively. The formal NOT and HET errors were left unchanged since these are fibre-fed instruments which do not suffer from normal slit-guiding errors. The 9 SAAO RVs were also left as they were. Second, we used the average residual from each data set to our best orbital fit at that point to apply offsets to the INT, WHT and SAAO data sets\footnote{Table \ref{tbl:fitrvs} presents the RVs at this stage.} (respectively 0.85, -2.00 and 4.63 km s$^{-1}$, predicted minus observed). Finally, we scaled the errors of the entire data set multiplicatively by a factor of 1.244 to obtain a $\chi^{2}$ value equal to the number of degrees of freedom (dof). Figure \ref{fig:oripgram} shows the resulting radial velocity periodogram for PG\,1018--047\ in the region where we find the lowest $\chi^{2}$ values. The best solution is found around 760 days, followed by a group of 1 day aliases and the yearly aliases of the long period around 250 days.
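The quadrature error-inflation step amounts to a one-dimensional root find: choose $\sigma$ so that $\sum_i r_i^2/(e_i^2+\sigma^2)$ equals the number of degrees of freedom. A sketch with illustrative residuals (not the actual INT/WHT data):

```python
import numpy as np

def systematic_sigma(resid, err, dof, hi=50.0):
    """Find sigma such that sum(resid^2 / (err^2 + sigma^2)) = dof, i.e.
    the extra error term that, added in quadrature, yields a reduced
    chi-squared of one.  Returns 0 if chi-squared is already <= dof."""
    chi2 = lambda s: np.sum(resid**2 / (err**2 + s**2))
    if chi2(0.0) <= dof:
        return 0.0
    lo_s, hi_s = 0.0, hi
    for _ in range(80):                    # bisection: chi2 decreases with s
        mid = 0.5 * (lo_s + hi_s)
        if chi2(mid) > dof:
            lo_s = mid
        else:
            hi_s = mid
    return 0.5 * (lo_s + hi_s)
```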
An overview of the best aliases with their corresponding $\chi^{2}$ values is given in Table \ref{tbl:bestaliases}. Given the large inhomogeneity of the data sets, we realize that the treatment of the errors described above might not be perfect in all details. Therefore we have also considered alternative methods of weighting the data and errors, but the essential result, the long period, was unchanged throughout. \\ \begin{figure} \begin{minipage}{85mm} \epsfig{file=Fig2.eps, width=9.5cm} \caption{The radial velocity periodogram for PG\,1018--047, showing the most probable orbital periods present in the data.}\label{fig:oripgram} \end{minipage} \end{figure} \begin{table} \centering \begin{minipage}{80mm} \caption{The best orbital periods found from the radial velocity periodogram for PG\,1018--047. The $\chi^{2}$ value, the mass function of the companion and the minimum mass of the companion obtained by assuming an sdB mass of 0.5$\,\mathrm{M_{\odot}}$ are also given.}\label{tbl:bestaliases} \centering \begin{tabular}{ccccc}\hline\hline Alias & Period (d) & $\chi^{2}$ & $f_{m}$ ($M_{\odot}$) & $M_{2min}$ ($M_{\odot}$) \\\hline 1 & 759.80 & 117 & 0.17245 & 0.58927 \\ 2 & 0.9987 & 188 & 0.00018 & 0.03704 \\ 3 & 241.35 & 238 & 0.02602 & 0.24313 \\ 4 & 267.93 & 258 & 0.05582 & 0.34031 \\ 5 & 1.0184 & 259 & 0.00013 & 0.03332 \\\hline\hline \end{tabular} \end{minipage} \end{table} In Table \ref{tbl:bestaliases} we also quote the mass functions of the companion, calculated using \begin{equation}\label{eq:minmass} f_{m}=\frac{M_{MS}^{3}\sin^{3}i}{(M_{sdB}+M_{MS})^{2}}=\frac{PK_{sdB}^{3}}{2\pi G}. \end{equation} The minimum mass of the companion, assuming a typical sdB mass of $0.5\rm\,M_{\odot}$ \citep{heb1984}, is also given in each case. These numbers will be used later to constrain the nature of the companion star and, conversely, to show that our orbital solution is plausible given the properties of the secondary.
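Equation (\ref{eq:minmass}) is easily evaluated numerically: compute $f_m$ from $P$ and $K_{sdB}$, then invert the cubic for the minimum companion mass ($\sin i = 1$, $M_{sdB}=0.5\,\mathrm{M_{\odot}}$). A sketch using standard constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30      # solar mass, kg
DAY = 86400.0        # seconds per day

def mass_function(P_days, K_kms):
    """f_m = P K_sdB^3 / (2 pi G), returned in solar masses."""
    return (P_days * DAY) * (K_kms * 1.0e3) ** 3 / (2.0 * math.pi * G) / MSUN

def minimum_companion_mass(fm, m1):
    """Solve M2^3 / (M1 + M2)^2 = f_m for M2 (sin i = 1) by bisection;
    the left-hand side increases monotonically with M2."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid ** 3 / (m1 + mid) ** 2 < fm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fm = mass_function(759.80, 13.0)      # alias 1: P = 759.80 d, K = 13.0 km/s
m2 = minimum_companion_mass(fm, 0.5)  # assuming M_sdB = 0.5 Msun
```

This gives $f_m \approx 0.173$ and $M_{2min} \approx 0.59\,\mathrm{M_{\odot}}$, matching alias 1 of the table to within the rounding of the adopted constants.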
Table \ref{tbl:bestorbsol} lists the orbital parameters for our best orbital solution found for PG\,1018--047\ and in Figure \ref{fig:bestorbsol} the radial velocity curve folded on the period is shown. \begin{table} \centering \begin{minipage}{80mm} \caption{The best orbital solutions for PG\,1018--047, assuming a circular and an eccentric orbit.}\label{tbl:bestorbsol} \centering \begin{tabular}{lrr}\hline\hline & Circular & Eccentric \\\hline $\mathrm{P_{orb}}$ (d) & 759.8$\pm$5.8 & 755.9$\pm$5.1 \\ $\gamma$ (km/s) & 38.2$\pm$0.5 & 38.0$\pm$0.9 \\ $e$ & 0 & 0.246$\pm$0.052 \\ $\omega$ ($^{\circ}$) & & 0$\pm$24 \\ HJD$_{0}$ (d) & 2453335.0$\pm$10.5 & 2453343.0$\pm$14.7 \\ $\mathrm{K_{sdB}}$ (km/s) & 13.0$\pm$0.8 & 12.6$\pm$0.8 \\ $\mathrm{K_{MS}}$ (km/s) & 8.1$\pm$1.0 & \\\hline\hline \end{tabular} \end{minipage} \end{table} \subsubsection{The secondary orbit}\label{sec:sec} Narrow metal lines of a cool companion were present in addition to the Balmer absorption lines from the sdB star, allowing us to obtain radial velocity variations for the secondary star. The blue spectra did not contain enough signal from the companion to be useful, so we focussed on the HET, NOT and red WHT/INT spectra. We determined the radial velocities of the secondary by means of cross correlation with a template spectrum \citep{td1979}. For the HET data we cross-correlated with a spectrum of 61 Cyg B (K7V), obtained as part of a different program on April 15, 2010, using HRS in its R = 30,000 mode with a 316 groove/millimeter cross disperser centred at 6948\,\AA. As a result, only seven orders from the red CCD (six, on one night) were available for cross correlation, covering 6000--6760\,\AA\ (orders containing $H_{\alpha}$\ or strong telluric lines were excluded). For the other data we constructed a template from our data using the average red spectrum of PG\,1018--047 (the upper panel in Figure \ref{fig:optspec}).
Every (by eye) recognisable absorption feature of the secondary was fitted with a Gaussian profile, similar to the method described in section \ref{sec:radvelmeas}. All features possibly originating in the primary or of telluric origin were masked out, leaving a total of 19 lines from the cool companion in the template (see also Table \ref{tbl:seclines}). Unfortunately, the quality of the secondary radial velocities obtained from the red INT and WHT spectra was not sufficient to derive any trustworthy orbital period from a periodogram or to estimate the orbital semi-amplitude $K_{MS}$ for the secondary. We therefore used our best orbital period obtained from the primary RVs (Table \ref{tbl:bestorbsol}) to phase-bin the 37 medium-resolution red spectra before the cross-correlation routine. In total 7 out of 40 bins are filled. The HET and NOT spectra were left unbinned. \begin{table} \centering \begin{minipage}{80mm} \caption{The radial velocities of the cool companion.}\label{tbl:secrv} \centering \begin{tabular}{cccc}\hline\hline & \# spectra & Average & RV \\ Bin & in bin & phase & (km/s) \\\hline 0.100--0.125 & 2 & 0.1247 & 35.7 $\pm$ 1.9 \\ 0.125--0.150 & 7 & 0.1277 & 32.9 $\pm$ 1.3 \\ 0.200--0.225 & 11 & 0.2176 & 29.2 $\pm$ 3.3 \\ 0.250--0.275 & 9 & 0.2867 & 29.0 $\pm$ 4.3 \\ 0.4556 & HET & & 36.7 $\pm$ 2.8 \\ 0.4585 & HET & & 34.8 $\pm$ 0.9 \\ 0.4937 & HET & & 39.0 $\pm$ 1.5 \\ 0.5370 & HET & & 41.3 $\pm$ 1.1 \\ 0.5583 & HET & & 41.9 $\pm$ 0.7 \\ 0.575--0.600 & 2 & 0.5759 & 43.5 $\pm$ 2.2 \\ 0.5810 & NOT & & 40.2 $\pm$ 0.6 \\ 0.6000 & HET & & 44.5 $\pm$ 1.1 \\ 0.6158 & HET & & 43.6 $\pm$ 0.6 \\ 0.625--0.650 & 2 & 0.6496 & 39.4 $\pm$ 4.9 \\ 0.775--0.800 & 4 & 0.7826 & 44.8 $\pm$ 4.9 \\\hline\hline \end{tabular} \end{minipage} \end{table} In Figure \ref{fig:secfit} we have plotted phase versus radial velocity.
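The template cross-correlation used to extract these velocities can be sketched as follows: on a grid uniform in $\ln\lambda$, one pixel corresponds to a fixed velocity step, so the lag of the cross-correlation peak gives the shift directly (the line list below is synthetic; a real implementation would also refine the peak at sub-pixel precision):

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def ccf_velocity(loglam, spec, template):
    """Radial velocity from the peak of the cross-correlation of a
    continuum-subtracted spectrum with a template.  Both arrays must be
    sampled on a grid uniform in ln(wavelength)."""
    dv = (loglam[1] - loglam[0]) * C_KMS          # km/s per pixel
    s = spec - spec.mean()
    t = template - template.mean()
    ccf = np.correlate(s, t, mode="full")
    lag = np.argmax(ccf) - (len(t) - 1)           # pixels; > 0 means redshift
    return lag * dv
```

The quantization to whole pixels limits the precision to one velocity step, which is why in practice the CCF peak is fitted with a smooth function.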
Using least squares fitting, we estimated the projected semi-amplitude of the secondary to be $K_{MS} = 8.1 \pm 1.0$ km/s, which constrains the mass ratio of the secondary to the primary star to be $q = M_{MS}/M_{sdB} = 1.6 \pm 0.2$. The $\chi^{2}$ value of the fit was 13.29 for 15 data points minus 2 fitting parameters. \begin{figure*} \centering \begin{minipage}{160mm} \begin{center} \epsfig{file=Fig3.eps, width=14cm} \caption{Radial velocity curve for the sdB component of PG\,1018--047. We have averaged the RVs per observation run. The small lower panel shows the residuals. The black squares are the 7 RVs from the HET spectra, the red dot is the NOT/FIES data point, yellow squares are the SAAO observations, the blue left and red right triangles are respectively the blue and red INT data points, and the cyan up and magenta down triangles correspond to the blue and red WHT radial velocities.}\label{fig:bestorbsol} \end{center} \end{minipage} \end{figure*} \begin{figure*} \centering \begin{minipage}{160mm} \begin{center} \epsfig{file=Fig4.eps, width=14cm, height=9.5cm} \caption{The orbital fit for the secondary star. The red triangles are the radial velocities from the cool companion, obtained from the intermediate resolution spectroscopy. The NOT measurement is plotted with a blue dot and the HET data with black squares. The radial velocity curve (dashed green) is also shown.}\label{fig:secfit} \end{center} \end{minipage} \end{figure*} \subsubsection{Eccentricity of the orbit} In sections \ref{sec:orbpar} and \ref{sec:sec} the analysis was based upon the assumption that PG\,1018--047\ has a circular orbit. However, we also tried fitting eccentric orbits using the Markov chain Monte Carlo (MCMC; \citealt{GRS1995}) method, obtaining an eccentricity of $0.246 \pm 0.052$, starting from the radial velocities in Table \ref{tbl:fitrvs}; the favoured period for these fits decreased from $759.8 \pm 5.7$ days to $755.9 \pm 5.1$ days and had $\chi^{2} = 152$.
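Whether the drop in $\chi^{2}$ from the circular to the eccentric fit is significant can be judged with an F-test. A sketch, exploiting the fact that for two numerator degrees of freedom the tail probability of $F(2,n)$ has a simple closed form (the numerical values exercised below are those quoted for this system):

```python
import math

def f_statistic(chi2_circ, chi2_ecc, n_extra, dof):
    """F-statistic for adding n_extra parameters (here e and omega)."""
    return ((chi2_circ - chi2_ecc) / n_extra) / (chi2_ecc / dof)

def f_sf_d1_2(F, n):
    """Tail probability P(F(2, n) > F).  For two numerator degrees of
    freedom the regularized incomplete beta collapses to a closed form:
    (1 + 2F/n)^(-n/2)."""
    return (1.0 + 2.0 * F / n) ** (-n / 2.0)

p = f_sf_d1_2(11.24, 114)   # F value and denominator dof for PG 1018-047
```

This gives $p \approx 3.5\times10^{-5}$, comfortably past the 99.9\% level.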
To obtain $\chi^{2}/\rm{dof} = 1$ we needed to scale the errors by a factor of 1.153. Table \ref{tbl:bestorbsol} presents the orbital parameters. The argument of periapsis ($\omega$) is defined as the angle in the orbital plane between the ascending node and the line of apsides; an angle of 0 degrees indicates that the major axis of the ellipse lies in the plane of the sky. Note as well that HJD$_{0}$ is given as the ascending node passage minus $\mathrm{P_{orb}/4}$ to be consistent with our definition for the circular orbit (see eq. (\ref{eq:orbit})). We computed the F-statistic to test whether eccentricity was needed in our model. Assuming a locally linear model, the variable $X = \chi^{2}_{circ} - \chi^{2}_{ecc}$ should itself have a $\chi^2$ distribution with 2 dof (the eccentricity fit parameters), independently of the $\chi^{2}$ of the eccentric orbit with its 114 degrees of freedom. Hence \begin{equation*} \rm{F} = \frac{(\chi^{2}_{circ} - \chi^{2}_{ecc})/2}{\chi^{2}_{ecc}/114} = 11.24 \end{equation*} should be distributed as F(2;114) under the null hypothesis that the orbit is circular. The chance of such a large value is very small, so we reject the null hypothesis at the 99.9\% significance level in favour of an eccentric orbit. Although formally significant, the heterogeneous nature of our data and the limited phase coverage lead us to be wary of claiming a definitive detection of eccentricity. However, it is equally true that the orbit could be significantly non-circular. \subsection{The nature of the companion} We determined the spectral type of the companion star by minimizing the residuals after subtracting different template star spectra from the mean red PG\,1018--047\ spectrum in the wavelength region between 6390\,\AA\ and 6700\,\AA. We masked out $H_{\alpha}$ because the line contains a large contribution from the sdB.
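The core of this residual-minimization step can be sketched as follows: for normalized spectra, the best-fitting fractional contribution of a template follows in closed form from linear least squares (rotational broadening of the template, which the full routine also optimizes, is omitted in this sketch):

```python
import numpy as np

def optimal_fraction(obs, template):
    """Best-fitting fractional contribution f of a normalized template
    to a normalized observed spectrum: minimize |(obs-1) - f*(tpl-1)|^2.
    Returns f and the residual sum of squares."""
    d = obs - 1.0
    t = template - 1.0
    f = np.dot(d, t) / np.dot(t, t)        # closed-form least squares
    resid = d - f * t
    return f, np.sum(resid ** 2)
```

Repeating this over a grid of templates (F0 to M9) and picking the smallest residual yields the spectral-type classification; here real spectra are replaced by synthetic arrays.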
This optimal subtraction routine \citep{MRW1994} is sensitive to both the rotational broadening $v_{rot}\sin i$ and the fractional contribution $f$ of the companion star to the total flux. We used template stars from two different origins in the spectral range F0 to M9: Kurucz models (Munari et al.~2005; solar composition) and 43 real star templates \citep{mon1997}. All templates were prepared so that they had the same wavelength coverage and resolution as our normalised PG\,1018--047\ spectrum in the $H_{\alpha}$\ region. Figure \ref{fig:kurucz} clearly shows the difference in $\chi^{2}$ between the dwarf and (sub)giant templates. The main sequence models, which have the lowest $\chi^{2}$ values, show a distinct minimum at spectral type K4--K6, whereas the (sub)giant models seem to converge to a spectral type of G5. This is consistent with the set of template spectra from \citet{mon1997} as well, where we find the best $\chi^{2}$ for the K5V star HD\,201091 (61 Cyg A), followed by a K7V and a K3V star. The worst values are found for the M stars. The best non-dwarf templates were both G5 stars. \begin{figure} \centering \begin{minipage}{80mm} \epsfig{file=Fig5.eps, width=8cm, height=5.8cm} \caption{Results of the optimal subtraction routine for the Kurucz templates. The circles show the results for dwarf stars, whereas the triangles and squares show the reduced $\chi^{2}$ versus spectral type for subgiant and giant stars, respectively.}\label{fig:kurucz} \end{minipage} \end{figure} Since PG\,1018--047\ was observed by 2MASS, we adopted a similar approach to that of \citet{SW2003} as a complementary way to identify the spectral type of the companion star. We chose to focus on $J-K_{S}$ versus $B-V$ and $J-K_{S}$ versus $V-K_{S}$, and converted the Str\"omgren magnitudes to the Johnson system following the approach of \citet{tur1990}. The calculated colours are given in the lower part of Table \ref{tbl:phot}.
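Composite colour indices for an sdB+MS pair are built by adding the two stars' fluxes magnitude-wise; a minimal sketch (the colour values in the check below are illustrative placeholders, not the adopted \citet{SW2003} or \citet{john1966} values):

```python
import math

def composite_colour(c_sdB, c_MS, f):
    """Composite colour index of an sdB+MS pair, where f is the
    fractional contribution of the MS star to the total light:
    -2.5 log10[(1-f) 10^(-c_sdB/2.5) + f 10^(-c_MS/2.5)]."""
    return -2.5 * math.log10((1.0 - f) * 10.0 ** (-c_sdB / 2.5)
                             + f * 10.0 ** (-c_MS / 2.5))
```

By construction, $f=0$ returns the sdB colour, $f=1$ the MS colour, and intermediate fractions shift the composite index redward of the sdB value, as seen in the grid.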
\begin{table} \centering \begin{minipage}{80mm} \caption[Magnitudes and colours for PG\,1018--047.]{Magnitudes and colours for PG\,1018--047. The $B-V$ colour is calculated using the transformation formula from Turner (1990).}\label{tbl:phot} \centering \begin{tabular}{cccc}\hline\hline \multicolumn{2}{c}{2MASS (infrared)} & \multicolumn{2}{c}{Str\"omgren (visual)} \\ \multicolumn{2}{c}{(\underline{Skrutskie et al.~2006})} & \multicolumn{2}{c}{(\underline{Wesemael et al.~1992})} \\ J $\;\:$ & = 13.298 (.026) & y & = 13.320 (.005) \\ H$\;\:$ & = 12.980 (.027) & b-y & = -0.086 (.004) \\ K$_S$ & = 12.928 (.033) & u-b & = -0.073 (.006) \\ & & m$_1$ & = $\;$0.076 (.013) \\\hline & \multicolumn{2}{c}{ \underline{Calculated colours}} & \\ & B-V$\;$ & = -0.20 (.007)\\ & J-K$_S$ & = 0.370 (.042) \\ & V-K$_S$ & = 0.392 (.038) \\\hline\hline \end{tabular} \end{minipage} \end{table} \begin{figure*} \centering \begin{minipage}{160mm} \centering \epsfig{file=Fig6.eps, width=14cm} \caption{Grid of composite colours in $B-V$ versus $J-K_{S}$ space, obtained by combining the light from a typical hot subdwarf with that of a population I main-sequence star, assuming various fractional contributions to the total light in the $V$ band by the late-type star. The blue dot marks the values for PG\,1018--047. A close-up is shown in the small inset figure. The diagonal cyan line indicates the location of the population I main sequence.} \label{fig:45} \end{minipage} \end{figure*} The next step is to compare the PG\,1018--047\ colours to a theoretical grid of colours for sdB-MS binaries while varying the fraction of light at $V$ that arises from the companion. Defining this fraction as \begin{equation*} f=\frac{F_{V_{\rm{MS}}}}{F_{V}}=\frac{F_{V_{\rm{MS}}}}{F_{V_{\rm{sdB}}}+F_{V_{\rm{MS}}}}, \end{equation*} we find the combined colour (e.g.
for B--V) from the expression \small{} \begin{eqnarray*} \lefteqn{(B-V)_{\rm{sdB+MS}} = }\\ & & -2.5\log\left[(1-f)\cdot10^{-(B-V)_{\rm{sdB}}/2.5} + f\cdot 10^{-(B-V)_{\rm{MS}}/2.5}\right]. \end{eqnarray*} \normalsize{} Colour indices for late-type stars are taken from \citet{john1966}; the typical colours for a single sdB are taken from \citet{SW2003}. Figure \ref{fig:45} shows that as the fractional contribution $f$ at $V$ from the secondary increases, the $B-V$ index shifts to redder values. On the other hand, the cooler the companion becomes, the more the $K_{S}$ band dominates the $J-K_{S}$ index. Swapping the $B-V$ index for the $V-K_{S}$ colour shows a similar trend. Zooming in on PG\,1018--047, we find a spectral classification K3--K6, which is consistent with the results from the optimal subtraction routine. We estimate the contribution of the secondary to be $6.1\pm1.0\,\%$ in the $V$ band. \\ \section{Discussion}\label{sec:discussion} The role of the long period binary system PG\,1018--047\ in refining our understanding of the origin and evolution of hot subdwarf stars depends on whether its present orbit is circular or eccentric. \subsection{Evolution assuming a circular orbit} If PG\,1018--047\ indeed has a circular orbit, one can assume that tidal interaction has occurred between the sdB and the dK5 star. This means that, from a theoretical point of view, PG\,1018--047\ might be an important system, as it becomes a good candidate for formation through the first stable Roche lobe overflow channel described by \citet{han2003}. From Figure 15 in \citet{han2003} we deduce that an sdB binary with a mid-K companion is feasible. However, these systems are all subgiants and giants evolved from B stars (i.e. more massive stars at a later evolutionary state).
Also, to have stable RLOF on the first giant branch onto a companion that is only 0.6--0.7$\,\rm{M_{\odot}}$ and still end up with such a long period, mass transfer would have to be close to conservative to avoid excessive angular momentum loss. If we want the mass donor to evolve in a Hubble time, it must have started with an initial mass of 0.8$\,M_{\odot}$ or greater, meaning that the initial mass of the present K star must have been no more than $\sim$0.3--0.4$\,\rm{M_{\odot}}$. But then the initial mass ratio $q_0 \gtrsim 2$ is not compatible with conservative mass transfer on the red giant branch, leading to a contradiction. Second, we can consider the alternative common-envelope prescription from \citet{nel2000}. In \citet{nel2010}, a population of sdB stars is simulated in which the first phase of mass transfer can be described by this alternative common-envelope prescription \citep{nel2001a}. Interestingly, in that model a substantial fraction of the sdB stars with low and intermediate mass main sequence companions have rather large orbital periods (100--1000 days, see their Figure 2), and the parameters of PG\,1018--047\ actually fall right in a densely populated area of the model. Thus a sample of these long-period sdB binaries will give another way of testing the outcome of the common-envelope phase. \subsection{Evolution assuming an eccentric orbit} \citet{CW2011} propose that binaries similar to PG\,1018--047\ can be the remnants of original hierarchical triple systems, in which the inner binary has merged and evolved into the sdB star, and the outer (current sdB+MS) binary was never tidally interacting and is thus irrelevant to the production of the sdB. In the \citet{han2002, han2003} merger scenario, two helium white dwarfs merge to make an object capable of core helium burning, and the resulting range of masses for the sdB star is 0.3--0.8$\,\rm{M_{\odot}}$.
In the new formation channel proposed by \citet{CW2011}, a helium white dwarf merges with a low-mass hydrogen-burning star; the resultant total mass is $\sim$0.6$\,\rm{M_{\odot}}$. Depending on the mixing history, this object either has a pre-formed helium core or is helium-enriched throughout, and the star experiences a greatly accelerated evolution to the tip of the red giant branch. Only minimal mass loss is required to remove the residual hydrogen envelope from this object at the time of normal degenerate helium ignition, so the expected mass range for the sdB star in this channel is narrow and at the ``canonical'' value of $\sim$0.5$\,\rm{M_{\odot}}$. There is ample room inside the $\sim$760 day orbit of the present-day PG\,1018--047\ binary system to accommodate an inner binary which underwent such evolution, and there is no cause for the outer orbit to have been circularised. The \citet{CW2011} scenario could thus account for stars like PG\,1018--047\ with long (and possibly eccentric) binary orbits. Stable RLOF, on the other hand, predicts perfectly circular orbits. \section{Conclusions} With an orbital period of 760 $\pm$ 4.7 days, PG\,1018--047\ is the first long period sdB+MS system for which a period has been determined. The companion was found to be a K5 dwarf star, consistent with the mass ratio $M_{\rm MS}/M_{\rm sdB} = 1.6 \pm 0.2$ derived from the radial velocity amplitudes of both stars. However, one has to note that the stated numbers are only indicative of the true orbital parameters, since they are sensitive to the exact uncertainties assigned to the RV data and to the assumption of a circular versus an eccentric orbit. At first sight PG\,1018--047\ is a good candidate for formation through stable Roche lobe overflow if the orbit can be demonstrated to be circular.
The predicted number of sdB binaries formed through this channel amounts to the largest contribution according to binary population synthesis calculations \citep{han2002, han2003}. At the same time, the number of known binaries confirmed to have formed through this channel is small, if not zero, compared to those formed through the common envelope channels. Alternatively, if the common-envelope phase is governed by the gamma-formalism, as used in \citet{nel2010}, there is a population of long period post-common envelope binaries with low-mass secondaries. Thus, observing more of these binaries can constrain the first phase of mass transfer. If the orbit turns out to be eccentric, the present binary may instead be the remnant of a hierarchical triple-star progenitor system, as outlined by \citet{CW2011}. Further observations are needed to better establish the orbit of PG\,1018--047. \normalsize{} \section*{Acknowledgments} TRM and CMC were supported under a Science and Technology Facilities Council (STFC) rolling grant during the course of this work. RH\O\ has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007--2013)/ERC grant agreement N$^{\underline{\mathrm o}}$\,227224 ({\sc prosperity}), as well as from the Research Council of K.~U.~Leuven grant agreement GOA/2008/04. LM-R was supported by a PPARC post-doctoral grant and by NWO-VIDI grant 639.042.201 to P. J. Groot during this work. R.A.W. and M.A.S. gratefully acknowledge support from NSF grant AST-0908642. The Isaac Newton and William Herschel Telescopes are operated on the Island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias. We thank PATT for their support of this program. We also thank the ING service scheme for obtaining some of the data.
This paper uses observations made at the South African Astronomical Observatory (SAAO). A portion of the data reported herein was obtained with the Hobby-Eberly Telescope (HET), which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. This paper also uses observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias.\\
\section{Introduction with Gaussian Relay} \label{sec:intro} Consider the Gaussian relay problem shown in Figure~\ref{fig:gaussian}. \begin{figure}[h!] \vspace*{12pt} \begin{center} \input{gaussian_relay.pstex_t} \end{center} \caption{Gaussian relay channel with a noiseless link.} \label{fig:gaussian} \end{figure} Suppose the receiver $Y$ and the relay $Y_1$ each receive information about the transmitted signal $X$ of power $P$. Specifically, let \begin{align*} Y&= X + Z\\ Y_1 &= X + Z_1, \end{align*} where $(Z, Z_1)$ have correlation coefficient $\rho$ and are jointly Gaussian with zero mean and equal variance $EZ^2 = EZ_1^2 = N$. What should the relay $Y_1$ say to the ultimate receiver $Y$? If the relay sends information at rate $R_0$, what is the capacity $C(R_0)$ of the resulting relay channel? We first note that the capacity from $X$ to $Y$, ignoring the relay, is \[ C(0) =\frac{1}{2} \log\left(1+\frac{P}{N}\right) \quad\text{bits per transmission.} \] The channel from the relay $Y_1$ to the ultimate receiver $Y$ has capacity $R_0$. This relay information is sent on a side channel that does not affect the distribution of $Y$, and the information becomes freely available to $Y$ as long as it doesn't exceed rate $R_0$. We focus on three cases for the noise correlation $\rho$: $\rho= 1, 0,$ and $-1$. If $\rho=1$, then $Y_1=Y$, the relay is useless, and the capacity of the relay channel is $C(R_0)=(1/2)\log(1+P/N) = C(0)$ for all $R_0 \ge 0$. Now consider $\rho=0$, i.e., the noises $Z$ and $Z_1$ are independent. Then the relay $Y_1$ has no more information about $X$ than does $Y$, but the relay furnishes an independent look at $X$. What should the relay say to $Y$? This capacity $C(R_0)$, mentioned in \cite{Cover1987b}, remains unsolved and typifies the primary open problem of the relay channel. As a partial converse, Zhang~\cite{Zhang1988} obtained the strict inequality $C(R_0) < C(0) + R_0$ for all $R_0 > 0$. How about the case $\rho=-1$? 
This is the problem that we solve and generalize in this note. Here the relay, while having no more information than the receiver $Y$, has much to say, since knowledge of $Y$ and $Y_1$ allows the perfect determination of $X$. However, the relay is limited to communication at rate $R_0$. Thus, by a simple cut-set argument, the total received information is limited to $C(0) + R_0$ bits per transmission. We argue that this rate can actually be achieved. Since it is obviously the best possible rate, the capacity for $\rho=-1$ is given by \[ C(R_0)= C(0) + R_0. \] (See Figure~\ref{fig:graph}.) \begin{figure}[h] \vspace*{12pt} \begin{center} \input{graph.pstex_t} \end{center} \caption{Gaussian relay capacity $C(R_0)$ vs.\@ the relay information rate $R_0$.} \label{fig:graph} \end{figure} Every bit sent by the relay counts as one bit of information, despite the fact that the relay doesn't know what it is doing. We present two distinct methods of achieving the capacity. Our first coding scheme consists of hashing $Y_1^n$ into $nR_0$ bits, then checking the $2^{nC(R_0)}$ codewords $X^n(W)$, $W \in \{1,\ldots,2^{nC(R_0)}\}$, one by one, with respect to the ultimate receiver's output $Y^n$ and the hash check of $Y_1^n$. More specifically, we check whether the corresponding estimated noise $\hat{Z}^n=Y^n-X^n(W)$ is typical, and then check whether the resulting $Y_1^n(W) =X^n(W) - \hat{Z}^n$ satisfies the hash of the observed $Y_1^n$. Since the typicality check reduces the uncertainty in $X^n(W)$ by a factor of $2^{n C(0)}$ while the hash check reduces the uncertainty by a factor of $2^{n R_0}$, we can achieve the capacity $C(R_0) = C(0) + R_0$. It turns out that hashing is not the only way of achieving $C(R_0) = C(0) + R_0$. We can compress $Y_1^n$ into $\hat{Y}_1^n$ using $n R_0$ bits with $Y^n$ as side information in the same manner as in Wyner--Ziv source coding~\cite{Wyner--Ziv1976}, which requires \[ R_0 = I(Y_1; \hat{Y}_1 | Y). 
\] Thus, $nR_0$ bits are sufficient to reveal $\hat{Y}_1^n$ to the ultimate receiver $Y^n$. Then, based upon the observation $(Y^n, \hat{Y}_1^n)$, the decoder can distinguish $2^{nR}$ messages if \[ R < R^* := I(X; Y, \hat{Y}_1). \] For this scheme, we now choose the appropriate distribution of $\hat{Y}_1$ given $Y_1$. Letting \[ \hat{Y}_1 = Y_1 + U, \] where $U \sim N(0, \sigma^2)$ is independent of $(X, Z, Z_1)$, we can obtain the following parametric expression of $R^*(R_0)$ over all $\sigma^2 > 0$: \begin{align} \label{eq:param1} R^*(\sigma^2) &= I(X; Y, \hat{Y}_1) = \frac{1}{2} \log \left( \frac{ (P+N)\sigma^2 + 4 P N }{N\sigma^2} \right)\\ \label{eq:param2} R_0(\sigma^2) &= I(Y_1; \hat{Y}_1|Y) = \frac{1}{2} \log \left( \frac{ (P+N)\sigma^2 + 4 PN}{(P+N)\sigma^2} \right). \end{align} Setting $R_0(\sigma_0^2) = R_0$ in \eqref{eq:param2}, solving for $\sigma_0^2$, and inserting it in \eqref{eq:param1}, we find the achievable rate is given by \[ R^*(\sigma_0^2) = R_0 + \frac{1}{2} \log \left(1 + \frac{P}{N}\right) = C(0) + R_0, \] so ``compress-and-forward'' also achieves the capacity. Inspecting what it is about this problem that allows this solution, we see that the critical ingredient is that the relay output $Y_1=f(X,Y)$ is a deterministic function of the input $X$ and the receiver output $Y$. This leads to the more general result stated in Theorem~\ref{thm:main} in the next section. \section{Main Result} \label{sec:main} We consider the following relay channel with a noiseless link as depicted in Figure~\ref{fig:det-relay}. 
\begin{figure}[ht] \begin{center} \input{relay.pstex_t} \end{center} \caption{Relay channel with a noiseless link.} \label{fig:det-relay} \end{figure} We define a \emph{relay channel with a noiseless link} $(\mathcal{X}, p(y, y_1|x), \mathcal{Y}\times \mathcal{Y}_1, R_0)$ as the channel where the input signal $X$ is received by the relay $Y_1$ and the receiver $Y$ through a channel $p(y,y_1|x)$, and the relay can communicate to the receiver over a separate noiseless link of rate $R_0$. We wish to communicate a message index $W \in [2^{nR}] = \{1,2,\ldots, 2^{nR}\}$ reliably over this relay channel with a noiseless link.% \footnote{Henceforth, the notation $i \in [2^{nR}]$ is interpreted to mean $i \in \{1,2,\ldots, 2^{nR}\}$.} We specify a $(2^{nR}, n)$ code with an encoding function $X^n: [2^{nR}] \to \mathcal{X}^n$, a relay function $J: \mathcal{Y}_1^n \to [2^{nR_0}]$, and a decoding function $\hat{W}: \mathcal{Y}^n \times [2^{nR_0}] \to [2^{nR}]$. The probability of error is defined by $P_e^{(n)} = \Pr \{ W \ne \hat{W}(Y^n,J(Y_1^n)) \}, $ with the message $W$ distributed uniformly over $[2^{nR}]$. The capacity $C(R_0)$ is the supremum of the rates $R$ for which $P_e^{(n)}$ can be made to tend to zero as $n \to \infty$. We state our main result. \begin{theorem} For the relay channel $(\mathcal{X}, p(y,y_1|x), \mathcal{Y} \times \mathcal{Y}_1)$ with a noiseless link of rate $R_0$ from the relay to the receiver, if the relay output $Y_1 = f(X,Y)$ is a deterministic function of the input $X$ and the receiver output $Y$, then the capacity is given by \[ C(R_0) = \max_{p(x)} \min \{ I(X;Y) + R_0,\; I(X;Y_1, Y) \}. \] \label{thm:main} \end{theorem} The converse is immediate from a simple application of the max-flow min-cut theorem on information flow~\cite[Section 15.10]{Cover--Thomas2006}. The achievability has several interesting features. First, as we will show in the next section, a novel application of random binning achieves the cut-set bound. 
In this coding scheme, the relay simply sends the hash index of its received output $Y_1^n$. What is perhaps more interesting is that the same capacity can also be achieved via the well-known ``compress-and-forward'' coding scheme of Cover and El Gamal~\cite{Cover--El-Gamal1979}. In this coding scheme, the relay compresses its received output $Y_1^n$ as in Wyner--Ziv source coding with the ultimate receiver output $Y^n$ as side information. In both coding schemes, every bit of relay information carries one bit of information about the channel input, although the relay does not know the channel input. And the relay information can be summarized in a manner completely independent of geometry (random binning) or completely dependent on geometry (random covering). More surprisingly, we can partition the relay space using both random binning and random covering. Thus, a combination of ``hash-and-forward'' and ``compress-and-forward'' achieves the capacity. The next section proves the achievability using the ``hash-and-forward'' coding scheme. The ``compress-and-forward'' scheme is deferred to Section~\ref{sec:second} and the combination will be discussed in Sections~\ref{sec:discuss} and \ref{sec:third}. \section{Proof of Achievability (Hash and Forward)} \label{sec:first} We combine the usual random codebook generation with list decoding and random binning of the relay output sequences: \emph{Codebook generation.} Generate $2^{nR}$ independent codewords $X^n(w)$ of length $n$ according to $\prod_{i=1}^n p(x_i)$. Independently, assign all possible relay output sequences in $\mathcal{Y}_1^n$ into $2^{nR_0}$ bins uniformly at random. \emph{Encoding.} To send the message index $w \in [2^{nR}]$, the transmitter sends the codeword $X^n(w)$. Upon receiving the output sequence $Y_1^n$, the relay sends the bin index $b(Y_1^n)$ to the receiver. 
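Before describing the decoding rule, the binning step can be illustrated with a toy computation (a minimal sketch, not the construction of the proof: the blocklength and rates are arbitrary, the channel is the modulo-two example $Y_1 = X \oplus Y$ treated later in Section~\ref{sec:discuss}, a deterministic modular map stands in for the uniform random bin assignment, and the joint-typicality test is omitted):

```python
import random

# Toy sketch of the binning step for the deterministic example
# Y_1 = f(X, Y) = X xor Y: the relay sends only a bin index, and the
# decoder keeps every codeword w whose induced relay sequence
# Y_1^n(w) = f(X^n(w), Y^n) lands in the announced bin.
random.seed(1)

n, num_codewords, num_bins = 16, 256, 64   # 2^{nR} = 256, 2^{nR_0} = 64

def f(x, y):
    """Relay output: componentwise x xor y."""
    return tuple(a ^ b for a, b in zip(x, y))

def bin_index(seq):
    # Deterministic stand-in for the uniform random bin assignment.
    return int("".join(map(str, seq)), 2) % num_bins

codebook = [tuple(random.randint(0, 1) for _ in range(n))
            for _ in range(num_codewords)]

w_true = 7
noise = tuple(int(random.random() < 0.1) for _ in range(n))
y = f(codebook[w_true], noise)                 # receiver sees Y^n = X^n xor S^n
relay_bin = bin_index(f(codebook[w_true], y))  # relay hashes Y_1^n = S^n

survivors = [w for w in range(num_codewords)
             if bin_index(f(codebook[w], y)) == relay_bin]

# The true codeword always passes the hash check; each impostor survives
# with probability about 1/num_bins (the joint-typicality test, which
# removes most impostors in the actual proof, is omitted here).
assert w_true in survivors
```

The hash check alone already thins the candidate list by roughly a factor of $2^{nR_0}$; in the proof below this is combined with the typicality check, which contributes the remaining factor of $2^{n(I(X;Y)-\epsilon)}$.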
\emph{Decoding.} Let $A_\epsilon^{(n)}$~\cite[Section 7.6]{Cover--Thomas2006} denote the set of jointly typical sequences $(x^n, y^n) \in \mathcal{X}^n \times \mathcal{Y}^n$ under the distribution $p(x,y)$. The receiver constructs a list \[ L(Y^n) = \{X^n(w) : w \in [2^{nR}], (X^n(w), Y^n) \in A_{\epsilon}^{(n)}\} \] of codewords $X^n(w)$ that are jointly typical with $Y^n$. Since the relay output $Y_1$ is a deterministic function of $(X, Y)$, for each codeword $X^n(w)$ in $L(Y^n)$ we can determine the corresponding relay output $Y_1^n(w) = f(X^n(w), Y^n)$ exactly. The receiver declares that $\hat{w} = w$ was sent if there exists a unique codeword $X^n(w)$ with the corresponding relay bin index $b(f(X^n(w), Y^n))$ matching the true bin index $b(Y_1^n)$ received from the relay. \emph{Analysis of the probability of error.} Without loss of generality, assume $W=1$ was sent. The sources of error are as follows (see Figure~\ref{fig:scheme1}): \begin{enumerate} \item[(a)] The pair $(X^n(1), Y^n)$ is not typical. The probability of this event vanishes as $n$ tends to infinity. \item[(b)] The pair $(X^n(1), Y^n)$ is typical, but there is more than one relay output sequence $Y_1^n(w) = f(X^n(w), Y^n)$ with the observed bin index, i.e., $b(Y_1^n(1)) = b(Y_1^n(w))$. By Markov's inequality, the probability of this event is upper bounded by the expected number of codewords in $L(Y^n)$, other than $X^n(1)$, with the corresponding relay bin index equal to the true bin index $b(Y_1^n(1))$. Since the bin index is assigned independently and uniformly, this is bounded by \[ 2^{nR}\, 2^{-n (I(X;Y)-\epsilon)}\, 2^{-nR_0}, \] which vanishes asymptotically as $n \to \infty$ if $R < I(X;Y) + R_0 - \epsilon$. \item[(c)] The pair $(X^n(1), Y^n)$ is typical and there is exactly one $Y_1^n(w)$ matching the true relay bin index, but there is more than one codeword $X^n(w)$ that is jointly typical with $Y^n$ and corresponds to the same relay output $Y_1^n$, i.e., $f(X^n(1),Y^n) = f(X^n(w), Y^n)$. 
The probability of this kind of error is upper bounded by \[ 2^{nR} \, 2^{- n(I(X; Y, Y_1) - \epsilon)}, \] which vanishes asymptotically if $R < I(X; Y, Y_1) - \epsilon$.\qedhere \end{enumerate} \vspace*{18pt} \begin{figure}[h] \begin{center} \input{scheme1.pstex_t} \end{center} \caption{Schematic diagram of the ``hash-and-forward'' coding scheme. The error happens when (a) the true codeword is not jointly typical with $Y^n$, (b) there is more than one $Y_1^n$ for the same bin index, or (c) there is more than one $X^n$ jointly typical with $(Y^n, Y_1^n)$.} \label{fig:scheme1} \end{figure} \section{Related Work} \label{sec:related} The general relay channel was introduced by van der Meulen~\cite{van-der-Meulen1971}. We refer the reader to Cover and El Gamal~\cite{Cover--El-Gamal1979} for the history and the definition of the general relay channel. For recent progress, refer to Kramer et al.~\cite{Kramer--Gastpar--Gupta2005}, El Gamal et al.~\cite{El-Gamal--Hassanpour--Mammen2006}, and the references therein. We recall the following achievable rate for the general relay channel investigated in~\cite{Cover--El-Gamal1979}. \begin{theorem}[{\cite[Theorem 7]{Cover--El-Gamal1979}}] For any relay channel $(\mathcal{X}\times\mathcal{X}_1, p(y, y_1|x, x_1), \mathcal{Y}\times\mathcal{Y}_1)$, the capacity $C$ is lower bounded by \[ C \ge \sup \min \{I(X; Y, \hat{Y}_1| X_1, U) + I (U; Y_1|X_1, V), \; I(X, X_1; Y) - I(\hat{Y}_1; Y_1| X, X_1, Y, U) \} \] where the supremum is taken over all joint probability distributions of the form \[ p(u,v,x,x_1,y,y_1,\hat{y}_1) = p(v) p(u|v) p(x|u) p(x_1|v) p(y,y_1|x,x_1) p(\hat{y}_1|x_1, y_1, u) \] subject to the constraint \[ I(Y_1; \hat{Y}_1 | X_1, Y, U) \le I(X_1; Y | V). 
\] \label{thm:ceg} \end{theorem} Roughly speaking, the achievability of the rate in Theorem~\ref{thm:ceg} is based on a superposition of ``decode-and-forward'' (in which the relay decodes the message and sends it to the receiver) and ``compress-and-forward'' (in which the relay compresses its own received signal without decoding and sends it to the receiver). This coding scheme turns out to be optimal for many special cases; Theorem~\ref{thm:ceg} reduces to the capacity when the relay channel is degraded or reversely degraded~\cite{Cover--El-Gamal1979} and when there is feedback from the receiver to the relay~\cite{Cover--El-Gamal1979}. Furthermore, for the semideterministic relay channel with the sender $X$, the relay sender $X_1$, the relay receiver $Y_1 = f(X, X_1),$ and the receiver $Y$, El Gamal and Aref~\cite{El-Gamal--Aref1982} showed that Theorem~\ref{thm:ceg} reduces to the capacity given by \begin{equation} C = \max_{p(x, x_1)} \min \{ I(X, X_1 ; Y),\; H(Y_1 | X_1) + I(X; Y|X_1, Y_1) \}. \label{eq:aref} \end{equation} Although this setup looks similar to ours, we note that neither \eqref{eq:aref} nor Theorem~\ref{thm:main} implies the other. In a sense, our model is more deterministic in the relay-to-receiver link, while the El Gamal--Aref model is more deterministic in the transmitter-to-relay link. A natural question is whether our Theorem~\ref{thm:main} follows from Theorem~\ref{thm:ceg} as a special case. We first note that in the coding scheme described in Section~\ref{sec:first}, the relay neither ``decodes'' nor ``compresses'', but instead ``hashes'' its received output. Indeed, as a coding scheme, this ``hash-and-forward'' appears to be a novel method of summarizing the relay's information. However, ``hash-and-forward'' is not the only coding scheme achieving the capacity \[ C(R_0) = \max_{p(x)} \min \{ I(X;Y) + R_0,\; I(X;Y_1, Y) \}. \] In the next section, we show that ``compress-and-forward'' can achieve the same rate. 
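As a preliminary sanity check, the compress-and-forward computation for the Gaussian example of Section~\ref{sec:intro} can be reproduced numerically: solving \eqref{eq:param2} for $\sigma_0^2$ and substituting into \eqref{eq:param1} recovers $C(0)+R_0$ exactly (a minimal sketch; the values of $P$, $N$, and $R_0$ are arbitrary):

```python
from math import log2

# Check of eqs. (param1)-(param2): setting R_0(sigma^2) = R_0 determines
# sigma_0^2, and then R*(sigma_0^2) should equal C(0) + R_0 for the
# rho = -1 Gaussian relay channel.
P, N, R0 = 3.0, 1.0, 0.7

# From (param2): 2^{2 R_0} = 1 + 4PN / ((P+N) sigma^2), solved for sigma_0^2.
sigma0_sq = 4 * P * N / ((P + N) * (2 ** (2 * R0) - 1))

# Achievable rate (param1) evaluated at sigma_0^2.
R_star = 0.5 * log2(((P + N) * sigma0_sq + 4 * P * N) / (N * sigma0_sq))
C0 = 0.5 * log2(1 + P / N)

assert abs(R_star - (C0 + R0)) < 1e-9
```

The identity holds for every choice of $P$, $N$, and $R_0 > 0$, since the ratio inside the logarithm of \eqref{eq:param1} reduces algebraically to $2^{2R_0}(1+P/N)$ at $\sigma_0^2$.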
\section{Compress and Forward} \label{sec:second} Theorem~\ref{thm:main} was proved using ``hash-and-forward'' in Section~\ref{sec:first}. Here we argue that the capacity in Theorem~\ref{thm:main} can also be achieved by ``compress-and-forward''. We start with a special case of Theorem~\ref{thm:ceg}. The ``compress-and-forward'' part (cf.\@ \cite[Theorem~6]{Cover--El-Gamal1979}), combined with the relay-to-receiver communication of rate $R_0$, gives the achievable rate \begin{equation} \label{eq:wz1} R^*(R_0) = \sup I(X; Y, \hat{Y}_1), \end{equation} where the supremum is over all joint distributions of the form $p(x) p(y,y_1|x) p(\hat{y}_1|y_1)$ satisfying \begin{equation} \label{eq:wz2} I(Y_1;\hat{Y}_1|Y) \le R_0. \end{equation} Here the inequality \eqref{eq:wz2} comes from the Wyner--Ziv compression~\cite{Wyner--Ziv1976} of the relay's output $Y_1^n$ based on the side information $Y^n$. The achievable rate \eqref{eq:wz1} captures the idea of decoding $X^n$ based on the receiver's output $Y^n$ and the compressed version $\hat{Y}_1^n$ of the relay's output $Y_1^n$. We now derive the achievability of the capacity \[ C(R_0) = \max_{p(x)} \min\{ I(X; Y, Y_1), I(X;Y)+ R_0 \} \] from an algebraic reduction of the achievable rate given by \eqref{eq:wz1} and \eqref{eq:wz2}. First observe that, because of the deterministic relationship $Y_1 = f(X, Y)$, we have \[ I(X; \hat{Y}_1 | Y) \ge I(Y_1; \hat{Y}_1 | Y). \] Also note that, for any triple $(X,Y,Y_1)$, if $H(Y_1|Y) > R_0$, there exists a distribution $p(\hat{y}_1|y_1)$ such that $(X,Y) \to Y_1 \to \hat{Y}_1$ and $I(Y_1;\hat{Y}_1|Y) = R_0$. Henceforth, maximums are taken over joint distributions of the form $p(x)p(y,y_1|x)p(\hat{y}_1|y_1)$ with $Y_1 = f(X,Y)$. 
We have \begin{align} R^*(R_0) &= \sup \{I(X; Y, \hat{Y}_1) : I(Y_1;\hat{Y}_1 | Y) \le R_0\} \notag\\ &\ge \sup \{ I(X; Y, \hat{Y}_1) : I(Y_1;\hat{Y}_1 | Y) \le R_0, H(Y_1|Y) > R_0\}\notag\\ &\ge \sup \{ I(X; Y, \hat{Y}_1) : I(Y_1;\hat{Y}_1 | Y) = R_0, H(Y_1|Y) > R_0\} \notag\\ &= \sup \{ I(X; Y) + I(X; \hat{Y}_1 | Y) : I(Y_1;\hat{Y}_1 | Y) = R_0, H(Y_1|Y) > R_0 \} \notag\\ &\ge \sup \{ I(X; Y) + I(Y_1; \hat{Y}_1 | Y) : I(Y_1;\hat{Y}_1 | Y) = R_0, H(Y_1|Y) > R_0 \} \notag\\ &= \sup \{ I(X; Y) + R_0 : I(Y_1;\hat{Y}_1 | Y) = R_0, H(Y_1|Y) > R_0 \} \notag\\ &= \max \{I(X;Y) + R_0 : H(Y_1|Y) > R_0 \}. \notag \end{align} On the other hand, \begin{align} R^*(R_0) &= \sup \{I(X; Y, \hat{Y}_1) : I(Y_1;\hat{Y}_1 | Y) \le R_0\} \notag \\ &\ge \sup \{ I(X; Y, \hat{Y}_1) : I(Y_1;\hat{Y}_1 | Y) \le R_0, H(Y_1|Y) \le R_0\} \notag\\ &\ge \sup \{ I(X; Y, \hat{Y}_1) : \hat{Y}_1 = Y_1, H(Y_1|Y) \le R_0\} \notag \\ &= \sup \{ I(X; Y, Y_1) : \hat{Y}_1 = Y_1, H(Y_1|Y) \le R_0\} \notag\\ &= \max \{ I(X; Y, Y_1) : H(Y_1|Y) \le R_0\}. \notag \end{align} Thus, we have \begin{align*} R^*(R_0) &\ge \max_{p(x): H(Y_1|Y) > R_0} I(X;Y) + R_0 \intertext{and} R^*(R_0) &\ge \max_{p(x): H(Y_1|Y) \le R_0} I(X; Y, Y_1), \intertext{and therefore,} R^*(R_0) &\ge \max_{p(x)} \min\{ I(X;Y) + R_0,\; I(X; Y, Y_1) \}. \end{align*} In words, ``compress-and-forward'' achieves the capacity. \section{Discussion: Random Binning vs.\@ Random Covering} \label{sec:discuss} It is rather surprising that both ``hash-and-forward'' and ``compress-and-forward'' optimally convey the relay information to the receiver, especially because of the dual nature of compression (random covering) and hashing (random binning). (And the hashing in ``hash-and-forward'' should be distinguished from the hashing in Wyner--Ziv source coding.) The example in Figure~\ref{fig:bsc} illuminates the difference between the two coding schemes. 
\begin{figure}[h] \begin{center} \input{bsc.pstex_t} \end{center} \caption{Binary symmetric channel with rate-limited state information at receiver.} \label{fig:bsc} \end{figure} Here the binary input $X \in \{0,1\}$ is sent over a binary symmetric channel with cross-over probability $p$, or equivalently, the channel output $Y \in \{0,1\}$ is given as \[ Y = X + S \pmod{2}, \] where the binary additive noise $S \sim \textrm{Bern}(p)$ is independent of the input $X$. With no information on $S$ available at the transmitter or the receiver, the capacity is \[ C(0) = 1 - H(p). \] Now suppose there is an intermediate node which observes $S$ and ``relays'' that information to the decoder through a side channel of rate $R_0$. Since $S = X + Y$ is a deterministic function of $(X,Y)$, Theorem~\ref{thm:main} applies and we have \[ C(R_0) = 1 - H(p) + R_0 \] for $0 \le R_0 \le H(p).$ There are two ways of achieving the capacity. First, hashing. The relay hashes the entire binary space $\{0,1\}^n$ into $2^{nR_0}$ bins, then sends the bin index $b(S^n)$ of $S^n$ to the decoder. The decoder checks whether a specific codeword $X^n(w)$ is jointly typical with the received output $Y^n$ and then whether $S^n(w) = X^n(w) + Y^n$ matches the received bin index. Next, covering. The relay compresses the state sequence $S^n$ using a binary lossy source code of rate $R_0$. More specifically, we use the standard backward channel for the binary rate distortion problem (see Figure~\ref{fig:dist}): \[ S = \hat{S} + U. \] \begin{figure}[h] \begin{center} \vspace*{12pt} \input{dist.pstex_t} \end{center} \caption{Backward channel for the binary rate distortion problem.} \label{fig:dist} \end{figure} Here $\hat{S} \in \{0,1\}$ is the reconstruction symbol and $U \sim \textrm{Bern}(q)$ is independent of $\hat{S}$ (and $X$) with parameter $q$ satisfying \[ R_0 = I(S; \hat{S}) = H(p) - H(q). \] Thus, using $n R_0$ bits, the ultimate receiver can reconstruct $\hat{S}^n$. 
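The single-letter quantities behind this covering scheme can be checked by exact enumeration of the binary joint distribution (a minimal sketch; the values $p = 0.2$ and $q = 0.1$ are illustrative, with $R_0 = H(p) - H(q)$):

```python
from itertools import product
from math import log2

def H(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p, q = 0.2, 0.1            # illustrative; q < p gives R_0 = H(p) - H(q) > 0
R0 = H(p) - H(q)

# Joint pmf of (X, Y, S_hat): X ~ Bern(1/2), S ~ Bern(p), U ~ Bern(q)
# independent, with Y = X xor S and S_hat = S xor U (backward channel).
pmf = {}
for x, s, u in product((0, 1), repeat=3):
    pr = 0.5 * (p if s else 1 - p) * (q if u else 1 - q)
    key = (x, x ^ s, s ^ u)
    pmf[key] = pmf.get(key, 0.0) + pr

def marg(keep):
    """Marginal pmf over the listed coordinate positions."""
    out = {}
    for k, pr in pmf.items():
        kk = tuple(k[i] for i in keep)
        out[kk] = out.get(kk, 0.0) + pr
    return out

def ent(dist):
    return -sum(pr * log2(pr) for pr in dist.values() if pr > 0)

# I(X; Y, S_hat) = H(X) + H(Y, S_hat) - H(X, Y, S_hat)
I = ent(marg((0,))) + ent(marg((1, 2))) - ent(pmf)

# The decoder's rate is at least 1 - H(q) = 1 - H(p) + R_0:
assert I >= 1 - H(p) + R0 - 1e-9
```

The inequality reflects the data-processing step used next: $X + U = (X+S) + (S+U)$ is computable from the pair $(Y, \hat{S})$.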
Finally, decoding $X^n \sim \textrm{Bern}(1/2)$ based on $(Y^n, \hat{S}^n)$, we can achieve the rate \begin{align*} I(X; Y, \hat{S}) &= I(X; X + S, S + U)\\ &\ge I(X; X + U) \\ &= 1 - H(q)\\ &= 1 - H(p) + R_0, \end{align*} where the inequality holds since $X + U = (X+S) + (S+U)$ can be computed from the pair. In summary, the optimal relay can partition its received signal space into either random bins or Hamming spheres. The situation is somewhat reminiscent of that of lossless block source coding. Suppose $\{X_i\}$ is independent and identically distributed (i.i.d.) $\sim \textrm{Bern}(p)$. Here are two basic methods of compressing $X^n$ into $nH(p)$ bits with asymptotically negligible error. \begin{enumerate} \item[1)] \emph{Hashing.} The encoder simply hashes $X^n$ into one of $2^{n H(p)}$ indices. With high probability, there is a unique typical sequence with matching hash index. \item[2)] \emph{Enumeration}~\cite{Cover1973}. The encoder enumerates $2^{n H(p)}$ typical sequences. Then $n H(p)$ bits are required to give the enumeration index of the observed typical sequence. With high probability, the given sequence $X^n$ is typical. \end{enumerate} While these two schemes are apparently unrelated, they are both extreme cases of the following coding scheme. \begin{enumerate} \item[3)] \emph{Covering with hashing.} By fixing $p(\hat{x}|x)$ and generating independent sequences $\hat{X}^n(i),$ $i = 1,\ldots, 2^{nI(X;\hat{X})},$ each i.i.d.\@ $\sim p(\hat{x})$, we can induce a set of $2^{n I(X;\hat{X})}$ coverings for the space of typical $X^n$'s. For each cover $\hat{X}^n(i)$, there are $\approx 2^{n H(X|\hat{X})}$ sequences that are jointly typical with $\hat{X}^n(i)$. Therefore, by hashing $X^n$ into one of $2^{nH(X|\hat{X})}$ hash indices and sending it along with the cover index, we can recover a typical $X^n$ with high probability. This scheme requires $n(I(X;\hat{X}) + H(X|\hat{X})) = nH(p)$ bits. \end{enumerate} Now if we take $\hat{X}$ independent of $X$, then we have the case of hashing only. 
On the other hand, if we take $\hat{X} = X$, then we have enumeration only, in which case the covers are Hamming spheres of radius zero. It is interesting to note that the combination scheme works under any $p(\hat{x}|x)$. Thus motivated, we combine ``hash-and-forward'' with ``compress-and-forward'' in the next section. \section{Compress, Hash, and Forward} \label{sec:third} Here we show that a combination of ``compress-and-forward'' and ``hash-and-forward'' can achieve the capacity \[ C(R_0) = \max_{p(x)} \min\{ I(X; Y, Y_1),\enspace I(X;Y)+ R_0 \} \] for the setup in Theorem~\ref{thm:main}. We first fix an \emph{arbitrary} conditional distribution $p(\hat{y}_1|y_1)$ and generate $2^{n(I(Y_1;\hat{Y}_1)+\epsilon)}$ sequences $\hat{Y}_1^n(i),$ $i = 1,2,\ldots,2^{n(I(Y_1;\hat{Y}_1)+\epsilon)},$ each i.i.d.\@ $\sim p(\hat{y}_1)$. Then, with high probability, a typical $Y_1^n$ has a jointly typical cover $\hat{Y}_1^n(Y_1^n)$. (If there is more than one, pick the one with the smallest index. If there is none, assign $\hat{Y}_1^n(1)$.) There are two cases to consider, depending on our choice of $p(\hat{y}_1|y_1)$ (and the input codebook distribution $p(x)$). First suppose \begin{equation} \label{eq:hat-r0} I(Y_1; \hat{Y}_1|Y) \ge R_0. \end{equation} If we treat $\hat{Y}_1^n(Y_1^n)$ as the relay output, $\hat{Y}_1^n$ is a deterministic function of $Y_1^n$ and thus of $(X^n, Y^n)$. Therefore, we can use ``hash-and-forward'' on $\hat{Y}_1^n$ sequences. (Markov lemma~\cite{Berger1978} justifies treating $\hat{Y}_1^n(Y_1^n)$ as the output of the memoryless channel $p(y, \hat{y}_1|x)$.) This implies that we can achieve \[ R^*(R_0) = \min\{ I(X; Y) + R_0, \enspace I(X; Y, \hat{Y}_1) \}. \] But from \eqref{eq:hat-r0} and the functional relationship between $Y_1$ and $(X, Y)$, we have \begin{align*} I(X; Y, \hat{Y}_1) &= I(X; Y) + I(X; \hat{Y}_1|Y)\\ &\ge I(X; Y) + I(Y_1; \hat{Y}_1 | Y) \\ &\ge I(X; Y) + R_0. 
\end{align*} Therefore, \[ R^*(R_0) = I(X;Y) + R_0, \] which is achieved by the above ``compress-hash-and-forward'' scheme with $p(x)$ and $p(\hat{y}_1|y_1)$ satisfying \eqref{eq:hat-r0}. Alternatively, suppose \begin{equation} \label{eq:hat-r0-2} I(Y_1; \hat{Y}_1|Y) \le R_0. \end{equation} Then, we can easily achieve the rate $I(X; Y, \hat{Y}_1)$ by the ``compress-and-forward'' scheme. The rate $R_0 \ge I(Y_1;\hat{Y}_1|Y)$ suffices to convey $\hat{Y}_1^n$ to the ultimate receiver. But we can do better by using the remaining $\Delta = R_0 - I(Y_1; \hat{Y}_1|Y)$ bits to further hash $Y_1^n$ itself. (This hashing of $Y_1^n$ should be distinguished from that of Wyner--Ziv coding which bins $\hat{Y}_1^n$ codewords.) By treating $(Y, \hat{Y}_1)$ as a new ultimate receiver output and $Y_1$ as the relay output, ``hash-and-forward'' on top of ``compress-and-forward'' can achieve \begin{equation} R^*(R_0) = \min \{ I(X; Y, \hat{Y}_1) + \Delta, \enspace I(X; Y, \hat{Y}_1, Y_1)\}. \label{eq:r0-intermediate} \end{equation} Since \begin{align*} I(X; Y, \hat{Y}_1) + \Delta &= I(X; Y, \hat{Y}_1) - I(Y_1; \hat{Y}_1|Y) + R_0 \\ &\ge I(X; Y, \hat{Y}_1) - I(X; \hat{Y}_1|Y) + R_0 \\ &= I(X; Y) + R_0 \end{align*} and \[ I(X; Y, \hat{Y}_1, Y_1) = I(X; Y, Y_1), \] the achievable rate in \eqref{eq:r0-intermediate} reduces to \[ R^*(R_0) = \min \{ I(X; Y) + R_0, \enspace I(X; Y, Y_1) \}. \] Thus, by maximizing over input distributions $p(x)$, we can achieve the capacity for either case \eqref{eq:hat-r0} or \eqref{eq:hat-r0-2}. It should be stressed that our combined ``compress-hash-and-forward'' is optimal, regardless of the covering distribution $p(\hat{y}_1|y_1)$. In other words, any covering (geometric partitioning) of $Y_1^n$ space achieves the capacity if properly combined with hashing (nongeometric partitioning) of the same space. 
In particular, taking $\hat{Y}_1 = Y_1$ leads to ``hash-and-forward'' while taking the optimal covering distribution $p^*(\hat{y}_1|y_1)$ for \eqref{eq:wz1} and \eqref{eq:wz2} in Section~\ref{sec:second} leads to ``compress-and-forward''. \section{Ahlswede--Han Conjecture} In this section, we show that Theorem~\ref{thm:main} confirms the following conjecture by Ahlswede and Han~\cite{Ahlswede--Han1983} on the capacity of channels with rate-limited state information at the receiver, for the special case in which the state is a deterministic function of the channel input and the output. First, we discuss the general setup considered by Ahlswede and Han, as shown in Figure~\ref{fig:ah}. \begin{figure}[h] \begin{center} \vspace*{12pt} \input{ah.pstex_t} \end{center} \caption{Channel with rate-limited state information at the decoder.} \label{fig:ah} \end{figure} Here we assume that the channel $p(y|x,s)$ has independent and identically distributed state $S^n$ and the decoder can be informed about the outcome of $S^n$ via a separate communication channel at a fixed rate $R_0$. Ahlswede and Han offered the following conjecture on the capacity of this channel. \begin{conjecture}[{Ahlswede--Han~\cite[Section V]{Ahlswede--Han1983}}] The capacity of the state-dependent channel $p(y|x,s)$ as depicted in Figure~\ref{fig:ah} with rate-limited state information available at the receiver via a separate communication link of rate $R_0$ is given by \begin{equation} \label{eq:ah} C(R_0) = \max I(X; Y | \hat{S}), \end{equation} where the maximum is over all joint distributions of the form $p(x)p(s) p(y|x,s) p(\hat{s}|s)$ such that \[ I(S; \hat{S} | Y) \le R_0 \] and the auxiliary random variable $\hat{S}$ has cardinality $|\hat{\mathcal{S}}| \le |\mathcal{S}| + 1$. \end{conjecture} It is immediately seen that this problem is a special case of a relay channel with a noiseless link (Figure~\ref{fig:det-relay}). 
Indeed, we can identify the relay output $Y_1$ with the channel state $S$ and identify the relay channel $p(y, y_1|x) = p(y_1|x) p(y|x,y_1)$ with the state-dependent channel $p(s) p(y|x,s)$. Thus, the channel with rate-limited state information at the receiver is a relay channel in which the relay output $Y_1$ is independent of the input $X$. The binary symmetric channel example in Section~\ref{sec:discuss} corresponds to this setup. Now when the channel state $S$ is a deterministic function of $(X, Y)$, for example, $S = X + Y$ as in the binary example in Section~\ref{sec:discuss}, Theorem~\ref{thm:main} proves the following. \begin{theorem} For the state-dependent channel $p(y|x,s)$ with state information available at the decoder via a separate communication link of rate $R_0$, if the state $S$ is a deterministic function of the channel input $X$ and the channel output $Y$, then the capacity is given by \begin{equation} \label{eq:cap-ah} C(R_0) = \max_{p(x)} \min\{I(X;Y) + R_0, \; I(X;Y,S)\}. \end{equation} \end{theorem} Our analysis of the ``compress-and-forward'' coding scheme in Section~\ref{sec:second} shows that \eqref{eq:ah} reduces to \eqref{eq:cap-ah}, confirming the Ahlswede--Han conjecture when $S$ is a function of $(X,Y)$. On the other hand, our proof of achievability (Section~\ref{sec:first}) shows that ``hash-and-forward'' is equally efficient for informing the decoder of the state information. \section{Concluding Remarks} Even a completely oblivious relay can boost the capacity to the cut-set bound if the relay reception is fully recoverable from the channel input and the ultimate receiver output. And there are two basic alternatives for the optimal relay function---one can either compress the relay information as in the traditional method of ``compress-and-forward,'' or simply hash the relay information. In fact, infinitely many relaying schemes that combine hashing and compression can achieve the capacity. 
While this development depends heavily on the deterministic nature of the channel, it reveals an interesting role of hashing in communication.
\section{Introduction.} NMR experiments suffer from a range of errors which can be traced back to imperfections in the radiofrequency control fields used to manipulate spin systems. In particular, the \Bone\ field is not infinitely strong, leading to off-resonance errors, and is not uniform across the sample, leading to pulse strength errors. One particularly successful approach for tackling such errors is the use of composite pulses \cite{Levitt1979,Levitt1986}, in which a single rotation is replaced by a series of rotations chosen such that in the absence of errors the combined propagator implements the desired rotation, while in the presence of small errors the effects of the errors in individual rotations largely cancel one another. Here we principally consider the problem of tackling pulse strength errors without introducing additional sensitivity to off-resonance errors. Corresponding errors arise in experimental implementations of quantum information processing (QIP) \cite{Bennett2000,Jones2001a} where they are known as systematic errors \cite{Cummins2000}, to distinguish them from random errors arising from decoherence, and where there is considerable interest in performing extremely accurate unitary transformations on quantum systems in the presence of realistic experimental errors. Again composite pulses provide a potential solution, but there are many significant differences between the design of composite pulses for quantum computing and conventional NMR. Firstly, many traditional composite pulses are designed to act correctly only on particular initial states, and so are not suitable for quantum computing, where fully compensating (Class A) composite pulses \cite{Levitt1986} have to be used \cite{Cummins2000,Cummins2003,McHugh2005,Xiao2006,Jones2011,Ichikawa2011,Ichikawa2012,Bando2013,Merrill2012}. 
(While fully compensating pulses can be used for conventional NMR experiments, where they have the advantage that they can be immediately substituted for any simple pulse without the need for detailed analysis \cite{Levitt1986}, their use is normally excessive and better results can normally be obtained using pulses tailored to the relevant problem.) Similarly it is necessary to use genuinely fully compensating pulses, rather than using phase cycling or gradient coherence pathway selection to suppress imperfections \cite{Odedra2012}. Secondly, although composite pulses to suppress off-resonance errors, originally developed by Tycko \cite{Tycko1985}, have been used for QIP in NMR \cite{Cummins2000,Cummins2003} and SQUID \cite{Collin2004} experiments, tackling off-resonance errors is (with the exception of dynamic decoupling, discussed below) rarely of much interest in quantum computing, as the aim is to control a previously well characterised spin system, rather than to investigate an unknown system. It is, however, generally desirable to ensure that the sensitivity to off-resonance errors is not greatly increased, and in particular is not significantly worse than that of a simple pulse. Thirdly, it is useful to distinguish between composite pulses designed for single use, and pulses designed for use in a pulse train, such as decoupling sequences, where it may be desirable to combine relatively simple composite pulses with phase cycles and supercycles \cite{Levitt1981,Shaka1987a}. In quantum computing most work has concentrated on single pulses, but in the field of dynamic decoupling \cite{Viola1999,Uhrig2007}, which is likely to play an important role in the design of quantum memories, the use of techniques adapted from conventional decoupling sequences has proved fruitful \cite{Souza2011b,Souza2012}. Here we will only consider the case of single pulses. 
Finally, quantum computing generally requires very accurate control in the presence of small or moderate errors, rather than moderately accurate control with very large errors. (These two effects sometimes go together, but sometimes do not). For this reason it is common to express the fidelity of the pulse as a Taylor series as described below, and to seek to suppress as many error terms as possible. In particular there has been significant interest in the possibility of developing families of arbitrarily accurate composite pulses \cite{Brown2004,Brown2005}, although actually designing pulses by such methods is normally a complex problem \cite{Alway2007}. Here we will show that it is straightforward to find robust $180^\circ$ pulses with arbitrary tolerance of pulse strength errors, but extending these ideas to pulses with other rotation angles is more challenging and only partially solved. As with other composite pulses developed in NMR these can be extended to apply to spin--spin couplings \cite{Jones2003b} and can also be applied in a wide range of other quantum computing experiments \cite{Gulde2003,Morton2005a,Clayden2012,Ivanov2012}. We will restrict ourselves to $\theta_0$ rotations about the $x$-axis of the Bloch sphere; rotations about other axes can be trivially derived from these by offsetting all pulse phases by the desired phase angle. \section{Pulse fidelity} It is convenient to characterize simple and composite pulses by their propagator fidelity \cite{Jones2011} \begin{equation} \mathcal{F}=\left|\textrm{tr}(VU^\dag)\right|/2 \label{eq:fiddef} \end{equation} where $V$ is the propagator of the pulse in the presence of errors and $U$ is the propagator for the ideal pulse (taking the absolute value is necessary in general to neglect the effects of global phases, and dividing by two simply normalises the fidelity). Equivalently pulses can be categorized by their \textit{infidelity}, defined by $\mathcal{I}=1-\mathcal{F}$. 
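As an aside (this sketch is ours and not part of the original analysis), the propagator fidelity is easily evaluated numerically. The following Python fragment, which assumes NumPy and uses function names of our own choosing, compares Eq.~\ref{eq:fiddef} for a simple pulse with a pulse strength error against the analytic value $|\cos(\epsilon\theta/2)|$ quoted below.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pulse(theta, phi):
    """Propagator of a theta rotation about the axis at angle phi in the xy plane."""
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def fidelity(V, U):
    """Propagator fidelity |tr(V U^dag)|/2."""
    return abs(np.trace(V @ U.conj().T)) / 2

# simple pi pulse with a 10% pulse strength error
theta, eps = np.pi, 0.1
F = fidelity(pulse(theta * (1 + eps), 0.0), pulse(theta, 0.0))
```

For a single pulse the computed fidelity agrees with $|\cos(\epsilon\theta/2)|$ to machine precision, and its infidelity with the quadratic Taylor term $\epsilon^2\theta^2/8$ to order $\epsilon^4$.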
Much work to date has concentrated on pulse strength errors, which arise from errors in the strength of the \Bone\ field. These are closely related to pulse length errors, and this name is often used instead, but a careful distinction between the two errors can and should be made \cite{Levitt1982}. In particular the two cases are not quite the same in the presence of simultaneous off-resonance effects, as it is important to distinguish between measuring the off-resonance field as a fraction of the nominal \Bone\ field and as a fraction of the actual \Bone\ field \cite{Shaka1983}. In reality \textit{both} types of error can be present: while naturally occurring errors usually arise from inhomogeneity in the \Bone\ field strength, many experimental demonstrations of composite pulses introduce additional deliberate errors by varying the pulse length \cite{Levitt1982}. In the absence of off-resonance errors, pulse strength errors replace the rotation angle $\theta$ of each pulse by $\theta'=(1+\epsilon)\,\theta$, where $\epsilon$ is the fractional error. The pulse fidelity can then be conveniently expanded as a Taylor series in $\epsilon$; for a simple pulse this leads to \begin{equation} \mathcal{F}=|\cos(\epsilon\theta/2)|=1-\epsilon^2\theta^2/8+O(\epsilon^4). \end{equation} The BB1 pulse sequence, originally developed by Wimperis \cite{Wimperis1994}, is effective in such cases; it replaces a $\theta_0$ pulse by the pulse sequence \begin{equation} \pi_\beta\,2\pi_{3\beta}\,\pi_\beta\,\theta_0\qquad\beta=\pm\arccos(-\theta/4\pi) \label{eq:BB1} \end{equation} written with time running from left to right. (The choice of sign is arbitrary but must be made consistently.) The initial three pulse sequence performs no overall rotation in the absence of errors, but in the presence of pulse strength errors generates a pure error term which largely cancels the error in the final $\theta_0$ pulse, suppressing both second and fourth order infidelities.
While Wimperis originally applied the correction sequence at the start of the composite pulse, as shown above, it can instead be placed at the end of the sequence, or indeed in the center of the sequence by splitting the $\theta_0$ pulse into two equal halves \cite{Cummins2003}. We will initially only consider the case of $180^\circ$ pulses; these are used in NMR QIP to implement \NOT\ gates and in more conventional NMR to implement spin echoes and their generalisation, dynamic decoupling \cite{Viola1999,Uhrig2007,Souza2011b,Souza2012}. The fidelity of a BB1 $180^\circ$ pulse is \begin{equation} \mathcal{F}_\textit{BB1}=1-\epsilon^6\times5\,\pi^6/1024+O(\epsilon^8). \end{equation} Note that as usual a propagator infidelity of order $2n$ corresponds to an error term in the underlying propagator of order $n$ \cite{Jones2011}, and so the sixth-order sequence BB1 removes the first and second order errors from the propagator. BB1 has found widespread use in quantum computing experiments, but there is interest in designing Class A pulses with even greater error suppression. (While arbitrary precision methods have been used to design candidate pulses \cite{Brown2004,Brown2005,Alway2007}, those explored to date have not proved useful as their theoretically superior performance is not confirmed in experiments \cite{Xiao2006}.) The most naive approach, constructing a higher order composite pulse by iteratively replacing each component pulse with a composite pulse, is rarely successful. This is not surprising, as in general there is no reason to expect that the residual errors in a composite pulse should have the same form as the errors in a simple pulse, and so it is unlikely that the very same approach will work at different orders of iteration. Instead methods for generating higher order pulses by recursive or iterative procedures (e.g., \cite{Shaka1983,Levitt1983,Tycko1984}) normally work directly on the error propagator \cite{Odedra2012b}.
An unusual partial exception is the \Sn{n} family of composite inversion pulses \cite{Tycko1984}; although this family was designed by considering the error propagator, the resulting sequences are in fact simply naive iterative expansions. However the \Sn{n} family are not good Class A pulses, and so are not generally suitable for quantum computing. More generally Tycko \textit{et al.} have considered the use of naive iterative expansions to design inversion and excitation pulses \cite{Tycko1985a}, and have discussed the conditions under which this approach is successful. As we will discuss below, various families of Class A composite pulses described by Wimperis can be generated by a simple iterative process, and thus provide a simple route to arbitrary precision $180^\circ$ pulses for use in quantum computing. Such pulses are also moderately robust at very large pulse strength errors, and so could in principle find applications in experiments using very inhomogeneous \Bone\ fields, such as phase-modulated rotating-frame imaging \cite{Allis1991} or single-sided NMR \cite{Blumich2008}, as well as effectively removing the need to calibrate RF pulse widths in walkup NMR experiments. However in conventional NMR experiments it is likely that tailored approaches will be more effective. While many suitable pulses have been developed using conventional approaches, we also draw attention to the robust inversion sequences developed by Torosov \cite{Torosov2011}, while in walkup NMR shaped pulses developed using optimal control theory \cite{Khaneja2005}, such as the Fanta4 pulses \cite{Nimbalkar2013} may also be of interest. \section{Antisymmetric composite pulses} The Wimperis error correcting sequence is time symmetric, and the BB1 sequence as a whole will similarly be symmetric if the correction sequence is placed in the middle of the main pulse.
As we will see later such symmetric pulses have certain advantages, especially in the presence of off-resonance errors, and many composite pulses have this property. Here we instead consider time antisymmetric composite $180^\circ$ pulses, which can have particular advantages in some conventional NMR experiments \cite{Wimperis1991,Odedra2012}. As we shall see, it is possible to combine many of the best features of the two approaches, by first designing antisymmetric composite pulses and subsequently converting these into corresponding symmetric pulses, or \textit{vice-versa}. The design of antisymmetric composite $\pi$ pulses rests on the observation that, in the absence of errors, any time antisymmetric sequence of $\pi$ pulses such as \begin{equation} \pi_{-\phi_n}\dots\pi_{-\phi_1}\,\pi_0\,\pi_{\phi_1}\dots\pi_{\phi_n} \label{eq:antisym} \end{equation} is equivalent to a simple $\pi_0$ pulse \cite{Wimperis1991}. Thus the phases of the outer pulses, $\{\phi_1,\,\phi_2,\dots\phi_n\}$, can be adjusted at will, with the possibility of creating error tolerance. For the case of a five pulse sequence, with two controllable phase angles, the system can be solved analytically, and the second order infidelity term can be removed by choosing $\phi_1=\pm\phi$ and $\phi_2=\pm3\phi$ with $\phi=\arccos(-1/4)\approx104.5^\circ$. Once again the choice of sign is arbitrary, but must be made consistently, and in this case negating the phases corresponds to replacing the composite pulse with its time reversed form. In passing we note that such antisymmetric pulses have the useful property that the rotation axis of the overall propagator always lies in the $xz$ plane \cite{Tycko1985a}, a fact which has considerable importance in some conventional NMR applications \cite{Odedra2012}, but which is not directly relevant in QIP.
Intriguingly these phases are identical to the BB1 phase angles for a $\pi$ pulse, and just like BB1 this pulse also suppresses the fourth order infidelity; indeed the fidelity of the antisymmetric pulse is \textit{identical} to that of the BB1 $\pi$ pulse. This is not coincidental, but rather can be traced to two simple facts. Firstly the fidelity definition, Eq.~\ref{eq:fiddef}, is invariant under cyclic reordering, and secondly we can use the identity \begin{equation} \pi_0\,\theta_\phi\equiv\theta_{-\phi}\pi_0 \label{eq:piswap} \end{equation} for a $\pi_0$ pulse and any other pulse. Thus \begin{align} \mathcal{F}&=\bigl|\textrm{tr}\left(\pi'_{-\phi_2}\,\pi'_{-\phi_1}\,\pi'_0\,\pi'_{\phi_1}\,\pi'_{\phi_2}\,\pi_0\right)\bigr|/2\\ &=\bigl|\textrm{tr}\left(\pi'_0\,\pi'_{\phi_1}\,\pi'_{\phi_2}\,\pi_0\,\pi'_{-\phi_2}\,\pi'_{-\phi_1}\right)\bigr|/2\\ &=\bigl|\textrm{tr}\left(\pi'_0\,\pi'_{\phi_1}\,\pi'_{\phi_2}\,\pi'_{\phi_2}\,\pi'_{\phi_1}\,\pi_0\right)\bigr|/2 \end{align} where $\pi'=\pi(1+\epsilon)$ as before. Any antisymmetric sequence of five $\pi$ pulses can be converted into the same form as a BB1 pulse, and so the optimal error correcting sequences will occur at the same phase angles. The ambiguity in the sign of the arccos function in Eq.~\ref{eq:BB1} is now seen to correspond to the equivalence of an antisymmetric pulse sequence and its time-reversed form. This antisymmetric composite pulse was previously described by Wimperis \cite{Wimperis1991,Odedra2012}, who called it an \Fn1 pulse. It was originally derived by using a toggling frame description of an antisymmetric composite pulse to isolate the error term, and then removing this term to first order \cite{Wimperis1991}. \Fn1 is, however, simply the first member of the \Fn{n} family of pulses with rapidly growing error tolerance. Wimperis explicitly described \Fn2, the second member of the family, and gave an outline prescription for designing further members.
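This equivalence is easy to confirm numerically. In the following sketch (ours, assuming NumPy) the fidelity of the five-pulse antisymmetric sequence agrees with that of the BB1 $\pi$ pulse to machine precision over a range of errors.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PHI = np.arccos(-1 / 4)

def pulse(theta, phi):
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def fid(thetas, phis, eps):
    """Fidelity of a pulse train (time left to right) against an ideal pi_0."""
    V = I2
    for th, ph in zip(thetas, phis):
        V = pulse(th * (1 + eps), ph) @ V
    return abs(np.trace(V @ pulse(np.pi, 0.0).conj().T)) / 2

def fid_f1(eps):   # antisymmetric F1: five pi pulses
    return fid([np.pi] * 5, np.array([-3, -1, 0, 1, 3]) * PHI, eps)

def fid_bb1(eps):  # BB1 pi pulse of Eq. (BB1)
    return fid([np.pi, 2 * np.pi, np.pi, np.pi], [PHI, 3 * PHI, PHI, 0.0], eps)
```

The two fidelities coincide for every $\epsilon$, exactly as the cyclic reordering argument requires.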
Wimperis designed the family using an iterative expansion of the error propagator, but it is instead possible to proceed by a naive iterative expansion, effectively replacing each $\pi$ pulse in an \Fn1 sequence by an \Fn1 sequence. It is, however, necessary to replace alternate pulses by ``standard'' \Fn1 sequences and their time-reversed equivalents. This naive iterative approach to the design of Class A composite pulses is not usually successful, but its application to time antisymmetric sequences of $\pi$ pulses, and the need to alternate standard and time-reversed variants, can be understood by considering the composite pulse in the toggling frame \cite{Wimperis1991}. By contrast with other approaches \cite{Brown2004,Brown2005,Alway2007} the \Fn{n} family provides a simple and direct method for explicitly constructing Class A composite pulses with arbitrary precision: the whole family can be easily described by listing the phases of successive $\pi$ pulses, with the iterative form \begin{equation} \phi_{n+1}=\{-3\phi+\phi_n,\,-\phi-\phi_n,\,\phi_n,\,\phi-\phi_n,\,3\phi+\phi_n\} \label{eq:phin} \end{equation} with $\phi_0=0$ and $\phi=\arccos(-1/4)$. Thus \begin{equation} \phi_1=\{-3,\,-1,\,0,\,1,\,3\}\times\phi \end{equation} reproducing \Fn1, and \begin{multline} \phi_2=\{-6,\,-4,\,-3,\,-2,\,0,\,2,\,0,\,-1,\,-2,\,-4,\,-3,\\ -1,\,\,0,\,1,\,3,\,4,\,2,\,1,\,0,\,-2,\,0,\,2,\,3,\,4,\,6\}\times\phi \end{multline} describes the 25 pulses making up \Fn2. In this notation the direct implementation of a \NOT\ gate as a single pulse can be considered as the family member \Fn0. Note that the alternating pattern of plus and minus signs in Eq.~\ref{eq:phin} corresponds to the alternate use of standard and negative phase (time-reversed) pulses; this is necessary because the phase of the error term in the toggling frame is alternately positive and negative \cite{Wimperis1991}, and the phase steps of the component pulses must be alternately negated to follow this pattern. 
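The recursion of Eq.~\ref{eq:phin} is compactly implemented by concatenating shifted and sign-alternated copies of the current phase list; the sketch below (ours, assuming NumPy) reproduces the \Fn1 and \Fn2 phase lists and illustrates the error suppression of \Fn2.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PHI = np.arccos(-1 / 4)

def f_phases(n):
    """Phase list of Fn: each iteration replaces every pi pulse by a standard
    or time-reversed F1 block, following Eq. (phin)."""
    L = np.array([0.0])
    for _ in range(n):
        L = np.concatenate([-3 * PHI + L, -PHI - L, L, PHI - L, 3 * PHI + L])
    return L

def pulse(theta, phi):
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def infid(phis, eps):
    """Infidelity of a train of nominal pi pulses against an ideal pi_0."""
    V = I2
    for ph in phis:
        V = pulse(np.pi * (1 + eps), ph) @ V
    return 1 - abs(np.trace(V @ pulse(np.pi, 0.0).conj().T)) / 2
```

The 25 phases produced for \Fn2 agree with the list quoted above, and even at $\epsilon=0.1$ the \Fn2 infidelity is below numerical precision while a simple pulse loses more than $1\%$ of fidelity.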
The \Fn2 composite pulse suppresses all infidelity terms below eighteenth order, with \begin{equation} \mathcal{F}_{\Fn2}=1-\epsilon^{18}\times625\,\pi^{18}/2147483648+O(\epsilon^{20}), \end{equation} and the performance continues to improve rapidly for higher members of the family, as listed in Table~\ref{tab:F}. (These results were obtained by direct calculation with pulse sequences of the form of Eq.~\ref{eq:phin}.) In general \Fn{n} is a sequence of $5^n$ pulses with a fidelity given empirically by \begin{equation} \mathcal{F}_{\Fn{n}}=1-\epsilon^{2q}\times5^{(q-1)/2}\times\pi^{2q}\times2^{(1-7q)/2}+O(\epsilon^{2q+2}) \label{eq:FFn} \end{equation} with $q=3^n$ (this expression has not been checked beyond \Fn5, which suppresses all infidelity terms below $486^\textit{th}$ order). For the higher members of the family the first non-zero coefficient is very large, but has little effect at moderate $\epsilon$ as it is combined with an \textit{extremely} high power of $\epsilon$. Note that when evaluating the properties of sequences beyond \Fn2 it is essential to use either fully analytic methods or very high precision numerical calculations in order to capture fully the complex pattern of cancelations that leads to very high order error tolerances. \begin{table} \begin{tabular}{lllll} \hline $n$&pulses&order&coefficient&numerical value\\ \hline 0&1&$\epsilon^2$&$\pi^2/2^3$&$1.234$\\ 1&5&$\epsilon^6$&$5\times\pi^6/2^{10}$&$4.694$\\ 2&25&$\epsilon^{18}$&$5^4\times\pi^{18}/2^{31}$&$258.6$\\ 3&125&$\epsilon^{54}$&$5^{13}\times\pi^{54}/2^{94}$&$4.324\times10^7$\\ 4&625&$\epsilon^{162}$&$5^{40}\times\pi^{162}/2^{283}$&$2.021\times10^{23}$\\ \hline \end{tabular} \caption{Summary of the performance of \Fn{n} pulses for moderate values of $n$.
\Fn{n} pulses are only perfect at $\epsilon=0$ but near this point show infidelities of the orders shown.} \label{tab:F} \end{table} \begin{figure} \includegraphics{fig1.eps} \caption{Fidelity $\mathcal{F}$ and infidelity $\mathcal{I}$ as a function of pulse strength error $\epsilon$ for the \Fn{n} family of pulses from \Fn0 (plotted in red) to \Fn3 (plotted in black).}\label{fig:Fplot} \end{figure} The performance of some of the lower members of the \Fn{n} family is shown in Fig.~\ref{fig:Fplot}, clearly showing the very broad error tolerance achievable. Theoretically these pulses allow astonishingly precise rotations to be performed, but in practice such extremely precise rotations are unlikely to be either required or achievable, and for this reason the infidelity plots shown here concentrate on infidelities around $10^{-5}$. These practical limits arise because no experimental implementation will ever be completely perfect in every regard. In particular, it is implausible that the required phase shifts can be produced precisely, and it is likely that relaxation during pulses will become a problem with the very long pulses required for high members of the family. A similar approach can be used to describe the \Gn{n} family of composite pulses \cite{Wimperis1991}. Rather than seeking to create a composite pulse with high fidelity over a central region, these pulses are designed to give perfect fidelity at certain particular error values, in the hope that this will lead to moderate error tolerance over a wide range of intermediate values. (Error tolerance of this kind is not likely to be particularly useful for quantum information processing, but may find applications in more conventional NMR.) The first non-trivial member, \Gn1, is obtained by forming an antisymmetric sequence of $\pi$ pulses as before and then choosing the phases to maximise the pulse fidelity at $\epsilon=\pm0.5$. 
As before the problem can be solved analytically, giving the sequence of phases \begin{equation} \gamma_1=\{1,\,-2,\,0,\,2,\,-1\}\times\gamma \end{equation} with $\gamma=\pi/4$, a sequence which Wimperis called \Gn1. The antisymmetric structure of the pulse guarantees that it will also be perfect at $\epsilon=0$, and the infidelity is quadratic in $\epsilon$ around these fixed points, with a shallow minimum in the fidelity around $\epsilon\approx0.28$. Converting this sequence into a time symmetric form gives the Wimperis composite pulse BB2 \cite{Wimperis1994}. There is nothing special about the choice of $\epsilon=\pm0.5$ for the points of ideal fidelity, and close relatives of \Gn1 can be obtained for any value up to $0.8$. However beyond $0.5$ the depth of the fidelity minimum increases rapidly, removing the desired broad error tolerance, while for small values the composite pulse sequence becomes very similar to \Fn1. We will, therefore, stick to the original choice of $0.5$. Once again this composite pulse can be expanded iteratively to get higher members of the \Gn{n} family. The rule for pulse phases is entirely analogous to that for \Fn{n}, and is \begin{equation} \gamma_{n+1}=\{\gamma+\gamma_n,\,-2\gamma-\gamma_n,\,\gamma_n,\,2\gamma-\gamma_n,\,-\gamma+\gamma_n\} \label{eq:gamn} \end{equation} with $\gamma_0=0$ and $\gamma=\pi/4$. \Gn2 has perfect fidelity at $\epsilon=0$ and $\epsilon=0.5$, just like \Gn1, but also has perfect fidelity at two additional points at $\epsilon\approx\pm0.786$. In general \Gn{n} has all the same perfect points as \Gn{n-1}, and a pair of additional points further out, as shown in Table~\ref{tab:G} and Fig.~\ref{fig:Gplot}. Beyond \Gn1 there is no simple formula for the location of these points, which must be found numerically.
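Under the same assumptions the \Gn{n} recursion, Eq.~\ref{eq:gamn}, can be checked numerically: the sketch below (ours, assuming NumPy) verifies the perfect points of \Gn1 and \Gn2, and locates the additional \Gn2 point near $\epsilon\approx0.786$ by a simple grid search.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
GAMMA = np.pi / 4

def g_phases(n):
    """Phase list of Gn, built by the recursion of Eq. (gamn)."""
    L = np.array([0.0])
    for _ in range(n):
        L = np.concatenate([GAMMA + L, -2 * GAMMA - L, L, 2 * GAMMA - L, -GAMMA + L])
    return L

def pulse(theta, phi):
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def infid(phis, eps):
    V = I2
    for ph in phis:
        V = pulse(np.pi * (1 + eps), ph) @ V
    return 1 - abs(np.trace(V @ pulse(np.pi, 0.0).conj().T)) / 2

# grid search for the outer perfect point of G2
es = np.linspace(0.7, 0.87, 1701)
infs = np.array([infid(g_phases(2), e) for e in es])
e_star = es[np.argmin(infs)]
```

The search reproduces the tabulated value $\epsilon\approx0.786$ without any analytic input.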
\begin{table} \begin{tabular}{lllllll} \hline $n$&pulses&\multicolumn{5}{l}{$\epsilon$}\\ \hline 0&1&$0.000$\\ 1&5&$0.000$&$\pm0.500$\\ 2&25&$0.000$&$\pm0.500$&$\pm0.786$\\ 3&125&$0.000$&$\pm0.500$&$\pm0.786$&$\pm0.911$\\ 4&625&$0.000$&$\pm0.500$&$\pm0.786$&$\pm0.911$&$\pm0.963$\\ \hline \end{tabular} \caption{\Gn{n} pulses exhibit perfect fidelity at $2n+1$ points with the approximate values shown.} \label{tab:G} \end{table} \begin{figure} \includegraphics{fig2.eps} \caption{Fidelity $\mathcal{F}$ and infidelity $\mathcal{I}$ as a function of pulse strength error $\epsilon$ for the \Gn{n} family of pulses from \Gn0 (red) to \Gn3 (black). Note that only the middle three infidelity minima occur at rational values of $\epsilon$, and the outer minima are not well digitised.}\label{fig:Gplot} \end{figure} \section{Narrowband and passband pulses} In addition to describing two symmetric families of broadband sequences, BB1 which corresponds to \Fn1 and BB2 which corresponds to \Gn1, Wimperis also described \cite{Wimperis1994} two families of narrowband sequences (which only give effective pulses when $\epsilon$ is close to zero) and two families of passband sequences (which act as error correcting composite pulses for moderate values of $\epsilon$, but act as moderately robust \textsc{identity} operators for values of $\epsilon$ near $\pm1$). As usual we will consider the members corresponding to a nominal $\pi_0$ rotation. These families are all time-symmetric, but as before we can convert them into time-antisymmetric sequences of $\pi$ pulses. Having done this we can use the iterative approach to form higher members of the same family, and then (if desired) reverse the transformation to create time-symmetric sequences. We will only discuss the narrowband family derived from NB1, which we call \Nn{n}, and the passband family derived from PB1, which we call \Pn{n}.
Applications of passband pulses in solid state NMR have recently been described \cite{Odedra2012a}, and we will demonstrate an application of narrowband pulses below. An NB1 $\pi_0$ pulse takes the form \begin{equation} \pi_\nu\,2\pi_{-\nu}\,\pi_\nu\,\pi_0\qquad\nu=\arccos(-1/4) \label{eq:NB1} \end{equation} and can be rearranged to get the antisymmetric pulse \Nn{1}, defined by the phases \begin{equation} \nu_1=\{1,\,-1,\,0,\,1,\,-1\}\times\nu. \end{equation} (A related pulse was previously discussed by Tycko \textit{et al.} \cite{Tycko1985a}, but they only considered its properties as an inversion pulse.) As usual the higher members are defined by \begin{equation} \nu_{n+1}=\{\nu+\nu_n,\,-\nu-\nu_n,\,\nu_n,\,\nu-\nu_n,\,-\nu+\nu_n\}. \label{eq:nun} \end{equation} All these pulses have second order infidelity around $\epsilon=0$, with the coefficient of this error term rising with increasing $n$ \begin{equation} \mathcal{F}_{\Nn{n}}=1-\epsilon^{2}\times\pi^{2}/8\times(15/4)^n+O(\epsilon^{4}), \end{equation} creating a pulse which is only effective over increasingly narrow regions, as shown in Fig.~\ref{fig:Nplot}. \begin{figure} \includegraphics{fig3.eps} \caption{Fidelity $\mathcal{F}$ as a function of pulse strength error $\epsilon$ for the \Nn{n} family of pulses from \Nn0 (plotted in red) to \Nn3 (plotted in black).}\label{fig:Nplot} \end{figure} A PB1 $\pi_0$ pulse takes the form \begin{equation} 2\pi_\psi\,4\pi_{-\psi}\,2\pi_\psi\,\pi_0\qquad\psi=\arccos(-1/8) \label{eq:PB1} \end{equation} with a fidelity compared with an ideal $\pi$ pulse of \begin{equation} \mathcal{F}_{PB1}=1-\epsilon^6\times63\,\pi^6/1024+O(\epsilon^8) \end{equation} and a fidelity compared with an \textsc{identity} gate of \begin{equation} \mathcal{F}=1-(1+\epsilon)^4\times63\,\pi^4/512+O((1+\epsilon)^6) \end{equation} where the expansion of the fidelity has been taken around $\epsilon=-1$.
PB1 has slightly worse suppression of small errors than BB1, but gives very little excitation in regions of extreme pulse strength error. PB1 can be rearranged to give the \Pn1 sequence \begin{equation} \pi_{-\psi}\,\pi_{-\psi}\,\pi_{\psi}\,\pi_{\psi}\,\pi_0\,\pi_{-\psi}\,\pi_{-\psi}\,\pi_{\psi}\,\pi_{\psi} \label{eq:P1} \end{equation} which has exactly the same fidelity but is now described as a time-antisymmetric sequence of 9 $\pi$ pulses. (This pulse was previously discussed by Odedra and Wimperis \cite{Odedra2012a} under the name $\textrm{APB}_1$.) This can be expanded iteratively to obtain \Pn2 and so on. The \Pn2 sequence of 81 $\pi$ pulses has a fidelity \begin{equation} \mathcal{F}_{\Pn2}=1-\epsilon^{18}\times3^8\,7^4\,\pi^{18}/2^{31}+O(\epsilon^{20}) \end{equation} with the expected eighteenth order behaviour. However the fidelity to an \textsc{identity} gate around $\epsilon=-1$ remains fourth order, with the same coefficient. The behaviour of higher members of the family is shown in Fig.~\ref{fig:Pplot}, demonstrating increasingly robust passband behaviour. \begin{figure} \includegraphics{fig4.eps} \caption{Fidelity as a function of pulse strength error $\epsilon$ for the \Pn{n} family of composite pulses from \Pn0 (red) to \Pn3 (black).}\label{fig:Pplot} \end{figure} \section{Combining pulses} The results for F and G pulses discussed above are largely implicit in the earlier work of Wimperis \cite{Wimperis1991}, although he did not give an explicit form for \Fn{n} and \Gn{n}, or derive the remarkable error tolerances shown here, but the results for \Nn{n} are new, as are those for \Pn{n} for the sequences beyond \Pn1. Our approach is not, however, confined to these four families.
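Before proceeding, the narrowband and passband constructions above can be sanity-checked in the same way. The sketch below (ours, assuming NumPy) confirms that \Nn1 reproduces the NB1 fidelity exactly, that the quadratic infidelity coefficient grows as $(15/4)^n$, that \Pn1 matches PB1, and that \Pn1 behaves as an approximate \textsc{identity} near $\epsilon=-1$.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
NU = np.arccos(-1 / 4)   # NB1 phase
PSI = np.arccos(-1 / 8)  # PB1 phase

def pulse(theta, phi):
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def train(thetas, phis, eps):
    V = I2
    for th, ph in zip(thetas, phis):
        V = pulse(th * (1 + eps), ph) @ V
    return V

def fid(V, U):
    return abs(np.trace(V @ U.conj().T)) / 2

def n_phases(n):
    """Nn phase list, built by the recursion of Eq. (nun)."""
    L = np.array([0.0])
    for _ in range(n):
        L = np.concatenate([NU + L, -NU - L, L, NU - L, -NU + L])
    return L

P1 = np.array([-1, -1, 1, 1, 0, -1, -1, 1, 1]) * PSI  # Eq. (P1)
PI0 = pulse(np.pi, 0.0)

def infid_pi(phis, eps):
    """Infidelity of a train of nominal pi pulses against an ideal pi_0."""
    return 1 - fid(train([np.pi] * len(phis), phis, eps), PI0)
```

The identity-gate fidelity near $\epsilon=-1$ is computed as $|\textrm{tr}(V)|/2$, i.e.\ the fidelity against the unit propagator.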
All these composite pulses are built up iteratively, by replacing each $\pi$ pulse in a sequence by an error corrected $\pi$ pulse, but it is not necessary to use the same error-corrected pulse at each stage of the iteration, and the phase patterns described above can be applied more generally. (Similar ideas have been explored in the design of composite inversion and excitation pulses \cite{Shaka1984,Tycko1985a}.) In particular, consider the GF pulse obtained by applying the G pattern of phases to \Fn1 \begin{equation} \psi_\textit{GF}=\{\gamma+\phi_1,\,-2\gamma-\phi_1,\,\phi_1,\,2\gamma-\phi_1,\,-\gamma+\phi_1\} \end{equation} to give a composite pulse made up from 25 $\pi$ pulses. In common with \Fn1 this pulse has sixth order infidelity around $\epsilon=0$, and in common with \Gn1 this pulse has perfect fidelity at two additional values of $\epsilon$, although in this case they are moved out beyond 0.5, with quadratic error terms around these additional points. This results in good error tolerance over the central region, and moderate error tolerance over a much wider region as shown in Fig.~\ref{fig:FGFplot}. These new perfect points occur at $\epsilon\approx\pm0.720$ where the underlying \Fn1 pulse has a fidelity of $1/\sqrt{2}$, the same as the fidelity of a naive pulse at $\epsilon=0.5$. \begin{figure} \includegraphics{fig5.eps} \caption{Fidelity $\mathcal{F}$ and infidelity $\mathcal{I}$ as a function of pulse strength error $\epsilon$ for F (red), GF (green), FGF (blue) and FFGF (black) composite pulses.}\label{fig:FGFplot} \end{figure} In the same way the FG pulse can be obtained by applying the F pattern of phases to \Gn1 \begin{equation} \psi_\textit{FG}=\{-3\phi+\gamma_1,\,-\phi-\gamma_1,\,\gamma_1,\,\phi-\gamma_1,\,3\phi+\gamma_1\} \end{equation} to get another 25 pulse sequence. In common with \Gn1 this pulse exhibits perfect fidelity at $\epsilon=\pm0.5$, and in common with \Fn1 it has sixth order infidelity around $\epsilon=0$. 
However this pulse is also sixth order around $\epsilon=\pm0.5$, giving good error tolerance over a wide region. The process can be extended in the obvious way: for example the 125 pulse sequence FFG (or \Fn2G), obtained by applying the F phase pattern to FG, has perfect fidelity at $\epsilon=0$ and $\epsilon=\pm0.5$, and then shows eighteenth order infidelity around these three points. Similarly the sequence FGF has eighteenth order infidelity around $\epsilon=0$ and sixth order infidelity around $\epsilon\approx0.720$ as shown in Fig.~\ref{fig:FGFplot}. Any combination of F and G iterations can be applied: each application of G creates an additional pair of points at which the composite pulse has perfect fidelity, and each application of F triples the order of the infidelity around any existing perfect points. While the most obvious applications of this approach lie in the creation of broadband pulses, such as FGF, it is also possible to consider more exotic combinations. For example, applying the N pattern of phases to a G pulse gives a family of composite pulses which in effect select for three particular control field strengths. All such pulses show perfect fidelity at $\epsilon=0$ and $\epsilon=\pm0.5$, but as the N pattern of phases is repeatedly applied the fidelity away from these points is reduced, as shown in Fig.~\ref{fig:NGplot}. \begin{figure} \includegraphics{fig6.eps} \caption{Fidelity $\mathcal{F}$ as a function of pulse strength error $\epsilon$ for a G pulse (red), NG (green), \Nn{2}G (blue), and \Nn{3}G (black); the \Nn{3}G pulse is only effective near $\epsilon=0$ and $\epsilon=\pm0.5$.}\label{fig:NGplot} \end{figure} \section{Off resonance errors} This ability to sculpt the tolerance of pulse strength errors is remarkable, but it is important to check that it is not bought at the cost of increased sensitivity to other errors, in particular off-resonance errors.
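As a check on the combination rules of the previous section (a sketch, ours, assuming NumPy), the 25-pulse FG sequence can be assembled directly and its stated properties verified: perfect fidelity at $\epsilon=\pm0.5$ and high-order suppression around $\epsilon=0$.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PHI = np.arccos(-1 / 4)
GAMMA = np.pi / 4

# FG: apply the F pattern of phases to the G1 phase list
G1 = np.array([1, -2, 0, 2, -1]) * GAMMA
FG = np.concatenate([-3 * PHI + G1, -PHI - G1, G1, PHI - G1, 3 * PHI + G1])

def pulse(theta, phi):
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def infid(phis, eps):
    """Infidelity of a train of nominal pi pulses against an ideal pi_0."""
    V = I2
    for ph in phis:
        V = pulse(np.pi * (1 + eps), ph) @ V
    return 1 - abs(np.trace(V @ pulse(np.pi, 0.0).conj().T)) / 2
```

At $\epsilon=0.02$ the FG infidelity is already several orders of magnitude below that of a simple pulse.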
Note that we do not consider here the design of pulses which are simultaneously resistant to off-resonance and pulse-length errors; discussions of this problem can be found elsewhere \cite{Alway2007,Ichikawa2011,Souza2011b,Souza2012,Odedra2012b,Bando2013}. In the presence of simultaneous errors the propagator for a nominal $\theta_\phi$ pulse is \begin{equation} V=\exp(-\textrm{i}\theta[(1+\epsilon)(\sigma_x\cos\phi+\sigma_y\sin\phi)+f\sigma_z]/2) \end{equation} where $\epsilon$ is the pulse strength error as before, and $f$ is the \textit{off-resonance fraction} defined here as the ratio between the offset of the transition frequency from the pulse frequency and the nominal rotation frequency of the pulse in the absence of any errors. (Minor differences between the results shown here and those in previous work \cite{Cummins2003} reflect the fact that this earlier work considered pulse length errors rather than pulse strength errors.) In the absence of pulse strength errors the pulse fidelity is given by \begin{equation} \mathcal{F}=1-f^2\times\sin^2(\theta/2)/2+O(f^4) \end{equation} so the pulse infidelity is largest when $\theta=\pi$, for which $\mathcal{I}\approx f^2/2$, and smallest when $\theta=2n\pi$, where the first order error term completely cancels out. In the absence of pulse strength errors the BB1 composite pulse has the same sensitivity to small off-resonance errors as a single pulse; this favourable behaviour can be traced to the fact that the time symmetric correction sequence is simply a $2\pi$ pulse inside another $2\pi$ pulse, and such nested sequences of $2\pi$ pulses have no first order error terms, so that the dominant source of infidelity is the main $\theta_0$ pulse. This is not true for \Fn{n} and \Gn{n} pulses: direct calculations indicate that each iteration of the F phase sequence increases the size of the quadratic infidelity term by a factor of 16, while each iteration of G increases the infidelity by $9+2\sqrt{2}\approx11.8$.
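The propagator with simultaneous errors is simply a rotation about a tilted axis, and can be written in closed form; the sketch below (ours, assuming NumPy) checks the quoted expansion for a simple pulse and the first-order cancellation for a $2\pi$ pulse.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pulse_err(theta, phi, eps, f):
    """exp(-i*theta*[(1+eps)(SX cos phi + SY sin phi) + f SZ]/2),
    evaluated as a rotation by theta*|n| about the unit axis n-hat."""
    n = np.array([(1 + eps) * np.cos(phi), (1 + eps) * np.sin(phi), f])
    a = np.linalg.norm(n)
    axis = (n[0] * SX + n[1] * SY + n[2] * SZ) / a
    return np.cos(theta * a / 2) * I2 - 1j * np.sin(theta * a / 2) * axis

def fid(V, U):
    return abs(np.trace(V @ U.conj().T)) / 2

f = 0.05
F_pi = fid(pulse_err(np.pi, 0.0, 0.0, f), pulse_err(np.pi, 0.0, 0.0, 0.0))
F_2pi = fid(pulse_err(2 * np.pi, 0.0, 0.0, f), pulse_err(2 * np.pi, 0.0, 0.0, 0.0))
```

For $\theta=\pi$ the infidelity is close to $f^2/2$, while for $\theta=2\pi$ it is smaller by roughly two orders of magnitude, reflecting the cancellation of the first order error term.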
Thus for an FG pulse, for example, the off-resonance infidelity is $\mathcal{I}\approx189.2f^2$. This greatly increased sensitivity to off-resonance errors might appear a critical flaw, as it would render the higher members of the F and G families completely unusable in practice, but fortunately it can be easily overcome. As noted above the BB1 composite $\pi$ pulse is very closely related to \Fn1, and can be obtained from it by moving the front part of the pulse sequence to the back with all the phases negated \begin{equation} \pi_{-3\phi}\,\pi_{-\phi}\,\pi_0\,\pi_\phi\,\pi_{3\phi}\longrightarrow \pi_0\,\pi_\phi\,\pi_{3\phi}\,\pi_{3\phi}\,\pi_\phi. \end{equation} As implemented above all the pulses \textit{before} the main $\pi_0$ pulse have been moved, but it is also possible to move this pulse as well, or to move \textit{half} of the pulse to the back, resulting in a fully time-symmetric sequence, which has the advantage that the fidelity is then a purely even function of $f$. This transformation can be applied to any time antisymmetric sequence of $\pi$ pulses, such as the F and G families, producing a nested sequence of $2\pi$ pulses: the extreme tolerance of pulse strength errors is retained, but now with little cost in sensitivity to off-resonance errors. The calculations above only apply in the absence of pulse strength errors, and it is necessary to consider the effects of simultaneous errors which lead to cross terms in the fidelity expression. These are best investigated numerically, as shown in Fig.~\ref{fig:BFGplot} which plots the fidelity of various simple and composite pulses over a wide range of errors. While the simple FG sequence shows much greater tolerance of pulse strength errors than BB1, this is achieved at the cost of greatly increased sensitivity to off-resonance errors. 
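The transformation and its consequences can be illustrated for \Fn1 (a sketch, ours, assuming NumPy): the pulse strength fidelity is unchanged by the rearrangement, while the quadratic off-resonance infidelity drops back to roughly that of a simple pulse.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
PHI = np.arccos(-1 / 4)

ANTI = np.array([-3, -1, 0, 1, 3]) * PHI  # antisymmetric F1
SYM = np.array([0, 1, 3, 3, 1]) * PHI     # front part moved to the back, phases negated

def pulse_err(theta, phi, eps, f):
    n = np.array([(1 + eps) * np.cos(phi), (1 + eps) * np.sin(phi), f])
    a = np.linalg.norm(n)
    axis = (n[0] * SX + n[1] * SY + n[2] * SZ) / a
    return np.cos(theta * a / 2) * I2 - 1j * np.sin(theta * a / 2) * axis

def infid(phis, eps, f):
    """Infidelity of a train of nominal pi pulses against an ideal pi_0."""
    V = I2
    for ph in phis:
        V = pulse_err(np.pi, ph, eps, f) @ V
    U = pulse_err(np.pi, 0.0, 0.0, 0.0)
    return 1 - abs(np.trace(V @ U.conj().T)) / 2
```

With pure pulse strength errors the two sequences are indistinguishable; with a pure off-resonance error the rearranged sequence is markedly less sensitive.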
In contrast the time symmetric FG sequence achieves the same pulse strength tolerance as the antisymmetric FG sequence with only a modest increase in sensitivity to off-resonance errors compared with a simple pulse. \begin{figure*}[t] \includegraphics{fig7.eps} \caption{Fidelity as a function of off-resonance error $f$ and pulse strength error $\epsilon$ for \NOT\ gates implemented using simple pulses and BB1, FG, and symmetric BFG composite pulses; contours are drawn at fidelities of 0.9, 0.99 and 0.999. Note that the subsidiary maxima in the FG and symmetric FG plots are not normally useful, and the key indicator is the size of the principal maximum in the centre of the plot.}\label{fig:BFGplot} \end{figure*} \section{Other pulse angles} With the exception of the BB1 pulse, Eq.~\ref{eq:BB1}, all the results described above only apply to $\pi_0$ pulses (\NOT\ gates), and it would obviously be desirable to extend these to a wider range of pulse angles (the extension to $\pi$ pulses with other phase angles is trivially achieved by offsetting all the phases in a $\pi_0$ composite pulse by the desired phase angle). In particular it would be helpful to extend these methods to $\pi/2$ pulses, both for applications in conventional NMR and in QIP. Note that in QIP the set of $\pi/2$ pulses with arbitrary phases is universal for single qubit gates, and in NMR systems a robust $\pi/2$ rotation can be extended by analogy to produce a controlled-\NOT\ gate \cite{Jones2003b}, and thus robust universal quantum logic. This process might seem simple, as the BB1 composite pulse, Eq.~\ref{eq:BB1}, works for any rotation angle, and the BB1 $\pi_0$ pulse can be converted into an \Fn1 pulse. Such optimism is, however, unjustified, as the identity in Eq.~\ref{eq:piswap} only applies for $\pi$ pulses, and so the transformation cannot be applied more generally.
Furthermore the key property of antisymmetric pulses of the form of Eq.~\ref{eq:antisym} only applies when the pulses are $\pi$ pulses. For example, the sequence \begin{equation} \theta_{-\phi_2}\,\theta_{-\phi_1}\,\theta_0\,\theta_{\phi_1}\,\theta_{\phi_2}\qquad\theta\ne\pi \label{eq:antisym90} \end{equation} is only equivalent to $\theta_0$ if $\phi_2=\phi_1+\pi$, and in general it is necessary to introduce similar relationships between the pulse phase angles. While we have been able to develop some low order antisymmetric $\pi/2$ pulses, for applications in QIP these sequences are inferior to the time symmetric sequences discussed below, and are not considered further here. We begin by rewriting the correction sequence at the start of the BB1 pulse, Eq.~\ref{eq:BB1}, in the general time-symmetric form \begin{equation} \pi_{\phi_1}\,\pi_{\phi_2}\,\pi_{\phi_2}\,\pi_{\phi_1} \label{eq:W1} \end{equation} which we refer to as a \Wn1 sequence. Clearly such a pulse can be summarised by listing the two phase angles, $\phi_1$ and $\phi_2$, which can be adjusted to optimise the fidelity of the combined pulse by removing terms in the Taylor series expansion. In particular it is possible to eliminate the second and fourth order error terms, and for a \Wn1 $90^\circ$ pulse this occurs at $\phi_2=3\phi_1$ and $\phi_1=\arccos(-1/8)$, the well known BB1 result \cite{Wimperis1994}. The process can then be extended by using larger numbers of adjustable phase angles and seeking to remove higher order terms. With only two adjustable phases it is straightforward to tackle the problem analytically, seeking to set Taylor series coefficients to zero, but as the number of pulses is increased this method becomes intractable. 
Instead we have adopted a numerical approach, effectively replacing the solution of simultaneous non-linear equations with a minimisation problem by seeking to minimise the sum of squares of the first $n$ coefficients: if this sum of squares can be reduced to a value indistinguishable from zero then the corresponding terms have been effectively eliminated from the series. (This approach is known to be unreliable in general \cite{NumRec1992}, but seems to work fairly well in this particular case.) Similar ideas have been explored elsewhere \cite{Torosov2011,Ivanov2012}. Our numerical searches show that adding a third adjustable phase does not help; in particular the sixth order error cannot be eliminated. However adding a fourth adjustable phase permits \textit{both} the sixth order and the eighth order terms to be eliminated with a sequence of eight $\pi$ pulses. This pattern continues throughout the range of searches we have conducted: the \Wn{n} sequence of $4n$ pulses with $2n$ adjustable phases \begin{equation} \pi_{\phi_1}\,\pi_{\phi_2}\,\dots\,\pi_{\phi_{2n}}\,\pi_{\phi_{2n}}\,\dots\,\pi_{\phi_2}\,\pi_{\phi_1} \label{eq:Wn} \end{equation} permits all the error terms up to order $4n$ to be removed. For the case of \Wn1 there is only a single solution (the well known BB1 composite pulse), neglecting the trivial variant formed by negating all the pulse phases, but for \Wn2 there are two distinct solutions. These appear in theory to be effectively identical; in particular the tenth order coefficient has the same size in the two cases. At \Wn3 the search becomes computationally challenging, but once again two distinct solutions have been located. All these solutions for $90^\circ$ pulses are listed in Table~\ref{tab:Wn90}. Preliminary searches indicate that at least one \Wn4 sequence exists, but this has not been accurately located and it is not yet known whether any other sequences exist.
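As a concrete check of the \Wn1 case, the sketch below (ours; placing the correction sequence before the naive $90^\circ$ pulse is an assumed convention) uses $\phi_1=\arccos(-1/8)\approx97.2^\circ$ and $\phi_2=3\phi_1$, and compares the composite infidelity with that of a naive pulse over a range of pulse strength errors.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pulse(theta, phi, eps=0.0):
    a = theta * (1 + eps)                 # pulse strength error only
    ax = sx * np.cos(phi) + sy * np.sin(phi)
    return np.cos(a / 2) * I2 - 1j * np.sin(a / 2) * ax

def infid(U, V):
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

phi1 = np.arccos(-1 / 8)                  # 97.18 deg, cf. 97.2 in the W1 row
phi2 = 3 * phi1                           # 291.55 deg, cf. 291.5

def w1_90(eps):
    """W1 correction pi_phi1 pi_phi2 pi_phi2 pi_phi1, then a naive 90_0."""
    V = I2
    for th, ph in [(np.pi, phi1), (np.pi, phi2),
                   (np.pi, phi2), (np.pi, phi1), (np.pi / 2, 0.0)]:
        V = pulse(th, ph, eps) @ V        # later pulses multiply on the left
    return V

U90 = pulse(np.pi / 2, 0.0)
for eps in (0.05, 0.1, 0.2):
    print(eps, infid(U90, pulse(np.pi / 2, 0.0, eps)), infid(U90, w1_90(eps)))
```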
\begin{table} \begin{tabular}{rrrrrrr} \hline $90^\circ$&$\phi_1$&$\phi_2$&$\phi_3$&$\phi_4$&$\phi_5$&$\phi_6$\\ \hline \Wn1& 97.2&291.5&&&&\\ \Wn2& 84.3&162.0&345.5&286.7&&\\ \Wn2&132.3&339.1& 26.4&222.2&&\\ \Wn3& 22.0&186.0& 89.3&319.2&178.2&325.7\\ \Wn3& 79.9&119.2&257.3& 81.0&308.4&286.9\\ \hline \end{tabular} \caption{Composite $90^\circ$ pulses: the \Wn{n} correction sequence contains $4n$ pulses with $2n$ adjustable phases, and the phases (in degrees) listed permit all error terms up to order $4n$ to be eliminated when the correction sequence is combined with a naive $90^\circ$ pulse.} \label{tab:Wn90} \end{table} This approach can of course be extended from $90^\circ$ pulses to other angles; in particular it is possible to find phases for $180^\circ$ pulses, as listed in Table~\ref{tab:Wn180}. \begin{table} \begin{tabular}{rrrrrrr} \hline $180^\circ$&$\phi_1$&$\phi_2$&$\phi_3$&$\phi_4$&$\phi_5$&$\phi_6$\\ \hline \Wn1&104.5&313.4&&&&\\ \Wn2& 79.2&193.4& 24.9&307.5&&\\ \Wn2&130.0& 1.5& 56.7&259.9&&\\ \Wn3&341.9&147.9&100.5&355.9&207.3&339.4\\ \Wn3& 69.5&141.7&289.4&121.4&350.1&307.3\\ \hline \end{tabular} \caption{Composite $180^\circ$ pulses: the phases (in degrees) listed permit all error terms up to order $4n$ to be eliminated when the \Wn{n} correction sequence is combined with a naive $180^\circ$ pulse.} \label{tab:Wn180} \end{table} These composite $180^\circ$ pulses can then be converted into antisymmetric forms, and interestingly these appear to have the same property as \Fn{1} pulses, allowing them to be recursively nested. However this point is not explored further here. \section{Experiments} Experiments were carried out in Oxford using a Varian Unity Inova system with a nominal \nuc{1}{H} frequency of 600\,MHz to study a sample of HOD doped with $\textrm{GdCl}_3$ to reduce the relaxation times. 
Experiments in Okayama used a homebuilt system with a \nuc{19}{F} frequency of 376\,MHz to study a sample of perfluorobenzene dissolved in deuterated benzene. These two systems differ substantially in experimental detail, potentially allowing the effects of different experimental errors to be distinguished. As our systems should not suffer from significant off-resonance errors, it is possible to introduce pulse rotation angle errors either by adjusting the pulse length (pulse-length errors) or by adjusting the \Bone\ field strength (pulse strength errors) \cite{Levitt1982}. We chose to introduce deliberate errors by adjusting the pulse length, but as discussed below the system also contains underlying pulse strength errors. We begin by demonstrating the use of composite $90^\circ$ pulses based on \Wn{n} correction sequences; for simplicity we show their performance as excitation pulses, but as Class A composite pulses the performance should be very similar for other initial states. The \Bone\ field strength on the Oxford system was carefully adjusted to 25\,kHz, corresponding to a nominal $90^\circ$ pulse length of $10\,\mu\textrm{s}$, and then naive and composite pulses were applied using pulse lengths varying between $1$ and $19\,\mu\textrm{s}$. The signal intensity was determined by integration, and as the vertical scale is largely arbitrary all intensities were normalised by dividing them by the intensity from a naive pulse with $\epsilon=0$. The results are shown in Fig.~\ref{fig:Wndata}, with a vertical expansion in the lower panel; note that in this figure the lines simply join the experimental data points and are plotted to guide the eye. All experiments used the first of the two choices for the \Wn2 and \Wn3 sequences; initial experiments (data not shown) indicated that very similar results were obtained for the other two sequences.
\begin{figure} \includegraphics{fig8.eps} \caption{Experimental signal intensity (arbitrary units) as a function of pulse length error $\epsilon$ for naive (red circles), \Wn1 (green triangles), \Wn2 (blue squares), and \Wn3 (black diamonds) $90^\circ$ pulses used as excitation pulses. The lines simply join the experimental points and are plotted to guide the eye. The lower panel shows a vertical expansion of the upper panel.}\label{fig:Wndata} \end{figure} It is clear that the \Wn{n} composite pulses perform largely as expected, with wider error tolerance for higher values of $n$. However closer examination of the results, aided by the vertical expansion in the lower panel, shows two further effects. Firstly the overall signal intensity is larger with composite pulses than with naive pulses, and secondly all four plots lean to the right, showing higher intensities for positive values of $\epsilon$ in comparison with the equivalent negative values. Both of these effects can be ascribed to additional pulse strength errors arising from \Bone\ inhomogeneity. The visible increase in overall intensity indicates that a significant fraction of the sample experiences a \Bone\ field substantially different from the central value, while the lean to the right indicates that these regions experience a \textit{smaller} \Bone\ field than the bulk of the sample. This behaviour is expected for traditional RF coil designs. Attempts were made to reduce these effects in experiments on the Oka\-yama system by replacing the conventional sample tube by a small spherical sample in the centre of the RF coil, which should reduce the RF inhomogeneity over the sample. The results, shown in Fig.~\ref{fig:WOkayamadata}, indicate partial success. In particular the increase in overall intensity is reduced, but a lean to the right remains visible. 
\begin{figure} \includegraphics{fig9.eps} \caption{Experimental signal intensity (arbitrary units) as a function of pulse length error $\epsilon$ for naive (red circles), \Wn1 (green triangles), \Wn2 (blue squares), and \Wn3 (black diamonds) $90^\circ$ pulses used as excitation pulses with a small spherical sample. }\label{fig:WOkayamadata} \end{figure} To make further progress we used narrowband $\pi$ pulses to select signal from regions of the sample with a highly homogeneous \Bone\ field strength. The double pulse field gradient spin echo (DPFGSE) sequence \cite{Hwang1995} is effective in selecting for spins which experience a $180^\circ$ pulse at the centre of each spin echo. The classic use is to suppress water signals by combining a hard $\pi_x$ pulse with a soft $\pi_{-x}$ pulse which selects water transitions, but it can be used more widely to select subsets of spins. Here we use an \Nn{2} composite $\pi$ pulse which only excites spins where the \Bone\ field strength is close to its nominal value, and compare the results with those from an \Fn{2} pulse, which should pass a very wide range of field strengths. (This is simpler than comparing the \Nn{2} filtered data with spectra acquired without a selective filter, as the filtration process also causes signal loss due to the effects of diffusion \cite{Stejskal1965}.) These filters were explored using a nutation sequence to measure RF inhomogeneity, as shown in Fig.~\ref{fig:pwFN}. The nominal \Bone\ nutation frequency was set to 25\,kHz by adjusting the RF power until a 40\,$\mu$s pulse corresponded to a nutation of $360^\circ$, and the nutation time was varied between 0 and 127\,$\mu$s. The \Nn2 data is well described by a single narrow Gaussian distribution of nutation frequencies, with a central value of 25\,kHz and a width of 2.5\,kHz, but the \Fn2 data needs a second Gaussian component, centred around 20\,kHz with a width of about 10\,kHz.
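A minimal model of such a nutation curve, assuming (our assumption, not a description of the actual fitting code) that the observed signal is $\sin(2\pi\nu t)$ averaged over a Gaussian distribution of nutation frequencies, has a simple closed form: a Gaussian-damped sinusoid.

```python
import numpy as np

def nutation_numeric(t, nu0, sigma, m=4001):
    """Average of sin(2 pi nu t) over nutation frequencies
    nu ~ N(nu0, sigma^2), by direct quadrature."""
    nu = np.linspace(nu0 - 8 * sigma, nu0 + 8 * sigma, m)
    w = np.exp(-(nu - nu0)**2 / (2 * sigma**2))
    w /= w.sum()                         # normalised distribution weights
    return (w[None, :] * np.sin(2 * np.pi * nu[None, :] * t[:, None])).sum(axis=1)

def nutation_closed(t, nu0, sigma):
    """Closed form: exp(-2 pi^2 sigma^2 t^2) sin(2 pi nu0 t)."""
    return np.exp(-2 * (np.pi * sigma * t)**2) * np.sin(2 * np.pi * nu0 * t)

nu0, sigma = 25.0, 2.5                   # kHz: the narrow N2 component above
t = np.linspace(0.0, 0.127, 128)         # ms, cf. the 0-127 us nutation times
err = np.max(np.abs(nutation_numeric(t, nu0, sigma) - nutation_closed(t, nu0, sigma)))
print(err)
```

A two-component fit of the kind described above then simply adds a second such term with its own centre frequency, width and amplitude.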
This low frequency broad component leads to the deviations from ideal behaviour seen in Fig.~\ref{fig:Wndata} and discussed above. \begin{figure} \includegraphics{fig10.eps} \caption{Nutation experiments to measure RF inhomogeneity. In the upper panel the experimental signal intensity (arbitrary units) is plotted as a function of pulse length with \Fn{2} (red) and \Nn{2} (green) DPFGSE filtration sequences. The points show experimental values while lines show fitted curves. The \Nn2 data is well described by a single narrow Gaussian component centred at 25\,kHz, while the \Fn2 data requires a two Gaussian fit, revealing a broad component centred around 20\,kHz. The lower panel shows the distribution of nutation frequencies corresponding to these fits.}\label{fig:pwFN} \end{figure} Finally we show the effect of combining a \Wn{n} excitation pulse with a \Nn{2} filtration sequence. In this case the aggravating pulse strength errors arising from RF inhomogeneity should be well suppressed, leaving only the artificially imposed pulse length errors. The result, shown in Fig.~\ref{fig:WN2data}, confirms this, with the experimental data points lying very close to theoretical predictions. \begin{figure} \includegraphics{fig11.eps} \caption{Experimental signal intensity (arbitrary units) as a function of pulse length error $\epsilon$ for naive (red circles), \Wn1 (green triangles), \Wn2 (blue squares), and \Wn3 (black diamonds) $90^\circ$ pulses used as excitation pulses followed by the use of a DPFGSE \Nn2 sequence to select a small region with high \Bone\ homogeneity. The points show experimental values while lines show the theoretical curves. The lower panel shows a vertical expansion of the upper panel.}\label{fig:WN2data} \end{figure} Note that in this plot the theoretical curves are simply plotted directly on the same scale as the experimental data, rather than being optimised to fit this data, showing the very close match between theory and experiment.
The remaining mismatch is most severe for the \Wn3 experiment, and may partly reflect spin--spin relaxation during this long composite pulse. Remaining errors are likely to be due to imperfections in the phase-control of the RF source, and slow drifts in the power of the RF amplifier. \section{Conclusions} The ability to interconvert time symmetric and time antisymmetric $\pi$ pulses allows us to interpret previous results within a common framework, and then to extend these results to produce new analytic families of composite $\pi$ pulses with unprecedented tolerance of pulse strength errors at little or no cost in sensitivity to off-resonance errors. This approach is currently confined to $\pi$ pulses, and cannot be applied to pulses with other rotation angles. It is, however, possible to use numerical methods to design symmetric composite pulses for arbitrary angles, and several families have been located. Experimental implementations confirm that these pulses work very much as expected, but it is necessary to perform these experiments carefully in order to avoid confusion arising from multiple sources of pulse strength error. \section*{Acknowledgments} We thank the UK EPSRC and BBSRC for financial support. We are grateful to Stephen Jones, Ben Rowland and Steve Wimperis for helpful conversations. We thank two reviewers for useful suggestions. \bibliographystyle{elsarticle-num}
\section{Introduction}\label{SecIntroduction} Color-kinematic duality (BCJ duality), which was suggested by Bern, Carrasco and Johansson \cite{Bern:2008qj, Bern:2010ue}, provides a deep insight into the study of scattering amplitudes. According to BCJ duality, full color-dressed Yang-Mills amplitudes are expressed by summing over trivalent (Feynman-like) diagrams, each of which is associated with a color factor and a kinematic factor (BCJ numerator) sharing the same algebraic properties ({\it i.e.}, antisymmetry and Jacobi identity). Once the color factors are replaced by the BCJ numerators of a second copy of the Yang-Mills amplitude, we obtain a gravity amplitude. A significant consequence of BCJ duality is that tree-level color-ordered Yang-Mills amplitudes satisfy BCJ relations, in which the coefficients of the amplitudes are functions of Mandelstam variables. Together with the earlier proposed Kleiss-Kuijf \cite{Kleiss:1988ne} (KK) relations, BCJ relations reduce the number of independent color-ordered Yang-Mills amplitudes to $(n-3)!$ (see the field theory proofs \cite{Feng:2010my,Chen:2011jxa} and string theory approaches \cite{BjerrumBohr:2009rd,Stieberger:2009hq}). Though BCJ relations were first discovered in Yang-Mills theory, they actually hold for amplitudes in many other theories, including bi-scalar theory and the nonlinear sigma model (NLSM) \cite{Chen:2013fya}, which can be uniformly described in the framework of the CHY formulation \cite{Cachazo:2013gna,Cachazo:2013hca,Cachazo:2013iea,Cachazo:2014xea}. It was pointed out that the fundamental BCJ relations can be regarded as the most elementary ones, since the minimal basis \cite{Feng:2010my} and a set of more general BCJ relations \cite{BjerrumBohr:2009rd,Chen:2011jxa} are generated by them \cite{Ma:2011um}. Nevertheless, in some situations, one may encounter BCJ relations which have much more complicated forms than known ones.
Such relations can be neither directly understood as a result of the fundamental relations nor straightforwardly proved by Britto-Cachazo-Feng-Witten \cite{Britto:2004ap,Britto:2005fq} recursion or the CHY formula. Therefore, a new approach to nontrivial BCJ relations is required. Apart from the BCJ relations for amplitudes, the construction of BCJ numerators in various theories is also an important direction. In NLSM, there are three distinct constructions of BCJ numerators, all of which are polynomial functions of Mandelstam variables. (i) A construction based on the off-shell extended BCJ relation (see \cite{Chen:2013fya}) was suggested by Fu and one of the current authors \cite{Du:2016tbc} (DF). In the DF approach, the set of half-ladder numerators with the first and the last lines fixed (which serves as a basis of BCJ numerators) are expressed by proper combinations of momentum kernels \cite{Kawai:1985xq,Bern:1998ug,BjerrumBohr:2010ta,BjerrumBohr:2010zb,BjerrumBohr:2010yc,BjerrumBohr:2010hn}. Since the off-shell extended BCJ relation \cite{Chen:2013fya} was proved by the use of Berends-Giele recursion (Feynman rules), the DF type BCJ numerators can be essentially regarded as a result of Feynman rules. (ii) A much more compact construction of BCJ numerators in NLSM, which was based on Abelian Z theory, was provided by Carrasco, Mafra and Schlotterer (CMS) \cite{Carrasco:2016ldy}. A half-ladder numerator of CMS type is elegantly expressed by only one momentum kernel. (iii) In a more recent work \cite{Du:2017kpo}, a graphic approach to polynomial BCJ numerators (DT type numerators) in NLSM, which was based on the CHY formula, was proposed by Teng and one of the current authors. All three distinct constructions given above must produce the same scattering amplitudes in NLSM, but this equivalence has not yet been proven explicitly.
In this paper, we derive highly nontrivial generalized BCJ relations (gauge invariance induced relations) by imposing gauge invariance conditions and a CHY-inspired dimensional reduction on the recently discovered graphic expansion of color-ordered Einstein-Yang-Mills (EYM) amplitudes \cite{Du:2017kpo}. The expansion of EYM amplitudes was first proposed in \cite{Stieberger:2016lng} and further studied in \cite{Nandan:2016pya,delaCruz:2016gnm,Schlotterer:2016cxa,Fu:2017uzt,Chiodaroli:2017ngp,Teng:2017tbo,Du:2017kpo,Du:2017gnh}. In the series of works \cite{Fu:2017uzt,Teng:2017tbo,Du:2017kpo,Du:2017gnh}, the general recursive expansion for all tree-level EYM amplitudes and the graphic expansion of EYM amplitudes in terms of pure Yang-Mills ones were established. When the gauge invariance condition for the so-called fiducial graviton is imposed, the recursive expansion of EYM amplitudes induces relations between those amplitudes with fewer gravitons. Equivalently, when the graphic expansion \cite{Du:2017kpo} is considered, such a gauge invariance induced relation implies a relation between color-ordered Yang-Mills amplitudes whose coefficients are functions of both momenta and polarizations. To induce amplitude relations where all coefficients are functions of Mandelstam variables, one should convert all polarizations in the coefficients into momenta. In the current paper, we propose gauge invariance induced relations based on the following two crucial observations: (i) one can impose the gauge invariance conditions for several gravitons simultaneously; (ii) the gauge invariance conditions are independent of dimensions. With these two critical observations in hand, and inspired by the dimensional reduction in the CHY formula \cite{Cachazo:2014xea}, we define $(d+d)$-dimensional polarizations and momenta whose nonzero components are expressed by only $d$-dimensional momenta.
Imposing the gauge invariance in $(d+d)$ dimensions on the graphic expansion \cite{Du:2017kpo} of single-trace EYM amplitudes, we naturally induce nontrivial amplitude relations where all coefficients are polynomials of Mandelstam variables (in $d$ dimensions). In the framework of the CHY formula, such relations become nontrivial relations between Parke-Taylor factors. As a consequence, the gauge invariance induced relations hold not only for color-ordered Yang-Mills amplitudes but also for color-ordered amplitudes in other theories such as bi-scalar theory and NLSM. An interesting application of our gauge invariance induced relations is the proof of the equivalence between different approaches to NLSM amplitudes. Full color-dressed NLSM amplitudes can be spanned in terms of bi-scalar amplitudes via the dual Del Duca-Dixon-Maltoni (DDM) \cite{DelDuca:1999rs} decomposition (the dual DDM decomposition for Yang-Mills amplitudes is given in \cite{Kiermaier,BjerrumBohr:2010hn,Bern:2010yg,Mafra:2011kj,Du:2011js,Fu:2012uy,Cachazo:2013iea,Fu:2013qna, Du:2013sha, Fu:2014pya}, and for NLSM amplitudes in \cite{Chen:2013fya,Du:2016tbc,Carrasco:2016ldy,Du:2017kpo}), in which the coefficients are half-ladder BCJ numerators with the first and the last lines fixed. Although the three distinct approaches (Feynman rules, Abelian Z theory and the CHY formula) provide different types of half-ladder BCJ numerators, they must produce the same NLSM amplitudes through the dual DDM decomposition. This equivalence condition then requires nontrivial relations between color-ordered bi-scalar amplitudes. By using the gauge invariance induced relations and defining the partial momentum kernel, we prove that the three distinct constructions of BCJ numerators produce the same NLSM amplitudes precisely. In other words, the equivalence between the three different approaches to NLSM amplitudes is explicitly proven.
The relations between the main results of this paper are summarized as follows: \begin{eqnarray} \begin{array}{c} \text{gauge invariance}\\ +\\ \text{dimensional reduction}\\ \end{array} \Rightarrow\text{generalized BCJ \eqref{Eq:NewGaugeIDAmp1}}\Rightarrow\begin{array}{ccc} &\text{relation \eqref{Eq:GenEquiv1}}&\Rightarrow\text{equivalence between CMS $\&$ DT}\\ \nearrow & & \\ \searrow & & \\ &\text{relation \eqref{Eq:GenEquiv2}} &\Rightarrow\text{equivalence between DF $\&$ CMS}\\ \end{array}.\nonumber \end{eqnarray} The structure of this paper is given as follows. In section \ref{Sec:CHY}, we provide a review of the background knowledge, including the CHY formula, the recursive expansion and the graphic expansion of EYM amplitudes. In section \ref{BCJ-Gauge}, we induce generalized BCJ relations by combining gauge invariance conditions and dimensional reduction. The partial momentum kernel, which is important for the discussions in this paper, is also introduced in section \ref{BCJ-Gauge}. A review of the three distinct constructions of BCJ numerators in NLSM is provided in section \ref{Sec:NumeratorsNLSM1}. In section \ref{Sec:DTCMS}, we prove the equivalence between CMS type and DT type numerators by inducing identities expressed by the partial momentum kernel. The proof of the equivalence between DF type and CMS type numerators is given in section \ref{Sec:DFCMS}. We summarize this paper in section \ref{Sec:Conclusions}. Complicated graphs and proofs are collected in the appendices. \section{A review of CHY formula and the expansion of EYM amplitudes}\label{Sec:CHY} In this section, we review the CHY formula \cite{Cachazo:2013gna,Cachazo:2013hca,Cachazo:2013iea,Cachazo:2014nsa,Cachazo:2014xea} for various theories and the recursive/graphic expansion of EYM amplitudes, which will be used in the coming sections.
\subsection{CHY formula} The CHY formula expresses a tree level on-shell amplitude with $n$ massless particles by an integration over $n$ scattering variables $z_i$ \begin{eqnarray} A=\int d\Omega_{\text{CHY}} \mathcal{I}_L \mathcal{I}_R,~~\label{Eq:CHY} \end{eqnarray} where $d\Omega_{\text{CHY}}$ is the M\"{o}bius invariant measure, which contains the condition that the scattering variables satisfy the following scattering equations \begin{eqnarray} \sum\limits_{j\neq i}{k_i\cdot k_j\over z_i-z_j}=0,~~~(i=1,\dots,n).\label{Eq:SE} \end{eqnarray} Here $k_i$ denotes the momentum of particle $i$. The integrand $\mathcal{I}_L \mathcal{I}_R$ in \eqref{Eq:CHY} depends on the theory. An important feature is that the CHY formula is independent of dimensions. \subsubsection*{The CHY integrand for BS, YM, EYM and GR amplitudes} The CHY integrands for color-ordered bi-scalar (BS), Yang-Mills (YM), single-trace EYM amplitudes (EYM) as well as gravity (GR) amplitudes are given by\footnote{The overall signs follow from \cite{Teng:2017tbo}.} \begin{eqnarray} \mathcal{I}^{\text{BS}}_L(\pmb{\sigma}_{1,n})&=&(-1)^{{(n+1)(n+2)\over 2}}\text{PT}(\pmb{\sigma}_{1,n}),~~~~~~~~\mathcal{I}^{\text{BS}}_R(\pmb{\rho}_{1,n})=(-1)^{{(n+1)(n+2)\over 2}}\text{PT}(\pmb{\rho}_{1,n})~~\label{Eq:CHYBSintegrand} \\ \mathcal{I}^{\text{YM}}_L(\pmb{\sigma}_{1,n})&=&(-1)^{{(n+1)(n+2)\over 2}}\text{PT}(\pmb{\sigma}_{1,n}),~~~~~~~~~~~~~~~~~~~~~\mathcal{I}^{\text{YM}}_R\,={\mbox{Pf}}\,'[\Psi]~~\label{Eq:CHYYMintegrand}\\ \mathcal{I}^{\text{EYM}}_L(\pmb{\sigma}_{1,r})&=&(-1)^{{(n+1)(n+2)+s(s+1)\over 2}}\text{PT}(\pmb{\sigma}_{1,r}){\mbox{Pf}}[\Psi_{\mathsf{H}}],~~~~\mathcal{I}^{\text{EYM}}_R={\mbox{Pf}}\,'[\Psi]~~\label{Eq:CHYEYMintegrand}\\ \mathcal{I}^{\text{GR}}_L&=&{\mbox{Pf}}\,'[\Psi],~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mathcal{I}^{\text{GR}}_R\,={\mbox{Pf}}\,'[\Psi].~~\label{Eq:CHYGRintegrand} \end{eqnarray} In \eqref{Eq:CHYBSintegrand} and \eqref{Eq:CHYYMintegrand}, the boldface
Greek letters $\pmb{\sigma}_{1,n}$ and $\pmb{\rho}_{1,n}$ denote permutations of all $n$ external particles $1, 2,\dots, n$. The Parke-Taylor factor $\text{PT}(\pmb{\sigma}_{1,n})$ is defined by \begin{eqnarray} \text{PT}(\pmb{\sigma}_{1,n})={1\over z_{\sigma(1)\sigma(2)}z_{\sigma(2)\sigma(3)}\dots z_{\sigma(n)\sigma(1)}},~~~~z_{ij}\equiv z_i-z_j. \end{eqnarray} The reduced Pfaffian ${\mbox{Pf}}\,'[\Psi]$ in \eqref{Eq:CHYYMintegrand}, \eqref{Eq:CHYEYMintegrand} and \eqref{Eq:CHYGRintegrand} is given by \begin{eqnarray} {\mbox{Pf}}\,'\left[\Psi\right]\equiv{(-1)^{i+j}\over z_{ij}}{\mbox{Pf}}\left[\Psi^{i,j}_{i,j}\right],~~~~~~~~~~~~~\Psi=\left(\begin{array}{cc} {A} & -{C}^{T} \\ {C} &{B} \end{array}\right)\,, \end{eqnarray} where $\Psi^{i,j}_{i,j}$ means that the $i$, $j$-th ($1\leq i,j\leq n$) rows and columns are removed. Building blocks of the $2n\times 2n$-skew matrix $\Psi$ are \begin{equation} {A}_{ab}=\Biggl\{\begin{array}{cc} {k_a\cdot k_b\over z_{ab}} &~~a\neq b \\ 0 &~~a=b \end{array}~~~~{B}_{ab}=\Biggl\{\begin{array}{cc} {\epsilon_a\cdot \epsilon_b\over z_{ab}} &~~a\neq b \\ 0 &~~a=b \end{array} ~~~~{C}_{ab}=\Biggl\{\begin{array}{cc} {\epsilon_a\cdot k_b\over z_{ab}} &~~a\neq b \\ -\sum\limits_{c\neq a}{\epsilon_a\cdot k_c\over z_{ac}} &~~a=b \end{array},\label{Eq:CHYBlocks} \end{equation} in which $k_a$ and $\epsilon_a$ are momentum and polarization of the particle $a$. In the CHY expression of single-trace EYM amplitude \eqref{Eq:CHYEYMintegrand}, $\text{PT}(\pmb{\sigma}_{1,r})$ denotes the Parke-Taylor factor for $r$ gluons with the order $\sigma(1),\sigma(2),\dots,\sigma(r)$. The matrix $\Psi_{\mathsf{H}}$ is the one obtained by removing those rows and columns with respect to gluons in $\Psi$. \subsubsection*{The CHY integrand for NLSM amplitudes} The CHY integrands for color-ordered NLSM amplitudes are obtained by dimensional reduction strategy \cite{Cachazo:2014xea}. 
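As an aside, the Parke-Taylor factors just defined satisfy cyclicity, reflection and $U(1)$-decoupling (KK-type) identities purely algebraically, i.e.\ pointwise in the scattering variables $z_i$, which underlies the amplitude relations discussed in the introduction. A quick numerical sketch (ours, with random punctures):

```python
import numpy as np

rng = np.random.default_rng(0)

def PT(order, z):
    """Parke-Taylor factor 1/(z_{s1 s2} z_{s2 s3} ... z_{sn s1})."""
    prod = 1.0 + 0.0j
    n = len(order)
    for a in range(n):
        prod *= z[order[a]] - z[order[(a + 1) % n]]
    return 1.0 / prod

z = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# cyclicity, and reflection (the latter picks up (-1)^n, trivial for n = 4)
c1 = PT([0, 1, 2, 3], z) - PT([1, 2, 3, 0], z)
c2 = PT([0, 1, 2, 3], z) - PT([3, 2, 1, 0], z)

# U(1) decoupling: inserting leg 2 (index 1) in all positions of (1,3,4)
u1 = PT([0, 1, 2, 3], z) + PT([0, 2, 1, 3], z) + PT([0, 2, 3, 1], z)
print(abs(c1), abs(c2), abs(u1))
```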
In particular, $\mathcal{I}_L^{\text{NLSM}}$ has the same expression as $\mathcal{I}_L^{\text{YM}}$, while $\mathcal{I}_R^{\text{NLSM}}$ is obtained by extending $\mathcal{I}_R^{\text{YM}}$ to $(d+d+d)$ dimensions and defining momenta and polarizations as follows: \begin{align} \mathcal{K}_{a}=(k_a;0;0)& &\mathcal{E}_{a}=\left\{\begin{array}{>{\displaystyle}l @{\hspace{1.5em}} >{\displaystyle}l} (0;0;\frac{\epsilon_a}{\sqrt{k_1\cdot k_n}}) & a=1\text{ or }n \\ (0;\epsilon_a;0) & a=2\ldots n-1 \end{array}\right.\,.\label{Eq:CHYreduction} \end{align} The matrix $\Psi^{(d+d+d)}$ is thus written as \begin{equation} \Psi^{(d+d+d)}=\left(\begin{array}{cc} \mathbb{A} & -\mathbb{C}^{T} \\ \mathbb{C}& \mathbb{B} \end{array}\right)\,, \end{equation} where $\mathbb{A}$, $\mathbb{B}$, $\mathbb{C}$ are defined by replacing the polarizations and momenta in \eqref{Eq:CHYBlocks} by the $(d+d+d)$-dimensional ones $\mathcal{E}$ and $\mathcal{K}$ correspondingly. With the explicit components given in \eqref{Eq:CHYreduction}, we immediately arrive at $\mathbb{C}=0$, $\mathbb{A}=A$ and $\mathbb{B}=B$. As a consequence, the reduced Pfaffian ${\mbox{Pf}}\,'\left[\Psi^{(d+d+d)}\right]$ factorizes as \begin{equation} \Pfp\big[\Psi^{(d+d+d)}\big]=\Pfp(A){\mbox{Pf}}({B})=\frac{(-1)^{n+1}}{z_{1n}}\frac{\epsilon_1\cdot \epsilon_n }{{k_1\cdot k_n}}\,\Pfp(A)\,{\mbox{Pf}}(B_{1,n}^{1,n})\,. \end{equation} By a further replacement $\epsilon_a\rightarrow k_a$, we reduce $\Pfp\big[\Psi^{(d+d+d)}\big]$ to the final expression of the NLSM integrand $\mathcal{I}_{R}^{\text{NLSM}}$ \begin{equation} \label{eq:DR} \left.\Pfp\big[\Psi^{(d+d+d)}\big]\right|_{\epsilon_a\rightarrow k_a}=\left[\Pfp(A)\right]^{2}=\mathcal{I}_R^{\text{NLSM}}\,.
\end{equation} \textit{To sum up, NLSM amplitudes are obtained by performing the following replacements on Yang-Mills amplitudes} \begin{align} \label{eq:NLSMreplacement} &\epsilon_a\cdot k_b\;\rightarrow\;0 \nonumber\\ &\epsilon_a\cdot\epsilon_b\;\rightarrow\;\left\{\begin{array}{>{\displaystyle}l @{\hspace{1.5em}} >{\displaystyle}l} k_a\cdot k_b & \{a,b\}\subset\{2\ldots n-1\}\\ 1 & \{a,b\}=\{1,n\}\\ 0 & a\in\{1,n\}\text{ and }b\in\{2\ldots n-1\}\,\text{, or vice versa} \end{array}\right. \end{align} \subsection{Expansions of EYM amplitudes}\label{Sec:Expansion} Tree-level color-ordered EYM amplitudes can be expressed recursively in terms of ones with fewer gravitons and/or fewer traces. One can repeat this expansion until all amplitudes become pure Yang-Mills ones; the expansion coefficients are then constructed by graphic rules. Now we review the expansions of single-trace EYM amplitudes. The expansions of multi-trace amplitudes can be found in \cite{Du:2017gnh}. \subsubsection*{The recursive expansion of single-trace EYM amplitudes} The single-trace EYM amplitude $A(1,2,\dots,r\Vert\,\mathsf{H})$ with $r$ gluons and $s$ gravitons was shown to satisfy the following recursive expansion \cite{Fu:2017uzt} \begin{eqnarray} A(1,2,\dots,r\Vert\,\mathsf{H})&=&\sum\limits_{\pmb{h}\vert\,\W{\mathsf{h}}}C_{h_i}(\pmb{h}) A(1,\{2,\dots,r-1\}\shuffle\{\pmb{h},h_i\},r\Vert\,\W{\mathsf{h}}).\label{Eq:RecursiveExpansion} \end{eqnarray} In the above equation, we choose a fiducial graviton $h_i\in\mathsf{H}$. The summation notation stands for the sum over all possible splittings of the graviton set $\mathsf{H}\setminus \{h_i\}\to \pmb{h}\vert\,\W{\mathsf{h}}$ and the sum over all permutations of elements in $\pmb{h}$ for a given splitting.
For example, if we have three gravitons $\mathsf{H}=\{h_1,h_2,h_3\}$ and choose $h_3$ as the fiducial graviton, then $\pmb{h}\vert\,\W{\mathsf{h}}$ implies the following five terms \begin{eqnarray} ~~\mathsf{H}\setminus \{h_3\}&\to& \emptyset\,|\,\{h_1,h_2\};\nonumber \\ ~~\mathsf{H}\setminus \{h_3\}&\to& \{h_1\}\,|\,\{h_2\};~~\mathsf{H}\setminus \{h_3\}\to \{h_2\}\,|\,\{h_1\};\nonumber \\ ~~\mathsf{H}\setminus \{h_3\}&\to& \{h_1,h_2\}\,|\,\emptyset;~~\mathsf{H}\setminus \{h_3\}\to \{h_2,h_1\}\,|\,\emptyset. \label{Eq:Terms3GRExample} \end{eqnarray} Assuming that the permutation of the elements of a given $\pmb{h}$ is $\{i_1,i_2,\dots,i_j\}$, the coefficient $C_{h_i}(\pmb{h})$ is defined by \begin{eqnarray} C_{h_i}(\pmb{h})\equiv \epsilon_{h_i}\cdot F_{i_j}\cdot F_{i_{j-1}}\cdot\dots \cdot F_{i_1}\cdot Y_{i_1},\label{Eq:RecExpCoefficient} \end{eqnarray} where $F_a^{\mu\nu}$ is the linearized field strength of particle $a$ \begin{eqnarray} F_a^{\mu\nu}\equiv k_a^{\mu}\epsilon_a^{\nu}-\epsilon_a^{\mu} k_a^{\nu} \end{eqnarray} and $Y_{i_1}$ denotes the sum of the momenta of all gluons in the original gluon set that appear on the left-hand side of $i_1$. An explicit example is given by the expansion of the single-trace EYM amplitude $A(1,2,\dots,r\Vert\,h_1,h_2,h_3)$ with $r$ gluons and three gravitons.
By choosing $h_3$ as the fiducial graviton and summing over the five terms in \eqref{Eq:Terms3GRExample}, we finally express the single-trace EYM amplitude with three gravitons in terms of amplitudes with two, one, and no gravitons: \begin{eqnarray} A(1,2,\dots,r\Vert\,h_1,h_2,h_3)&=&(\epsilon_{h_3}\cdot Y_{h_3})A(1,\{2,\dots,r-1\}\shuffle\{h_3\},r\Vert\,h_1,h_2)\nonumber \\ &+&(\epsilon_{h_3}\cdot F_{h_1}\cdot Y_{h_1})A(1,\{2,\dots,r-1\}\shuffle\{h_1,h_3\},r\Vert\,h_2)\nonumber \\ &+&(\epsilon_{h_3}\cdot F_{h_2}\cdot Y_{h_2})A(1,\{2,\dots,r-1\}\shuffle\{h_2,h_3\},r\Vert\,h_1)\nonumber \\ &+&(\epsilon_{h_3}\cdot F_{h_1}\cdot F_{h_2}\cdot Y_{h_2})A(1,\{2,\dots,r-1\}\shuffle\{h_2,h_1,h_3\},r)\nonumber \\ &+&(\epsilon_{h_3}\cdot F_{h_2}\cdot F_{h_1}\cdot Y_{h_1})A(1,\{2,\dots,r-1\}\shuffle\{h_1,h_2,h_3\},r). \end{eqnarray} \subsubsection*{Graphic rule for the pure Yang-Mills expansion of single-trace EYM amplitudes} Applying the recursive expansion \eqref{Eq:RecursiveExpansion} repeatedly until there is no graviton remaining in the graviton set, we finally expand the single-trace EYM amplitude in terms of color-ordered Yang-Mills amplitudes \begin{eqnarray} A(1,2,\dots,r\Vert{\mathsf{H}})&=&\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}} \mathcal{C}(1,\pmb{\sigma},r)A(1,\pmb{\sigma},r).\label{Eq:PureYMExpansion} \end{eqnarray} Here we sum over all possible permutations obtained by merging the original gluon set $\{2,\dots,r-1\}$ and the set of gluons (`half gravitons') which come from the graviton set $\mathsf{H}$. The relative order of the gluons is preserved, while `perms' under the summation notation means that all possible relative orders of the elements in $\mathsf{H}$ are considered.
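As a cross-check of the combinatorial sums above, the splittings $\pmb{h}\,\vert\,\W{\mathsf{h}}$ of \eqref{Eq:RecursiveExpansion} can be enumerated mechanically. The following Python sketch is our own illustration (the function name \texttt{splittings} is not from the text); it reproduces the five terms of \eqref{Eq:Terms3GRExample}:

```python
from itertools import combinations, permutations

def splittings(gravitons, fiducial):
    """Enumerate the ordered splittings h | h~ of gravitons \\ {fiducial}:
    every subset, in every internal order, paired with its remainder."""
    rest = [g for g in gravitons if g != fiducial]
    for size in range(len(rest) + 1):
        for subset in combinations(rest, size):
            remainder = frozenset(rest) - frozenset(subset)
            for ordered in permutations(subset):
                yield list(ordered), remainder

terms = list(splittings(["h1", "h2", "h3"], "h3"))
# reproduces the five splittings of Eq. (Terms3GRExample):
# [] | {h1,h2}; [h1] | {h2}; [h2] | {h1}; [h1,h2] | {}; [h2,h1] | {}
```

The ordered part $\pmb{h}$ carries an internal order (hence $\{h_1,h_2\}$ and $\{h_2,h_1\}$ count separately), while the remainder $\W{\mathsf{h}}$ is an unordered set.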
Given order $\pmb{\sigma}$, the full coefficient $\mathcal{C}(1,\pmb{\sigma},r)$ can be determined by the following graphic rule\footnote{The interpretation of this rule is different from that given in \cite{Du:2017kpo}, for the convenience of discussions in the coming sections.}. \\ ~~~~~~~~~~~~~~~~~~~~~~\textbf{\emph{Graphic rule for the expansion of EYM amplitudes:}} \begin{itemize} \item [(1)] Define a reference order $\pmb{\rho}$ of gravitons, so that all gravitons are arranged into an ordered set \begin{eqnarray} \mathsf{R}=\{h_{\rho(1)},h_{\rho(2)},\dots,h_{\rho(s)}\}.~~\label{Eq:ReferenceOrder} \end{eqnarray} \item [(2)] Pick the last graviton $h_{\rho(s)}$ in the ordered set $\mathsf{R}$, an arbitrary gluon $l\in \{1,2,\dots,r-1\}$ (noting that the gluon $r$ is not considered here), as well as gravitons $h_{i_1},h_{i_2}, \dots,h_{i_j}\in \mathsf{H}$ s.t. their relative order in $\pmb{\sigma}$ satisfies\footnote{In this paper, the element in the $i$-th position of permutation $\pmb{\sigma}$ is denoted by $\sigma(i)$. If $\sigma(i)=a$, the position of $a$ in this permutation is denoted by $i=\sigma^{-1}(a)$. } $\sigma^{-1}(l)< \sigma^{-1}(h_{i_1})< \sigma^{-1}(h_{i_2})<\dots< \sigma^{-1}(h_{i_j})< \sigma^{-1}(h_{\rho(s)})$. Considering each particle in the set $\{l,h_{i_1},h_{i_2}, \dots,h_{i_j},h_{\rho(s)}\}$ as a node, we define a \emph{chain} starting from the node $h_{\rho(s)}$ and ending at the node $l$. The graviton $h_{\rho(s)}$ here is referred to as the \emph{starting point of this chain}, while the gluon $l$ is referred to as a \emph{root}. All other gravitons on this chain are referred to as \emph{internal nodes of this chain}. The factor associated to this chain is \begin{eqnarray} \epsilon_{h_{\rho(s)}}\cdot F_{h_{i_j}}\cdot F_{h_{i_{j-1}}}\cdot \dots \cdot F_{h_{i_1}}\cdot k_l.
\end{eqnarray} Remove $h_{i_1}$, $h_{i_2}$, ..., $h_{i_j}$, $h_{\rho(s)}$ from the ordered set $\mathsf{R}$ and redefine $\mathsf{R}$ \begin{eqnarray} \mathsf{R}\to\mathsf{R}\,'=\mathsf{R}\setminus \{h_{i_1},h_{i_2}, ...,h_{i_j},h_{\rho(s)}\}. \end{eqnarray} \item [(3)] Picking $l'\in \{1,2,\dots,r-1\}\cup\{h_{i_1},h_{i_2}, ...,h_{i_j},h_{\rho(s)}\}$, the last element $h_{\rho'(s')}$ in $\mathsf{R}\,'$, as well as gravitons $h_{i'_1}$, $h_{i'_2}$, ..., $h_{i'_{j'}}$ {in $\mathsf{R}\,'$} s.t. $\sigma^{-1}(l')<\sigma^{-1}(h_{i_1'})<\sigma^{-1}(h_{i_2'})<\dots<\sigma^{-1}(h_{i_{j'}'})<\sigma^{-1}(h_{\rho'(s')})$, we define a chain $\{l',h_{i_1'},h_{i_2'},\dots,h_{i_{j'}'},h_{\rho'(s')}\}$ starting from $h_{\rho'(s')}$ and ending at $l'$. This chain is associated with a factor % \begin{eqnarray} \epsilon_{h_{\rho'(s')}}\cdot F_{h_{i'_{j'}}}\cdot F_{h_{i'_{{j'-1}}}}\cdot \dots \cdot F_{h_{i_{1}'}}\cdot k_{l'}. \end{eqnarray} % Remove $h_{i_1'}$, $h_{i_2'}$, ..., $h_{i_{j'}'}$, $h_{\rho'(s')}$ from $\mathsf{R}\,'$ and redefine $\mathsf{R}\to\mathsf{R}\,''=\mathsf{R}'\setminus\{h_{i_1'},h_{i_2'},\dots, h_{i_{j'}'},h_{\rho'(s')}\}$. \item [(4)] Repeating the above steps until the ordered set $\mathsf{R}$ becomes empty, we get a graph (a `forest') with gluons as roots of trees\footnote{Note that a starting point of a chain is not necessarily a leaf of a tree.}. For a given graph $\mathcal{F}$, the product of the factors associated with all chains produces a term $\mathcal{C}^{[\mathcal{F}]}(1,\pmb{\sigma},r)$ in the coefficient $\mathcal{C}(1,\pmb{\sigma},r)$ in \eqref{Eq:PureYMExpansion}.
Thus the final expression of $\mathcal{C}(1,\pmb{\sigma},r)$ is given by summing over all possible graphs defined above % \begin{eqnarray} \mathcal{C}(1,\pmb{\sigma},r)=\sum\limits_{\mathcal{F}\in\{\text{Graphs}\}}\mathcal{C}^{[\mathcal{F}]}(1,\pmb{\sigma},r).\label{Eq:Coefficients} \end{eqnarray} % \end{itemize} \subsubsection*{The expansions of Pfaffians in the CHY formula of single-trace EYM amplitudes} It is worth closing this section by translating the expansions \eqref{Eq:RecursiveExpansion}, \eqref{Eq:PureYMExpansion} of EYM amplitudes into the language of the CHY formulation (see \cite{Teng:2017tbo}). In the CHY formulation, the recursive expansion \eqref{Eq:RecursiveExpansion} translates into \begin{eqnarray}(-1)^{s(s+1)\over 2}\text{PT}(1,2,\dots,r){\mbox{Pf}}\left[\Psi_{\mathsf{H}}\right] &=&\sum\limits_{\pmb{h}\vert\,\W{\mathsf{h}}}(-1)^{|\W{\mathsf{h}}|(|\W{\mathsf{h}}|+1)\over 2}C_{h_1}(\pmb{h})\text{PT}(1,\{2,\dots,r-1\}\shuffle\{\pmb{h},h_1\},r){\mbox{Pf}}\left[\Psi_{\W{\mathsf{h}}}\right] ,\label{Eq:RecPfaffian}\nonumber \\ \end{eqnarray} where $r$ and $s$ are the numbers of gluons and gravitons respectively, and $|\W{\mathsf{h}}|$ denotes the number of elements in the set $\W{\mathsf{h}}$. The pure Yang-Mills expansion \eqref{Eq:PureYMExpansion} implies \begin{eqnarray} (-1)^{s(s+1)\over 2}\text{PT}(1,2,\dots,r){\mbox{Pf}}\left[\Psi_{\mathsf{H}}\right]=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}} \mathcal{C}(1,\pmb{\sigma},r) \text{PT}(1,\pmb{\sigma},r).\label{Eq:GraphicPfaffian} \end{eqnarray} The expansion coefficients $C_{h_1}(\pmb{h})$ and $\mathcal{C}(1,\pmb{\sigma},r)$ in \eqref{Eq:RecPfaffian} and \eqref{Eq:GraphicPfaffian} are given by \eqref{Eq:RecExpCoefficient} and \eqref{Eq:Coefficients} respectively. We emphasize that the relations \eqref{Eq:RecPfaffian} and \eqref{Eq:GraphicPfaffian} hold in arbitrary dimensions.
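Before moving on, we note that the chain construction of the graphic rule above admits a straightforward recursive enumeration. The sketch below is our own minimal Python illustration (the names and the tuple encoding of chains are assumptions, not notation from the text): each chain is recorded as $(h_{\rho(s)}, h_{i_j},\dots,h_{i_1}, l)$, mirroring the factor $\epsilon_{h_{\rho(s)}}\cdot F_{h_{i_j}}\cdots F_{h_{i_1}}\cdot k_l$.

```python
from itertools import combinations

def enumerate_graphs(order, gluon_roots, R):
    """Enumerate all graphs of the graphic rule for one permutation.

    order       : full color order, e.g. [1, 2, "h1", "h2", 3]
    gluon_roots : gluons allowed as roots, i.e. {1, ..., r-1}
    R           : reference order of the gravitons
    """
    pos = {p: i for i, p in enumerate(order)}

    def build(R, roots):
        if not R:
            yield []
            return
        start = R[-1]  # last graviton of the current ordered set R
        for root in roots:
            if pos[root] >= pos[start]:
                continue  # the root must sit to the left of `start`
            # candidate internal nodes: gravitons of R between root and start
            between = sorted((h for h in R[:-1]
                              if pos[root] < pos[h] < pos[start]),
                             key=pos.get)
            for k in range(len(between) + 1):
                for internal in combinations(between, k):
                    chain = (start,) + tuple(reversed(internal)) + (root,)
                    used = set(internal) | {start}
                    rest_R = [h for h in R if h not in used]
                    # removed gravitons become admissible roots (step (3))
                    for rest in build(rest_R, roots + list(used)):
                        yield [chain] + rest

    yield from build(list(R), list(gluon_roots))

# two gravitons, gluons 1, 2 (and the fixed gluon 3), both shuffle orders
g1 = list(enumerate_graphs([1, 2, "h1", "h2", 3], [1, 2], ["h1", "h2"]))
g2 = list(enumerate_graphs([1, 2, "h2", "h1", 3], [1, 2], ["h1", "h2"]))
```

For $\mathsf{H}=\{h_1,h_2\}$, the order with $h_1$ left of $h_2$ contains the length-2 chain $(h_2,h_1,l)$, while the order with $h_2$ left of $h_1$ contains the graph in which the chain of $h_1$ roots at $h_2$, matching the $s_{h_1h_2}$ term of \eqref{Eq:GaugeInducedExample1} below.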
\section{Gauge invariance induced relations} \label{BCJ-Gauge} In this section, we induce nontrivial generalized BCJ relations for color-ordered Yang-Mills amplitudes (as well as bi-scalar amplitudes and color-ordered NLSM amplitudes) by combining gauge invariance conditions with CHY-inspired dimensional reductions. The coefficients of the amplitudes in the gauge invariance induced relations are polynomials in Mandelstam variables. \subsection{Inducing generalized BCJ relations by gauge invariance and dimensional reduction} In the pure Yang-Mills expansion \eqref{Eq:PureYMExpansion} of the EYM amplitude $A(1,2,\dots,r\Vert\,\mathsf{H})$, each term $\mathcal{C}^{[\mathcal{F}]}(1,\pmb{\sigma},r)$ (see \eqref{Eq:Coefficients}) of the expansion coefficient $\mathcal{C}(1,\pmb{\sigma},r)$ is a product of Lorentz invariants $\epsilon\cdot k$, $\epsilon\cdot\epsilon$ and $k\cdot k$, constructed by the graphic rule of section \ref{Sec:Expansion}. Gauge invariance states that the amplitude $A(1,2,\dots,r\Vert\,\mathsf{H})$ has to vanish under the replacement $\epsilon_{h}\to k_h$ for any given graviton $h\in\mathsf{H}$. Hence, a relation for pure Yang-Mills amplitudes \cite{Fu:2017uzt} follows \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}} \mathcal{C}(1,\pmb{\sigma},r)\Big|_{\epsilon_h\to k_h}A(1,\pmb{\sigma},r). \end{eqnarray} % For a given graph in the expansion of $\mathcal{C}(1,\pmb{\sigma},r)$, the graviton $h$ can be either an \textit{internal node} or a \textit{starting point of a chain}. In the former case, the gauge invariance condition is naturally encoded by $F_{h}^{\mu\nu}|_{\epsilon_h\to k_h}=0$, so this contribution vanishes. The only nontrivial contributions come from those graphs in which the graviton $h$ serves as the starting point of a chain.
The gauge invariance condition is then reduced to \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}} \sum\limits_{\mathcal{F}\in{\mathcal{G}^{\pmb{\sigma}}_{\mathsf{H}}[h]} }\mathcal{C}^{[\mathcal{F}]}(1,\pmb{\sigma},r)\Big|_{\scriptsize{\epsilon_{h}\to k_{h}}}A(1,\pmb{\sigma},r),\label{Eq:NewGaugeID0} \end{eqnarray} % where $\mathcal{G}^{\pmb{\sigma}}_{\mathsf{H}}[h]$ denotes the set of graphs for the permutation $\pmb{\sigma}$ in which $h$ serves as the starting point of a chain. As shown by examples in \cite{Fu:2017uzt,Du:2017gnh} ({similar discussions on the gauge invariance relations can be found in \cite{Barreiro:2013dpa,Stieberger:2016lng,Nandan:2016pya,Boels:2016xhc,Chiodaroli:2017ngp,Boels:2017gyc}}), \eqref{Eq:NewGaugeID0} is generated by known BCJ relations, and thus it is not a new relation beyond the known BCJ relations. Nevertheless, a systematic study of the connection between \eqref{Eq:NewGaugeID0} and the standard KK and BCJ relations still deserves future work. The coefficients in the relation \eqref{Eq:NewGaugeID0} still contain polarizations. To induce a relation whose coefficients are functions of Mandelstam variables $s_{ij}=k_i\cdot k_j$ only, we should \textit{`turn' all polarizations in the expansion coefficients into momenta.} One reasonable approach to realizing this is to combine gauge invariance conditions with the dimensional reduction inspired by the CHY formulation. Our discussion is based on the following crucial observations: \begin{itemize} \item[(1)] \emph{Gauge invariance conditions for more than one graviton can be imposed simultaneously.} This can be understood from two different aspects. (i) Since the pure Yang-Mills expansion \eqref{Eq:PureYMExpansion} is obtained by applying the recursive expansion \eqref{Eq:RecursiveExpansion} repeatedly, we can impose the gauge invariance condition on \eqref{Eq:RecursiveExpansion} instead.
If we replace $\epsilon_{h_a}$ by $k_{h_a}$ for more than one graviton $h_a\in \mathsf{A}\subseteq\mathsf{H}$ ($\mathsf{A}$ consists of at least two gravitons) on the RHS of \eqref{Eq:RecursiveExpansion}, at most one of these gravitons can play the role of the fiducial one. The polarizations of the remaining gravitons belonging to $\mathsf{A}$ are contained in either $F^{\mu\nu}$ or an EYM amplitude with fewer gravitons. When replacing $\epsilon_{h_a}$ by $k_{h_a}$ for all $h_a\in \mathsf{A}$ on the RHS of \eqref{Eq:RecursiveExpansion}, every term has to vanish due to the antisymmetry of $F^{\mu\nu}$ and/or the gauge invariance condition for EYM amplitudes with fewer gravitons (as an inductive assumption). (ii) In the language of the CHY formula \eqref{Eq:CHY}, polarizations are packaged into (reduced) Pfaffians. When the replacement $\epsilon_{h}\to k_{h}$ for a given graviton $h\in \mathsf{H}$ is imposed, the $\Psi_{\mathsf{H}}$ matrix becomes degenerate because two rows/columns coincide with each other (note that the diagonal entry $C_{h_ah_a}$ of the $C$ matrix vanishes due to the scattering equations \eqref{Eq:SE}), as shown by the left matrix in the following \begin{eqnarray} \left( \begin{array}{ccc|ccc} \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \cdots & {k_{h_a}\cdot k_{h_b}\over z_{h_ah_b}} & \cdots & \cdots & {k_{h_a}\cdot \epsilon_{h_b}\over z_{h_ah_b}} & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \hline \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \cdots & {k_{h_a}\cdot k_{h_b}\over z_{h_ah_b}} & \cdots & \cdots & {k_{h_a}\cdot \epsilon_{h_b}\over z_{h_ah_b}} & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \end{array} \right)~~~~~\to~~~~~\left( \begin{array}{ccc|ccc} \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \cdots & {k_{h_a}\cdot k_{h_b}\over z_{h_ah_b}} & \cdots & \cdots & {k_{h_a}\cdot k_{h_b}\over z_{h_ah_b}} & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ \hline \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \cdots & {k_{h_a}\cdot k_{h_b}\over z_{h_ah_b}} & \cdots & \cdots & {k_{h_a}\cdot k_{h_b}\over z_{h_ah_b}} & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots\\ \end{array} \right).\label{Eq:GaugeInvPfaffian} \end{eqnarray} If we take gauge invariance conditions for more than one graviton, e.g.\ $h_a$ and $h_b$, the matrix $\Psi$ is also degenerate for the same reason (see the right matrix in \eqref{Eq:GaugeInvPfaffian}), and thus the Pfaffian has to vanish. \item [(2)] \emph{The gauge invariance conditions are independent of dimensions.} This is because the statements (i) and (ii) in (1) hold in arbitrary dimensions. \end{itemize} Given (1) and (2), we can conveniently carry out our discussion in the framework of the CHY formula. The recursive and graphic expansions of amplitudes translate into the corresponding relations \eqref{Eq:RecPfaffian} and \eqref{Eq:GraphicPfaffian} for Pfaffians. Since the CHY formula does not depend on the dimension of spacetime, we can extend the Pfaffian ${\mbox{Pf}}\left[\Psi_{\mathsf{H}}\right]$ in the graphic expansion \eqref{Eq:GraphicPfaffian} to $(d+d)$-dimensions by defining $(d+d)$-dimensional polarizations $\mathcal{E}_{h_a}$ (all $h_a\in \mathsf{H}$) and $(d+d)$-dimensional momenta $\mathcal{K}_i$ for all external particles, so that \begin{eqnarray} \mathcal{E}_{h_a}\cdot \mathcal{K}_{h_a}=0,~~~(\text{for all~} h_a\in \mathsf{H});~~~\mathcal{K}_i\cdot \mathcal{K}_i=0~~~(\text{for all particles $i$});~~~\sum\limits_{i=1}^{r+s}\mathcal{K}_i=0\label{Eq:CondtionsdPlusd} \end{eqnarray} are satisfied. According to our observations (1) and (2), the Pfaffian ${\mbox{Pf}}\left[\Psi_{\mathsf{H}}\right]$ in $(d+d)$ dimensions on the LHS of \eqref{Eq:GraphicPfaffian} must vanish under the replacement $\mathcal{E}_{h_a}\to \mathcal{K}_{h_a}$ for all $h_a\in\mathsf{A}$, where $\mathsf{A}$ is a nonempty subset of $\mathsf{H}$.
Consequently, the RHS of the graphic expansion \eqref{Eq:GraphicPfaffian} in $d+d$ dimensions has to vanish when $\mathcal{E}_{h_a}$ is replaced by $\mathcal{K}_{h_a}$ for all $h_a\in\mathsf{A}\subseteq \mathsf{H}$: \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}} \mathcal{C}(1,\pmb{\sigma},r)\Big|_{\substack{\mathcal{E}_{h_a}\to \mathcal{K}_{h_a}\\\text{for all~}h_a\in\mathsf{A}}}\text{PT}(1,\pmb{\sigma},r).\label{Eq:GaugeInvdPlusd} \end{eqnarray} Once the coefficients $\mathcal{C}(1,\pmb{\sigma},r)$ in the above equation are expressed by graphs (see eq. \eqref{Eq:Coefficients}) and the gauge invariance conditions are imposed, a chain in which any $h_a\in\mathsf{A}\subseteq \mathsf{H}$ serves as an internal node vanishes due to the antisymmetry of the $(d+d)$-dimensional field strength tensor $\mathbf{F}_{h_{a}}^{UV}\equiv \mathcal{K}_{h_a}^{U}\mathcal{E}_{h_a}^V-\mathcal{K}_{h_a}^V\mathcal{E}_{h_a}^U$. Thus only those graphs in which all $h_a\in \mathsf{A}$ serve as starting points of chains survive. The relation \eqref{Eq:GaugeInvdPlusd} then becomes \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}}\biggl[\sum\limits_{\mathcal{F}\in{\mathcal{G}^{\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]} }\mathcal{C}^{[\mathcal{F}]}(1,\pmb{\sigma},r)\Big|_{\substack{\scriptsize{\mathcal{E}_{h_a}\to \mathcal{K}_{h_a}}\\\text{for all~}h_a\in\mathsf{A}}}\biggr]\text{PT}(1,\pmb{\sigma},r).\label{Eq:NewGaugeIDPf} \end{eqnarray} % Here, $\mathcal{G}^{\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]$ denotes the set of graphs corresponding to the permutation $\pmb{\sigma}$ in which all elements of the nonempty subset $\mathsf{A}$ serve as starting points of chains (note that other elements of $\mathsf{H}$ may also be starting points of chains).
The equation \eqref{Eq:NewGaugeIDPf} does not rely on the details of the $(d+d)$-dimensional polarizations $\mathcal{E}$ and momenta $\mathcal{K}$; only the conditions \eqref{Eq:CondtionsdPlusd} are required. Thus, we can assign the polarizations and momenta in $(d+d)$ dimensions in any way s.t. \eqref{Eq:CondtionsdPlusd} is satisfied. A reasonable definition, inspired by the dimensional reduction strategy (see \eqref{Eq:CHYreduction}) in the CHY formula, is \begin{eqnarray} \mathcal{K}_{i}=(k_{i};0),~~~(\text{for all external particles});&~~~~&\mathcal{E}_{h_a}=(0;k_{h_a}),~~~~{h_a\in \mathsf{H}}\label{Eq:DimensionalReduction} \end{eqnarray} which manifestly satisfies \eqref{Eq:CondtionsdPlusd}. With this assignment, the coefficients in the gauge invariance condition \eqref{Eq:NewGaugeIDPf} become polynomial functions of Mandelstam variables. When the coefficients $ \mathcal{C}(1,\pmb{\sigma},r)$ in $(d+d)$ dimensions are expressed by the graphic rules and the $\mathcal{E}_{h_a}$ in $\mathcal{C}(1,\pmb{\sigma},r)$ are replaced by $\mathcal{K}_{h_a}$ ($h_a\in\mathsf{A}\subseteq \mathsf{H}$), the chains in the graphs fall into two types: \begin{itemize} \item [(i)] \emph{Type-1}: Chains starting with a $(d+d)$-dimensional polarization $\mathcal{E}_{a}$ ($a\in \mathsf{H}\setminus \mathsf{A}$) have the general form \begin{eqnarray} \mathcal{E}_{a}\cdot \mathbf{F}_{h_{i_j}}\cdot \mathbf{F}_{h_{i_{j-1}}}\dots \mathbf{F}_{h_{i_1}}\cdot \mathcal{K}_b. \end{eqnarray} A chain of this type has to vanish if its length is odd, because one cannot avoid a factor of the form $\mathcal{E}_{i}\cdot \mathcal{K}_j$, which is zero under the definition \eqref{Eq:DimensionalReduction}. Thus the length of a nonvanishing type-1 chain must be even.
When plugging the components \eqref{Eq:DimensionalReduction} into an even-length chain of type-1, we get a chain expressed in terms of $d$-dimensional Mandelstam variables % \begin{eqnarray} s_{ah_{i_j}}s_{h_{i_{j}}h_{i_{j-1}}}\dots s_{h_{i_1}b} \end{eqnarray} % associated with a factor $(-1)^{j+1\over 2}$, where $j$ is odd. Since the length $L$ of this chain is $j+1$, the prefactor can be written as $(-1)^{L\over 2}$. \item [(ii)] \emph{Type-2}: Chains starting with a $(d+d)$-dimensional momentum $\mathcal{K}_a$ have the general form % \begin{eqnarray} \mathcal{K}_{a}\cdot \mathbf{F}_{h_{i_j}}\cdot \mathbf{F}_{h_{i_{j-1}}}\dots \mathbf{F}_{h_{i_1}}\cdot \mathcal{K}_b. \end{eqnarray} A chain of this type vanishes if its length is even, for an even-length type-2 chain must contain a vanishing factor of the form $\mathcal{E}_{i}\cdot \mathcal{K}_j$. Thus the length of a nonvanishing type-2 chain is odd. Inserting the choice of $(d+d)$-dimensional polarizations and momenta \eqref{Eq:DimensionalReduction} into an odd-length chain of this type, we arrive at % \begin{eqnarray} s_{ah_{i_j}}s_{h_{i_j}h_{i_{j-1}}}\dots s_{h_{i_1}b} \end{eqnarray} % associated with a factor $(-1)^{j\over 2}$, where $j$ is even. The prefactor for this chain can further be expressed in terms of the length $L$ of the chain as $(-1)^{L-1\over 2}$.
\end{itemize} Collecting all nonzero chains together, we induce the following relation for $\text{PT}$ factors in $d$ dimensions from the $(d+d)$-dimensional gauge invariance condition \eqref{Eq:GaugeInvdPlusd}: \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\,\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)\text{PT}(1,\pmb{\sigma},r).\label{Eq:NewGaugeIDPf1} \end{eqnarray} Here, $\mathcal{G}'^{\,\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]$ denotes the set of graphs (constructed by the same rule as in section \ref{Sec:Expansion}) in which \emph{all elements of $\mathsf{A}\subseteq \mathsf{H}$ ($\mathsf{A}\neq\emptyset$) serve as starting points of odd-length chains, and every odd-length chain starts with an element of $\mathsf{A}$.} Possible chains of even length must start with elements of $\mathsf{H}\setminus \mathsf{A}$. For a given permutation $\pmb{\sigma}$ and a given graph $\mathcal{F}\in\mathcal{G}'^{\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]$, $\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)$ is obtained by associating chains with factors of the form \begin{eqnarray} s_{ah_{i_j}}s_{h_{i_j}h_{i_{j-1}}}\dots s_{h_{i_1}b}, \end{eqnarray} in which $a$ and $b$ are the starting and ending points of a chain, while $h_{i_1}$, ..., $h_{i_j}$ are its internal nodes. Note that the prefactors of all chains in any given graph in $\mathcal{G}'^{\,\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]$ together produce the same overall factor $(-1)^{{s\over 2}-{1\over 2}{N_{o}}}$, where $s$, the number of elements in the set $\mathsf{H}$, equals the total length of all chains, and $N_o$, the number of odd-length chains, equals the cardinality of the set $\mathsf{A}$. This overall factor has therefore been dropped in equation \eqref{Eq:NewGaugeIDPf1}.
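The parity and sign pattern of the two chain types can be verified numerically. The following sketch is our own check (with random $d$-dimensional momenta; no on-shell conditions are needed for these purely algebraic identities): it builds the $(d+d)$-dimensional objects of \eqref{Eq:DimensionalReduction} and tests the shortest chain of each type.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
k = {a: rng.standard_normal(d) for a in ("a", "h", "b")}

# (d+d)-dimensional reduction: momenta K = (k; 0), polarizations E = (0; k)
K = {a: np.concatenate([k[a], np.zeros(d)]) for a in k}
E = {a: np.concatenate([np.zeros(d), k[a]]) for a in k}
# F^{UV} = K^U E^V - K^V E^U
F = {a: np.outer(K[a], E[a]) - np.outer(E[a], K[a]) for a in k}

s = lambda x, y: k[x] @ k[y]  # d-dimensional Mandelstam-like products

# odd-length type-1 chain vanishes: E_a . K_b = 0
assert abs(E["a"] @ K["b"]) < 1e-12
# even-length type-1 chain: E_a . F_h . K_b = -(k_a.k_h)(k_h.k_b),
# i.e. prefactor (-1)^{L/2} with L = 2
assert np.isclose(E["a"] @ F["h"] @ K["b"], -s("a", "h") * s("h", "b"))
# odd-length type-2 chain, length 1: K_a . K_b = k_a.k_b
assert np.isclose(K["a"] @ K["b"], s("a", "b"))
# even-length type-2 chain vanishes: K_a . F_h . K_b = 0
assert abs(K["a"] @ F["h"] @ K["b"]) < 1e-12
```

Each identity follows from $\mathcal{E}_i\cdot\mathcal{K}_j=0$ and $\mathcal{E}_i\cdot\mathcal{E}_j=\mathcal{K}_i\cdot\mathcal{K}_j=k_i\cdot k_j$ under the assignment \eqref{Eq:DimensionalReduction}.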
To translate the gauge invariance induced relation \eqref{Eq:NewGaugeIDPf1} for Parke-Taylor factors into an amplitude relation, we consider the expression \begin{eqnarray} \sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\,\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]} }(-1)^{{(n+1)(n+2)\over 2}}\int d\Omega_{\text{CHY}}\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)\text{PT}(1,\pmb{\sigma},r)\mathcal{I}_R, \end{eqnarray} where $\mathcal{I}_R$ can be $\mathcal{I}^{\text{BS}}_R$, $\mathcal{I}^{\text{YM}}_R$ or $\mathcal{I}^{\text{NLSM}}_R$ in \eqref{Eq:CHYBSintegrand}, \eqref{Eq:CHYYMintegrand} or \eqref{eq:DR}, respectively. Since the coefficients $\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)$ are independent of the scattering variables, they can be moved outside the integral. The relation for Parke-Taylor factors \eqref{Eq:NewGaugeIDPf1} then gives the following gauge invariance induced amplitude relations \begin{eqnarray} \boxed{0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\,\pmb{\sigma}}_{\mathsf{H}}[\mathsf{A}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)A(1,\pmb{\sigma},r)\label{Eq:NewGaugeIDAmp1}}, \end{eqnarray} for any nonempty $\mathsf{A}$ ($\mathsf{A}\subseteq \mathsf{H}$). \subsection{Examples for the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1}} Now let us present several examples of the gauge invariance induced amplitude relation \eqref{Eq:NewGaugeIDAmp1}. \begin{figure} \centering \includegraphics[width=7in]{4ptGraphs.jpg} \caption{All possible graphs with $\mathsf{H}=\{h_1,h_2\}$ and reference order $\mathsf{R}=\{h_1,h_2\}$.
Graphs $(a)$ and $(b)$ correspond to the permutations $\{2,\dots,r-1\}\shuffle\{h_1,h_2\}$, while graphs $(c)$ and $(d)$ correspond to the permutations $\{2,\dots,r-1\}\shuffle\{h_2,h_1\}$.}\label{Fig:Figure1} \end{figure} \subsubsection{$\mathsf{H}=\{h_1,h_2\}$} The first example is given by $\mathsf{H}=\{h_1,h_2\}$. If the reference order is fixed as $\mathsf{R}=\{h_1,h_2\}$, all graphs given by the graphic rule in section \ref{Sec:Expansion} are displayed in figure \ref{Fig:Figure1}. The graphs $(a)$, $(b)$ in figure \ref{Fig:Figure1} contribute to the permutations $\{2,\dots,r-1\}\shuffle\{h_1,h_2\}$, while $(c)$, $(d)$ contribute to the relative order $\{2,\dots,r-1\}\shuffle\{h_2,h_1\}$. In the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1}, the nonempty subset $\mathsf{A}$ cannot contain only one element, because the total length of all chains is the even number $2$. If $\mathsf{A}$ contains, for example, $h_1$, {\it i.e.}, there is an odd-length chain started by $h_1$, we must have another odd-length chain started by $h_2$ so that the total length of all chains is even. Thus the nonempty subset $\mathsf{A}$ of $\mathsf{H}$ can only be chosen as $\{h_1,h_2\}$, with $h_1$ and $h_2$ the starting points of two length-1 chains in this example. The graph $(b)$, which contains a length-2 chain, does not appear in our gauge invariance induced relation. The relation \eqref{Eq:NewGaugeIDAmp1} for $\mathsf{A}=\{h_1,h_2\}$ reads \begin{eqnarray} 0&=&\sum\limits_{\pmb{\sigma}}s_{h_2X_{h_2}}s_{h_1X_{h_1}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}s_{h_2X_{h_2}}(s_{h_1X_{h_1}}+s_{h_1h_2})A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1\},r),\label{Eq:GaugeInducedExample1} \end{eqnarray} where $s_{aX_a}\equiv\sum\limits_{\scriptsize\substack{i\in\{1,2,\dots,r-1\}\\\text{s.t.} \sigma^{-1}(i)<\sigma^{-1}(a)}}s_{ai}$. This relation is in agreement with a fundamental BCJ relation.
\subsubsection{$\mathsf{H}=\{h_1,h_2,h_3\}$} We now consider examples with $\mathsf{H}=\{h_1,h_2,h_3\}$. For the reference order $\mathsf{R}=\{h_1,h_2,h_3\}$, all possible graphs constructed by the graphic rules are provided in figure \ref{Fig:5ptGraphs} in appendix \ref{App:5Pt}. For any graph, the total length of all chains must be $3$. As a result, the nonempty subset $\mathsf{A}$ in the relation \eqref{Eq:NewGaugeIDAmp1} can only contain an odd number of elements, {\it i.e.}, $\mathsf{A}$ can be $\{h_1\}$, $\{h_2\}$, $\{h_3\}$ or $\{h_1,h_2,h_3\}$. \subsubsection*{$\mathsf{A}=\{h_1\}$} If $\mathsf{A}$ contains only the element $h_1$, then $h_1$ must start a length-1 chain, while $h_3$ must start a length-2 chain $s_{h_3h_2}s_{h_2a}$ with the internal node $h_2$. Among the graphs in figure \ref{Fig:5ptGraphs}, only $(a5)$ (for the relative order $\{h_1,h_2,h_3\}$), $(c3)$, $(c4)$ (for the relative order $\{h_2,h_1,h_3\}$) and $(d2)$, $(d4)$, $(d6)$ (for the relative order $\{h_2,h_3,h_1\}$) contribute. Hence the relation for $\mathsf{A}=\{h_1\}$ is \begin{eqnarray} 0&=&\sum\limits_{\pmb{\sigma}}s_{h_1X_{h_1}}s_{h_3h_2}s_{h_2X_{h_2}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2,h_3\},r)\nonumber \\ &+&\sum\limits_{\pmb{\sigma}}(s_{h_1X_{h_1}}+s_{h_1h_2})s_{h_3h_2}s_{h_2X_{h_2}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1,h_3\},r)\nonumber \\ &+&\sum\limits_{\pmb{\sigma}}(s_{h_1X_{h_1}}+s_{h_1h_2}+s_{h_1h_3})s_{h_3h_2}s_{h_2X_{h_2}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_3,h_1\},r).\label{Eq:GaugeInducedExample2} \end{eqnarray} This relation is consistent with a fundamental BCJ relation. \subsubsection*{$\mathsf{A}=\{h_2\}$} If $\mathsf{A}=\{h_2\}$, $h_2$ must be the starting point of a length-1 chain under the choice of reference order $\mathsf{R}=\{h_1,h_2,h_3\}$, while $h_3$ must start a length-2 chain with the internal node $h_1$.
The graphs $(a3)$, $(a4)$, $(b2)$, $(b4)$, $(b6)$ and $(c5)$ have nonvanishing contributions, and the relation \eqref{Eq:NewGaugeIDAmp1} gives \begin{eqnarray} 0&=&\sum\limits_{\pmb{\sigma}}s_{h_2X_{h_2}} s_{h_3h_1}s_{h_1X_{h_1}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1,h_3\},r)\nonumber \\ &+&\sum\limits_{\pmb{\sigma}}(s_{h_2X_{h_2}}+s_{h_2h_1})s_{h_3h_1}s_{h_1X_{h_1}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2,h_3\},r)\nonumber \\ &+&\sum\limits_{\pmb{\sigma}}(s_{h_2X_{h_2}}+s_{h_2h_1}+s_{h_2h_3})s_{h_3h_1}s_{h_1X_{h_1}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_3,h_2\},r).\label{Eq:GaugeInducedExample3} \end{eqnarray} Again, the vanishing of the RHS can be considered a consequence of the fundamental BCJ relation. \subsubsection*{$\mathsf{A}=\{h_3\}$} If $\mathsf{A}=\{h_3\}$, the element $h_3$ can start either a length-$3$ chain or a length-$1$ chain. In the former case, both $h_1$ and $h_2$ must be internal nodes of the length-3 chain ($(a6)$ and $(c6)$ in figure \ref{Fig:5ptGraphs}), while in the latter case $h_2$ must start a length-$2$ chain with $h_1$ as the internal node ($(a2)$, $(b3)$, $(e5)$ and $(e6)$ in figure \ref{Fig:5ptGraphs}). Altogether, the relation \eqref{Eq:NewGaugeIDAmp1} becomes \begin{eqnarray} 0&=&\sum\limits_{\pmb{\sigma}}(s_{h_3h_2}s_{h_2h_1}s_{h_1X_{h_1}}+s_{h_3X_{h_3}}s_{h_2h_1}s_{h_1X_{h_1}})A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2,h_3\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}s_{h_3h_1}s_{h_1h_2}s_{h_2X_{h_2}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1,h_3\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}s_{h_3X_{h_3}}s_{h_2h_1}s_{h_1X_{h_1}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_3,h_2\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}s_{h_3X_{h_3}}s_{h_2h_1}(s_{h_1X_{h_1}}+s_{h_1h_3})A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_3,h_1,h_2\},r),\label{Eq:GaugeInducedExample4} \end{eqnarray} which is not as trivial as the previous examples.
One can check this identity by expanding all amplitudes in terms of BCJ basis amplitudes. \subsubsection*{$\mathsf{A}=\{h_1,h_2,h_3\}$} Now we consider the case $\mathsf{A}=\{h_1,h_2,h_3\}$, for which all elements of $\mathsf{H}$ serve as starting points of odd-length chains. The only possibility is that all chains are of length $1$. The relation \eqref{Eq:NewGaugeIDAmp1} then gives rise to \begin{eqnarray} 0&=&\sum\limits_{\pmb{\sigma}}s_{h_1X_{h_1}}s_{h_2X_{h_2}}s_{h_3X_{h_3}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2,h_3\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}(s_{h_1X_{h_1}}+s_{h_1h_2})s_{h_2X_{h_2}}s_{h_3X_{h_3}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1,h_3\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}(s_{h_1X_{h_1}}+s_{h_1h_2}+s_{h_1h_3})s_{h_2X_{h_2}}s_{h_3X_{h_3}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_2,h_3,h_1\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}s_{h_1X_{h_1}}(s_{h_2X_{h_2}}+s_{h_2h_3})s_{h_3X_{h_3}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_1,h_3,h_2\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}(s_{h_1X_{h_1}}+s_{h_1h_3})(s_{h_2X_{h_2}}+s_{h_2h_3})s_{h_3X_{h_3}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_3,h_1,h_2\},r)\nonumber \\ &&+\sum\limits_{\pmb{\sigma}}(s_{h_1X_{h_1}}+s_{h_1h_3}+s_{h_1h_2})(s_{h_2X_{h_2}}+s_{h_2h_3})s_{h_3X_{h_3}}A(1,\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\{h_3,h_2,h_1\},r).\label{Eq:GaugeInducedExample5} \end{eqnarray} The RHS of the above relation gets contributions from the eighteen graphs $(a1)$, $(b1)$, $(b5)$, $(c1)$, $(c2)$, $(d1)$, $(d3)$, $(d5)$, $(e1)$, $(e2)$, $(e3)$, $(e4)$, $(f1)$, $(f2)$, $(f3)$, $(f4)$, $(f5)$ and $(f6)$. Both the sum of the first three rows and the sum of the last three rows vanish due to the fundamental BCJ relation. \subsubsection{$\mathsf{H}=\{h_1,h_2,h_3,h_4\}$} As the last example, we consider the much more nontrivial case $\mathsf{H}=\{h_1,h_2,h_3,h_4\}$.
The nonempty subset in \eqref{Eq:NewGaugeIDAmp1} is chosen as $\mathsf{A}=\{h_3,h_4\}$ and the reference order is chosen as $\mathsf{R}=\{h_1,h_2,h_3,h_4\}$. If $h_4$ ($h_3$) is the starting point of a length-$3$ chain, $h_3$ ($h_4$) must be the starting point of a length-$1$ chain. Such graphs contain only two chains; if both $h_4$ and $h_3$ are starting points of length-$1$ chains, we must also have a length-$2$ chain of the form $s_{h_2h_1}s_{h_1Y_{h_1}}$. The coefficients for all possible permutations are displayed as follows ($\{h_1h_2h_3h_4\}$ is used to denote the permutation $1,\{2,\dots,r-1\}\shuffle\{h_1,h_2,h_3,h_4\},r$ for short) \begin{eqnarray} &&\{h_3h_1h_2h_4\}:s_{h_4h_2}s_{h_2h_1}s_{h_1X_{h_1}}s_{h_3X_{h_3}}+s_{h_4X_{h_4}}s_{h_3X_{h_3}}s_{h_2h_1}(s_{h_1X_{h_1}}+s_{h_1h_3}),\nonumber \\ &&\{h_1h_3h_2h_4\}:s_{h_4h_2}s_{h_2h_1}s_{h_1X_{h_1}}(s_{h_3X_{h_3}}+s_{h_3h_1})+s_{h_4X_{h_4}}s_{h_3X_{h_3}}s_{h_2h_1}s_{h_1X_{h_1}},\nonumber \\ &&\{h_1h_2h_3h_4\}: s_{h_4h_2}s_{h_2h_1}s_{h_1X_{h_1}}(s_{h_3X_{h_3}}+s_{h_3h_1}+s_{h_3h_2})+s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_2})s_{h_2h_1}s_{h_1X_{h_1}},\nonumber \\ &&\{h_1h_2h_4h_3\}:s_{h_4h_2}s_{h_2h_1}s_{h_1X_{h_1}}(s_{h_3X_{h_3}}+s_{h_3h_1}+s_{h_3h_2}+s_{h_3h_4})+s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_2}+s_{h_3h_4})s_{h_2h_1}s_{h_1X_{h_1}},\nonumber \\ &&\{h_3h_1h_4h_2\}:s_{h_4X_{h_4}}s_{h_3X_{h_3}}s_{h_2h_1}(s_{h_1X_{h_1}}+s_{h_1h_3}),~~~~\{h_1h_3h_4h_2\}:s_{h_4X_{h_4}}s_{h_3X_{h_3}}s_{h_2h_1}s_{h_1X_{h_1}},\nonumber \\ &&\{h_1h_4h_3h_2\}:s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_4})s_{h_2h_1}s_{h_1X_{h_1}},~~~~\{h_1h_4h_2h_3\}:s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_4}+s_{h_3h_2})s_{h_2h_1}s_{h_1X_{h_1}},\nonumber \\ &&\{h_3h_4h_1h_2\}:s_{h_4X_{h_4}}s_{h_3X_{h_3}}s_{h_2h_1}(s_{h_1X_{h_1}}+s_{h_1h_3}+s_{h_1h_4}),\nonumber \\ &&\{h_4h_3h_1h_2\}:s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_4})s_{h_2h_1}(s_{h_1X_{h_1}}+s_{h_1h_3}+s_{h_1h_4}),\nonumber \\ 
&&\{h_4h_1h_3h_2\}:s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_4})s_{h_2h_1}(s_{h_1X_{h_1}}+s_{h_1h_4}),\nonumber \\ &&\{h_4h_1h_2h_3\}:s_{h_4X_{h_4}}(s_{h_3X_{h_3}}+s_{h_3h_4}+s_{h_3h_2})s_{h_2h_1}s_{h_1X_{h_1}},\nonumber \\ &&\{h_3h_2h_1h_4\}:s_{h_4h_1}s_{h_1h_2}s_{h_2X_{h_2}}s_{h_3X_{h_3}},~~~~~~~~~~~~~~~~~~\{h_2h_3h_1h_4\}:s_{h_4h_1}s_{h_1h_2}s_{h_2X_{h_2}}(s_{h_3X_{h_3}}+s_{h_3h_2}),\nonumber \\ &&\{h_2h_1h_3h_4\}:s_{h_4h_1}s_{h_1h_2}s_{h_2X_{h_2}}(s_{h_3X_{h_3}}+s_{h_3h_2}+s_{h_3h_1})+s_{h_4X_{h_4}}s_{h_3h_1}s_{h_1h_2}s_{h_2X_{h_2}},\nonumber \\ &&\{h_2h_1h_4h_3\}:s_{h_4h_1}s_{h_1h_2}s_{h_2X_{h_2}}(s_{h_3X_{h_3}}+s_{h_3h_2}+s_{h_3h_1}+s_{h_3h_4})+s_{h_4X_{h_4}}s_{h_3h_1}s_{h_1h_2}s_{h_2X_{h_2}},\nonumber \\ &&\{h_2h_4h_1h_3\}:s_{h_4X_{h_4}}s_{h_3h_1}s_{h_1h_2}s_{h_2X_{h_2}},~~~~~~~~~~~~~~~~~~\{h_4h_2h_1h_3\}:s_{h_4X_{h_4}}s_{h_3h_1}s_{h_1h_2}(s_{h_2X_{h_2}}+s_{h_2h_4}).\label{Eq:GaugeInducedExample6} \end{eqnarray} \subsection{The boundary case $\mathsf{A}=\mathsf{H}$ and partial momentum kernel} When we set $\mathsf{A}=\mathsf{H}$, every graph in the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1} only contains length-1 chains (as shown by examples \eqref{Eq:GaugeInducedExample1} and \eqref{Eq:GaugeInducedExample5}). Then the relation \eqref{Eq:NewGaugeIDAmp1} becomes \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,{\mathsf{H}}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\sigma}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)A(1,\pmb{\sigma},r). \label{Eq:NewGaugeIDAmpBoundary} \end{eqnarray} Assuming that the reference order is $\mathsf{R}=\left\{h_{\rho(1)},h_{\rho(2)},\dots,h_{\rho(s)}\right\}$, let us analyze the coefficients in the above equation in more detail. A length-1 chain started by $h_{\rho(s)}$ can end at any gluon $l_s\in\{1,\dots,r-1\}$ s.t. $\sigma^{-1}(l_s)<\sigma^{-1}(h_{\rho(s)})$ and is associated with a factor $s_{h_{\rho(s)}l_s}$.
A length-1 chain started by $h_{\rho(s-1)}$ can end at any element $l_{s-1}\in \{1,\dots,r-1\}\cup\{h_{\rho(s)}\}$ s.t. $\sigma^{-1}(l_{s-1})<\sigma^{-1}(h_{\rho(s-1)})$ and is associated with a factor $s_{h_{\rho(s-1)}l_{s-1}}$. This observation can be extended to the arbitrary case: a length-1 chain started by $h_{\rho(i)}$ in \eqref{Eq:NewGaugeIDAmpBoundary} can end at any $l_{i}\in \{1,\dots,r-1\}\cup\{h_{\rho(i+1)},\dots,h_{\rho(s)}\}$ s.t. $\sigma^{-1}(l_{i})<\sigma^{-1}(h_{\rho(i)})$. The coefficient for a given permutation $\pmb{\sigma}$ then reads \begin{eqnarray} \sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\sigma}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)&=&\sum\limits_{\substack{l_i\in\{1,2,\dots,r-1\}\cup\{h_{\rho(i+1)},\dots,h_{\rho(s)}\}\\\text{s.t.\,} \sigma^{-1}(l_i)<\sigma^{-1}(h_{\rho(i)})\,\text{for all}\,i=1,\dots,s}}s_{h_{\rho(1)}l_1}s_{h_{\rho(2)}l_2}\dots s_{h_{\rho(s)}l_s}.\label{Eq:BoundaryCoefficient} \end{eqnarray} An interesting observation is that we can reexpress the coefficient \eqref{Eq:BoundaryCoefficient} by defining a `partial momentum kernel'. Given two permutations $\pmb{\sigma}$ and $\pmb{\rho}$ of elements in $\{2,\dots,m\}$ and a nonempty subset $\mathsf{H}$ of $\{2,\dots,m\}$, the partial momentum kernel $\W S_{\mathsf{H}}[\pmb{\sigma}|\pmb{\rho}]$ is defined by \begin{eqnarray} \W S_{\mathsf{H}}[\pmb{\sigma}|\pmb{\rho}]\equiv\prod\limits_{a\in \mathsf{H}}\biggl[s_{a1}+\sum\limits_{l\in \{2,\dots,m\}}\theta(\sigma^{-1}(a)-\sigma^{-1}(l))\theta(\rho^{-1}(a)-\rho^{-1}(l))s_{al}\biggr],\label{Eq:PartialMomentumKernal} \end{eqnarray} where $\sigma^{-1}(a)$ and $\rho^{-1}(a)$ denote the positions of $a$ in the permutations $\pmb{\sigma}$ and $\pmb{\rho}$, respectively. Given $a\in \mathsf{H}$ and $l\in \{2,\dots,m\}$, the product of the two step functions in \eqref{Eq:PartialMomentumKernal} is $1$ if both $\sigma^{-1}(a)>\sigma^{-1}(l)$ and $\rho^{-1}(a)>\rho^{-1}(l)$ are satisfied, and $0$ otherwise.
Explicit examples of the partial momentum kernel are given as \begin{eqnarray} \W S_{\{2\}}[2345|2543]&=&s_{21},~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\,\W S_{\{3\}}[2345|5423]=s_{31}+s_{32}, \nonumber \\ \W S_{\{2,5\}}[2345|4235]&=&s_{21}(s_{51}+s_{52}+s_{53}+s_{54}),~\W S_{\{2,3,4\}}[2345|3542]=s_{21}s_{31}(s_{41}+s_{43}). \end{eqnarray} Partial momentum kernels satisfy many useful properties: \begin{itemize} \item [(i)] The partial momentum kernel $\W S_{\mathsf{H}}[\pmb{\sigma}|\pmb{\rho}]$ is symmetric under the exchange of the permutations $\pmb{\sigma}$ and $\pmb{\rho}$, {\it i.e.}, \begin{eqnarray} \W S_{\mathsf{H}}[\pmb{\sigma}|\pmb{\rho}]=\W S_{\mathsf{H}}[\pmb{\rho}|\pmb{\sigma}].\label{Eq:prop1} \end{eqnarray} \item [(ii)] If the subset $\mathsf{H}$ is chosen as the full set $\{2,\dots,m\}$, we arrive at the usual momentum kernel \begin{eqnarray} \W S_{\{2,\dots,m\}}[\pmb{\sigma}_{2,m}|\pmb{\rho}_{2,m}]=S[\pmb{\sigma}_{2,m}|\pmb{\rho}_{2,m}].\label{Eq:prop2} \end{eqnarray} \item [(iii)] Assuming that $\pmb{\rho}_{\mathsf{B}}$ and $\pmb{\rho}'_{\mathsf{B}}$ are two permutations of elements of a set $\mathsf{B}$, while $\pmb{\rho}_{\mathsf{C}}$ is a permutation of elements of $\mathsf{C}$, the partial momentum kernel $\W S_{\mathsf{C}}\left[\pmb{\rho}_{\mathsf{B}},\pmb{\rho}_{\mathsf{C}}|\pmb{\sigma}_{\mathsf{B}}\shuffle\pmb{\sigma}_{\mathsf{C}}\right]$ satisfies \begin{eqnarray} \W S_{\mathsf{C}}\left[\pmb{\rho}_{\mathsf{B}},\pmb{\rho}_{\mathsf{C}}|\pmb{\sigma}_{\mathsf{B}}\shuffle\pmb{\sigma}_{\mathsf{C}}\right]=\W S_{\mathsf{C}}\left[\pmb{\rho}'_{\mathsf{B}},\pmb{\rho}_{\mathsf{C}}|\pmb{\sigma}_{\mathsf{B}}\shuffle\pmb{\sigma}_{\mathsf{C}}\right].\label{Eq:prop3} \end{eqnarray} \item [(iv)] The following property, which relates the usual momentum kernel and the partial momentum kernel, will be useful in the coming sections: \begin{eqnarray} 
S\left[\pmb{\rho}_{\mathsf{B}},\pmb{\rho}_{\mathsf{C}}|\pmb{\sigma}_{\mathsf{B}}\shuffle\pmb{\sigma}_{\mathsf{C}}\right]=S\left[\pmb{\rho}_{\mathsf{B}}|\pmb{\sigma}_{\mathsf{B}}\right]\W S_{\mathsf{C}}[\pmb{\rho}_{\mathsf{B}},\pmb{\rho}_{\mathsf{C}}|\pmb{\sigma}_{\mathsf{B}}\shuffle\pmb{\sigma}_{\mathsf{C}}].\label{Eq:prop4} \end{eqnarray} \end{itemize} Having defined the partial momentum kernel \eqref{Eq:PartialMomentumKernal} and choosing the reference order as $\mathsf{R}=\{h_{\rho(1)},h_{\rho(2)},\dots,h_{\rho(s)}\}$, we naturally write the coefficient \eqref{Eq:BoundaryCoefficient} as \begin{eqnarray} \sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\sigma}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},r)=\W S_{\mathsf{H}}[\pmb{\sigma}|2,\dots,r-1,h_{\rho(s)},h_{\rho(s-1)},\dots,h_{\rho(1)}]. \end{eqnarray} The relation \eqref{Eq:NewGaugeIDAmpBoundary} for $\mathsf{A}=\mathsf{H}$ is then conveniently given by \begin{eqnarray} 0=\sum\limits_{\pmb{\sigma}\in\{2,\dots,r-1\}\shuffle\text{perms~} \mathsf{H}}\W S_{\mathsf{H}}[\pmb{\sigma}|2,\dots,r-1,h_{\rho(s)},h_{\rho(s-1)},\dots,h_{\rho(1)}]A(1,\pmb{\sigma},r).\label{Eq:NewGaugeIDAmpBoundary1} \end{eqnarray} For the cases with $\mathsf{H}=\{h_1,h_2\}$ and $\mathsf{H}=\{h_1,h_2,h_3\}$, \eqref{Eq:NewGaugeIDAmpBoundary1} reduces to the examples \eqref{Eq:GaugeInducedExample1} and \eqref{Eq:GaugeInducedExample5} respectively. In fact, the relation \eqref{Eq:NewGaugeIDAmpBoundary1} is consistent with the following fundamental BCJ relation for a given permutation $\pmb{\eta}\in\{2,\dots,r-1\}\shuffle\text{perms\,}\{\mathsf{H}\setminus \{h_{\rho(s)}\}\}$ \begin{eqnarray} 0&=&s_{h_{\rho(s)}1}A(1,h_{\rho(s)},\eta(1),\eta(2),\dots,\eta(r+s-3),r)\nonumber \\ &&+(s_{h_{\rho(s)}1}+s_{h_{\rho(s)}\eta(1)})A(1,\eta(1),h_{\rho(s)},\eta(2),\dots,\eta(r+s-3),r)\nonumber \\ &&+\dots+(s_{h_{\rho(s)}1}+s_{h_{\rho(s)}\eta(1)}+\dots+s_{h_{\rho(s)}\eta(r+s-3)})A(1,\eta(1),\eta(2),\dots,\eta(r+s-3),h_{\rho(s)},r). 
\end{eqnarray} \section{Three types of BCJ numerators in NLSM}\label{Sec:NumeratorsNLSM1} As an application of the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1}, in the remaining sections we will prove the equivalence between three distinct approaches to scattering amplitudes in NLSM: (i) traditional Feynman diagrams, (ii) the CHY formula and (iii) the Abelian Z theory. The starting point of our proof is the fact that all three approaches result in the \emph{dual DDM formula} \begin{eqnarray} M(1,\dots,n)=\sum\limits_{\pmb{\sigma}\in S_{n-2}}n_{1|\pmb{\sigma}|n} A(1,\pmb{\sigma},n),~~~(\text{$n$ is even})\label{Eq:DualDDM} \end{eqnarray} with distinct (DF, CMS and DT) expressions of the BCJ numerators $n_{1|\pmb{\sigma}|n}$ (as polynomial functions of Mandelstam variables). The $A(1,\pmb{\sigma},n)$ in \eqref{Eq:DualDDM} are bi-scalar amplitudes. Thus the three approaches are equivalent to each other if and only if the following relations for bi-scalar amplitudes are satisfied: \begin{eqnarray} \sum\limits_{\pmb{\sigma}\in S_{n-2}}n^{\text{DF}}_{1|\pmb{\sigma}|n}A(1,\pmb{\sigma},n)=\sum\limits_{\pmb{\sigma}\in S_{n-2}}n^{\text{CMS}}_{1|\pmb{\sigma}|n} A(1,\pmb{\sigma},n)=\sum\limits_{\pmb{\sigma}\in S_{n-2}}n^{\text{DT}}_{1|\pmb{\sigma}|n}A(1,\pmb{\sigma},n).\label{Eq:Equivalence} \end{eqnarray} We will review the three types of BCJ numerators in this section and prove the equivalence condition \eqref{Eq:Equivalence} by using \eqref{Eq:NewGaugeIDAmp1} in sections \ref{Sec:DTCMS} and \ref{Sec:DFCMS}. \subsection{Three distinct constructions of BCJ numerators in NLSM }\label{Sec:NumeratorsNLSM} Now let us review the DF, CMS and DT types of BCJ numerators, which correspond to the Feynman diagram approach, the Abelian Z theory and the CHY formula, respectively. \subsubsection*{The DF type numerators} The DF type BCJ numerator was derived by applying the off-shell extended BCJ relation \cite{Chen:2013fya, Du:2016tbc}, which is based on Berends-Giele recursion (thus Feynman diagrams).
The explicit expression of the DF type BCJ numerator is given by a proper combination of momentum kernels:\footnote{We adjust the total sign by $(-1)$ to agree with the CMS type numerators.} \begin{eqnarray} n^{\text{DF}}_{1|\pmb{\sigma}|n}=(-1)\sum\limits_{\pmb{\rho}\in \mathsf{\Gamma}}S[\pmb{\sigma}|\pmb{\rho}],\label{Eq:DFform} \end{eqnarray} where we sum over the permutations $\pmb{\rho}$ in $\mathsf{\Gamma}$, which is defined as the collection of permutations satisfying the following conditions. For any $a\in\{2,\dots,n-1\}$, we assume $b$ ($c$) is the nearest element on the LHS (RHS) of $a$ in the permutation $\pmb{\rho}$ which satisfies $\sigma^{-1}(b)>\sigma^{-1}(a)$ ($\sigma^{-1}(c)>\sigma^{-1}(a)$)\footnote{ Here $1$ and $n$ are correspondingly considered as the first and the last elements in both permutations $\pmb{\sigma}$ and $\pmb{\rho}$. There is always a particle $n$ (maybe not the nearest) on the RHS and LHS of $a$ in the permutation $\pmb{\rho}$ s.t. $\sigma^{-1}(n)=n>\sigma^{-1}(a)$ in the sense of cyclicity, see \cite{ Du:2016tbc}.}. The permutations $\pmb{\rho}$ in the DF type numerator \eqref{Eq:DFform} are those satisfying either of the following two conditions: (i) There is an odd number of elements between $a$ and $b$, as well as between $a$ and $c$, in the permutation $\pmb{\rho}$. (ii) There is no element between $a$ and $b$, nor between $a$ and $c$, in the permutation $\pmb{\rho}$. Explicit examples are given as \begin{eqnarray} n^{\text{DF}}_{1|23|4}&=&-S[23|32]=-s_{21}s_{31},\label{Eq:DF4Pt}\\ n^{\text{DF}}_{1|2345|6}&=&-(S[2345|5243]+S[2345|5342]+S[2345|4352]+S[2345|4253]+S[2345|3254])\label{Eq:DF6Pt}\nonumber \\ &=&(-1)\Bigl[s_{51}\left(s_{41}+s_{42}\right)(s_{31}+s_{32})s_{21}+s_{51}\left(s_{41}+s_{43}\right)s_{31}s_{21}\nonumber \\ &&+(s_{51}+s_{54}+s_{53})s_{41}s_{31}s_{21}+(s_{51}+s_{54}+s_{52})s_{41}(s_{31}+s_{32})s_{21}\nonumber \\ &&+(s_{51}+s_{52}+s_{53})(s_{41}+s_{42}+s_{43})s_{31}s_{21}\Bigr].
\end{eqnarray} \subsubsection*{The CMS type numerators} The CMS type BCJ numerator, which comes from the Abelian Z theory \cite{Carrasco:2016ldy}, expresses each numerator in the dual DDM decomposition by only one momentum kernel: \begin{eqnarray} n^{\text{CMS}}_{1|\pmb{\sigma}|n}=(-1)^{n\over 2}S[\sigma(2),\sigma(3),\dots,\sigma(n-1)|\sigma(2),\sigma(3),\dots,\sigma(n-1)].\label{Eq:CMSform} \end{eqnarray} Explicit expressions for the four- and six-point cases are \begin{eqnarray} n^{\text{CMS}}_{1|23|4}&=&S[23|23]=s_{21}(s_{31}+s_{32}),\nonumber \\ n^{\text{CMS}}_{1|2345|6}&=&-S[2345|2345]=-s_{21}(s_{31}+s_{32})(s_{41}+s_{42}+s_{43})(s_{51}+s_{52}+s_{53}+s_{54}). \end{eqnarray} It is worth emphasizing that both the DF and the CMS type BCJ numerators manifest the relabeling symmetry of the $n-2$ elements, {\it i.e.}, $n_{1|\sigma(2),\dots,\sigma(n-1)|n}$ can be obtained from $n_{1|2,\dots,n-1|n}$ by the replacement $2,3,\dots,n-1\to\sigma(2),\sigma(3),\dots,\sigma(n-1)$. \subsubsection*{The DT type numerators} Unlike the previous two constructions, the DT type numerator, which is based on the graphic expansion of amplitudes and the dimensional reduction in the CHY formula, does not take a symmetric form. This type of BCJ numerator is constructed by a graphic rule instead of momentum kernels. The construction of $n^{\text{DT}}_{1|\pmb{\sigma}|n}$ is given by \begin{itemize} \item Consider $1$ as the root of a tree and define a reference order of elements in $\{2,\dots,n-1\}$, say $\mathsf{R}\equiv\{\rho(1),\dots,\rho(s=n-2)\}$. \item Pick $\rho(s)$ in $\{\sigma(2),\dots,\sigma(n-1)\}$. Construct a chain $\mathbb{C}[1]\equiv\{l=1,i_1,\dots,i_j,\rho(s)\}$ of even length started by $\rho(s)$ towards $1$ with internal nodes $i_1, i_2, \dots, i_{j}$ ($j$ is odd) s.t. $\sigma^{-1}(l=1)<\sigma^{-1}({i_1})<\sigma^{-1}({i_2})< \dots < \sigma^{-1}(i_j)<\sigma^{-1}(\rho(s))$. This chain is associated with a factor % \begin{eqnarray} s_{\rho(s)i_j}s_{i_{j}i_{j-1}}\dots s_{i_2i_1}s_{i_11}. 
\end{eqnarray} % Remove this chain from the ordered set $\mathsf{R}\to \mathsf{R}'=\mathsf{R}\setminus \{i_1,i_2, \dots, i_{j},\rho(s)\}\equiv\{\rho'(1),\dots,\rho'(s')\}$. \item Repeat the previous step: Pick $\rho'(s')\in \mathsf{R}'$ and construct a chain $\mathbb{C}[2]\equiv\{l',i'_1,\dots,i'_{j'},\rho'({s'})\}$ of even length ($j'$ is odd), which starts from $\rho'(s')$ towards a node $l'$ on $\mathbb{C}[1]$ and satisfies $\sigma^{-1}(l')<\sigma^{-1}({i'_1})< \dots < \sigma^{-1}(i'_{j'})<\sigma^{-1}(\rho'(s'))$. The new chain $\mathbb{C}[2]$ is associated with a factor % \begin{eqnarray} s_{\rho'(s')i'_{j'}}s_{i'_{j'}i'_{j'-1}}\dots s_{i'_2i'_1}s_{i'_1l'}. \end{eqnarray} Remove this chain from the ordered set $\mathsf{R}\to \mathsf{R}''=\mathsf{R}'\setminus \{i'_1,i'_2, \dots, i'_{j'},\rho'(s')\}\equiv\{\rho''(1),\dots,\rho''(s'')\}$. \item Repeat the above steps until the ordered set $\mathsf{R}$ becomes empty. Each new even-length chain is attached to nodes that have already been used, and is associated with a factor. Collecting the factors corresponding to all chains in a graph and summing over all possible graphs (noting that the total phase factor is $(-1)^{{n\over 2}-1}$), we finally get the BCJ numerator $n^{\text{DT}}_{1|\pmb{\sigma}|n}$. % \end{itemize} Using the notation established for the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1}, we can write the DT type numerators as\footnote{The prefactor $(-1)^{n-2 \over 2}$ is adjusted by $(-1)$ to agree with that in the CMS type. This adjustment does not affect our discussions.} \begin{eqnarray} n^{\text{DT}}_{1|\pmb{\sigma}|n}=(-1)^{n\over 2}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\,\pmb{\sigma}}_{\{2,\dots,n-1\}}[\emptyset]}}\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},n)\label{Eq:DTform} \end{eqnarray} where the $\mathsf{H}$ set, whose elements serve as starting points or internal nodes of trees, is chosen as $\{2,\dots,n-1\}$.
The empty set $\emptyset$ in $\mathcal{G}'^{\,\pmb{\sigma}}_{\{2,\dots,n-1\}}[\emptyset]$ means that all chains are of even length. The explicit expressions for the four-point numerators $n^{\text{DT}}_{1|23|4}$ and $n^{\text{DT}}_{1|32|4}$ are given by \begin{eqnarray} n^{\text{DT}}_{1|23|4}=s_{32}s_{21},~~~~~~n^{\text{DT}}_{1|32|4}=0, \end{eqnarray} where the reference order is chosen as $\mathsf{R}=\{2,3\}$. \section{The equivalence between DT and CMS constructions of NLSM amplitudes}\label{Sec:DTCMS} The DT and the CMS types of numerators produce the same amplitude if and only if the second equality in \eqref{Eq:Equivalence} holds. Substituting \eqref{Eq:DTform} and \eqref{Eq:CMSform} into \eqref{Eq:Equivalence}, we arrive at the following relation for bi-scalar amplitudes $A(1,\pmb{\sigma},n)$ \begin{eqnarray} \boxed{\sum\limits_{\pmb{\sigma}\in S_{n-2}}S[\pmb{\sigma}|\pmb{\sigma}]A(1,\pmb{\sigma},n)=\sum\limits_{\pmb{\sigma}\in S_{n-2}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\sigma}}_{\{2,\dots,n-1\}}[\emptyset]}}\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\sigma},n) A(1,\pmb{\sigma},n).~~(\text{for even $n$}) \label{Eq:EquivDTCMS}} \end{eqnarray} To prove the equivalence condition \eqref{Eq:EquivDTCMS}, we carry out our discussion in a more general framework: \begin{itemize} \item [(i)] The momentum kernel $S[\pmb{\sigma}|\pmb{\sigma}]$ is generalized to the partial momentum kernel \begin{eqnarray}\W S_{\mathsf{H}}\left[\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\right]\end{eqnarray} where $\mathsf{H}$ is an arbitrary nonempty set with $s$ elements. When setting $\{2,\dots,r-1\}=\emptyset$ and $\mathsf{H}=\{2,\dots,n-1\}$, we return to the original momentum kernel $S[\pmb{\sigma}|\pmb{\sigma}]$ ($\pmb{\sigma}\in S_{n-2}$). \item [(ii)] The number of external particles is not limited to be even. Amplitudes with an odd number of external particles are also under consideration.
\item [(iii)] The amplitude $A(1,\pmb{\sigma},n)$ can be color-ordered Yang-Mills, bi-scalar or color-ordered NLSM amplitudes. \end{itemize} With the above generalizations, we will prove the following two relations \begin{eqnarray} \boxed{\sum\limits_{\pmb{\sigma}_{\mathsf{H}}}\sum\limits_{\substack{\pmb{\alpha}\,\in\,\{2,\dots,r-1\}\\\,\,\shuffle\,\pmb{\sigma}_{\mathsf{H}}}}\W S_{\mathsf{H}}\left[\pmb{\alpha}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\right]A(1,\pmb{\alpha},r)=\sum\limits_{\substack{\pmb{\alpha}\in\{2,\dots,r-1\}\\\,\shuffle\text{perms}\,{\mathsf{H}}}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\,\pmb{\alpha}}_{\mathsf{H}}[\emptyset]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r)\,(\text{for even $s$})}\label{Eq:GenEquiv1}\nonumber \\ \end{eqnarray} and \begin{eqnarray} \boxed{\sum\limits_{\pmb{\sigma}_{\mathsf{H}}}\sum\limits_{\substack{\pmb{\alpha}\,\in\,\{2,\dots,r-1\}\\\,\,\shuffle\,\pmb{\sigma}_{\mathsf{H}}}}\W S_{\mathsf{H}}\left[\pmb{\alpha}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\right]A(1,\pmb{\alpha},r)=0\,(\text{for odd $s$})}\label{Eq:GenEquiv2} \end{eqnarray} corresponding to whether the number of elements in the set $\mathsf{H}$ is even or odd. Coefficients of amplitudes therein are expressed by partial momentum kernels, while the summation $\sum\limits_{\pmb{\sigma}_{\mathsf{H}}}$ means that we sum over all possible permutations of elements in $\mathsf{H}$. The following consequences of the relations \eqref{Eq:GenEquiv1} and \eqref{Eq:GenEquiv2} can be deduced: \begin{itemize} \item When we set $r=n$ (for even $n$), $\mathsf{H}=\{2,\dots,n-1\}$ and $\{2,\dots,r-1\}\to\emptyset$, the relation \eqref{Eq:GenEquiv1} naturally reduces to the equivalence condition \eqref{Eq:EquivDTCMS} for even $n$. Thus the equivalence condition \eqref{Eq:EquivDTCMS} between the DT and CMS constructions is proven.
\item When we set $\{2,\dots,r-1\}\to\{\rho(2),\dots,\rho(r-1)\}$ in the partial momentum kernel $\W S_{\mathsf{H}}$ in \eqref{Eq:GenEquiv2} and apply the properties \eqref{Eq:prop3} and \eqref{Eq:prop1}, the relation \eqref{Eq:GenEquiv2} becomes \begin{eqnarray} \sum\limits_{\pmb{\sigma}_{\mathsf{H}}}\sum\limits_{\pmb{\alpha}\,\in\,\pmb{\rho}\,\shuffle\,\pmb{\sigma}_{\mathsf{H}}}\W S_{\mathsf{H}}\left[\pmb{\alpha}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\right]A(1,\pmb{\alpha},r)=0\,(\text{for odd $s$}). \end{eqnarray} Multiplying both sides of the above relation by the momentum kernel $S[\rho(2),\rho(3),\dots,\rho(r-1)|2,3,\dots,r-1]$ and applying the relation \eqref{Eq:prop4} between the usual momentum kernel and the partial momentum kernel, we arrive at an amplitude relation expressed by usual momentum kernels \begin{eqnarray} \boxed{\sum\limits_{\pmb{\sigma}_{\mathsf{H}}}\sum\limits_{\substack{\pmb{\alpha}\,\in\,\pmb{\rho}\,\shuffle\,\pmb{\sigma}_{\mathsf{H}}}} S\left[\pmb{\alpha}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\right]A(1,\pmb{\alpha},r)=0\,(\text{for odd $s$})},\label{Eq:GenEquiv2-1} \end{eqnarray} where $\pmb{\rho}$ is an arbitrary permutation of elements in $\{2,\dots,r-1\}$. The boundary case with $\mathsf{H}=\{2,\dots,n-1\}$, $\{1,\dots,r\}\to \{1,n\}$ yields a very interesting relation for amplitudes with an odd number of external particles \begin{eqnarray} \boxed{\sum\limits_{\pmb{\sigma}\in S_{n-2}} S\left[\pmb{\sigma}|\pmb{\sigma}\right]A(1,\pmb{\sigma},n)=0\,(\text{for odd $n$})}.\label{Eq:EquivDTCMS2} \end{eqnarray} \end{itemize} Although the relation \eqref{Eq:GenEquiv2} for odd $s$ is not used in the proof of the equivalence condition \eqref{Eq:EquivDTCMS} between the DT and the CMS constructions of NLSM amplitudes, the relation \eqref{Eq:GenEquiv2-1}, as a consequence of \eqref{Eq:GenEquiv2}, plays a crucial role in the proof of the equivalence between the DF and CMS constructions in the next section.
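The even-length-chain construction behind the DT type numerators can also be enumerated mechanically. The following Python sketch (function names are ours; Mandelstam variables are kept as sorted label pairs, using $s_{ab}=s_{ba}$) lists the monomials $\mathcal{D}^{[\mathcal{F}]}$ that enter the graphic expansion of \eqref{Eq:DTform}; the numerator itself is then the prefactor discussed above times their sum:

```python
from itertools import combinations

def pair(a, b):
    return tuple(sorted((a, b)))

def dt_monomials(sigma, R):
    """Enumerate the even-length-chain graphs of the DT construction.

    sigma: the ordering 1, sigma(2), ..., sigma(n-1) (leg n never enters).
    R:     the reference order of {2, ..., n-1}.
    Returns one monomial per graph, as a sorted tuple of Mandelstam labels.
    """
    pos = {v: i for i, v in enumerate(sigma)}
    out = []

    def rec(remaining, used, factors):
        if not remaining:
            out.append(tuple(sorted(factors)))
            return
        start, pool = remaining[-1], remaining[:-1]
        # an even-length chain has an odd number of internal nodes
        for k in range(1, len(pool) + 1, 2):
            for T in combinations(pool, k):
                if any(pos[t] >= pos[start] for t in T):
                    continue
                # internal nodes are forced into decreasing sigma-position
                chain = [start] + sorted(T, key=lambda t: -pos[t])
                for l in used:  # attach to an already-used node
                    if pos[l] < pos[chain[-1]]:
                        ps = [pair(chain[i], chain[i + 1])
                              for i in range(len(chain) - 1)]
                        ps.append(pair(chain[-1], l))
                        rec([h for h in pool if h not in T],
                            used | set(chain), factors + ps)

    rec(list(R), {1}, [])
    return out
```

For $n=4$ with $\mathsf{R}=\{2,3\}$ this yields the single monomial $s_{32}s_{21}$ for $\pmb{\sigma}=\{2,3\}$ and no graph at all for $\pmb{\sigma}=\{3,2\}$, reproducing the chain content of the four-point DT numerators quoted earlier (up to the overall prefactor).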
In the remaining discussions of this section, we establish the graphic expansion of the partial momentum kernel $\W S_{\mathsf{H}}[\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}]$ and prove the relations \eqref{Eq:GenEquiv1} and \eqref{Eq:GenEquiv2}. \subsection{Expressing partial momentum kernel by graphs} The partial momentum kernel $\W S_{\mathsf{H}}[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}]$ can be conveniently expanded by the graphic rule in section \ref{Sec:Expansion}, when replacing the factor $\epsilon_{h_a}\cdot F_{h_{i_1}}\cdot\dots\cdot F_{h_{i_j}}\cdot k_b$ for each chain by $s_{h_ah_{i_1}}s_{h_{i_1}h_{i_2}}\dots s_{h_{i_j}b}$. The reference order $\mathsf{R}=\{h_{\rho(1)},h_{\rho(2)},\dots,h_{\rho(s)}\}$ is chosen arbitrarily. We demonstrate this expansion by examples first. \subsubsection*{Example-1: $\mathsf{H}=\{h_1,h_2\}$} The $\sigma_{\mathsf{H}}$ in the partial momentum kernel $\W S_{\{h_1,h_2\}}[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle \sigma_{\mathsf{H}}|2,\dots,r-1,\sigma_{\mathsf{H}}]$ can be either $\{h_1,h_2\}$ or $\{h_2,h_1\}$. If we define the reference order $\mathsf{R}=\{h_1,h_2\}$, the partial momentum kernel with $\sigma_{\mathsf{H}}=\{h_1,h_2\}$ is expressed by the sum of $(a)$ and $(b)$ in figure \ref{Fig:Figure1}, while the partial momentum kernel with $\sigma_{\mathsf{H}}=\{h_2,h_1\}$ is expressed by the sum of $(c)$ and $(d)$ in figure \ref{Fig:Figure1}. If we change the reference order to $\mathsf{R}=\{h_2,h_1\}$, the graphs contributing to $\pmb{\sigma}_{\mathsf{H}}=\{h_1,h_2\}$ ($\pmb{\sigma}_{\mathsf{H}}=\{h_2,h_1\}$) become the graphs $(c)$ and $(d)$ ($(a)$ and $(b)$) in figure \ref{Fig:Figure1} with $h_1$ and $h_2$ exchanged.
Though the chain structures are different for different choices of reference order, the expression of each partial momentum kernel $\W S_{\{h_1,h_2\}}[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle \sigma_{\mathsf{H}}|2,\dots,r-1,\sigma_{\mathsf{H}}]$ is not changed. \subsubsection*{Example-2: $\mathsf{H}=\{h_1,h_2,h_3\}$} We now consider the partial momentum kernel \begin{eqnarray} \W S_{\{h_1,h_2,h_3\}}[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_1,h_3,h_2\}|2,\dots,r-1,\{h_1,h_3,h_2\}]\label{Eq:PartialKerGraphExample} \end{eqnarray} where $\mathsf{H}$ contains three elements and $\pmb{\sigma}_{\mathsf{H}}$ in this example is chosen as $\pmb{\sigma}_{\mathsf{H}}=\{h_1,h_3,h_2\}$. From the definition \eqref{Eq:PartialMomentumKernal}, \eqref{Eq:PartialKerGraphExample} is given by the product of three factors \begin{eqnarray} \Biggl[s_{h_11}+\sum\limits_{\small\substack{i\in\{2,\dots,r-1\}\\ \alpha^{-1}(i)<\alpha^{-1}(h_1) }}s_{h_1i}\Biggr]\Biggl[s_{h_31}+s_{h_3h_1}+\sum\limits_{\small\substack{i\in\{2,\dots,r-1\}\\ \alpha^{-1}(i)<\alpha^{-1}(h_3) }}s_{h_3i}\Biggr]\Biggl[s_{h_21}+s_{h_2h_1}+s_{h_2h_3}+\sum\limits_{\small\substack{i\in\{2,\dots,r-1\}\\\alpha^{-1}(i)<\alpha^{-1}(h_2) }}s_{h_2i}\Biggr].\label{Eq:PartialKerGraphExample1}\nonumber \\ \end{eqnarray} This partial momentum kernel can be obtained as follows: \begin{itemize} \item Define a reference order of elements in $\mathsf{H}$, e.g., $\mathsf{R}=\{h_1,h_2,h_3\}$. \item Pick the last element $h_3$ in the ordered set $\mathsf{R}=\{h_1,h_2,h_3\}$ and pick a term from the factor corresponding to $h_3$ in \eqref{Eq:PartialKerGraphExample1}. Such a term has the form $s_{h_3j}$, where $j$ can be any element in $\{h_1\}\cup\{1,2,\dots,r-1\}$ s.t. $\alpha^{-1}(j)<\alpha^{-1}(h_3)$. If $j$ is an element in $\{1,2,\dots,r-1\}$, we get a length-1 chain started from $h_3$ towards an element of $\{1,2,\dots,r-1\}$.
Else, if $j=h_1$, we further pick a factor $s_{h_1k}$ for $k\in\{1,2,\dots,r-1\}$ satisfying $\alpha^{-1}(k)<\alpha^{-1}(h_1)$; then a chain $s_{h_3h_1}s_{h_1k}$ started from $h_3$ towards $k$ has been constructed. We take the $j=h_1$ case for instance and continue our discussion. \item Remove the starting node $h_3$ and the internal node $h_1$ of the chain that has already been constructed from the ordered set $\mathsf{R}=\{h_1,h_2,h_3\}$ and redefine $\mathsf{R}$ as $\mathsf{R}\to\mathsf{R}'=\{h_2\}$. Construct a chain started from the element $h_2$ in $\mathsf{R}'$ towards $l\in\{h_1,h_3\}\cup\{1,2,\dots,r-1\}$. Then we have a factor $s_{h_2l}$. For example, we choose $l=h_1$. \item Remove $h_2$ from $\mathsf{R}'$; then the set $\mathsf{R}'$ becomes empty. Putting the chains obtained together, we arrive at a term $s_{h_3h_1}s_{h_1k}s_{h_2h_1}$ corresponding to the graph $(b4)$ of figure \ref{Fig:5ptGraphs}. \item The full partial momentum kernel in this example is obtained by summing over all possible graphs constructed by the above steps (displayed by the graphs $(b1)\sim(b6)$ in figure \ref{Fig:5ptGraphs}). \end{itemize} Again, we emphasize that the reference order $\mathsf{R}$ can be chosen arbitrarily. If we change the reference order, only the chains change; the structure of the graphs and the final expression of the partial momentum kernel are not changed.
Now we extend our discussions to the graphic expansion of any partial momentum kernel with the form: \begin{eqnarray} &&\W S_{\mathsf{H}}[\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}]\nonumber \\ &=&\Biggl[s_{{\sigma}_{\mathsf{H}}(1)1}+\sum\limits_{\small \substack{i\in\{2,\dots,r-1\}\\ \alpha^{-1}(i)<\alpha^{-1}(\sigma_{\mathsf{H}}(1))}}s_{{\sigma}_{\mathsf{H}}(1)i}\Biggr]\Biggl[s_{{\sigma}_{\mathsf{H}}(2)1}+s_{{\sigma}_{\mathsf{H}}(2)\sigma_{\mathsf{H}}(1)}+\sum\limits_{\small\substack{i\in\{2,\dots,r-1\}\\ \alpha^{-1}(i)<\alpha^{-1}(\sigma_{\mathsf{H}}(2))}}s_{{\sigma}_{\mathsf{H}}(2)i}\Biggr]\nonumber \\ &&\times\dots\times\Biggl[s_{{\sigma}_{\mathsf{H}}(s)1}+s_{{\sigma}_{\mathsf{H}}(s)\sigma_{\mathsf{H}}(1)}+\dots+s_{{\sigma}_{\mathsf{H}}(s)\sigma_{\mathsf{H}}(s-1)}+\sum\limits_{\small\substack{i\in\{2,\dots,r-1\}\\ \alpha^{-1}(i)<\alpha^{-1}(\sigma_{\mathsf{H}}(s))}}s_{{\sigma}_{\mathsf{H}}(s)i}\Biggr].\label{Eq:PartialKerGraph} \end{eqnarray} \begin{itemize} \item Define a reference order $\mathsf{R}=\{h_{\rho(1)},h_{\rho(2)},\dots,h_{\rho(s)}\}$ for elements in the set $\mathsf{H}$ (assume there are $s$ elements in the set $\mathsf{H}$). Pick $h_{\rho(s)}$ and an arbitrary term $s_{h_{\rho(s)}h_{i_j}}$ ($\sigma^{-1}(h_{i_j})<\sigma^{-1}(h_{\rho(s)})$) from the factor corresponding to $h_{\rho(s)}$. Then pick an arbitrary term $s_{h_{i_j}h_{i_{j-1}}}$ ($\sigma^{-1}(h_{i_{j-1}})<\sigma^{-1}(h_{i_j})$) from the factor corresponding to $h_{i_j}$. Next, pick a term of the form $s_{h_{i_{j-1}}h_{i_{j-2}}}$ ($\sigma^{-1}(h_{i_{j-2}})<\sigma^{-1}(h_{i_{j-1}})$) from the factor corresponding to $h_{i_{j-1}}$, and so on. This procedure is terminated at a factor $s_{h_{i_1}l}$ where $l$ belongs to the set $\{1,2,\dots,r-1\}$. Putting all factors together, we get a chain $s_{h_{\rho(s)}h_{i_j}}s_{h_{i_j}h_{i_{j-1}}}\dots s_{h_{i_1}l}$.
Redefine $\mathsf{R}$ by removing the internal nodes and the starting point of the chain which was already constructed: $\mathsf{R}\to \mathsf{R}'=\mathsf{R}\setminus \{h_{i_1},\dots,h_{i_j},h_{\rho(s)}\}\equiv \{h_{\rho'(1)},h_{\rho'(2)},\dots,h_{\rho'(s')}\}$. \item We construct a chain from $h_{\rho'(s')}$ towards an element $l'\in\{1,2,\dots,r-1\}\cup\{h_{i_1},\dots,h_{i_j},h_{\rho(s)}\}$ by picking $s_{h_{\rho'(s')}h_{i'_{j'}}}$, $s_{h_{i'_{j'}}h_{i'_{j'-1}}}$, ..., $s_{h_{i'_{1}}l'}$ ($\sigma^{-1}(l')<\sigma^{-1}(h_{i'_{1}})<\dots<\sigma^{-1}(h_{i'_{j'}})<\sigma^{-1}(h_{\rho'(s')})$) from the factors corresponding to $h_{\rho'(s')}$, $h_{i'_{j'}}$, ..., $h_{i'_{1}}$ in the partial momentum kernel \eqref{Eq:PartialKerGraph}. Then we get another chain $s_{h_{\rho'(s')}h_{i'_{j'}}}s_{h_{i'_{j'}}h_{i'_{j'-1}}}\dots s_{h_{i'_{1}}l'}$. Redefine $\mathsf{R}$ by $\mathsf{R}\to \mathsf{R}''=\mathsf{R}'\setminus \{h_{i'_{1}},\dots,h_{i'_{j'}},h_{\rho'(s')}\}$. \item Repeat the above steps until the $\mathsf{R}$ set becomes empty. Then, putting all chains together, we get a graph. The sum of all possible graphs gives the partial momentum kernel \eqref{Eq:PartialKerGraph}. \end{itemize} Obviously, if we define a unique reference order $\mathsf{R}$ for permutations $\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}$ with all possible $\pmb{\sigma}_{\mathsf{H}}$, the above graphic expansions of the partial momentum kernels $\W S_{\mathsf{H}}[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}]$ are related to the graphic expansion of $\mathcal{C}(1,\pmb{\sigma},r)$ (see \eqref{Eq:Coefficients}) in section \ref{Sec:Expansion} by replacing the factor $\epsilon_{h_a}\cdot F_{h_{i_j}}\cdot\dots\cdot F_{h_{i_1}}\cdot k_b$ for every chain by $s_{h_ah_{i_j}}s_{h_{i_j}h_{i_{j-1}}}\dots s_{h_{i_1}b}$.
\subsection{Proof of the relations \eqref{Eq:GenEquiv1} and \eqref{Eq:GenEquiv2}} We have already shown that the equivalence condition \eqref{Eq:EquivDTCMS} is a special case of the relation \eqref{Eq:GenEquiv1} with even $s$. In addition, we also have the relation \eqref{Eq:GenEquiv2} with odd $s$. Now let us prove both relations \eqref{Eq:GenEquiv1} and \eqref{Eq:GenEquiv2} by expanding the partial momentum kernels into graphs. \subsubsection{The proof of \eqref{Eq:GenEquiv1}} To prove the relation \eqref{Eq:GenEquiv1} for even $s$, we first investigate two examples. {\bf Example-1: $\mathsf{H}=\{h_1,h_2\}$}~~The simplest example for even $s$ is the case $\mathsf{H}=\{h_1,h_2\}$ (hence $s=2$). If we choose the reference order $\mathsf{R}=\{h_1,h_2\}$, the graphs corresponding to $\sigma_{\mathsf{H}}=\{h_1,h_2\}$ ($\sigma_{\mathsf{H}}=\{h_2,h_1\}$) are explicitly given by $(a)$ and $(b)$ ($(c)$ and $(d)$) in figure \ref{Fig:Figure1}. The LHS of \eqref{Eq:GenEquiv1} for this case reads \begin{eqnarray} &&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2\}}\W S_{\{h_1,h_2\}}\bigl[\pmb{\alpha}\big|2,\dots,r-1,h_1,h_2\bigr]A(1,\pmb{\alpha},r)\nonumber \\ &+&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1\}}\W S_{\{h_1,h_2\}}\bigl[\pmb{\alpha}\big|2,\dots,r-1,h_2,h_1\bigr]A(1,\pmb{\alpha},r).
\end{eqnarray} Expanding the partial momentum kernels into graphs (see figure \ref{Fig:Figure1}), we rewrite the above expression as \begin{eqnarray} &&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2\}}\Bigl[\mathcal{D}^{[(a)]}(1,\pmb{\alpha},r)+\mathcal{D}^{[(b)]}(1,\pmb{\alpha},r)\Bigr]A(1,\pmb{\alpha},r)\nonumber \\ &+&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1\}}\Bigl[\mathcal{D}^{[(c)]}(1,\pmb{\alpha},r)+\mathcal{D}^{[(d)]}(1,\pmb{\alpha},r)\Bigr]A(1,\pmb{\alpha},r), \end{eqnarray} where $\mathcal{D}^{[(a)]}(1,\pmb{\alpha},r)$, $\mathcal{D}^{[(b)]}(1,\pmb{\alpha},r)$, $\mathcal{D}^{[(c)]}(1,\pmb{\alpha},r)$ and $\mathcal{D}^{[(d)]}(1,\pmb{\alpha},r)$ are the coefficients associated with the graphs $(a)$, $(b)$, $(c)$ and $(d)$ in figure \ref{Fig:Figure1}. The graphs $(a)$, $(c)$ and $(d)$ in the above equation contain two length-1 chains. They together contribute \begin{eqnarray} &&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2\}}\mathcal{D}^{[(a)]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r)+\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_2,h_1\}}\Bigl[\mathcal{D}^{[(c)]}(1,\pmb{\alpha},r)+\mathcal{D}^{[(d)]}(1,\pmb{\alpha},r)\Bigr]A(1,\pmb{\alpha},r)\nonumber \\ &=&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\{h_1,h_2\}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\{h_1,h_2\}}[\{h_1,h_2\}]} }\,\,\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r), \end{eqnarray} which is nothing but the RHS of the example \eqref{Eq:GaugeInducedExample1}, and thus has to vanish.
The only surviving term comes from the graph $(b)$, which contains no odd-length chain: \begin{eqnarray} \sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\{h_1,h_2\}}\mathcal{D}^{[(b)]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r)=\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\{h_1,h_2\}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\{h_1,h_2\}}[\emptyset]} }\,\,\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r),\nonumber \\ \end{eqnarray} which agrees with the RHS of \eqref{Eq:GenEquiv1} with $\mathsf{H}=\{h_1,h_2\}$. {\bf Example-2: $\mathsf{H}=\{h_1,h_2,h_3,h_4\}$}~~Inspired by the previous example with $s=2$, one can expand all partial momentum kernels on the LHS of \eqref{Eq:GenEquiv1} in terms of graphs for a given reference order $\mathsf{R}$. For the case $\mathsf{H}=\{h_1,h_2,h_3,h_4\}$, the total length of all chains of each expansion graph must equal $4$. On the other hand, the total length $L^{\text{total}}$ of all chains is given by \begin{eqnarray} L^{\text{total}}=L^{\text{odd}}+L^{\text{even}}, \end{eqnarray} where $L^{\text{odd}}$ and $L^{\text{even}}$ denote the total lengths of all odd- and even-length chains, respectively. If a graph contains an odd number of odd-length chains, the total length must be odd according to the above equation. This conflicts with the fact $L^{\text{total}}=4$. Therefore, the number of odd-length chains must be even. In this example, each graph can contain 0, 2 or 4 odd-length chains.
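The parity argument can also be checked mechanically: enumerate all ordered chain-length compositions of $L^{\text{total}}=4$ and count the odd parts. A small Python sketch of our own illustration:

```python
def compositions(total, max_part):
    """All ordered tuples of positive chain lengths summing to `total`."""
    if total == 0:
        yield ()
        return
    for first in range(1, min(total, max_part) + 1):
        for rest in compositions(total - first, max_part):
            yield (first,) + rest

# For s = 4: in every possible chain decomposition, the number of
# odd-length chains is even (0, 2 or 4), as claimed in the text.
odd_counts = {sum(1 for L in c if L % 2 == 1) for c in compositions(4, 4)}
```

Running this confirms that `odd_counts` is exactly $\{0, 2, 4\}$, matching the statement above.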
Thus for $\mathsf{H}=\{h_1,h_2,h_3,h_4\}$, the LHS of \eqref{Eq:GenEquiv1} is expanded as \begin{eqnarray} &&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\mathsf{H}}\,\,\,\,\,\,\,\,\,\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\emptyset]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r)\nonumber \\ % &+&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\mathsf{H}}\biggl[\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_1,h_2\}]}} +\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_1,h_3\}]}}+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_1,h_4\}]}}\nonumber \\ &&~~~~~~~~~~~~~~~~~~~~~~+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_2,h_3\}]}} +\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_2,h_4\}]}}+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_3,h_4\}]}}\biggr]\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r)\nonumber \\ % &+&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_1,h_2,h_3,h_4\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r). \end{eqnarray} The last three lines vanish due to the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1} with $\mathsf{A}=\{h_i,h_j\}$ ($h_i,h_j\in \mathsf{H}$) and $\mathsf{A}=\{h_1, h_2, h_3, h_4\}$ (the case with $\mathsf{A}=\{h_3,h_4\}$ and $\mathsf{R}=\{h_1,h_2,h_3,h_4\}$ is explicitly given by the example \eqref{Eq:GaugeInducedExample6}), while the first line is the RHS of \eqref{Eq:GenEquiv1} for $s=4$. {\bf General proof of \eqref{Eq:GenEquiv1}}~~If $\mathsf{H}$ contains an arbitrary even number of elements ({\it i.e.}, $s$ is even), the number of odd-length chains in any graph has to be even, as analyzed in the $s=4$ example.
Thus the partial momentum kernel can be written as \begin{eqnarray} &&\W S_{\mathsf{H}}\Bigl[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}\Big|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\Bigr]\\ &=&\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\emptyset]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)+\sum\limits_{\{h_{i_1},h_{i_2}\}\subset\mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1},h_{i_2}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)+\dots+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r).~~\text{(for even $s$)}\nonumber \end{eqnarray} Then the combination of amplitudes on the LHS of \eqref{Eq:GenEquiv1} becomes \begin{eqnarray} &&\sum\limits_{\substack{\pmb{\alpha}\,\in\,\{2,\dots,r-1\}\\\,\,\shuffle\,\text{perms\,}{\mathsf{H}}}} \Bigl[\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\emptyset]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)+\sum\limits_{\{h_{i_1},h_{i_2}\}\subset\mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1},h_{i_2}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)\nonumber \\ % &&~~~~~~~~~~~~~~+\dots+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)\Bigr]A(1,\pmb{\alpha},r).\label{Eq:PartialKerGraph1} \end{eqnarray} Every term in the above expression has the general form \begin{eqnarray} \sum\limits_{\substack{\pmb{\alpha}\,\in\,\{2,\dots,r-1\}\\\,\,\shuffle\,\text{perms}\, \mathsf{H}}} \sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1},h_{i_2},\dots,h_{i_j}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r).~~\text{(for even $s$ and $j$)} \end{eqnarray} If $j\neq 0$, the set $\{h_{i_1},h_{i_2},\dots,h_{i_j}\}\subseteq \mathsf{H}$ is nonempty.
Such a term has to vanish due to the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1} for the nonempty subset $\mathsf{A}$ with an even number of elements. The first term in \eqref{Eq:PartialKerGraph1} (the case $j=0$) is given by summing over all graphs consisting of only even-length chains, which is the RHS of \eqref{Eq:GenEquiv1}. \subsubsection{The proof of \eqref{Eq:GenEquiv2}} The first nontrivial example of \eqref{Eq:GenEquiv2} for odd $s$ is given by $\mathsf{H}=\{h_1,h_2,h_3\}$. Let us study this case before the general proof of \eqref{Eq:GenEquiv2}. {\bf Example: $\mathsf{H}=\{h_1,h_2,h_3\}$}~~We expand the partial momentum kernels on the LHS of \eqref{Eq:GenEquiv2} in terms of graphs for a fixed reference order $\mathsf{R}=\{h_{\rho(1)},h_{\rho(2)},h_{\rho(3)}\}$. For a given graph, the total length of all chains must be $3$. As a consequence, the number of odd-length chains in each graph must be odd (in this example it can be $1$ or $3$). Thus the LHS of \eqref{Eq:GenEquiv2} for $s=3$ is decomposed into \begin{eqnarray} &&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\mathsf{H}}\biggl[\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_1\}]}} +\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_2\}]}}+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_3\}]}}\biggr]\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r)\nonumber \\ % &+&\sum\limits_{\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\,\text{perms}\,\mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_1,h_2,h_3\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)A(1,\pmb{\alpha},r), \end{eqnarray} where each term on the first line vanishes due to the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1} with $\mathsf{A}=\{h_i\}$ ($i=1,2,3$) (see the examples \eqref{Eq:GaugeInducedExample2}, \eqref{Eq:GaugeInducedExample3} and \eqref{Eq:GaugeInducedExample4} for
$\mathsf{R}=\{h_{1},h_{2},h_{3}\}$), while the last line vanishes because of the relation \eqref{Eq:NewGaugeIDAmp1} with $\mathsf{A}=\{h_1,h_2,h_3\}$ (see the example \eqref{Eq:GaugeInducedExample5} for $\mathsf{R}=\{h_{1},h_{2},h_{3}\}$). Hence all terms of the LHS of \eqref{Eq:GenEquiv2} for $s=3$ vanish and the equation \eqref{Eq:GenEquiv2} for $s=3$ is proven. {\bf General proof of \eqref{Eq:GenEquiv2}}~~ If $\mathsf{H}$ contains an arbitrary odd number of elements ({\it i.e.}, $s$ is odd), the number of odd-length chains in any graph must be odd, as shown in the $s=3$ example. The graphic expansions of partial momentum kernels then read \begin{eqnarray} &&\W S_{\mathsf{H}}\Bigl[\pmb{\alpha}\in\{2,\dots,r-1\}\shuffle\pmb{\sigma}_{\mathsf{H}}\Big|2,\dots,r-1,\pmb{\sigma}_{\mathsf{H}}\Bigr]\nonumber \\ &=&\sum\limits_{\{h_{i_1}\}\subset\mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)+\sum\limits_{\{h_{i_1},h_{i_2},h_{i_3}\}\subset \mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1},h_{i_2},h_{i_3}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)+\dots+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)\nonumber \\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\,\text{(for odd $s$)}.
\end{eqnarray} The combination of amplitudes in the LHS of \eqref{Eq:GenEquiv2} leads to \begin{eqnarray} &&\sum\limits_{\substack{\pmb{\alpha}\,\in\,\{2,\dots,r-1\}\\\,\,\shuffle\,\text{perms\,}\mathsf{H}}} \Bigl[\sum\limits_{\{h_{i_1}\}\subset \mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)+\sum\limits_{\{h_{i_1},h_{i_2},h_{i_3}\}\subset \mathsf{H}}\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\{h_{i_1},h_{i_2},h_{i_3}\}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)\nonumber \\ % &&~~~~~~~~~~~~~~+\dots+\sum\limits_{\mathcal{F}\in{\mathcal{G}'^{\pmb{\alpha}}_{\mathsf{H}}[\mathsf{H}]} }\mathcal{D}^{[\mathcal{F}]}(1,\pmb{\alpha},r)\Bigr]A(1,\pmb{\alpha},r), \end{eqnarray} in which all terms vanish due to the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1} for $\mathsf{A}$ with an odd number of elements. Thus the relation \eqref{Eq:GenEquiv2} is proven. \section{The equivalence between DF and CMS constructions of NLSM amplitudes}\label{Sec:DFCMS} The equivalence between DF and CMS constructions of NLSM amplitudes, {\it i.e.}, the first equality of \eqref{Eq:Equivalence}, can be explicitly expressed by the following amplitude relation \begin{eqnarray} \boxed{\sum\limits_{\pmb{\sigma}\in S_{n-2}}\sum\limits_{\pmb{\rho}\in \mathsf{\Gamma}}S[\pmb{\sigma}|\pmb{\rho}]A(1,\pmb{\sigma},n)=(-1)^{n-2\over 2}\sum\limits_{\pmb{\sigma}\in S_{n-2}}S[\pmb{\sigma}|\pmb{\sigma}]A(1,\pmb{\sigma},n),~~~(\text{for even $n$}) }\label{Eq:EquivDFCMS} \end{eqnarray} where $\mathsf{\Gamma}$ is defined in section 4. In this section, we will prove the relation \eqref{Eq:EquivDFCMS}. {The identity \eqref{Eq:GenEquiv2-1} (a consequence of \eqref{Eq:GenEquiv2})} with odd $s$ is crucial for the proof. To show the pattern, let us first discuss the four- and six-point examples as a warmup.
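The building block $S[\pmb{\sigma}|\pmb{\rho}]$ appearing in \eqref{Eq:EquivDFCMS} can be evaluated numerically. The sketch below uses one common convention of the KLT-style momentum kernel; conventions in the literature differ by the choice of the $\theta$ factor, and the toy numeric Mandelstam variables are our own illustration.

```python
def momentum_kernel(sigma, rho, s):
    """KLT-style momentum kernel S[sigma|rho] in one common convention:

        S[sigma|rho] = prod_t ( s_{1 sigma(t)}
                        + sum_{q<t} theta(sigma(q), sigma(t)) s_{sigma(q) sigma(t)} ),

    where theta = 1 iff sigma(q) and sigma(t) appear in the opposite
    relative order in rho.  `s` maps frozenset({a, b}) to s_{ab}.
    """
    pos_rho = {label: i for i, label in enumerate(rho)}
    total = 1.0
    for t, it in enumerate(sigma):
        term = s[frozenset((1, it))]
        for iq in sigma[:t]:
            if pos_rho[iq] > pos_rho[it]:   # opposite relative order in rho
                term += s[frozenset((iq, it))]
        total *= term
    return total
```

In this convention, for $n=4$ one finds $S[23|32]=s_{12}(s_{13}+s_{23})$ and $S[23|23]=s_{12}s_{13}$.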
\subsection{Warm-up examples} Now we take the cases with $n=4$ and $n=6$ as examples. \subsubsection*{Four-point example} The simplest example is the four-point case, which has already been discussed in \cite{Carrasco:2016ldy} and \cite{Du:2017kpo}. The LHS of the relation \eqref{Eq:EquivDFCMS} for $n=4$ is explicitly written as \begin{eqnarray} S[23|32]A(1,2,3,4)+S[32|23]A(1,3,2,4). \end{eqnarray} Applying the relation \eqref{Eq:GenEquiv2-1} with $\mathsf{H}=\{2\}$ and $\mathsf{H}=\{3\}$ to the first and second terms, respectively, we immediately get \begin{eqnarray} -S[32|32]A(1,3,2,4)-S[23|23]A(1,2,3,4), \end{eqnarray} which is the RHS of \eqref{Eq:EquivDFCMS} for the four-point case. \subsubsection*{Six-point example} The relation \eqref{Eq:EquivDFCMS} for six-point amplitudes is much more nontrivial. By substituting the six-point numerators of DF type \eqref{Eq:DF6Pt} into the LHS of \eqref{Eq:EquivDFCMS}, we get \begin{eqnarray} &&\sum\limits_{\pmb{\sigma}\in S_{4}}\Bigl(S\left[\pmb{\sigma}|\pmb{\rho}=\{\sigma(5),\sigma(2),\sigma(4),\sigma(3)\}\right]+S\left[\pmb{\sigma}|\pmb{\rho}=\{\sigma(5),\sigma(3),\sigma(4),\sigma(2)\}\right]\nonumber \\ &&~~~\,\,+S\left[\pmb{\sigma}|\pmb{\rho}=\{\sigma(4),\sigma(3),\sigma(5),\sigma(2)\}\right]+S\left[\pmb{\sigma}|\pmb{\rho}=\{\sigma(4),\sigma(2),\sigma(5),\sigma(3)\}\right] \nonumber \\ &&~~~\,\,+S\left[\pmb{\sigma}|\pmb{\rho}=\{\sigma(3),\sigma(2),\sigma(5),\sigma(4)\}\right]\Bigr)A(1,\pmb{\sigma},n).\label{Eq:EquivDFCMS6Pt} \end{eqnarray} To prove that this expression equals the RHS of \eqref{Eq:EquivDFCMS} with $n=6$, we proceed by the following steps. \newline {\bf\emph{Step-1}} Collect those terms with the same $\pmb{\rho}$. For example, if $\pmb{\rho}=\{2,3,4,5\}$, one finds that the corresponding $\pmb{\sigma}$ in \eqref{Eq:EquivDFCMS6Pt} can be \begin{eqnarray} \{5,3,4,2\},~~~~\{5,3,2,4\},~~~~\{3,5,4,2\},~~~~\{3,5,2,4\},~~~~\{3,2,5,4\}.
\label{Eq:DFCMSPerms6Pt} \end{eqnarray} An interesting observation is that the above permutations are those satisfying the `zigzag pattern': $\sigma^{-1}(5)<\sigma^{-1}(4)$, $\sigma^{-1}(4)>\sigma^{-1}(3)$ and $\sigma^{-1}(3)<\sigma^{-1}(2)$. For convenience, we define the collection of such permutations by $\mathsf{Z}\{2|3|4|5\}$: \begin{eqnarray} \mathsf{Z}\{2|3|4|5\}\equiv\left\{\pmb{\sigma}\,|\,\sigma\in S_4\text{~s.t.~} \sigma^{-1}(5)<\sigma^{-1}(4),\,\sigma^{-1}(4)>\sigma^{-1}(3),\,\sigma^{-1}(3)<\sigma^{-1}(2)\right\}. \end{eqnarray} Under this definition, the terms with $\pmb{\rho}=\{2,3,4,5\}$ in \eqref{Eq:EquivDFCMS6Pt} then give rise to \begin{eqnarray} T(2|3|4|5)\equiv \sum\limits_{\pmb{\sigma}\in\mathsf{Z}\{2|3|4|5\}}S[\pmb{\sigma}|2,3,4,5]A(1,\pmb{\sigma},6). \label{Eq:DFCMSExample1} \end{eqnarray} Terms corresponding to an arbitrary $\pmb{\rho}$ can be obtained by relabeling the above expression: \begin{eqnarray} T(\rho(2)|\rho(3)|\rho(4)|\rho(5))\equiv \sum\limits_{\pmb{\sigma}\in\mathsf{Z}\{\rho(2)|\rho(3)|\rho(4)|\rho(5)\}}S[\pmb{\sigma}|\pmb{\rho}]A(1,\pmb{\sigma},6).\label{Eq:DFCMSExampleT0} \end{eqnarray} Altogether, \eqref{Eq:EquivDFCMS6Pt} becomes \begin{eqnarray} \sum\limits_{\pmb{\rho}\in S_4}T(\rho(2)|\rho(3)|\rho(4)|\rho(5)).\label{Eq:DFCMSExampleT1} \end{eqnarray} \newline {\bf\emph{Step-2}} For a given $\pmb{\rho}$, we collect terms corresponding to those permutations $\pmb{\sigma}$ ($\pmb{\sigma}\in\mathsf{Z}\{\rho(2)|\rho(3)|\rho(4)|\rho(5)\}$) in which $\rho(2)$, $\rho(3)$ and $\rho(4)$ have the same relative order.
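The zigzag claim is easy to verify by brute force: enumerating all of $S_4$ and applying the three position constraints reproduces exactly the five permutations in \eqref{Eq:DFCMSPerms6Pt}. A minimal Python check (our own illustration):

```python
from itertools import permutations

def in_zigzag(sigma):
    """Membership test for Z{2|3|4|5}:
    sigma^{-1}(5) < sigma^{-1}(4) > sigma^{-1}(3) < sigma^{-1}(2)."""
    pos = {label: i for i, label in enumerate(sigma)}
    return pos[5] < pos[4] and pos[4] > pos[3] and pos[3] < pos[2]

# Exactly the five permutations of Eq. (DFCMSPerms6Pt) survive.
zigzag = [p for p in permutations((2, 3, 4, 5)) if in_zigzag(p)]
```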
For instance, in the case $\pmb{\rho}=\{2,3,4,5\}$, $T(2|3|4|5)$ then becomes \begin{eqnarray} T(2|3|4|5)&=&\Bigl[S[5,3,4,2|2,3,4,5]A(1,5,3,4,2,6)+S[3,5,4,2|2,3,4,5]A(1,3,5,4,2,6)\Bigr]\nonumber \\ &&+\Bigl[S[5,3,2,4|2,3,4,5]A(1,5,3,2,4,6)+S[3,5,2,4|2,3,4,5]A(1,3,5,2,4,6)\nonumber \\ &&~~+S[3,2,5,4|2,3,4,5]A(1,3,2,5,4,6)\Bigr], \end{eqnarray} where the first line gets contributions from permutations $\pmb{\sigma}\in\mathsf{Z}\{2|3|4|5\}$ with the relative order $\{3,4,2\}$; the second and the third lines get contributions from $\pmb{\sigma}\in\mathsf{Z}\{2|3|4|5\}$ with the relative order $\{3,2,4\}$. By means of the property \eqref{Eq:GenEquiv2-1} with $\mathsf{H}=\{5\}$, we write the first line in the above expression as \begin{eqnarray} -S[3,4,5,2|2,3,4,5]A(1,3,4,5,2,6)-S[3,4,2,5|2,3,4,5]A(1,3,4,2,5,6). \label{Eq:EquivDFCMS6Pt1} \end{eqnarray} Similarly, the second and the third lines sum to \begin{eqnarray} -S[3,2,4,5|2,3,4,5]A(1,3,2,4,5,6).\label{Eq:EquivDFCMS6Pt2} \end{eqnarray} If we define \begin{eqnarray} \mathsf{Z}\{2|3,4,5\}\equiv\left\{\pmb{\sigma}\,|\,\sigma\in S_4\text{~s.t.~} \sigma^{-1}(3)<\sigma^{-1}(4)<\sigma^{-1}(5),\,\sigma^{-1}(3)<\sigma^{-1}(2)\right\}, \end{eqnarray} the sum of \eqref{Eq:EquivDFCMS6Pt1} and \eqref{Eq:EquivDFCMS6Pt2} is further expressed as \begin{eqnarray} T(2|3|4|5)=(-1)T(2|3,4,5)\equiv (-1)\sum\limits_{\pmb{\sigma}\in\mathsf{Z}\{2|3,4,5\}}S[\pmb{\sigma}|2,3,4,5]A(1,\pmb{\sigma},6). \end{eqnarray} For the same reason, $T(\rho(2)|\rho(3)|\rho(4)|\rho(5))$ for arbitrary $\pmb{\rho}$ is written as \begin{eqnarray} T(\rho(2)|\rho(3)|\rho(4)|\rho(5))&=&(-1)T(\rho(2)|\rho(3),\rho(4),\rho(5))\nonumber \\ &\equiv& (-1)\sum\limits_{\pmb{\sigma}\in\mathsf{Z}\{\rho(2)|\rho(3),\rho(4),\rho(5)\}}S[\pmb{\sigma}|\rho(2),\rho(3),\rho(4),\rho(5)]A(1,\pmb{\sigma},6).
\end{eqnarray} Therefore \eqref{Eq:DFCMSExampleT1} becomes \begin{eqnarray} (-1)\sum\limits_{\pmb{\rho}\in S_4}T(\rho(2)|\rho(3),\rho(4),\rho(5)).\label{Eq:DFCMSExampleT2} \end{eqnarray} \newline {\bf\emph{Step-3}} Now we collect terms in the combination of amplitudes \eqref{Eq:DFCMSExampleT2} for a given element $\rho(2)\in\{2,3,4,5\}$. In the case of $\rho(2)=2$, we have \begin{eqnarray} (-1)\sum\limits_{\pmb{\sigma}\in\text{perms~}\{3,4,5\}}T(2|\pmb{\sigma})=(-1)\sum\limits_{\pmb{\sigma}\in\text{perms~}\{3,4,5\}}\sum\limits_{\pmb{\alpha}\in\mathsf{Z}\{2|\sigma(3),\sigma(4),\sigma(5)\}}S[\pmb{\alpha}|2,\sigma(3),\sigma(4),\sigma(5)]A(1,\pmb{\alpha},6). \end{eqnarray} For each relative order $\pmb{\sigma}\in\text{perms~}\{3,4,5\}$, the sum over $\pmb{\alpha}\in\mathsf{Z}\{2|\sigma(3),\sigma(4),\sigma(5)\}$ means summing over all possible permutations $\pmb{\alpha}\in \{2\}\shuffle\{\sigma(3),\sigma(4),\sigma(5)\}$ with $\alpha^{-1}(2)>\alpha^{-1}(\sigma(3))$. When all possible $\pmb{\sigma}\in\text{perms~}\{3,4,5\}$ are taken into account, according to the relation \eqref{Eq:GenEquiv2-1} with $\mathsf{H}=\{3,4,5\}$, the above expression becomes the sum of all terms with $\pmb{\alpha}\in \{2\}\shuffle\{\sigma(3),\sigma(4),\sigma(5)\}$ s.t. $\alpha^{-1}(2)<\alpha^{-1}(\sigma(3))$ for all $\pmb{\sigma}\in\text{perms~}\{3,4,5\}$, accompanied by an overall minus sign. Hence, we arrive at \begin{eqnarray} &&(-1)\sum\limits_{\pmb{\sigma}\in\text{perms~}\{3,4,5\}}T(2|\pmb{\sigma})\\ &=&\sum\limits_{\pmb{\sigma}\in\text{perms~}\{3,4,5\}}T(2,\pmb{\sigma})\equiv \sum\limits_{\pmb{\sigma}\in\text{perms~}\{3,4,5\}}S[2,\sigma(3),\sigma(4),\sigma(5)|2,\sigma(3),\sigma(4),\sigma(5)]A(1,2,\sigma(3),\sigma(4),\sigma(5),6).\nonumber \end{eqnarray} The cases $\rho(2)=3,4,5$ are obtained similarly.
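The set $\mathsf{Z}\{2|3,4,5\}$ introduced above, and the shuffle description of it used in Step-3, can both be checked by enumeration. A short Python sketch (our own illustration):

```python
from itertools import permutations

def in_Z2_345(sigma):
    """Membership test for Z{2|3,4,5}:
    sigma^{-1}(3) < sigma^{-1}(4) < sigma^{-1}(5) and sigma^{-1}(3) < sigma^{-1}(2)."""
    pos = {label: i for i, label in enumerate(sigma)}
    return pos[3] < pos[4] < pos[5] and pos[3] < pos[2]

members = sorted(p for p in permutations((2, 3, 4, 5)) if in_Z2_345(p))

# The same set arises by shuffling {2} into (3, 4, 5) and keeping
# only the insertions with sigma^{-1}(2) > sigma^{-1}(3).
shuffles = [(3, 4, 5)[:i] + (2,) + (3, 4, 5)[i:] for i in range(4)]
kept = sorted(p for p in shuffles if p.index(2) > p.index(3))
```

Both computations give the three permutations $(3,2,4,5)$, $(3,4,2,5)$ and $(3,4,5,2)$, in accordance with the three terms in \eqref{Eq:EquivDFCMS6Pt1} and \eqref{Eq:EquivDFCMS6Pt2}.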
Finally, \eqref{Eq:DFCMSExampleT2} becomes \begin{eqnarray} &&\sum\limits_{\pmb{\sigma}\in\text{perms~}\{3,4,5\}}T(2,\pmb{\sigma})+\sum\limits_{\pmb{\sigma}\in\text{perms~}\{2,4,5\}}T(3,\pmb{\sigma})+\sum\limits_{\pmb{\sigma}\in\text{perms~}\{2,3,5\}}T(4,\pmb{\sigma})+\sum\limits_{\pmb{\sigma}\in\text{perms~}\{2,3,4\}}T(5,\pmb{\sigma})\nonumber \\ &=&\sum\limits_{\pmb{\sigma}\in S_4}S[\pmb{\sigma}|\pmb{\sigma}]A(1,\pmb{\sigma},6), \end{eqnarray} which is the RHS of the equivalence condition \eqref{Eq:EquivDFCMS} for $n=6$. To summarize the above steps, the six-point example for \eqref{Eq:EquivDFCMS} is proved by \begin{eqnarray} \Bigl[\text{LHS of \eqref{Eq:EquivDFCMS} (for $n=6$)}\Bigr]&=&\sum\limits_{\pmb{\rho}\in S_4}\,T(\rho(2)|\rho(3)|\rho(4)|\rho(5))\,\,\,=(-1)\sum\limits_{\pmb{\rho}\in S_4}\,T(\rho(2)|\rho(3),\rho(4),\rho(5))\nonumber \\ &=&\sum\limits_{\pmb{\rho}\in S_4}\,T(\rho(2),\rho(3),\rho(4),\rho(5))=\Bigl[\text{RHS of \eqref{Eq:EquivDFCMS} (for $n=6$)}\Bigr].\label{Eq:EquivDFCMS6Pattern} \end{eqnarray} \subsection{General proof of the relation \eqref{Eq:EquivDFCMS}} Now let us extend the six-point example to a general proof of \eqref{Eq:EquivDFCMS}. As in the six-point example, we introduce zigzag permutations for any given $\pmb{\rho}\in S_{n-2}$ by \begin{eqnarray} &&~~\mathsf{Z}\{\rho(2)|\rho(3)|\dots|\rho(2j)|\rho(2j+1),\dots,\rho(n-1)\}\label{Eq:Zigzag}\\ &\equiv&\{\pmb{\sigma}|\pmb{\sigma}\in S_{n-2}, \text{s.t.~} \sigma^{-1}({\rho(n-1)})>\sigma^{-1}({\rho(n-2)})>\dots>\sigma^{-1}(\rho(2j+2))>\sigma^{-1}(\rho(2j+1)),\nonumber \\ &&~~\sigma^{-1}(\rho(2j+1))<\sigma^{-1}(\rho(2j)), \sigma^{-1}(\rho(2j))>\sigma^{-1}(\rho(2j-1)),\dots,\sigma^{-1}(\rho(3))<\sigma^{-1}(\rho(2))\}\nonumber \\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{(for $j\geq 0$ and even $n$)},\nonumber \end{eqnarray} where the $j=0$ case is understood as $\mathsf{Z}\{\rho(2),\rho(3),\dots,\rho(n-1)\}\equiv\pmb{\rho}$.
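The general definition \eqref{Eq:Zigzag} can be implemented directly; doing so for $n=6$ recovers the explicit zigzag sets of the warm-up example. The function below is our own illustrative sketch of the definition, not part of the original text.

```python
from itertools import permutations

def zigzag_set(rho, j):
    """Enumerate Z{rho(2)|...|rho(2j)|rho(2j+1),...,rho(n-1)} of Eq. (Zigzag).

    rho -- the tuple (rho(2), ..., rho(n-1)); j >= 1.
    """
    out = []
    for sigma in permutations(rho):
        pos = {label: i for i, label in enumerate(sigma)}
        p = [pos[x] for x in rho]          # p[k] = sigma^{-1}(rho(k+2))
        # increasing tail: sigma^{-1}(rho(2j+1)) < ... < sigma^{-1}(rho(n-1))
        ok = all(p[k] < p[k + 1] for k in range(2 * j - 1, len(p) - 1))
        # alternating head, ending with sigma^{-1}(rho(3)) < sigma^{-1}(rho(2))
        for k in range(2 * j - 1, 0, -1):
            ok = ok and (p[k] < p[k - 1] if k % 2 == 1 else p[k] > p[k - 1])
        if ok:
            out.append(sigma)
    return out
```

For $n=6$ and $\pmb{\rho}=(2,3,4,5)$, $j=2$ reproduces the five elements of $\mathsf{Z}\{2|3|4|5\}$, while $j=1$ reproduces the three elements of $\mathsf{Z}\{2|3,4,5\}$.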
We further define a linear combination of amplitudes \begin{eqnarray} &&T(\rho(2)|\dots|\rho(2j)|\rho(2j+1),\dots,\rho(n-1)) \equiv\sum\limits_{\small\pmb{\sigma}\in \mathsf{Z}\{\rho(2)|\dots|\rho(2j)|\rho(2j+1),\dots,\rho(n-1)\}}S[\pmb{\sigma}|\pmb{\rho}]A(1,\pmb{\sigma},n),\label{Eq:DefT} \end{eqnarray} in which the coefficients are momentum kernels. The six-point example (see \eqref{Eq:EquivDFCMS6Pattern}) implies the following recursive relation among the quantities $T(\rho(2)|\dots|\rho(2j)|\rho(2j+1),\dots,\rho(n-1))$: \begin{eqnarray} &&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\sum\limits_{\pmb{\rho}\in S_{n-2}}T(\rho(2)|\dots|\rho(2j)|\rho(2j+1),\dots,\rho(n-1))\nonumber \\&=&(-1)\sum\limits_{\pmb{\rho}\in S_{n-2}}\,T(\rho(2)|\dots|\rho(2j-2)|\rho(2j-1),\dots,\rho(n-1))\label{Eq:CombinationT1}~~~~(0\leq j\leq {n-2\over 2}). \end{eqnarray} The proof of \eqref{Eq:CombinationT1} is provided in appendix \ref{App:T}. We consider two boundaries of this relation: \begin{itemize} \item [(i)] The upper boundary is $j={n-2\over 2}$, for which the LHS of \eqref{Eq:CombinationT1} is {${\sum_{\pmb{\rho}\in S_{n-2}}}T(\rho(2)|\rho(3)|\dots|\rho(n-1))$}. In appendix \ref{Sec:Appendix}, we show that the collection of all $\pmb{\sigma}$ corresponding to the same $\pmb{\rho}$ on the LHS of \eqref{Eq:EquivDFCMS} is $\mathsf{Z}\{\rho(2)|\rho(3)|\dots|\rho(n-1)\}$ ({\it i.e.}, $j={n-2\over 2}$). Thus the LHS of \eqref{Eq:CombinationT1} for $j={n-2\over 2}$ is \begin{eqnarray} {\sum\limits_{\pmb{\rho}\in S_{n-2}}}T(\rho(2)|\rho(3)|\dots|\rho(n-1))=\sum\limits_{\pmb{\sigma}\in S_{n-2}}\sum\limits_{\pmb{\rho}\in \mathsf{\Gamma}}S[\pmb{\sigma}|\pmb{\rho}]A(1,\pmb{\sigma},n), \end{eqnarray} which is the LHS of the equivalence condition \eqref{Eq:EquivDFCMS}. \item [(ii)] The lower boundary is $j=0$.
In this case, the sum on the RHS of \eqref{Eq:CombinationT1} is given by \begin{eqnarray}\sum\limits_{\pmb{\rho}\in S_{n-2}}T(\rho(2),\rho(3),\dots,\rho(n-1))=\sum\limits_{\pmb{\sigma}\in S_{n-2}}S[\pmb{\sigma}|\pmb{\sigma}]A(1,\pmb{\sigma},n)\end{eqnarray} which is nothing but (up to a factor ${(-1)^{n-2\over 2}}$) the RHS of \eqref{Eq:EquivDFCMS}. \end{itemize} When we start from the upper boundary and apply the relation \eqref{Eq:CombinationT1} ${n-2 \over 2}$ times, we arrive at the lower boundary with the correct factor {$(-1)^{n-2 \over 2}$}. Thus the equivalence condition \eqref{Eq:EquivDFCMS} is proven. \section{Conclusions}\label{Sec:Conclusions} In this paper, we derived the highly nontrivial generalized BCJ relation \eqref{Eq:NewGaugeIDAmp1} by imposing gauge invariance and dimensional reduction on the graphic expansion of EYM amplitudes. Two additional relations, \eqref{Eq:GenEquiv1} and \eqref{Eq:GenEquiv2}, expressed by partial momentum kernels are consequences of the gauge invariance induced relation \eqref{Eq:NewGaugeIDAmp1}. As an application, we proved the equivalence between amplitudes constructed from three different types of BCJ numerators. Thus the three approaches (Feynman rules, Abelian Z theory and CHY formula) to NLSM amplitudes are equivalent to each other. In this way, we proved the CHY formula for NLSM directly instead of relying on incomplete evidence, such as the enhanced soft behavior \cite{Cheung:2015ota}. There are several further directions. (i) First, generalized BCJ relations induced from the gauge invariance of multi-trace amplitudes deserve further consideration. (ii) {Second, it seems that the CHY-inspired dimensional reduction is not the unique way to reduce the Lorentz invariants to pure Mandelstam variables. Along the line of the unifying relation \cite{Cheung:2017ems}, one can also turn polarizations into momenta.
In addition, other formulations of gauge invariance identities were presented in \cite{Barreiro:2013dpa,Nandan:2016pya,Boels:2016xhc,Chiodaroli:2017ngp,Boels:2017gyc}. Thus it will be interesting to give a more comprehensive understanding of the gauge invariance induced relations by considering \cite{Cheung:2017ems} and \cite{Barreiro:2013dpa,Nandan:2016pya,Boels:2016xhc,Chiodaroli:2017ngp,Boels:2017gyc}\footnote{We thank Rutger Boels for helpful comments on this point.}.} (iii) As we have seen, the gauge invariance induced relations bridge the DF type BCJ numerators of NLSM amplitudes and the compact CMS type ones. Starting from the DF type numerators, they may help us to find compact polynomial BCJ numerators of YM amplitudes that are independent of any reference ordering. We know that the sum of BCJ numerators over all possible reference orderings satisfies this requirement, but how about more compact ones? (iv) Last but not least, the gauge invariance induced relations should also exist in string theory. How about their applications in string amplitudes? \section*{Acknowledgments} YD would like to acknowledge Jiangsu Ministry of Science and Technology under contract BK20170410, NSFC under Grant Nos.~11105118 and 111547310, as well as the ``Fundamental Research Funds for the Central Universities''.
\section{Introduction} A large variety of applications rely on the tracking of three-dimensional position and orientation, such as decoding gesture input and providing force feedback on fingertips or tools \cite{wearable_finger, handTrackingXIDIAN, finexus, magnetips, Omni} via magnetic tracking. Tracking and guiding medical instruments, such as capsules and catheters \cite{magnetic_catheter, magnet_capsule1, medical_instrument, magnet_capsule2, Luo2019}, also relies on magnetic markers. This widespread interest is partially driven by the fact that magnetic signals can permeate a large variety of materials, including human tissues, without disturbance or occlusion. In addition, these systems can also benefit from comparatively affordable and more lightweight instrumentation, both in the tracker sensors and the markers. Existing work distinguishes between active and passive magnetic markers. Active markers can benefit from a higher signal-to-noise ratio by allowing frequency filtering. However, tracking active emitters requires either a wired connection to the tracked object or battery-powered circuitry \cite{finexus, auraring}. In contrast, passive magnetic localization is simpler: it only requires the target to be instrumented with a permanent magnet, eliminating any need for tethered connections or active electronics on the moving elements \cite{Omni, cube_em_loc, multiMagnet}. Interestingly, \cite{wireless_marker} has proposed a third group of semi-passive markers. Although the tracked elements are passive, they respond with a resonant signal when excited by a nearby source. Although our work focuses on passive magnetic localization, due to its simplicity and robustness, future work could apply many of the ideas deployed here to these other types of magnetic markers. A vital part of the system is the algorithm that transforms magnetic signals into the marker's 5-degree-of-freedom (5DoF) position and orientation.
One common solution is to minimize the difference between a mathematical magnetic field model and the sensors' readings. For this purpose, several non-linear optimization algorithms have been proposed and tested \cite{ChaoHu2005, ThanReview2012}. In \cite{multiMagnet}, the authors implemented an analytical computation of gradients to accelerate the optimization process. In \cite{Yousefi2021}, the proposed algorithm uncouples the marker's orientation from its position, which improves the calculation speed and provides guarantees in terms of the global minima. Nonetheless, these iterative approaches come with significant challenges. First, gradient-descent algorithms are computationally expensive, which results in a trade-off between tracking precision and frequency. Second, an iterative non-convex optimization may also suffer from the non-uniqueness of the solution, converging to local minima. These challenges make such methods strongly dependent on their initialization. Other approaches have explored ways to reduce the problem complexity to a linear variant, ensuring increased convergence speed and global optimality of the solution. The most drastic simplification is to directly interpolate the intensity of the readings within the sensor array. In \cite{gaussMarbles,gaussSense}, the authors use this approach to track a stylus and a gaming object in a two-dimensional plane. In a more sophisticated configuration (cf.\ \cite{utrack,finexus,magnetips}), multiple sensors with known spatial arrangements give an over-constrained system of equations, so that there is a unique solution for the state of a magnet. These triangulation methods are fast but less robust to sensor noise and restricted to the positional DoFs. Crucially, any method's effectiveness depends on the magnetic model's validity. Most, if not all, of the above methods use the first term of the multi-pole series expansion derived from Maxwell's equations.
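This first (dipole) term has a simple closed form, $\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m}}{|\mathbf{r}|^3}$. A minimal NumPy sketch (our own illustration, SI units):

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # T*m/A

def dipole_field(r_vec, m_vec):
    """Point-dipole (first multipole) approximation of the magnetic field:
        B(r) = mu0/(4*pi) * (3 (m . r_hat) r_hat - m) / |r|^3,
    with r_vec in metres and m_vec the dipole moment in A*m^2."""
    r_vec = np.asarray(r_vec, dtype=float)
    m_vec = np.asarray(m_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return MU0_OVER_4PI * (3.0 * np.dot(m_vec, r_hat) * r_hat - m_vec) / r**3
```

On the dipole axis the field is twice as strong (and oppositely signed) as in the equatorial plane at the same distance, which is a quick sanity check for any implementation.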
The magnetic dipole approximation assumes any magnet to be spherical. On the one hand, this approximation gives the simplest explicit expression for the magnetic field as a function of the distance to the source. On the other hand, applying the approximation to non-spherical magnets only yields reliable results when the magnets are far from the sensor \cite{Petruska2013}. However, a small magnetic field approximation error can lead to large positional differences. In recent years, machine learning has emerged as a useful approach to bypass the computational burden of iterative methods. Machine learning, especially Neural Networks (NN), is powerful in approximating non-linearities via sequential multiply-accumulate operations. In addition, there is no need for initial estimates, and often an isolated one-shot inference is enough to attain reliable results. While inference is fast, it requires a large and representative dataset for training to achieve good generalization to unseen samples. Creating such a dataset can be challenging and time-consuming. In \cite{Sebkhi2019, medical_instrument}, machine learning is used to predict the locations of the tracked magnet with input from magnetic sensors. The data for training the neural network is collected in-vivo by placing magnets at different known locations and then gathering sensor readings. The data collection process is lengthy and does not scale to new scenarios and markers. Recently, the authors in \cite{auraring} and \cite{Sasaki2020} demonstrated the use of neural networks trained on synthetic data to track active coil markers. Even though we share some conceptual goals with these two works, their results apply to active coil emitters and do not always outperform those based on iterative optimization. As in most data-driven methods, the challenge lies in collecting a large amount of precisely labeled data to train the estimator.
For this, we look towards Finite Element Methods (FEM) to simulate the complete set of Maxwell's equations. FEM is computationally expensive, especially when a fine mesh is applied to attain accurate results. Hence, FEM is unusable for real-time modeling. However, FEM can be used to provide noise-free synthetic datasets for training machine learning models. Other fields have successfully used FEM-generated synthetic datasets to train neural networks, albeit not for magnetic tracking; for instance, for mechanical deformations \cite{hyperelastic}, elastoplasticity \cite{solidMechanics}, material inspection \cite{stellDefect}, and nanostructures \cite{nanostructure}. In this paper, we propose a supervised learning approach based on Neural Networks to enable the real-time tracking of arbitrary axisymmetric magnets. Our data-driven approach is enabled by using FEM to model the magnetic field of cylindrical magnets. We take advantage of the power of Neural Networks in modeling non-linearities to approximate the inverse function of the magnetic field. Specifically, we use a Multi-Layer Perceptron (MLP). This approach allows us to bypass the expensive iterative optimization process, which opens up the grounds for truly portable tracking devices. We make the following contributions to the magnetic tracking scheme: \begin{enumerate} \item To obtain high-resolution magnetic field data for training neural networks, we introduce a novel coordinate transformation algorithm that utilizes our markers' symmetric properties and converts FEM-simulation results from 2D to 3D. \item To enable feasible and accurate tracking of magnetic markers by a neural network, we propose and test a feature engineering function inspired by the physical properties of the magnetic field. \item To evaluate the efficacy of our system, we test our method in both simulation and experiments.
We assess various cylindrical magnets with $5$ degrees of freedom and achieve an average positional error of $4$ mm and an orientation error of less than $8$ degrees, running on a portable device at an interactive rate of $75$ Hz when using $8$ sensors. \end{enumerate} \subsection{Hardware Implementation} \label{sec:hardware} The core of our hardware comprises $16$ triaxial magnetometers (\emph{MLX90393}, Melexis). They have a linear range up to $0.05$ Tesla, which is compatible with the magnetic fields we expect from our testing magnets at close range. While the sensors allow for a data output rate of $716.9$ Hz, our default configuration is set to $35.4$ Hz. We also set the sensors to the $0$--$10^{-1}$ Tesla range, covering most of the cases when sensing locations are $\geq 2$ cm away from the magnets. In addition, we use the ellipsoid-fitting method in \cite{calibration} to calibrate our system. The 16 sensors are placed in a $4 \times 4$ grid with an interval of $52$ mm between the centers of neighboring sensors, as shown in Figure \ref{fig:sensor_loc}(a). To keep instrumentation low and enable portable applications, e.g. prosthetics, we implemented the tracking inference on an AI-oriented single-board computer (\emph{Jetson Nano}, NVIDIA). This device weighs only 138 grams and also allows for reading the sensors directly using the I2C communication protocol. The lower bound on the time for reading $16$ sensors sequentially is on the order of $24$ milliseconds. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.7\columnwidth} \includegraphics[width=\textwidth]{figures/sensor_photo.jpg}\llap{\makebox[\textwidth][l]{\raisebox{3.9cm}{\frame{\includegraphics[height=1.6cm]{figures/marker_holder.jpg}}}}} \end{subfigure} \caption[Placement of Sensors]{Hall sensor array in a $4 \times 4$ grid. The center-to-center distance is 52 mm. The array is connected to a Jetson Nano for the complete inference pipeline. The inset shows a rigid marker tree used to collect OptiTrack ground truth.
\label{fig:sensor_loc}} \end{figure} \subsection{Synthetic Dataset} \label{sec:fem_sim} We focus on the localization of axisymmetric permanent magnets: magnets of any shape and size, as long as they possess rotational symmetry about their magnetization axis. This set includes all cylinders, spheres, and arbitrary cross-section toroids magnetized along their principal axis, covering the most commonly used types of permanent magnets. This symmetry allows us to run only one high-resolution 2D FEM simulation for every magnet shape. We revolve this 2D cross-section around its principal axis to obtain the 3D volume, which can then be placed anywhere in space. From a single 2D FEM simulation, we generate synthetic readings for all sensors, for arbitrary locations and orientations of the magnet. Such a synthetic dataset improves computational efficiency during training, letting us keep a good balance between simulation granularity and negligible storage space for the data. \begin{table}[t] \centering \caption[Variables in Coordinate Transformation Algorithm]{Variables used in Algorithm \ref{algo:coor_trans}.\label{tab:var_trans}} \begin{tabular*}{\columnwidth}{ll} \hline \emph{Variables} & \emph{Description}\\ \hline \parbox[][16pt][c]{30pt}{$\mathbf{p}^{C}_i$} & Positional vector.
For $^C$ and $_i$ see below.\\ $C \in \{D, M\}$ & Coordinate system on device ($D$) or magnet ($M$)\\ $i \in \{s, m\}$ & Positional vector of sensor ($s$) or magnet ($m$)\\ $\mathbf{u, v, w}$ & {Axes of the magnet's coordinate system ($C_{M}$), expressed in $C_{D}$}\\ $\mathbf{w}$ & Magnetic moment direction in $C_{D}$\\ $\mathbf{B}$ & Magnetic flux density\\ \hline \end{tabular*} \begin{algorithm}[H] \caption{Synthetic dataset generation}\label{algo:coor_trans} \begin{algorithmic}[1] \State\hskip-\ALG@thistlm \textbf{Input} $\mathbf{p}^{D}_s$, $\mathbf{p}^{D}_m$, $\mathbf{w}$ \State\hskip-\ALG@thistlm $\mathbf{p}_d^{D} \gets \mathbf{p}^{D}_m - \mathbf{p}^{D}_s$ \Comment Magnet-sensor vector \State\hskip-\ALG@thistlm $\mathbf{v} \gets \mathbf{w} \times \mathbf{p}_d^{D}$ \While {$\mathbf{v} = \mathbf{0}$} \Comment If $\mathbf{w}$ and $\mathbf{p}_d^{D}$ are (anti-)parallel \State sample a random $\mathbf{q}$ \State $\mathbf{v} \gets \mathbf{q} \times \mathbf{w}$ \EndWhile \State\hskip-\ALG@thistlm $\mathbf{u} \gets \mathbf{v} \times \mathbf{w}$ \State\hskip-\ALG@thistlm $dw \gets \mathbf{p}_d^{D} \cdot \frac{\mathbf{w}}{||\mathbf{w}||}$ \Comment the projection of $\mathbf{p}_d^{D}$ on $\mathbf{w}$ \State\hskip-\ALG@thistlm $du \gets \sqrt{|\mathbf{p}_d^{D}|^2 - dw^2}$ \Comment the projection of $\mathbf{p}_d^{D}$ on $\mathbf{u}$ \State\hskip-\ALG@thistlm $dv \gets 0 $ \State\hskip-\ALG@thistlm $\mathbf{B}^{M} \gets \mathbf{2D-FEM}(du, dw)$ \State\hskip-\ALG@thistlm \parbox[][28pt][c]{232pt}{$\mathbf{M} \gets \big[\begin{smallmatrix} \frac{\mathbf{u}}{||\mathbf{u}||}^\top & \frac{\mathbf{v}}{||\mathbf{v}||}^\top & \frac{\mathbf{w}}{||\mathbf{w}||}^\top \end{smallmatrix}\big]^\top $ \\ \phantom{====} \Comment Coordinate transformation matrix} \State\hskip-\ALG@thistlm $\mathbf{B}^{D} \gets \mathbf{B}^{M} \cdot \mathbf{M}$ \end{algorithmic} \end{algorithm} \end{table} We used COMSOL Multiphysics to obtain the FEM data. The simulation is centered at the magnet and constrained to only the upper-right half of the cross-section.
We obtain the other half of the magnetic field values anti-symmetrically. We use the transformation described in Algorithm \ref{algo:coor_trans} to generate the data for training and evaluating the neural network. The algorithm transforms $2$D FEM data into $3$D synthetic readings at each sensor location as follows. First, we input the current position of the magnet and each sensor location, all in a fixed coordinate system ($\coordinateSystem_D$) centered at the device (L1-L2). Secondly, we build a coordinate system centered on the magnet's current position and orientation, $\coordinateSystem_M$ (L3-L10). Thirdly, the algorithm computes the magnetic flux density $\mathbf{B}^{M}$ at the sensor's location in $\coordinateSystem_M$ using the FEM data (L11). Finally, we transform $\mathbf{B}^{M}$ into the original global coordinates $\coordinateSystem_D$ to serve as features for training the MLP (L12-L13). All variables used in Algorithm \ref{algo:coor_trans} are defined in Table \ref{tab:var_trans}. \subsection{Tracking with Neural Networks} \label{sec:track} \subsubsection{Multi-Layer Perceptron} \label{subsub:mlp} We use a Multi-Layer Perceptron (MLP) as the network architecture. The input of the MLP is a $3n$-element vector containing the ($x,y,z$) magnetic flux densities $\mathbf{B}$ of the $n$ sensors. The output is a tuple of $6$ elements composed of the magnet position, $\mathbf{p} = [p_m^x, p_m^y, p_m^z]$, and its orientation, $\mathbf{o} = [o_m^x, o_m^y, o_m^z]$.
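The coordinate transformation of Algorithm \ref{algo:coor_trans} can be sketched in a few lines of NumPy. Here fem_2d is a hypothetical stand-in for the tabulated 2D FEM solution, returning the radial and axial field components at an offset (du, dw) in the magnet's symmetry plane; all function and variable names below are ours.

```python
import numpy as np

def synth_reading(p_s, p_m, w, fem_2d):
    """Synthetic sensor reading from a 2D axisymmetric field solution.

    p_s, p_m : sensor and magnet positions in device coordinates (C_D)
    w        : unit magnetic-moment direction in C_D
    fem_2d   : callable (du, dw) -> (B_u, B_w), radial and axial field
               components from the 2D simulation (hypothetical interface)
    """
    d = p_m - p_s                       # magnet-sensor vector (Alg. line 2)
    v = np.cross(w, d)
    while np.allclose(v, 0.0):          # w and d (anti-)parallel: pick any normal
        v = np.cross(np.random.randn(3), w)
    u = np.cross(v, w)                  # in-plane radial axis
    dw = d @ (w / np.linalg.norm(w))    # axial offset (projection on w)
    du = np.sqrt(d @ d - dw**2)         # radial offset (dv = 0 by construction)
    B_u, B_w = fem_2d(du, dw)           # field in magnet coords; B_v = 0 by symmetry
    B_M = np.array([B_u, 0.0, B_w])
    M = np.stack([u / np.linalg.norm(u),
                  v / np.linalg.norm(v),
                  w / np.linalg.norm(w)])   # rows: magnet axes expressed in C_D
    return B_M @ M                      # field rotated back to device coordinates
```

Plugging in a dipole-field lookup for fem_2d reproduces the analytical 3D dipole field, which is a convenient sanity check of the transformation.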
We can then write the MLP as a non-linear mapping from the sensor readings to the tracking variables: \begin{equation} \mathcal{F}:\mathbb{R}^{3n}\rightarrow\mathbb{R}^6; \quad \mathcal{F}([\mathbf{B}_1, \ldots, \mathbf{B}_{n}]) = [\mathbf{p}, \mathbf{o}] \end{equation} Note that although the output has only $5$ DoF, three in $\mathbf{p}$ and two in $\mathbf{o}$, we express the orientation vector in Cartesian coordinates to avoid the numerical discontinuity when the azimuth angle jumps from $359^{\circ}$ to $0^{\circ}$. Our MLP architecture is shown in Figure \ref{fig:mlp}. We use a $3$-layer perceptron with $2048$ units per layer, except for the input and output layers. Rectified linear units are used as activation functions. \subsubsection{Pre-processing} Importantly, we empirically validated that the system's training does not converge if magnetic readings are used directly as inputs. One of the reasons may be that the input values vary by several orders of magnitude when the magnet moves from close to a sensor ($\sim 10^{-2}$ Tesla) to the edge of the working volume ($\sim 10^{-6}$ Tesla). As stated in the dipole model, \begin{equation} \label{eq:dipole} {\mathbf{B}}({{\mathbf{p}}_m}) = \frac{{{\mu _0}}}{{4\pi }}\left[ {\frac{{3{\mathbf{r}}({\mathbf{m}} \cdot {\mathbf{r}})}}{{{r^5}}} - \frac{{\mathbf{m}}}{{{r^3}}}} \right] \, \end{equation} \noindent the magnetic field decays as $1/r^3$, where $r$ is the distance between the source and the sensors and $\mathbf{m}$ is the dipole moment. In order to re-scale the input signals, we take the cubic root of the input data, $f(B) = \sqrt[3]{B}$ (see Figure \ref{fig:mlp}). We found that training with cubic-root re-scaling converges in $\sim 40$ epochs. \subsubsection{Training Loss} \label{sec:training_nn} We train the neural networks with randomly sampled data in a rectangular volume, $0.2 \times 0.2 \times 0.15$ m$^3$, with the sensor array covering the bottom face.
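The architecture and the cubic-root re-scaling described above can be sketched in PyTorch as follows. This is a minimal sketch assuming three hidden layers of 2048 ReLU units (our reading of the architecture); the signed cubic root handles negative field components, and all names are ours.

```python
import torch
import torch.nn as nn

def cbrt(x: torch.Tensor) -> torch.Tensor:
    # Signed cubic root f(B) = sign(B)|B|^(1/3): compresses raw readings,
    # which span roughly 1e-6 to 1e-2 Tesla, into a trainable range.
    return torch.sign(x) * torch.abs(x) ** (1.0 / 3.0)

class TrackerMLP(nn.Module):
    """R^{3n} -> R^6 mapping F([B_1, ..., B_n]) = [p, o]."""
    def __init__(self, n_sensors: int = 16, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, b: torch.Tensor) -> torch.Tensor:
        out = self.net(cbrt(b))     # out[..., :3] = position p, out[..., 3:] = orientation o
        return out

# usage: a batch of four synthetic 16-sensor readings
model = TrackerMLP(n_sensors=16)
pred = model(torch.randn(4, 48) * 1e-3)
```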
The magnets' orientations are sampled as points on a unit sphere, then paired with the locations as labels for training. The labeled training data is then obtained as in Algorithm \ref{algo:coor_trans}. As loss function we use the weighted sum of positional and orientation differences averaged over all $n$ points in a training batch: \begin{equation} \mathcal{L}= \frac{1}{n} \sum_{i=1}^{n} \Big(\|\mathbf{p}_{true} - \mathbf{p}_{pred}\|^2 + \eta \,\Big\|\frac{\mathbf{o}_{true}}{\|\mathbf{o}_{true}\|} - \frac{\mathbf{o}_{pred}}{\|\mathbf{o}_{pred}\|}\Big\|^2 \Big) \end{equation} \noindent where we aim to minimize both the positional and orientational disparity between the prediction and the ground truth. We add the weight $\eta$ to compensate for the different scales of the orientation and positional error terms. From parameter tuning, we find $\eta = 10^{-5}$ and use it as a default in all experiments. We implement the MLP in Python with PyTorch. The Adam optimizer \cite{adam} is adopted with an initial learning rate $\gamma=10^{-4}$. We train for 40 epochs, and the learning rate decays by a factor of $0.98$ after each epoch. We generate $10^6$ random points per epoch to train the model with a batch size of $256$. Using Algorithm \ref{algo:coor_trans}, we generate the training data independently and identically distributed. Data generation and training of the MLP take $\sim 1$ hour. \subsubsection{Running Time} We measure the running time on Jetson Nano, a portable device with an ARM Cortex-A57 processor and $4$ GB of RAM; the inference times of a multi-layer perceptron and a recurrent network on this device are comparable. Table \ref{tab:nn_runtime} shows that when leveraging the Graphics Processing Unit (GPU) on Jetson Nano and parallelizing the data collection process on the CPU and inference on the GPU, the tracking algorithm runs at an interactive frequency of $75$ Hz, enabling fluid interactive applications.
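The pose sampling and the loss above can be sketched as follows. This is a hedged sketch: the uniform-on-the-sphere sampling via normalised Gaussians and the helper names are our own choices, consistent with the stated training volume and $\eta = 10^{-5}$.

```python
import torch
import torch.nn.functional as F

def sample_pose(batch: int):
    # Positions uniform in the 0.2 x 0.2 x 0.15 m^3 training volume;
    # orientations uniform on the unit sphere (normalised Gaussian draws).
    p = torch.rand(batch, 3) * torch.tensor([0.2, 0.2, 0.15])
    o = F.normalize(torch.randn(batch, 3), dim=-1)
    return p, o

def tracking_loss(p_pred, o_pred, p_true, o_true, eta: float = 1e-5):
    # Batch-averaged weighted sum of squared positional error and squared
    # difference of the unit-normalised orientation vectors (eta = 1e-5).
    pos = ((p_true - p_pred) ** 2).sum(dim=-1)
    ori = ((F.normalize(o_true, dim=-1) - F.normalize(o_pred, dim=-1)) ** 2).sum(dim=-1)
    return (pos + eta * ori).mean()
```

Normalising both orientation vectors inside the loss keeps the orientation term bounded regardless of the magnitude the network predicts.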
\begin{table} \caption[Running Time of Tracking Algorithm Based on NN-Inference on Jetson Nano]{The running time of the direct-prediction method with a neural network on Jetson Nano.\label{tab:nn_runtime}} \centering \begin{tabular}{m{40pt}|m{35pt}m{35pt}m{35pt}m{35pt}} \hline Inference performed on & Data collection duration & Inference duration & One-step duration & Interactive frequency\\ \hline \\ GPU & $12.97$ ms & $8.55$ ms & $13.39$ ms & $74.69$ Hz\\ CPU & $15.79$ ms & $20.75$ ms & $21.34$ ms & $46.86$ Hz\\ \hline \end{tabular} \end{table} \section{Method} \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{figures/abstract.pdf} \caption[Multi-Layer Perceptron for Directly Predicting Magnet Positions]{Schematic pipeline overview. We use a Multi-Layer Perceptron (MLP) to output the location and orientation of the magnet directly. During the training phase, the MLP outputs are compared with the FEM ground truth. During inference, the input to the MLP is the sensor data used to track the position and orientation of the magnet. \label{fig:mlp}} \end{figure} The core of our contribution is a novel tracking method that utilizes supervised learning. Specifically, we use a Multi-Layer Perceptron. Our tracking pipeline has two different instances: during the training phase, the output of the MLP is compared to the labeled data generated by FEM; during device operation (inference), the sensor readings are fed to the MLP to obtain the magnetic marker's position and orientation (Figure \ref{fig:mlp}). In this section, we first elaborate on how we create a high-resolution synthetic dataset (Sec. \ref{sec:fem_sim}). Second, we detail our neural network architecture and training (Sec. \ref{sec:track}). Finally, we describe the hardware setup (Sec. \ref{sec:hardware}).
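The one-step durations in Table \ref{tab:nn_runtime} are shorter than the sum of data collection and inference because the two stages overlap. A minimal sketch of such a two-stage pipeline, with stand-in callables for the real I2C reader and the MLP (this is our illustration, not the authors' implementation):

```python
import queue
import threading

def pipeline(read_sensors, infer, n_steps):
    """Overlap sensor reads (producer thread) with inference (consumer).

    read_sensors and infer are hypothetical stand-ins for the I2C reading
    routine and the neural-network forward pass, respectively.
    """
    q = queue.Queue(maxsize=1)   # hand off one frame at a time
    out = []

    def producer():
        for _ in range(n_steps):
            q.put(read_sensors())
        q.put(None)              # sentinel: no more frames

    threading.Thread(target=producer, daemon=True).start()
    while (frame := q.get()) is not None:
        out.append(infer(frame))
    return out
```

With this structure, the per-step latency approaches max(read, infer) instead of their sum, matching the table's one-step figures.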
\input{content/02-2_fem_sim} \input{content/02-3_mlp_tracking} \input{content/02-1_hardware} \section{Comparison between our data-driven and an optimization-based tracking} In this section, we contrast the performance of our neural network tracking method with an iterative gradient-descent-based technique. Performing this comparison with simulated sensor readings allows us to control experimental conditions such as the initialization of the iterative optimization-based algorithm. The data is also noise-free, thus comparing the ideal performance of the two methods. The optimization uses PyTorch with the magnet position and orientation set as variables with automatic differentiation \cite{auto_diff} enabled. For the optimizer, we select L-BFGS \cite{lbfgs}, a quasi-Newton method, with line search to establish the optimal step size. The internal physical model used to evaluate magnetic fields is the magnetic dipole model, implemented in the loss function as proposed in \cite{multiMagnet}. For a fair comparison, we distinguish two cases: i) we train the MLP network using the FEM data, and ii) we train it using the magnetic dipole model directly, which is equivalent to the optimization method's model. Furthermore, we show the results obtained via the optimization-based method for different initialization errors. Finally, we compare the running time of the two approaches for tracking synthetic data. \subsection{Influence of initialization on the optimization method} \label{sec:syn_results} \begin{figure}[t] \centering \begin{subfigure}[b]{0.8\columnwidth} \includegraphics[width=\textwidth]{figures/opt_pos_init_evo_up.pdf} \caption{} \end{subfigure} \hfill \begin{subfigure}[b]{0.8\columnwidth} \includegraphics[width=\textwidth]{figures/opt_pos_init_evo_down.pdf} \caption{} \end{subfigure} \caption[]{ Positional errors of the iterative method in simulation for different numbers of iterations.
\textup{(a)} We vary the initial orientation mismatch, keeping a fixed positional offset of $80$ mm from the ground truth. \textup{(b)} We vary the initial positional error, keeping a fixed orientation difference of $45^{\circ}$. \label{fig:evo_angle_perturb}} \end{figure} For this evaluation, we randomly sample the magnet's positions and orientations as in Section \ref{sec:training_nn}, and we compute the corresponding magnetic flux densities with the magnetic dipole model at the locations of the sensors (Eq. \ref{eq:dipole}). In this way, the characteristics of the synthetic signals agree with the internal physical model in the optimization method. To obtain the initial guess for the optimization, we perturb the target position and orientation. We configure the iterative algorithm so that the optimization stops only upon reaching the maximal number of iterations allowed. In Figure \ref{fig:evo_angle_perturb} we present the results for the optimization method, reporting the tracking errors as a function of the maximal number of iterations for different values of orientation mismatch (Figure \ref{fig:evo_angle_perturb}.a) and positional mismatch (Figure \ref{fig:evo_angle_perturb}.b). Each box-plot is based on $400$ randomly sampled target positions and orientations. For a detailed analysis, we report the statistics of the tracking results after $50$ iterations in Table \ref{tab:sim_opt_stats} (corresponding to the rightmost set of results in Figure \ref{fig:evo_angle_perturb}). At the bottom of the table we include the position and orientation errors for the same synthetic readings using our MLP method. Note that MLPs do not require an initial estimate of the position and orientation of the magnet. They are independent of the initialization, as it is not part of their feature vector.
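The iterative baseline described in this section (L-BFGS with line search over the dipole model, using automatic differentiation) can be sketched as follows. This is our own simplified reconstruction, not the authors' exact implementation: we optimize over position and dipole moment, and normalise the residuals so the loss is well scaled for fields of order $10^{-5}$ T.

```python
import torch

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def dipole_field(p_sensors, p_m, m):
    """Dipole flux density (dipole model) at each sensor position."""
    r = p_sensors - p_m                 # (n, 3): sensors relative to magnet
    rn = r.norm(dim=-1, keepdim=True)
    return MU0_4PI * (3.0 * r * (r * m).sum(-1, keepdim=True) / rn**5 - m / rn**3)

def lbfgs_track(p_sensors, b_meas, p0, m0, iters=50):
    p = p0.clone().requires_grad_(True)   # magnet position estimate
    m = m0.clone().requires_grad_(True)   # dipole moment estimate
    opt = torch.optim.LBFGS([p, m], max_iter=iters, line_search_fn="strong_wolfe")
    scale = b_meas.pow(2).sum().sqrt()    # normalise residuals

    def closure():
        opt.zero_grad()
        loss = (((dipole_field(p_sensors, p, m) - b_meas) / scale) ** 2).sum()
        loss.backward()
        return loss

    opt.step(closure)
    return p.detach(), m.detach()
```

Started close to the true pose, this fit converges quickly; started far away, it exhibits the local-minima behaviour discussed above.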
Our MLP is outperformed only in the cases with the most iterations (50) and the best initialization: $80$ mm of initial positional error with $10^\circ$ of initial orientation error, and $30$ mm of initial positional error with $45^\circ$ of initial orientation error. However, the computational time of this optimization-based method is about 50x longer than the time consumed by predicting via the MLP (see Sec. \ref{sec:eva_syn_runtime}). When we reduce the number of iterations, the error increases considerably. With only ten iterations, the median positional error exceeds $200$ mm in some cases, especially when the seed orientation is far from the target. Moreover, we find that even in cases where the initial orientation is moderately close to the target (an error of $45^\circ$), the positional errors obtained are as large as the mismatch in the initialization. \begin{table}[t] \caption[]{Accuracy results for the iterative method after 50 iterations. For comparison, we include results obtained via MLP tracking on the same evaluation data.} \label{tab:sim_opt_stats} \centering \begin{tabular}{>{\raggedleft}p{0.9cm}>{\raggedleft}p{0.9cm}|>{\raggedleft}p{0.9cm} >{\raggedleft}p{0.9cm} >{\raggedleft}p{1.3cm} >{\raggedleft\arraybackslash}p{1.3cm}} \hline Initial $e_p$ in mm & Initial $e_{\theta}$ in degrees & $\ \mathtt{median}$ $e_p$ in mm & $\mathtt{median}$ $e_{\theta}$ in degrees & $\mathtt{3^{rd} quartile}$ $e_p$ in mm & $\mathtt{3^{rd} quartile}$ $e_{\theta}$ in degrees \\ \hline \multicolumn{6}{l}{}\\ \multicolumn{2}{l}{\textit{\textbf{Iter.
Optimization}}}\\ \hline \hline 80& 10& 0.92& 1.63& 32.96& 9.98\\ 80& 30& 4.17& 8.54& 47.17& 29.99\\ 80& 90& 40.83& 87.16& 83.80& 90.00\\ 80& 180& 90.88& 179.90& 158.11& 179.99\\ \hline 30& 45& 0.42& 0.74& 12.86& 36.12\\ 60& 45& 3.19& 6.89& 33.47& 44.84\\ 90& 45& 6.62& 15.17& 41.29& 44.91\\ 120& 45& 10.47& 28.94& 59.43& 45.01\\ \hline \multicolumn{6}{l}{ }\\ \multicolumn{2}{l}{\textit{\textbf{MLP (ours)}}}\\ \hline \hline -- & -- & 1.34& 3.45& 1.91& 5.16 \\ \hline \end{tabular} \end{table} \subsection{Running Time} \label{sec:eva_syn_runtime} \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{figures/running_time_boxplot_noTitle.pdf}\\ \caption[]{Inference time comparison between MLP and L-BFGS implementations for different numbers of iterations. Iterative methods are strongly dependent on initialization, and it takes between 20 and 50 iterations (i.e. $>$ 10 ms) to achieve results similar to those given by a single MLP inference (0.8 ms). \label{fig:running_time}} \end{figure} Our MLP implementation produces predictions after a single inference step involving feature engineering, additions, and multiplications in hidden layers. In contrast, optimization-based methods require gradient computations with respect to the estimated location at every iteration until convergence. We compare here the difference in speed between our two tested methods. In Figure \ref{fig:running_time} we compare the end-to-end time of our data-driven method (MLP) vs. different maximum numbers of iteration steps of the optimization-based algorithm we implemented. One 5-DoF tracking inference with the MLP takes $0.8$ ms, including the feature engineering process. In contrast, the L-BFGS optimizer takes around $1$ ms for each iteration. As shown in Figure \ref{fig:evo_angle_perturb}, we need tens of iterations to achieve satisfying results depending on the initialization. This results in tens of milliseconds.
We note that for this particular speed comparison, we run both methods on the same laptop with an Intel i7-7500U CPU. \subsection{Neural Networks trained with FEM vs Dipole Model} \label{sec:syn_shapes} In previous works, authors trained MLPs on data generated with the magnetic dipole model. In this work, we propose a step forward by modeling beyond the dipole approximation, taking full advantage of the powerful representation of Neural Networks for non-linear systems. We propose six different shapes of magnets for our evaluation, identical to those we adopt in the hands-on experiments in Section \ref{sec:opt_eval}. The selection goes from very tall bars to very flat disks, all magnetized along the principal axis, plus a sphere, given that the dipole model is the exact solution in this last case. We use $1000$ points per magnet to evaluate the position error, and the target positions and orientations are identical for all $6$ magnets. In Figure \ref{fig:dipole_vs_mlp_nn} we compare the performance of our MLP method when trained with two different synthetic datasets: one obtained from FEM simulations as explained in Section \ref{sec:fem_sim}, and the other using the magnetic dipole approximation (Eq. \ref{eq:dipole}). The shapes of the magnets are ordered from tall bar-shaped to thin disc-shaped, getting closer to a spherical shape and then deviating from it. We used a Shapiro-Wilk test to verify the normality of our data and then applied Student's t-test to check for significant differences between dipole- and FEM-data trained MLPs. We found significant differences in all cases of cylindrical magnets and no significant difference for the spherical magnet. The p-values of the Student's t-tests are $<0.005$ with the exception of the spherical case ($p=0.95$). As expected, we observed greater differences in the results as the shape of the magnet deviates further from a sphere.
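The statistical procedure above (normality check, then a two-sample comparison of per-point errors) can be sketched with SciPy. This is our sketch of the workflow; the function and variable names are ours, and the significance threshold mirrors the $p < 0.005$ criterion used in the text.

```python
import numpy as np
from scipy import stats

def compare_error_samples(err_a, err_b, alpha=0.005):
    # Shapiro-Wilk normality check on each error sample, followed by an
    # independent two-sample Student's t-test on the per-point errors.
    _, p_norm_a = stats.shapiro(err_a)
    _, p_norm_b = stats.shapiro(err_b)
    _, p_val = stats.ttest_ind(err_a, err_b)
    return {"normality_p": (p_norm_a, p_norm_b),
            "ttest_p": p_val,
            "significant": p_val < alpha}
```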
\begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/dipole_vs_fem_MLP_pointwise_2.pdf}\\ \caption[]{Comparison of positional errors of the MLP neural networks when trained with FEM- or dipole-model-generated datasets. \label{fig:dipole_vs_mlp_nn}} \end{figure} \section{Experimental Evaluation} \label{sec:opt_eval} In this evaluation, we compare the results of our MLP tracking method to experimental ground-truth data collected with OptiTrack. Apart from evaluating accuracy in position and orientation, we present the speed performance on a portable computer, considering both the data-collection process and the inference process running simultaneously on the same device. \subsection{Magnetic plus Optical Tracking Setup} The magnetic tracking hardware consists of 16 magnetic sensors distributed in a $4 \times 4$ array and a Jetson Nano portable computer to read the sensors and perform MLP inferences. For details see Section \ref{sec:hardware}. The permanent magnet is attached to a tree-shaped 3D-printed holder, which holds five IR reflective markers on its branches to collect the ground-truth data (as in the inset of Figure \ref{fig:sensor_loc}). We use ten OptiTrack cameras to track this rigid body, and the cylindrical magnets used here are the same as those simulated in Section \ref{sec:syn_shapes}. During the experiment, the magnet is moved and tilted freely in the space over the sensor array while ensuring that the cameras capture most of the optical markers. The movement speed is comparable to that of playing games with a joystick. For the cases where we compare different numbers of sensors, we log the readings from the sensor array, which are then fed offline into MLPs with varying input dimensions. We correct the time-sync mismatch between the two tracking systems by adjusting a time-offset variable in the magnetic signals and searching for the lowest tracking error during a calibration step.
In addition, we use PCHIP interpolation \cite{pchip} to adjust for the difference in sampling frequency. \subsection{Results} In Figure \ref{fig:3d_traj} we show a single trajectory tracked via our MLP method as well as by OptiTrack, clipped to the same time range. For this particular trajectory we use the magnet with diameter $10$ mm and height $20$ mm ($d\ 10\ h\ 20$). The deviation between the two trajectories is within the $4$ mm limit more than half of the time, with the largest deviations happening in fast turns and never exceeding $10$ mm. We note that the discontinuities in the OptiTrack trajectories come from partial occlusion of the IR markers. In Figure \ref{fig:mlp_errors}, we show the statistics of the tracking results obtained for five different cylindrical magnets. As expected, the higher the number of sensors deployed, the lower the tracking errors for all the magnets tested. When we use the full set of $16$ sensors, the average tracking errors in position and orientation are generally smaller than $4$ mm and $8^\circ$, except for the pole-shaped magnet ($d\ 5\ h\ 25$). \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/compare_scatters_6_legend_withGrid.pdf} \caption{Comparison of trajectories obtained by OptiTrack and the MLP. The difference is generally smaller than 4 mm.} \label{fig:3d_traj} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.7\columnwidth} \includegraphics[width=\linewidth]{figures/sensors_used.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.8\columnwidth} \includegraphics[width=\linewidth]{figures/MLP_traj_swarmplot_1_shapesPlotted.pdf} \caption{} \end{subfigure} \caption[]{Experimental positional and orientation errors obtained with MLP tracking, for different numbers of input sensors and magnet shapes. \label{fig:mlp_errors}} \end{figure} Finally, we conduct experiments to test the interactive frequency of our method when the inference is done either on the CPU or the GPU of the Jetson Nano.
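The synchronisation and resampling steps described above (PCHIP interpolation plus a search over a time-offset variable) can be sketched with SciPy's PchipInterpolator. The grid-search range and step below are our own assumptions, as the paper does not specify them.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def align_tracks(t_mag, p_mag, t_opt, p_opt, max_offset=0.2, step=0.005):
    # PCHIP-resample the OptiTrack trajectory onto the magnetic timestamps,
    # then grid-search the time offset minimising the mean positional error.
    interp = PchipInterpolator(t_opt, p_opt, axis=0)
    best = (np.inf, 0.0)
    for dt in np.arange(-max_offset, max_offset + step, step):
        ts = t_mag + dt
        ok = (ts >= t_opt[0]) & (ts <= t_opt[-1])   # stay inside the support
        err = np.linalg.norm(interp(ts[ok]) - p_mag[ok], axis=1).mean()
        best = min(best, (err, dt))
    return best[1]   # offset with the lowest tracking error
```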
Note that the process of reading the sensors always runs on the CPU via the I2C protocol, with an average time of $1.75$ ms per sensor. For the inference process, the duration of inference running on the CPU is around $28$ ms and stays relatively steady as the dimension of the input features increases. When the inference runs on the GPU, the duration of the prediction process increases from $10$ ms when using 4 sensors to $15$ ms when using the complete set of 16 sensors. \section{Discussion} In Section \ref{sec:syn_results} we confirmed a known shortcoming of iterative methods: tracking accuracy is highly dependent on the seed, particularly on the initial orientation of the magnet. The optimization tends to fall into a local minimum when the first orientation estimate is far from the actual value. In contrast, Figure \ref{fig:evo_angle_perturb} shows that with a relatively good first orientation (mismatch less than $45^{\circ}$), the convergence is less sensitive to other perturbations. In the results after 50 iteration steps (Table \ref{tab:sim_opt_stats}), the $3^{rd}$ quartile errors are as large as the initialization errors, showcasing that the optimization method might fail to reach the global minimum even after $50$ iterations. On the other hand, tracking via neural networks is initialization-independent. In Table \ref{tab:sim_opt_stats} we see that the MLP can outperform the optimization-based method, except for the cases with the most reliable initial estimation and the maximum number of iterations. Moreover, the maximal positional and orientation errors are all bounded by acceptable limits when tracking via the MLP, testifying to its stability. When deploying our method in an experimental prototype, tracking errors are only slightly increased due to noise in the actual sensors: see Figure \ref{fig:dipole_vs_mlp_nn} (training via FEM) and Figure \ref{fig:mlp_errors} (for $N_{sensors} = 16$).
We plot in Figure \ref{fig:error_vs_dist} the positional errors obtained in Section \ref{sec:opt_eval} as a function of the distance to the center of the array, showing that the method is robust even at large distances. It is remarkable that we could directly use the models trained in simulation for the experimental tests without tuning any hyperparameters. We leave for future work the possibility of including background and sensor noise during the training of the neural networks, similar to what has been shown in other magnetic tracking systems \cite{Shao2019, Li2021}. This would reduce the sim-to-real mismatch, improving the accuracy of the MLP method. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/poserror_vs_center_dist.pdf} \caption[]{Positional tracking errors versus distance between the magnet position and the center of the array. \label{fig:error_vs_dist}} \end{figure} In Section \ref{sec:syn_shapes}, we tested the effect of using training datasets generated via FEM or the magnetic dipole model. As expected, dipole-trained models perform closer to FEM-trained models when the magnet's shape is closer (or equal) to a sphere, as indicated by the larger, non-significant p-values of the Student's t-tests. For magnets of cylindrical shapes, FEM-trained models improve the tracking by $0.2$ mm to $1.2$ mm. We observe, however, that the tracking performance of the MLP also degrades as the magnet's shape differs more from a sphere (see Figure \ref{fig:mlp_errors}), possibly due to demagnetizing factors not included in this work. We observe that the one-shot inference total time of the MLP is comparable to each (of the many) optimization steps of the iterative methods (Section \ref{sec:eva_syn_runtime}). As remarkable as its speed is the fact that the MLP can be called on demand, sporadically, without the need to keep the target tracked and locked in order to ensure correct convergence within a few iteration steps.
We also show that it is possible to implement an MLP tracking algorithm on a portable, energy-constrained device. We notice that the inference time via the GPU takes about the same time as reading $8$ sensors via the I2C protocol on the CPU. Therefore, the sensor-reading process throttles the refresh rate of the current prototype. Protocols such as SPI could alleviate this issue. A shortcoming of data-driven methods is that the neural network needs to be retrained for every new condition, e.g., a different number or placement of sensors, or a change in the magnet's shape. Although our training takes only $\sim 1$ hour including data generation, it prevents some applications such as the online optimization of sensor locations. \section{Conclusion} In this paper, we show the accuracy and efficiency of using neural networks to directly predict the location and orientation of magnets. We combine 2D FEM-simulated data with a coordinate transformation algorithm to generate on-demand synthetic training data for any type of axisymmetric magnet. The tracking performed by neural networks is stable and does not suffer from the convergence failures of optimization-based tracking. In hands-on experiments, the average positional error is generally smaller than $4$ mm within the entire working volume. While we currently run at $35$ Hz, this frequency could be doubled by moving to another sensor-reading protocol. The experiments show that we can now move the tracking algorithms to run on an energy-restricted device and make interactive magnetic applications portable. Promising future research directions include using neural networks to track multiple magnets, using the prediction of neural networks to initialize optimization-based methods, and investigating recurrent neural networks to improve temporal consistency. \section{Introduction} A large variety of applications rely on the tracking of three-dimensional position and orientation.
Examples include decoding gesture input and providing force feedback on fingertips or tools \cite{wearable_finger, handTrackingXIDIAN, finexus, magnetips, Omni} via magnetic tracking. Tracking and guiding medical instruments, such as capsules and catheters \cite{magnetic_catheter, magnet_capsule1, medical_instrument, magnet_capsule2, Luo2019}, also relies on magnetic markers. This widespread interest is partially driven by the fact that magnetic signals can permeate a large variety of materials, including human tissue, without disturbance or occlusion. In addition, these systems can benefit from comparatively affordable and more lightweight instrumentation, both in the tracker sensors and the markers. Existing work distinguishes between active and passive magnetic markers. Active markers can benefit from a higher signal-to-noise ratio by allowing frequency filtering. However, tracking active emitters requires either a wired connection to the tracked object or battery-powered circuitry \cite{finexus, auraring}. In contrast, passive magnetic localization is simpler: it only requires the target to be instrumented with a permanent magnet, eliminating any need for tethered connections or active electronics on the moving elements \cite{Omni, cube_em_loc, multiMagnet}. Interestingly, \cite{wireless_marker} has proposed a third group of semi-passive markers: although the tracked elements are passive, they respond with a resonant signal when excited by a nearby source. Our work focuses on passive magnetic localization, due to its simplicity and robustness, but future work could apply many of the ideas deployed here to these other types of magnetic markers. A vital part of the system is the algorithm that transforms magnetic signals into the marker's 5-degree-of-freedom (5DoF) position and orientation. One common solution is to minimize the difference between a mathematical magnetic field model and the sensors' readings.
For this purpose, several non-linear optimization algorithms have been proposed and tested \cite{ChaoHu2005, ThanReview2012}. In \cite{multiMagnet}, the authors implemented an analytical computation of gradients to accelerate the optimization process. In \cite{Yousefi2021}, the proposed algorithm uncouples the marker's orientation from its position, which improves the calculation speed and provides guarantees in terms of the global minimum. Nonetheless, these iterative approaches come with significant challenges. First, gradient-descent algorithms are computationally expensive, which results in a trade-off between tracking precision and frequency. Second, an iterative non-convex optimization may also suffer from the non-uniqueness of the solution, converging to local minima. These challenges make such methods strongly dependent on their initialization. Other approaches have explored ways to reduce the problem complexity to a linear variant, ensuring increased convergence speed and global optimality of the solution. The most drastic simplification is to directly interpolate the intensity of the readings within the sensor array. In \cite{gaussMarbles,gaussSense}, the authors use this approach to track a stylus and a gaming object in a two-dimensional plane. In a more sophisticated configuration (cf.\ \cite{utrack,finexus,magnetips}), multiple sensors with known spatial arrangements give an over-constrained system of equations, so that there is a unique solution for the state of a magnet. These triangulation methods are fast, but less robust to sensor noise and restricted to the positional DoFs only. Crucially, any method's effectiveness depends on the magnetic model's validity. Most, if not all, of the above methods use the first term of the multi-pole series expansion derived from Maxwell's equations. The magnetic dipole approximation assumes any magnet to be spherical.
On the one hand, this approximation gives the simplest explicit expression for the magnetic field as a function of the distance to the source. On the other hand, applying the approximation to non-spherical magnets only yields reliable results when the magnets are far from the sensor \cite{Petruska2013}, and a small magnetic field approximation error can lead to large positional differences. In recent years, machine learning has emerged as a useful approach to bypass the computational burden of iterative methods. Machine learning, especially neural networks (NNs), is powerful at approximating non-linearities via sequential multiply-accumulate operations. In addition, there is no need for initial estimates, and often an isolated one-shot inference is enough to attain reliable results. While inference is fast, training requires a large and representative dataset to achieve good generalization to unseen samples. Creating such a dataset can be challenging and time-consuming. In \cite{Sebkhi2019, medical_instrument}, machine learning is used to predict the locations of the tracked magnet with input from magnetic sensors. The data for training the neural network is collected in-vivo by placing magnets at different known locations and then gathering sensor readings. This data collection process is lengthy and does not scale to new scenarios and markers. Recently, the authors of \cite{auraring} and \cite{Sasaki2020} demonstrated the use of neural networks trained on synthetic data to track active coil markers. Although we share some conceptual goals with these two works, their results apply to active coil emitters and do not always outperform those based on iterative optimization. As in most data-driven methods, the challenge lies in collecting a large amount of precisely labeled data to train the estimator. For this, we look towards Finite Element Methods (FEM) to simulate the complete set of Maxwell's equations.
FEM is computationally expensive, especially when a fine mesh is applied to attain accurate results. Hence, FEM is unusable for real-time modeling. However, FEM can be used to provide noise-free synthetic datasets for training machine learning models. Other fields have successfully used FEM-generated synthetic datasets to train neural networks, albeit not for magnetic tracking: for instance, for mechanical deformations \cite{hyperelastic}, elastoplasticity \cite{solidMechanics}, material inspection \cite{stellDefect}, and nanostructures \cite{nanostructure}. In this paper, we propose a supervised learning approach based on neural networks to enable the real-time tracking of arbitrary axisymmetric magnets. Our data-driven approach is enabled by using FEM to model the magnetic field of cylindrical magnets. We take advantage of neural networks' capacity to model non-linearities to approximate the inverse function of the magnetic field. Specifically, we use a Multi-Layer Perceptron (MLP). This approach allows us to bypass the expensive iterative optimization process, which opens the door to truly portable tracking devices. We make the following contributions to magnetic tracking: \begin{enumerate} \item To obtain high-resolution magnetic field data for training neural networks, we introduce a novel coordinate transformation algorithm that utilizes our markers' symmetry properties and converts FEM-simulation results from 2D to 3D. \item To enable feasible and accurate tracking of magnetic markers by a neural network, we propose and test a feature engineering function inspired by the physical properties of the magnetic field. \item To demonstrate the efficacy of our system, we evaluate our method in both simulation and experiments.
We assess various cylindrical magnets with $5$ degrees of freedom and achieve an average positional error of $4$ mm and an orientation error of less than $8$ degrees, running on a portable device at an interactive rate of $75$ Hz when using $8$ sensors. \end{enumerate} \subsection{Hardware Implementation} \label{sec:hardware} The core of our hardware comprises $16$ triaxial magnetometers (\emph{MLX90393}, Melexis). They have a linear range up to $0.05$ Tesla, which is compatible with the magnetic fields we expect from our testing magnets at close range. While the sensors allow for a data output rate of $716.9$ Hz, our default configuration is set to $35.4$ Hz. We also set the sensors to the $0$--$10^{-1}$ Tesla range, covering most of the cases when sensing locations are $\geq 2$ cm away from the magnets. In addition, we use the ellipsoid-fitting method in \cite{calibration} to calibrate our system. The $16$ sensors are placed in a $4 \times 4$ grid with an interval of $52$ mm between the centers of neighboring sensors, as shown in Figure \ref{fig:sensor_loc}(a). To keep instrumentation low and enable portable applications, e.g., prosthetics, we implemented the tracking inference on an AI-oriented single-board computer (\emph{Jetson Nano}, NVIDIA). This device weighs only $138$ grams and also allows for reading the sensors directly using the I2C communication protocol. The lower bound on the time to read $16$ sensors sequentially is on the order of $24$ milliseconds. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.7\columnwidth} \includegraphics[width=\textwidth]{figures/sensor_photo.jpg}\llap{\makebox[\textwidth][l]{\raisebox{3.9cm}{\frame{\includegraphics[height=1.6cm]{figures/marker_holder.jpg}}}}} \end{subfigure} \caption[Placement of Sensors]{Hall sensor array in a $4 \times 4$ grid. The center-to-center distance is $52$ mm. The array is connected to a Jetson Nano for the complete inference pipeline. The inset shows the rigid tree used to collect OptiTrack ground truth.
\label{fig:sensor_loc}} \end{figure} \subsection{Synthetic Dataset} \label{sec:fem_sim} We focus on the localization of axisymmetric permanent magnets: any shape and size of magnet, as long as it possesses rotational symmetry about its magnetization axis. This set includes all cylinders, spheres, and arbitrary cross-section toroids magnetized along their principal axis, covering the most commonly used types of permanent magnets. This symmetry allows us to run only one high-resolution 2D FEM simulation per magnet shape: we revolve the 2D cross-section around its principal axis to obtain the 3D volume, which can then be placed anywhere in space. From a single 2D FEM simulation, we generate synthetic readings for all sensors, for arbitrary locations and orientations of the magnet. Such a synthetic dataset improves computational efficiency during training, keeping a good balance between simulation granularity and negligible storage space for the data. \begin{table}[t] \centering \caption[Variables in Coordinate Transformation Algorithm]{Variables used in Algorithm \ref{algo:coor_trans}.\label{tab:var_trans}} \begin{tabular*}{\columnwidth}{ll} \hline \emph{Variables} & \emph{Description}\\ \hline \parbox[][16pt][c]{30pt}{$\mathbf{p}^{C}_i$} & Positional vector.
For $^C$ and $_i$ see below.\\ $C \in \{D, M\}$ & Coordinate system on device ($D$) or magnet ($M$)\\ $i \in \{s, m\}$ & Positional vector of sensor ($s$) or magnet ($m$)\\ $\mathbf{u, v, w}$ & {Axes of the magnet's coordinate system ($C_{M}$)}\\ $\mathbf{w}$ & Magnetic moment direction in $C_{D}$\\ $\mathbf{B}$ & Magnetic flux density\\ \hline \end{tabular*} \begin{algorithm}[H] \caption{Synthetic dataset generation}\label{algo:coor_trans} \begin{algorithmic}[1] \State\hskip-\ALG@thistlm \textbf{Input} $\mathbf{p}^{D}_s$, $\mathbf{p}^{D}_m$, $\mathbf{w}$ \State\hskip-\ALG@thistlm $\mathbf{p}_d^{D} \gets \mathbf{p}^{D}_m - \mathbf{p}^{D}_s$ \Comment Magnet-sensor vector \State\hskip-\ALG@thistlm $\mathbf{v} \gets \mathbf{w} \times \mathbf{p}_d^{D}$ \While {$\mathbf{v} = \mathbf{0}$} \Comment If $\mathbf{w}$ and $\mathbf{p}_d^{D}$ are (anti-)parallel \State random $\mathbf{q}$ \State $\mathbf{v} \gets \mathbf{q} \times \mathbf{w}$ \EndWhile \State\hskip-\ALG@thistlm $\mathbf{u} \gets \mathbf{v} \times \mathbf{w}$ \State\hskip-\ALG@thistlm $dw \gets \mathbf{p}_d^{D} \cdot \frac{\mathbf{w}}{||\mathbf{w}||}$ \Comment the projection of $\mathbf{p}_d^{D}$ on $\mathbf{w}$ \State\hskip-\ALG@thistlm $du \gets \sqrt{|\mathbf{p}_d^{D}|^2 - dw^2}$ \Comment the projection of $\mathbf{p}_d^{D}$ on $\mathbf{u}$ \State\hskip-\ALG@thistlm $dv \gets 0 $ \State\hskip-\ALG@thistlm $\mathbf{B}^{M} \gets \mathbf{2D-FEM}(du, dw)$ \State\hskip-\ALG@thistlm \parbox[][28pt][c]{232pt}{$\mathbf{M} \gets \big[\begin{smallmatrix} \frac{\mathbf{u}}{||\mathbf{u}||}^\top & \frac{\mathbf{v}}{||\mathbf{v}||}^\top & \frac{\mathbf{w}}{||\mathbf{w}||}^\top \end{smallmatrix}\big]^\top $ \\ \phantom{====} \Comment Coordinate transformation matrix} \State\hskip-\ALG@thistlm $\mathbf{B}^{D} \gets \mathbf{B}^{M} \cdot \mathbf{M}$ \end{algorithmic} \end{algorithm} \end{table} We used COMSOL Multiphysics to obtain the FEM data. The simulation is centered at the magnet and constrained to only the upper-right half of the cross-section.
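As a concrete illustration, Algorithm \ref{algo:coor_trans} can be sketched in numpy as follows. The 2D FEM lookup is replaced here by a hypothetical \texttt{fem\_2d} stand-in (a dipole-like placeholder so the sketch is runnable); a real pipeline would interpolate the exported COMSOL field instead.

```python
import numpy as np

def fem_2d(du, dw):
    """Hypothetical stand-in for the 2D FEM lookup. A real pipeline would
    interpolate the exported COMSOL field; a dipole-like placeholder keeps
    the sketch runnable. Returns B in the magnet frame (u, v, w) for a
    sensor at in-plane coordinates (du, 0, dw)."""
    r = np.array([du, 0.0, dw])
    rn = np.linalg.norm(r)
    m = np.array([0.0, 0.0, 1.0])  # moment along the symmetry axis w
    return 1e-7 * (3.0 * r * (m @ r) / rn**5 - m / rn**3)

def synthetic_reading(p_s, p_m, w):
    """Synthetic sensor reading in device coordinates (Algorithm 1)."""
    p_d = p_m - p_s                           # magnet-sensor vector
    v = np.cross(w, p_d)
    while np.allclose(v, 0.0):                # w parallel to p_d: retry
        v = np.cross(np.random.rand(3), w)
    u = np.cross(v, w)                        # complete the magnet frame
    dw = p_d @ (w / np.linalg.norm(w))        # projection on w
    du = np.sqrt(max(p_d @ p_d - dw**2, 0.0)) # projection on u (dv = 0)
    B_M = fem_2d(du, dw)                      # field in the magnet frame
    M = np.stack([u / np.linalg.norm(u),      # coordinate transformation
                  v / np.linalg.norm(v),
                  w / np.linalg.norm(w)])
    return B_M @ M                            # field in the device frame
```

For a sensor on the magnetization axis, the resulting field is along $\mathbf{w}$, as expected by symmetry.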
We obtain the other half of the magnetic field values anti-symmetrically. We use the transformation described in Algorithm \ref{algo:coor_trans} to generate the data for training and evaluating the neural network. The algorithm transforms $2$D FEM data into $3$D synthetic readings at each sensor location as follows. First, we input the current position of the magnet and each sensor location, all in a fixed coordinate system ($\coordinateSystem_D$) centered at the device (L1-L2). Secondly, we build a coordinate system centered on the magnet's current position and orientation, $\coordinateSystem_M$, and compute the sensor's in-plane coordinates in it (L3-L11). Thirdly, the algorithm computes the magnetic flux density $\mathbf{B}^{M}$ at the sensor's location in $\coordinateSystem_M$ using the FEM data (L12). Finally, we transform $\mathbf{B}^{M}$ into the original global coordinates $\coordinateSystem_D$ to serve as features for training the MLP (L13-L14). The variables used in Algorithm \ref{algo:coor_trans} are defined in Table \ref{tab:var_trans}. \subsection{Tracking with Neural Networks} \label{sec:track} \subsubsection{Multi-Layer Perceptron} \label{subsub:mlp} We use a Multi-Layer Perceptron (MLP) as the network architecture. The input of the MLP is a $3n$-element vector containing the ($x,y,z$) magnetic flux densities $\mathbf{B}$ of the $n$ sensors. The output is a tuple of $6$ elements composed of the magnet position, $\mathbf{p} = [p_m^x, p_m^y, p_m^z]$, and its orientation, $\mathbf{o} = [o_m^x, o_m^y, o_m^z]$.
We can then write the MLP as a non-linear mapping from the sensor readings to the tracking variables: \begin{equation} \mathcal{F}:\mathbb{R}^{3n}\rightarrow\mathbb{R}^6; \quad \mathcal{F}([\mathbf{B}_1, \ldots, \mathbf{B}_{n}]) =[\mathbf{p}, \mathbf{o}] \end{equation} Note that although the output carries only $5$ DoF (three in $\mathbf{p}$ and two in $\mathbf{o}$), we express the orientation vector in Cartesian coordinates to avoid the numerical discontinuity when the azimuth angle jumps from $359^{\circ}$ to $0^{\circ}$. Our MLP architecture is shown in Figure \ref{fig:mlp}. We use a $3$-layer perceptron with $2048$ units per layer, except for the input and output layers. Rectified linear units are used as activation functions. \subsubsection{Pre-processing} Importantly, we empirically validated that the system's training does not converge if magnetic readings are used directly as inputs. One of the reasons may be that the input values vary by several orders of magnitude when the magnet moves from close to a sensor ($\sim 10^{-2}$ Tesla) to the edge of the working volume ($\sim 10^{-6}$ Tesla). As stated in the dipole model, \begin{equation} \label{eq:dipole} \mathbf{B}(\mathbf{p}_m) = \frac{\mu_0}{4\pi}\left[ \frac{3\mathbf{r}\,(\mathbf{m} \cdot \mathbf{r})}{r^5} - \frac{\mathbf{m}}{r^3} \right] \end{equation} \noindent the magnetic field decays as $1/r^3$, where $r$ is the distance between the source and the sensors and $\mathbf{m}$ is the dipole moment. In order to re-scale the input signals, we take the cubic root of the input data, $f(B) = \sqrt[3]{B}$ (see Figure \ref{fig:mlp}). We found that training with cubic-root re-scaling converges in $\sim 40$ epochs. \subsubsection{Training Loss} \label{sec:training_nn} We train the neural networks with randomly sampled data in a cubic volume, $0.2 \times 0.2 \times 0.15$ m$^3$, with the sensor array covering the bottom face.
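To make the architecture and pre-processing above concrete, here is a minimal numpy sketch of the cubic-root feature engineering and the forward pass. The layer sizes follow the description above; the weights are random and untrained, so the sketch only illustrates shapes and data flow, not the trained PyTorch model.

```python
import numpy as np

def cbrt_features(B):
    """Cubic-root re-scaling f(B) = B^(1/3): compresses the large input
    range (roughly 1e-6 to 1e-2 Tesla) while preserving sign, matching
    the 1/r^3 decay of the field."""
    return np.sign(B) * np.abs(B) ** (1.0 / 3.0)

class MLP:
    """3 hidden layers of 2048 ReLU units; 3n inputs -> 6 outputs (p, o)."""
    def __init__(self, n_sensors, hidden=2048, seed=0):
        rng = np.random.default_rng(seed)
        dims = [3 * n_sensors, hidden, hidden, hidden, 6]
        self.layers = [
            (rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in),
             np.zeros(d_out))
            for d_in, d_out in zip(dims, dims[1:])
        ]

    def __call__(self, x):
        for k, (W, b) in enumerate(self.layers):
            x = x @ W + b
            if k < len(self.layers) - 1:   # ReLU on hidden layers only
                x = np.maximum(x, 0.0)
        return x[:3], x[3:]                # position p, orientation o
```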
The magnets' orientation is sampled as points on a unit sphere and paired with the locations as labels for training. The labeled training data is then obtained as in Algorithm \ref{algo:coor_trans}. As the loss function, we use the weighted sum of positional and orientation differences averaged over all $n$ points in a training batch: \begin{equation} \mathcal{L}= \frac{1}{n} \sum_{i=1}^{n} \big(\|\mathbf{p}_{true} - \mathbf{p}_{pred}\|^2 + \eta \|\frac{\mathbf{o}_{true}}{\|\mathbf{o}_{true}\|} - \frac{\mathbf{o}_{pred}}{\|\mathbf{o}_{pred}\|}\|^2 \big) \end{equation} \noindent where we aim to minimize both the positional and orientation disparity between the prediction and the ground truth. We add the weight $\eta$ to compensate for the different scales of the orientation and positional error terms. From parameter tuning, we find $\eta = 10^{-5}$ and use it as the default in all experiments. We implement the MLP in Python with PyTorch. The Adam optimizer \cite{adam} is adopted with an initial learning rate $\gamma=10^{-4}$. We train for $40$ epochs, and the learning rate decays by a factor of $0.98$ after each epoch. We generate $10^6$ random points per epoch to train the model with a batch size of $256$. Using Algorithm \ref{algo:coor_trans}, we generate training data that is independently and identically distributed. Data generation and training of the MLP take $\sim 1$ hour. \subsubsection{Running Time} We measure the running time on Jetson Nano, a portable device with an ARM Cortex-A57 processor and $4$ GB of RAM; the inference times of a multi-layer perceptron and a recurrent network running on Jetson Nano are comparable. Table \ref{tab:nn_runtime} shows that when leveraging the Graphics Processing Unit (GPU) on Jetson Nano and parallelizing the data collection process on the CPU and inference on the GPU, the tracking algorithm runs at an interactive frequency of $75$ Hz, enabling fluid interactive applications.
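For reference, the loss above can be sketched as follows (in numpy for brevity; the actual training operates on PyTorch tensors):

```python
import numpy as np

ETA = 1e-5  # weight on the orientation term, from the parameter tuning above

def tracking_loss(p_pred, o_pred, p_true, o_true):
    """Weighted sum of positional and orientation errors, averaged over a
    batch. p_* have shape (n, 3); o_* are (n, 3) orientation vectors,
    normalized to unit length inside, as in the loss definition."""
    pos = np.sum((p_true - p_pred) ** 2, axis=1)
    o_p = o_pred / np.linalg.norm(o_pred, axis=1, keepdims=True)
    o_t = o_true / np.linalg.norm(o_true, axis=1, keepdims=True)
    ori = np.sum((o_t - o_p) ** 2, axis=1)
    return float(np.mean(pos + ETA * ori))
```

A perfect prediction yields zero loss, and a pure $1$-unit positional offset yields a loss of $1$.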
\begin{table} \caption[Running Time of Tracking Algorithm Based on NN-Inference on Jetson Nano]{The running time of the direct-prediction method with a neural network on Jetson Nano.\label{tab:nn_runtime}} \centering \begin{tabular}{m{40pt}|m{35pt}m{35pt}m{35pt}m{35pt}} \hline Inference performed on & Data collection duration & Inference duration & One-step duration & Interactive frequency\\ \hline \\ GPU & $12.97$ ms & $8.55$ ms & $13.39$ ms & $74.69$ Hz\\ CPU & $15.79$ ms & $20.75$ ms & $21.34$ ms & $46.86$ Hz\\ \hline \end{tabular} \end{table} \section{Method} \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{figures/abstract.pdf} \caption[Multi-Layer Perceptron for Directly Predicting Magnet Positions]{Schematic pipeline overview. We use a Multi-Layer Perceptron (MLP) to output the location and orientation of the magnet directly. During the training phase, the MLP outputs are compared with the FEM ground truth. During inference, the input to the MLP is the sensor data used to track the position and orientation of the magnet. \label{fig:mlp}} \end{figure} The core of our contribution is a novel tracking method that utilizes supervised learning. Specifically, we use a Multi-Layer Perceptron. Our tracking pipeline has two different instances: during the training phase, the output of the MLP is compared to the labeled data generated by FEM; during device operation (inference), the sensor readings are fed to the MLP to obtain the magnetic marker's position and orientation (Figure \ref{fig:mlp}). In this section, we first elaborate on how we create a high-resolution synthetic dataset (Sec. \ref{sec:fem_sim}). Second, we detail our neural network architecture and training (Sec. \ref{sec:track}). Finally, we describe the hardware setup used in our experiments (Sec. \ref{sec:hardware}).
\input{content/02-2_fem_sim} \input{content/02-3_mlp_tracking} \input{content/02-1_hardware} \section{Comparison between our data-driven and an optimization-based tracking method} In this section, we contrast the performance of our neural network tracking method with an iterative gradient-descent-based technique. Performing this comparison with simulated sensor readings allows us to control experimental conditions such as the initialization of the iterative optimization-based algorithm. The data is also noise-free, so we compare the ideal performance of the two methods. The optimization uses PyTorch, with the magnet position and orientation set as variables with automatic differentiation \cite{auto_diff} enabled. For the optimizer, we select L-BFGS \cite{lbfgs}, a quasi-Newton method, with line search to establish the optimal step size. The internal physical model used to evaluate magnetic fields is the magnetic dipole model, implemented in the loss function proposed in \cite{multiMagnet}. For a fair comparison, we distinguish two cases: i) we train the MLP network using the FEM data, and ii) we train it using the magnetic dipole model directly, which is equivalent to the internal model of the optimization method. Furthermore, we show the results obtained via the optimization-based method for different initialization errors. Finally, we compare the running time of the two approaches for tracking synthetic data. \subsection{Influence of initialization on the optimization methods} \label{sec:syn_results} \begin{figure}[t] \centering \begin{subfigure}[b]{0.8\columnwidth} \includegraphics[width=\textwidth]{figures/opt_pos_init_evo_up.pdf} \caption{} \end{subfigure} \hfill \begin{subfigure}[b]{0.8\columnwidth} \includegraphics[width=\textwidth]{figures/opt_pos_init_evo_down.pdf} \caption{} \end{subfigure} \caption[]{ Positional errors of the iterative method in simulation for different numbers of iterations.
\textup{(a)} We vary the initial orientation mismatch, keeping a fixed distance of $80$ mm to the ground truth. \textup{(b)} We vary the initial positional error, keeping a fixed orientation difference of $45^{\circ}$. \label{fig:evo_angle_perturb}} \end{figure} For this evaluation, we randomly sample the magnet's positions and orientations as in Section \ref{sec:training_nn}, and we compute the corresponding magnetic flux densities with the magnetic dipole model at the sensor locations (Eq. \ref{eq:dipole}). In this way, the characteristics of the synthetic signals agree with the internal physical model of the optimization method. To obtain the initial guess for the optimization, we perturb the target position and orientation. We configure the iterative algorithm so that the optimization stops only upon reaching the allowed maximal number of iterations. In Figure \ref{fig:evo_angle_perturb} we present the results for the optimization method, reporting the tracking errors as a function of the maximal number of iterations for different values of orientation mismatch (Figure \ref{fig:evo_angle_perturb}.a) and positional mismatch (Figure \ref{fig:evo_angle_perturb}.b). Each box-plot is based on $400$ randomly sampled target positions and orientations. For a detailed analysis, we report the statistics of the tracking results after $50$ iterations in Table \ref{tab:sim_opt_stats} (corresponding to the rightmost set of results in Figure \ref{fig:evo_angle_perturb}). At the bottom of the table, we include the position and orientation errors for the same synthetic readings using our MLP method. Note that MLPs do not require an initial estimate of the position and orientation of the magnet. They are independent of the initialization, as it is not part of their feature vector.
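For illustration, a much-simplified version of such an iterative baseline can be sketched with scipy. Everything here is reduced relative to the implementation described above: position-only fitting, a known orientation $\mathbf{m}$, and a unit-normalized dipole prefactor.

```python
import numpy as np
from scipy.optimize import minimize

def dipole_field(p_m, m, p_s):
    """Dipole field at sensor p_s (mu0/4pi prefactor dropped, i.e.
    unit-normalized for this sketch)."""
    r = p_s - p_m
    rn = np.linalg.norm(r)
    return 3.0 * r * (m @ r) / rn**5 - m / rn**3

def track_lbfgs(readings, sensors, m, p0):
    """Fit the magnet position by minimizing the mismatch between the
    dipole model and the sensor readings, starting from the guess p0.
    (Simplified: the orientation m is assumed known.)"""
    def loss(p):
        return sum(np.sum((dipole_field(p, m, s) - b) ** 2)
                   for s, b in zip(sensors, readings))
    return minimize(loss, p0, method="L-BFGS-B").x
```

With noise-free synthetic readings and a close initial guess, the fit recovers the true position; with a poor guess, the same routine exhibits the local-minimum behavior discussed above.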
Our MLP is outperformed only in the cases with the most iterations ($50$) and the best initializations: $80$ mm of initial positional error and $10^\circ$ of initial orientation error, and $30$ mm of initial positional error and $45^\circ$ of initial orientation error. However, the computational time of this optimization-based method is about $50\times$ longer than the time consumed by predicting via the MLP (see Sec. \ref{sec:eva_syn_runtime}). When we reduce the number of iterations, the error increases considerably. With only ten iterations, the median positional error exceeds $200$ mm in some cases, especially when the seed orientation is far from the target. Moreover, we find that even in cases where the initial orientation is close to the target (an error of $45^\circ$), the positional errors obtained can be as large as the initialization mismatch. \begin{table}[t] \caption[]{Accuracy results for the iterative method after $50$ iterations. For comparison, we include results obtained via MLP tracking on the same evaluation data.} \label{tab:sim_opt_stats} \centering \begin{tabular}{>{\raggedleft}p{0.9cm}>{\raggedleft}p{0.9cm}|>{\raggedleft}p{0.9cm} >{\raggedleft}p{0.9cm} >{\raggedleft}p{1.3cm} >{\raggedleft\arraybackslash}p{1.3cm}} \hline Initial $e_p$ in mm & Initial $e_{\theta}$ in degrees & $\mathtt{median}$ $e_p$ in mm & $\mathtt{median}$ $e_{\theta}$ in degrees & $\mathtt{3^{rd}}$ quartile $e_p$ in mm & $\mathtt{3^{rd}}$ quartile $e_{\theta}$ in degrees \\ \hline \multicolumn{6}{l}{}\\ \multicolumn{2}{l}{\textit{\textbf{Iter.
Optimization}}}\\ \hline \hline 80& 10& 0.92& 1.63& 32.96& 9.98\\ 80& 30& 4.17& 8.54& 47.17& 29.99\\ 80& 90& 40.83& 87.16& 83.80& 90.00\\ 80& 180& 90.88& 179.90& 158.11& 179.99\\ \hline 30& 45& 0.42& 0.74& 12.86& 36.12\\ 60& 45& 3.19& 6.89& 33.47& 44.84\\ 90& 45& 6.62& 15.17& 41.29& 44.91\\ 120& 45& 10.47& 28.94& 59.43& 45.01\\ \hline \multicolumn{6}{l}{ }\\ \multicolumn{2}{l}{\textit{\textbf{MLP (ours)}}}\\ \hline \hline -- & -- & 1.34& 3.45& 1.91& 5.16 \\ \hline \end{tabular} \end{table} \subsection{Running Time} \label{sec:eva_syn_runtime} \begin{figure}[t] \centering \includegraphics[width=0.85\columnwidth]{figures/running_time_boxplot_noTitle.pdf}\\ \caption[]{Inference time comparison between the MLP and L-BFGS implementations for different numbers of iterations. Iterative methods are strongly dependent on initialization, and it takes between $20$ and $50$ iterations (i.e. $>$ $10$ ms) to achieve results similar to those given by a single MLP inference ($0.8$ ms). \label{fig:running_time}} \end{figure} Our MLP implementation produces a prediction after a single inference step involving feature engineering, additions, and multiplications in the hidden layers. In contrast, optimization-based methods require computing (approximate) second-order gradient information with respect to the estimated location at every iteration until convergence. We compare here the speed of our two tested methods. In Figure \ref{fig:running_time} we compare the end-to-end time of our data-driven method (MLP) versus different maximum numbers of iteration steps of the optimization-based algorithm we implemented. One 5 DoF tracking inference in the MLP takes $0.8$ ms, including the feature engineering process. In contrast, the L-BFGS optimizer takes around $1$ ms for each iteration. As shown in Figure \ref{fig:evo_angle_perturb}, we need tens of iterations to achieve satisfying results, depending on the initialization. This results in tens of milliseconds.
We note that for this particular speed comparison, we run both methods on the same laptop with an Intel i7-7500U CPU. \subsection{Neural Networks trained with FEM vs Dipole Model} \label{sec:syn_shapes} In previous works, authors trained MLPs on data generated with the magnetic dipole model. In this work, we propose a step forward by modeling beyond the dipole approximation, taking full advantage of the powerful representation of neural networks for non-linear systems. We select six different magnet shapes for our evaluation, identical to those we adopt in the hands-on experiments in Section \ref{sec:opt_eval}. The selection goes from very tall bars to very flat disks, all magnetized along the principal axis, plus a sphere, given that the dipole model is the exact solution in this last case. We use $1000$ points per magnet to evaluate the position error, and the target positions and orientations are identical for all $6$ magnets. In Figure \ref{fig:dipole_vs_mlp_nn} we compare the performance of our MLP method when trained with two different synthetic datasets: one obtained from FEM simulations as explained in Section \ref{sec:fem_sim}, and the other using the magnetic dipole approximation (Eq. \ref{eq:dipole}). The magnet shapes are ordered from tall bar-shaped to thin disc-shaped, first approaching a spherical shape and then deviating from it. We used a Shapiro-Wilk test to validate the normal distribution of our data and then applied a Student's t-test to test for significant differences between dipole- and FEM-data trained MLPs. We found significant differences in all cases of cylindrical magnets and no significant difference for the spherical magnet. The p-values of the Student's t-tests are $<.005$, with the exception of the spherical case ($p=0.95$). As expected, we observed greater differences in the results as the shape of the magnet deviates further from a sphere.
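The statistical procedure just described can be sketched as follows with scipy. The use of an unpaired two-sample t-test is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

def compare_error_distributions(err_dipole, err_fem, alpha=0.005):
    """Shapiro-Wilk normality check on each per-point error sample,
    followed by a two-sample Student's t-test (unpaired here; an
    assumption of this sketch). Returns (normality ok, significant, p)."""
    normal = all(stats.shapiro(e).pvalue > 0.05
                 for e in (err_dipole, err_fem))
    t_stat, p_value = stats.ttest_ind(err_dipole, err_fem)
    return normal, p_value < alpha, p_value
```

On two clearly separated synthetic error samples, the test reports a significant difference at the $.005$ level.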
\begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/dipole_vs_fem_MLP_pointwise_2.pdf}\\ \caption[]{Comparison of positional errors of the MLP when trained with FEM- or dipole model-generated datasets. \label{fig:dipole_vs_mlp_nn}} \end{figure} \section{Experimental Evaluation} \label{sec:opt_eval} In this evaluation, we compare the results of our MLP tracking method to experimental ground truth data collected with OptiTrack. Apart from evaluating accuracy in position and orientation, we present the speed performance on this portable computer, considering both the data-collection process and the inference process running simultaneously on the same device. \subsection{Magnetic plus Optical Tracking Setup} The magnetic tracking hardware consists of $16$ magnetic sensors distributed in a $4 \times 4$ array and a Jetson Nano portable computer to read the sensors and perform MLP inferences. For details, see Section \ref{sec:hardware}. The permanent magnet is attached to a tree-shaped 3D-printed holder, which holds five IR reflective markers on its branches to collect the ground-truth data (as in the inset of Figure \ref{fig:sensor_loc}). We use ten OptiTrack cameras to track this rigid body, and the cylindrical magnets used here are the same as those simulated in Section \ref{sec:syn_shapes}. During the experiment, the magnet is moved and tilted freely in space over the sensor array while ensuring that the cameras capture most of the optical markers. The movement speed is comparable to that of playing a game with a joystick. For those cases where we compare different numbers of sensors, we log the readings from the sensor array, which are then fed offline into MLPs with varying input dimensions. We correct the time-sync mismatch between the two tracking systems by adjusting a time-offset variable in the magnetic signals and searching for the lowest tracking error during a calibration step.
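A sketch of this offset calibration, assuming both trajectories carry timestamps; the monotone (PCHIP) resampling and the search grid are illustrative choices for the sketch.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def best_time_offset(t_mag, p_mag, t_opt, p_opt, offsets):
    """Scan candidate time offsets for the magnetic trajectory and keep
    the one minimizing the mean positional error against OptiTrack."""
    def mean_error(dt):
        interp = PchipInterpolator(t_mag + dt, p_mag, axis=0,
                                   extrapolate=False)
        p = interp(t_opt)
        valid = ~np.isnan(p).any(axis=1)  # drop samples outside overlap
        return np.linalg.norm(p[valid] - p_opt[valid], axis=1).mean()
    return min(offsets, key=mean_error)
```

On a synthetic trajectory shifted by a known offset, the scan recovers that offset.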
In addition, we use PCHIP interpolation \cite{pchip} to adjust for the difference in sampling frequencies. \subsection{Results} In Figure \ref{fig:3d_traj} we show a single trajectory tracked via our MLP method as well as by OptiTrack, clipped to the same time range. For this particular trajectory we use the magnet with diameter $10$ mm and height $20$ mm ($d\ 10\ h\ 20$). The deviation between the two trajectories is within the $4$ mm limit more than half of the time, with the largest deviations happening in fast turns and never exceeding $10$ mm. We note that the discontinuities in the OptiTrack trajectories come from partial occlusion of the IR markers. In Figure \ref{fig:mlp_errors}, we show the statistics of the tracking results obtained for five different cylindrical magnets. As expected, the higher the number of sensors deployed, the lower the tracking errors for all the magnets tested. When we use the full set of $16$ sensors, the average tracking errors in position and orientation are generally smaller than $4$ mm and $8^\circ$, except for the pole-shaped magnet ($d\ 5\ h\ 25$). \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/compare_scatters_6_legend_withGrid.pdf} \caption{Comparison of trajectories obtained by OptiTrack and the MLP. The difference is generally smaller than $4$ mm.} \label{fig:3d_traj} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.7\columnwidth} \includegraphics[width=\linewidth]{figures/sensors_used.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.8\columnwidth} \includegraphics[width=\linewidth]{figures/MLP_traj_swarmplot_1_shapesPlotted.pdf} \caption{} \end{subfigure} \caption[]{Experimental positional and orientation errors obtained with MLP tracking, for different numbers of input sensors and magnet shapes. \label{fig:mlp_errors}} \end{figure} Finally, we conduct experiments to test the interactive frequency of our method when the inference is done either on the CPU or the GPU of the Jetson Nano.
Note that the process of reading the sensors always runs on the CPU via the I2C protocol, with an average time of $1.75$ ms per sensor. For the inference process, the duration of inference on the CPU is around $28$ ms and stays relatively steady as the dimension of the input features increases. When the inference runs on the GPU, the duration of the prediction process increases from $10$ ms when using 4 sensors to $15$ ms when using the complete set of 16 sensors. \section{Discussion} In Section \ref{sec:syn_results} we recovered a known shortcoming of iterative methods: tracking accuracy is highly dependent on the seed, particularly on the initial orientation of the magnet. The optimization tends to fall into a local minimum when the first orientation estimate is far from the actual value. In contrast, Figure \ref{fig:evo_angle_perturb} shows that with a relatively good first orientation (mismatch less than $45^{\circ}$), convergence is less sensitive to other perturbations. In the results after 50 iteration steps (Table \ref{tab:sim_opt_stats}), the $3^{rd}$-quartile errors are as large as the initialization errors, showing that the optimization method might fail to reach the global minimum after $50$ iterations. On the other hand, tracking via neural networks is initialization-independent. In Table \ref{tab:sim_opt_stats} we see that the MLP can outperform the optimization-based method, except for the cases with the most reliable initial estimation and the maximum number of iterations. Moreover, the maximal positional and orientation errors are all bounded by acceptable limits when tracking via MLP, testifying to its stability. When deploying our method in an experimental prototype, tracking errors increase only slightly due to noise in the actual sensors: see Figure \ref{fig:dipole_vs_mlp_nn} (training via FEM) and Figure \ref{fig:mlp_errors} (for $N_{sensors} = 16$).
We plot in Figure \ref{fig:error_vs_dist} the positional errors obtained in Section \ref{sec:opt_eval} as a function of the distance to the center of the array, showing that the method is robust even at large distances. Remarkably, we could directly use the models trained with simulations in the experimental tests without fitting any hyperparameters. We leave for future work the possibility of including background and sensor noise during the training of the neural networks, similar to what has been shown in other magnetic tracking systems \cite{Shao2019, Li2021}. This would reduce the sim-to-real mismatch, further improving the accuracy of the MLP method. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/poserror_vs_center_dist.pdf} \caption[]{Positional tracking errors versus distance between the magnet position and the center of the array. \label{fig:error_vs_dist}} \end{figure} In Section \ref{sec:syn_shapes}, we tested the effect of using training datasets generated via FEM or the magnetic dipole model. As expected, dipole-trained models perform closer to FEM-trained models when the magnet shape is closer (or equal) to a sphere, as confirmed by larger p-values in Student's t-tests. For magnets of cylindrical shape, FEM-trained models improve the tracking by $0.2$ mm to $1.2$ mm. We observe, however, that the tracking performance of the MLP also degrades as the magnet shape differs more from a sphere (see Figure \ref{fig:mlp_errors}), possibly due to demagnetizing factors not included in this work. We observe that the total time of a one-shot MLP inference is comparable to that of each (of the many) optimization steps of the iterative methods (Section \ref{sec:eva_syn_runtime}). Beyond its speed, the MLP can be called on demand, sporadically, without the need to keep the target tracked and locked in order to ensure correct convergence within a few iteration steps.
We also show that it is possible to implement an MLP tracking algorithm on a portable, energy-constrained device. We notice that the inference time on the GPU is about the same as the time to read $8$ sensors over the I2C protocol on the CPU. Therefore, the sensor-reading process throttles the refresh rate of the current prototype. Protocols such as SPI could mitigate this issue. A shortcoming of data-driven methods is that the neural network needs to be retrained for every new condition, e.g., a different number or placement of sensors, or a change in magnet shape. Although our training takes only $\sim 1$ hour including data generation, it prevents some applications such as the online optimization of sensor locations. \section{Conclusion} In this paper, we show the accuracy and efficiency of using neural networks to directly predict the location and orientation of magnets. We combine 2D FEM-simulated data with a coordinate transformation algorithm to generate on-demand synthetic training data for any type of axis-symmetric magnet. The tracking performed by neural networks is stable and does not suffer from the convergence failures of optimization-based tracking. When conducting hands-on experiments, the average positional error is generally smaller than $4$ mm within the entire working volume. While we currently run at $35$ Hz, this frequency could be doubled by moving to a different sensor-reading protocol. The experiments show that we can now move the tracking algorithms to run on an energy-restricted device and make interactive magnetic applications portable. Promising future research directions include using neural networks to track multiple magnets, using the predictions of neural networks to initialize optimization-based methods, and investigating recurrent neural networks to improve temporal consistency.
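A back-of-envelope check of the timing figures above (1.75 ms per sensor over I2C, $\sim$15 ms GPU inference with 16 sensors; the assumption that the two stages overlap fully is ours) confirms that sensor reading bounds the loop rate at roughly 35 Hz, and that a faster bus would move the bottleneck to inference:

```python
def refresh_rate_hz(n_sensors, t_read_ms=1.75, t_infer_ms=15.0):
    """Sensor reading (CPU, I2C) and inference (GPU) run concurrently,
    so the slower of the two stages bounds the achievable refresh rate."""
    t_loop_ms = max(n_sensors * t_read_ms, t_infer_ms)
    return 1000.0 / t_loop_ms


print(refresh_rate_hz(16))  # ~35.7 Hz: reading 16 sensors (28 ms) dominates
```

With a read time well under 15 ms/16 sensors (e.g., over SPI), the rate would be capped by inference at about 67 Hz, consistent with the "doubled" estimate in the text.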
\section{Introduction} Semantic segmentation is the process of assigning a class to each pixel of an image. Recently, convolutional neural networks have proven to be highly effective in solving this challenging visual task \cite{cnnss, deeplabv2, Enet, unet}, leading to ever-increasing interest in the deployment of semantic segmentation models in spaces as diverse as autonomous driving, robotics, and medicine. However, training a semantic segmentation network requires a large amount of pixel-wise annotated data, which are tedious, time-consuming, and expensive to collect. Moreover, current models often fail to generalize toward new domains, an issue that cannot be overlooked in many relevant real-world applications. Indeed, performance often drops when models are tested on new scenarios, especially when there exists a domain gap between the training (source) and test (target) images. For instance, in autonomous driving settings, object appearances may drastically change when training and testing across different cities, leading to severe segmentation errors. This problem is even more pronounced when relying on synthetic data generated by computer graphics, such as video games \cite{gta} or 3D simulations \cite{synthia}, that could otherwise be advantageously exploited to easily obtain large amounts of labeled data. Unsupervised Domain Adaptation (UDA) \cite{Wang_2018} aims at minimizing the impact of the domain gap under the assumption that no ground-truth annotations are available for the target domain. In the last few years, several UDA techniques have been proposed for the task of semantic segmentation \cite{hoffman2016fcns, adaptsegnet, ramirez2018exploiting, cycada, dcan, Chang_2019}. However, all these methods ignore the main goal of semantic segmentation, which is to obtain sharp prediction masks, and focus only on the feature-adaptation part.
For this reason, previous works can correctly segment out coarse blobs of large elements in a scene, such as cars or buildings, while they provide inaccurate segmentation masks along class boundaries, as shown in \cref{fig:teaser}. On the other hand, in the supervised semantic segmentation setting, a large amount of work focuses on obtaining sharp predictions \cite{SEMEDA, Ke_2018, Chen_2016,Yuan_2020,Ding_2019}. This is commonly done by better integrating low-level features into high-level features, since modern segmentation architectures discard spatial information through down-sampling operations such as max-pooling or strided convolutions due to memory and time constraints. Following the supervised setting, we argue that this line of research should also be pursued in the UDA case to obtain sharp predictions across domains even though target labels are not available. Our approach also leverages low-level features to this end, and we introduce a novel low-level adaptation strategy specifically designed for the UDA scenario. More precisely, we enforce alignment of low-level features by exploiting an auxiliary task that can be solved for both domains in a self-supervised fashion, with the aim of making them more transferable. By doing this, we enable the possibility of exploiting shallow features to refine the coarse segmentation masks for both the source and target domains. To achieve this, we estimate a 2D displacement field from the aligned shallow features that, for each spatial location of the predicted coarse feature map, specifies the direction where the representation for that patch is less ambiguous (i.e., at the center of the semantic object). Our intuition is that when the coarse feature map is bi-linearly up-sampled to regain the target resolution, the feature representation of those patches corresponding to semantic boundaries in the input image is mixed up, as it contains semantic information belonging to different classes.
Thanks to the estimated 2D displacement field, however, we refine each patch representation according to the features coming from the center of the object, which are less prone to be influenced by other classes as they lie spatially far from boundaries. This process will be referred to later as the feature \textit{warping} process. Finally, following a recent trend in UDA for semantic segmentation \cite{iast, mrnet, Zheng_2021, Paul_WeakSegDA_ECCV20}, we employ self-training, a technique in which a neural network is trained on its own predictions, denoted as pseudo-labels. This step implicitly encourages cross-domain feature alignment thanks to the simultaneous training on multiple domains. Yet, differently from previous works that mainly focus on masking incorrect pixels with some heuristics, we propose a novel data augmentation technique aimed at preserving information specifically along class boundaries. In fact, due to the low confidence of the network in the target domain, pixels along edges are usually masked by the aforementioned methods, resulting in a further performance degradation along class boundaries due to the lack of supervision during the self-training process. Thus, we employ a class-wise erosion filtering algorithm that allows us to synthesize new training samples in which only the inner body of the target objects is preserved and copied into other images. By doing this, all pixels have supervision, and the network is trained to correctly classify edges also in the target domain. Code is available at {\small{\url{https://github.com/CVLAB-Unibo/Shallow_DA}}.} To summarize, our contributions are: \begin{itemize} \item We propose to use shallow features to improve the accuracy of the network along class boundaries in the UDA scenario. This is achieved by computing a displacement field that lets the network use information from the center of semantic blobs.
\item We deploy semantic edge detection as an auxiliary task to enforce the alignment of shallow features, which is key to overcoming the domain shift when computing the displacement map. \item We introduce an effective data augmentation that selects objects from target images and filters out noise at class boundaries to obtain sharp pseudo-labels. \item We show that our approach achieves overall on-par or even state-of-the-art performance on standard UDA benchmarks for semantic segmentation, and, more importantly, improves predictions along boundaries when compared to previous works. \end{itemize} \section{Related Work} \subsection{Pixel-level Domain Adaptation} Pixel-level adaptation aims at reducing the visual gap between source and target images. Typically, style and colors are adapted by deploying CycleGANs\cite{cyclegan}, a generative model able to capture the target style and inject it into the source images without altering their content. Early works \cite{dcan, cycada} learn such a transformation offline and employ the translated images at training time. Recent approaches~\cite{bdl, stylization}, instead, fuse the translation process into the training pipeline, obtaining an end-to-end framework. \cite{ltir} extended this approach to obtain a texture-invariant network by training on source images augmented with textures from other natural images. Following recent works, our approach builds upon these techniques. Indeed, we make use of translated images to obtain a strong baseline and to extract good pseudo-labels when adapting from synthetic to real data. \subsection{Adversarial Learning} The goal of adversarial training in the context of Domain Adaptation is to align the distributions of source and target images so that the same classifier can be seamlessly applied on a shared feature extractor. Adaptation can be forced either in feature space \cite{fada} or in output space \cite{adaptsegnet}. Many extensions of \cite{adaptsegnet} have been introduced.
\cite{stuffAndThings} proposed to align classes differently based on the intra-class variability of their appearance. Other works deploy adversarial learning to minimize the entropy of the target classifier \cite{advent} or to perform feature perturbation \cite{perturbation}. In our work, since training a network adversarially is a notoriously difficult and unstable process \cite{salimans2016improved}, we avoid it. \subsection{Self-Training} A recent line of research focuses on self-training~\cite{pseudolabel} thanks to its effectiveness and simplicity. This approach is based on the idea of producing pseudo-labels for the target domain and using them to capture domain-specific characteristics. \cite{cbst} proposes an algorithm to filter out wrong pixels with some confidence thresholds. Similarly, \cite{iast} extended the idea by introducing an instance-adaptive algorithm to improve the quality of the pseudo-labels. \cite{mrnet} proposes to use pseudo-labels to minimize the discrepancy between two classifiers, while \cite{Pan_2020} tries to minimize both the inter-domain and intra-domain gap with the support of the pseudo-labels. Differently, \cite{dacs} synthesizes new training samples by embedding objects from source images into the target ones. Inspired by these recent trends, we adopt self-training to align shallow features and guide the warping process across domains. Differently from previous approaches, however, we synthesise new training pairs by enriching images of both domains with target objects to improve segmentation quality along class boundaries. \begin{figure}[t] \centering \includegraphics[trim={2cm, 0.3cm, 1cm, 0.5cm}, clip, width=1\linewidth]{images/net.pdf} \caption{ Illustration of our architecture in the adaptation step. Given an RGB input image, the network learns to extract semantic edges from shallow features.
From the same feature map, a 2D displacement map is estimated in order to guide the warping of the down-sampled deep features, which lack fine-grained details.} \label{fig:network} \end{figure} \section{Method} In UDA for semantic segmentation we are given image-label pairs $\{x_s^i, y_s^i\}_{i=1}^M$ for a source domain $\mathcal{S}$, while only images $\{x_t^i\}_{i=1}^N$ are available for a target domain $\mathcal{T}$. The goal is to predict pixel-wise classification masks for target images. Our proposed framework comprises several components, as depicted in \cref{fig:network}. A standard backbone (yellow branch) produces a coarse feature map $A_c$ from an image. A semantic edge extractor (top purple branch) estimates semantic edges $\hat{e}$, given the activation map $A$ produced by the first convolutional block of the backbone. The same shallow features are processed by another convolutional block (bottom red branch) to obtain a 2D displacement map, $D$. Then, $A_c$ is up-sampled to the same size as $D$ and refined according to $D$ to produce a fine-grained feature map $A_f$. Finally, one last convolutional block that acts as a classifier is applied to produce a $C$-dimensional vector for each pixel, with $C$ being the number of classes, and a final bi-linear up-sampling yields a prediction map of the same size as the input. We detail each component in the following subsections. \subsection{Low-level adaptation} \textbf{Learning transferable shallow features}. \label{sec:lowalign} We introduce an auxiliary task to push the network to learn domain-invariant features that include details on object boundaries already from early layers. Given the feature map $A$, a convolutional block $\gamma$ is applied to predict an edge map $\hat{e}$.
Ground truths $e$ are obtained by the Canny edge detector \cite{canny} applied directly on semantic annotations for the source domain and on pseudo-labels for the target domain, so that only semantic boundaries are considered. A binary cross-entropy loss is minimized for batches including images from both domains: \begin{equation} \label{eq:edge_loss} \begin{aligned} \hat{e} &= \gamma(A), \\ \mathcal{L}_{edge} &= -\sum_{h=1}^{H}\sum_{w=1}^{W} \Big[ e^{(h,w)}\log \hat{e}^{(h,w)} \\ &\qquad + (1-e^{(h,w)})\log \big(1-\hat{e}^{(h,w)}\big) \Big] \end{aligned} \end{equation} Hence, we enforce the auxiliary semantic edge detection task only for the very first layers of the network, rather than, as in typical multi-task learning settings such as \cite{gebru2017fine, choi2020shuffle, sun2019unsupervised}, at a deeper level, where features are more task-dependent. We believe this design choice to be key to good generalization for three reasons. First, trying to solve this task from shallow layers guides the network to explicitly reason about object shapes from the beginning, rather than solely about texture and colors as typically done by CNNs \cite{textureVSshape}. Second, solving an auxiliary task for both domains forces the network to learn a shared feature representation, which naturally leads to aligned distributions. Consequently, the displacement field generated from the shallow features is effective also in the target domain, and it can be directly exploited at a deeper level to recover fine-grained details. Finally, the peculiar choice of semantic edge detection is directly beneficial to estimating a displacement field that mainly focuses on edges, making the subsequent warping process more effective where the network is uncertain. We refer to the supplementary material for ablations on the alignment performed at different levels. \textbf{Feature warping}.
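The semantic-edge supervision of the preceding paragraphs can be sketched in a few lines. As an illustrative stand-in for the Canny detector used on the annotations, we mark a pixel as an edge when its label differs from a neighbor's, which likewise fires only on class boundaries; the loss is the per-pixel binary cross-entropy of \cref{eq:edge_loss}:

```python
import numpy as np


def semantic_edges(labels):
    """Binary semantic-boundary map from an (H, W) integer label map.
    Stand-in for the Canny step on annotations: a pixel is an edge if it
    differs from its right or bottom neighbor, so only class boundaries fire."""
    e = np.zeros(labels.shape, dtype=bool)
    e[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    e[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return e.astype(np.float32)


def bce_edge_loss(e, e_hat, eps=1e-7):
    """Binary cross-entropy between edge targets e and predictions e_hat,
    summed over all pixels."""
    e_hat = np.clip(e_hat, eps, 1.0 - eps)
    return -np.sum(e * np.log(e_hat) + (1.0 - e) * np.log(1.0 - e_hat))
```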
One of the contributions of our method is to refine the bi-linearly up-sampled coarse feature map $A_c$, hereafter $A_c^{bu}$, to obtain a fine-grained feature map $A_f$ that better captures the correct class for pixels lying in the boundary regions. The refinement is guided by a 2D displacement field $D$ obtained from the domain-invariant shallow features computed by the first convolutional block of the backbone. The displacement field indicates, for each location of $A_c^{bu}$, where the network should look to recover the correct class information, namely the direction that better characterizes that patch. We estimate the 2D displacement map $D$ by applying a convolutional block to the shallow features $A$, aligned as described above. Our intuition is that, due to the unavoidable side-effect of the down-sampling operations in the forward pass, the representation of those elements in $A_c$ whose receptive field includes regions at class boundaries in the original image contains ambiguous semantic information. Indeed, when $A_c$ is bi-linearly up-sampled, patches that receive contributions from ambiguous coarse patches inherit such ambiguity. However, in the higher-resolution feature map $A_c^{bu}$ it may be possible to compute a better, unambiguous representation for some of the patches, \ie those now lying entirely in a region belonging to one class. The correct semantic information may be available in the nearby high-resolution patches closer to the semantic blob centers.
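The backward warping described above can be sketched as follows, with a minimal NumPy stand-in for the differentiable bilinear sampler (a PyTorch implementation would instead use \texttt{torch.nn.functional.grid\_sample}; border handling here is simple clamping):

```python
import numpy as np


def warp_features(feat, disp):
    """Backward-warp a (C, H, W) feature map by a (2, H, W) displacement
    field: each output location p samples feat at p + disp(p) with
    bilinear weights. Sample coordinates are clamped to the grid."""
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    py = np.clip(ys + disp[0], 0, H - 1)   # sample rows
    px = np.clip(xs + disp[1], 0, W - 1)   # sample columns
    y0 = np.floor(py).astype(int); x0 = np.floor(px).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = py - y0; wx = px - x0             # bilinear kernel weights
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])
```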
Thus, each feature vector at position $p$ on a standard 2D spatial grid of $A_c^{bu}$ is mapped to a new position $\hat{p} = p + D(p)$, and we use a differentiable sampling mechanism \cite{stn} to approximate the new feature vector representation for that patch: \begin{equation} A_f(p) = \sum_{p_{l} \in \mathcal{N}(\hat{p})} w_{p_{l}} A_c^{bu}(p_{l}) \label{equ:warping} \end{equation} where $w_{p_{l}}$ are the bi-linear kernel weights obtained from $D$ and $\mathcal{N}(\hat{p})$ is the set of neighboring pixels. Hence, \cref{equ:warping} defines a backward warping operation in feature space, where $A_f$ is obtained by warping $A_c^{bu}$ according to $D$. Finally, the fine-grained feature map $A_f$ is fed to the classifier to obtain the final prediction, which is up-sampled by a factor of $2$ to regain the input image resolution. We minimize the cross-entropy loss using annotations for the source domain and pseudo-labels for the target domain: \begin{equation} \label{eq:semantic_loss} \mathcal{L}_{sem} = -\sum_{h=1}^{H}\sum_{w=1}^{W} \sum_{c=1}^{C} y^{(h,w,c)} \log \hat{y}^{(h,w,c)} \end{equation} \subsection{Data Augmentation for Self-Training} \label{sec:selftraining} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{images/crop.png} \caption{Given a target image-prediction pair (top-left) and a source training pair (top-right), we select classes such as \textit{person} (bottom-left) and apply our class-wise data augmentation pipeline to synthesise a new training pair (bottom-right). The selected shapes are eroded before being pasted.} \label{fig:copypaste} \end{figure} Inspired by \cite{cutmix, Dwibedi_2017_ICCV, ghiasi2021simple, dacs}, we use a pre-trained model to select objects based on predictions on target images and paste them over source images (see \cref{fig:copypaste}). Peculiarly, our self-training approach relies on a data augmentation process that selects objects from the target scenes rather than the source ones as done in \cite{dacs}.
Although selecting source objects may be useful to reduce the unbalanced distribution of classes, it is a sub-optimal choice since the network would still be trained to identify shapes and details peculiar to the source domain, which are different from those found at inference time in the target images. We instead use pseudo-labels to cut objects from the target scenes and paste them into source or target images, forcing the network to look for these patterns on both domains. However, due to the inherent noise of pseudo-labels, we need to filter out noisy predictions. In particular, we aim at removing object boundaries as they typically exhibit classification errors and tend to be localized rather inaccurately. Given a target image $x_t$ and its associated predictions $\hat{y}_t$, we compute a binary mask $B_c$ for each class $c \in C^*$, where $C^*$ denotes a random subset of the considered classes. We exclude classes such as \textit{"road"} and \textit{"building"} to avoid occlusion of the whole scene and to counteract the unbalanced distribution of classes, and only use object instances such as \textit{"car"} and \textit{"pole"}. This categorization is similar to the one used in \cite{stuffAndThings}, and can be easily adapted to different datasets. We refer to the supplementary material for the set of classes we used in each experiment. For each spatial location \textit{p}, $B_c$ has value 1 if \textit{p} is assigned to class $c$, and 0 otherwise. Then, we apply an erosion operation, $\ominus$, with a $5\times5$ structuring element $k$ to each class mask $B_c$.
To obtain the set of pixels to be copied from the target image to a randomly selected source image, we apply the union operator to all masks: \begin{align} \label{eq:erosion} B &= \bigcup_{c \in C^*} B_c \ominus k, \end{align} \begin{align} x^p = \begin{cases} x_t^p, & B^p=1 \\ x_s^p, & B^p=0 \end{cases}, y^p = \begin{cases} \hat{y}_t^p, & B^p=1 \\ y_s^p, & B^p=0 \end{cases} \end{align} The newly synthesised training pairs are very often enriched with fine-grained details from the target domain. Indeed, as shown in \cref{fig:copypaste}, thanks to our data augmentation pipeline, only the inner part of an object is preserved while its edges are discarded, producing sharp pseudo-labels even at class boundaries. The whole data augmentation process is applied offline before training, and therefore has no impact on the training time. \subsection{Training Procedure} The whole pipeline can be summarised in 3 simple steps. We start with the \textit{initialization} step to train our baseline model (i.e. the yellow backbone of \cref{fig:network}) on the source domain only. We follow standard practices \cite{bdl, stuffAndThings, Tsai_2019, Paul_WeakSegDA_ECCV20, Yang_2021_WACV} and, for synthetic-to-real adaptation, we utilize the domain-translated source images provided by~\cite{bdl}. We deploy this baseline to produce pseudo-labels for the target domain and obtain an augmented mixed dataset as detailed in \cref{sec:selftraining}. Then, we perform the \textit{adaptation} step: we train the model illustrated in \cref{fig:network}, which incorporates our additional modules for low-level alignment as explained in \cref{sec:lowalign}. It is important to highlight that the proposed data augmentation extracts objects from target images only and pastes them on images of both domains. Hence, at this stage, the training is done simultaneously on both domains.
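The class-wise erosion and copy-paste of \cref{sec:selftraining} can be sketched as follows (a minimal NumPy/SciPy version with illustrative names, using the $5\times5$ structuring element from the text):

```python
import numpy as np
from scipy.ndimage import binary_erosion


def paste_eroded_objects(x_s, y_s, x_t, y_hat_t, classes, k=5):
    """Copy eroded target-object pixels into a source image, following
    Eqs. (eq:erosion) and the case-wise paste rule: erosion with a k x k
    structuring element discards the noisy boundary ring of each
    pseudo-labeled object before pasting."""
    selem = np.ones((k, k), dtype=bool)
    B = np.zeros(y_hat_t.shape, dtype=bool)
    for c in classes:                       # union over the selected classes C*
        B |= binary_erosion(y_hat_t == c, structure=selem)
    x = np.where(B[..., None], x_t, x_s)    # RGB images of shape (H, W, 3)
    y = np.where(B, y_hat_t, y_s)
    return x, y
```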
The training loss is as follows: \begin{equation} \label{eq:finalloss} \mathcal{L} = \mathcal{L}_{sem} + \lambda \mathcal{L}_{edge} \end{equation} with $\lambda$ set to 0.1 in all experiments. Finally, we use the predictions from the model trained in the previous step to synthesise new training pairs by following again the procedure detailed in \cref{sec:selftraining}. This allows us to distill the knowledge, and the good precision along class boundaries, of the previously enhanced model into a lighter segmentation architecture such as the one used in the first step. We do this to avoid the introduction of additional modules at inference time. Differently from the adaptation step, however, we apply our data augmentation algorithm using solely images from the target domain. Indeed, as we are now at the third and final stage, we expect pseudo-labels to be less noisy compared to the previous step, and training only on the target domain allows us to capture domain-specific characteristics. We denote this third step as the \textit{distillation} step. \begin{table*}[ht!]
\begin{center} \setlength{\tabcolsep}{2.5pt} \scalebox{0.7}{ \begin{tabular}{l|cc|ccccccccccccccccccc|cc} method & IT & ST &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Building} &\rotatebox{90}{Walls} & \rotatebox{90}{Fence} & \rotatebox{90}{Pole} & \rotatebox{90}{T-light} & \rotatebox{90}{T-sign} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Terrain} & \rotatebox{90}{Sky} & \rotatebox{90}{Person} & \rotatebox{90}{Rider} & \rotatebox{90}{Car} & \rotatebox{90}{Truck} & \rotatebox{90}{Bus} & \rotatebox{90}{Train} & \rotatebox{90}{Motorbike} & \rotatebox{90}{Bicycle} & \textbf{mIoU} \\ \hline AdaptSegNet~\cite{adaptsegnet} & && 86.5 & 36.0 & 79.9 & 23.4 & 23.3 & 23.9 & 35.2 & 14.8 & 83.4 & 33.3 & 75.6 & 58.5 & 27.6 & 73.6 & 32.5 & 35.4 & 3.9 & 30.1 & 28.1 & 42.4 \\ MaxSquare~\cite{maxsquare} &&& 88.1 & 27.7 & 80.8 & 28.7 & 19.8 & 24.9 & 34.0 & 17.8 & 83.6 & 34.7 & 76.0 & 58.6 & 28.6 & 84.1 & 37.8 & 43.1 & 7.2 & 32.2 & 34.5 & 44.3 \\ BDL~\cite{bdl} &\checkmark&\checkmark& 88.2 & 44.7 & 84.2 & 34.6 & 27.6 & 30.2 & 36.0 & 36.0 & 85.0 & 43.6 & 83.0 & 58.6 & 31.6 & 83.3 & 35.3 & 49.7 & 3.3 & 28.8 & 35.6 & 48.5 \\ MRNET~\cite{mrnet} &\checkmark&\checkmark& 90.5 & 35.0 & 84.6 & 34.3 & 24.0 & 36.8 & 44.1 & 42.7 & 84.5 & 33.6 & 82.5 & 63.1 & 34.4 & 85.8 & 32.9 & 38.2 & 2.0 & 27.1 & 41.8 & 48.3 \\ Stuff and things~\cite{stuffAndThings} &\checkmark&\checkmark& 90.6 & 44.7 & 84.8 & 34.3 & 28.7 & 31.6 & 35.0 & 37.6 & 84.7 & 43.3 & 85.3 & 57.0 & 31.5 & 83.8 & 42.6 & 48.5 & 1.9 & 30.4 & 39.0 & 49.2 \\ FADA~\cite{fada} &&\checkmark& 92.5 & 47.5 & 85.1 & 37.6 & 32.8 & 33.4 & 33.8 & 18.4 & 85.3 & 37.7 & 83.5 & 63.2 & \textbf{39.7} & \textbf{87.5} & 32.9 & 47.8 & 1.6 & 34.9 & 39.5 & 49.2 \\ LTIR~\cite{ltir} &\checkmark&\checkmark& 92.9 & 55.0 & 85.3 & 34.2 & 31.1 & 34.4 & 40.8 & 34.0 & 85.2 & 40.1 & 87.1 & 61.1 & 31.1 & 82.5 & 32.3 & 42.9 & 3 & 36.4 & 46.1 & 50.2 \\ Yang \etal~\cite{Yang_2021_WACV} &\checkmark&\checkmark& 91.3 & 46.0 & 84.5 & 34.4 & 29.7 & 32.6 & 
35.8 & 36.4 & 84.5 & 43.2 & 83.0 & 60.0 & 32.2 & 83.2 & 35.0 & 46.7 & 0.0 & 33.7 & 42.2 & 49.2 \\ IAST~\cite{iast} &&\checkmark& \textbf{93.8} & \textbf{57.8} & 85.1 & \textbf{39.5} & 26.7 & 26.2 & 43.1 & 34.7 & 84.9 & 32.9 & 88.0 & 62.6 & 29.0 & 87.3 & 39.2 & 49.6 & \textbf{23.2} & 34.7 & 39.6 & 51.5 \\ DACS\textsuperscript{\textdagger}~\cite{dacs} &&\checkmark& 89.9 & 39.7 & \textbf{87.9} & 30.7 & \textbf{39.5} & \textbf{38.5} & \textbf{46.4} & \textbf{52.8} & \textbf{88.0} & \textbf{44.0} & \textbf{88.8} & \textbf{67.2} & 35.8 & 84.5 & \textbf{45.7} & \textbf{50.2} & 0.0 & 27.3 & 34.0 & 52.1 \\ \hline Ours &\checkmark&\checkmark& 91.9 & 48.9 & 86.0 & 38.6 & 28.6 & 34.8 & 45.6 & 43.0 & 86.2 & 42.4 & 87.6 & 65.6 & 38.6 & 86.8 & 38.4 & 48.2 & 0.0 & \textbf{46.5} & \textbf{59.2} & \textbf{53.5} \end{tabular}} \end{center} \vspace{-5mm} \caption{Results on GTA5$\rightarrow$Cityscapes{}. \textdagger{} denotes models pre-trained on MSCOCO~\cite{coco} and ImageNet \cite{imagenet}. IT: Image Translation; ST: Self-Training.} \label{tab:results_gta} \end{table*} \begin{table*}[t] \begin{center} \setlength{\tabcolsep}{2.5pt} \scalebox{0.75}{ \begin{tabular} {l|cc|cccccccccccccccc|cc} method & IT & ST &\rotatebox{90}{Road} & \rotatebox{90}{Sidewalk} & \rotatebox{90}{Building} &\rotatebox{90}{Walls*} & \rotatebox{90}{Fence*} & \rotatebox{90}{Pole*} & \rotatebox{90}{T-light} & \rotatebox{90}{T-sign} & \rotatebox{90}{Vegetation} & \rotatebox{90}{Sky} & \rotatebox{90}{Person} & \rotatebox{90}{Rider} & \rotatebox{90}{Car} & \rotatebox{90}{Bus} & \rotatebox{90}{Motorbike} & \rotatebox{90}{Bicycle} & \textbf{mIoU} & \textbf{mIoU*} \\ \hline AdaptSegNet~\cite{adaptsegnet} &&& 84.3 & 42.7 & 77.5 & - & - & - & 4.7 & 7.0 & 77.9 & 82.5 & 54.3 & 21.0 & 72.3 & 32.2 & 18.9 & 32.3 & - & 46.7 \\ MaxSquare~\cite{maxsquare} &&& 77.4 & 34.0 & 78.7 & 5.6 & 0.2 & 27.7 & 5.8 & 9.8 & 80.7 & 83.2 & 58.5 & 20.5 & 74.1 & 32.1 & 11.0 & 29.9 & 39.3 & 45.8 \\ BDL~\cite{bdl} &\checkmark&\checkmark& 86.0 & 
46.7 & 80.3 & - & - & - & 14.1 & 11.6 & 79.2 & 81.3 & 54.1 & 27.9 & 73.7 & 42.2 & 25.7 & 45.3 & - & 51.4 \\ MRNET~\cite{mrnet} &\checkmark&\checkmark& 83.1 & 38.2 & 81.7 & 9.3 & 1.0 & 35.1 & 30.3 & 19.9 & 82.0 & 80.1 & 62.8 & 21.1 & 84.4 & 37.8 & 24.5 & \textbf{53.3} & 46.5 & 53.8 \\ Stuff and things~\cite{stuffAndThings} &\checkmark&\checkmark& 83.0 & 44.0 & 80.3 & - & - & - & 17.1 & 15.8 & 80.5 & 81.8 & 59.9 & 33.1 & 70.2 & 37.3 & 28.5 & 45.8 & - & 52.1 \\ FADA~\cite{fada} &&\checkmark& 84.5 & 40.1 & 83.1 & 4.8 & 0.0 & 34.3 & 20.1 & 27.2 & 84.8 & 84.0 & 53.5 & 22.6 & 85.4 & 43.7 & 26.8 & 27.8 & 45.2 & 52.5 \\ LTIR~\cite{ltir} &\checkmark&\checkmark& \textbf{92.6} & 53.2 & 79.2 & - & - & - & 1.6 & 7.5 & 78.6 & 84.4 & 52.6 & 20.0 & 82.1 & 34.8 & 14.6 & 39.4 & - & 49.3 \\ Yang \etal~\cite{Yang_2021_WACV} &\checkmark&\checkmark& 82.5 & 42.2 & 81.3 & - & - & - & 18.3 & 15.9 & 80.6 & 83.5 & 61.4 & 33.2 & 72.9 & 39.3 & 26.6 & 43.9 & - & 52.4 \\ IAST~\cite{iast} &&\checkmark& 81.9 & 41.5 & 83.3 & 17.7 & \textbf{4.6} & 32.3 & \textbf{30.9} & 28.8 & 83.4 & 85.0 & 65.5 & 30.8 & \textbf{86.5} & 38.2 & \textbf{33.1} & 52.7 & \textbf{49.8} & \textbf{57.0} \\ DACS\textsuperscript{\textdagger}~\cite{dacs} &&\checkmark& 80.6 & 25.1 & 81.9 & \textbf{21.5} & 2.6 & \textbf{37.2} & 22.7 & 24.0 & 83.7 & \textbf{90.8} & \textbf{67.6} & \textbf{38.3} & 82.9 & 38.9 & 28.5 & 47.6 & 48.3 & 54.8 \\ \hline Ours &\checkmark&\checkmark& 90.4 & \textbf{51.1} & \textbf{83.4} & 3.0 & 0.0 & 32.3 & 25.3 & \textbf{31.0} & \textbf{84.8} & 85.5 & 59.3 & 30.1 & 82.6 & \textbf{53.2} & 17.5 & 45.6 & 48.4 & 56.9 \end{tabular}} \end{center} \vspace{-5mm} \caption{Results on SYNTHIA$\rightarrow$Cityscapes{}. \textdagger{} denotes models pre-trained with MSCOCO~\cite{coco} and ImageNet \cite{imagenet}. IT: Image Translation; ST: Self-Training. 
mIoU$^*$ is computed over the 13 classes obtained by excluding the classes marked with $^*$.} \label{tab:results_synthia} \end{table*} \section{Implementation} \subsection{Architecture} Following standard practice in UDA for semantic segmentation \cite{adaptsegnet, maxsquare, bdl, mrnet, stuffAndThings, fada, ltir}, we deploy the Deeplab-v2~\cite{deeplabv2} architecture, with a dilated ResNet101 backbone pre-trained on ImageNet~\cite{imagenet} and output stride 8. The ASPP~\cite{deeplabv2} module acts as classifier. We use this architecture for both the initialization step and the distillation step. For more details on the additional modules of the adaptation step we refer to the supplementary material. \subsection{Training Details} Our pipeline is implemented in PyTorch~\cite{pytorch} and trained on a single NVIDIA 2080Ti GPU with 12GB of memory. We train for 20 epochs in the first two steps and for 25 epochs in the final distillation, with batch size 4 in all cases. Our data augmentation pipeline uses random scaling, random cropping at $1024\times892$, and color jittering. As in previous works, we freeze Batch-Normalization layers~\cite{batchnorm} during the initialization and adaptation steps, while in the last step these layers are kept active. We adopt the One Cycle learning rate policy~\cite{onecycle} for each training stage, with maximum learning rate $10^{-3}$ and SGD as optimizer.
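For illustration, the overall shape of such a schedule can be sketched as follows. This is our own piecewise-linear approximation with assumed warm-up fraction and divisor values, not the exact PyTorch `OneCycleLR` implementation (which anneals with a cosine by default):

```python
def one_cycle_lr(step, total_steps, max_lr=1e-3, pct_start=0.3,
                 div_factor=25.0, final_div_factor=1e4):
    """Piecewise-linear One Cycle: warm up to max_lr, then anneal far below it.

    pct_start, div_factor and final_div_factor are illustrative defaults,
    not values taken from the paper.
    """
    initial_lr = max_lr / div_factor
    min_lr = initial_lr / final_div_factor
    warmup_steps = pct_start * total_steps
    if step < warmup_steps:
        frac = step / warmup_steps                                # rising phase
        return initial_lr + frac * (max_lr - initial_lr)
    frac = (step - warmup_steps) / (total_steps - warmup_steps)   # annealing phase
    return max_lr + frac * (min_lr - max_lr)
```

In the actual pipeline such a schedule would be stepped once per iteration and paired with the SGD optimizer mentioned above.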
\section{Experiments} \begin{table*}[t] \begin{center} \setlength{\tabcolsep}{0.9mm} \scalebox{0.76}{ \begin{tabular} {l|c|c|ccccccccccccc|cc} City & Method & ST & \rotatebox{90}{road} & \rotatebox{90}{sidewalk} & \rotatebox{90}{building} & \rotatebox{90}{light} & \rotatebox{90}{sign} & \rotatebox{90}{veg.} & \rotatebox{90}{sky} & \rotatebox{90}{person} & \rotatebox{90}{rider} & \rotatebox{90}{car} & \rotatebox{90}{bus} & \rotatebox{90}{motor} & \rotatebox{90}{bike} & mIoU (\%) \\ \hline \multirow{5}{*}{Rome} & Source only & & 85.9 & 40.0 & 86.0 & 9.0 & 25.4 & 82.4 & 90.5 & 38.8 & 25.9 & 81.6 & 52.0 & 48.7 & 6.7 & 51.9 \\ & CBST~\cite{cbst} & \checkmark& 87.1 & 43.9 & \textbf{89.7} & 14.8 & 47.7 & 85.4 & 90.3 & 45.4 & 26.6 & \textbf{85.4} & 20.5 & 49.8 & 10.3 & 53.6 \\ & AdaptSegNet~\cite{adaptsegnet} & & 83.9 & 34.2 & 88.3 & 18.8 & 40.2 & \textbf{86.2} & \textbf{93.1} & 47.8 & 21.7 & 80.9 & 47.8 & 48.3 & 8.6 & 53.8 \\ & MaxSquare~\cite{maxsquare} & & 80.0 & 27.6 & 87.0 & \textbf{20.8} & \textbf{42.5} & 85.1 & 92.4 & 46.7 & 22.9 & 82.1 & 53.5 & 50.8 & 8.8 & 53.9 \\ & FADA~\cite{fada} & \checkmark & 84.9 & 35.8 & 88.3 & 20.5 & 40.1 & 85.9 & 92.8 & \textbf{56.2} & 23.2 & 83.6 & 31.8 & 53.2 & \textbf{14.6} & 54.7 \\ & Ours & \checkmark & \textbf{89.4} & \textbf{48.2} & 87.5 & \textbf{26.3} & 37.2 & 83.1 & 90.7 & \textbf{55.2} & \textbf{42.1} & 84.8 & \textbf{66.6} & \textbf{59.2} & 11.1 & \textbf{60.1} \\ \hline \multirow{5}{*}{Rio} & Source only & & 80.4 & 53.8 & 80.7 & 4.0 & 10.9 & 74.4 & 87.8 & 48.5 & 25.0 & 72.1 & 36.1 & 30.2 & 12.5 & 47.4 \\ & CBST~\cite{cbst} & \checkmark& 84.3 & 55.2 & 85.4 & 19.6 & \textbf{30.1} & 80.5 & 77.9 & 55.2 & 28.6 & \textbf{79.7} & 33.2 & 37.6 & 11.5 & 52.2 \\ & AdaptSegNet~\cite{adaptsegnet} & & 76.2 & 44.7 & 84.6 & 9.3 & 25.5 & \textbf{81.8} & 87.3 & 55.3 & 32.7 & 74.3 & 28.9 & 43.0 & 27.6 & 51.6\\ & MaxSquare~\cite{maxsquare} & & 70.9 & 39.2 & \textbf{85.6} & \textbf{14.5} & 19.7 & \textbf{81.8} & 88.1 & 55.2 & 31.5 & 77.2 & 39.3 & 
43.1 & 30.1 & 52.0 \\ & FADA~\cite{fada} & \checkmark & 80.6 & 53.4 & 84.2 & 5.8 & 23.0 & 78.4 & 87.7 & \textbf{60.2} & 26.4 & 77.1 & 37.6 & \textbf{53.7} & \textbf{42.3} & 54.7 \\ & Ours &\checkmark & \textbf{86.6} & \textbf{63.3} & 82.3 & 10.3 & 19.8 & 73.9 & \textbf{88.4} & 57.5 & \textbf{41.3} & 78.1 & \textbf{51.5} & 40.0 & 19.4 & \textbf{54.8} \\ \hline \multirow{5}{*}{Tokyo} & Source only & & 86.0 & 38.8 & 76.6 & 11.7 & 12.3 & 80.0 & 89.5 & 44.9 & 28.0 & 71.5 & 4.7 & 27.1 & 42.2 & 47.2 \\ & CBST~\cite{cbst} & \checkmark& 85.2 & 33.6 & \textbf{80.4} & 8.3 & \textbf{31.1} & 83.9 & 78.2 & 53.2 & 28.9 & 72.7 & 4.4 & 27.0 & 47.0 & 48.8 \\ & AdaptSegNet~\cite{adaptsegnet} & & 81.5 & 26.0 & 77.8 & 17.8 & 26.8 & 82.7 & 90.9 & 55.8 & \textbf{38.0} & 72.1 & 4.2 & 24.5 & 50.8 & 49.9\\ & MaxSquare~\cite{maxsquare} & & 79.3 & 28.5 & 78.3 & 14.5 & 27.9 & 82.8 & 89.6 & 57.3 & 31.9 & 71.9 & 6.0 & 29.1 & 49.2 & 49.7 \\ & FADA~\cite{fada} & \checkmark & 85.8 & 39.5 & 76.0 & 14.7 & 24.9 & \textbf{84.6} & \textbf{91.7} & 62.2 & 27.7 & 71.4 & 3.0 & 29.3 & \textbf{56.3} & 51.3 \\ & Ours & \checkmark & \textbf{87.8} & \textbf{41.0} & 79.6 & \textbf{20.3} & 24.2 & 80.2 & 90.0 & \textbf{62.3} & 30.8 & \textbf{74.0} & \textbf{6.4} & \textbf{32.7} & 50.0 & \textbf{52.4} \\ \hline \multirow{5}{*}{Taipei} & Source only & & 85.0 & 38.1 & 82.2 & 17.8 & 8.9 & 75.2 & 91.4 & 23.9 & 19.6 & 69.2 & 45.9 & 49.4 & 16.0 & 47.9 \\ & CBST~\cite{cbst} &\checkmark & 86.1 & 35.2 & 84.2 & 15.0 & 22.2 & 75.6 & 74.9 & 22.7 & 33.1 & 78.0 & 37.6 & 58.0 & 30.9 & 50.3 \\ & AdaptSegNet~\cite{adaptsegnet} & & 81.7 & 29.5 & 85.2 & 26.4 & 15.6 & 76.7 & 91.7 & 31.0 & 12.5 & 71.5 & 41.1 & 47.3 & 27.7 & 49.1 \\ & MaxSquare~\cite{maxsquare} & & 81.2 & 32.8 & 85.4 & 31.9 & 14.7 & 78.3 & 92.7 & 28.3 & 8.6 & 68.2 & 42.2 & 51.3 & 32.4 & 49.8 \\ & FADA~\cite{fada} & \checkmark& 86.0 & 42.3 & 86.1 & 6.2 & 20.5 & 78.3 & 92.7 & 47.2 & 17.7 & 72.2 & 37.2 & 54.3 & 44.0 & 52.7 \\ & Ours & \checkmark & \textbf{95.6} & 
\textbf{78.9} & \textbf{94.3} & \textbf{45.9} & \textbf{70.3} & \textbf{93.0} & \textbf{96.2} & \textbf{63.3} & \textbf{51.3} & \textbf{90.5} & \textbf{83.6} & \textbf{84.8} & \textbf{56.5} & \textbf{55.7} \\ \hline \end{tabular}} \end{center} \caption{Results for the Cross-City experiments. ST: Self-Training.} \label{tab:crosscity} \end{table*} \subsection{Datasets} We test our method on both synthetic-to-real and real-to-real adaptation. We set GTA \cite{gta} or SYNTHIA \cite{synthia} as source datasets and Cityscapes \cite{Cityscapes} as target for the former setting, while we use Cityscapes as source and the NTHU \cite{NTHU} dataset as target for the latter. GTA5 is a synthetic dataset that contains 24,966 annotated images of $1914 \times 1052$ resolution. As for SYNTHIA, we use the SYNTHIA-RAND-CITYSCAPES subset, which is a collection of 9,400 synthetic images with resolution $1280 \times 760$. The Cityscapes dataset is a high-quality collection of real images of $2048 \times 1024$ resolution. The dataset has 2975 and 500 images for the training and validation split, respectively. For the synthetic-to-real case, we only utilize the training split without labels for training, and test on the validation set as done in previous works \cite{adaptsegnet, cbst, bdl}. The NTHU dataset is a collection of images taken from four different cities with $2048 \times 1024$ resolution: Rio, Rome, Tokyo, and Taipei. For each city, 3200 unlabeled images are available for the adaptation phase, and 100 labeled images for the evaluation. For fair comparison to other models, we compute the mIoU by considering all 19 classes in the GTA5$\rightarrow$Cityscapes{} benchmark, 16 or 13 shared classes for SYNTHIA$\rightarrow$Cityscapes{}, and 13 common classes for the cross-city adaptation setting. 
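The class-averaged metric reported in all tables can be computed from a confusion matrix accumulated over the evaluation set. A minimal NumPy sketch (our own, not the authors' evaluation code; the `ignore_index` of 255 is the usual Cityscapes convention and is an assumption here):

```python
import numpy as np

def miou(pred, gt, num_classes, ignore_index=255):
    """Mean IoU over classes, as used for the 19/16/13-class benchmark scores."""
    mask = gt != ignore_index
    pred, gt = pred[mask], gt[mask]
    # confusion matrix: rows = ground truth, columns = prediction
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    # classes absent from both pred and gt are excluded from the mean
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)
```

Restricting `num_classes` (and the label mapping) to the shared subset gives the 16- and 13-class variants used for SYNTHIA and the cross-city setting.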
\subsection{Synthetic-to-real adaptation} To test our framework, we follow standard practice \cite{adaptsegnet, cbst, bdl, mrnet, advent, maxsquare} and report the results for synthetic-to-real adaptation on the GTA5$\rightarrow$Cityscapes{} and SYNTHIA$\rightarrow$Cityscapes{} benchmarks in \cref{tab:results_gta} and \cref{tab:results_synthia}, respectively. We obtain state-of-the-art performance in the former setting, also surpassing recent methods such as IAST~\cite{iast}, which performs many iterations of self-training. We also improve over DACS~\cite{dacs} on GTA5$\rightarrow$Cityscapes{}, which, unlike all other methods, pre-trains the baseline network not only on ImageNet~\cite{imagenet} but also on MSCOCO~\cite{coco}. We argue that pre-training on more tasks and on real annotated data notably improves the baseline performance on the synthetic-to-real benchmark. For GTA5$\rightarrow$Cityscapes{}, we note that, thanks to our low-level adaptation, we boost performance on fine-detailed classes such as \textit{Bicycle} and \textit{Motorcycle}. On SYNTHIA$\rightarrow$Cityscapes{}, we obtain competitive performance, showing that our method also works in this challenging scenario, in which the source synthetic domain exhibits many bird's-eye views that are very different from those in Cityscapes. Indeed, our method is only slightly inferior to IAST~\cite{iast} and again superior to DACS~\cite{dacs}, which performs a similar data augmentation. \begin{table}[t!]
\centering \small \scalebox{0.8}{ \begin{tabular}{c|ccccc|c|c} & & & & & & \cellcolor{YellowOrange}GTA & \cellcolor{blue!25}Synthia \\ \hline \textbf{Step} & IT & ST & A & W & D & mIoU & mIoU \\ \hline Initialization &\checkmark& &&&$\mathcal{S}$& 47.3 & 41.6 \\ \hline \multirow{3}{*}{Adaptation} &\checkmark&\checkmark&&&$\mathcal{S,T}$& 49.8 & 43.5 \\ &\checkmark&\checkmark&\checkmark&&$\mathcal{S,T}$& 52.0 & 46.4 \\ &\checkmark&\checkmark&\checkmark&\checkmark&$\mathcal{S,T}$& 52.6 & 46.9\\ \hline Distillation &\checkmark&\checkmark&\checkmark&&$\mathcal{T}$& 53.5 & 48.4\\ \hline Oracle &&&&&$\mathcal{T}$& 63.8 & 65.1\\ \end{tabular}} \caption{Ablation studies on GTA5$\rightarrow$Cityscapes{} (second-to-last column) and SYNTHIA$\rightarrow$Cityscapes{} (last column). IT: Image Translation; ST: Self-Training; A: Data Augmentation; W: Low-level Adaptation; D: Training Domain.} \label{tab:ablation_modules} \end{table} \subsection{Cross-city adaptation} We report in \cref{tab:crosscity} our performance for the real-to-real setting. Our method achieves strong results, confirming that our contributions generalize across diverse settings. We improve over previous works on all cities. Our model achieves 60.1\% mIoU on Rome, likely the city most similar to the German cities of the Cityscapes dataset. Nonetheless, we achieve strong results even for more distant domains, e.g. Taipei, improving by 7.8\% over the model trained only on the source domain. In the cross-city setting, unlike the other settings, we use images of both domains in our \textit{distillation} step to exploit the perfect annotations available in the similar source domain. \subsection{Ablation Studies} In this section, we analyze the contribution provided by each component of our framework and motivate our design choices.
In \cref{tab:ablation_modules} we detail the results for both GTA5$\rightarrow$Cityscapes{} and SYNTHIA$\rightarrow$Cityscapes{}. The first row reports the performance obtained using only translated source-domain images. This is nowadays a common building block of many UDA frameworks, and we consider it the baseline on which we build our pipeline. In the adaptation section, instead, we isolate our two contributions: we use the model trained in the initialization step to extract pseudo-labels for the target domain, as explained in \cref{sec:selftraining}, and train on both domains simultaneously. When applying a naive self-training strategy (i.e. training directly on pseudo-labels) we already obtain a significant boost (+2.5\% and +1.9\%, respectively). However, when deploying the proposed data augmentation (row 3), we observe an even greater boost: +4.7\% and +4.8\% over the baseline. This clearly demonstrates the effectiveness of our data augmentation and its applicability to diverse scenarios. Applying the proposed low-level adaptation (row 4) yields a further contribution of about +0.6\% on top of the data-augmentation version. We argue that this is noticeable, especially when performance is already high, as in our case, and the strongest competitors are all within a narrow window. Finally, in row 5, we distill our full model (i.e. row 4) into a simple Deeplab-v2 for efficient inference and apply the proposed data augmentation once again. Remarkably, this further improves performance with respect to the model it is distilled from, and avoids the overfitting to pseudo-labels that typically occurs when employing many steps of self-training.
Moreover, to motivate our intuition that shallow features are well suited to guide the warping process, we compare the results obtained by applying our adaptation step in the GTA5$\rightarrow$Cityscapes{} setting at the three different levels of the backbone before the last module, achieving 52.6\%, 51.6\%, and 51.8\% mIoU for layers \textit{Conv1}, \textit{Conv2}, and \textit{Conv3}, respectively. Thus, the best result is achieved by using the first convolutional block of the architecture, while on \textit{Conv2} and \textit{Conv3} results are comparable (see \cref{fig:network} for layer names). \subsection{Performance Along Class Boundaries} \begin{figure}[h] \centering \includegraphics[width=0.65\linewidth]{images/trimap_competitors.pdf} \caption{mIoU on GTA5$\rightarrow$Cityscapes{} as a function of trimap band width around class boundaries.} \label{fig:trimap} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{images/trimap_components.pdf} \caption{mIoU on GTA5$\rightarrow$Cityscapes{} as a function of trimap band width around class boundaries. We report results for the three versions of the \textit{adaptation} step of \cref{tab:ablation_modules}.} \label{fig:trimap_warp} \end{figure} In this section, we test the segmentation accuracy with the trimap experiment \cite{chen2018encoder, krahenbuhl2011efficient, chen2017deeplab, kohli2009robust} to quantify the accuracy of the proposed method along semantic edges. Specifically, we evaluate mIoU on pixels within four bandwidths (4, 8, 16, 20 pixels) around class boundaries (trimaps). We first compare our final model against other frameworks in \cref{fig:trimap}. We observe that our method is more accurate than all competitors in all the tested bandwidths, validating our main goal of improving precision along class boundaries.
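Concretely, a trimap band can be built by dilating the set of label-boundary pixels. A minimal NumPy sketch (our own, using 4-connected boundaries and Manhattan-distance dilation; the exact band definition in the cited works may differ):

```python
import numpy as np

def trimap_mask(gt, band, ignore_index=255):
    """Boolean mask of pixels within `band` pixels (Manhattan) of a class boundary."""
    edge = np.zeros(gt.shape, dtype=bool)
    edge[:-1, :] |= gt[:-1, :] != gt[1:, :]   # vertical label changes
    edge[1:, :] |= gt[:-1, :] != gt[1:, :]
    edge[:, :-1] |= gt[:, :-1] != gt[:, 1:]   # horizontal label changes
    edge[:, 1:] |= gt[:, :-1] != gt[:, 1:]
    m = edge
    for _ in range(band):                     # 4-connected dilation, `band` times
        d = m.copy()
        d[:-1, :] |= m[1:, :]
        d[1:, :] |= m[:-1, :]
        d[:, :-1] |= m[:, 1:]
        d[:, 1:] |= m[:, :-1]
        m = d
    return m & (gt != ignore_index)
```

Restricting both predictions and ground truth to this mask before computing mIoU yields the per-bandwidth scores.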
We also highlight that, although the green line is obtained from a distilled model (row 5 of \cref{tab:ablation_modules}) that does not include the additional modules presented in \cref{sec:lowalign}, it still maintains strong performance at semantic boundaries thanks to the precise pseudo-labels extracted from the adaptation step. We refer to the supplementary material for some qualitative examples. Then, we assess in \cref{fig:trimap_warp} how our contributions affect performance on semantic boundaries. To this end, we repeat the same trimap experiment using the intermediate steps of our pipeline, i.e. rows 2, 3, and 4 of \cref{tab:ablation_modules}. When applying all our contributions (purple line), we improve by a large margin over the self-training strategy (black line), confirming that the additional modules account for an improvement along semantic edges. Furthermore, the low-level adaptation strategy maintains its improvements along semantic edges over the data-augmentation-only version (cyan line), leading to better pseudo-labels for the distillation step. \subsection{Comparison with other data augmentations} We compare our data augmentation, one of our main contributions, with the one introduced in \cite{dacs}. More specifically, we apply this data augmentation in the adaptation step as in row 3 of \cref{tab:ablation_modules}, i.e. without the low-level adaptation modules, to isolate the data-augmentation effect. We augment target images by randomly pasting objects from the source domain, using the open-source implementation of \cite{dacs}. With this strategy, we only obtain 51.0\% mIoU, while with our technique the mIoU rises to 52.2\%, confirming our intuition that looking for target instances is more effective than forcing the network to identify source objects, as done in \cite{dacs}, during the self-training step.
\subsection{Displacement map visualization} In this section, we analyze the displacement map learned by the model. As \cref{fig:flow} shows, the 2D map that guides the warping process is consistent with our intuition that the displacement is more pronounced at the boundaries, while areas within regions, such as the body of a person, are characterized by low displacement (i.e. white color). Moreover, we can appreciate that when the warping is applied according to the estimated displacement field (top-right), the contours of small objects such as poles, traffic signs, and persons are better delineated (bottom-right). On the other hand, in the bottom-left mask, these objects are coarsely segmented when using a segmentation model trained with translated images only. We also highlight that the displacement field is agnostic to semantic class (it only considers boundaries), and even though it captures other kinds of edges (i.e. not only semantic ones), it leads to computing an average of patches belonging to the same class. \begin{figure}[t] \centering \includegraphics[scale=0.21]{images/flow.png} \caption{Top left: input target image. Top right: estimated 2D displacement. Bottom left: semantic map from a model trained on translated images. Bottom right: our results, improved on class boundaries by using the warping module. Colors and lightness in the displacement map indicate the warping direction and its intensity.} \label{fig:flow} \end{figure} \section{Conclusion} In this paper, we have proposed a novel framework for UDA for semantic segmentation that explicitly focuses on improving accuracy along class boundaries. We have shown that we can exploit domain-invariant shallow features to estimate a displacement map used to achieve sharp predictions along semantic edges. Jointly with a novel data augmentation technique that preserves fine edge information during self-training, our approach achieves better accuracy along class boundaries than previous methods.
{\small \bibliographystyle{ieee_fullname}
\section{Introduction} The ability to learn from experience is common to all animals and some artificial agents. Neural networks trained with stochastic gradient descent (SGD) are a current theory of human learning that can account for a wide range of learning phenomena, but while standard networks seem to imply that all learning is gradual, humans may sometimes learn in an abrupt manner. Such non-linear improvements in task performance or problem solving have been described as insights or aha-moments \cite{Kohler1925TheApes, Durstewitz2010}, and are often thought to reflect a qualitatively different, discrete learning mechanism \cite{Stuyck2021TheSolving, Weisberg2015TowardSolving}. One prominent idea, dating back to Gestalt psychology \cite{Kohler1925TheApes}, is that an insight occurs when an agent has found a novel problem solution by restructuring an existing task representation \cite{Kounios2014TheInsight}. It has also been noted that humans often lack the ability to trace back the cognitive process leading up to an insight \cite{Jung-Beeman2004NeuralInsight}, suggesting that insights involve unconscious processes becoming conscious. Moreover, so-called ``aha-moments'' can sometimes even be accompanied by a feeling of relief or pleasure in humans \cite{Kounios2014TheInsight, Danek2014ItsSolving, Kounios2015TheBrain.}. Such putative uniqueness of the insight phenomenon would also be in line with work that has related insights to brain regions distinct from those associated with gradual learning \cite{Shen2018TrackingStudies, Jung-Beeman2004NeuralInsight}. These include, for instance, the anterior temporal gyrus \cite{Jung-Beeman2004NeuralInsight, Tik2018Ultra-high-fieldAha-moment}, as well as subcortical areas such as the left amygdala or right hippocampal gyrus \cite{Shen2018TrackingStudies}.
Altogether, these findings have led psychologists and neuroscientists to propose that insights are governed by a distinct learning process \cite{Jung-Beeman2004NeuralInsight}, which cannot be accounted for by current common theories of learning. Here, we show that insight-like phenomena can occur without dedicated mechanisms for re-representation or a division of labour between conscious and unconscious processes. Our argument does not concern the subjective experiences related to insights, but focuses on showing how insight-like behaviour can emerge from gradual learning algorithms. Specifically, we aim to explain the following three main observations \cite{Schuck2015, Schuck2022SpontaneousChildren, Gaschler2019IncidentalChange, Gaschler2013SpontaneousPrinciple, Gaschler2015OnceTasks}: First, insights trigger abrupt behavioural changes, accompanied by meta-cognitive suddenness (a ``sudden and unexpected flash'') \cite{Bowden2005NewInsight, Gaschler2013SpontaneousPrinciple, Metcalfe1987IntuitionSolving, Weisberg2015TowardSolving}. These abrupt behavioural changes are often accompanied by fast neural transitions, which have been observed in humans as well as animals \cite{Durstewitz2010, Karlsson2012NetworkUncertainty, Miller2010StochasticDecision-making, Schuck2015,Allegra2020BrainOptimization}. Second, insights occur selectively in some subjects, while for others improvement in task performance arises only gradually, or never \cite{Schuck2015}. Finally, insights occur ``spontaneously'', i.e. without the help of external cues \cite{Friston2017ActiveInsight}, and are therefore observed after a seemingly random duration of impasse \cite{Ohlsson1992Information-processingPhenomena}, or after a delay following a change in environmental contingencies that varies across participants. In other words, participants seem to be ``blind'' to the new solution for an extended period of time, before it suddenly occurs to them. Insights are thus characterised by suddenness, selectivity, and delay.
The idea that insight-like behaviour can arise naturally from gradual learning is supported by previous work on neural networks trained with gradient descent \cite{Power2022Grokking:Datasets}. Saxe and colleagues \cite{Saxe2014ExactNetworks}, for instance, have shown that non-linear learning dynamics, i.e. suddenness in the form of saddle points and stage-like transitions, can result from gradient descent even in linear neural networks, which could explain sudden behavioural improvements. Other work has shown a delayed or stage-like mode of learning in neural networks that is reminiscent of the period of impasse observed in humans, and reflected for instance in the structure of the input data \cite{Saxe2019ANetworks, Schapiro2009ATask, McClelland2003TheCognition}, or information compression of features that at some point seemed task-irrelevant \cite{Flesch2022OrthogonalNetworks, Saxe2019OnLearning}. Finally, previous work has also found substantial individual differences between neural network instances that are induced by random differences in weight initialisation, noise, or the order of training examples \cite{Bengio2009CurriculumLearning, Flesch2018ComparingMachines}, which can become larger with training \cite{Mehrer2020IndividualModels}. Two factors that influence discontinuities in learning in neural networks are regularisation and gating. Regularisation plays a key role in the suppression of input features. While this avoids overfitting and can help a network to escape a local minimum \cite{Liu2020BadThem}, it might also cause the above-mentioned ``blindness'' to a solution that involves inputs which were once erroneously deemed irrelevant. Gating, on the other hand, is known to cause exponential transitions in learning that are widely seen in multiplicative dynamical systems like the logistic growth model.
Both techniques are widely used in artificial neural networks \cite{Bishop2006PatternLearning, Krishnamurthy2022TheoryNetworks, Jozefowicz2015AnArchitectures}, and are inspired by biological brains \cite{Groschner2022ANeuron, Poggio1985ComputationalTheory, Costa2017CorticalNetworks}. Regularisation and gating could therefore be important aspects of network structure and training that are related to the temporary impasse followed by a sudden performance change, akin to insight-like behaviour. Based on these findings, we hypothesised that insight-like behaviour -- as characterised by suddenness, selectivity, and delay -- can occur in simple neural networks trained with gradient descent. As indicated above, a simple neural network architecture with multiplicative gates and regularisation served as our candidate model. We predicted that due to the multiplicative nature of gating, regularising gates during training could lead to blindness of some relevant features that are key to a solution. We focused specifically on L1-regularisation because it forces gates of irrelevant inputs most strongly towards 0, compared to the less aggressive L2-regularisation. We reason that applying L1-regularisation, besides creating non-linear learning dynamics due to the multiplicative nature of the weights and gates, will lead to a sustained suppression period before the fast transition, similar to the delay observed in humans. \section*{Results} To study insight-like learning dynamics, 99 participants and 99 neural networks, matched in their behavioural performance to their human counterparts (see below for details), performed a decision task that required a binary choice about circular arrays of moving dots \cite{Rajananda2018AResearch} for humans and a symbolic version in which inputs were two scalars for networks. 
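Before turning to the empirical results, the hypothesised mechanism can be made concrete with a toy two-scalar network of the kind just described: one weight and one multiplicative gate per input, trained by gradient descent on a squared error with an L1 penalty on the gates. This is an illustrative sketch with our own parameter values, not the networks used in the simulations reported below. While the colour input is uncorrelated with the correct response, the L1 term drags its gate towards zero; once colour becomes predictive, the gate must first re-grow, producing a delay followed by a fast transition:

```python
import numpy as np

def train_gated_net(l1=0.05, lr=0.01, steps=4000, seed=0):
    """Toy gated unit: output = w_m*g_m*x_m + w_c*g_c*x_c, trained by SGD.

    The colour input x_c is pure noise for the first half of training and
    equals the target y afterwards; the motion input x_m is always a noisy
    copy of y. Returns the gate trajectory [g_motion, g_colour] over time.
    All hyperparameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5])   # weights [motion, colour]
    g = np.array([0.5, 0.5])   # gates   [motion, colour]
    traj = np.zeros((steps, 2))
    for t in range(steps):
        y = rng.choice([-1.0, 1.0])
        x = np.array([y + rng.normal(0.0, 1.0),
                      y if t >= steps // 2 else rng.choice([-1.0, 1.0])])
        err = np.sum(w * g * x) - y              # gradient of 0.5*(out - y)^2
        grad_w = err * g * x
        grad_g = err * w * x + l1 * np.sign(g)   # L1 shrinks unused gates
        w -= lr * grad_w
        g -= lr * grad_g
        traj[t] = g
    return traj
```

Plotting `traj[:, 1]` shows the colour gate pinned near zero during the uncorrelated phase and a delayed, accelerating rise after the contingency change: suddenness and delay emerging from purely gradual updates.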
Dots were characterised by two features with different degrees of noise, (1) a motion direction (four possible orthogonal directions: NW, NE, SW, SE) and (2) a colour (orange or purple) (Fig.\ref{fig:Fig1}A). Participants and networks had to learn the correct choice in response to each stimulus from trial-wise binary feedback, and were not instructed which features of the stimulus to pay attention to. Importantly, the task provided a hidden opportunity to improve one's decision strategy that could be discovered through insight, similar to the spontaneous strategy switch task developed earlier \cite{Schuck2015}. Participants first underwent an initial training phase (4 blocks, 100 trials each in humans, 8 blocks/800 trials in networks), during which only the motion direction predicted the correct choice, while stimulus colour was random (\emph{motion phase}, see Fig.\ref{fig:Fig1}D). Without any announcement, stimulus colour became predictive of the correct response in a later phase, such that from then on both features could be used to determine choice (\emph{motion and colour phase}, 5 blocks for humans and networks, Fig.\ref{fig:Fig1}D). Such unannounced changes in feature relevance elicit insights, i.e. behaviour exhibits changes that are sudden, delayed and selective, and post-experimental verbal questionnaires indicate that these changes go hand in hand with gaining consciousness about the new regularity \cite{Gaschler2019IncidentalChange}. To test whether and when participants employed the hidden colour insight, we assessed whether choices were sensitive to the motion direction (using the colour insight meant that stimulus motion could be ignored). Specifically, following an initial pre-training period (see Methods) the amount of motion noise varied randomly in five levels of motion coherence ( 5\%, 10\%, 20\%, 30\% or 45\%, noise variability started in the last two blocks before the onset of the \emph{motion and colour phase}). 
Behaviour in trials with the highest amount of noise in dot motion (5\% coherence, 30 trials per block) was then used to test whether participants had an insight about the usefulness of the colour, as high performance in these trials could only be achieved by using the colour information \cite{Schuck2015}. Colour difficulty was constant and consistently allowed participants and networks to identify colour easily. A second measure that we used to investigate insight was a post-experimental questionnaire, in which participants were asked (1) whether they had noticed a rule in the experiment, (2) how long it took them to notice the rule, and (3) whether they had paid attention to colour during choice. The questionnaire was administered after the \emph{motion and colour phase}, and was followed by an instruction block that served as a sanity check (see Methods). \begin{figure*}[t] \centering \includegraphics[width=11.7cm]{Fig1.pdf} \caption{Stimuli and task design \footnotesize \textbf{(A)} Stimuli and stimulus-response mapping: dot clouds were either coloured in orange or purple and moved to one of the four directions NW, NE, SE, SW with varying coherence. A left response key, "X", corresponded to the NW/SE motion directions, while a right response key, "M", corresponded to NE/SW directions. \textbf{(B)} Schematic of the simple neural network with regularised gate modulation, with colour codes corresponding to the respective colour and motion \textit{weights} and \textit{gates}. The number of nodes shown is the exact number of nodes used in the neural network simulations. \textbf{(C)} Trial structure: a fixation cue is shown for a duration shuffled between 400, 600, 800 and 1000 ms. The random dot cloud stimulus is displayed for 2000 ms. A response can be made during these entire 2000 ms, and a feedback cue replaces the fixation cue at the centre of the stimulus until the end of the stimulus display duration.
\textbf{(D)} Task structure of the two-alternative forced choice task for humans and neural networks: each block consisted of 100 trials. Colour was predictive of correct choices and correlated with motion directions as well as correct response buttons in the last five blocks (\textit{motion and colour phase}). In the last block, humans and networks were instructed to use colour inputs to respond. In the \textit{motion phase}, colour changed randomly and was not predictive. A first training block, administered only to humans, contained only 100\% motion coherence trials to familiarise subjects with the S-R mapping. The remaining training blocks contained only high coherence (0.2, 0.3, 0.45) trials.} \label{fig:Fig1} \end{figure*} \subsection*{Human Behaviour} Data from the \emph{training phase}, during which motion directions were highly coherent and colours changed randomly (Blocks 1-2, dark grey tiles in Fig 1D), showed that participants learned the response mapping for the four motion directions well (78\% correct, t-test against chance: $t(98) = 30.8$, $p < .001$). In the following task phase, noise was added to the motion, while the colour remained uncorrelated (\emph{motion phase}, blocks 3-4, grey tiles in Fig. 1D). This resulted in an accuracy gradient that depended on noise level (linear mixed effects model of accuracy: ${\chi}^2$(1) = 726.36, $p < .001$; RTs: ${\chi}^2$(1) = 365.07, $p < .001$; N = 99, Fig.\ref{fig:Fig2}A).
The noise level continued to influence performance in the \textit{motion and colour phase}, as evidenced by a difference between performance in high vs. low coherence trials (20, 30 \& 45\% vs 5 \& 10\% coherent motion, respectively; $M = 93\pm6\%$ vs $M = 77\pm12\%$; $t(140.9) = 12.5$, $p < .001$, $d = 1.78$, see Fig.\ref{fig:Fig2}A-B). Notably, however, the onset of the colour correlation triggered performance improvements across all coherence levels ($t(187.2) = -12.4$, $p < .001$, $d = 1.8$; end of \textit{motion phase}: $M= 78\pm7\%$ vs. end of \textit{motion and colour phase}: $M = 91\pm8\%$), contrasting with the stable performance found during the motion phase and suggesting that at least some participants leveraged colour information once available. We asked whether these improvements were related to gaining conscious insight by analysing the post-experimental questionnaire. Results show that conscious knowledge about the colour regularity arose in some, but not all, participants: 57.6\% (57/99) reported in the questionnaire to have used colour, while 42.4\% indicated to not have noticed or used the colour. We then checked whether these conscious insights were related to the key behavioural characteristics of suddenness, selectivity, and variable delay. To test for suddenness, we fitted each participant's time course of accuracy on low coherence trials by either (1) a linear ramp or (2) a sigmoid function. While a linear model (free parameters: intercept $y_0$ and slope $m$) can capture gradual improvements in behaviour that might occur without insight, a better fit of a non-linear sigmoid function indicates sudden behavioural transitions (free parameters: slope $m$, inflection point $t_s$ and function maximum $y_{max}$).
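This model comparison can be sketched as follows. A minimal version using a Gaussian-residual BIC and a coarse grid search in place of a full fitting procedure (the parameter grids and the fixed lower asymptote at chance level, 0.5, are our own illustrative assumptions):

```python
import numpy as np

def bic(y, y_hat, n_params):
    """BIC under Gaussian residuals: n*log(RSS/n) + k*log(n)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

def compare_ramp_vs_sigmoid(t, acc):
    """Return (BIC_linear, BIC_sigmoid) for one accuracy time course."""
    slope, intercept = np.polyfit(t, acc, 1)          # linear ramp, 2 params
    bic_linear = bic(acc, intercept + slope * t, 2)
    bic_sigmoid = np.inf                              # sigmoid, 3 params
    for ts in t:                                      # inflection point
        for m in (0.1, 0.5, 1.0, 2.0, 5.0):           # slope at inflection
            for ymax in (0.6, 0.7, 0.8, 0.9, 1.0):    # asymptotic accuracy
                pred = 0.5 + (ymax - 0.5) / (1.0 + np.exp(-m * (t - ts)))
                bic_sigmoid = min(bic_sigmoid, bic(acc, pred, 3))
    return bic_linear, bic_sigmoid
```

A lower sigmoid BIC for a participant then indicates an abrupt, rather than gradual, change in accuracy.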
Performance across participants on low coherence trials was best fit by a non-linear sigmoid function, indicating that at least a subset of participants showed putatively insight-like transitions (BIC sigmoid function: $M = -6.7$, $SD = 0.7$, protected exceedance probability: 1; BIC linear function: $M = -6.4$, $SD = 0.5$, protected exceedance probability: 0). The sigmoid function also outperformed a step function with free parameters inflection point $t_s$ and function maximum $y_{max}$ (BIC step function: $M = -6.5$, $SD = 0.6$, protected exceedance probability: 0) (Fig.\ref{fig:Fig2}D-E, Fig. S2). We next tested insight selectivity, i.e. whether all participants, or only a subset, showed abrupt behavioural transitions, as indicated by participants' self-reports. The chance level of suddenness was determined by an out-of-sample null distribution of sigmoid steepness derived from a control experiment (N = 20), in which participants performed an identical task, except that colour never started to correlate with motion, and hence no insight was possible. Fitting the same sigmoid function to these data, we derived a baseline distribution of the steepness (see Methods for details). Comparing the steepness values (at the inflection point) obtained in our experimental sample to this baseline distribution showed that about half of the participants (48/99, 48.5\%) had values above the maximum (100th percentile) of the control distribution. This suggests that truly abrupt insight occurred selectively in these ``insight participants'' (Fig.\ref{fig:Fig2}F). 79.2\% of the participants classified as insight subjects had also self-reported using colour to make correct choices (Fig. S6A-B). Hence, our behavioural marker of unexpectedly sudden performance changes can serve as a valid indicator for insight. We validated our behavioural metric of selectivity through additional analyses.
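The selectivity criterion amounts to a simple threshold rule against the control distribution; a minimal sketch, with made-up steepness values, could look like:

```python
import numpy as np

def classify_insight(steepness, control_steepness):
    """Flag participants whose fitted slope at the inflection point exceeds
    every steepness value observed in the no-regularity control group
    (i.e. lies above the 100th percentile of the control distribution)."""
    return np.asarray(steepness) > np.max(control_steepness)

# hypothetical fitted slopes: controls show only shallow ramps
control = np.array([0.3, 0.8, 1.1, 1.9, 0.6])
experimental = np.array([0.5, 7.3, 1.0, 12.4])
is_insight = classify_insight(experimental, control)  # [False, True, False, True]
```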
Splitting behaviour into separate insight (participants with steepness values above the 100th percentile of the control distribution) and no-insight groups showed that, as expected given that our behavioural metric is derived from accuracy, insight subjects started to perform significantly better on the lowest coherence trials once the \textit{motion and colour phase} started (Fig.\ref{fig:Fig2}C) (mean proportion correct in the \textit{motion and colour phase}: $M = 83\pm10\%$), compared to participants without insight ($M = 66\pm8\%$; $t(92) = 9.5$, $p < .001$, $d = 1.9$). Unsurprisingly, a difference in behavioural accuracy between insight and no-insight participants also held when all coherence levels were included ($M = 91\pm5\%$ vs. $M = 83\pm7\%$, respectively, t-test: $t(95.4) = 6.9$, $p < .001$, $d = 1.4$). Interestingly, accuracy in the \textit{motion phase}, which was not used in steepness fitting, did not differ between groups (low coherence trials: $M = 59\%$ vs. $M = 62\%$; $t(94.4) = -1.9$, $p = 0.07$, $d = 0.38$; all noise levels: $M = 76\%$ vs $M = 76\%$; $t(96) = 0.45$, $p = 0.7$, $d = 0.09$). Reaction times, which are independent of the choices used in model fitting and thus served as a sanity check for our behavioural metric split, reflected the same improvements upon switching to the colour strategy. Subjects that showed insight about the colour rule ($M = 748.47\pm171.1$ ms) were significantly faster ($t(96.9) = -4.9$, $p < .001$, $d = 0.97$) than subjects that did not ($M = 924.2\pm188.9$ ms) on low coherence trials, as well as over all noise levels ($t(97) = -3.8$, $p < .001$, $d = 0.87$; $M = 675.7\pm133$ ms and $M = 798.7\pm150.3$ ms, respectively). Finally, we asked whether insights occurred with random delays, as reported earlier. To quantify this key characteristic, insight moments were defined as the time points of inflection of the fitted sigmoid function, i.e.
when performance exhibited abrupt increases (see Methods). We verified the precision of our switch point identification by time-locking the data to the individually fitted switch points. This showed that accuracy steeply increased between the halved task blocks (50 trials each) immediately before vs. after the switch, as expected ($M = 62\%$ vs $M = 83\%$, $t(89) = -11.2$, $p < .001$, $d = 2.34$, Fig.\ref{fig:Fig2}C, Fig. S5A). Additionally, reaction times dropped steeply from pre- to post-switch ($M = 971.63$ ms vs. $M = 818.77$ ms, $t(87) = 3.34$, $p < .001$, $d = 0.7$). The average delay of insight onset was 1.3 task blocks, i.e. 130 trials ($\pm95$ trials / $0.95$ blocks, Fig.\ref{fig:Fig2}G). The distribution of delays among insight participants ranged from 0 to 3 blocks after the start of the \textit{motion and colour phase}, and did not differ statistically from a normal distribution, taking into account the hazard rate (exact two-sided Kolmogorov-Smirnov test: $D(48) = 0.15$, $p = 0.69$). Hence, the behaviour of human subjects showed all characteristics of insight: sudden improvements in performance that occurred only in a subgroup and with variable delays. \begin{figure*}[ht] \includegraphics[width=16.5cm]{Fig2.pdf} \caption{Humans: task performance and insight-like strategy switches \footnotesize \textbf{(A)} Accuracy (\% correct) during the \textit{motion phase} increases with increasing motion coherence. N = 99, error bars signify standard error of the mean (SEM). \textbf{(B)} Accuracy (\% correct) over the course of the experiment for all motion coherence levels. The first dashed vertical line marks the onset of the colour predictiveness (\textit{motion and colour phase}), the second dashed vertical line the ``instruction'' about colour predictiveness. Blocks shown are halved task blocks (50 trials each). N = 99, error shadows signify SEM. \textbf{(C)} Switch point-aligned accuracy on the lowest motion coherence level for insight (48/99) and no-insight (51/99) subjects.
Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. \textbf{(D)} Illustration of the sigmoid function for different slope steepness parameters. \textbf{(E)} Difference between BICs of the linear and sigmoid function for each human subject. N = 99. \textbf{(F)} Distributions of fitted slope steepness at inflection point parameter for the control experiment and the classified insight and no-insight groups. \textbf{(G)} Distribution of switch points. Dashed vertical line marks the onset of colour predictiveness. Blocks shown are halved task blocks (50 trials each).} \label{fig:Fig2} \end{figure*} \subsection*{Neural Network Behaviour} To probe whether insight-like behaviour can arise in simple neural networks trained with gradient descent, we simulated 99 network models performing the same decision making task. The networks had two input nodes ($x_c$, $x_m$, for colour and motion, respectively), two input-specific gates ($g_c$, $g_m$) and weights ($w_c$, $w_m$), and one output node ($\hat{y}$, Fig.\ref{fig:Fig1}B). Network weights and gates were initialised at 0.01. The stimulus features motion and colour were each reduced to one input node, which encoded the colour/motion direction of each trial by taking on either a positive or a negative value. More precisely, given the correct decision $y=\pm$, the activities of the input nodes were sampled from i.i.d. normal distributions with means $yM_m$ and $yM_c$ and standard deviations $\sigma_m=0.01$ and $\sigma_c=0.01$ for motion and colour, respectively. Hence $M_m$ and $M_c$ determine the signal-to-noise ratio in each input. We fixed the colour mean shift $M_c=0.22$, while the mean shifts of the motion node differed by noise level and were fitted individually such that each human participant had one matched network with comparable pre-insight task accuracy in each motion noise condition (see below).
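The input scheme just described can be sketched as follows; `M_m` here is a placeholder for the individually fitted motion mean shift:

```python
import numpy as np

def sample_trial(y, M_m, M_c=0.22, sigma_m=0.01, sigma_c=0.01, rng=None):
    """Draw one trial's input-node activities.

    y is the correct decision (+1 or -1); it sets the sign of both mean
    shifts, so each node carries a noisy, signed evidence signal whose
    signal-to-noise ratio is governed by M_m and M_c."""
    rng = rng if rng is not None else np.random.default_rng()
    x_m = rng.normal(y * M_m, sigma_m)   # motion node: SNR depends on fitted M_m
    x_c = rng.normal(y * M_c, sigma_c)   # colour node: fixed mean shift
    return x_m, x_c

rng = np.random.default_rng(0)
trials = [sample_trial(+1, M_m=0.05, rng=rng) for _ in range(1000)]
```

Averaged over many trials, the motion and colour activities cluster around $+M_m$ and $+M_c$ for rightward trials and around the negated means for leftward trials.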
The network multiplied each input node by two parameters, a corresponding weight and a gate, and returned a decision based on the sign of the output node $\hat{y}$: \begin{equation} \hat{y}=\mathrm{sign}(g_mw_mx_m+g_cw_cx_c+\eta) \end{equation} where $\eta\sim\mathcal{N}(0,\sigma)$ is Gaussian noise, and weights and gates are the parameters learned online through gradient descent. To train L1-networks we used a simple squared loss function with L1-regularisation of the gate weights: \begin{equation} \mathcal{L}=\frac{1}{2}(g_mw_mx_m+g_cw_cx_c+\eta-y)^2+\lambda(|g_m|+|g_c|) \end{equation} with a fixed level of regularisation, $\lambda = 0.07$. During training, Gaussian noise $\xi\sim\mathcal{N}(\mu_{\xi} = 0,\sigma_{\xi} = 0.05)$ was added to each gradient update to mimic learning noise and to induce variability between individual networks (same gradient noise level for all networks), yielding the following update equations for noisy SGD of the network's weights, \begin{align} \Delta w_m=&-\alpha x_m g_m (x_m g_m w_m + x_c g_c w_c +\eta - y) +\xi_{w_m}, \end{align} and gates, \begin{align} \Delta g_m =&- \alpha x_m w_m (x_m g_m w_m + x_c g_c w_c + \eta - y) \notag \\ &-\alpha\lambda \mathrm{sign}(g_m) +\xi_{g_m}, \end{align} where, for clarity, we have suppressed the dependence of all quantities on the trial index $t$; analogous equations hold for the colour weights and gates, with all noise terms $\xi_{g_m}$, $\xi_{w_m}$, etc.\ following the same distribution. Using this setup, we studied whether L1-regularisation would lead the network to show key characteristics of insight-like behaviour. Specifically, we reasoned that L1-regularisation of the gate weights would introduce competitive dynamics between the input channels that can lead to non-linear learning dynamics.
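Putting the decision rule, loss, and update equations together, one online training loop can be sketched as follows; this is a minimal sketch in which the output-noise level `sigma_eta` and the simultaneous-update convention are our assumptions, not values fixed in the text:

```python
import numpy as np

def train_l1_network(x_m, x_c, y, alpha=0.6, lam=0.07,
                     sigma_eta=0.01, sigma_xi=0.05, seed=0):
    """Noisy online SGD for the two-input gated network with L1-regularised
    gates. Weights and gates start at 0.01 as in the main text; sigma_eta
    (output noise) is an assumed value."""
    rng = np.random.default_rng(seed)
    w_m = w_c = g_m = g_c = 0.01
    correct = np.zeros(len(y), dtype=bool)
    for t in range(len(y)):
        eta = rng.normal(0.0, sigma_eta)
        out = g_m * w_m * x_m[t] + g_c * w_c * x_c[t] + eta
        correct[t] = np.sign(out) == np.sign(y[t])
        err = out - y[t]                      # d(squared loss)/d(out)
        xi = rng.normal(0.0, sigma_xi, 4)     # independent gradient noise per parameter
        # compute all deltas at the current parameter values, then apply
        dw_m = -alpha * x_m[t] * g_m * err + xi[0]
        dw_c = -alpha * x_c[t] * g_c * err + xi[1]
        dg_m = -alpha * x_m[t] * w_m * err - alpha * lam * np.sign(g_m) + xi[2]
        dg_c = -alpha * x_c[t] * w_c * err - alpha * lam * np.sign(g_c) + xi[3]
        w_m, w_c, g_m, g_c = w_m + dw_m, w_c + dw_c, g_m + dg_m, g_c + dg_c
    return correct, (w_m, w_c, g_m, g_c)
```

Simulating the \textit{motion and colour phase} then amounts to calling this with colour inputs sampled around $\pm M_c$ and motion inputs around the fitted $\pm M_m$. Note how, at small parameter values, the L1 term $-\alpha\lambda\,\mathrm{sign}(g)$ dominates the gate update and pins the gate near zero while the weight can still drift.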
We focused on L1-regularisation because it forces the gates of irrelevant inputs most strongly towards 0, compared to L2-regularisation, which is less aggressive, in particular once gates are already very small. While the multiplicative nature of the weights and gates results in non-linear quadratic and cubic gradient dynamics, applying L1-regularisation leads to a sustained suppression period before the fast transition (see Methods). Networks received an extended pre-task training phase of 6 blocks, but then underwent a training curriculum precisely matched to the human task (2 blocks of 100 trials in the \textit{motion phase} and 5 blocks in the \textit{motion and colour phase}, see Fig. 1D). We adjusted the direction specificity of motion inputs (i.e. the difference in the distribution means from which $x_m$ was drawn for left vs right trials) separately for each participant and coherence condition, such that performance in the motion phase was equated between each pair of human and network (Fig.\ref{fig:Fig3}A, see Methods). Moreover, the colour and motion input sequences used for network training were sampled from the same ten input sequences that humans were exposed to. A learning rate of $\alpha = 0.6$ (same for all networks) was selected to match average learning speed. \subsubsection*{L1-regularised Neural Networks} Networks learned the motion direction-response mapping well in the training phase, during which colour inputs changed randomly and the output should therefore depend only on motion inputs (\emph{motion phase}, 75\% correct, t-test against chance: $t(98) = 33.1$, $p < .001$; the accuracy of humans in this phase was $M = 76\pm6\%$).
As in humans, adding noise to the motion inputs (\textit{motion phase}) resulted in an accuracy gradient that depended on noise level (linear mixed effects model of accuracy: ${\chi}^2$(1) = 165.61, $p < .001$; N = 99, Fig.\ref{fig:Fig3}A), as expected given that input distributions were set such that network performance would match human accuracy (Fig.\ref{fig:Fig3}A-B). Networks also exhibited low and relatively stable performance levels in the two lowest coherence conditions (58\% and 60\%; paired t-test to assess stability in the \textit{motion phase}: $t(98) = -0.7$, $p = 0.49$, $d = 0.02$), and had a large performance difference between high vs low coherence trials ($M = 88\pm6\%$ vs. $M = 74\pm13\%$, $t(137.3) = 9.6$, $p < .001$, $d = 1.36$ for high, i.e. $\ge$ 20\% coherence, vs. low trials). Finally, humans and networks also performed comparably well at the end of learning (last block of the \textit{colour and motion phase}: $M(nets) = 79\pm17\%$ vs. $M(humans) = 82\pm17\%$, $t(195.8) = 1.1$, $p = 0.27$, $d = 0.16$, Fig. S8C), suggesting that at least some networks did start to use colour inputs. Hence, networks' baseline performance and learning were successfully matched to humans. To look for characteristics of insight in network performance, we employed the same approach used for modelling human behaviour, and investigated suddenness, selectivity, and delay. To identify sudden performance improvements, we fitted each network's time course of accuracy on low coherence trials by (1) a linear model and (2) a non-linear sigmoid function, which would indicate gradual performance increases or insight-like behaviour, respectively.
As in humans, network performance on low coherence trials was best fit by a non-linear sigmoid function, indicating that at least a subset of networks showed putatively insight-like transitions (BIC sigmoid function: $M = -10$, $SD = 1.9$, protected exceedance probability: 1; BIC linear function: $M = -9$, $SD = 2.4$, protected exceedance probability: 0) (Fig.\ref{fig:Fig3}D). We then tested whether insight-like behaviour occurred only in a subset of networks (selectivity) by assessing in how many networks the steepness of the performance increase exceeded a chance level defined by a baseline distribution of the steepness. As in humans, we ran simulations of 99 control networks with the same architecture, which were trained on the same task except that during the \textit{motion and colour phase}, the two inputs remained uncorrelated. About half of the networks (48/99, 48.5\%) had steepness values above the 100th percentile of the control distribution, exactly matching the proportion observed in the human sample. The L1-networks that showed sudden performance improvements were not matched to insight humans more often than chance (${\chi}^2$(47) = 27.9, $p = 0.99$), suggesting that network variability did not originate from baseline performance levels or trial orders. Hence, a random subset of networks showed sudden performance improvements comparable to those observed during insight moments in humans (Fig.\ref{fig:Fig3}E). For simplicity, and to ease comparison with humans, we will refer to the two groups as ``insight'' and ``no-insight'' networks. Analysing behaviour separately for the insight and no-insight networks showed that switches to the colour strategy improved the networks' performance on the lowest coherence trials once the \textit{motion and colour phase} started, as compared to networks that did not show a strategy shift ($M = 83\pm11\%$ vs. $M = 64\pm9\%$, respectively, $t(89.8) = 9.2$, $p < .001$, $d = 1.9$, see Fig.\ref{fig:Fig3}C).
The same performance difference between insight and no-insight networks applied when all coherence levels of the \textit{motion and colour phase} were included ($M = 88\pm7\%$ vs. $M = 77\pm6\%$, $t(93.4) = 7.8$, $p < .001$, $d = 1.57$). Unexpectedly, insight networks performed slightly worse on low coherence trials in the motion phase, i.e. before the change in predictiveness of the features ($t(97) = -3.1$, $p = 0.003$, $d = 0.62$; insight networks: $M = 58\pm8\%$; no-insight networks: $M = 64\pm9\%$), in contrast to the lack of pre-insight differences we found in humans. Finally, we asked whether insight-like behaviour occurred with random delays in neural networks, again scrutinising the time points of inflection of the fitted sigmoid function, i.e. when performance exhibited abrupt increases (see Methods). Time-locking the data to these individually fitted switch points verified that, as in humans, the insight-like performance increase was particularly evident around the switch points: accuracy was significantly increased between the halved task blocks preceding and following the insight-like behavioural switch for colour-switching networks ($M = 66\pm8\%$ vs. $M = 86\pm7\%$, $t(91.6) = -12.7$, $p < .001$, $d = 2.6$, see Fig.\ref{fig:Fig3}C, Fig. S5B). Among insight networks, the delay distribution ranged from 1 to 4 blocks after the start of the \textit{motion and colour phase}, and did not differ from a normal distribution taking into account the hazard rate (exact two-sided Kolmogorov-Smirnov test: $D(48) = 0.13$, $p = 0.85$). The average delay of insight-like switches was 1.75 task blocks ($\pm1.05$), corresponding to 175 trials (Fig.\ref{fig:Fig3}F). The insight networks' delay was thus slightly longer than that of humans ($M = 130\pm95$ trials vs. $M = 175\pm105$ trials, $t(92.7) = -2.1$, $p = 0.04$, $d = 0.42$).
The variance of insight-induced strategy switch onsets, as well as the relative variance in the abruptness of these onsets, thus qualitatively matched the behavioural results observed in human participants. The behaviour of L1-regularised neural networks therefore showed all characteristics of human insight: sudden improvements in performance that occurred selectively in a subgroup, with variable random delays. \begin{figure*} \includegraphics[width=16.5cm]{Fig3.pdf} \caption{L1-regularised neural networks: task performance and insight-like strategy switches \footnotesize \textbf{(A)} Accuracy (\% correct) during the \textit{motion phase} increases with increasing motion coherence. N = 99, error bars signify SEM. Grey line is human data for comparison. \textbf{(B)} Accuracy (\% correct) over the course of the experiment for all motion coherence levels. The first dashed vertical line marks the onset of the colour predictiveness (\textit{motion and colour phase}), the second dashed vertical line the ``instruction'' about colour predictiveness. Blocks shown are halved task blocks (50 trials each). N = 99, error shadows signify SEM. \textbf{(C)} Switch point-aligned accuracy on the lowest motion coherence level for insight (48/99) and no-insight (51/99) networks. Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. \textbf{(D)} Difference between BICs of the linear model and sigmoid function for each network. \textbf{(E)} Distributions of fitted slope steepness at inflection point parameter for control networks and classified insight and no-insight groups. \textbf{(F)} Distribution of switch points. Dashed vertical line marks onset of colour predictiveness.
Blocks shown are halved task blocks (50 trials each).} \label{fig:Fig3} \end{figure*} \subsubsection*{L2-regularised Neural Networks} Following our observation that L1-regularised networks exhibited human-like insight behaviour, we investigated whether this was specific to the form of regularisation. We therefore trained otherwise identical networks with an L2-regularisation term on the gate weights. We hypothesised that L2-regularisation would also lead to competitiveness between input nodes, but to a lower extent than L1-regularisation. In particular, we reasoned that because the networks' colour gates would not shrink as close to 0 during the \textit{motion phase}, insight-like behavioural switches would be more frequent and occur earlier. While L2-regularised gate weights led to switches that were similar in abruptness to those previously observed (Fig. S7C), such insight-like behaviours were indeed much more frequent and clustered: 96\% of networks switched to a colour strategy, with a switch point distribution that was much more centred around the onset of the colour predictiveness (Fig. S7F); the average delay was 1 task block ($SD = 1.1$), corresponding to 100 trials after the onset of the colour correlation (\textit{motion and colour phase}). This was significantly shorter than for L1-regularised networks ($M = 1.05\pm1.1$ vs. $M = 1.75\pm1.05$, $t(59.6) = 4$, $p < 0.001$, $d = 0.9$) and also differed from a normal distribution taking into account the hazard rate (exact two-sided Kolmogorov-Smirnov test: $D(95) = 0.26$, $p = 0.005$). Additionally, performance on the lowest coherence level in the last block of the \textit{colour and motion phase} before colour instruction was centred just below ceiling and thus did not show the range of colour use seen in humans and L1-regularised networks ($M(L2-networks) = 97\pm2\%$ vs. $M(humans) = 82\pm17\%$, $t(101.6) = -8.8$, $p < .001$, $d = 1.25$, Fig. S8C).
While L2-regularised networks thus showed abrupt behavioural transitions, they failed to show the other two key characteristics of insight: selectivity and delay. \subsubsection*{Non-regularised Neural Networks} In non-regularised networks, the effects observed in L2-regularised networks were enhanced. 99\% of the networks started using colour inputs (Fig. S8A), but colour use emerged in a more linear, less abrupt way than for L1- or L2-regularised networks. Additionally, there was very little delay, only 0.7 task blocks (70 trials, $\pm0.25$ blocks), between the onset of the \textit{motion and colour phase} and the start of the networks making use of the colour input predictiveness (Fig. S8B). As for L2-networks, this delay was significantly shorter than for L1-regularised networks ($M = 0.7\pm0.55$ vs. $M = 1.75\pm1.05$, $t(49.3) = 6.6$, $p < 0.001$, $d = 1.6$) and also differed from a normal distribution taking into account the hazard rate (exact two-sided Kolmogorov-Smirnov test: $D(98) = 0.35$, $p < .001$). Similarly, performance on the lowest coherence level in the last block indicated that all networks used colour inputs ($M = 100\pm0.3\%$ vs. $M = 82\pm17\%$, $t(98) = -10.4$, $p < .001$, $d = 1.5$, Fig. S8C). Thus, non-regularised networks also failed to show the key insight characteristics of selectivity and delay. \bigskip \subsection*{Origins of Insight-like Behaviour in Neural Networks} Having established the behavioural similarity between L1-networks and humans in an insight task, we asked what gave rise to insight-like switches in some networks, but not others. We therefore investigated the dynamics of gate weights and the effects of noise in insight vs. no-insight networks, as well as the role of the regularisation strength parameter $\lambda$.
\subsubsection*{Colour Gradients Increase after Colour Becomes Predictive} Our first question was how learning about stimulus colour differed between insight and no-insight L1-networks, as expressed by the dynamics of network gradients. We time-locked the time courses of gradients to each network's individual switch point. Right when the switch occurred (at the trial $t$ of the estimated switch), colour gate weight gradients were significantly larger in insight compared to no-insight L1-networks ($M = 0.06\pm0.06$ vs. $M = 0.02\pm0.03$, $t(73.2) = 5.1$, $p < .001$, $d = 1.05$), while this was not true for motion gate weight gradients ($M = 0.18\pm0.16$ vs. $M = 0.16\pm0.16$, $t(97) = 0.7$, $p = 0.5$, $d = 0.13$). Notably, insight networks had larger colour gate weight gradients even before any behavioural changes were apparent, right at the beginning of the \textit{motion and colour phase} (first 5 trials of the \textit{motion and colour phase}: $M = 0.05\pm0.07$ vs. $M = 0.01\pm0.01$; $t(320) = 8.7$, $p < .001$), whereas motion gradients did not differ ($t(576.5) = -0.1$, $p = 0.95$). This increase in colour gate weight gradients for insight networks happened within a few trials after correlation onset (colour gradient on the last trial of the \textit{motion phase}: $M = 0\pm0$ vs. the 5th trial of the \textit{motion and colour phase}: $M = 0.06\pm0.08$; $t(47) = -5.6$, $p < .001$, $d = 1.13$), suggesting that insight networks start early to silently learn more about colour inputs compared to their no-insight counterparts. A change point analysis considering the mean and variance of the gradients confirmed the onset of the \textit{motion and colour phase} to be the change point of the colour gradient mean, with a difference of $0.04$ between the consecutive pre-change and change time points for insight networks vs $0.005$ for no-insight networks (with a change point detected two trials later), indicating considerable learning about colour for insight networks.
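A mean-only variant of such a change point analysis (the study additionally considered the variance) can be sketched on hypothetical gradient data:

```python
import numpy as np

def change_point(series):
    """Single least-squares change point in the mean: choose the split that
    minimises the summed within-segment squared deviations."""
    series = np.asarray(series, dtype=float)
    best_t, best_cost = 1, np.inf
    for t in range(1, len(series)):
        left, right = series[:t], series[t:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# hypothetical colour gradient magnitudes: near zero during the motion phase,
# then jumping at the (simulated) phase onset at trial 200
rng = np.random.default_rng(1)
grads = np.concatenate([np.abs(rng.normal(0.00, 0.002, 200)),
                        np.abs(rng.normal(0.04, 0.020, 100))])
cp = change_point(grads)   # recovered change point, close to trial 200
```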
\subsubsection*{``Silent'' Colour Knowledge Precedes Insight-like Behaviour} A core feature of our network architecture is that inputs were multiplied by two factors, a gate $g$, and a weight $w$, but only gates were regularised. This meant that some networks might have developed larger colour weights, but still showed no signs of colour use, because the gates were very small. This could explain the early differences in gradients reported above. To test this idea, we investigated the absolute size of colour gates and weights of insight vs no-insight L1-networks before and after insight-like switches had occurred. Comparing gates at the start of learning (first trial of the \textit{motion and colour phase}), there were no differences between insight and no-insight networks for either motion or colour gates (colour gates: $M = 0\pm0.01$ vs. $M = 0\pm0.01$; $t(95.3) = 0.8$, $p = 0.44$, motion gates: $M = 0.5\pm0.3$ vs. $M = 0.6\pm0.3$; $t(93.1) = -1.7$, $p = 0.09$, see Fig.\ref{fig:Fig5}A, Fig.\ref{fig:Fig5}H,J). Around the individually fitted switch points, however, the gates of insight and no-insight networks differed only for colour gates (colour gates: $0.2\pm0.2$ vs $0.01\pm0.02$ for insight vs no-insight networks, $t(48) = 6.7$, $p < 0.001$, $d = 1.4$, motion gates: $0.5\pm0.3$ vs $0.5\pm0.3$ for insight vs no-insight networks, $t(95.6) = 0.2$, $p = 0.9$, $d = 0.04$). Insight networks' increased use of colour inputs was particularly evident at the end of learning (last trial of the \textit{motion and colour phase}) and reflected in larger colour gates ($0.7\pm0.3$ vs $0.07\pm0.2$ for insight vs no-insight networks, $t(73.7) = 13.4$, $p < 0.001$, $d = 2.7$) while the reverse was true for motion gates ($M = 0.2\pm0.2$ vs $M = 0.5\pm0.3$, respectively, $t(81) = -7.5$, $p < 0.001$, $d = 1.5$, see Fig.\ref{fig:Fig5}B, Fig.\ref{fig:Fig5}H,J). 
Hence, differences in gating between network subgroups were only present after, but not before, learning, and did not explain the above reported gradient differences or which networks would show insight-like behaviour. A different pattern emerged when investigating the weights of the networks. Among insight networks, colour weights were significantly larger already at the start of learning (first trial of the \textit{motion and colour phase}), as compared to no-insight networks (insight: $M = 1.2\pm0.6$; no-insight: $M = 0.4\pm0.3$, $t(66.2) = 8.1$, $p < .001$, $d = 1.7$, see Fig.\ref{fig:Fig5}C, Fig.\ref{fig:Fig5}G,I). This was not true for motion weights (insight: $M = 3.4\pm0.7$; no-insight: $M = 3.5\pm0.5$, $t(89.5) = -1.1$, $p = 0.3$, $d = 0.2$, see Fig.\ref{fig:Fig5}C, Fig.\ref{fig:Fig5}G,I). Thus, colour information appeared to be encoded in the weights of insight networks already before any insight-like switches occurred. Because the colour gates were suppressed through the L1-regularisation mechanism before learning, the networks did not differ in any observable colour sensitivity. The increase of colour gates reported above could then unlock this ``silent knowledge'' of colour relevance. To experimentally test the effect of pre-learning colour weights, we ran a new sample of L1-networks ($N = 99$), and adjusted the colour and motion weight of each network to the mean absolute colour and motion weight sizes we observed in insight networks at the start of learning (first trial of the \emph{motion and colour phase}). Gates were left untouched. This increased the proportion of insight networks from 48.5\% to 70.7\%, confirming that encoding of colour information at an early stage was an important factor for later switches, though not sufficient to cause insight-like behaviour in all networks.
Note that before the weight adjustments were made, the performance of the new networks did not differ from that of the original L1-networks ($M = 0.8\pm0.07$ vs $M = 0.8\pm0.07$, $t(195) = 0.2$, $p = 0.9$, $d = 0.03$). Within the new sample, networks that would later show insight-like behaviour and those that would not also did not differ from their respective counterparts in the original sample (insight: $M = 0.7\pm0.07$ vs $M = 0.7\pm0.07$, $t(100.9) = 1.4$, $p = 0.2$, $d = 0.3$; no-insight: $M = 0.8\pm0.05$ vs $M = 0.8\pm0.07$, $t(71) = 0.9$, $p = 0.4$, $d = 0.2$). Weight and gate differences between L1- and L2-networks are reported in the Supplementary Material (see also Fig.\ref{fig:Fig5}E-F). \begin{figure} \centering \includegraphics[width=10cm]{Fig5_v2.pdf} \caption{Gate and weight size differences at the start and end of learning and their dynamics. \footnotesize Colour and motion gates at \textbf{(A)} the first trial and \textbf{(B)} the last trial of the \textit{motion and colour phase}. \textbf{(C)} Colour and motion weights at the first trial and \textbf{(D)} the last trial of the \textit{motion and colour phase}. Error bars signify SEM. \textbf{(E)} Gate weight sizes for colour and motion gate weights at the first trial and \textbf{(F)} the last trial of the \textit{motion and colour phase} for L1- and L2-regularised networks. \textbf{(G)} Weights of insight L1-networks. The dashed vertical line marks the onset of the \textit{motion and colour phase}. Error shadows signify SEM. \textbf{(H)} Gates of insight L1-networks. The dashed vertical line marks the onset of the \textit{motion and colour phase}. Error shadows signify SEM. \textbf{(I)} Weights of no-insight L1-networks. The dashed vertical line marks the onset of the \textit{motion and colour phase}. Error shadows signify SEM. \textbf{(J)} Gates of no-insight L1-networks. The dashed vertical line marks the onset of the \textit{motion and colour phase}.
Error shadows signify SEM.} \label{fig:Fig5} \end{figure} \subsubsection*{Noise is Needed For Insight-like Behaviour} One possible factor that could explain the early differences between the weights of the network subgroups is noise. The networks were exposed to noise at two levels: on each trial, noise was added at the output stage ($\eta \sim \mathcal{N}(0,\,\sigma_{\eta}^{2})$) and to the gate and weight gradients during updating ($\xi \sim \mathcal{N}(0,\,\sigma_{\xi}^{2})$). We probed whether varying the level of noise added during gradient updating, i.e. $\sigma_{\xi}$, would affect the proportion of networks exhibiting insight-like behaviour. Parametrically varying the variance of the noise added to colour and motion gates and weights led to increases in insight-like behaviour, from not a single insight network when no noise was added to 100\% insight networks when $\sigma_{\xi}$ exceeded approximately 0.05 (Fig.\ref{fig:Fig7}A). Since gate and weight updates were coupled (see Eq. 4-7), noise during one gradient update could in principle affect other updates as well. We therefore separately manipulated the noise added to the updates of colour gates and weights, motion gates and weights, all weights, and all gates. This showed that adding noise to only the weight updates was sufficient to induce insight-like behaviour (Fig.\ref{fig:Fig7}B). In principle, adding noise to only the gates was sufficient for insight-like switches as well, although the noise applied to the gates had to be relatively larger to achieve the same effect as applying noise to the weight gradients (Fig.\ref{fig:Fig7}B), presumably due to the effect of regularisation. Adding noise only to the gradients of motion gates or weights, but not to the colour gradients, was not sufficient to induce insight-like switches (Fig.\ref{fig:Fig7}B).
On the other hand, noise added only to the colour parameter updates quickly led to substantial amounts of insight-like behavioural switches (Fig.\ref{fig:Fig7}B). An analysis of \emph{cumulative} noise showed that the effects reported above mostly reflect momentary noise fluctuations: cumulative noise added to the output did not differ between insight and no-insight networks at either the start (first trial of the \textit{motion and colour phase}) or the end of learning (last trial of the \textit{motion and colour phase}) (start: $M = -0.3\pm4.7$ vs. $M = -0.6\pm3.9$; $t(91.2) = 0.4$, $p = 0.7$; end: $M = 0.6\pm7.1$ vs. $M = 0.5\pm7.1$; $t(96.7) = 0.07$, $p = 1$), and the same was true for cumulative noise added during the gradient updates to weights and gates (see Supplementary Material for details). We therefore conclude that Gaussian noise added to the updates particularly of the colour gate weights, in combination with ``silent knowledge'' about colour information stored in suppressed weights, is a crucial factor for insight-like behavioural changes. \begin{figure}[h] \centering \includegraphics[width=11cm]{Fig8_v3.pdf} \caption{Influence of Gaussian noise distribution variance $\sigma_{\xi}$ and regularisation parameter $\lambda$ on insight-like switches in L1-regularised networks \footnotesize{ \textbf{(A)} Influence of the noise standard deviation ($\sigma_{\xi}$) applied to all gradient updates on the frequency of switches to a colour strategy (number of networks defined as having ``insight''). The frequency of insight-like switches increases gradually with $\sigma_{\xi}$ until it plateaus. Error bars are SD. We ran 10 x 99 simulations. \textbf{(B)} Effects of noise added only to either all weights ($\sigma_{\xi_{w}}$), all gates ($\sigma_{\xi_{g}}$), all motion parameters (i.e.
motion weight and motion gates, $\sigma_{\xi_{g_m,w_m}}$) and all colour parameters ($\sigma_{\xi_{g_c,w_c}}$) on the frequency of insight-like switches. The frequency of insight-like switches increases gradually with $\sigma_{\xi_{w}}$ until it plateaus (dashed purple line), while it jumps abruptly after relatively high levels of $\sigma_{\xi_{g}}$ (solid purple line). $\sigma_{\xi_{g_m,w_m}}$ on motion alone is not sufficient for insight-like switches (lightest purple shade), but small $\sigma_{\xi_{g_c,w_c}}$ is sufficient for the frequency of insight networks to plateau (darkest purple shade). Error bars are SD. We ran 10 $\times$ 99 simulations. Colour scheme as in Fig. 1B} \textbf{(C)} Influence of $\lambda$ on the frequency of switches to a colour strategy (number of networks defined as having ``insight''). The frequency of insight-like switches declines with increasing $\lambda$ for L1-regularised networks, but is largely unaffected for L2-regularised networks. \textbf{(D)} Influence of $\lambda$ on the averaged switch points. The averaged switch point occurs later in the task with increasing $\lambda$ for both L1- and L2-regularised networks. Error bars signify SEM.} \label{fig:Fig7} \end{figure} \subsubsection*{Regularisation Parameter $\lambda$ Affects Insight Delay and Frequency} In our previous results, the regularisation parameter $\lambda$ was arbitrarily set to $0.07$. We next tested the effect of $\lambda$ on insight-like behaviour. The number of L1-regularised insight networks linearly decreased with increasing $\lambda$ (Fig.\ref{fig:Fig7}C). $\lambda$ further affected the delay of the insight-like switches, with smaller $\lambda$ values leading to decreased average delays of switching to a colour strategy after the predictiveness of the inputs had changed (Fig.\ref{fig:Fig7}D).
The regularisation parameter $\lambda$ thus affects two of the key characteristics of human insight -- selectivity and delay. \section*{Discussion} We investigated insight-like learning behaviour in humans and neural networks. In a binary decision-making task with a hidden regularity that entailed an alternative way to solve the task more efficiently, a subset of regularised neural networks with multiplicative gates of their input channels (as an attention mechanism) displayed spontaneous, jump-like learning that signified the sudden discovery of the hidden regularity -- mysterious insight moments boiled down to the simplest expression. Networks exhibited all key characteristics of human insight-like behaviour in the same task (suddenness, selectivity, delay). Crucially, the neural networks were trained with standard stochastic gradient descent, which is often associated with gradual learning. Our results therefore suggest that the behavioural characteristics of aha-moments can arise from gradual learning mechanisms, and hence suffice to mimic human insight. Network analyses identified the factors that caused insight-like behaviour in L1-networks: noise added during the gradient computations accumulated to non-zero weights in some networks. As long as colour information was not useful yet, i.e. prior to the onset of the hidden regularity, close-to-0 colour gates rendered these weights ``silent'', such that no effects on behaviour could be observed. Once the hidden colour regularity became available, the non-zero colour weights helped to trigger non-linear learning dynamics that arise during gradient updating and depend on the starting point. Hence, our results hint at important roles of ``attentional'' gating, noise, and regularisation as the computational origins of sudden, insight-like behavioural changes.
We report several findings that are in line with this interpretation: addition of gradient noise $\xi$, in particular to the colour weights and gates, pre-learning adjustment of colour weights, and a reduction of the regularisation parameter $\lambda$ all increased insight-like behaviour. We note that our networks did not have a hidden layer, demonstrating that no hidden layer is needed to produce non-linear learning dynamics. Our findings have implications for the conception of insight phenomena in humans. While present-day machines clearly do not have the capacity to have aha-moments due to their lack of meta-cognitive awareness, our results show that the remarkable behavioural signatures of insights by themselves do not necessitate a dedicated process. This raises the possibility that sudden behavioural changes which occur even during gradual learning could in turn lead to the subjective effects that accompany insights \cite{Frensch2003TheRegularity., Esser2022WhatConsciousness}. Our results also highlight noise and regularisation as aspects of brain function that are involved in the generation of insights. Cellular and synaptic noise is omnipresent in brain activity \cite{Faisal2008NoiseSystem,Waschke2021BehaviorVariability}, and has a number of known benefits, such as stochastic resonance and the robustness that comes with probabilistic firing of neurons based on statistical fluctuations due to Poissonian neural spike timing \cite{Rolls2008SpatialSystem}. It has also been noted that noise plays an important role in jumps between brain states, where noise provokes transitions between attractor states \cite{Rolls2012TheFunction}. Previous studies have therefore noted that stochastic brain dynamics can be advantageous, allowing e.g. for creative problem solving (as in our case), exploratory behaviour, and accurate decision making \cite{Rolls2012TheFunction, Faisal2008NoiseSystem, Garrett2013Neuroscience, Waschke2021BehaviorVariability}.
Our work adds to this literature a computationally precise explanation of how noise can lead to insights. Questions about whether inter-individual differences in neural variability predict insights \cite{Garrett2013Neuroscience}, or about whether noise that occurs during synaptic updating is crucial, remain interesting topics for future research. Previous work has also suggested the occurrence and possible usefulness of regularisation in the brain. Regularisation has for instance been implicated in synaptic scaling, which helps to adjust synaptic weights in order to maintain a global firing homeostasis \cite{Lee2019MechanismsVivo}, thereby aiding energy requirements and reducing memory interference \cite{Tononi2014SleepIntegration, DeVivo2017UltrastructuralCycle}. It has also been proposed that regularisation modulates the threshold for induction of long-term potentiation \cite{Lee2019MechanismsVivo}. These mechanisms therefore present possible synaptic factors that contribute to insight-like behaviour in humans and animals. We note that synaptic scaling has often been linked to sleep \cite{Tononi2014SleepIntegration}, and regularisation during sleep has also been suggested to help avoid overfitting to experiences made during the day, and therefore to aid generalisation \cite{Hoel2021TheGeneralization}. Since our experiments were conducted in an uninterrupted fashion during daylight, our findings could not reflect any sleep effects. The findings above nevertheless suggest a possible link between sleep, synaptic scaling and insight \cite{Wagner2004, Lacaux2021SleepSpot}. On a more cognitive level, regularisation has been implicated in the context of heuristics. In this notion, regularisation has been proposed to function as an infinitely strong prior in a Bayesian inference framework \cite{Parpart2018HeuristicsPriors}.
This infinitely strong prior would work as a sort of attention mechanism and regularise input and information in a way that is congruent with the specific prior, whereas a finite prior would under this assumption enable learning from experience \cite{Parpart2018HeuristicsPriors}. Another account regards cognitive control as regularised optimisation \cite{Ritz2022CognitiveProblem}. According to this theory, better transfer learning is supported by effort costs regularising towards more task-general policies. It therefore seems possible that the factors that impact regularisation during learning can also lead to a neural switch between states that might be more or less likely to govern insights. The occurrence of insight-like behaviour with the same characteristics as found in humans was specific to L1-regularised networks, while no comparable similarity occurred in L2- or non-regularised networks. Although L2-regularised neural networks learned to suppress initially irrelevant colour feature inputs and showed abrupt performance increases reminiscent of insights, only L1-networks exhibited a wide distribution of time points at which the insight-like switches occurred (delay) as well as a selectivity of the phenomenon to a subgroup of networks, as found in humans. We note that L2- and non-regularised networks technically performed better on the task, because they collectively improved their behavioural efficiency sooner. One important question that remains is under which circumstances L1 would be the most beneficial form of regularisation. One possibility could be that the task is too simple for L1-regularisation to be beneficial. It is conceivable that L1-regularisation only starts being advantageous in more complex task settings, when generalisation across task sets is required and a segregation of task dimensions to learn about at a given time would prove useful.
Taken together, gradual training of neural networks with gate modulation leads to insight-like behaviour as observed in humans, and points to roles of regularisation, noise and ``silent knowledge'' in this process. These results make an important contribution to the general understanding of learning dynamics and representation formation in environments with non-stationary feature relevance in both biological and artificial agents. \section*{Methods} \subsection*{Task} \subsubsection*{Stimuli} We employed a perceptual decision task that required a binary choice about circular arrays of moving dots \cite{Rajananda2018AResearch}, similar to the spontaneous strategy switch task developed earlier \cite{Schuck2015}. Dots were characterised by two features, (1) a motion direction (four possible orthogonal directions: NW, NE, SW, SE) and (2) a colour (orange or purple, Fig.\ref{fig:Fig1}A). The noise level of the motion feature was varied in 5 steps (5\%, 10\%, 20\%, 30\% or 45\% coherent motion), making motion judgement relatively harder or easier. Colour difficulty was constant, thus consistently allowing easy identification of the stimulus colour. The condition with the most noise (5\% coherence) occurred slightly more frequently than the other conditions (30 trials per 100, vs. 10, 20, 20 and 20 for the other conditions). The task was coded in JavaScript and made use of the jsPsych 6.1.0 plugins. Participants were restricted to desktop computers (no tablets or mobile phones) with a screen diagonal of at least 13 inches. Subjects were further restricted to use either a Firefox or Google Chrome browser to run the experiment. On every trial, participants were presented with a cloud of 200 moving dots with a radius of 7 pixels each. In order to avoid tracking of individual dots, dots had a lifetime of 10 frames before they were replaced. Within the circle shape of 400 pixel width, a single dot moved 6 pixel lengths in a given frame.
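The per-block condition frequencies described above can be sketched as a simple trial-list generator (a hypothetical sketch; the mapping of the counts 30, 10, 20, 20, 20 onto the listed coherence levels follows the order given in the text and is an assumption):

```python
import random

# Per-100-trial counts for the five motion coherence levels;
# 5% coherence (hardest) occurs most often.
coherence_counts = {0.05: 30, 0.10: 10, 0.20: 20, 0.30: 20, 0.45: 20}

def make_block(seed=0):
    """Return one shuffled 100-trial block of coherence levels."""
    trials = [c for c, n in coherence_counts.items() for _ in range(n)]
    random.Random(seed).shuffle(trials)
    return trials

block = make_block()
```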
Each dot was designated to be either coherent or incoherent and remained so throughout all frames in the display, whereby each incoherent dot followed a randomly designated alternative direction of motion. The trial duration was 2000 ms and a response could be made at any point during that time window. After a response had been made via one of the two button presses, the white fixation cross at the centre of the stimulus would turn into a binary feedback symbol (happy or sad smiley) that would be displayed until the end of the trial (Fig.\ref{fig:Fig1}C). An inter-trial interval (ITI) of either 400, 600, 800 or 1000 ms was randomly selected. If no response was made, a "TOO SLOW" feedback was displayed for 300 ms before being replaced by the fixation cross for the remaining time of the ITI. \subsubsection*{Task Design} For the first 400 trials, the \textit{motion phase}, the correct binary choice was only related to stimulus motion (two directions each on a diagonal were mapped onto one choice), while the colour changed randomly from trial to trial (Fig.\ref{fig:Fig1}D). For the binary choice, participants were given two response keys, "X" and "M". The NW and SE motion directions corresponded to a left key press ("X"), while NE and SW corresponded to a right key press ("M") (Fig.\ref{fig:Fig1}A). Participants received trial-wise binary feedback (correct or incorrect), and could therefore learn which choice they had to make in response to which motion direction (Fig.\ref{fig:Fig1}C). We did not specifically instruct participants to pay attention to the motion direction. Instead, we instructed them to learn how to classify the moving dot clouds using the two response keys, so that they would maximise their number of correct choices.
To ensure that participants would pick up on the motion relevance and the correct stimulus-response mapping, motion coherence was set to 100\% in the first block (100 trials), meaning that all dots moved in one coherent direction. Participants learned this mapping well and performed close to ceiling (87\% correct, t-test against chance: $t(98) = 37.4$, $p < .001$). In the second task block, we introduced the lowest, and therefore easiest, three levels of motion noise (20\%, 30\% and 45\% coherent motion), before starting to use all five noise levels in block 3. Since choices during this phase should become solely dependent on motion, they should be affected by the level of motion noise. We assessed how well participants had learned to discriminate the motion direction after the fourth block. Participants who did not reach an accuracy level of at least 85\% in the three lowest motion noise levels during this last task block of the pre-training were excluded from the \textit{motion and colour phase}. All subjects were notified before starting the experiment that they could only advance to the second task phase (the \textit{motion and colour phase}, although this name was not communicated to participants) if they performed well enough in the first phase, and that they would be paid accordingly for either one or two completed task phases. After the \textit{motion phase}, in the \textit{motion and colour phase}, the colour feature became predictive of the correct choice in addition to the motion feature (Fig.\ref{fig:Fig1}D). This meant that each response key, and thus motion direction diagonal, was consistently paired with one colour, and that colour was fully predictive of the required choice. Orange henceforth corresponded to a correct "X" key press and a NW/SE motion direction, while purple was predictive of a correct "M" key press and NE/SW motion direction (Fig.\ref{fig:Fig1}A).
This change in feature relevance was not announced to participants, and the task continued for another 400 trials as before - the only change being the predictiveness of colour. Before the last task block we asked participants whether they 1) noticed a rule in the experiment, 2) how long it took until they noticed it, 3) whether they used the colour feature to make their choices and 4) to replicate the mapping between stimulus colour and motion directions. We then instructed them about the correct colour mapping and asked them to rely on colour for the last task block. This served as proof that subjects were in principle able to do the task based on the colour feature, and showed that, with this easier task strategy, accuracy should be near ceiling for all participants in the last instructed block. \subsection*{Human Participants} Participants between 18 and 30 years of age were recruited online through Prolific. Participation in the study was contingent on showing learning of the stimulus classification. Hence, to assess whether participants had learned to correctly identify motion directions of the moving dots, we probed their accuracy on the three easiest, least noisy coherence levels in the last block of the uncorrelated task phase. If subjects reached an accuracy level of at least 85\%, they were selected for participation in the experiment. Ninety-six participants were excluded due to insufficient accuracy levels after the \textit{motion phase} as described above. Ninety-nine participants learned to classify the dots' motion direction, passed the accuracy criterion and completed both task phases. These subjects make up the final sample included in all analyses. Thirty-four participants were excluded due to various technical problems or premature quitting of the experiment. All participants gave informed consent prior to beginning the experiment. The study protocol was approved by the local ethics committee of the Max Planck Institute for Human Development.
Participants received 3£ for completing only the first task phase and 7£ for completing both task phases. \subsection*{Neural Networks} \subsubsection*{L1-regularised Neural Networks} We used a simple neural network model to reproduce the human behavioural data in a simplified supervised learning regression setting. We trained a network with two input nodes, two input gates and one output node on the same decision making task (Fig.\ref{fig:Fig1}B). The network received two inputs, $x_m$ and $x_c$, corresponding to the stimulus motion and colour, respectively, and had one output, $\hat{y}$. Importantly, each input had one associated multiplicative gate ($g_m$, $g_c$) such that the output activation was defined as $\hat{y}=\mathrm{sign}(g_mw_mx_m+g_cw_cx_c+\eta)$ where $\eta\sim\mathcal{N}(0,\sigma)$ is Gaussian noise (Fig.\ref{fig:Fig1}B). To introduce competitive dynamics between the input channels, we added L1-regularisation on the gate weights $g$, resulting in the following loss function: \begin{align} \mathcal{L}=\frac{1}{2}(g_mw_mx_m+g_cw_cx_c+\eta-y)^2+\lambda(|g_m|+|g_c|) \end{align} The network was trained in a gradual fashion through online gradient descent with Gaussian white noise $\xi$ added to the gradient update and a fixed learning rate $\alpha$. Given the loss function, this yields the following update equations for noisy stochastic gradient descent (SGD): \begin{align} &\Delta w_m =-\alpha x_m g_m (x_m g_m w_m + x_c g_c w_c +\eta - y) +\xi_{w_m}\\ &\Delta g_m =-\alpha x_m w_m (x_m g_m w_m + x_c g_c w_c + \eta - y) \notag \\ & -\alpha\lambda \,\mathrm{sign}(g_m) +\xi_{g_m}\\ &\Delta w_c=-\alpha x_c g_c (x_c g_c w_c + x_m g_m w_m + \eta - y) +\xi_{w_c}\\ &\Delta g_c =-\alpha x_c w_c (x_c g_c w_c + x_m g_m w_m + \eta - y) \notag \\ &- \alpha\lambda \,\mathrm{sign}(g_c) +\xi_{g_c} \end{align} with $\lambda = 0.07$, $\alpha = 0.6$ and $\sigma_{\xi} = 0.05$.
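The update equations above can be sketched as a single noisy SGD step (a minimal numpy sketch; the simultaneous update from the pre-update parameter values, the output-noise standard deviation, and the example inputs are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lam = 0.6, 0.07             # learning rate and regularisation strength from the text
sigma_xi, sigma_eta = 0.05, 0.1    # noise SDs; sigma_eta is an assumption

def sgd_step(params, x_m, x_c, y):
    """One noisy SGD step implementing the four update equations."""
    w_m, g_m, w_c, g_c = params
    eta = rng.normal(0.0, sigma_eta)                    # output noise eta
    err = g_m * w_m * x_m + g_c * w_c * x_c + eta - y   # prediction error
    xi = rng.normal(0.0, sigma_xi, size=4)              # gradient noise xi
    d_wm = -alpha * x_m * g_m * err + xi[0]
    d_gm = -alpha * x_m * w_m * err - alpha * lam * np.sign(g_m) + xi[1]
    d_wc = -alpha * x_c * g_c * err + xi[2]
    d_gc = -alpha * x_c * w_c * err - alpha * lam * np.sign(g_c) + xi[3]
    return params + np.array([d_wm, d_gm, d_wc, d_gc])

new_params = sgd_step(np.array([0.5, 0.5, 0.0, 0.0]), x_m=1.0, x_c=0.22, y=1.0)
```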
This implies that the evolution of the colour weights and gates will exhibit non-linear quadratic and cubic dynamics, driven by the interaction of $w_c$ and $g_c$. Multiplying the weights $w$ with the regularised gate weights $g$ leads to smaller weights and therefore initially slower increases of the colour weights $w_c$ and respective gate weights $g_c$ after colour has become predictive of correct choices. To understand this effect of non-linearity analytically, we used a simplified setup of the same model without gate weights: \begin{equation} \mathcal{L}=[w_mx_m+w_cx_c+\eta-y]^2 \end{equation} Using this model, we observe exponential increases of the colour weights $w_c$ after the onset of the \textit{motion and colour phase}. This confirms that the interaction of $w_c$ and $g_c$, as well as the regularisation applied to $g_c$, are necessary for the insight-like non-linear dynamics, including a distribution of insight onsets as well as variety in the slope steepness of insight-like switches. Note that because the regularisation term is non-differentiable at $0$, we could not take the limit $\alpha\rightarrow 0$, and instead averaged over the data. To avoid oscillations of the coefficients around $0$ due to the non-differentiability, we added the following rules after each update of the gates: (1) if the gate $g^t$ was zero before the update, a regularisation term $-\min(\alpha\lambda, |g^{t+1}|)\,\mathrm{sign}(g^{t+1})$ was added and (2) if the gate changed sign during the update, the value was set to $0$. The accuracy is given by: \begin{align} &\mathbb{P}[\hat{y}=y|w_m,g_m,w_c,g_c] \notag \\ &=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{g_m w_m x_m +g_c w_c x_c}{\sqrt{2((g_m w_m \sigma_m)^2+(g_c w_c \sigma_c)^2+\sigma^2)}}\right)\right] \end{align} We trained the network on a curriculum precisely matched to the human task, and adjusted hyperparameters (noise levels) such that baseline network performance and learning speed were carefully equated between humans and networks.
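The two gate stabilisation rules and the closed-form accuracy can be sketched as follows (a minimal sketch; function names and the example values are illustrative, not the paper's exact implementation):

```python
import math
import numpy as np

alpha, lam = 0.6, 0.07  # learning rate and regularisation strength from the text

def update_gate(g_old, step, noise):
    """One gate update with the two rules handling the kink of |g| at zero.
    `step` is the data-driven gradient term; `noise` is the xi sample."""
    if g_old == 0.0:
        g_new = g_old + step + noise
        # rule (1): shrink toward zero by at most alpha*lam
        g_new -= min(alpha * lam, abs(g_new)) * np.sign(g_new)
    else:
        g_new = g_old + step - alpha * lam * np.sign(g_old) + noise
        if np.sign(g_new) != np.sign(g_old):
            g_new = 0.0   # rule (2): sign flips are clipped to zero
    return g_new

def accuracy(w_m, g_m, w_c, g_c, x_m, x_c, sigma_m, sigma_c, sigma):
    """P[y_hat = y] as in the closed-form accuracy expression."""
    num = g_m * w_m * x_m + g_c * w_c * x_c
    den = math.sqrt(2 * ((g_m * w_m * sigma_m) ** 2
                         + (g_c * w_c * sigma_c) ** 2 + sigma ** 2))
    return 0.5 * (1.0 + math.erf(num / den))
```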
Specifically, we simulated the same number of networks as human participants included in the final analysis sample ($N = 99$). We matched the motion-noise-based performance variance of a given simulation to a respective human subject using a non-linear COBYLA optimiser. While the mean of the colour input distribution (0.22) as well as the standard deviations of both input distributions were fixed (0.01 for colour and 0.1 for motion), the respective motion input distribution mean values were individually fitted for each single simulation as described above. The input sequences the networks received were sampled from the same ten input sequences that humans were exposed to in task phase two. This means that for the task part where colour was predictive of the correct binary choice, the \textit{motion and colour phase} (500 trials in total), networks and humans received the same input sequences. The networks were given a slightly longer \textit{training phase} of six blocks (600 trials) in comparison to the two-block \textit{training phase} that human subjects were exposed to (Fig.\ref{fig:Fig1}D). Furthermore, human participants first completed a block with 100\% motion coherence before doing one block with low motion noise. The networks received six \textit{training phase} blocks containing the three highest motion coherence levels. Both human subjects and networks completed two blocks including all noise levels in the \textit{motion phase} before colour became predictive in the \textit{motion and colour phase}. \subsubsection*{L2-regularised Neural Networks} To probe the effect of the aggressiveness of the regulariser on insight-like switch behaviour in networks, we compared our L1-regularised networks with models of the same architecture, but with L2-regularisation added on the gate weights $g$.
This yielded the following loss function: \begin{equation} \mathcal{L}=\frac{1}{2}(g_mw_mx_m+g_cw_cx_c+\eta-y)^2+\frac{\lambda}{2}(g_m^{2}+g_c^{2}) \end{equation} From the loss function we can again derive the following update equations for noisy stochastic gradient descent (SGD): \begin{align} \Delta w_m&=-\alpha x_m g_m (x_m g_m w_m + x_c g_c w_c +\eta - y) +\xi_{w_m}\\ \Delta g_m &=- \alpha x_m w_m (x_m g_m w_m + x_c g_c w_c + \eta - y) \notag \\ &-\alpha\lambda \,\mathrm{sign}(g_m)|g_m| +\xi_{g_m}\\ \Delta w_c&=-\alpha x_c g_c (x_c g_c w_c + x_m g_m w_m + \eta - y) +\xi_{w_c}\\ \Delta g_c&=-\alpha x_c w_c (x_c g_c w_c + x_m g_m w_m + \eta - y) \notag \\ &-\alpha\lambda \,\mathrm{sign}(g_c)|g_c| +\xi_{g_c} \end{align} with $\lambda = 0.07$, $\alpha = 0.6$ and $\sigma_{\xi} = 0.05$. The training was otherwise the same as for the L1-regularised networks. \subsection*{Modelling of insight-like switches} \subsubsection*{Models of colour use} In order to probe whether strategy switches in low coherence trials occurred abruptly, we compared three different models with different assumptions about the form of the data. First, we fitted a linear model with two free parameters: $$ y = mt + y_0 $$ where $m$ is the slope, $y_0$ the y-intercept and $t$ is time (here, task blocks; Fig. S2). This model should fit the data of no-insight participants well, where colour use either increases linearly over the course of the experiment or stays at a constant level. Contrasting the assumptions of the linear model, we next tested whether colour-based responses increased abruptly by fitting a step model with three free parameters, a switch point $t_s$, the step size $s$ and a maximum value $y_{max}$ (Fig.
S2), so that $$ y = \begin{cases} y_{max} - s\ & \text{if $t < t_s$}\\ y_{max} & \text{if $ t \geq t_s$} \end{cases} $$ We also included a sigmoid function with three free parameters as a smoother approximation of the step model: $$ y = \frac{y_{max} - y_{min}}{1 + e^{-m(t-t_s)}} + y_{min} $$ where $y_{max}$ is the fitted maximum value of the function, $m$ is the slope and $t_s$ is the inflection point (Fig. S2). $y_{min}$ was given by each individual's averaged accuracy on 5\% motion coherence trials in blocks 3-4. Comparing the model fits across all subjects using the Bayesian Information Criterion (BIC) and protected exceedance probabilities yielded a preference for the sigmoid function over both the step and linear models, for both humans (Fig.\ref{fig:Fig2}E) and L1-regularised neural networks (Fig.\ref{fig:Fig3}D). First, this supports our hypothesis that insight-like strategy switches do not occur in an incremental linear fashion, but abruptly, with variance in the steepness of the switch. Second, this implies that at least a subset of subjects shows evidence for an insight-like strategy switch. \subsubsection*{Human participants} To investigate these insight-like strategy adaptations, we modelled human participants' data using the individually fitted sigmoid functions (Fig. S3). The criterion we defined in order to assess whether a subject had switched to the colour strategy was the slope at the inflection point, expressing how steep the performance jump was after having an insight about colour.
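For illustration, fitting the sigmoid model and reading off its slope at the inflection point can be sketched with synthetic data (a minimal sketch; the hypothetical block-wise accuracies, starting values, and Gaussian-residual BIC form are assumptions, not the paper's exact fitting pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

y_min = 0.5  # per-subject baseline accuracy (assumed here)

def sigmoid(t, y_max, m, t_s):
    """Sigmoid colour-use model with fixed y_min."""
    return (y_max - y_min) / (1.0 + np.exp(-m * (t - t_s))) + y_min

# hypothetical block-wise accuracies with an abrupt switch around block 9
t = np.arange(16, dtype=float)
y = np.where(t < 9, 0.5, 0.95) + np.random.default_rng(1).normal(0, 0.02, 16)

(y_max, m, t_s), _ = curve_fit(sigmoid, t, y, p0=[0.9, 1.0, 8.0], maxfev=20000)

# slope at the inflection point: y'(t_s) = m * (y_max - y_min) / 4
slope = m * (y_max - y_min) / 4.0

# BIC under Gaussian residuals: n * log(RSS / n) + k * log(n)
rss = float(np.sum((y - sigmoid(t, y_max, m, t_s)) ** 2))
n, k = len(y), 3
bic = n * np.log(rss / n) + k * np.log(n)
```

The same BIC computation applied to the linear and step models allows the three fits to be compared per subject.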
We obtained this value by taking the sigmoid function's partial derivative with respect to time $$ \frac{\partial y}{\partial t} = (y_{max} - y_{min}) \frac{m e^{-m(t - t_s)}}{(1 + e^{-m(t - t_s)})^2} $$ and then evaluating the above equation at the fitted switch point, $t = t_s$, which yields: $$ y'(t_s) = \frac{1}{4} m (y_{max} - y_{min}) $$ Switch misclassifications can be caused by irregularities and small jumps in the data, irrespective of a colour strategy switch. We therefore corrected for a general fit of the data to the model by subtracting the individually assessed general model fit from the slope steepness at the inflection point. Insight subjects were then classified as those participants whose corrected slope steepness at the inflection point lay outside the full range (100th percentile) of a control group's (no change in predictiveness of colour) distribution of that same parameter. By definition, insights about a colour rule cannot occur in this control condition; hence values outside this out-of-sample distribution evidence abrupt strategy improvements hinting at insight (Fig.\ref{fig:Fig3}F). Before the last task block we asked participants whether they used the colour feature to make their choices. 57.6\% of participants indicated that they used colour to press correctly. The 48.5\% insight participants we identified using our classification method overlapped to 79.2\% with participants' self-reports. \subsubsection*{Neural Networks} We used the same classification procedure for neural networks. All individual sigmoid function fits for L1-regularised networks can be found in the Supplementary Material (Fig. S4). \section*{Acknowledgements} ATL is supported by the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research (IMPRS COMPPPSYCH, www.mps.ucl-centre.mpg.de). PMK was funded by the Wellcome Trust (award: 210849/Z/18/Z).
AMS was supported by a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z/19/Z), and the Sainsbury Wellcome Centre Core Grant from Wellcome (219627/Z/19/Z) and the Gatsby Charitable Foundation (GAT3755). AMS is a CIFAR Azrieli Global Scholar in the Learning in Machines \& Brains program. CS was funded by the European Research Council (ERC Consolidator award 725937) and Special Grant Agreement No. 945539 (Human Brain Project SGA). NWS was funded by the Federal Government of Germany and the State of Hamburg as part of the Excellence Initiative, a Starting Grant from the European Union (ERC-StG-REPLAY-852669), and an Independent Max Planck Research Group grant awarded by the Max Planck Society (M.TN.A.BILD0004). The funding parties had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We thank Robert Gaschler for helpful comments on this manuscript. \bibliographystyle{unsrtnat} \section*{Hidden layer model} In order to verify that our results were not merely an artefact of the oversimplified models we used, we tested the task on a more complex neural network model that had one additional hidden layer of fully connected linear units. The linear neural network received two inputs, $x_m$ and $x_c$, corresponding to the stimulus motion and colour, respectively, and had two output nodes, $\hat{y}$, as well as one hidden layer of 48 units. Importantly, each weight connecting the inputs with a hidden unit had one associated multiplicative gate $g$. To introduce competitive dynamics between the input channels, we again applied L1-regularisation on the gate weights $g$. The network was trained on the cross-entropy loss using stochastic gradient descent with $\lambda = 0.002$ and $\alpha = 0.1$.
As for the one-layer network, we trained this network on a curriculum precisely matched to the human task, and adjusted hyperparameters (noise levels) such that baseline network performance and learning speed were carefully equated between humans and networks (see Methods). We employed the same analysis approach to detect insight-like behaviour (see Methods for details) by running simulations of a "control" network of the same architecture, but without correlated features and therefore without colour predictiveness in the \textit{motion and colour phase}. We found that when we applied L1-regularisation with a regularisation parameter of $\lambda = 0.002$ on the gate weights, 18.2\% of the networks exhibited \textit{abrupt} and \textit{delayed} learning dynamics, resembling insight-like behaviour in humans (Fig.\ref{fig:Supp1}A) and thereby replicating the key insight characteristics of suddenness and selectivity. Insight-like switches to the colour strategy thereby again improved the networks' performance significantly. Using the same parameters, experimental setup and analyses, but applying L2-regularisation on the gate weights $g$, yielded an insight-like switch rate of 51.5\% (Fig.\ref{fig:Supp1}B). For L1-regularised networks with a hidden layer, we also again observed a wide distribution of delays, i.e. the time points at which the switches of insight networks occurred in the \textit{motion and colour phase} (Fig.\ref{fig:Supp1}C-D). Taken together, these results mirror our observations from network simulations with a simplified setup. We can thereby confirm that our results of L1-regularised neural networks' behaviour exhibiting all key characteristics of human insight behaviour (suddenness, selectivity and delay) are not an artefact of the one-layer linearity.
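The hidden-layer variant described above can be sketched as follows (a minimal numpy sketch of the forward pass and regularised loss only; the initialisation scales and the softmax form of the cross-entropy are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
H = 48  # hidden units, as in the text

W1 = rng.normal(0.0, 0.1, (H, 2))   # input -> hidden weights (motion, colour)
G1 = rng.uniform(0.0, 0.1, (H, 2))  # one multiplicative gate per input weight
W2 = rng.normal(0.0, 0.1, (2, H))   # hidden -> two output nodes

def forward(x):
    """Gated linear hidden layer followed by a linear readout."""
    h = (G1 * W1) @ x
    return W2 @ h  # logits for the two choices

def loss(x, y, lam=0.002):
    """Cross-entropy of the softmaxed logits plus L1 penalty on the gates."""
    logits = forward(x)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[y]) + lam * np.abs(G1).sum()

l = loss(np.array([1.0, 0.22]), 1)
```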
\section*{Weight and Gate Differences between L1- and L2-regularised Networks} At correlation onset (first trial of the \textit{motion and colour phase}), neither motion nor colour weights differed (motion: $M = 3.5\pm0.6$ vs $M = 3.4\pm0.5$, $t(192.7) = 1.2$, $p = 0.2$, $d = 0.2$, colour: $M = 0.8\pm0.6$ vs $M = 0.8\pm0.5$, $t(189.2) = 0.4$, $p = 0.7$, $d = 0.1$). After learning, however, i.e. at the last trial of the \textit{motion and colour phase}, the average absolute size of the colour weights was higher in L2- compared to L1-networks ($M = 2.6\pm2.2$ vs $M = 4.7\pm0.7$, $t(115.1) = -9$, $p < .001$, $d = 1.3$), while the reverse was true for motion weights ($M = 3.4\pm0.7$ vs $M = 2.8\pm0.6$, $t(194.9) = 5.6$, $p < .001$, $d = 0.8$). For gate weights, differences between L1- and L2-networks were already apparent at correlation onset (first trial of the \textit{motion and colour phase}), where the mean of the motion gate was 0.53 for L1-networks and 0.58 for L2-networks, and hence lower in L1-networks, albeit not significantly ($t(195.1) = -1$, $p = 0.3$, $d = 0.1$, see Fig. 4E). In addition, the average absolute size of the colour gate weights was higher in L2- compared to L1-networks ($M = 0.04\pm0.05$ vs $M = 0.002\pm0.006$, respectively, $t(100.6) = -7.2$, $p < 0.001$, $d = 1$). The respective distributions also reflected these effects. L1-networks had a much narrower distribution for colour gates and a slightly narrower distribution for motion gates (L1: colour gates: \numrange{0}{0.04}, motion gates: \numrange{0}{1.3}, L2: colour gates: \numrange{0}{0.2}, motion gates: \numrange{0}{1.4}). After learning, i.e.
at the last trial of the \textit{motion and colour phase}, the mean colour gate size was still lower in L1- compared to L2-regularised networks ($M = 0.4\pm0.4$ vs $M = 0.8\pm0.2$, $t(169.1) = -9.3$, $p < 0.001$, $d = 1.3$), while the reverse was true for motion gates ($M = 0.3\pm0.3$ vs $M = 0.2\pm0.2$, $t(152.4) = 3.9$, $p < 0.001$, $d = 0.6$, see Fig. 4F). This was again also reflected in the respective distributions, with L1-networks having a much wider distribution for motion gates and a slightly narrower distribution for colour gates (L1: colour gates: \numrange{0}{1.2}, motion gates: \numrange{0}{1.3}; L2: colour gates: \numrange{0}{1.3}, motion gates: \numrange{0}{0.7}). \section*{Gaussian Noise Differences at Weights and Gates between Insight and No-Insight Networks} Comparing Gaussian noise $\xi \sim \mathcal{N}(0,\,\sigma_{\xi}^{2})$ at the weights and gates around the individually fitted switch points revealed no differences between insight and no-insight networks for either motion or colour weights (colour weights: $M = -0.08\pm1$ vs. $M = 0.04\pm0.8$; $t(89.5) = -0.6$, $p = 0.5$; motion weights: $M = 0.5\pm0.3$ vs. $M = 0.6\pm0.3$; $t(93.1) = -1.7$, $p = 0.09$) or gates (colour gates: $M = -0.1\pm0.9$ vs. $M = 0.1\pm0.9$; $t(95.3) = 0.8$, $p = 0.44$; motion gates: $M = 0.2\pm0.6$ vs. $M = -0.3\pm0.8$; $t(94.4) = 2$, $p = 0.05$). There were also no $\sigma_{\xi}$ differences at either the start of learning (first trial of the \textit{motion and colour phase}) (colour weights: $M = -0.06\pm0.8$ vs. $M = -0.03\pm0.5$; $t(78.1) = -0.2$, $p = 0.8$; motion weights: $M = 0.08\pm0.7$ vs. $M = 0.07\pm0.7$; $t(96.7) = 1$, $p = 0.3$; colour gates: $M = 0\pm0.6$ vs. $M = -0.2\pm0.7$; $t(97) = 1.6$, $p = 0.1$; motion gates: $M = -0.04\pm0.6$ vs. $M = -0.07\pm0.7$; $t(97) = 0.2$, $p = 0.8$) or the end of learning (last trial of the \textit{motion and colour phase}) (colour weights: $M = 0.05\pm1.3$ vs. $M = 0.08\pm1.1$; $t(92.7) = -0.1$, $p = 0.9$; motion weights: $M = 0\pm1.2$ vs.
$M = -0.02\pm1.1$; $t(95.6) = 0.04$, $p = 1$, colour gates: $M = 0.2\pm1.1$ vs. $M = -0.2\pm1.2$; $t(97) = 1.7$, $p = 0.09$, motion gates: $M = -0.1\pm1.3$ vs. $M = 0.05\pm1.3$; $t(96) = -0.7$, $p = 0.5$). \begin{figure}[h!] \includegraphics[width=1\textwidth]{Supp5.pdf} \footnotesize \caption{Switch-aligned performance and switch point distributions for L1- and L2-regularised neural networks with a 48-unit hidden layer each. Blocks shown are halved task blocks (50 trials each). Error shadows signify SEM. \textbf{(A)} Switch-aligned performance for insight (18/99) and no-insight groups (81/99) respectively for L1-regularised networks with a hidden layer. \textbf{(B)} Switch-aligned performance for insight (51/99) and no-insight (48/99) L2-regularised neural networks with a hidden layer. \textbf{(C)} Switch point distributions for L1-regularised insight networks with a hidden layer. Dashed vertical line marks onset of colour predictiveness. \textbf{(D)} Switch point distributions for L2-regularised insight neural networks. Dashed vertical line marks onset of colour predictiveness.} \label{fig:Supp1} \end{figure} \begin{figure}[h!] \includegraphics[width=1\textwidth]{Supp4.pdf} \footnotesize \caption{Illustrations of models and respective parameters. \textbf{(A)} Linear function with free parameters intercept $y_0$ and slope $m$. \textbf{(B)} Step function with free parameters inflection point $t_s$ and function maximum $y_{max}$. \textbf{(C)} Generalised logistic regression function with free parameters slope $m$, inflection point $t_s$ and function maximum $y_{max}$.} \label{fig:Supp2} \end{figure} \begin{figure}[h!] \includegraphics[page = 1, width=1\textwidth]{Supp3.pdf} \end{figure} \begin{figure}[h!] \includegraphics[page = 2, width=1\textwidth]{Supp3.pdf} \footnotesize \caption{Performance on highest motion noise trials (yellow) and model predictions (black) for every human participant. Blocks shown are halved task blocks (50 trials each).
Error shadows signify SEM.} \label{fig:Supp3} \end{figure} \begin{figure}[h!] \includegraphics[page = 1, width=1\textwidth]{Supp3b.pdf} \end{figure} \begin{figure}[h!] \includegraphics[page = 2, width=1\textwidth]{Supp3b.pdf} \footnotesize \caption{Performance on highest motion noise trials (yellow) and model predictions (black) for every L1-regularised neural network. Blocks shown are halved task blocks (50 trials each). Error shadows signify SEM.} \label{fig:Supp4} \end{figure} \begin{figure}[h!] \includegraphics[width=1\textwidth]{Supp2.pdf} \footnotesize \caption{Switch-aligned performance for insight group (left) and no-insight group (right). \textbf{(A)} Human insight group (48/99). \textbf{(B)} L1-regularised neural network insight group (48/99).} \label{fig:Supp5} \end{figure} \begin{figure}[h!] \includegraphics[width=1\textwidth]{Supp6.pdf} \footnotesize \caption{Switch-aligned performance and overlap between classification and self-reported colour use. \textbf{(A)} Switch-aligned performance and overlap (38) between classified insight subjects (48/99) and self-reported colour use (57/99). \textbf{(B)} Switch-aligned performance and overlap (32) between classified no-insight subjects (51/99) and self-reported no colour use (42/99).} \label{fig:Supp6} \end{figure} \begin{figure}[h!] \includegraphics[width=1\textwidth]{Supp1.pdf} \footnotesize \caption{L2 networks: Task performance and insight-like strategy switches \textbf{(A)} Accuracy (\% correct) during the \textit{motion phase} increases with increasing motion coherence. Blocks shown are halved task blocks (50 trials each). N = 99, error bars signify SEM. Grey line is human data for comparison. \textbf{(B)} Accuracy (\% correct) over the course of the experiment for all motion coherence levels. First dashed vertical line marks the onset of the colour predictiveness (\textit{motion and colour phase}), second dashed vertical line the "instruction" about colour predictiveness.
N = 99, error shadows signify SEM. \textbf{(C)} Switch point-aligned accuracy on lowest motion coherence level for insight (95/99) and no-insight (4/99) networks. Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. \textbf{(D)} Difference between BICs of the linear and sigmoid function for each network. \textbf{(E)} Distributions of fitted slope steepness at inflection point parameter for control networks and classified insight and no-insight groups. \textbf{(F)} Distribution of switch points for insight networks. Dashed vertical line marks onset of colour predictiveness. Blocks shown are halved task blocks (50 trials each).} \label{fig:Supp7} \end{figure} \begin{figure*}[t] \includegraphics[width=1\textwidth]{Supp7.pdf} \footnotesize \caption{Comparison of insight percentage and performance in the last task block across groups \textbf{(A)} \% insight-like switches in humans, L1, L2 and non-regularised networks, respectively. Dashed line marks chance percentage of ``insight''. \textbf{(B)} Distributions of switch points for humans, L1, L2 and non-regularised networks. Blocks shown are halved task blocks (50 trials each). \textbf{(C)} Distributions of performance (\% correct) for humans, L1, L2 and non-regularised networks for the last block of the \textit{colour and motion phase} before the colour instruction.} \label{fig:Supp8} \end{figure*} \end{document}
\section{Introduction} The coronavirus disease 2019 (COVID-19) pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), continues to rage around the world, with multiple waves causing substantial harm to health and economies. Real-time reverse transcription polymerase chain reaction (RT-PCR) testing remains the primary screening tool for COVID-19, where SARS-CoV-2 ribonucleic acid (RNA) is detected within an upper respiratory tract sputum sample~\cite{Wang2020_RTPCR}. However, despite being highly specific, the sensitivity of RT-PCR can be relatively low~\cite{Fang2020, Li2020_RTPCR} and can vary greatly depending on the time since symptom onset as well as sampling method~\cite{Yang2020, Li2020_RTPCR, Ai2020}. Clinical institutes around the world have explored the use of computed tomography (CT) imaging as an effective, complementary screening tool alongside RT-PCR~\cite{Fang2020, Ai2020, Xie2020}. In particular, studies have shown CT to have great utility in detecting COVID-19 infections during routine CT examinations for non-COVID-19 related reasons such as elective surgical procedure monitoring and neurological examinations~\cite{Tian,Shatri}. Other scenarios where CT imaging has been leveraged include cases where patients have worsening respiratory complications, as well as cases where patients with negative RT-PCR test results are suspected to be COVID-19 positive due to other factors. Early studies have shown that a number of potential indicators for COVID-19 infections may be present in chest CT images~\cite{Guan2020, Wang2020, Chung2020, Pan2020, Fang2020, Ai2020, Xie2020}, but may also be present in non-COVID-19 infections. This can lead to challenges for radiologists in distinguishing COVID-19 infections from non-COVID-19 infections using chest CT~\cite{Bai2020_Perf, Mei2020}.
Inspired by the potential of CT imaging as a complementary screening method and the challenges of CT interpretation for COVID-19 screening, we previously introduced COVID-Net~CT~\cite{Gunraj2020}, a deep convolutional neural network tailored for detection of COVID-19 cases from chest CT images. We further introduced COVIDx~CT, a large curated benchmark dataset comprising chest CT scans from a cohort of 1,489 patients derived from a collection by the China National Center for Bioinformation (CNCB)~\cite{cncb}. Both COVID-Net~CT and COVIDx~CT were made publicly available as part of the COVID-Net~\cite{covidnet,alex2020covidnets} initiative, an open source initiative\footnote{\url{https://alexswong.github.io/COVID-Net}} aimed at accelerating advancement and adoption of deep learning in the fight against the COVID-19 pandemic. While COVID-Net~CT was able to achieve state-of-the-art COVID-19 detection performance, one potential limiting factor is the restricted quantity and diversity of CT imaging data used to learn the deep neural network given the entirely Chinese patient cohort used in the study. As such, a greater quantity and diversity in the patient cohort has the potential to improve generalization, particularly when COVID-Net~CT is leveraged under different clinical settings around the world. Motivated by the success and widespread adoption of COVID-Net~CT and COVIDx~CT, as well as their potential data quantity and diversity limitations, in this study we introduce COVID-Net~CT-2, enhanced deep convolutional neural networks for COVID-19 detection from chest CT images that are trained on a large, diverse, multinational patient cohort. More specifically, we accomplish this through the introduction of two new CT benchmark datasets (COVIDx~CT-2A and COVIDx~CT-2B), the largest of which comprises a multinational cohort of 4,501 patients from at least 15 countries.
To the best of the authors' knowledge, these benchmark datasets represent the largest, most diverse multinational cohorts for COVID-19 CT images available in open access form. Finally, we leverage explainability to investigate the decision-making behaviour of COVID-Net~CT-2 to ensure decisions are based on relevant visual indicators in CT images, with the results for select patient cases being reviewed and reported on by two board-certified radiologists with 10 and 30 years of experience, respectively. The COVID-Net~CT-2 networks and corresponding COVIDx~CT-2 datasets are publicly available as part of the COVID-Net initiative~\cite{covidnet,alex2020covidnets}. While not a production-ready solution, we hope the open-source, open-access release of the COVID-Net~CT-2 networks and the corresponding COVIDx~CT-2 benchmark datasets will enable researchers, clinicians, and citizen data scientists alike to build upon them. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/covidnetctv2.png} \caption{COVID-Net~CT-2 architecture design and COVIDx~CT-2 benchmark. We leverage the COVID-Net~CT network architecture~\cite{Gunraj2020} as the core of the COVID-Net~CT-2 networks (COVID-Net~CT-2~L network shown in figure, with COVID-Net~CT-2~S network sharing the same macroarchitecture design but fewer parameters), which was discovered automatically via machine-driven design exploration. Some interesting characteristics of the COVID-Net~CT-2 design include very diverse yet lightweight designs ($\sim$16.8$\times$ and $\sim$52.6$\times$ lower architectural complexity than ResNet-50~\cite{resnet} for COVID-Net~CT-2~L and S networks, respectively) and selective long-range connectivity to draw a balance between complexity and representational power.
The COVID-Net~CT-2 design was trained on CT scans from a large, diverse, multinational cohort of patient cases across at least 15 countries (i.e., COVIDx~CT-2).} \label{fig:arch} \end{figure} \section{Methods}\label{methods} In this study, we introduce COVID-Net~CT-2~L and COVID-Net~CT-2~S, a pair of enhanced deep convolutional neural networks for the detection of COVID-19 from chest CT. To train and test these networks, we further introduce two COVIDx~CT-2 benchmark datasets which represent the largest, most diverse multinational patient cohorts for COVID-19 CT images available in open access form, spanning cases from at least 15 countries. A visual overview of COVID-Net~CT-2 and COVIDx~CT-2 is shown in Figure~\ref{fig:arch}. The methodology behind the preparation of the two COVIDx~CT-2 benchmark datasets, the construction and learning of the COVID-Net~CT-2 networks, and the explainability-driven performance validation process are described in detail below. \subsection{COVIDx~CT-2 benchmark dataset preparation} The original COVIDx~CT benchmark dataset consists of chest CT scans collected by the China National Center for Bioinformation (CNCB)~\cite{cncb} which were carefully processed and selected to form a cohort of 1,489 patient cases. While COVIDx~CT is significantly larger than many CT datasets for COVID-19 detection in the literature, a potential limitation with leveraging COVIDx~CT for learning deep neural networks is the limited diversity in terms of patient demographics. More specifically, the cohort of patients used in COVIDx~CT was collected in different provinces of China, and as such the characteristics of COVID-19 infection as observed in the chest CT images may not generalize to patients around the world outside of China. Therefore, increasing the quantity and diversity of the patient cohort in constructing new benchmark datasets could result in more diverse, well-rounded learning of deep neural networks.
In doing so, improved generalization and applicability for use under different clinical environments around the world can be achieved. In this study, we carefully processed and curated CT images from several patient cohorts from around the world which were collected using a variety of CT equipment types, protocols, and levels of validation. By unifying CT imaging data from several cohorts from around the world, we created two diverse, large-scale benchmark datasets: \begin{itemize} \item \textbf{COVIDx~CT-2A}: This benchmark dataset comprises 194,922 CT images from a multinational cohort of 3,745 patients between 0 and 93 years old (median age of 51) with strongly clinically-verified findings. The multinational cohort consists of patient cases collected by the following organizations and initiatives from around the world: (1) China National Center for Bioinformation (CNCB)~\cite{cncb} (China), (2) National Institutes of Health Intramural Targeted Anti-COVID-19 (ITAC) Program (hosted by TCIA~\cite{TCIA}, countries unknown), (3) Negin Radiology Medical Center~\cite{rahimzadeh2020fully} (Iran), (4) Union Hospital and Liyuan Hospital of Huazhong University of Science and Technology~\cite{HUST} (China), (5) COVID-19 CT Lung and Infection Segmentation initiative, annotated and verified by Nanjing Drum Tower Hospital~\cite{COVID-19-SegBenchmark} (Iran, Italy, Turkey, Ukraine, Belgium, some countries unknown), (6) Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI)~\cite{LIDC} (countries unknown), and (7) Radiopaedia collection~\cite{radiopaedia} (Iran, Italy, Australia, Afghanistan, Scotland, Lebanon, England, Algeria, Peru, Azerbaijan, some countries unknown). \item \textbf{COVIDx~CT-2B}: This benchmark dataset comprises 201,103 CT images from a multinational cohort of 4,501 patients between 0 and 93 years old (median age of 51) with a mix of strongly verified findings and weakly verified findings. 
The patient cohort in COVIDx~CT-2B consists of the multinational patient cohort we leveraged to construct COVIDx~CT-2A, which has strongly clinically-verified findings, with additional patient cases with weakly verified findings collected by the Research and Practical Clinical Center of Diagnostics and Telemedicine Technologies, Department of Health Care of Moscow (MosMed)~\cite{Morozov2020.05.20.20100362} (Russia). Notably, these additional cases are only included in the training dataset, and as such the validation and test datasets are identical to those of COVIDx~CT-2A. \end{itemize} In both COVIDx~CT-2 benchmark datasets, the findings for the chest CT volumes correspond to three different infection types: (1) novel coronavirus pneumonia due to SARS-CoV-2 viral infection (NCP), (2) common pneumonia (CP), and (3) normal controls, with the patient distribution for the three infection types across training, validation, and test shown in Figure~\ref{fig:distribution}. For CT volumes labelled as NCP or CP, slices containing abnormalities were identified and assigned the same labels as the CT volumes. Notably, patient age was not available for all cases, and as such the age ranges and median ages reported above are based on patient cases for which age was available. For images which were originally in Hounsfield units (HU), a standard lung window centered at -600~HU with a width of 1500~HU was used to map the images to unsigned 8-bit integer range (i.e., [0, 255]).
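The lung-window mapping described above (centre $-600$~HU, width $1500$~HU, taken from the text) can be written in a few lines; a minimal numpy sketch, with a hypothetical helper name:

```python
import numpy as np

def apply_lung_window(hu_image, center=-600.0, width=1500.0):
    """Map a CT image in Hounsfield units to uint8 [0, 255] via a linear window."""
    lower = center - width / 2.0   # -1350 HU maps to 0
    upper = center + width / 2.0   #   150 HU maps to 255
    windowed = np.clip(hu_image, lower, upper)
    return np.round((windowed - lower) / (upper - lower) * 255.0).astype(np.uint8)

hu = np.array([[-2000.0, -1350.0], [-600.0, 150.0]])
print(apply_lung_window(hu))  # [[0, 0], [128, 255]]
```

Values below $-1350$~HU (air) saturate at 0 and values above $150$~HU (soft tissue and bone) saturate at 255, concentrating the 8-bit dynamic range on lung parenchyma.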
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/patient_distributions.png} \caption{Patient distribution across training, validation, and test for COVIDx~CT-2A (left) and COVIDx~CT-2B (right).} \label{fig:distribution} \end{figure} The rationale for creating two different COVIDx~CT-2 benchmark datasets stems from the availability of weakly verified findings (i.e., findings not based on RT-PCR test results or final radiology reports), which can be useful for further increasing the quantity and diversity of patient cases that a deep neural network can be exposed to and can be of great interest for researchers, clinicians, and citizen scientists to explore and build upon while being made aware of the fact that some of the CT scans do not have strongly verified findings available. Both COVIDx~CT-2A and COVIDx~CT-2B benchmark datasets are publicly available\footnote{\url{https://www.kaggle.com/hgunraj/covidxct}} as part of the COVID-Net initiative, with example CT images from each type of infection shown in Figure~\ref{fig:examples}. \subsection{COVID-Net~CT-2 construction and learning} By leveraging the COVIDx~CT-2 benchmark datasets introduced in the previous section, we build the COVID-Net~CT-2 deep convolutional neural networks in a way that is more generalizable and more readily adoptable to a wider range of clinical scenarios around the world through bigger, more diverse learning on the largest quantity and diversity of multinational patient cases in research literature. More specifically, two COVID-Net~CT-2 networks are built (COVID-Net~CT-2~L and COVID-Net~CT-2~S), with both sharing the same macroarchitecture design but a different number of parameters. The COVID-Net~CT-2 architecture is shown in Figure~\ref{fig:arch}, and the networks are made publicly available\footnote{\url{https://github.com/haydengunraj/COVIDNet-CT}}.
More specifically, we leverage the COVID-Net~CT network architecture design proposed in~\cite{Gunraj2020} as the core of the architecture designs of the COVID-Net~CT-2 networks. The architecture designs were discovered automatically via a machine-driven design exploration process using generative synthesis~\cite{gensynth}, where the macroarchitecture and microarchitecture designs of a tailored deep neural network architecture for the task and data at hand are determined via iterative constrained optimization based on a universal performance function (e.g.,~\cite{wong2019netscore}) and a set of quantitative constraints. The result is highly customized architecture designs that strike a strong balance between complexity and representational power beyond what a human designer can achieve alone. The COVID-Net~CT-2 designs possess several interesting architectural characteristics. First, COVID-Net~CT-2 designs exhibit very diverse yet lightweight designs composed largely of a heterogeneous combination of strided and unstrided depthwise convolutions as well as pointwise convolutions, with unique microarchitecture design characteristics tailored during the machine-driven design exploration process. Second, COVID-Net~CT-2 leverages selective long-range connectivity through several point convolution hubs to draw a balance between architectural complexity and representational power. As a result of these macroarchitecture and microarchitecture design traits tailored around COVID-19 detection from CT images, the COVID-Net~CT-2 architecture designs have, at $\sim$1.4M parameters and $\sim$0.45M parameters, $\sim$16.8$\times$ and $\sim$52.6$\times$ lower architectural complexity than ResNet-50~\cite{resnet}, respectively.
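The complexity savings of depthwise-plus-pointwise combinations over standard convolutions can be seen from a simple parameter count; the layer sizes below are purely illustrative, not the actual COVID-Net~CT-2 configuration:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a k x k standard convolution (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Parameters of a k x k depthwise convolution followed by a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 128 input and 128 output channels.
k, c_in, c_out = 3, 128, 128
std = standard_conv_params(k, c_in, c_out)        # 3*3*128*128 = 147456
sep = depthwise_separable_params(k, c_in, c_out)  # 3*3*128 + 128*128 = 17536
print(std / sep)  # roughly 8.4x fewer parameters for this layer
```

Savings of this kind, compounded across layers and combined with the machine-driven selection of channel counts, are what make the overall designs an order of magnitude lighter than ResNet-50.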
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{images/dataset_examples.png} \caption{Example CT images from the COVIDx~CT-2 benchmark datasets from each type of infection: (1) novel coronavirus pneumonia due to SARS-CoV-2 infection (NCP), (2) common pneumonia (CP), and (3) normal controls.} \label{fig:examples} \end{figure} The constructed COVID-Net~CT-2 deep convolutional neural networks were trained on the COVIDx~CT-2A benchmark dataset via stochastic gradient descent with momentum~\cite{momentum}, where the following hyperparameters were leveraged: learning rate=5e-4, momentum=0.9, number of epochs=25, batch size=64. To further increase data diversity beyond what is provided by the large multinational cohort in order to improve the generalization of COVID-Net~CT-2, we leveraged data augmentation in the form of cropping box jitter, rotation, horizontal and vertical shear, horizontal flip, and intensity shift and scaling. The construction, training, and evaluation of COVID-Net~CT-2 networks were conducted using the TensorFlow~\cite{tensorflow} machine learning library. \subsection{Explainability-driven performance validation via model auditing}\label{xai} As with COVID-Net~CT~\cite{Gunraj2020}, we utilize GSInquire~\cite{gsinquire} as the explainability method of choice to conduct explainability-driven performance validation. Using GSInquire, we audit the trained COVID-Net~CT-2 to better understand and verify its decision-making behaviour when analyzing CT images to predict the condition of a patient. This form of performance validation via model auditing is particularly important in a clinical context, as the decisions made about a patient's conditions can affect the health of patients via treatment and care decisions made using a model's predictions.
Therefore, examining the decision-making behaviour through model auditing is key to ensuring that the right visual indicators in the CT scans (in the case of COVID-19 infections, visual anomalies such as ground-glass opacities and bilateral abnormalities) are leveraged for making a prediction as opposed to irrelevant visual cues (e.g., synthetic padding, circular scan artifacts, patient table, etc.). Furthermore, incorporating interpretability in the validation process also increases the level of trust that a clinician has in leveraging such models for clinical decision support by adding an extra degree of algorithmic transparency. To facilitate explainability-driven performance validation via model auditing, GSInquire provides an explanation of how a model makes a decision based on input data by identifying a set of critical factors within the input data that impact the decision-making process of the deep neural network in a quantitatively significant way. This is accomplished by probing the model with an input signal (in this case, a CT image) as the targeted stimulus signal and observing the reactionary response signals throughout the model, thus enabling quantitative insights to be derived through the inquisition process. These quantitative insights are then transformed and projected into the same space as the input signal to produce an interpretation (in this case, a set of critical factors in the CT image that quantitatively led to the prediction of the patient's condition). These interpretations can be visualized spatially relative to the CT images for greater insights into the decision-level behaviour of COVID-Net~CT-2. 
Compared to other state-of-the-art explainability methods~\cite{kumar2017explaining,lundberg2017unified,ribeiro2016why,erion2020improving,kumar2017discovery}, GSInquire's identification of quantitatively impactful critical factors enables it to produce explanations that better reflect the decision-making process of models~\cite{gsinquire}. This makes it particularly suitable for quality assurance of models prior to clinical deployment to identify errors, biases, and anomalies that can lead to `right decisions for the wrong reasons'. The results obtained during the explainability-driven performance validation via model auditing for select patient cases are further reviewed and reported on by two board-certified radiologists (A.S. and D.K.). The first radiologist (A.S.) has over 10 years of experience, while the second radiologist (D.K.) has over 30 years of radiology experience. \section{Results}\label{results} \subsection{Quantitative analysis} To explore the efficacy of the COVID-Net~CT-2 networks for COVID-19 detection from CT images, we conducted a quantitative evaluation of the trained deep neural networks using the COVIDx~CT-2 test dataset. For comparison purposes, we also conducted a quantitative comparison with COVID-Net~CT~\cite{Gunraj2020} (referred to from here on as COVID-Net~CT-1 for clarity), which was previously shown to achieve state-of-the-art performance when compared with state-of-the-art deep neural network architectures such as ResNet-50~\cite{resnetv2}, NASNet-A-Mobile~\cite{nasnet}, and EfficientNet-B0~\cite{efficientnet} for the task of COVID-19 detection from CT images. The test accuracies of the COVID-Net~CT-2 networks and COVID-Net~CT-1 are shown in Table~\ref{table:comp}.
It can be observed that COVID-Net~CT-2~L and COVID-Net~CT-2~S achieved strong test accuracies of 98.1\% and 97.9\%, respectively, on the COVIDx~CT-2 test dataset, while at the same time possessing low architectural complexity ($\sim$1.4M parameters and $\sim$0.45M parameters, respectively) and low computational complexity ($\sim$4.18 GFLOPs and $\sim$1.94 GFLOPs). Compared to COVID-Net~CT-1, it can be observed that COVID-Net~CT-2~S and COVID-Net~CT-2~L achieved 3.4\% and 3.6\% higher accuracy, respectively. The sensitivity and positive predictive value (PPV) for each infection type on the COVIDx~CT-2 test dataset is shown in Table~\ref{table:sens} and Table~\ref{table:ppv}, respectively. It can be observed that COVID-Net~CT-2~L and COVID-Net~CT-2~S were able to achieve both high COVID-19 sensitivity (96.2\% and 95.7\%, respectively) and high COVID-19 PPV (96.7\% and 96.4\%, respectively). Compared to COVID-Net~CT-1, it can be observed that COVID-Net~CT-2~S and COVID-Net~CT-2~L achieved 15.5\% and 16\% higher COVID-19 sensitivity, respectively. At the cost of significantly lower COVID-19 sensitivity, COVID-Net~CT-1 is able to achieve 0.9\% and 1.2\% higher COVID-19 PPV than COVID-Net~CT-2~L and COVID-Net~CT-2~S, respectively. From a clinical perspective, high sensitivity ensures few false negatives which would lead to missed patients with COVID-19 infections, whereas high PPV ensures few false positives which add an unnecessary burden on the healthcare system, which is already stressed due to the ongoing pandemic. The specificity and negative predictive value (NPV) for each infection type on the COVIDx~CT-2 test dataset is shown in Table~\ref{table:spec} and Table~\ref{table:npv}, respectively. It can be observed that COVID-Net~CT-2~L and COVID-Net~CT-2~S were able to achieve both high COVID-19 specificity (99\% and 98.9\%, respectively) and high COVID-19 NPV (98.8\% and 98.7\%, respectively). 
Compared to COVID-Net~CT-1, it can be observed that COVID-Net~CT-2~S and COVID-Net~CT-2~L achieved 4.5\% and 4.6\% higher COVID-19 NPV, respectively. Notably, COVID-Net~CT-1 achieves 0.4\% and 0.5\% higher COVID-19 specificity than COVID-Net~CT-2~L and COVID-Net~CT-2~S, respectively, but as previously mentioned this comes at the cost of significantly lower COVID-19 sensitivity. The high specificity and NPV achieved by COVID-Net~CT-2 are important from a clinical perspective to ensure that COVID-19-negative predictions are indeed true negatives in the vast majority of cases, which facilitates rapid identification of COVID-19-negative patients. These experimental results are particularly promising in terms of model generalization and applicability for use under different clinical environments given the much more diverse nature of the COVIDx~CT-2 test dataset. As such, these results demonstrate the potential value of COVID-Net~CT-2 as an effective clinical decision support tool to aid with COVID-19 screening. \begin{table}[t] \caption{Accuracy (image level) for the tested networks on the COVIDx~CT-2 benchmark test dataset. Best results highlighted in \textbf{bold}.} \medskip \label{table:comp} \centering \begin{tabular}{ll} \toprule Network & Accuracy (\%) \\ \midrule COVID-Net~CT-1~\cite{Gunraj2020} & 94.5\\ COVID-Net~CT-2~L & \textbf{98.1}\\ COVID-Net~CT-2~S & 97.9\\ \bottomrule \end{tabular} \vspace{-0.1in} \end{table} \begin{table}[h] \caption{Sensitivity for each infection type at the image level on the COVIDx~CT-2 benchmark test dataset. 
Best results highlighted in \textbf{bold}.} \medskip \label{table:sens} \centering \begin{tabular}{llll} \hline \toprule Network & \multicolumn{3}{c}{Sensitivity (\%)}\\ \cmidrule(lr){2-4} & Normal & CP & NCP \\ \cmidrule(lr){2-4} COVID-Net~CT-1~\cite{Gunraj2020} & 98.8 & \textbf{99.0} & 80.2 \\ COVID-Net~CT-2~L & \textbf{99.0} & 98.2 & \textbf{96.2} \\ COVID-Net~CT-2~S & 98.9 & 98.1 & 95.7\\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \caption{Positive predictive value (PPV) for each infection type at the image level on the COVIDx~CT-2 benchmark test dataset. Best results highlighted in \textbf{bold}.} \medskip \label{table:ppv} \centering \begin{tabular}{llll} \hline \toprule Network & \multicolumn{3}{c}{PPV (\%)} \\ \cmidrule(lr){2-4} & Normal & CP & NCP \\ \cmidrule(lr){2-4} COVID-Net~CT-1~\cite{Gunraj2020} & 96.1 & 90.2 & \textbf{97.6}\\ COVID-Net~CT-2~L & \textbf{99.4} & \textbf{97.2} & 96.7\\ COVID-Net~CT-2~S & 99.3 & 97.0 & 96.4\\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \caption{Specificity for each infection type at the image level on the COVIDx~CT-2 benchmark test dataset. Best results highlighted in \textbf{bold}.} \medskip \label{table:spec} \centering \begin{tabular}{llll} \hline \toprule Network & \multicolumn{3}{c}{Specificity (\%)} \\ \cmidrule(lr){2-4} & Normal & CP & NCP \\ \cmidrule(lr){2-4} COVID-Net~CT-1~\cite{Gunraj2020} & 96.3 & 95.7 & \textbf{99.4}\\ COVID-Net~CT-2~L & \textbf{99.5} & \textbf{98.8} & 99.0\\ COVID-Net~CT-2~S & 99.3 & 98.8 & 98.9\\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \caption{Negative predictive value (NPV) for each infection type at the image level on the COVIDx~CT-2 benchmark test dataset. 
Best results highlighted in \textbf{bold}.} \medskip \label{table:npv} \centering \begin{tabular}{llll} \toprule Network & \multicolumn{3}{c}{NPV (\%)} \\ \cmidrule(lr){2-4} & Normal & CP & NCP \\ \cmidrule(lr){2-4} COVID-Net~CT-1~\cite{Gunraj2020} & 98.9 & \textbf{99.6} & 94.2\\ COVID-Net~CT-2~L & \textbf{99.1} & 99.3 & \textbf{98.8}\\ COVID-Net~CT-2~S & 99.0 & 99.2 & 98.7\\ \bottomrule \end{tabular} \end{table}
\subsection{Qualitative analysis} To audit the decision-making behaviour of COVID-Net~CT-2 and ensure that it is leveraging relevant visual indicators when predicting the condition of a patient, we conducted explainability-driven performance validation using the COVIDx~CT-2 benchmark test dataset, and the results obtained using COVID-Net~CT-2~L for select patient cases were further reviewed and reported on by two board-certified radiologists. The critical factors identified by GSInquire for example chest CT images from the four COVID-19-positive cases that were reviewed are shown in Figure~\ref{fig:explain}, and additional examples for COVID-Net~CT-2~S are shown in Figure~\ref{fig:explain2}. Overall, it can be observed from the GSInquire-generated visual explanations that both COVID-Net~CT-2~L and COVID-Net~CT-2~S are mainly utilizing visible lung abnormalities to distinguish between COVID-19-positive and COVID-19-negative cases. As such, this auditing process allows us to determine that COVID-Net~CT-2 is indeed leveraging relevant visual indicators in the decision-making process as opposed to irrelevant visual indicators such as imaging artifacts, artificial padding, and patient tables. This performance validation process also reinforces the importance of utilizing explainability methods to confirm proper decision-making behaviour in deep neural networks designed for clinical decision support.
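The per-class sensitivity, PPV, specificity, and NPV reported in the tables above are all one-vs-rest quantities derived from a single multi-class confusion matrix. The following sketch shows that derivation; the confusion-matrix counts used here are made up for illustration and are not the actual COVIDx~CT-2 evaluation results.

```python
# One-vs-rest metrics for a multi-class classifier. The confusion matrix
# below is a hypothetical example, NOT the actual COVIDx CT-2 data.

def per_class_metrics(cm):
    """cm[i][j] = number of images with true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    out = []
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # true class k, predicted otherwise
        fp = sum(cm[i][k] for i in range(n)) - tp  # other classes predicted as k
        tn = total - tp - fn - fp
        out.append({
            "sensitivity": tp / (tp + fn),   # a.k.a. recall
            "ppv":         tp / (tp + fp),   # a.k.a. precision
            "specificity": tn / (tn + fp),
            "npv":         tn / (tn + fn),
        })
    return out

# Hypothetical counts for (Normal, CP, NCP):
cm = [[980, 10, 10],
      [20, 960, 20],
      [10, 30, 960]]
for name, m in zip(["Normal", "CP", "NCP"], per_class_metrics(cm)):
    print(name, {k: f"{100 * v:.1f}%" for k, v in m.items()})
```

In this one-vs-rest view, a high NPV means that very few negative predictions for a class are false negatives, which is the clinically relevant property discussed above for the COVID-19 (NCP) class.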
\subsection{Radiologist findings} The expert radiologist findings and observations with regard to the critical factors identified by GSInquire for each of the four patient cases shown in Figure~\ref{fig:explain} are as follows. COVID-Net~CT-2~L classified all four cases as novel coronavirus pneumonia due to SARS-CoV-2 viral infection, and this was clinically confirmed in each case.

\textbf{Case 1 (top-left of Figure~\ref{fig:explain})}. It was observed by one of the radiologists that there are bilateral peripheral mixed ground-glass and patchy opacities with subpleural sparing, which is consistent with the identified critical factors leveraged by COVID-Net~CT-2~L. The absence of large lymph nodes and effusion further helped the radiologist point to novel coronavirus pneumonia due to SARS-CoV-2 viral infection. The degree of severity is observed to be moderate to high. It was confirmed by the second radiologist that the identified critical factors leveraged by COVID-Net~CT-2~L are correct areas of concern and represent areas of consolidation with a geographic distribution that is in favour of novel coronavirus pneumonia due to SARS-CoV-2 viral infection.

\textbf{Case 2 (top-right of Figure~\ref{fig:explain})}. It was observed by one of the radiologists that there are bilateral peripherally-located ground-glass opacities with subpleural sparing, which is consistent with the identified critical factors leveraged by COVID-Net~CT-2~L. As in Case 1, the absence of large lymph nodes and large effusion further helped the radiologist point to novel coronavirus pneumonia due to SARS-CoV-2 viral infection. The degree of severity is observed to be moderate to high. It was confirmed by the second radiologist that the identified critical factors leveraged by COVID-Net~CT-2~L are correct areas of concern and represent areas of consolidation with a geographic distribution that is in favour of novel coronavirus pneumonia due to SARS-CoV-2 viral infection.
\textbf{Case 3 (bottom-left of Figure~\ref{fig:explain})}. It was observed by one of the radiologists that there are peripheral bilateral patchy opacities, which is consistent with the identified critical factors leveraged by COVID-Net~CT-2~L. Unlike the first two cases, there is a small right effusion. However, as in Cases 1 and 2, the absence of large effusion further helped the radiologist point to novel coronavirus pneumonia due to SARS-CoV-2 viral infection. Considering that the opacities are at the base, a differential of atelectasis change was also provided. The degree of severity is observed to be moderate. It was confirmed by the second radiologist that the identified critical factors leveraged by COVID-Net~CT-2~L are correct areas of concern and represent areas of consolidation.

\textbf{Case 4 (bottom-right of Figure~\ref{fig:explain})}. It was observed by one of the radiologists that there are peripherally located asymmetrical bilateral patchy opacities, which is consistent with the identified critical factors leveraged by COVID-Net~CT-2~L. As in Cases 1 and 2, the absence of lymph nodes and large effusion further helped the radiologist point to novel coronavirus pneumonia due to SARS-CoV-2 viral infection, but a differential of bacterial pneumonia was also provided considering the bronchovascular distribution of patchy opacities. In addition, there is no subpleural sparing. This highlights the potential difficulties in differentiating between novel coronavirus pneumonia and common pneumonia. It was confirmed by the second radiologist that the identified critical factors leveraged by COVID-Net~CT-2~L are correct areas of concern and represent areas of consolidation with a geographic distribution that is in favour of novel coronavirus pneumonia due to SARS-CoV-2 viral infection.
Therefore, it can be observed that the explainability-driven validation process shows consistency between the decision-making process of COVID-Net CT-2 and radiologist interpretation, which suggests strong potential for computer-aided COVID-19 assessment within a clinical environment. \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/explain-covidnet-ct-l.png} \caption{Example chest CT images from four COVID-19 cases reviewed and reported on by two board-certified radiologists, and the associated critical factors (highlighted in red) as identified by GSInquire~\cite{gsinquire} for COVID-Net~CT-2~L. Based on the observations made by two expert radiologists, it was found that the critical factors leveraged by COVID-Net~CT-2~L are consistent with radiologist interpretation.} \label{fig:explain} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{images/explain-covidnet-ct-s.png} \caption{Example chest CT images from four COVID-19 cases, and the associated critical factors (highlighted in red) as identified by GSInquire~\cite{gsinquire} for COVID-Net~CT-2~S.} \label{fig:explain2} \end{figure} Based on both quantitative and qualitative results, it can be seen that not only does COVID-Net~CT-2 achieve high performance, but it also leverages relevant abnormalities in the lungs in its decision-making process rather than erroneous visual cues. \section{Conclusions and Discussion} In this work, we introduced COVID-Net~CT-2, enhanced deep convolutional neural networks tailored for the purpose of COVID-19 detection from chest CT images via more diverse learning on the largest quantity and diversity of multinational patient cases in research literature. Two new CT benchmark datasets were introduced and used to facilitate the learning of COVID-Net~CT-2, and these datasets represent the largest, most diverse, multinational cohorts of their kind available in open access form, spanning cases from at least 15 countries.
Experimental results show that the COVID-Net~CT-2 networks not only achieve strong test accuracy, sensitivity, and positive predictive value, but also do so in a manner that is consistent with radiologist interpretation via explainability-driven performance validation. The results are promising and suggest the strong potential of deep neural networks as an effective tool for computer-aided COVID-19 assessment. Given the severity of the COVID-19 pandemic and the potential of deep learning as a tool to facilitate computer-assisted COVID-19 clinical decision support, a number of deep learning systems have been proposed in research literature for detecting SARS-CoV-2 infections using CT images~\cite{Mei2020, cncb, Xu2020, Bai2020_Aug, Li2020, Ardakani2020, Shah2020, Chen2020, Zheng2020, Jin2020, Jin2020_2, Song2020, Wang2020_Screen,Harmon2020,Gunraj2020,HUST}. While some proposed deep learning systems focus on binary detection (SARS-CoV-2 positive vs. negative)~\cite{Harmon2020}, several proposed systems operate at a finer level of granularity by further identifying whether SARS-CoV-2 negative cases are normal control~\cite{cncb, Xu2020, Jin2020_2, Song2020}, SARS-CoV-2 negative pneumonia (e.g., bacterial pneumonia, viral pneumonia, community-acquired pneumonia (CAP), etc.)~\cite{cncb, Xu2020, Bai2020_Aug, Li2020, Ardakani2020, Song2020, Wang2020_Screen}, or non-pneumonia~\cite{Li2020}. The majority of the proposed deep learning systems for COVID-19 detection from CT images rely on pre-existing network architectures that were designed for other image classification tasks. A large number of proposed systems additionally rely on segmentation of the lung region and/or lung lesions~\cite{Mei2020, cncb, Xu2020, Bai2020_Aug, Li2020, Chen2020, Zheng2020, Jin2020_2, Song2020}.
Some proposed systems also augment pre-existing network architectures, with Xu et al.~\cite{Xu2020} augmenting a pre-existing ResNet-18~\cite{resnet} backbone architecture with location-attention classification, and Li et al.~\cite{Li2020} and Bai et al.~\cite{Bai2020_Aug} augmenting pre-existing network architectures with pooling operations for volume-driven detection. Of the deep learning systems that proposed new deep neural network architectures, Shah et al.~\cite{Shah2020} proposed a 10-layer convolutional neural network architecture named CTnet-10, which ultimately showed lower detection performance than pre-existing architectures in literature. Zheng et al.~\cite{Zheng2020} proposed a 3D convolutional neural network architecture named DeCovNet which is capable of volume-driven detection. Finally, in the system introduced by Gunraj et al.~\cite{Gunraj2020}, machine-driven design exploration was leveraged to construct a deep neural network architecture that is tailored specifically for COVID-19 detection using CT images. While the concept of leveraging deep learning for COVID-19 detection from CT images has been previously explored, even the largest studies in research literature in this area have been limited in terms of quantity and/or diversity of patients, with many limited to single-nation cohorts. For example, the studies by Mei et al.~\cite{Mei2020}, Gunraj et al.~\cite{Gunraj2020}, Ning et al.~\cite{HUST}, and Zhang et al.~\cite{cncb} were all limited to Chinese patient cohorts consisting of 905 patients, 1,489 patients, 1,521 patients, and 3,777 patients, respectively. The largest multinational study in research literature was conducted by Harmon et al.~\cite{Harmon2020}, which leveraged a cohort of 2,617 patients across 4 countries. 
To the best of the authors' knowledge, the largest of the unified multinational patient cohorts introduced in this study, at 4,501 patients across at least 15 countries, is the largest and most diverse multinational patient cohort of its kind. By building the proposed COVID-Net CT-2 deep neural networks using a large multinational patient cohort, we can better study the generalization capabilities and applicability of deep learning for computer-assisted assessment under a wider diversity of clinical scenarios and demographics. With the tremendous burden the ongoing COVID-19 pandemic has put on healthcare systems and healthcare workers around the world, the hope is that research such as COVID-Net~CT-2 and open source initiatives such as the COVID-Net initiative can accelerate the advancement and adoption of deep learning solutions within a clinical setting to aid front-line health workers and healthcare systems in improving clinical workflow efficiency and effectiveness in the fight against the COVID-19 pandemic. While to the best of the authors' knowledge this research does not put anyone at a disadvantage, it is important to note that COVID-Net~CT-2 is not a production-ready solution and is meant for research purposes. As such, predictions made by COVID-Net~CT-2 should not be utilized blindly and should instead be built upon and leveraged in a human-in-the-loop fashion by researchers, clinicians, and citizen data scientists alike. Future work involves leveraging the core COVID-Net~CT-2 backbone for downstream tasks such as lung function prediction, severity assessment, and actionable predictions for guiding personalized treatment and care for SARS-CoV-2 positive patients.
\begin{ack} We thank the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada Research Chairs program, the Canadian Institute for Advanced Research (CIFAR), DarwinAI Corp., Justin Kirby of the Frederick National Laboratory for Cancer Research, and the various organizations and initiatives from around the world collecting valuable COVID-19 data to advance science and knowledge. The study has received ethics clearance from the University of Waterloo (42235). \end{ack} \section*{Author contributions statement} H.G. and A.W. conceived the experiments, H.G. conducted the experiments, all authors analysed the results, D.K. and A.S. reviewed and reported on select patient cases and corresponding explainability results illustrating the model's decision-making behaviour, and all authors reviewed the manuscript. \section*{Declaration of interests} A.W. is affiliated with DarwinAI Corp. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction} This note is concerned with \emph{twisted coefficient systems}, by which we mean simply functors from a category $\mathcal{C}$ equipped with a certain structure to an abelian category $\mathcal{A}$. The structure on $\mathcal{C}$ depends on the precise situation that one wishes to study, and is used to define a notion of \emph{degree} for any functor $\mathcal{C} \to \mathcal{A}$. The main goal of this note is to compare different structures on the source category $\mathcal{C}$, and the resulting notions of degree. Twisted coefficient systems of \emph{finite degree}, also known as \emph{polynomial functors}, are often used to study the homology of interesting spaces or groups (or other abelian invariants, such as the filtration quotients in the lower central series of a group), for example automorphism groups of free groups and congruence groups (\textit{cf}.\ \cite[\S 5]{DjamentVespa2019FoncteursFaiblementPolynomiaux}). Indeed, polynomial functors first appeared in the paper~\cite{EilenbergMac1954groupsHnII} of Eilenberg and MacLane (see \S 9), where they were used to compute the homology, in a certain range of degrees, of the Eilenberg-MacLane spaces $K(A,n)$ for $n\geq 2$. This first use involves assembling the objects of interest (e.g.\ the homology of congruence subgroups) into a polynomial functor, and leveraging the fact that it has finite degree to study them. On the other hand, polynomial functors may also appear as the \emph{coefficients} in homology groups: one may also be interested in the homology of a family of spaces or groups, with respect to a corresponding family of local coefficient systems \emph{that assemble into a polynomial functor} -- it is in this guise that polynomial functors are more commonly referred to as \emph{twisted coefficient systems} (of finite degree).
Many families of groups or spaces are known to be \emph{homologically stable} with respect to (appropriately-defined) finite-degree twisted coefficient systems, including the symmetric groups, braid groups, configuration spaces, general linear groups, automorphism groups of free groups, automorphism groups of right-angled Artin groups and mapping class groups of surfaces and of $3$-manifolds.\footnote{Symmetric groups: \cite{Betley2002Twistedhomologyof}; braid groups: \cite{ChurchFarb2013Representationtheoryand,Randal-WilliamsWahl2017Homologicalstabilityautomorphism,Palmer2018Twistedhomologicalstability}; configuration spaces: \cite{Palmer2018Twistedhomologicalstability}; general linear groups: \cite{Dwyer1980Twistedhomologicalstability,Kallen1980Homologystabilitylinear}; automorphism groups of free groups: \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}; automorphism groups of right-angled Artin groups: \cite{GandiniWahl2016Homologicalstabilityautomorphism}; mapping class groups of surfaces: \cite{Ivanov1993homologystabilityTeichmuller, CohenMadsen2009Surfacesinbackground,Boldsen2012Improvedhomologicalstability,Randal-WilliamsWahl2017Homologicalstabilityautomorphism}; mapping class groups of $3$-manifolds: \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}. Note that these are references for the proofs of \emph{twisted} homological stability; in many cases, homological stability with untwisted coefficients was known much earlier.} \paragraph*{Cross-effects vs.\ endofunctors.} There are two common approaches to defining the \emph{degree} of a functor $\mathcal{C} \to \mathcal{A}$. One approach uses certain structure on $\mathcal{C}$ to define the \emph{cross-effects} of a given functor, which consist of an $\mathbb{N}$-graded set of objects of $\mathcal{A}$, and the degree is then determined by the vanishing of these objects.
The idea is that the cross-effects encode information about the functor; if they vanish above a certain level, then this information is concentrated in an essentially finite amount of data, which makes it possible to prove certain things about the given functor (such as homological stability with coefficients in this functor). For example, this is the approach taken in the paper \cite{Palmer2018Twistedhomologicalstability}, from which this note arose, to prove homological stability for configuration spaces with finite-degree twisted coefficients. The second approach is a recursive definition, depending on the choice of an endofunctor $s$ of $\mathcal{C}$ and a natural transformation $\mathrm{id} \to s$. This allows one to prove things about functors of finite degree (in this sense) by induction on the degree. For example, the main theorem of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}, providing a ``machine'' for proving twisted homological stability for families of groups, is an inductive proof of this kind, and the degree of their twisted coefficient systems is defined recursively. \paragraph*{Degree and height.} In this note, we will use the word \emph{degree} to refer to a definition of the second kind, i.e., a recursive definition, and we will use the word \emph{height} to refer to a definition of the first kind, given by the vanishing of certain \emph{cross-effects} above a certain ``height''. In sections \ref{sec:inductive-degree} and \ref{sec:cross-effects} (respectively) we compare various notions of \emph{degree} and \emph{height} appearing in the literature. 
We will focus on comparisons between \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}, \cite{Ivanov1993homologystabilityTeichmuller}, \cite{CohenMadsen2009Surfacesinbackground}, \cite{Boldsen2012Improvedhomologicalstability} and \cite{Palmer2018Twistedhomologicalstability} for the degree, and between \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, \cite{HartlVespa2011Quadraticfunctorspointed}, \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}, \cite{CollinetDjamentGriffin2013Stabilitehomologiquepour} and \cite{Palmer2018Twistedhomologicalstability} for the height. We do not pursue here the relationship between the height and the degree of a functor $\mathcal{C} \to \mathcal{A}$ (when both are defined) in the greatest possible generality (although this is discussed in certain special cases, see Remark \ref{rmk:summary} for a summary). Rather, we focus on comparing (and unifying) \emph{with each other} the various different notions of \emph{degree} appearing in the literature, and similarly for the various different notions of \emph{height} in the literature. \paragraph*{Historical remarks.} The notion of what we term the \emph{height} of a functor was first introduced by Eilenberg and MacLane (who used the name \emph{degree}) in \cite[\S 9]{EilenbergMac1954groupsHnII}, where it was used to compute the integral homology of Eilenberg-MacLane spaces in a range of degrees. Somewhat later, it was used by Dwyer~\cite{Dwyer1980Twistedhomologicalstability} to formulate and prove a twisted homological stability theorem for general linear groups. The \emph{height} of a functor also appears in \cite{Pirashvili2000DoldKantype} (see \S 2.3) and was used in \cite{Betley2002Twistedhomologyof} to prove a twisted homological stability theorem for the symmetric groups. 
More recently, it also appears in \cite{HartlVespa2011Quadraticfunctorspointed}, \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}, \cite{CollinetDjamentGriffin2013Stabilitehomologiquepour} (in which it is used to prove a homological stability theorem for automorphism groups of free products of groups) and \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} (which also introduces the notion of \emph{weak} polynomial functors, in contrast to \emph{strong} polynomial functors -- see also Definition \ref{def:degree-general} below, which is inspired by their definition). The notion of what we term the \emph{degree} of a functor appeared first (as far as the author is aware) implicitly in the work of Dwyer~\cite{Dwyer1980Twistedhomologicalstability} and (slightly later) explicitly in the work of van der Kallen~\cite{Kallen1980Homologystabilitylinear} (see \S 5.5).\footnote{Dwyer~\cite{Dwyer1980Twistedhomologicalstability} explicitly defines a \emph{height}-like notion (at the beginning of \S 3), but there is a \emph{degree}-like notion implicit in his work (\textit{cf}.\ Theorem 2.2 and the proof of Lemma 3.1). Van der Kallen~\cite{Kallen1980Homologystabilitylinear}, on the other hand, uses techniques similar to those of \cite{Dwyer1980Twistedhomologicalstability}, but explicitly uses a \emph{degree}-like notion (see \S 5.5), and remarks that functors of finite degree (in his sense) can be obtained from functors of finite degree in the sense of \cite{Dwyer1980Twistedhomologicalstability}.} This notion was also used by Ivanov~\cite{Ivanov1993homologystabilityTeichmuller} to formulate and prove a twisted homological stability theorem for mapping class groups, and analogous (not quite identical) definitions were used also by \cite{CohenMadsen2009Surfacesinbackground} and \cite{Boldsen2012Improvedhomologicalstability} in similar contexts (\textit{cf}.\ \S \ref{para:degree-mcg}). 
The notion also appears in the work of Djament and Vespa~\cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, who use both \emph{degree}-like (\textit{cf}.\ D{\'e}finitions 1.5 and 1.22) and \emph{height}-like (\textit{cf}.\ Proposition 2.3) descriptions of their polynomial functors. It is also used by Randal-Williams and Wahl~\cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} in their general framework for proving twisted homological stability theorems for sequences of groups. \paragraph*{Outline.} In section \ref{sec:inductive-degree} we describe a general framework for defining the \emph{degree} of a functor $\mathcal{C} \to \mathcal{A}$ using the structure of an endofunctor $s$ of $\mathcal{C}$ together with a natural transformation $\mathrm{id} \to s$ (more generally, a collection of such data), and specialise this to several settings in the literature, including symmetric monoidal categories (\S\ref{para:degree-DV}), labelled braid categories (\S\ref{para:partial-braid-categories}) and categories of decorated surfaces (\S\ref{para:degree-mcg}). In section \ref{sec:cross-effects} we set up a general framework for defining the degree of a functor using cross-effects (which we call the \emph{height} of the functor), and specialise this to various settings in the literature, including symmetric monoidal categories (\S\S\ref{para:specialise-DV}--\ref{para:finite-coproducts}), wreath products of categories (\S\ref{para:specialise-CDG}) and labelled braid categories (\S\ref{para:specialise-this-paper}). In sections \ref{sec:functorial-configuration-spaces} and \ref{sec:representations-of-categories} we consider the special case of labelled braid categories $\mathcal{C} = \mathcal{B}(M,X)$ in more detail, and describe a functorial (in $M$ and $X$) setting for the notions of degree and height of functors $\mathcal{B}(M,X) \to \mathcal{A}$. 
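\paragraph*{A guiding example.} Before turning to the precise definitions, it may help to recall the prototypical instance of what we term the \emph{height}, going back to Eilenberg and MacLane \cite{EilenbergMac1954groupsHnII}: for a reduced functor $F \colon \mathsf{Ab} \to \mathsf{Ab}$ (i.e.\ one with $F(0)=0$), the second cross-effect is the bifunctor $\mathrm{cr}_2(F)$ determined by the natural splitting
\[
F(x \oplus y) \;\cong\; F(x) \oplus F(y) \oplus \mathrm{cr}_2(F)(x,y),
\]
and the higher cross-effects are defined analogously from $F(x_1 \oplus \cdots \oplus x_n)$. Thus $F$ has height at most $1$ precisely when it is additive; for example, the tensor square $T(x) = x \otimes x$ has $\mathrm{cr}_2(T)(x,y) \cong (x \otimes y) \oplus (y \otimes x)$ and vanishing third cross-effect, so it has height exactly $2$. The general definitions of \S\ref{sec:cross-effects} axiomatise the structure needed on the source category in order to make sense of such splittings.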
\section{Recursive degree}\label{sec:inductive-degree} To relate different notions of \emph{degree} in the literature, we use a notion of \emph{category with stabilisers}, which is roughly a category $\mathcal{C}$ equipped with endofunctors $s_i \colon \mathcal{C} \to \mathcal{C}$ and natural transformations $\mathrm{id} \to s_i$. These are the objects of a category $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ (see Definition \ref{def:degree-general}). There is then a natural notion of \emph{degree} for any functor $\mathcal{C} \to \mathcal{A}$ with $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ and $\mathcal{A}$ an abelian category. There is a functor $\ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}} \to \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ which is compatible with the definition of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} of the degree of a functor with source a monoidal category with initial unit object (\S \ref{para:degree-DV}). This construction also generalises to left modules over such a monoidal category (Remark \ref{rmk:modules-over-monoidal-categories}). 
There is another functor $\mathcal{B} \colon \ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \to \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, which we will define later in \S \ref{sss:some-functors},\footnote{The functor that we define later in fact has source $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}}$ and target $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$, so we are implicitly composing with the inclusions $M \mapsto (M,*) : \ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \to \ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}}$ and $\ensuremath{\mathsf{Cat}_{\mathsf{s}}} \subset \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, where $* \in \ensuremath{\mathsf{Top}_{\circ}}$ is the one-point space.} where $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$ is a category whose objects are smooth manifolds-with-boundary equipped with a collar neighbourhood and a basepoint on the boundary. The operation of boundary connected sum of manifolds gives $\mathcal{B}(\mathbb{D}^n)$ the structure of a monoidal category (with initial unit object) and $\mathcal{B}(M)$ the structure of a left module over it, so $\mathcal{B}(M)$ may also be viewed as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ using the previous construction. These two objects of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ are not equal, but nevertheless result in the same definition of the \emph{degree} of a functor $\mathcal{B}(M) \to \mathcal{A}$ (this is Proposition \ref{p:two-degrees-agree}). See \S\S\ref{para:gen-def-degree}--\ref{para:partial-braid-categories} for the details of this brief summary. In \S\S\ref{para:degree-WRW} and \ref{para:degree-mcg} we also discuss how the general Definition \ref{def:degree-general} relates to the notion of degree used in \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} and the notions of degree used in relation to mapping class groups. Throughout this section, $\mathcal{A}$ will denote a fixed abelian category. 
\subsection{A general definition.}\label{para:gen-def-degree} \begin{defn}\label{def:degree-general} Let $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ be the category whose objects are small $1$-categories $\mathcal{C}$ equipped with a collection $\{s_i \colon \mathcal{C} \to \mathcal{C}\}_{i \in I}$ of endofunctors and natural transformations $\{\imath_i \colon \mathrm{id} \to s_i\}_{i \in I}$. We call such an object a \emph{category with stabilisers}. A morphism in $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ from $(\mathcal{C},I,s,\imath)$ to $(\mathcal{D},J,t,\jmath)$ is a functor $f \colon \mathcal{C} \to \mathcal{D}$ together with a function $\sigma \colon I \to J$ and a collection of natural isomorphisms $\{ \psi_i \colon f \circ s_i \to t_{\sigma(i)} \circ f \}_{i \in I}$ such that $\jmath_{\sigma(i)} * \mathrm{id}_f = \psi_i \circ (\mathrm{id}_f * \imath_i)$ for all $i \in I$. We denote by $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ the full subcategory on objects where $\lvert I \rvert = 1$ (i.e., categories with just one stabiliser -- we will restrict to this subcategory later, in \S\ref{sec:functorial-configuration-spaces}). It also contains $\mathsf{Cat}$ as the full subcategory on objects where $I = \varnothing$, but this will not be relevant for us. We define the \emph{degree} of functors from $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ to the abelian category $\mathcal{A}$ as follows. The function $\mathrm{deg} \colon \mathsf{Fun}(\mathcal{C},\mathcal{A}) \to \{-1,0,1,\ldots,\infty\}$ is the largest function such that $\mathrm{deg}(0) = -1$ and such that for non-zero $T$ we have $\mathrm{deg}(T) \leq d$ if and only if \begin{equation}\label{eq:degree-condition} \mathrm{deg}(\mathrm{coker}(T\imath_i \colon T \to Ts_i)) \leq d-1, \end{equation} for all $i$. 
We may also vary the definition slightly and define the \emph{split degree} to be the largest function $\mathrm{sdeg}\colon \mathsf{Fun}(\mathcal{C},\mathcal{A}) \to \{-1,0,1,\ldots,\infty\}$ such that $\mathrm{sdeg}(0) = -1$ and such that for non-zero $T$ we have $\mathrm{sdeg}(T) \leq d$ if and only if \begin{equation}\label{eq:split-degree-condition} \begin{gathered} T\imath_i \colon T \to Ts_i \text{ is a split monomorphism in } \mathsf{Fun}(\mathcal{C},\mathcal{A}) \text{ and } \\ \mathrm{sdeg}(\mathrm{coker}(T\imath_i \colon T \to Ts_i)) \leq d-1, \end{gathered} \end{equation} for all $i$. In between these two definitions, there is the \emph{injective degree} $\mathrm{ideg}(T)$, where the condition that $T\imath_i$ is a split monomorphism in $\mathsf{Fun}(\mathcal{C},\mathcal{A})$ is weakened to the condition that $\mathrm{ker}(T\imath_i) = 0$. Another variation of the definition is inspired by the notion of weak degree (\emph{degr{\'e} faible}) introduced by Djament and Vespa~\cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}. Note that $\mathrm{ker}(T\imath_i \colon T \to Ts_i)$ is a subobject of $T$ in the abelian category $\mathsf{Fun}(\mathcal{C},\mathcal{A})$ for all $i$, and therefore so is the sum $\sum_i \mathrm{ker}(T\imath_i \colon T \to Ts_i)$, which we denote by $\kappa(T)$, following the notation of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}. We then define the \emph{weak degree} to be the largest function $\mathrm{wdeg}\colon \mathsf{Fun}(\mathcal{C},\mathcal{A}) \to \{-1,0,1,\ldots,\infty\}$ such that \[ \mathrm{wdeg}(T) = -1 \qquad\text{if and only if}\qquad \kappa(T) = T, \] and otherwise we have $\mathrm{wdeg}(T) \leq d$ if and only if $\mathrm{wdeg}(\mathrm{coker}(T\imath_i \colon T \to Ts_i)) \leq d-1$ for all $i$. 
\end{defn} \begin{rmk}\label{rmk:4-definitions-of-deg} A simple inductive argument shows that \[ \mathrm{wdeg}(T) \leq \mathrm{deg}(T) \leq \mathrm{ideg}(T) \leq \mathrm{sdeg}(T) \] for all functors $T\colon \mathcal{C} \to \mathcal{A}$. Moreover, if $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ has the property that each $\imath_i$ has a left-inverse, i.e., a natural transformation $\pi_i \colon s_i \to \mathrm{id}$ such that $\pi_i \circ \imath_i = \mathrm{id}_{\mathrm{id}}$, then all four types of degree are equal for all functors $T \colon \mathcal{C} \to \mathcal{A}$. \end{rmk} \begin{rmk}\label{rmk:gen-of-degree-under-composition} In \S\ref{sss:degree} below we discuss the question of when the degree of a functor $\mathcal{C} \to \mathcal{A}$ is preserved under precomposition, in the setting where $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathsf{s}}} \subset \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$.\footnote{In \S\ref{sec:functorial-configuration-spaces} we also set $\mathcal{A} = \ensuremath{\mathsf{Ab}}$, but the only reason for this is to preserve notational similarity with \cite{Palmer2018Twistedhomologicalstability}, and everything in that section generalises verbatim to the setting of an arbitrary abelian category $\mathcal{A}$.} That discussion extends easily to the setting of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, and also to the other variations of \emph{degree} defined above, so, for completeness, we mention the general statement here. Let $f = (f,\sigma,\psi) \colon \mathcal{C} \to \mathcal{D}$ be a morphism in $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. Lemma \ref{l:degree-under-composition} generalises to say that for any functor $T \colon \mathcal{D} \to \mathcal{A}$ we have $\ensuremath{\mathsf{x}}\mathrm{deg}(Tf) \leq \ensuremath{\mathsf{x}}\mathrm{deg}(T)$, with equality if $f$ is essentially surjective on objects and $\sigma$ is surjective, for $\ensuremath{\mathsf{x}} \in \{ \varnothing, \text{i}, \text{s} \}$. 
For the weak degree we have $\text{wdeg}(Tf) \leq \text{wdeg}(T)$ if $\sigma$ is surjective, and we have equality $\text{wdeg}(Tf) = \text{wdeg}(T)$ if $\sigma$ is bijective and $f$ is essentially surjective on objects. We may then generalise Definition \ref{def:braidable} by saying that an object $(\mathcal{C},I,s,\imath)$ of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ is \emph{braidable} if there are certain natural isomorphisms $\Psi_i \colon s_i \circ s_i \to s_i \circ s_i$ for each $i \in I$. Corollary \ref{coro:braidable} generalises exactly as stated to objects of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. \end{rmk} \begin{rmk}\label{rmk:degree-under-composition-DV} The above remark, specialised to the setting of Djament and Vespa (see below) and with $\ensuremath{\mathsf{x}} = \varnothing$, recovers Proposition 1.7 of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}. With $\ensuremath{\mathsf{x}} = \text{w}$, it implies the analogous statement for the weak degree of functors from a monoidal category with initial unit object. In the notation of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, this says that if $\alpha \colon \mathcal{M} \to \mathcal{M}^\prime$ is a strict monoidal functor between strict monoidal categories whose unit objects are initial, and if $\alpha$ is moreover surjective on objects, then it induces a functor $\mathcal{P}\mathit{ol}_n(\mathcal{M}^\prime,\mathcal{A}) \to \mathcal{P}\mathit{ol}_n(\mathcal{M},\mathcal{A})$. \end{rmk} \subsection{Specialising to the setting of Djament and Vespa.}\label{para:degree-DV} In the article \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, Djament and Vespa work with the category $\ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}}$ whose objects are small strict symmetric monoidal categories whose unit object is initial, and whose morphisms are strict monoidal functors. 
Now, one may define a functor \[ \Psi \colon \ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}} \longrightarrow \ensuremath{\mathsf{Cat}_{\mathsf{st}}} \] as follows: the underlying category of $\Psi(\mathcal{M})$ is just $\mathcal{M}$ and the indexing set for the collection of endofunctors is the set $\mathrm{ob}(\mathcal{M})$ of objects of $\mathcal{M}$. For each $x \in \mathrm{ob}(\mathcal{M})$, the endofunctor $s_x \colon \mathcal{M} \to \mathcal{M}$ is $x \oplus -$ and the natural transformation $\imath_x \colon \mathrm{id} \to s_x$ consists of the morphisms $i_x \oplus \mathrm{id}_y \colon y = 0 \oplus y \to x \oplus y$, where $i_x \colon 0 \to x$ is the unique morphism from the initial object $0$ to $x$. If $F \colon \mathcal{M} \to \mathcal{N}$ is a strict monoidal functor, then $\Psi(F) \colon \Psi(\mathcal{M}) \to \Psi(\mathcal{N})$ is simply the functor $F$, together with the function $\mathrm{ob}(F)$ from the indexing set $\mathrm{ob}(\mathcal{M})$ of $\Psi(\mathcal{M})$ to the indexing set $\mathrm{ob}(\mathcal{N})$ of $\Psi(\mathcal{N})$, and the natural isomorphisms are identities. Given $\mathcal{M} \in \mathrm{ob}(\ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}})$, an abelian category $\mathcal{A}$ and a functor $T \colon \mathcal{M} \to \mathcal{A}$, we may view $\mathcal{M}$ as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ via the functor $\Psi$, and therefore obtain notions of \emph{degree} $\mathrm{deg}(T)$ and \emph{weak degree} $\mathrm{wdeg}(T)$. These coincide with the definitions of \emph{strong degree} and \emph{weak degree}, introduced in \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, respectively (\textit{cf}.\ D{\'e}finition 1.5 for the strong degree, and for the weak degree see D{\'e}finitions 1.10, 1.16 and 1.22, as well as Proposition 1.19, which provides the key property -- using the notation of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} -- that $\delta_x$ and $\pi_\mathcal{M}$ commute). 
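\begin{rmk} For a concrete illustration of these definitions -- a standard example, spelled out here in the conventions of \S\ref{para:gen-def-degree} rather than taken from \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} -- consider the category $\mathrm{FI}$ of finite sets and injections, with monoidal structure given by disjoint union. Its unit object $\varnothing$ is initial, so $\mathrm{FI}$ is an object of $\ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}}$. Let $T \colon \mathrm{FI} \to \ensuremath{\mathsf{Ab}}$ be the functor sending a finite set $S$ to the free abelian group $\mathbb{Z}S$. For each object $x$, the natural transformation $T\imath_x$ is the evident split monomorphism, with constant cokernel: \[ \mathrm{coker}\bigl( T\imath_x \colon \mathbb{Z}S \longrightarrow \mathbb{Z}(x \sqcup S) \bigr) \;\cong\; \mathbb{Z}x , \] naturally in $S$. A non-zero constant functor has degree $0$, since each of its structure maps $T\imath_x$ is an identity with vanishing cokernel; hence $\mathrm{deg}(T) = 1$, and in fact all four variants of the degree agree in this example. \end{rmk}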
The article \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} in fact sets up a detailed theory of \emph{weak polynomial functors} (those with finite weak degree) by considering the quotient category $\mathsf{Fun}(\mathcal{M},\mathcal{A})/\ensuremath{\mathcal{S}\mathrm{n}}(\mathcal{M},\mathcal{A})$, where $\ensuremath{\mathcal{S}\mathrm{n}}(\mathcal{M},\mathcal{A})$ is the full subcategory of functors $T$ with $\mathrm{wdeg}(T)<0$. Since the notion of weak degree may be described very generally, whenever the source category is an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, it may be interesting to try to export this theory from $\ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}}$ to other settings to which the general definition for $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ specialises, such as twisted coefficient systems for mapping class groups (\textit{cf}.\ \S\ref{para:degree-mcg} below). \begin{rmk} We note that the construction $\Psi$ above does not use the symmetry of $\mathcal{M} \in \ensuremath{\mathcal{M}\mathrm{on}_{\mathit{ini}}}$, and in fact works equally well for any strict monoidal category whose unit object is initial. Another remark is that, if the unit object of $\mathcal{M}$ is null, i.e., initial and terminal, then the natural transformations $\imath_x \colon \mathrm{id} \to s_x$ have left-inverses $\pi_x \colon s_x \to \mathrm{id}$ given by the morphisms $t_x \oplus \mathrm{id}_y \colon x \oplus y \to 0 \oplus y = y$, where $t_x \colon x \to 0$ is the unique morphism from $x$ to the terminal object $0$. So for functors $\mathcal{M} \to \mathcal{A}$ from a monoidal category with null unit object, the three types of degree coincide, by Remark \ref{rmk:4-definitions-of-deg}. 
\end{rmk} \begin{rmk}[\textit{Modules over monoidal categories.}]\label{rmk:modules-over-monoidal-categories} Recall that a strict left-module over a strict monoidal category $\mathcal{M}$ is a category $\mathcal{C}$ and a functor ${\oplus} \colon \mathcal{M} \times \mathcal{C} \to \mathcal{C}$ such that ${\oplus} \circ (\mathbf{1}_{\mathcal{M}} \times \mathrm{id}_{\mathcal{C}}) = \mathrm{id}_{\mathcal{C}}$ and ${\oplus} \circ (\mathrm{id}_{\mathcal{M}} \times {\oplus}) = {\oplus} \circ ({\oplus} \times \mathrm{id}_{\mathcal{C}})$, where $\mathbf{1}_{\mathcal{M}} \colon * \to \mathcal{M}$ takes the unique object to the unit object $I_{\mathcal{M}}$ of $\mathcal{M}$. If $I_{\mathcal{M}}$ is initial in $\mathcal{M}$, then any strict left-module $\mathcal{C}$ over $\mathcal{M}$ naturally has the structure of an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, generalising exactly the construction above, which is the case of $\mathcal{M}$ as a module over itself: the indexing set is $\mathrm{ob}(\mathcal{M})$, the endofunctors are defined by $x \oplus -$ and the natural transformations are formed using the fact that $I_{\mathcal{M}}$ is initial.\footnote{For the author, the idea of generalising from monoidal categories to modules over monoidal categories came from a conversation in 2015 with Aur{\'e}lien Djament.} \end{rmk} \subsection{Partial braid categories.}\label{para:partial-braid-categories} In \S\ref{sec:functorial-configuration-spaces} below we define another functor \[ \mathcal{B} \colon \ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}} \longrightarrow \ensuremath{\mathsf{Cat}_{\mathsf{s}}} \subset \ensuremath{\mathsf{Cat}_{\mathsf{st}}} \] sending a manifold $M$ (equipped with a collar neighbourhood and a basepoint on its boundary) and a space $X$ to the (labelled) \emph{partial braid category} $\mathcal{B}(M,X)$, whose objects are the non-negative integers.
See \S\S\ref{sss:some-categories} and \ref{sss:some-functors} for the full details of this construction (alternatively \S\ref{para:degree-WRW} for a description of the underlying category $\mathcal{B}(M,X)$, without the functoriality or the structure as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$). For the next few paragraphs we will denote this object instead by $\mathcal{B}(M,X)^{\ensuremath{\dagger}}$ in order to distinguish it from a different structure (which we will define next) on the same underlying category, also making it into an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. For $n\geq 2$ let $\mathbb{D}^n$ denote the closed unit disc in Euclidean $n$-space, equipped with a collar neighbourhood and basepoint on its boundary. This is an object of $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$, and for any $X \in \ensuremath{\mathsf{Top}_{\circ}}$ the category $\mathcal{B}(\mathbb{D}^n,X)$ can be made into a strict monoidal category with the number zero as its (null) unit object. For any object $M$ of $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$ of dimension $n$, the category $\mathcal{B}(M,X)$ then has the structure of a strict left-module over $\mathcal{B}(\mathbb{D}^n,X)$. Both the monoidal and the module structure are induced by the operation of boundary connected sum of two manifolds in $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$. Thus, by Remark \ref{rmk:modules-over-monoidal-categories} above, there is another structure on $\mathcal{B}(M,X)$ making it into an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, coming from this module structure. Denote this object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ by $\mathcal{B}(M,X)^{\ensuremath{\ddagger}}$. 
The objects $\mathcal{B}(M,X)^{\ensuremath{\dagger}}$ and $\mathcal{B}(M,X)^{\ensuremath{\ddagger}}$ of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ have the same underlying category $\mathcal{B}(M,X)$, so any functor $T \colon \mathcal{B}(M,X) \to \mathcal{A}$ has a degree with respect to each of these structures; denote these by $\mathrm{deg}^{\ensuremath{\dagger}}(T)$ and $\mathrm{deg}^{\ensuremath{\ddagger}}(T)$ respectively. \begin{prop}\label{p:two-degrees-agree} In this setting, for any functor $T \colon \mathcal{B}(M,X) \to \mathcal{A}$, we have $\mathrm{deg}^{\ensuremath{\dagger}}(T) = \mathrm{deg}^{\ensuremath{\ddagger}}(T)$. \end{prop} We will prove this as a corollary of a more general statement about modules over monoidal categories. For any object $(\mathcal{C},I,s,\imath) \in \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ and functor $T \colon \mathcal{C} \to \mathcal{A}$, we have a degree $\mathrm{deg}(T)$. But for any element $x \in I$ we may also forget part of the structure, considering just the object $(\mathcal{C},s_x,\imath_x) \in \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$, and compute the degree of $T$ with respect to this structure -- denote this by $\mathrm{deg}^x(T)$. An easy inductive argument shows that $\mathrm{deg}^x(T) \leq \mathrm{deg}(T)$. \begin{prop}\label{p:two-degrees-agree-2} Let $\mathcal{C}$ be a strict left-module over a strict braided monoidal category $\mathcal{M}$, whose unit object $I_{\mathcal{M}}$ is null, and which is generated by $x \in \mathrm{ob}(\mathcal{M})$, in the sense that every object of $\mathcal{M}$ is isomorphic to $x^{\oplus n}$ for some non-negative integer $n$. Consider $\mathcal{C}$ as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ as in Remark \ref{rmk:modules-over-monoidal-categories} and let $T \colon \mathcal{C} \to \mathcal{A}$ be a functor. Then $\mathrm{deg}^x(T) = \mathrm{deg}(T)$. 
\end{prop} We prove an analogous comparison result for heights in Proposition \ref{prop:compare-two-heights-special-case}. See Remark \ref{rmk:summary} for a summary of how these facts are related. Also see Remark \ref{rmk:pre-braided} for generalisations of Proposition \ref{p:two-degrees-agree-2} and references to related results. \begin{proof}[Proof of Proposition \ref{p:two-degrees-agree}] First note that the monoidal category $\mathcal{B}(\mathbb{D}^n,X)$ is braided (since $n\geq 2$) and is generated by the object $1$. Thus the category $\mathcal{C} = \mathcal{B}(M,X)^{\ensuremath{\ddagger}}$ satisfies the hypotheses of Proposition \ref{p:two-degrees-agree-2}, which implies that $\mathrm{deg}^{\ensuremath{\ddagger}}(T) = \mathrm{deg}(T) = \mathrm{deg}^{1}(T) = \mathrm{deg}^{\ensuremath{\dagger}}(T)$.\footnote{For the final equality $\mathrm{deg}^{1}(T) = \mathrm{deg}^{\ensuremath{\dagger}}(T)$ to be valid, one has to be slightly more precise with the definition of the structure of $\mathcal{B}(M,X)$ as a module over $\mathcal{B}(\mathbb{D}^n,X)$: it must be induced by the boundary connected sum between $\mathbb{D}^n$ and $M$, \emph{using the component of $\partial M$ containing the basepoint}.} \end{proof} To prove Proposition \ref{p:two-degrees-agree-2}, we first establish a lemma, which will allow us to apply Corollary \ref{coro:braidable} from \S\ref{sec:functorial-configuration-spaces} below in the present setting. Let $\mathcal{C}$ be as in the statement of the proposition, considered as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, i.e., equipped with an endofunctor $y \oplus -$ and a natural transformation $\imath_y \colon \mathrm{id}_{\mathcal{C}} \Rightarrow y \oplus -$ for each object $y$ of $\mathcal{M}$. Write $\mathcal{C}^x$ for the object $(\mathcal{C},x \oplus -,\imath_x)$ of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$, where we have forgotten all but one of the endofunctors.
(For example if $\mathcal{C} = \mathcal{B}(M,X)^{\ensuremath{\ddagger}}$ and $x = 1$ then $\mathcal{C}^x = \mathcal{B}(M,X)^{\ensuremath{\dagger}}$.) \begin{lem}\label{lem:braidable} The object $\mathcal{C}^x \in \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ is braidable in the sense of Definition \ref{def:braidable}. \end{lem} \begin{proof} We need to find a certain natural automorphism $\Psi$ of the functor $s \circ s = x \oplus x \oplus -$. Note that $\imath * \mathrm{id}_s$ and $\mathrm{id}_s * \imath$ are the natural transformations $s \Rightarrow s \circ s$ consisting of the morphisms $x \oplus c \to x \oplus x \oplus c$, for each $c \in \mathrm{ob}(\mathcal{C})$, given by the matrices \begin{equation}\label{eq:two-matrices} \left(\, \begin{matrix} 0 & 0 \\ \mathrm{id}_x & 0 \\ 0 & \mathrm{id}_c \end{matrix} \,\right) \qquad\text{and}\qquad \left(\, \begin{matrix} \mathrm{id}_x & 0 \\ 0 & 0 \\ 0 & \mathrm{id}_c \end{matrix} \,\right) \end{equation} respectively. We need to show that these differ by a natural automorphism $\Psi$. This may be constructed from the braiding of $\mathcal{M}$, as follows. Write $i$ for the inclusion $\mathcal{C} \to \mathcal{M} \times \mathcal{M} \times \mathcal{C}$ given by $c \mapsto (x,x,c)$ and write $f$ for the flip functor $\mathcal{M} \times \mathcal{M} \to \mathcal{M} \times \mathcal{M}$ given by $(y,z) \mapsto (z,y)$. Then the braiding of $\mathcal{M}$ is a natural isomorphism $b \colon {\oplus} \Rightarrow {\oplus} \circ f \colon \mathcal{M} \times \mathcal{M} \to \mathcal{M}$. Taking products with $\mathcal{C}$ and identities, this induces a natural isomorphism $b \times \mathrm{id} \colon {\oplus} \times \mathrm{id}_{\mathcal{C}} \Rightarrow ({\oplus} \circ f) \times \mathrm{id}_{\mathcal{C}} \colon \mathcal{M} \times \mathcal{M} \times \mathcal{C} \to \mathcal{M} \times \mathcal{C}$. Then we may take $\Psi$ to be the automorphism $\oplus * (b \times \mathrm{id}) * i$ of $x \oplus x \oplus -$. 
Diagrammatically: \begin{equation}\label{eq:natural-aut} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (l) at (-5,0) {$\mathcal{C}$}; \node (ml) at (20,0) {$\mathcal{M} \times \mathcal{M} \times \mathcal{C}$}; \node (m) at (60,-10) {$\mathcal{M} \times \mathcal{M} \times \mathcal{C}$}; \node (mr) at (100,0) {$\mathcal{M} \times \mathcal{C}$}; \node (r) at (120,0) {$\mathcal{C}.$}; \draw[->] (l) to node[above,font=\small]{$i$} (ml); \draw[->] (ml.east) to node[above,font=\small]{$\oplus \times \mathrm{id}_\mathcal{C}$} (mr.west); \draw[->] (ml.south east) to[out=-45,in=180] (m.west); \draw[->] (m.east) to[out=0,in=225] (mr.south west); \node at (88,-8) [anchor=west] {$\oplus \times \mathrm{id}_\mathcal{C}$}; \node at (36,-8) [anchor=east] {$f \times \mathrm{id}_\mathcal{C}$}; \draw[->] (mr) to node[above,font=\small]{$\oplus$} (r); \draw[double,double equal sign distance,-implies] (60,-2) to node[right,font=\small]{$b \times \mathrm{id}$} (m); \end{tikzpicture} \end{split} \end{equation} In components, we may write this as the collection of morphisms $b_{x,x} \oplus \mathrm{id}_c$ for $c \in \mathrm{ob}(\mathcal{C})$, where $b_{x,x}$ denotes the braiding of $\mathcal{M}$ on the object $x$. The fact that $\imath * \mathrm{id}_s$ and $\mathrm{id}_s * \imath$ differ by $\Psi$ then follows from the equation: \begin{equation}\label{eq:natural-aut-equation} \left(\, \begin{array}{ccc} \multicolumn{2}{c}{\multirow{2}{*}{$b_{x,x}$}} & 0 \\ && 0 \\ 0 & 0 & \mathrm{id}_c \end{array} \,\right) \cdot \left(\, \begin{matrix} \mathrm{id}_x & 0 \\ 0 & 0 \\ 0 & \mathrm{id}_c \end{matrix} \,\right) \quad = \quad \left(\, \begin{matrix} 0 & 0 \\ \mathrm{id}_x & 0 \\ 0 & \mathrm{id}_c \end{matrix} \,\right) , \end{equation} where we are using the matrix notation of \eqref{eq:two-matrices}. 
\end{proof} \begin{proof}[Proof of Proposition \ref{p:two-degrees-agree-2}] It is always true that $\mathrm{deg}^x(T) \leq \mathrm{deg}(T)$, as observed just before the statement of Proposition \ref{p:two-degrees-agree-2}. So we just need to prove, for all $d\geq -1$, that, if $\mathrm{deg}^x(T) \leq d$, then $\mathrm{deg}(T) \leq d$. The proof will be by induction on $d$. The base case, when $d=-1$, is clear, since both statements are equivalent to $T$ being equal to the zero functor. Now let $d\geq 0$ and assume by induction that the implication is true for smaller values of $d$. We assume that $\mathrm{deg}^x(T) \leq d$ and we need to show that $\mathrm{deg}(T) \leq d$. We showed in Lemma~\ref{lem:braidable} that $\mathcal{C}^x = (\mathcal{C},s_x,\imath_x) \in \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ is braidable, so Corollary \ref{coro:braidable} implies that $\mathrm{deg}^x(T \circ s_x) \leq \mathrm{deg}^x(T)$. Here we are writing $s_x$ as shorthand for $x \oplus -$. Iterating this argument, we see that \[ \mathrm{deg}^x(T \circ (s_x)^i) \leq \mathrm{deg}^x(T) \leq d \] for all $i\geq 0$. By the recursive definition of $\mathrm{deg}^x(-)$, this means that \[ \mathrm{deg}^x \bigl( \mathrm{coker} \bigl( T \circ (s_x)^i * \imath_x \colon T \circ (s_x)^i \longrightarrow T \circ (s_x)^{i+1} \bigr) \bigr) \leq d-1 \] for all $i\geq 0$. Since the unit object $I_{\mathcal{M}}$ of $\mathcal{M}$ is null, not just initial, we know that the natural transformations $\imath_y$, for objects $y \in \mathrm{ob}(\mathcal{M})$, are all split-injective. Now, for any $n\geq 0$, the natural transformation $T * \imath_{x^{\oplus n}}$ is equal to the composition \[ \bigl( T \circ (s_x)^{n-1} * \imath_x \bigr) \circ \quad\cdots\cdots\quad \circ \bigl( T \circ (s_x)^2 * \imath_x \bigr) \circ \bigl( T \circ s_x * \imath_x \bigr) \circ \bigl( T * \imath_x \bigr) . 
\] This is a composition of split-injective morphisms in the abelian category $\mathsf{Fun}(\mathcal{C},\mathcal{A})$, so we have \begin{equation}\label{eq:decomposition-of-cokernels} \mathrm{coker}(T * \imath_{x^{\oplus n}}) \;\cong\; \bigoplus_{i=0}^{n-1} \; \mathrm{coker}(T \circ (s_x)^i * \imath_x) \end{equation} and hence \[ \mathrm{deg}^x(\mathrm{coker}(T * \imath_{x^{\oplus n}})) \;=\; \max_{i=0,\ldots,n-1} \bigl( \mathrm{deg}^x (\mathrm{coker}(T \circ (s_x)^i * \imath_x)) \bigr) \leq d-1. \] Now let $y$ be any object of $\mathcal{M}$. By assumption, $y$ is isomorphic to $x^{\oplus n}$ for some $n\geq 0$. Thus there is a natural isomorphism $\Phi \colon T \circ s_{x^{\oplus n}} \to T \circ s_y$ such that $\Phi \circ (T * \imath_{x^{\oplus n}}) = T * \imath_y$, and so \[ \mathrm{coker}(T * \imath_y) \;\cong\; \mathrm{coker}(T * \imath_{x^{\oplus n}}). \] Thus, for any object $y$ of $\mathcal{M}$, we have $\mathrm{deg}^x(\mathrm{coker}(T * \imath_y)) \leq d-1$. By the inductive hypothesis we therefore also have, for any $y \in \mathrm{ob}(\mathcal{M})$, \[ \mathrm{deg}(\mathrm{coker}(T * \imath_y)) \leq d-1. \] By the recursive definition of $\mathrm{deg}(-)$, this implies that $\mathrm{deg}(T) \leq d$. \end{proof} \begin{rmk}[\textit{Generalisations.}]\label{rmk:pre-braided}\footnote{The author would like to thank Aur{\'e}lien Djament for pointing out an error in an earlier version of this remark.} For Lemma \ref{lem:braidable}, and thus for Proposition \ref{p:two-degrees-agree-2}, it is possible to weaken the assumption that $\mathcal{M}$ is braided to the assumption that it is \emph{pre-braided} (a notion that was introduced in \cite[Definition 1.5]{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}). 
By definition, this means that its underlying groupoid $\mathcal{M}^{\sim}$ is braided and the braiding $b_{x,y} \colon x \oplus y \to y \oplus x$ of $\mathcal{M}^{\sim}$ satisfies the equation \begin{equation}\label{eq:pre-braided} \left(\, \begin{array}{cc} \multicolumn{2}{c}{\multirow{2}{*}{$b_{x,y}$}} \\ & \end{array} \,\right) \cdot \left(\, \begin{matrix} \mathrm{id}_x \\ 0 \end{matrix} \,\right) \quad = \quad \left(\, \begin{matrix} 0 \\ \mathrm{id}_x \end{matrix} \,\right) \; \colon \; x \longrightarrow y \oplus x, \end{equation} for any two objects $x,y$ of $\mathcal{M}$. The existence of the braiding on $\mathcal{M}^{\sim}$ allows one to construct the automorphism $\Psi$ (replace each appearance of $\mathcal{M}$ with $\mathcal{M}^{\sim}$ in the diagram \eqref{eq:natural-aut}) and the relation \eqref{eq:pre-braided} implies the relation \eqref{eq:natural-aut-equation}. By the same reasoning, we could dually weaken the assumption that $\mathcal{M}$ is braided to the assumption that it is \emph{pre\textsuperscript{op}-braided}, meaning that $\mathcal{M}^{\sim}$ is braided and its braiding satisfies the equation \begin{equation}\label{eq:preop-braided} \left(\, \begin{array}{cc} \multicolumn{2}{c}{\multirow{2}{*}{$b_{x,y}$}} \\ & \end{array} \,\right) \cdot \left(\, \begin{matrix} 0 \\ \mathrm{id}_y \end{matrix} \,\right) \quad = \quad \left(\, \begin{matrix} \mathrm{id}_y \\ 0 \end{matrix} \,\right) \; \colon \; y \longrightarrow y \oplus x, \end{equation} for any two objects $x,y$ of $\mathcal{M}$. The assumption that $I_{\mathcal{M}}$ is null in Proposition \ref{p:two-degrees-agree-2} was convenient to make the homological algebra simpler, by giving us the decomposition \eqref{eq:decomposition-of-cokernels}, but we expect the proposition to hold more generally whenever $I_{\mathcal{M}}$ is initial (\textit{cf}.\ Proposition 1.8 of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}; see also Proposition 3.9 of \cite{Soulie2017LongMoodyconstruction}). 
One can of course also generalise this proposition to the setting in which $\mathcal{M}$ has a given \emph{set} of objects that generate it, instead of a single object (\textit{cf}.\ the two references just cited). When $I_{\mathcal{M}}$ is not null, the four versions of degree defined in \S\ref{para:gen-def-degree} do not necessarily coincide, so one may ask whether Proposition \ref{p:two-degrees-agree-2} is also true if deg is replaced by $\ensuremath{\mathsf{x}}\text{deg}$ for $\ensuremath{\mathsf{x}} \in \{ \text{i} , \text{s} , \text{w} \}$. For the weak degree ($\ensuremath{\mathsf{x}} = \text{w}$) this is true, by Proposition 1.24 of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} (their statement is for a symmetric monoidal category, rather than a left-module over a pre-braided monoidal category, but their methods should extend to this more general setting too), and for $\ensuremath{\mathsf{x}} = \text{i or s}$ the above proof goes through with minor modifications, making use of Remark \ref{rmk:gen-of-degree-under-composition}. \end{rmk} \subsection{Relation to the degree of Randal-Williams and Wahl.}\label{para:degree-WRW} In their paper \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}, Randal-Williams and Wahl use a notion of degree of twisted coefficient systems which is slightly different to that of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, and which they remark is inspired by the work of Dwyer~\cite{Dwyer1980Twistedhomologicalstability}, van der Kallen~\cite{Kallen1980Homologystabilitylinear} and Ivanov~\cite{Ivanov1993homologystabilityTeichmuller}. 
\paragraph*{Setting.} The starting point in \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} is a \emph{homogeneous category} $\mathcal{C}$ -- meaning a monoidal category whose unit object is initial, satisfying two axioms H1 and H2 described in Definition 1.3 of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} -- which is also \emph{pre-braided} -- see Remark \ref{rmk:pre-braided} above -- together with two objects $a$ and $x$ of $\mathcal{C}$. Let $\mathcal{C}_{a,x}$ denote the full subcategory on the objects $x^{\oplus m} \oplus a \oplus x^{\oplus n}$. There is an endofunctor of this category given by $x \oplus -$ and a natural transformation $\mathrm{id} \to (x \oplus -)$ since the unit of $\mathcal{C}$ is initial, so $\mathcal{C}_{a,x}$ is in this way an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}} \subset \ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. A twisted coefficient system in \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} is a functor $T \colon \mathcal{C}_{a,x} \to \mathcal{A}$. For each $N \geq 0$ they define a notion of \emph{degree at $N$} and \emph{split degree at $N$} for $T$ (see Definition 4.10); when $N=0$ these correspond to the injective degree and the split degree of $T$ as defined in \S\ref{para:gen-def-degree}. \begin{rmk}[\textit{Comparison to degree for modules over monoidal categories.\footnote{The author is grateful to Manuel Krannich for the observation that $\mathcal{C}_{a,x}$ can be viewed as a module over a monoidal category.}}] If we denote by $\mathcal{C}_x$ the full (monoidal) subcategory of $\mathcal{C}$ on the objects $x^{\oplus n}$ for $n\geq 0$, then $\mathcal{C}_{a,x}$ is a left-module over $\mathcal{C}_x$, so there is a notion of (injective, split, etc.) degree of functors $\mathcal{C}_{a,x} \to \mathcal{A}$ coming from Remark \ref{rmk:modules-over-monoidal-categories}. 
If the unit object of $\mathcal{C}$ is null,\footnote{We expect that this assumption is not necessary, since we expect that Proposition \ref{p:two-degrees-agree-2} should hold assuming only that $I_{\mathcal{M}}$ is initial, and also with deg replaced by either ideg or sdeg (the case of wdeg seems more subtle).} this exactly coincides with the degree (at $N=0$) of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}. To see this, note first that the degree of $T$ (at $N=0$) according to \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} is precisely $\mathrm{deg}^x(T)$ in the notation of Proposition \ref{p:two-degrees-agree-2}.\footnote{The four types of degree coincide since the unit object of $\mathcal{C}$ is null (\textit{cf}.\ Remark \ref{rmk:4-definitions-of-deg}), which is why we can write $\mathrm{deg}^x(T)$ instead of, say, $\mathrm{ideg}^x(T)$.} Then Proposition \ref{p:two-degrees-agree-2} plus Remark \ref{rmk:pre-braided} imply that this is equal to the degree of $T$ according to the structure of $\mathcal{C}_{a,x}$ as a module over $\mathcal{C}_x$. \end{rmk} \paragraph*{Injective braid categories.} As mentioned above (\S\ref{para:partial-braid-categories}), we define in \S\ref{sec:functorial-configuration-spaces} below the \emph{partial braid category} $\mathcal{B}(M,X)$ associated to a manifold $M$ and space $X$, which is naturally a category with stabiliser, in other words, an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. It may be described as follows: its objects are finite subsets $c$ of $M$ equipped with a function $\ell \colon c \to X$ (``labelled by $X$''). A morphism from $\ell \colon c \to X$ to $m \colon d \to X$ is a \emph{braid between subconfigurations of $c$ and $d$ labelled by paths in $X$}. 
More precisely, it is a path $\gamma$ in the configuration space $C_k(M,X)$ for some integer $k$, up to endpoint-preserving homotopy, such that $\gamma(0)$ is the restriction of $\ell$ to some subset of $c$ and $\gamma(1)$ is the restriction of $m$ to some subset of $d$. In fact, this defines a slightly larger category $\hat{\mathcal{B}}(M,X)$, of which $\mathcal{B}(M,X)$ is a skeleton. Both $M$ and $X$ are assumed to be path-connected, so the isomorphism classes of the objects $\ell \colon c \to X$ of $\hat{\mathcal{B}}(M,X)$ are determined by the cardinality $\lvert c \rvert$. Then $\mathcal{B}(M,X)$ is the full subcategory on the objects $c_n \to \{x_0\} \subseteq X$, where $x_0$ is the basepoint of $X$ and $c_n$ is a certain nested sequence of subsets of $M$ of cardinality $n$. We may therefore think of the objects of $\mathcal{B}(M,X)$ as the non-negative integers. See \S\S\ref{sss:some-categories} and \ref{sss:some-functors} for more details, including the functoriality of this definition with respect to $M$ and $X$ and the structure making $\mathcal{B}(M,X)$ into an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. There is a subcategory of $\mathcal{B}(M,X)$, denoted $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ and called the \emph{injective braid category}, also with the non-negative integers as objects, but with only those morphisms (using the description of the previous paragraph) where $\gamma(0)=\ell$. Morphisms in $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ may be thought of as ``fully-defined injective braids on $M$'', whereas those in $\mathcal{B}(M,X)$ are ``partially-defined injective braids on $M$''. The stabiliser (endofunctor plus natural transformation) of $\mathcal{B}(M,X)$ restricts to $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$, making it into a subobject in the category $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. 
The simplest example corresponds to taking $X$ a point and $M=\mathbb{R}^n$ for $n\geq 3$, in which case $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ is equivalent to the category FI of finite sets and injections, and $\mathcal{B}(M,X)$ is equivalent to the category $\text{FI}\sharp$ of finite sets and partially-defined injections. \paragraph*{Which braid categories are homogeneous?} One may wonder whether the categories $\mathcal{B}(M,X)$ and $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ are pre-braided homogeneous. First of all, if $M$ splits as $M = \mathbb{R} \times M^\prime$, they are both monoidal with initial unit object, and if moreover $M^\prime$ also splits as $M^\prime = \mathbb{R} \times M^{\prime\prime}$ they are braided (and hence pre-braided). The category $\mathcal{B}(M,X)$ is, however, never homogeneous: it fails axiom H1 for homogeneity. On the other hand, the category $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ always satisfies axiom H1, and it satisfies axiom H2 if and only if $M = \mathbb{R}^2 \times M^\prime$ has dimension at least $3$, i.e., $\mathrm{dim}(M^\prime) \geq 1$. In particular, the category $\ensuremath{\mathcal{B}_{\mathsf{f}}}(\mathbb{R}^2)$ is not homogeneous. The ``natural'' pre-braided homogeneous category whose automorphism groups are the braid groups is denoted $U\beta$ in \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}, and comes with a natural functor $U\beta \to \ensuremath{\mathcal{B}_{\mathsf{f}}}(\mathbb{R}^2)$. Using the graphical calculus for $U\beta$ described in \S 1.2 of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}, this functor may be described as taking a braid diagram representing a morphism of $U\beta$ and forgetting all strands with ``free'' ends.
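To make the difference between these two categories concrete -- a standard observation, which follows directly from the descriptions above -- note that it is already visible at the level of endomorphism monoids: \[ \mathrm{End}_{\mathrm{FI}}(n) \,=\, \mathrm{Aut}_{\mathrm{FI}}(n) \,\cong\, \Sigma_n , \qquad \mathrm{End}_{\text{FI}\sharp}(n) \,\cong\, \mathcal{I}_n , \] where $\mathcal{I}_n$ denotes the symmetric inverse monoid (also called the rook monoid) of all partially-defined injections of an $n$-element set into itself, which has $\sum_{k=0}^{n} \binom{n}{k}^{2} k!$ elements.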
\paragraph*{Comparison of twisted homological stability results.} As an aside, we discuss briefly the overlap between the twisted homological stability results of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} and those of \cite{Palmer2018Twistedhomologicalstability} (where this note originated). For the purposes of this paragraph, a sequence of (based, path-connected) spaces $X_n$ indexed by non-negative integers is \emph{homologically stable} if for each $i$ the group $H_i(X_n)$ is independent of $n$ (up to isomorphism) once $n$ is sufficiently large. (Given a sequence of groups $G_n$ we consider their classifying spaces $X_n = BG_n$.) If $\mathcal{C}$ is a category whose objects are non-negative integers and $\mathrm{Aut}_\mathcal{C}(n) = \pi_1(X_n)$, then a functor $T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}}$ determines a local coefficient system on each $X_n$, and the sequence is homologically stable \emph{with coefficients in $T$} if the corresponding local homology groups stabilise. Theorem A of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} says that the groups $\mathrm{Aut}_{\mathcal{C}}(a \oplus x^{\oplus n})$ are homologically stable with coefficients in any finite-degree twisted coefficient system on $\mathcal{C}_{a,x}$, as long as $\mathcal{C}$ is pre-braided homogeneous and a certain simplicial complex built out of $\mathcal{C}_{a,x}$ is highly-connected. Taking $\mathcal{C} = \ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ and $M = \mathbb{R}^2 \times M^\prime$ for a manifold $M^\prime$ of dimension at least one, we saw above that $\mathcal{C}$ is pre-braided homogeneous. Taking $a = 0$ and $x = 1$, we have $\mathcal{C}_{a,x} = \mathcal{C}$, which is equivalent to the category $\mathrm{FI}_G$ of \cite{SamSnowden2014RepresentationscategoriesG} with $G=\pi_1(M\times X)$. 
As noted in \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} (at the bottom of page 596), the associated simplicial complex is known to be highly-connected by a result of \cite{HatcherWahl2010Stabilizationmappingclass}, and so Theorem A of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} applies in this setting. In fact, it yields a particular case of their Theorem D, saying that the sequence of fundamental groups $G_n = \pi_1(C_n(M,X)) \cong \pi_1(M\times X) \wr \Sigma_n$ satisfies twisted homological stability for finite-degree coefficient systems on the category $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X) = U(\sqcup_n G_n)$. On the other hand, in this setting, Theorem A of \cite{Palmer2018Twistedhomologicalstability} says that the sequence of (\emph{non-aspherical}) spaces $C_n(M,X)$ satisfies twisted homological stability for finite-degree coefficient systems on the larger category $\mathcal{B}(M,X)$. If $M=S$ is a surface and $X=BG$ is an aspherical space, then the configuration spaces $C_n(S,BG)$ are also aspherical with fundamental groups $G_n = G\wr \beta_n^S$, where $\beta_n^S$ denotes the $n$th surface braid group. In this case Theorem A of \cite{Palmer2018Twistedhomologicalstability} says that this sequence of groups satisfies twisted homological stability for finite-degree coefficient systems on $\mathcal{B}(S,BG)$. In this setting, Theorem D of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} also says that this sequence of groups satisfies twisted homological stability, but for finite-degree coefficient systems on the category $U(\sqcup_n G_n)$. This is more general, since there is a natural functor $U(\sqcup_n G_n) \to \ensuremath{\mathcal{B}_{\mathsf{f}}}(S,BG) \subset \mathcal{B}(S,BG)$, and precomposition by this functor preserves the degree of twisted coefficient systems (\textit{cf}.\ Lemma \ref{l:degree-under-composition}). 
\begin{rmk} When $M$ has dimension greater than $2$ or when $X$ has non-trivial higher homotopy groups, the spaces $C_n(M,X)$ are not aspherical, so in this setting the twisted homological stability result of \cite{Palmer2018Twistedhomologicalstability} is not comparable to the results of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism}, since the latter paper is concerned only with sequences of \emph{groups}. On the other hand, the framework of \cite{Randal-WilliamsWahl2017Homologicalstabilityautomorphism} has been generalised by Krannich~\cite{Krannich2017Homologicalstabilitytopological} to a topological setting, which includes the setting of configuration spaces, even when they are not aspherical. See Remark 1.5 of \cite{Palmer2018Twistedhomologicalstability} for a comparison. \end{rmk} \subsection{Degree of twisted coefficient systems on mapping class groups.}\label{para:degree-mcg} Several different settings have been considered for twisted coefficient systems on mapping class groups and their degrees, all using the notion of ``split degree'' (or slight variations thereof) described in \S\ref{para:gen-def-degree}. We will describe and compare these different settings, using the language of \S\ref{para:gen-def-degree}, without defining the categories involved in full detail. There is a certain category $\mathcal{C}$, introduced by Ivanov~\cite{Ivanov1993homologystabilityTeichmuller}, whose objects are compact, connected, oriented surfaces $F$ equipped with an embedded arc in $\partial F$. Morphisms are, roughly, embeddings together with a path between the midpoints of the two arcs, all considered up to ambient isotopy. There is an endofunctor $t\colon \mathcal{C} \to \mathcal{C}$ and a natural transformation $\mathrm{id} \to t$ defined by Ivanov, which on objects takes the boundary connected sum with $F_{1,1}$, the torus with one boundary component.
There is another such endofunctor $a\colon \mathcal{C} \to \mathcal{C}$, introduced by Cohen and Madsen~\cite{CohenMadsen2009Surfacesinbackground}, which instead takes the boundary connected sum with an annulus. The coefficient systems of Ivanov are indexed on $\mathcal{C}$ and his degree is the \emph{split degree} (as defined in \S\ref{para:gen-def-degree}), considering $\mathcal{C}$ as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ using just the endofunctor $t$. Cohen and Madsen use a slight variation of $\mathcal{C}$ to index their coefficient systems, and their degree is again the split degree, but this time using both $t$ and $a$ to turn $\mathcal{C}$ into an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. As a side note, their definition in fact deviates very slightly from this. They do not require that the splittings of $T \to Tt$ and of $T \to Ta$ are functorial, i.e., they do not have to be natural transformations. They only require that $T(F) \to Tt(F)$ and $T(F) \to Ta(F)$ split for each $F$, and that these splittings are equivariant for the action of the automorphism group of the object $F$ in $\mathcal{C}$ (which is the mapping class group of $F$). In other words, $T \to Tt$ and $T \to Ta$ are only required to be split mono natural transformations after restricting $\mathcal{C}$ to the subcategory $\mathcal{C}_{\mathrm{aut}} \subset \mathcal{C}$ of all automorphisms in $\mathcal{C}$. Boldsen~\cite{Boldsen2012Improvedhomologicalstability} uses the same $\mathcal{C}$ as Cohen and Madsen and the same two endofunctors, and he also introduces another functor $p \colon \mathcal{C}(2) \to \mathcal{C}$, defined on a certain subcategory $\mathcal{C}(2)$ of $\mathcal{C}$ in which all objects have at least two boundary components, which glues a pair of pants onto two boundary components of a given surface. His coefficient systems are indexed on $\mathcal{C}$, as for Cohen and Madsen.
The endofunctors $t$ and $a$ turn $\mathcal{C}$ into an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$, and therefore give a notion of split degree. However, Boldsen's definition of degree is slightly stricter: the recursive condition \eqref{eq:split-degree-condition} is modified to say that $T \to Tt$ and $T \to Ta$ must be split mono with $\mathrm{sdeg}(\mathrm{coker}(T\to Tt)) \leq d-1$ and $\mathrm{sdeg}(\mathrm{coker}(T\to Ta)) \leq d-1$, and in addition $T|_{\mathcal{C}(2)} \to Tp$ must be split mono in $\mathsf{Fun}(\mathcal{C}(2),\mathcal{A})$. Randal-Williams and Wahl also consider mapping class groups as an example of their general twisted homological stability machine, and their setup is again slightly different to the previous settings. They consider two subcategories of $\mathcal{C}$ separately. One is the full subcategory on surfaces with any genus but a fixed number of boundary components, to which the endofunctor $t$ restricts. They then consider coefficient systems indexed on this subcategory, and define the split degree of such coefficient systems by using the restriction of $t$ to view the subcategory as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. (For simplicity we are taking $N=0$ in their definition of split degree.) Separately, they consider the subcategory on surfaces with a fixed genus and any number of boundary components, to which the endofunctor $a$ restricts. They then consider coefficient systems indexed on this subcategory, and define the split degree by using the restriction of $a$ to view it as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. Finally, they also consider a non-orientable analogue of Ivanov's category $\mathcal{C}$, whose objects are all non-orientable surfaces with a given fixed number of boundary components and any (non-orientable) genus.
This admits an endofunctor $m$ defined on objects by taking the boundary connected sum with a M{\"o}bius band, and they then consider coefficient systems indexed on this category, with the split degree defined by using $m$ to view it as an object of $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. \section{Vanishing of cross-effects}\label{sec:cross-effects} In this section, we give a general definition of the \emph{height} of a functor $\mathcal{C} \to \mathcal{A}$, for an abelian category $\mathcal{A}$ and a category $\mathcal{C}$ equipped with certain structure,\footnote{In fact, we give three definitions, each depending on a slightly different structure on $\mathcal{C}$, and show that they agree whenever two are defined (Lemma \ref{lem:three-definitions}).} and relate it to various notions of \emph{height} appearing in the literature, including that of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} (much of this section has been directly inspired by the definitions given in that paper). In particular, this encompasses the setting where $\mathcal{C}$ is monoidal and its unit object is either initial or terminal (see \S\ref{para:specialise-DV} and \S\ref{para:specialise-HPV}), and also the setting where $\mathcal{C}$ is any category equipped with a functor $\mathcal{I} \to \mathcal{C}$, where $\mathcal{I}$ is the category defined just below at the beginning of \S\ref{para:first-def} (see \S\ref{para:specialise-this-paper}). In \S\ref{para:compare-two-heights} we study the intersection between these two settings. This is analogous to \S\ref{para:partial-braid-categories} above (which is concerned with the intersection between two different ways of defining the \emph{degree} of a functor with source $\mathcal{C}$); see in particular Remark \ref{rmk:summary}. Throughout this section $\mathcal{A}$ will denote a fixed abelian category. 
In proofs we will often assume that $\mathcal{A}$ is a category of modules over a ring, so that its objects have elements, which is justified by the Freyd-Mitchell embedding theorem. \subsection{First definition.}\label{para:first-def} Let $\mathcal{I}$ be the category whose objects are the non-negative integers, and whose morphisms $m \to n$ are subsets of $\{ 1,\ldots,\mathrm{min}(m,n) \}$, with composition given by intersection. The endomorphism monoid $\mathrm{End}_{\mathcal{I}}(n)$ is denoted $\I{n}$, and is the monoid of subsets of $\ensuremath{\underline{n}} = \{1,\ldots,n\}$ under the operation $\cap$ with neutral element $\ensuremath{\underline{n}}$ itself. We will also think of $\I{n}$ as a category on the single object $\bullet$. There is an operation $\mathrm{cr}(-)$ that takes a functor $f\colon \I{n} \to \mathcal{A}$ as input and produces the following object of $\mathcal{A}$: \[ \mathrm{cr}(f) \;=\; \mathrm{im} \biggl(\, \sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} f(\ensuremath{\underline{n}}\smallsetminus S) \colon f(\bullet) \longrightarrow f(\bullet) \biggr) \] as output. Now suppose that we are given a category $\mathcal{C}$ equipped, for each $n\geq 0$, with a collection of functors $\{ f_j\colon \I{n}\to\mathcal{C} \}_{j\in J_n}$. Then the \emph{height} $\mathrm{ht}(T)\in\{-1,0,1,2,\ldots,\infty\}$ of a functor $T\colon\mathcal{C}\to\mathcal{A}$ is defined by the criterion that $\mathrm{ht}(T)\leq h$ if and only if for all $n>h$ and all $j\in J_n$, $\mathrm{cr}(T\circ f_j)=0$. \subsection{Second definition.}\label{para:second-def} Let $\J{n}$ denote the set of all subsets of $\ensuremath{\underline{n}}$, considered as a partially-ordered set -- and thus as a category -- under the relation of inclusion of subsets. There is an operation $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(-)$ taking a functor $f\colon \J{n}\to\mathcal{A}$ as input and producing the following object of $\mathcal{A}$: \[ \ensuremath{\,\overline{\! 
\mathrm{cr}\!}\,}(f) \;=\; \mathrm{coker} \biggl(\, \bigoplus_{S \subsetneq \ensuremath{\underline{n}}} f(S\hookrightarrow\ensuremath{\underline{n}}) \colon \bigoplus_{S \subsetneq \ensuremath{\underline{n}}} f(S) \longrightarrow f(\ensuremath{\underline{n}}) \biggr) \] as output. Now suppose that we are given a category $\mathcal{C}$ equipped, for each $n\geq 0$, with a collection of functors $\{ f_j\colon \J{n}\to\mathcal{C} \}_{j\in J_n}$. Then the \emph{height} $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}(T)\in\{-1,0,1,2,\ldots,\infty\}$ of a functor $T\colon\mathcal{C}\to\mathcal{A}$ is defined by the criterion that $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}(T)\leq h$ if and only if for all $n>h$ and all $j\in J_n$, $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(T\circ f_j)=0$. \subsection{Relationship between the definitions.}\label{para:two-definitions} There is a functor $z\colon \J{n}\to \I{n}$ given by sending each morphism $S\subseteq T$ in $\J{n}$ to the morphism $\ensuremath{\underline{n}}\smallsetminus (T\smallsetminus S)$ in $\I{n}$. (More generally, any lattice $L$ may be viewed as a monoid $L^{\wedge}$ under the meet operation, and there is an analogous functor $L\to L^{\wedge}$ if $L$ is a Boolean algebra.) This relates the two constructions above as follows: \begin{lem}\label{lem:two-definitions} For any functor $f\colon \I{n}\to\mathcal{A}$ the objects $\mathrm{cr}(f)$ and $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(f\circ z)$ are isomorphic. \end{lem} A category $\mathcal{C}$ equipped with collections of functors $\{\I{n}\to\mathcal{C}\}$ as in the first definition may be viewed via $z$ as a category equipped with collections of functors $\{\J{n}\to\mathcal{C}\}$ as in the second definition. 
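For concreteness, we record the elementary check that $z$ is indeed a functor; it uses only the fact that composition in $\I{n}$ is given by intersection. For $S\subseteq T\subseteq U$ in $\J{n}$ we have
\[
z(T\subseteq U)\circ z(S\subseteq T) \;=\; \bigl(\ensuremath{\underline{n}}\smallsetminus (U\smallsetminus T)\bigr) \cap \bigl(\ensuremath{\underline{n}}\smallsetminus (T\smallsetminus S)\bigr) \;=\; \ensuremath{\underline{n}}\smallsetminus (U\smallsetminus S) \;=\; z(S\subseteq U),
\]
since $U\smallsetminus T$ and $T\smallsetminus S$ are disjoint with union $U\smallsetminus S$; moreover $z(S\subseteq S) = \ensuremath{\underline{n}}\smallsetminus\varnothing = \ensuremath{\underline{n}}$ is the neutral element of $\I{n}$, so $z$ also preserves identities.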
Hence -- \emph{a priori} -- functors $T\colon\mathcal{C}\to\mathcal{A}$ have two possibly different heights, $\mathrm{ht}(T)$ and $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}(T)$. But the above lemma implies that these coincide, i.e.\ $\mathrm{ht}(T)=\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}(T)$. The second definition is therefore more general, reducing to the first definition in the special case where the given functors $\J{n}\to\mathcal{C}$ all factor through $z\colon \J{n}\to \I{n}$. \begin{proof}[Proof of Lemma \ref{lem:two-definitions}] This is proved exactly as Proposition 2.9 of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}. We will give the details here, in order to identify (for later; see \S\ref{para:semi-functors}) where we use the fact that $f$ preserves the identity. First of all we will show that: \begin{equation}\label{eq:ker-im-identity} \mathrm{ker}(g) \;=\; \sum_{S\subsetneq\ensuremath{\underline{n}}} \mathrm{im}(f(S)) \qquad\text{where}\qquad g = \displaystyle\sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} f(\ensuremath{\underline{n}}\smallsetminus S). \end{equation} \noindent $(\supseteq):$ Let $x=f(T)(y)$ for $y\in f(\bullet)$ and $T\subsetneq\ensuremath{\underline{n}}$. Choose $i\in\ensuremath{\underline{n}}\smallsetminus T$ and write \[ g(x) \;\;= \sum_{S\subseteq \ensuremath{\underline{n}}\smallsetminus\{i\}} \Bigl( (-1)^{\lvert S\rvert} f(\ensuremath{\underline{n}}\smallsetminus S)f(T)(y) + (-1)^{\lvert S\rvert +1} f((\ensuremath{\underline{n}}\smallsetminus S)\smallsetminus \{i\})f(T)(y) \Bigr) . \] Since $f$ preserves composition, we have $f(\ensuremath{\underline{n}}\smallsetminus S)f(T) = f(T\smallsetminus S)$ and $f((\ensuremath{\underline{n}}\smallsetminus S)\smallsetminus \{i\})f(T) = f((T\smallsetminus S)\smallsetminus\{i\})$; moreover $(T\smallsetminus S)\smallsetminus\{i\} = T\smallsetminus S$ because $i\notin T$, so the terms cancel pairwise and $x\in\mathrm{ker}(g)$. \noindent $(\subseteq):$ Suppose $x\in f(\bullet)$ and $g(x)=0$.
Since $f$ preserves the identity, i.e.\ $f(\ensuremath{\underline{n}})=\mathrm{id}$, we may write \begin{align*} x \;\;&= \sum_{\varnothing \neq S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert +1} f(\ensuremath{\underline{n}}\smallsetminus S)(x) \\ &= \sum_{S\subsetneq\ensuremath{\underline{n}}} (-1)^{n-\lvert S\rvert +1} f(S)(x) \quad \in \quad \sum_{S\subsetneq\ensuremath{\underline{n}}} \mathrm{im}(f(S)). \end{align*} \noindent Now note that the right-hand side of (the left-hand equation of) \eqref{eq:ker-im-identity} is equal to the image of \[ h = \bigoplus_{S\subsetneq\ensuremath{\underline{n}}} f(S) \colon \bigoplus_{S\subsetneq\ensuremath{\underline{n}}} f(\bullet) \longrightarrow f(\bullet). \] Hence we have $\mathrm{cr}(f) = \mathrm{im}(g) \cong f(\bullet)/\mathrm{ker}(g) = f(\bullet)/\mathrm{im}(h) = \mathrm{coker}(h) = \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(f\circ z)$. \end{proof} \subsection{Specialising to the setting of Djament and Vespa.}\label{para:specialise-DV} Let $\mathcal{C}$ be a symmetric monoidal category whose monoidal unit is null (simultaneously initial and terminal). In \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}, Djament and Vespa define the notion of a \emph{strong polynomial} functor $\mathcal{C}\to\mathcal{A}$ of \emph{degree} $d$. Their definition is recovered by the first definition of a \emph{functor of height $d$} above by equipping $\mathcal{C}$ with the following collections of functors $\{\I{n}\to\mathcal{C}\}$. Take $J_n$ to be the set of $n$-tuples $(x_1,\ldots,x_n)$ of objects of $\mathcal{C}$. The associated functor $\I{n}\to\mathcal{C}$ sends the unique object $\bullet$ to $\bigoplus_{i=1}^n x_i$ and a subset $S\subseteq\ensuremath{\underline{n}}$ to the endomorphism $\bigoplus_{i=1}^n \phi_i$ where $\phi_i=\mathrm{id}$ for $i\in S$ and $\phi_i=0$ otherwise. More generally, let $\mathcal{C}$ be a symmetric monoidal category whose monoidal unit is initial. 
The general definition of Djament and Vespa is for this setting, and corresponds to the second definition of a \emph{functor of height $d$} above by equipping $\mathcal{C}$ with the following collections of functors $\{\J{n}\to\mathcal{C}\}$. Take $J_n$ to be the set of $n$-tuples $(x_1,\ldots,x_n)$ of objects of $\mathcal{C}$ as before. The associated functor $\J{n}\to\mathcal{C}$ sends the object $S\subseteq\ensuremath{\underline{n}}$ to $\bigoplus_{i\in S}x_i$ and the inclusion $S\subseteq T$ to the canonical morphism $\bigoplus_{i\in S} x_i \cong \bigoplus_{i\in T} y_i \to \bigoplus_{i\in T} x_i$ where $y_i=x_i$ if $i\in S$ and $y_i=0$ otherwise. Of course, our general definition of a \emph{functor of height $d$} introduced above specialises very naturally to this setting as it was directly inspired by the work of Djament and Vespa.\footnote{To see that it specialises as claimed to the setting of Djament and Vespa, combine D{\'e}finition 2.1, Proposition 2.3, D{\'e}finition 2.6 and Proposition 2.9 of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}.} Soon we will generalise it slightly (\S\ref{para:semi-functors}) so that it also recovers the notion of \emph{height} used in \cite{Palmer2018Twistedhomologicalstability}. First we describe the dual of our second definition of height and specialise it to the setting of \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}. \subsection{Third definition.}\label{para:third-def} There is an operation $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(-)$ that takes a functor $f\colon \K{n}\to\mathcal{A}$ as input and produces the following object of $\mathcal{A}$: \[ \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(f) \;=\; \mathrm{ker} \biggl(\, \bigoplus_{S \subsetneq \ensuremath{\underline{n}}} f(S\hookrightarrow\ensuremath{\underline{n}}) \colon f(\ensuremath{\underline{n}}) \longrightarrow \bigoplus_{S \subsetneq \ensuremath{\underline{n}}} f(S) \biggr) \] as output. 
Suppose that we are given a category $\mathcal{C}$ equipped, for each $n\geq 0$, with a collection of functors $\{ f_j\colon \K{n}\to\mathcal{C} \}_{j\in J_n}$. The \emph{height} $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}^\prime(T)\in\{-1,0,1,2,\ldots,\infty\}$ of a functor $T\colon\mathcal{C}\to\mathcal{A}$ is defined by the criterion that $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}^\prime(T)\leq h$ if and only if for all $n>h$ and all $j\in J_n$, $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(T\circ f_j)=0$. \subsection{Relation between all three definitions.}\label{para:three-definitions} This may be related to the first and second definitions as follows. There is a functor $z^\prime\colon \K{n} = {\J{n}}^{\mathrm{op}}\to \I{n}$ given by sending each morphism $S\subseteq T$ in $\K{n}$ to the morphism $\ensuremath{\underline{n}}\smallsetminus (T\smallsetminus S)$ in $\I{n}$. Using this and the functor $z\colon \J{n}\to \I{n}$ from above, any functor $\I{n}\to\mathcal{A}$ induces functors $\J{n} \to \mathcal{A}$ and $\K{n} \to \mathcal{A}$. \begin{lem}\label{lem:three-definitions} For any functor $f\colon \I{n}\to\mathcal{A}$ we have isomorphisms $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(f\circ z) \cong \mathrm{cr}(f) \cong \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(f\circ z^\prime)$. \end{lem} A category $\mathcal{C}$ equipped with collections of functors $\{\I{n}\to\mathcal{C}\}$ as in the first definition may be viewed as a category equipped either with collections of functors $\{\J{n}\to\mathcal{C}\}$ as in the second definition or collections of functors $\{\K{n}\to\mathcal{C}\}$ as in the third definition. The above lemma implies that in this situation the three possible notions of height for functors $\mathcal{C}\to\mathcal{A}$ all coincide.
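As a simple illustration of Lemma \ref{lem:three-definitions}, take $n=1$: a functor $f\colon \I{1}\to\mathcal{A}$ amounts to an object $A=f(\bullet)$ equipped with an idempotent $e=f(\varnothing)\colon A\to A$, and unwinding the three constructions gives
\[
\mathrm{cr}(f) \;=\; \mathrm{im}(\mathrm{id}_A - e), \qquad \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(f\circ z) \;=\; \mathrm{coker}(e), \qquad \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(f\circ z^\prime) \;=\; \mathrm{ker}(e).
\]
All three are isomorphic to the second summand in the splitting $A\cong \mathrm{im}(e)\oplus\mathrm{ker}(e)$ determined by the idempotent $e$.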
\begin{proof}[Proof of Lemma \ref{lem:three-definitions}] By Lemma \ref{lem:two-definitions} it suffices to prove that $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(f\circ z) \cong \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(f\circ z^\prime)$, in other words: \begin{equation}\label{eq:coker-and-ker} \mathrm{coker} \biggl( \bigoplus_{S\subsetneq\ensuremath{\underline{n}}} f(S) \colon f(\bullet)^{2^n-1} \longrightarrow f(\bullet) \biggr) \;\cong\; \mathrm{ker} \biggl( \bigoplus_{S\subsetneq\ensuremath{\underline{n}}} f(S) \colon f(\bullet) \longrightarrow f(\bullet)^{2^n-1} \biggr) \end{equation} where we have written $f(\bullet)^{2^n-1}$ to denote $\bigoplus_{S\subsetneq\ensuremath{\underline{n}}}f(\bullet)$, indexed by the $2^n-1$ proper subsets of $\ensuremath{\underline{n}}$. Since the morphisms $f(S)$ are idempotent and pairwise commute, there is a decomposition \[ f(\bullet) \;\;\cong \bigoplus_{T\subseteq \mathcal{P}^\prime(n)} \biggl( \bigcap_{S\in T} \mathrm{ker} (f(S)) \cap \bigcap_{S\in\mathcal{P}^\prime(n)\smallsetminus T} \mathrm{im} (f(S)) \biggr) , \] where $\mathcal{P}^\prime(n)$ denotes the set of proper subsets of $\ensuremath{\underline{n}}$.\footnote{\textit{Cf}.\ the first part of the proof of Lemme 2.7 in \cite{CollinetDjamentGriffin2013Stabilitehomologiquepour}.} The direct sum of all components except the one corresponding to $T=\mathcal{P}^\prime(n)$ is equal to $\sum_{S\subsetneq\ensuremath{\underline{n}}} \mathrm{im}(f(S))$, so we have: \begin{align*} f(\bullet) \;&\cong \,\bigcap_{S\subsetneq\ensuremath{\underline{n}}} \mathrm{ker}(f(S)) \;\oplus\; \sum_{S\subsetneq\ensuremath{\underline{n}}} \mathrm{im}(f(S)) \\ &\cong \;\mathrm{ker} \biggl( \bigoplus_{S\subsetneq\ensuremath{\underline{n}}} f(S) \colon f(\bullet) \longrightarrow f(\bullet)^{2^n-1} \biggr) \;\oplus\; \mathrm{im} \biggl( \bigoplus_{S\subsetneq\ensuremath{\underline{n}}} f(S) \colon f(\bullet)^{2^n-1} \longrightarrow f(\bullet) \biggr) , \end{align*} which implies the isomorphism \eqref{eq:coker-and-ker}, as desired.
\end{proof} \subsection{Specialising to the setting of Hartl-Pirashvili-Vespa.}\label{para:specialise-HPV} Let $\mathcal{C}$ be a monoidal category whose monoidal unit is null, and which is not necessarily symmetric. In \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}, Hartl, Pirashvili and Vespa define the notion of a \emph{polynomial} functor $\mathcal{C}\to\mathcal{A}$ of \emph{degree} $d$. (When $\mathcal{C}$ is symmetric it agrees with the definition of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux}.) Their definition is recovered by our third definition of a \emph{functor of height $d$} by equipping $\mathcal{C}$ with the following collections of functors $\{\K{n}\to\mathcal{C}\}$.\footnote{See Definition 3.6 and Proposition 3.3 of \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}.} As before, take $J_n$ to be the set of $n$-tuples $(x_1,\ldots,x_n)$ of objects of $\mathcal{C}$. The associated functor $\K{n}\to\mathcal{C}$ sends the object $S\subseteq\ensuremath{\underline{n}}$ to the object $\bigoplus_{i\in S}x_i$ and the inclusion $S\subseteq T$ to the canonical morphism $\bigoplus_{i\in T} x_i \to \bigoplus_{i\in T} y_i \cong \bigoplus_{i\in S} x_i$ where $y_i=x_i$ if $i\in S$ and $y_i=0$ otherwise. Note: since $\mathcal{C}$ is not assumed to be symmetric, to correctly define $\bigoplus_{i\in S}x_i$ we must consider $\ensuremath{\underline{n}}$ as a totally-ordered set and use the inherited ordering of each subset $S\subseteq\ensuremath{\underline{n}}$. We note that the definition of \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras} only requires the monoidal unit to be terminal. Also, the definition given earlier (\S \ref{para:specialise-DV}) for a symmetric monoidal category with initial unit works equally well when the monoidal structure is not symmetric, as long as one is careful, as in the previous paragraph, to use the natural total ordering on $\ensuremath{\underline{n}}$.
Thus there is a general notion of \emph{height} for functors $\mathcal{C}\to\mathcal{A}$ whenever $\mathcal{C}$ is monoidal and its unit is either initial or terminal, and these notions coincide when the unit is null. \subsection{Categories with finite coproducts; relation to the Taylor tower.}\label{para:finite-coproducts} In \cite{HartlVespa2011Quadraticfunctorspointed} there is a definition of \emph{polynomial} functor $\mathcal{C}\to\mathcal{A}$ of \emph{degree} $d$ in the setting where $\mathcal{C}$ has a null object and finite coproducts, and where $\mathcal{A}$ is either $\mathsf{Ab}$ or $\mathsf{Grp}$, the category of groups. When $\mathcal{A}=\mathsf{Ab}$ this is a special case of the definition of \cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}, since $\mathcal{C}$ has a monoidal structure given by the coproduct. When $\mathcal{A}=\mathsf{Grp}$ it falls outside the scope of the discussion in this section, since $\mathsf{Grp}$ is not an abelian category. It is, however, a \emph{semi-abelian category} (see \cite{JanelidzeMarkiTholen2002Semiabeliancategories,Borceux2004surveysemiabelian}), which suggests that it would be interesting to try to extend the general notion of the \emph{height} of a functor $\mathcal{C}\to\mathcal{A}$ in this section to the case where $\mathcal{A}$ is only semi-abelian (for example the category of groups or the category of non-unital rings). As an aside, we recall that when the monoidal structure on $\mathcal{C}$ is given by the coproduct, one can do more than just define the height of a functor $T \colon \mathcal{C} \to \mathcal{A}$: one can also approximate it by functors of smaller height, and these approximations form its so-called \emph{Taylor tower}. The key property of the coproduct that allows this is that its universal property equips us with ``fold maps'' $c + \cdots + c \to c$. 
In the next paragraph, we recall briefly the construction from \cite{HartlVespa2011Quadraticfunctorspointed}, using the terminology of the present section. Recall that the structure on $\mathcal{C}$ used to define the height of a functor defined on it is a collection of functors $f_{(c_1,\ldots,c_n)} \colon \K{n} \to \mathcal{C}$, one for each $n$-tuple of objects in $\mathcal{C}$ (and for each $n\geq 0$), and the cross-effect $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(Tf_{(c_1,\ldots,c_n)})$ is a subobject of $T(c_1 + \cdots + c_n)$, where $+$ denotes the coproduct in $\mathcal{C}$. Now take $c_1 = \cdots = c_n = c$. The universal property of the coproduct gives us a morphism $c + \cdots + c \to c$, to which we may apply $T$ and then compose with the inclusion of the cross-effect to obtain a morphism $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(Tf_{(c,\ldots,c)}) \to T(c)$. Define $p_{n-1} T(c)$ to be the cokernel of this morphism. This construction is functorial in $c$ and defines a functor $p_{n-1} T$ of height $\leq n-1$, which is to be thought of as the best approximation of $T$ by such a functor. There are also natural transformations $p_0 T \leftarrow p_1 T \leftarrow \cdots \leftarrow p_{n-1} T \leftarrow p_n T \leftarrow \cdots$ and $T \to \mathrm{lim}(p_\bullet T)$, which between them constitute the ``Taylor tower'' of $T$. \subsection{Specialising to the setting of Collinet-Djament-Griffin.}\label{para:specialise-CDG} Let $\ensuremath{\mathsf{Se}^{\mathsf{fin}}}$ denote the category of finite sets and partially-defined functions and let $\Sigma$ denote its subcategory of finite sets and partially-defined injections.
For any intermediate category $\Sigma \leq \Lambda \leq \ensuremath{\mathsf{Se}^{\mathsf{fin}}}$ and any category $\mathcal{C}$ we may define the \emph{wreath product} $\mathcal{C} \wr \Lambda$ to have finite tuples of objects of $\mathcal{C}$ as objects, and a morphism from $(c_1,\ldots,c_m)$ to $(d_1,\ldots,d_n)$ to consist of a morphism $\phi\colon m \to n$ of $\Lambda$ together with morphisms $\alpha_i \colon c_i \to d_{\phi(i)}$ of $\mathcal{C}$ for each $i$ on which $\phi$ is defined. We write this morphism as $(\phi \mathbin{;} \{\alpha_i\}_{i\in \mathrm{dom}(\phi)})$. We may then equip $\mathcal{C} \wr \Lambda$ with collections of functors $\{ \I{n} \to \mathcal{C} \wr \Lambda \}$, as follows. As before, take the indexing set $J_n$ to be the set of $n$-tuples of objects of $\mathcal{C}$. The functor $\I{n} \to \mathcal{C} \wr \Lambda$ associated to the $n$-tuple $(c_1,\ldots,c_n)$ takes the unique object $\bullet$ of $\I{n}$ to $(c_1,\ldots,c_n)$ and a subset $S \subseteq \ensuremath{\underline{n}}$ to the endomorphism $(r_S \mathbin{;} \{\mathrm{id}_{c_i}\}_{i\in S})$ where $r_S(i)=i$ for $i\in S$ and $r_S(i)$ is undefined otherwise. This defines a notion of \emph{height} for any functor $T \colon \mathcal{C} \wr \Lambda \to \mathcal{A}$ into an abelian category $\mathcal{A}$, using the first definition (\S\ref{para:first-def}) above. This exactly recovers the definition of \emph{height} given by Collinet, Djament and Griffin~\cite{CollinetDjamentGriffin2013Stabilitehomologiquepour} in this setting. To see this, we may by Lemma \ref{lem:three-definitions} use the third definition (\S\ref{para:third-def}) above instead. Unravelling this definition, we see that it is precisely the definition of \cite{CollinetDjamentGriffin2013Stabilitehomologiquepour}, given in D{\'e}finitions 2.5 together with the sentence before Proposition 2.11.\footnote{A small difference is that they additionally assume that $T(\varnothing)$ is the zero object of $\mathcal{A}$.
So, for example, a functor $\mathcal{C} \wr \Lambda \to \mathcal{A}$ taking every object to a fixed object $a \neq 0$ of $\mathcal{A}$ and every morphism to $\mathrm{id}_a$ has height zero according to our definition, whereas it does not have any finite height according to the definition of \cite{CollinetDjamentGriffin2013Stabilitehomologiquepour}. The difference is analogous to the difference between linear and affine functions.} \subsection{Semi-functors.}\label{para:semi-functors} The construction in \S\ref{para:first-def} taking a functor $f\colon \I{n}\to\mathcal{A}$ as input and returning an object $\mathrm{cr}(f)$ of $\mathcal{A}$ works also if $f$ is just a \emph{semi-functor}, in other words preserving composition but not necessarily identities.\footnote{In fact, for this construction, there is no need even for it to preserve composition -- but we will want this later.} So if $\mathcal{C}$ is a category equipped, for each $n\geq 0$, with a collection of semi-functors $\{ f_j\colon \I{n}\to\mathcal{C} \}_{j\in J_n}$, then we may define the \emph{height} of a semi-functor $T\colon\mathcal{C}\to\mathcal{A}$ exactly as before: $\mathrm{ht}(T)\leq h$ if and only if for all $n>h$ and all $j\in J_n$, $\mathrm{cr}(T\circ f_j)=0$. The second (\S\ref{para:second-def}) and third (\S\ref{para:third-def}) definitions of height generalise in the same way: if $\mathcal{C}$ is a category equipped with collections of semi-functors $\{\J{n}\to\mathcal{C}\}_{j\in J_n}$ or $\{\K{n} \to\mathcal{C}\}_{j\in J_n}$ then we have a notion of the \emph{height} of any semi-functor $\mathcal{C}\to\mathcal{A}$, defined exactly as in the case of functors. Lemma \ref{lem:two-definitions} is no longer true for semi-functors: the fact that $f$ preserves the identity was used to prove one of the two inclusions for the equality \eqref{eq:ker-im-identity}. However, the rest of the proof goes through and shows that there is an exact sequence $\ensuremath{\,\overline{\! 
\mathrm{cr}\!}\,}(f\circ z) \to \mathrm{cr}(f) \to 0$ in this setting. The proof of Lemma \ref{lem:three-definitions} does not use that $f$ preserves the identity, so we still have that $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(f\circ z) \cong \ensuremath{\,\overline{\! \mathrm{cr}\!}\,}^\prime(f\circ z^\prime)$ when $f$ is a semi-functor. As a result, if $\mathcal{C}$ is a category equipped with collections of semi-functors $\{ \I{n}\to\mathcal{C} \}$ and $T\colon \mathcal{C}\to\mathcal{A}$ is a semi-functor, then: \[ \mathrm{ht}(T) \leq \ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}(T) = \ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}^\prime(T). \] In fact, $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}(T)$ is often infinite when the semi-functors $\{\J{n}\to\mathcal{C}\}$ are not functors (\textit{cf}.\ Proposition \ref{prop:htbar-is-infinite}) so the right notion in this case is $\mathrm{ht}(T)$, which we will use in the next subsection. \subsection{Specialising to partial braid categories.}\label{para:specialise-this-paper} As before, we denote by $\mathcal{I}$ the category with objects $0,1,2,\ldots$ and morphisms $m \to n$ corresponding to subsets of $\{ 1,\ldots,\mathrm{min}(m,n) \}$, with composition given by intersection. We will sometimes think of these morphisms $m \to n$ as partially-defined functions $\{1,\ldots,m\} \to \{1,\ldots,n\}$ that are the identity wherever they are defined. We will usually abbreviate $\{1,\ldots,n\}$ as $\ensuremath{\underline{n}}$. Let $\mathcal{C}$ be a category equipped with a functor $s\colon \mathcal{I} \to \mathcal{C}$. 
For example, $\mathcal{C}$ could be an object of $\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$, in the notation of \S\ref{sss:height} below, which in particular includes the case where $\mathcal{C}$ is the partial braid category $\mathcal{B}(M,X)$ defined in \S\ref{sss:some-functors} (see also \S\ref{para:degree-WRW}). Now equip $\mathcal{C}$ with the following collections of semi-functors $\{ f_m\colon \I{n}\to\mathcal{C} \}_{m\in J_n}$. Take the indexing set $J_n$ to be $\mathbb{N}\cap [n,\infty)$. Then for $m\geq n$ let $f_m$ be the composite semi-functor \[ \I{n} \to \I{m} = \mathrm{End}_{\mathcal{I}}(m) \hookrightarrow \mathcal{I} \xrightarrow{\;s\;} \mathcal{C}, \] where $\I{n}\to \I{m}$ takes a subset $S$ of $\ensuremath{\underline{n}}$ to the subset $S+m-n = \{s+m-n \mid s\in S\}$ of $\ensuremath{\underline{m}}$. This defines a notion of \emph{height} for each semi-functor $T\colon\mathcal{C}\to\mathcal{A}$. Unwinding the definition, it says that $\mathrm{ht}(T)\leq h$ if and only if for all $m\geq n>h$ the following subobject of $Ts(m)$ vanishes: \begin{equation}\label{eq:subobject-of-Tsm} \mathrm{im} \biggl(\, \sum_{S\subseteq\{m-n+1,\ldots,m\}} (-1)^{\lvert S\rvert} Ts(f_{S\cup\{1,\ldots,m-n\}}) \biggr) . \end{equation} Here, for a subset $U\subseteq\ensuremath{\underline{m}}$, we write $f_U\colon \ensuremath{\underline{m}}\to\ensuremath{\underline{m}}$ for the partially-defined function that ``forgets'' $U$, in other words $f_U(i)=i$ if $i\in\ensuremath{\underline{m}}\smallsetminus U$ and is undefined if $i\in U$. \begin{lem}\label{lem:isomorphism-of-subobjects} The subobject \eqref{eq:subobject-of-Tsm} of $Ts(m)$ is equal to the subobject \begin{equation}\label{eq:subobject-of-Tsm-2} \mathrm{im}(Ts(f_{\{1,\ldots,m-n\}})) \cap \bigcap_{i=m-n+1}^m \mathrm{ker}(Ts(f_{\{i\}})).
\end{equation} \end{lem} As a corollary, we deduce that the definition of \emph{height} used in the paper \cite{Palmer2018Twistedhomologicalstability} (see Definition 3.15 of that paper) is recovered when we specialise in this way, taking $\mathcal{C} = \mathcal{B}(M,X)$ equipped with the canonical functor $\mathcal{I} \to \mathcal{B}(M,X)$ (see the paragraph below \eqref{eq:partial-braid-functor-v2}). \begin{coro} The height of a functor $\mathcal{B}(M,X)\to\mathsf{Ab}$ given by \textup{Definition 3.15} of \textup{\cite{Palmer2018Twistedhomologicalstability}} agrees with the definition above, specialised to the case $\mathcal{C}=\mathcal{B}(M,X)$ and $\mathcal{A}=\mathsf{Ab}$. \end{coro} \begin{proof} By Definition 3.15 of \cite{Palmer2018Twistedhomologicalstability}, the height of a functor $T\colon\mathcal{B}(M,X)\to\mathsf{Ab}$ is bounded above by $h$ if and only if for all $m\geq n>h$ we have $T_m^n=0$. Looking at the definition of $T_m^n$ (see Definition 3.10 of \cite{Palmer2018Twistedhomologicalstability}) we see that it is precisely \eqref{eq:subobject-of-Tsm-2}, and therefore \eqref{eq:subobject-of-Tsm}, by Lemma \ref{lem:isomorphism-of-subobjects}. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:isomorphism-of-subobjects}] We think of $\mathcal{A}$ as a concrete category of modules over a ring, by the Freyd-Mitchell embedding theorem, so that we may talk about the elements of its objects. \noindent $\bullet\; \eqref{eq:subobject-of-Tsm-2} \subseteq \eqref{eq:subobject-of-Tsm}:$ Suppose that $x$ is an element of \eqref{eq:subobject-of-Tsm-2}, say $x=Ts(f_{\{1,\ldots,m-n\}})(y)$. If $S$ is a non-empty subset of $\{m-n+1,\ldots,m\}$ then we may pick some $i\in S$ and compute that \begin{align*} Ts(f_{S\cup\{1,\ldots,m-n\}})(y) &= Ts(f_S) \circ Ts(f_{\{i\}}) \circ Ts(f_{\{1,\ldots,m-n\}})(y) \\ &= Ts(f_S) \circ Ts(f_{\{i\}}) (x) = 0. 
\end{align*} Hence we deduce that \[ \sum_{S\subseteq\{m-n+1,\ldots,m\}} (-1)^{\lvert S\rvert} Ts(f_{S\cup\{1,\ldots,m-n\}}) (y) = x, \] so in particular $x\in\eqref{eq:subobject-of-Tsm}$. \noindent $\bullet\; \eqref{eq:subobject-of-Tsm} \subseteq \eqref{eq:subobject-of-Tsm-2}:$ Now suppose that we begin with an element $x$ of the form \[ x = \sum_{S\subseteq\{m-n+1,\ldots,m\}} (-1)^{\lvert S\rvert} Ts(f_{S\cup\{1,\ldots,m-n\}}) (y). \] Then for $m-n+1\leq i\leq m$ we have \begin{align*} Ts(f_{\{i\}})(x) = \sum_{S\subseteq\{m-n+1,\ldots,m\}\smallsetminus\{i\}} &(-1)^{\lvert S\rvert} Ts(f_{\{i\}}) \circ Ts(f_{S\cup\{1,\ldots,m-n\}})(y) \\ &+ (-1)^{\lvert S\rvert +1} Ts(f_{\{i\}}) \circ Ts(f_{S\cup\{i\}\cup\{1,\ldots,m-n\}}) (y) \end{align*} which vanishes since the terms pairwise cancel, so $x\in\mathrm{ker}(Ts(f_{\{i\}}))$. One can similarly show that $Ts(f_{\{1,\ldots,m-n\}})(x)=x$, so $x\in\mathrm{im}(Ts(f_{\{1,\ldots,m-n\}}))$. \end{proof} \subsection{Two notions of height on a cyclic monoidal category.}\label{para:compare-two-heights} There is an overlap between the definition in \S\ref{para:specialise-DV} of the height of a functor $\mathcal{C}\to\mathcal{A}$ when $\mathcal{C}$ is equipped with a monoidal structure\footnote{In \S\ref{para:specialise-DV} it was assumed that the monoidal structure is symmetric, but, as remarked in \S\ref{para:specialise-HPV}, the symmetry is not really necessary for the definition.} with null unit and the definition in \S\ref{para:specialise-this-paper} of the height of a functor $\mathcal{C}\to\mathcal{A}$ when $\mathcal{C}$ is equipped with a functor $\mathcal{I}\to\mathcal{C}$. Recall that $\mathcal{I}$ and $\Sigma$ have objects $0,1,2,\ldots$, morphisms $m \to n$ of $\Sigma$ are partially-defined injections $\ensuremath{\underline{m}} \to \ensuremath{\underline{n}}$ and morphisms of $\mathcal{I}$ are those partially-defined injections that are the identity wherever they are defined. 
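For instance, viewing a morphism $m \to n$ of $\mathcal{I}$ as a subset of $\{1,\ldots,\mathrm{min}(m,n)\}$, composition is given by intersection:
\[
\bigl( \{2,3\} \colon 3 \to 5 \bigr) \circ \bigl( \{1,3\} \colon 4 \to 3 \bigr) \;=\; \{1,3\} \cap \{2,3\} \;=\; \{3\} \colon 4 \to 5 ,
\]
the partially-defined injection $\underline{4} \to \underline{5}$ that is defined, and equal to the identity, only at $3$.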
Let $\mathcal{B}$ have the same objects and take the morphisms $m \to n$ of $\mathcal{B}$ to be partially-defined braided injections from $\ensuremath{\underline{m}}$ to $\ensuremath{\underline{n}}$. In other words, it is the category $\mathcal{B}(\mathbb{R}^2)$ from \S\ref{sss:some-functors}. There is an embedding $\mathcal{I} \subset \mathcal{B}$ and a functor $\mathcal{B} \to \Sigma$ which compose to an embedding $\mathcal{I} \subset \Sigma$. Now let $\mathcal{C}$ be a strict monoidal category and pick an object $x$ of $\mathcal{C}$. There is then a natural functor $s \colon \mathcal{I} \to \mathcal{C}$ that takes $n$ to $x^{\oplus n}$. If $\mathcal{C}$ is braided then $s$ extends to a monoidal functor $\mathcal{B} \to \mathcal{C}$ and if it is symmetric then $s$ extends further to a monoidal functor $\Sigma \to \mathcal{C}$.\footnote{In the notation of \S\ref{sss:some-functors}, $\Sigma$ is $\mathcal{B}(\mathbb{R}^\infty)$ and $\mathcal{B}$ is $\mathcal{B}(\mathbb{R}^2)$, whereas $\mathcal{I}$ is a (non-monoidal) subcategory of $\mathcal{B}(\mathbb{R})$. For any monoidal category $\mathcal{C}$ and object $x$ of $\mathcal{C}$, there is a unique monoidal functor $\mathcal{B}(\mathbb{R}) \to \mathcal{C}$ sending $1$ to $x$; its restriction to $\mathcal{I} \subset \mathcal{B}(\mathbb{R})$ is the ``natural'' functor $s$ to which we are referring.} Now assume that the unit object of $\mathcal{C}$ is null and that $\mathcal{C}$ is \emph{generated} by $x$ in the sense that every object of $\mathcal{C}$ is isomorphic to $x^{\oplus n}$ for some (not necessarily unique) non-negative integer $n$. In this sense we may say that $\mathcal{C}$ is a \emph{cyclic monoidal category}. For example, if the manifold $M$ splits as $\mathbb{R} \times N$ for some manifold $N$, then the category $\mathcal{B}(M,X)$ defined in \S\ref{sss:some-functors} is a cyclic monoidal category generated by the object $1$.
The natural functor $s \colon \mathcal{I} \to \mathcal{B}(M,X)$ taking $1$ to $1$ is exactly the one constructed in the paragraph below \eqref{eq:partial-braid-functor-v2}. If $N$ splits further as $\mathbb{R} \times N^\prime$ then $\mathcal{B}(M,X)$ is braided, and if $N = \mathbb{R}^2 \times N^{\prime\prime}$ then it is symmetric. One may then define the \emph{height} of a functor $T\colon\mathcal{C}\to\mathcal{A}$ either using the monoidal structure of $\mathcal{C}$ as in \S\ref{para:specialise-DV} -- this will be denoted $\mathrm{ht}_\oplus(T)$ -- or using the functor $s \colon \mathcal{I} \to \mathcal{C}$ as in \S\ref{para:specialise-this-paper} -- this will be denoted $\mathrm{ht}_\mathcal{I}(T)$. \begin{prop}\label{prop:compare-two-heights-special-case} For any functor $T \colon \mathcal{C} \to \mathcal{A}$ we have $\mathrm{ht}_\mathcal{I}(T) \leq \mathrm{ht}_\oplus(T)$. If we assume that $\mathcal{C}$ is braided we have an equality $\mathrm{ht}_\mathcal{I}(T) = \mathrm{ht}_\oplus(T)$. \end{prop} We will prove this as a corollary of a slightly more general setup. \begin{defn}\label{def:o-s-partition} Fix $m,n\geq 0$. An \emph{ordered shifted partition} $\lambda$ of $m$ of \emph{length} $n$ -- written $\lambda\vdash m$ and $\lvert\lambda\rvert = n$ -- is an ordered $(n+1)$-tuple $(\lambda_0,\lambda_1,\ldots,\lambda_n)$ of non-negative integers whose sum is $m$. Associated to this there is a semigroup homomorphism $\psi_\lambda \colon \I{n} \to \I{m}$ taking a subset $S$ of $\ensuremath{\underline{n}}$ to the subset $S_\lambda$ of $\ensuremath{\underline{m}}$, where $S_\lambda$ is defined as follows: \[ S_\lambda = \bigcup_{i\in S}\{i\}_\lambda \qquad\qquad \{i\}_\lambda = \{ \lambda_0 + \cdots + \lambda_{i-1} + 1, \ldots, \lambda_0 + \cdots + \lambda_i \}. \] We are viewing $\I{n}$ as a monoid under intersection, with identity element $\ensuremath{\underline{n}}$, so $\psi_\lambda$ is a monoid homomorphism if and only if $\lambda_0 = 0$. For example, the ordered shifted partition $\lambda = (1,2,1) \vdash 4$ of length $2$ has $\{1\}_\lambda = \{2,3\}$ and $\{2\}_\lambda = \{4\}$; here $\psi_\lambda$ is not a monoid homomorphism, since $\psi_\lambda(\underline{2}) = \{2,3,4\} \neq \underline{4}$.
\end{defn} \begin{defn}\label{def:two-types-of-degree} Now let $\mathcal{C}$ be any category and $s \colon \mathcal{I} \to \mathcal{C}$ a functor. We obtain (semi-)functors $f_\lambda\colon \I{n}\to\mathcal{C}$ defined by \[ \I{n} \xrightarrow{\, \psi_\lambda \,} \I{m} = \mathrm{End}_\mathcal{I}(m) \hookrightarrow \mathcal{I} \xrightarrow{\; s\;} \mathcal{C}. \] For any functor $T \colon \mathcal{C} \to \mathcal{A}$, we define $\mathrm{ht}_\mathcal{I}(T)$ and $\mathrm{ht}_\oplus(T)$ by the condition that (for $\square = \mathcal{I} \text{ or } {\oplus}$) $\mathrm{ht}_\square(T)\leq h$ if and only if for each ordered shifted partition $\lambda\vdash m$ of length $\lvert\lambda\rvert >h$, \begin{itemizeb} \item[$(\square = \oplus)$] with $\lambda_0=0$, \item[$(\square = \mathcal{I})$] with $\lambda_1 = \cdots = \lambda_n = 1$, \end{itemizeb} the cross-effect $\mathrm{cr}(Tf_\lambda)$ vanishes. We similarly define $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\square(T)$ using $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(Tf_\lambda z)$ in place of $\mathrm{cr}(Tf_\lambda)$. In other words, when $\square = \mathcal{I}$ we define the height using (for each $n\geq 0$) the collection of semi-functors $\{f_\lambda \colon \I{n} \to \mathcal{C} \mid \lambda \vdash m, \lvert\lambda\rvert = n, \lambda_1 = \cdots = \lambda_n = 1 \}$ and when $\square = \oplus$ we define the height using the collection of functors $\{f_\lambda \colon \I{n} \to \mathcal{C} \mid \lambda \vdash m, \lvert\lambda\rvert = n, \lambda_0 = 0 \}$. \end{defn} In the previous setup, with a cyclic monoidal category $\mathcal{C}$ generated by the object $x$, we had a natural functor $s \colon \mathcal{I} \to \mathcal{C}$ taking $1$ to $x$. For a functor $T \colon \mathcal{C} \to \mathcal{A}$ we defined $\mathrm{ht}_\mathcal{I}(T)$ to be the height of $T$ as defined in \S\ref{para:specialise-this-paper}, using the structure given by the functor $s$. 
This is exactly the same as the definition of $\mathrm{ht}_\mathcal{I}(T)$ given in Definition \ref{def:two-types-of-degree}. Moreover, we defined $\mathrm{ht}_\oplus(T)$ to be the height of $T$ as defined in \S\ref{para:specialise-DV}, using the monoidal structure of $\mathcal{C}$. Unravelling the definitions, one can see that this is exactly the same as the definition of $\mathrm{ht}_\oplus(T)$ given in Definition \ref{def:two-types-of-degree}, using just the functor $s \colon \mathcal{I} \to \mathcal{C}$. (For this fact, it is critical that $\mathcal{C}$ is generated by the object $x$.) Thus Definition \ref{def:two-types-of-degree} for a category $\mathcal{C}$ equipped with a functor $s \colon \mathcal{I} \to \mathcal{C}$ generalises the setting described at the beginning of this subsection, where the functor $s$ arose from the structure of $\mathcal{C}$ as a cyclic monoidal category. For the rest of this subsection, unless otherwise stated, we assume that we are in the general setting of a category $\mathcal{C}$ equipped with a functor $s \colon \mathcal{I} \to \mathcal{C}$, and we use the definitions of height from Definition \ref{def:two-types-of-degree}. \begin{rmk}\label{rmk:add-assumption-to-defn} It is not hard to see that if $\lambda_i=0$ for some $i\geq 1$ then $\mathrm{cr}(Tf_\lambda)=0$. Thus, if $\lambda_0=0$ too (so that $f_\lambda$ is a functor and Lemma \ref{lem:two-definitions} applies), we have $\ensuremath{\,\overline{\! \mathrm{cr}\!}\,}(Tf_\lambda z)=0$. So in Definition \ref{def:two-types-of-degree}, when $\square = \oplus$, we may assume that $\lambda_1,\ldots,\lambda_n \geq 1$. \end{rmk} \begin{rmk} When $\square=\oplus$ the $f_\lambda$ involved in the definition are all \emph{functors}, so $\mathrm{ht}_\oplus(T) = \ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\oplus(T)$, by the discussion following Lemma \ref{lem:two-definitions}. 
When $\square=\mathcal{I}$ we only know that $\mathrm{ht}_\mathcal{I}(T) \leq \ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\mathcal{I}(T)$, as discussed in \S\ref{para:semi-functors}. We first show that $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\mathcal{I}(T)$ is in fact almost always infinite. \end{rmk} \begin{prop}\label{prop:htbar-is-infinite} Let $T\colon\mathcal{C}\to\mathcal{A}$ be any functor. Then $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\mathcal{I}(T)>0$ implies that $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\mathcal{I}(T)=\infty$. \end{prop} So it remains to compare $\mathrm{ht}_\mathcal{I}(T)$ and $\mathrm{ht}_\oplus(T)$, which we do after the next definition. \begin{defn}\label{def:admits-conjugations} We say that a functor $T\colon\mathcal{C}\to\mathcal{A}$ \emph{admits conjugations} if the composite functor $Ts\colon\mathcal{I}\to\mathcal{A}$ extends to some category $\mathcal{S}\supset\mathcal{I}$ and for each $n\geq 0$ and $R,S\subseteq\ensuremath{\underline{n}}$ with $\lvert R\rvert = \lvert S\rvert$ there exists an automorphism $\phi \in \mathrm{Aut}_\mathcal{S}(n)$ such that $\phi r_R \phi^{-1} = r_S$, where $r_R\in\mathrm{End}_\mathcal{I}(n)$ denotes the endomorphism that restricts to $R$, i.e., is the identity on $R$ and undefined on $\ensuremath{\underline{n}}\smallsetminus R$. \end{defn} \begin{eg}\label{eg:admitting-conjugations-braiding} If $\mathcal{C}$ is a strict \emph{braided} monoidal category with null unit object, generated by the object $x$, then the natural functor $s \colon \mathcal{I} \to \mathcal{C}$ extends to $\mathcal{B} \supset \mathcal{I}$, as explained at the beginning of this subsection. 
In this case \emph{every} functor $\mathcal{C} \to \mathcal{A}$ admits conjugations: we may take $\mathcal{S}=\mathcal{B}$ and for $\phi$ choose any braid connecting the points $R$ with the points $S$ and the points $\ensuremath{\underline{n}}\smallsetminus R$ with the points $\ensuremath{\underline{n}}\smallsetminus S$. In particular, this applies to $\mathcal{C} = \mathcal{B}(M,X)$ as defined in \S\ref{sss:some-functors} when $M$ is of the form $\mathbb{R}^2 \times N$. \end{eg} \begin{eg}\label{eg:admitting-conjugations-BMX} In fact, for any $M$ (of dimension at least two), if we take $\mathcal{C} = \mathcal{B}(M,X)$ with the natural functor $s \colon \mathcal{I} \to \mathcal{B}(M,X)$ (\textit{cf}.\ \eqref{eq:partial-braid-functor-v2}), then every functor $\mathcal{C} \to \mathcal{A}$ admits conjugations: we may take $\mathcal{S}$ to be $\mathcal{B}(M,X)$ itself and for $\phi$ choose any braid on $M$ that connects the points $\{a_i \mid i\in R\}$ with the points $\{a_i \mid i\in S\}$ and the points $\{a_i \mid i\in\ensuremath{\underline{n}}\smallsetminus R\}$ with the points $\{a_i \mid i\in\ensuremath{\underline{n}}\smallsetminus S\}$. \end{eg} \begin{prop}\label{prop:compare-two-heights} For any functor $T\colon\mathcal{C}\to\mathcal{A}$ we have $\mathrm{ht}_\mathcal{I}(T) \leq \mathrm{ht}_\oplus(T)$. If $T$ admits conjugations then $\mathrm{ht}_\mathcal{I}(T) = \mathrm{ht}_\oplus(T)$. However, in general the inequality may be strict: for any $h\in\{2,\ldots,\infty\}$ there exists a functor $T_h\colon \mathcal{I}\to\mathsf{Ab}$ such that $\mathrm{ht}_\mathcal{I}(T_h)=1$ but $\mathrm{ht}_\oplus(T_h)=h$. \end{prop} \begin{proof}[Proof of Proposition \ref{prop:compare-two-heights-special-case}] This now follows from Proposition \ref{prop:compare-two-heights} and Example \ref{eg:admitting-conjugations-braiding}. 
\end{proof} \begin{rmk}\label{rmk:compare-heights-for-BMX} Proposition \ref{prop:compare-two-heights-special-case} applied to the cyclic monoidal category $\mathcal{C} = \mathcal{B}(\mathbb{R} \times N,X)$ tells us that $\mathrm{ht}_\mathcal{I}(-) \leq \mathrm{ht}_\oplus(-)$ with equality if $N = \mathbb{R} \times N^\prime$. But by Proposition \ref{prop:compare-two-heights} and Example \ref{eg:admitting-conjugations-BMX} we know that in fact $\mathrm{ht}_\mathcal{I}(-) = \mathrm{ht}_\oplus(-)$ for $\mathcal{C} = \mathcal{B}(M,X)$ for \emph{any} $M$ (of dimension at least two). This suggests that it should be possible to generalise Proposition \ref{prop:compare-two-heights-special-case} to a setting where $\mathcal{C}$ is a left module over a cyclic monoidal category, analogously to Proposition \ref{p:two-degrees-agree-2} for degree. \end{rmk} \begin{rmk}[\textit{Summary.}]\label{rmk:summary} For a functor $T \colon \mathcal{C} \to \mathcal{A}$ we have the following square of equalities: \begin{equation}\label{eq:square-of-equalities} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node at (0,12) {$\mathrm{deg}^{x}(T)$}; \node at (24,12) {$\mathrm{deg}(T)$}; \node at (0,0) {$\mathrm{ht}_{\mathcal{I}}(T)$}; \node at (24,0) {$\mathrm{ht}_{\oplus}(T)$}; \node at (12,12) {$=$}; \node at (12,14.5) [font=\footnotesize] {(a)}; \node at (12,0) {$=$}; \node at (12,-2.5) [font=\footnotesize] {(b)}; \node at (0,6) {\rotatebox{90}{$=$}}; \node at (-3,6) [font=\footnotesize] {(c)}; \node at (24,6) {\rotatebox{90}{$=$}}; \node at (27,6) [font=\footnotesize] {(d)}; \end{tikzpicture} \end{split} \end{equation} (using notation of \S\ref{para:partial-braid-categories} in the top row), where \begin{itemizeb} \item[(a)] holds when $\mathcal{C}$ is a left-module over a braided monoidal category with null unit and generating object $x$ (Proposition \ref{p:two-degrees-agree-2}); \item[(b)] holds when $\mathcal{C}$ is a braided monoidal category with null unit and generating 
object $x$ (Proposition \ref{prop:compare-two-heights-special-case}) or $\mathcal{C} = \mathcal{B}(M,X)$ (Remark \ref{rmk:compare-heights-for-BMX}); \item[(c)] holds when $\mathcal{C} = \mathcal{B}(M,X)$, by Lemma 3.16 of \cite{Palmer2018Twistedhomologicalstability}; \item[(d)] holds when $\mathcal{C} = \mathcal{B}(\mathbb{R}^3 \times N,X)$, by Proposition 2.3 of \cite{DjamentVespa2019FoncteursFaiblementPolynomiaux} --- more generally, they prove this for $\mathcal{C}$ a symmetric monoidal category with initial unit. \end{itemizeb} Putting these together, we see that in fact (d) holds whenever $\mathcal{C} = \mathcal{B}(M,X)$ for \emph{any} $M$, via (a)--(c). Also, (c) holds whenever $\mathcal{C}$ is a symmetric monoidal category with null unit and generating object $x$, via (a), (b) and (d). This suggests that (c) should generalise to $\mathcal{C}$ any left-module over a braided monoidal category with null unit and generating object $x$ and (d) should generalise to $\mathcal{C}$ any left-module over a braided monoidal category with initial unit. This would imply that (b) also generalises to $\mathcal{C}$ any left-module over a braided monoidal category with null unit and generating object $x$ (\textit{cf}.\ Remark \ref{rmk:compare-heights-for-BMX}). \end{rmk} In the remainder of this subsection we prove Propositions \ref{prop:htbar-is-infinite} and \ref{prop:compare-two-heights}. \begin{proof}[Proof of Proposition \ref{prop:htbar-is-infinite}] Let us abbreviate $Ts(n)$ to $T_n$ and for a subset $S\subseteq\{1,\ldots,\mathrm{min}(k,l)\}$ let us write simply $r_S\colon T_k \to T_l$ instead of $Ts(r_S)$. (Recall that $r_S\colon \underline{k} \to \underline{l}$ is the identity on $S$ and undefined elsewhere.) Now suppose that $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\mathcal{I}(T)<\infty$.
In particular this implies that, for some $h\geq 0$ and any $k\geq 0$, \[ T_{k+h} = \sum_{S\subsetneq\underline{h}} \mathrm{im}(r_{S+k}). \] But each $\mathrm{im}(r_{S+k})$ is contained in $\mathrm{im}(r_{\{k+1,\ldots,k+h\}})$, so $r_{\{k+1,\ldots,k+h\}} \colon T_{k+h} \to T_{k+h}$ is surjective. Also note that $r_{\underline{k}} \colon T_{k+h} \to T_k$ is surjective, since it has a right-inverse. The commutative square \begin{equation} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tm) at (0,15) {$T_{k+h}$}; \node (tr) at (40,15) {$T_k$}; \node (bm) at (0,0) {$T_{k+h}$}; \node (br) at (40,0) {$T_k$}; \draw[->>] (tm) to node[above,font=\small]{$r_{\underline{k}}$} (tr); \draw[->>] (bm) to node[left,font=\small]{$r_{\{k+1,\ldots,k+h\}}$} (tm); \draw[->] (bm) to node[above,font=\small]{$r_\varnothing$} (br); \draw[->] (br) to node[right,font=\small]{$r_\varnothing$} (tr); \draw[draw=none,use as bounding box](-20,0) rectangle (60,18); \end{tikzpicture} \end{split} \end{equation} therefore tells us that $r_\varnothing \colon T_k \to T_k$ is also surjective. So for any $m\geq n>0$ we have \[ T_m = \mathrm{im}(r_\varnothing) = \sum_{S\subsetneq\ensuremath{\underline{n}}} \mathrm{im}(r_{S+m-n}), \] and so $\ensuremath{\,\overline{\raisebox{0pt}[1.3ex][0pt]{\ensuremath{\! \mathrm{ht}\!}}}\,}_\mathcal{I}(T)\leq 0$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:compare-two-heights}] We first prove the inequality $\mathrm{ht}_\mathcal{I}(T)\leq\mathrm{ht}_\oplus(T)$, then give the promised example of $T$ where it is strict, and then finally show that the additional assumption that $T$ admits conjugations rules out this possibility, i.e., that $\mathrm{ht}_\mathcal{I}(T)=\mathrm{ht}_\oplus(T)$ for such $T$. \vspace{1ex} \noindent (a) \textit{Proof of the inequality.} We use the notation of the previous proof, abbreviating $Ts(r_S)$ to $r_S$. Let $m\geq n>\mathrm{ht}_\oplus(T)$. 
We need to show that \begin{equation}\label{eq:proof-of-inequality} \sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} r_{(\ensuremath{\underline{n}} \smallsetminus S)+m-n} \end{equation} is the zero morphism. Let $\lambda$ be the ordered shifted partition of length $n$ with $\lambda_0=0$, $\lambda_1=m-n+1$ and $\lambda_i=1$ otherwise. Then \eqref{eq:proof-of-inequality} is equal, up to the overall sign $(-1)^n$ coming from the reindexing $S \mapsto \ensuremath{\underline{n}}\smallsetminus S$, to \[ \sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} r_{\{m-n+1,\ldots,m\}} \circ r_{S_\lambda} \;=\; r_{\{m-n+1,\ldots,m\}} \circ \biggl( \sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} r_{S_\lambda} \biggr) . \] Since $\lvert \lambda \rvert = n > \mathrm{ht}_\oplus(T)$, the morphism in brackets on the right-hand side is zero, and so \eqref{eq:proof-of-inequality} is zero, as required.\hfill $\diamond$ \vspace{1ex} \noindent (b) \textit{Example of strict inequality.} For this example we will take $\mathcal{C}$ to be $\mathcal{I}$ itself, with $s=\mathrm{id}\colon\mathcal{I}\to\mathcal{I}$. Fix $h\in\{2,3,4,\ldots,\infty\}$ and define a functor $T_h\colon \mathcal{I}\to\mathsf{Ab}$ as follows. The object $n$ is taken to the free abelian group \[ \mathbb{Z}\{ S\subseteq\ensuremath{\underline{n}} \mid \lvert S\rvert \leq h \text{ and } S \text{ has no consecutive elements} \} \] and for $R\subseteq\{1,\ldots,\mathrm{min}(m,n)\}$ the morphism $r_R\colon \ensuremath{\underline{m}} \to \ensuremath{\underline{n}}$ is taken to the homomorphism $T_h(m)\to T_h(n)$ that sends the basis element $S\subseteq\ensuremath{\underline{m}}$ to the basis element $r_R(S) = S\cap R \subseteq \ensuremath{\underline{n}}$. This will turn out to have $\mathrm{ht}_\mathcal{I}(T_h)=1<h=\mathrm{ht}_\oplus(T_h)$.
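To see this in the smallest case, take $h=2$, $\lambda = (0,2,2) \vdash 4$ (so that $\{1\}_\lambda = \{1,2\}$ and $\{2\}_\lambda = \{3,4\}$) and the basis element $R = \{2,4\}$ of $T_2(4)$: the alternating sum $\sum_{S\subseteq\underline{2}} (-1)^{\lvert S\rvert} (R \smallsetminus S_\lambda)$ is
\[
\{2,4\} \,-\, \{4\} \,-\, \{2\} \,+\, \varnothing ,
\]
a combination of four distinct basis elements of $T_2(4)$, hence non-zero, witnessing $\mathrm{ht}_\oplus(T_2) \geq 2$. By contrast, for $m=n=2$ and $R=\{1\}$ the corresponding singleton-piece sum $\sum_{S\subseteq\{1,2\}} (-1)^{\lvert S\rvert} (R \smallsetminus S)$ is
\[
\{1\} \,-\, \varnothing \,-\, \{1\} \,+\, \varnothing \;=\; 0 ,
\]
the cancellation being forced by the fact that no basis element contains both $1$ and $2$.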
The idea is that both $\mathrm{ht}_\mathcal{I}(-)$ and $\mathrm{ht}_\oplus(-)$ examine a functor $T$ using certain partitions. The former uses only partitions in which each piece has size $1$, so it is sensitive to ``interference'' from the condition above that subsets have \emph{no consecutive elements}, and it therefore measures the ``wrong'' height. The latter uses partitions with pieces of arbitrary size, and so is insensitive to such interference. To show that $\mathrm{ht}_\oplus(T_h)=h$ we take $\lambda\vdash m$ with $\lambda_0=0$ and $\lvert\lambda\rvert =n$ and a basis element $R\subseteq\ensuremath{\underline{m}}$ for $T_h(m)$, and consider the element \begin{equation}\label{eq:element-of-Thm} \sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} (R\smallsetminus S_\lambda) \end{equation} of $T_h(m)$. We need to show that it is always zero when $n>h$, whereas when $n=h$ there exist $\lambda$ and $R$ such that it is non-zero. If $n>h$ there must be some $i\in\{1,\ldots,n\}$ such that $R\cap\{i\}_\lambda = \varnothing$, since the pieces $\{1\}_\lambda,\ldots,\{n\}_\lambda$ are pairwise disjoint and $\lvert R\rvert \leq h < n$. Then we may write \eqref{eq:element-of-Thm} as the sum over $S\subseteq\ensuremath{\underline{n}}\smallsetminus\{i\}$ of $(-1)^{\lvert S\rvert} (R\smallsetminus S_\lambda) + (-1)^{\lvert S\rvert +1} ((R\smallsetminus \{i\}_\lambda)\smallsetminus S_\lambda)$, which cancels to zero since $R\smallsetminus\{i\}_\lambda = R$. When $n=h$ we may take $\lambda$ with $\lambda_0=0$ and $\lambda_i=2$ for $i\geq 1$ (so $m=2n$) and $R=\{2,4,\ldots,2n\}$; the summands $R\smallsetminus S_\lambda = \{2i \mid i\in\ensuremath{\underline{n}}\smallsetminus S\}$ are then pairwise distinct basis elements, so \eqref{eq:element-of-Thm} is non-zero. This completes the proof that $\mathrm{ht}_\oplus(T_h)=h$. Now we show that $\mathrm{ht}_\mathcal{I}(T_h)=1$. To begin with, note that to have $\mathrm{ht}_\mathcal{I}(T_h)\leq 0$ would require that $T_h(r_\varnothing)=\mathrm{id}$, which is not the case, so instead we have $\mathrm{ht}_\mathcal{I}(T_h) \geq 1$.
To see that it is exactly equal to $1$ we need to show that, for all $m\geq n\geq 2$ and any basis element $R$ of $T_h(m)$, the element \[ \sum_{S\subseteq\{m-n+1,\ldots,m\}} (-1)^{\lvert S\rvert} (R\smallsetminus S) \] is zero. The trick is to rewrite this element as the sum over subsets $S\subseteq\{m-n+1,\ldots,m-2\}$ of \[ (-1)^{\lvert S\rvert} \Bigl( Q + (Q \smallsetminus \{m-1,m\}) - (Q \smallsetminus \{m-1\}) - (Q \smallsetminus \{m\}) \Bigr) \] where we have written $Q = R\smallsetminus S$. Since $R$ (and therefore also $Q$) cannot contain both $m-1$ and $m$ (these would be consecutive elements), the four terms above cancel to zero. This completes the proof that $\mathrm{ht}_\mathcal{I}(T_h)=1$.\hfill $\diamond$ \vspace{1ex} \noindent (c) \textit{Equality when $T$ admits conjugations.} To show this we will use the following fact, which is an immediate generalisation of Lemma \ref{lem:isomorphism-of-subobjects}. \begin{fact}\label{fact:equality-of-two-subobjects} If $\lambda\vdash m$ is an ordered shifted partition of length $n$ and $T\colon\mathcal{C}\to\mathcal{A}$ is a functor, then \[ \mathrm{im} \biggl( \sum_{S\subseteq\ensuremath{\underline{n}}} (-1)^{\lvert S\rvert} Ts(f_{\{1,\ldots,\lambda_0\} \cup S_\lambda}) \biggr) \;=\; \mathrm{im}(Ts(f_{\{1,\ldots,\lambda_0\}})) \cap \bigcap_{i=1}^n \mathrm{ker}(Ts(f_{\{i\}_\lambda})). \] \end{fact} Let $T\colon\mathcal{C}\to\mathcal{A}$ be a functor and assume that $T$ admits conjugations. Suppose that $\mathrm{ht}_\mathcal{I}(T)\leq h$. Our aim is to show that $\mathrm{ht}_\oplus(T)\leq h$. In detail, this means the following. Fix $\lambda\vdash m$ with $\lambda_0=0$ and $\lambda_i\geq 1$ for $i\geq 1$ (\textit{cf}.\ Remark \ref{rmk:add-assumption-to-defn}) and $\lvert\lambda\rvert =n>h$. In the light of Fact \ref{fact:equality-of-two-subobjects}, our aim is to show that \begin{equation}\label{eq:compare-two-heights} \bigcap_{i=1}^n \mathrm{ker}(Ts(f_{\{i\}_\lambda})) \end{equation} is zero. (Since $\lambda_0=0$, the partially-defined function $f_{\{1,\ldots,\lambda_0\}}=f_\varnothing$ is the identity, so the factor $\mathrm{im}(Ts(f_{\{1,\ldots,\lambda_0\}}))$ in Fact \ref{fact:equality-of-two-subobjects} is all of $Ts(m)$ and only the intersection of kernels remains.)
Since $\mathrm{ht}_\mathcal{I}(T)\leq h$, we know (using Lemma \ref{lem:isomorphism-of-subobjects} and the fact that $T$ admits conjugations) that for any $S\subseteq\ensuremath{\underline{m}}$ of size $\lvert S\rvert >h$, \[ \mathrm{im}(Ts(f_{\ensuremath{\underline{m}}\smallsetminus S})) \cap \bigcap_{i\in S} \mathrm{ker}(Ts(f_{\{i\}})) = 0. \] We claim that the following equality always holds: \begin{equation}\label{eq:claim-equality} \bigcap_{i=1}^n \mathrm{ker}(Ts(f_{\{i\}_\lambda})) \;=\; \bigoplus_{S} \mathrm{im}(Ts(f_{\ensuremath{\underline{m}}\smallsetminus S})) \cap \bigcap_{i\in S} \mathrm{ker}(Ts(f_{\{i\}})), \end{equation} where the direct sum on the right-hand side is taken over all subsets $S\subseteq\ensuremath{\underline{m}}$ such that for each $i\in\{1,\ldots,n\}$ we have $S\cap\{i\}_\lambda \neq \varnothing$. Any such subset must have size $\lvert S\rvert \geq \lvert \lambda \rvert =n>h$, so -- in our situation -- its contribution to the sum vanishes, and therefore \eqref{eq:compare-two-heights} is zero, as required. So it just remains to prove the equality \eqref{eq:claim-equality}. \noindent $\bullet\; (\supseteq) :$ Let $S\subseteq\ensuremath{\underline{m}}$ satisfy the condition above and suppose that $Ts(f_{\{i\}})(x)=0$ for all $i\in S$. For each $j\in\{1,\ldots,n\}$ we may choose $i\in S\cap \{j\}_\lambda$ and compute that \[ Ts(f_{\{j\}_\lambda})(x) = Ts(f_{\{j\}_\lambda}) \circ Ts(f_{\{i\}})(x) = 0. \] \noindent $\bullet\; (\subseteq) :$ Since the idempotents $Ts(f_{\{i\}})$ on $Ts(m)$ pairwise commute there is a decomposition \begin{align} Ts(m) \;&=\; \bigoplus_{S\subseteq\ensuremath{\underline{m}}} \bigcap_{i\in S} \mathrm{ker}(Ts(f_{\{i\}})) \cap \bigcap_{i\in\ensuremath{\underline{m}}\smallsetminus S} \mathrm{im}(Ts(f_{\{i\}})) \nonumber \\ &=\; \bigoplus_{S\subseteq\ensuremath{\underline{m}}} \bigcap_{i\in S} \mathrm{ker}(Ts(f_{\{i\}})) \cap \mathrm{im}(Ts(f_{\ensuremath{\underline{m}}\smallsetminus S})). 
\label{eq:decomposition} \end{align} Now suppose that $x\in Ts(m)$ and $Ts(f_{\{i\}_\lambda})(x)=0$ for each $i\in\{1,\ldots,n\}$. We may write \[ x=\sum_{S\subseteq\ensuremath{\underline{m}}} x_S \] corresponding to the decomposition \eqref{eq:decomposition}. Note that the endomorphism $Ts(f_{\{i\}_\lambda})$ preserves the decomposition \eqref{eq:decomposition}. Since it is a decomposition as a \emph{direct} sum, we must have $Ts(f_{\{i\}_\lambda})(x_S)=0$ for each $S\subseteq\ensuremath{\underline{m}}$. Now, to see that $x$ is contained in the right-hand side of \eqref{eq:claim-equality} we just need to show that if there exists $i\in\{1,\ldots,n\}$ such that $S\cap\{i\}_\lambda = \varnothing$ then $x_S=0$. But we have $x_S\in\mathrm{im}(Ts(f_{\ensuremath{\underline{m}}\smallsetminus S}))$ and $\{i\}_\lambda \subseteq \ensuremath{\underline{m}}\smallsetminus S$, so $x_S\in\mathrm{im}(Ts(f_{\{i\}_\lambda}))$. Since $Ts(f_{\{i\}_\lambda})$ is idempotent, this means that \renewcommand{\qedsymbol}{\ensuremath{\diamond\,\raisebox{-0.2mm}{\tikz[x=1mm,y=1mm]{\draw[line cap=rect] (0,0)--(2,0)--(2,2); \draw[very thick, line cap=butt] (2,2)--(0,2)--(0,0);}}}} \[ x_S = Ts(f_{\{i\}_\lambda})(x_S) = 0. \qedhere \] \end{proof} \subsection{The injective braid category.}\label{para:full-braids} Recall from \S\ref{para:degree-WRW} that the \emph{injective braid category} $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$ is the subcategory of $\mathcal{B}(M,X)$ having the same objects (the non-negative integers) and where a morphism of $\mathcal{B}(M,X)$ -- i.e.\ a (labelled) braid in $M \times [0,1]$ from some subset of $\{a_1,\ldots,a_m\} \times \{0\}$ to some subset of $\{a_1,\ldots,a_n\} \times \{1\}$ -- lies in this subcategory if and only if it has precisely $m$ strands. 
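Forgetting the braiding and the labels, a morphism of $\mathcal{B}(M,X)$ determines a partial injection between its endpoint sets, and the condition of having precisely $m$ strands says that this partial injection is totally defined. A toy enumeration (Python; the encoding of partial injections as dicts is purely illustrative) contrasts the two categories at this combinatorial level:

```python
from itertools import combinations, permutations

def partial_injections(m, n):
    """All partial injections from {1,...,m} to {1,...,n}, as dicts."""
    result = []
    for k in range(min(m, n) + 1):
        for dom in combinations(range(1, m + 1), k):
            for img in combinations(range(1, n + 1), k):
                for perm in permutations(img):
                    result.append(dict(zip(dom, perm)))
    return result

# Morphisms 2 -> 2: the underlying data of a morphism of B(M,X) may be any
# partial injection, while B_f(M,X) (precisely m strands) allows only the
# totally-defined ones.
all_underlying = partial_injections(2, 2)
total_only = [f for f in all_underlying if len(f) == 2]
assert len(all_underlying) == 7  # 1 empty + 4 singletons + 2 bijections
assert len(total_only) == 2      # the two bijections of {1, 2}
```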
The equivalence between the different notions of height discussed in this section suggests how one may extend the notion of height for functors $T \colon \mathcal{B}(M,X) \to \mathcal{A}$ to a notion of height for functors $T \colon \ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X) \to \mathcal{A}$. If we take our definition of the height of a functor defined on $\mathcal{B}(M,X)$, which uses the first definition (\S\ref{para:first-def}) of height (see the discussion in \S\ref{para:specialise-this-paper} above), and reinterpret it using instead the second definition (\S\ref{para:second-def}) of height, it may be rewritten in such a way that it involves only morphisms from the subcategory $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$. Thus, the height of $T$ depends only on its restriction to $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$, and indeed one may use this observation to directly define the height of a functor $T \colon \ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X) \to \mathcal{A}$. Explicitly, the definition unravels to the following: $\mathrm{height}(T)\leq h$ if and only if for all $m\geq n>h$ we have \[ \sum_S \mathrm{coker} \bigl( T(b(\phi_{m,S})) \colon T(\underline{s}) \longrightarrow T(\ensuremath{\underline{m}}) \bigr) =0, \] where the sum is taken over all proper subsets $S$ of $\{ m-n+1, \ldots, m \}$ and $\underline{s}$ denotes $\{ 1,\ldots,\lvert S\rvert \}$. The notation $\phi_{m,S}$ means the unique order-preserving injection $\underline{s} \to \ensuremath{\underline{m}}$ whose image is equal to $S \subseteq \ensuremath{\underline{m}}$. In general, given any order-preserving injection $\phi \colon \underline{s} \to \ensuremath{\underline{m}}$, there is a canonical braid $b(\phi)$ in $M \times [0,1]$ from $\{a_1,\ldots,a_s\} \times \{0\}$ to $\{a_{\phi(1)},\ldots,a_{\phi(s)}\} \times \{1\}$ that realises $\phi$, specified as follows. 
Recall from \S\ref{sss:some-categories} that the manifold $M$ comes equipped with a collar neighbourhood $c \colon \partial M \times [0,\infty] \hookrightarrow M$ and a basepoint $p \in \partial M$. Let $L$ be the embedded arc $c(\{p\} \times [1,\infty])$ in the interior of $M$. Then $b(\phi)$ is uniquely determined by specifying its endpoints, as above, and that it must be contained in the embedded square $L \times [0,1]$ in $M \times [0,1]$. Labelling each strand of $b(\phi)$ by the constant path at the basepoint $x_0$ of $X$ makes it into a morphism $\underline{s} \to \ensuremath{\underline{m}}$ of $\ensuremath{\mathcal{B}_{\mathsf{f}}}(M,X)$. \subsection{Possible extensions.}\label{para:generalisations} We finish this section by suggesting potential extensions of the general definitions of \emph{height} given in \S\S\ref{para:first-def}--\ref{para:third-def}. One generalisation, which we have already discussed in detail, is to consider categories $\mathcal{C}$ equipped with collections of \emph{semi-}functors $\mathcal{J}_n \to \mathcal{C}$, i.e., ``functors'' that are not required to preserve identities (the notation $\mathcal{J}_n$ denotes any one of $\I{n}$, $\J{n}$ or $\K{n}$). Another potential generalisation, which was mentioned in \S\ref{para:finite-coproducts}, is to consider twisted coefficient systems (i.e., functors or semi-functors) $T\colon \mathcal{C} \to \mathcal{A}$ whose target is a \emph{semi-}abelian category, such as the category $\mathsf{Grp}$ of groups. This is motivated by the work of Hartl, Pirashvili and Vespa~\cite{HartlPirashviliVespa2015Polynomialfunctorsalgebras}, who study functors of the form $\mathcal{C} \to \mathsf{Grp}$ when $\mathcal{C}$ admits finite coproducts and a null object. More fundamentally, one could weaken the structure on $\mathcal{C}$ by replacing Boolean algebras with posets possessing less structure.
If we work in the setting of \S\ref{para:second-def}, then the structure on $\mathcal{C}$ is given by collections of functors $\J{n} \to \mathcal{C}$, where $\J{n}$ is the poset of subsets of $\{1,\ldots,n\}$ under inclusion, which is a Boolean algebra. It would be interesting to set up a theory of polynomial functors $\mathcal{C} \to \mathcal{A}$ when $\mathcal{C}$ is instead equipped with collections of functors $L(n) \to \mathcal{C}$, where the $L(n)$ are lattices with less structure than a Boolean algebra, for example orthocomplemented lattices (in which $\vee$ and $\wedge$ do not necessarily distribute over each other). The lattice of closed subspaces of a Hilbert space is an orthocomplemented lattice, for example, so a natural example to consider would be to take $L(n)$ as the lattice of subspaces of the Hilbert space $\mathbb{C}^n$. \section{Partial braid categories}\label{sec:functorial-configuration-spaces} The paper \cite{Palmer2018Twistedhomologicalstability} is concerned with proving twisted homological stability for the labelled configuration spaces $C_n(M,X)$, for $M$ a connected, open manifold and $X$ a path-connected space. Its twisted coefficient systems are indexed by certain \emph{partial braid categories} $\mathcal{B}(M,X)$ associated to these data; in that paper they are defined in a slightly ad hoc way, and the \emph{height} and \emph{degree} of a twisted coefficient system on $\mathcal{B}(M,X)$ are defined in this specific context. In this section, we explain a natural functorial framework into which these constructions fit. \begin{rmk} The notions of degree and height used in this section agree with those discussed in the previous two sections (whenever both are defined), but the domains of definition are slightly different.
The degree in this section is simply defined as a special case of the degree of \S\ref{sec:inductive-degree}, assuming that the source category is an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ rather than of the larger category $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$. The height in this section is defined when the source category is an object of $\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$. Such an object is in particular a category $\mathcal{C}$ equipped with a functor $\mathcal{I} \to \mathcal{C}$, where $\mathcal{I}$ is a certain category (\textit{cf}.\ \S\ref{para:first-def}). The general definition of height given in \S\ref{sec:cross-effects} specialises to this case, as described in \S\ref{para:specialise-this-paper}, and it agrees with the definition given in this section (see Remark \ref{rmk:two-defs-of-height-agree}). For this section, we will take the abelian category $\mathcal{A}$ to be the category $\ensuremath{\mathsf{Ab}}$ of abelian groups. However, this is just in order to preserve notational similarity with \cite{Palmer2018Twistedhomologicalstability}, and in fact everything generalises directly to the setting of an arbitrary abelian category $\mathcal{A}$. \end{rmk} \subsection{Some categories.}\label{sss:some-categories} We first introduce some $(2,1)$-categories that we will consider. Only the first one has non-identity $2$-morphisms; the other two are really just $1$-categories. $\bullet\; \ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$: Objects are smooth manifolds $M$ (of dimension at least two) equipped with a collar neighbourhood and a basepoint on the boundary. The $1$-morphisms are embeddings preserving collar neighbourhoods and basepoints and $2$-morphisms are isotopies of such embeddings. More precisely, a collar neighbourhood means a proper embedding \[ c \colon \partial M \times [0,\infty] \longrightarrow M \] such that $c(x,0)=x$ for all $x\in\partial M$. 
A $1$-morphism from $(M,c_M,p)$ to $(N,c_N,q)$ is then an embedding $f \colon M \hookrightarrow N$ taking $p\in\partial M$ to $q\in\partial N$ and commuting with the collar neighbourhoods, meaning that $f(c_M(x,t)) = c_N(f(x),t)$ for all $x\in\partial M$ and $t\in [0,\infty]$. A $2$-morphism from $f_0$ to $f_1$ is an isotopy $f_s$ such that $f_s(p)=q$ and $f_s(c_M(x,t)) = c_N(f_s(x),t)$ for all $x$, $t$ and $s$. $\bullet\; \ensuremath{\mathsf{Top}_{\circ}}$: The category of based, path-connected topological spaces and based continuous maps. $\bullet\; \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$: Objects are small $1$-categories $\mathcal{C}$ equipped with an endofunctor $s\colon \mathcal{C} \to \mathcal{C}$ and a natural transformation $\imath \colon \mathrm{id} \to s$. A $1$-morphism from $(\mathcal{C},s,\imath)$ to $(\mathcal{D},t,\jmath)$ is a functor $f\colon \mathcal{C}\to \mathcal{D}$ together with a natural isomorphism $\psi \colon f\circ s \to t\circ f$ of functors $\mathcal{C}\to \mathcal{D}$ such that $\jmath * \mathrm{id}_f = \psi \circ (\mathrm{id}_f * \imath)$, where $*$ denotes horizontal composition of natural transformations. (Note that this is a subcategory of the category $\ensuremath{\mathsf{Cat}_{\mathsf{st}}}$ defined in \S\ref{para:gen-def-degree}.) \subsection{The partial braid functor.}\label{sss:some-functors} This is a $2$-functor \begin{equation}\label{eq:partial-braid-functor} \mathcal{B}\colon \ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}} \longrightarrow \ensuremath{\mathsf{Cat}_{\mathsf{s}}} \end{equation} such that, for any manifold $M\in\ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$ and any space $X\in\ensuremath{\mathsf{Top}_{\circ}}$, the object $\mathcal{B}(M,X)\in\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ agrees with the category $\mathcal{B}(M,X)$ defined in \S 2.3 of \cite{Palmer2018Twistedhomologicalstability} together with the extra data defined in \S 3.1 of \cite{Palmer2018Twistedhomologicalstability}. 
The definition is as follows. Given objects $(M,c,p) \in \ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$ and $(X,x_0) \in \ensuremath{\mathsf{Top}_{\circ}}$, first set $a_t = c(p,t) \in M$ for $t\in [0,\infty]$ and define an embedding $e \colon M \hookrightarrow M$ by $e(c(m,t)) = c(m,t+1)$ for $(m,t) \in \partial M \times [0,\infty]$ and by the identity outside of the collar neighbourhood. The objects of $\mathcal{B}(M,X)$ are the non-negative integers. A morphism $m \to n$ is a choice of $k\leq\mathrm{min}(m,n)$ and a path in $C_k(M,X)$, up to endpoint-preserving homotopy, from a subset of $\{(a_1,x_0),\ldots,(a_m,x_0)\}$ to a subset of $\{(a_1,x_0),\ldots,(a_n,x_0)\}$. These may be thought of as braids in $M \times [0,1]$ whose strands have been labelled by loops in $X$ based at $x_0$. Composition is defined by concatenating paths, and then deleting configuration points for which the concatenated path is defined only half-way. For example, omitting the labels, we have the heuristic picture: \begin{equation}\label{eComposition} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (al1) at (0,0) [fill,circle,inner sep=1pt] {}; \node (al2) at (0,2) [fill,circle,inner sep=1pt] {}; \node (al3) at (0,4) [fill,circle,inner sep=1pt] {}; \node (ar1) at (10,0) [fill,circle,inner sep=1pt] {}; \node (ar2) at (10,2) [fill,circle,inner sep=1pt] {}; \node (ar3) at (10,4) [fill,circle,inner sep=1pt] {}; \node (ar4) at (10,6) [fill,circle,inner sep=1pt] {}; \node (ar5) at (10,8) [fill,circle,inner sep=1pt] {}; \draw (al1) .. controls (5,0) and (5,4) .. (ar3); \draw[white,line width=1mm] (al3) .. controls (5,4) and (5,0) .. (ar1); \draw (al3) .. controls (5,4) and (5,0) .. (ar1); \draw[white,line width=1mm] (al2) .. controls (5,2) and (5,8) .. (ar5); \draw (al2) .. controls (5,2) and (5,8) ..
(ar5); \node at (13,4) {$\circ$}; \begin{scope}[xshift=16mm] \node (bl1) at (0,0) [fill,circle,inner sep=1pt] {}; \node (bl2) at (0,2) [fill,circle,inner sep=1pt] {}; \node (bl3) at (0,4) [fill,circle,inner sep=1pt] {}; \node (bl4) at (0,6) [fill,circle,inner sep=1pt] {}; \node (bl5) at (0,8) [fill,circle,inner sep=1pt] {}; \node (br1) at (10,0) [fill,circle,inner sep=1pt] {}; \node (br2) at (10,2) [fill,circle,inner sep=1pt] {}; \node (br3) at (10,4) [fill,circle,inner sep=1pt] {}; \node (br4) at (10,6) [fill,circle,inner sep=1pt] {}; \draw (bl1) .. controls (5,0) and (5,2) .. (br2); \draw (bl2) .. controls (5,2) and (5,6) .. (br4); \draw[white,line width=1mm] (bl5) .. controls (5,8) and (5,4) .. (br3); \draw (bl5) .. controls (5,8) and (5,4) .. (br3); \end{scope} \node at (32,2.5) {$=$}; \begin{scope}[xshift=38mm] \node (cl1) at (0,0) [fill,circle,inner sep=1pt] {}; \node (cl2) at (0,2) [fill,circle,inner sep=1pt] {}; \node (cl3) at (0,4) [fill,circle,inner sep=1pt] {}; \node (cr1) at (10,0) [fill,circle,inner sep=1pt] {}; \node (cr2) at (10,2) [fill,circle,inner sep=1pt] {}; \node (cr3) at (10,4) [fill,circle,inner sep=1pt] {}; \node (cr4) at (10,6) [fill,circle,inner sep=1pt] {}; \draw (cl3) .. controls (5,4) and (5,2) .. (cr2); \draw[white,line width=1mm] (cl2) .. controls (5,2) and (5,4) .. (cr3); \draw (cl2) .. controls (5,2) and (5,4) .. (cr3); \end{scope} \end{tikzpicture} \end{split} \end{equation} The endofunctor $s \colon \mathcal{B}(M,X) \to \mathcal{B}(M,X)$ sends the object $n$ to $n+1$ and sends a morphism $\gamma$, which is a path in $C_k(M,X)$, to the morphism $s_k \circ \gamma$, where $s_k \colon C_k(M,X) \to C_{k+1}(M,X)$ sends a configuration $\{(m_1,x_1),\ldots,(m_k,x_k)\}$ to $\{(a_1,x_0),(e(m_1),x_1),\ldots,(e(m_k),x_k)\}$. The natural transformation $\imath \colon \mathrm{id} \to s$ consists of the morphisms $n \to n+1$ given by the paths $t \mapsto \{(a_{1+t},x_0),\ldots,(a_{n+t},x_0)\}$. 
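At the combinatorial level (remembering only which starting point each strand ends at, and discarding braiding and labels), the rule ``concatenate, then delete half-defined strands'' is simply composition of partial injections. A minimal sketch (Python; the dict encoding is purely illustrative), using the two braids of the heuristic picture above:

```python
def compose(g, f):
    """First f, then g; points whose path is only half-defined are deleted."""
    return {i: g[f[i]] for i in f if f[i] in g}

# The two braids in the picture, recorded as partial injections
# (start position -> end position, reading each diagram left to right):
left = {1: 3, 2: 5, 3: 1}   # a morphism 3 -> 5
right = {1: 2, 2: 4, 5: 3}  # a morphism 5 -> 4

# Strand 1 of the left braid ends at position 3, where the right braid has
# no strand, so it is deleted; the other two strands survive.
assert compose(right, left) == {2: 3, 3: 2}
```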
Given $1$-morphisms $\phi \colon (M,c_M,p) \to (N,c_N,q)$ and $f\colon (X,x_0) \to (Y,y_0)$, we need to specify a functor $F \colon \mathcal{B}(M,X) \to \mathcal{B}(N,Y)$ and a natural isomorphism $\psi \colon F \circ s \to s \circ F$. In fact, we will define $F$ such that $F \circ s = s \circ F$ and take $\psi$ to be the identity. On objects, we define $F$ to be the identity. A morphism $\gamma$ in $\mathcal{B}(M,X)$ -- represented by a path in $C_k(M,X)$ for some $k$ -- is sent by $F$ to the morphism in $\mathcal{B}(N,Y)$ represented by the path $C_k(\phi,f) \circ \gamma$, where $C_k(\phi,f) \colon C_k(M,X) \to C_k(N,Y)$ sends a configuration $\{(m_1,x_1),\ldots,(m_k,x_k)\}$ to $\{(\phi(m_1),f(x_1)),\ldots,(\phi(m_k),f(x_k))\}$. If $\phi^\prime$ is another $1$-morphism (i.e., embedding) that is connected to $\phi$ by a $2$-morphism (i.e., is isotopic to $\phi$ respecting basepoints and collar neighbourhoods), then applying the above construction to $\phi^\prime$ and $f$, instead of $\phi$ and $f$, results in exactly the same functor $F \colon \mathcal{B}(M,X) \to \mathcal{B}(N,Y)$. Thus $\mathcal{B}$ extends to a $2$-functor by sending all $2$-morphisms to identities. \subsection{Degree.}\label{sss:degree} Definition \ref{def:degree-general} from \S\ref{sec:inductive-degree} specialises to associate a \emph{degree} \[ \ensuremath{\mathrm{deg}}(T)\in\{-1,0,1,2,3,\ldots,\infty\} \] to any functor $T\colon \mathcal{C}\to\mathsf{Ab}$ for any object $\mathcal{C}\in\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. In particular, via the functor \eqref{eq:partial-braid-functor} above, it associates a \emph{degree} to any functor $\mathcal{B}(M,X) \to \ensuremath{\mathsf{Ab}}$, and recovers the notion of \emph{degree} used in \cite{Palmer2018Twistedhomologicalstability}. 
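A numerical shadow of the inductive definition may help orient the reader. Assuming the difference construction $\Delta$ (which appears in the proof of Lemma \ref{l:degree-under-composition} below) has, at the level of ranks, the effect of taking differences of consecutive values -- as happens when each structure map $T(\imath_n)$ is a split injection -- the degree becomes the degree of polynomial growth of the ranks. This is a sketch under these simplifying assumptions, not the actual definition:

```python
def delta(ranks):
    """Rank-level shadow of the difference construction Delta."""
    return [b - a for a, b in zip(ranks, ranks[1:])]

def degree(ranks):
    """Least h such that applying delta (h+1) times gives zero on the
    recorded window of objects; -1 for the zero functor, by convention."""
    h = -1
    while any(r != 0 for r in ranks):
        ranks = delta(ranks)
        h += 1
        if not ranks:
            return None  # window too short to decide
    return h

assert degree([0] * 10) == -1                   # the zero functor
assert degree([1] * 10) == 0                    # constant ranks: degree 0
assert degree([n * n for n in range(10)]) == 2  # quadratic growth: degree 2
```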
\begin{lem}\label{l:degree-under-composition} If $f\colon \mathcal{C}\to \mathcal{D}$ is a morphism in $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ and $T\colon \mathcal{D}\to \mathsf{Ab}$ is any functor, then we have the inequality $\mathrm{deg}(Tf) \leq \mathrm{deg}(T)$. If $f$ is essentially surjective on objects then it is an equality. \end{lem} \begin{proof} We need to show that $\mathrm{deg}(T)\leq n \Rightarrow \mathrm{deg}(Tf)\leq n$ for each $n\geq -1$, and that the reverse implication also holds if $f$ is essentially surjective on objects. The base case $n=-1$ is clear, since $\mathrm{deg}(T)=-1$ simply means that $T=0$. It is then an exercise in elementary $2$-category theory to show that $(\Delta T)f \cong \Delta (Tf)$, from which the inductive step follows. \end{proof} \begin{defn}\label{def:braidable} Say that a category $(\mathcal{C},s,\imath) \in \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ is \emph{braidable} if there exists a natural isomorphism $\Psi \colon s \circ s \to s \circ s$ such that $\imath * \mathrm{id}_s = \Psi \circ (\mathrm{id}_s * \imath)$. Note that this is equivalent to saying that the endofunctor $s \colon \mathcal{C} \to \mathcal{C}$ itself may be extended to a morphism of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. \end{defn} \begin{coro}\label{coro:braidable} If $(\mathcal{C},s,\imath) \in \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ is braidable, and $T \colon \mathcal{C} \to \mathsf{Ab}$ is any functor, then we have the inequality $\mathrm{deg}(Ts) \leq \mathrm{deg}(T)$, which is an equality if $s$ is essentially surjective on objects. \end{coro} \subsection{Uniformly-defined twisted coefficient systems.}\label{sss:uniform-coeff-systems} Given $\mathcal{C}\in\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$, a twisted coefficient system is simply a functor $\mathcal{C}\to\ensuremath{\mathsf{Ab}}$. 
More generally, we may start with a diagram $F\colon\mathcal{D}\to\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ and define a twisted coefficient system for each object of $\mathcal{D}$ in a compatible way, as follows. By abuse of notation, write $F$ also for the composition $\mathcal{D}\to\ensuremath{\mathsf{Cat}_{\mathsf{s}}}\to\mathsf{Cat}$ of $F$ with the forgetful functor down to the category of small categories. A \emph{uniformly-defined twisted coefficient system for $F$} is then a functor \[ T\colon \mathcal{D} {\textstyle\int} F \longrightarrow \ensuremath{\mathsf{Ab}} \] with domain the Grothendieck construction of $F$. For each object $d\in\mathcal{D}$ there is a natural functor $j_d\colon F(d) \to \mathcal{D} {\textstyle\int} F$, so this determines a twisted coefficient system $T_d \colon F(d) \to \ensuremath{\mathsf{Ab}}$ for each $d\in\mathcal{D}$. \begin{lem}\label{l:grothendieck-construction} The category $\mathcal{D} {\textstyle\int} F$ is naturally an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ and $j_d$ is a morphism of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. \end{lem} Thus we have a well-defined degree $\mathrm{deg}(T)$ of a uniformly-defined twisted coefficient system $T$, and by Lemma \ref{l:degree-under-composition} we know that $\mathrm{deg}(T) \geq \mathrm{deg}(T_d)$, in other words it is an upper bound on the degrees of the individual twisted coefficient systems $T_d \colon F(d)\to\mathcal{D}{\textstyle\int}F\to\ensuremath{\mathsf{Ab}}$. \begin{proof}[Proof of Lemma \ref{l:grothendieck-construction}] For each object $d\in\mathcal{D}$ the category $F(d)$ is equipped with an endofunctor, which we will denote $s_d\colon F(d)\to F(d)$, and a natural transformation $\iota_d \colon 1_{F(d)} \Rightarrow s_d$. 
Recall that $\mathcal{D}{\textstyle\int}F$ has objects $(d,x)$ for $d\in\mathcal{D}$ and $x\in F(d)$ and morphisms $(f,g) \colon (d,x)\to (d^\prime,x^\prime)$ where $f\colon d\to d^\prime$ in $\mathcal{D}$ and $g\colon F(f)(x)\to x^\prime$ in $F(d^\prime)$. One may then define an endofunctor $\bar{s}$ on $\mathcal{D}{\textstyle\int}F$ by setting $\bar{s}(d,x) = (d,s_d(x))$ and $\bar{s}(f,g) = (f,s_{d^\prime}(g))$, and a natural transformation $\bar{\iota} \colon 1_{\mathcal{D}{\scriptstyle\int}F} \Rightarrow \bar{s}$ by setting $\bar{\iota}_{(d,x)} = (1_d,(\iota_d)_x)$. This makes $\mathcal{D}{\textstyle\int}F$ into an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ and the functor $j_d \colon F(d) \to \mathcal{D} {\textstyle\int} F$, together with $\psi = \mathrm{id} \colon j_d \circ s_d \to \bar{s} \circ j_d$, into a morphism of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$. \end{proof} For example, we could take $\mathcal{D}$ to be $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}}$ and $F$ to be the functor \eqref{eq:partial-braid-functor}, in which case a ``uniformly-defined twisted coefficient system'' determines twisted coefficient systems for all \emph{partial braid categories} $\mathcal{B}(M,X)$ simultaneously. \begin{rmk}\label{r:cocone} Fix $F\colon \mathcal{D}\to\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ and suppose we are given a cocone on $F$ (i.e.\ an object $\mathcal{C}\in\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ and a natural transformation $\alpha\colon F\Rightarrow \mathrm{const}_\mathcal{C}$) together with a functor $T\colon \mathcal{C}\to\ensuremath{\mathsf{Ab}}$. This determines a functor $\mathbb{T}\colon\mathcal{D}{\textstyle\int}F\to\ensuremath{\mathsf{Ab}}$ given on objects by $\mathbb{T}(d,x)=T(\alpha_d(x))$. One can show inductively that in this setting we have an inequality $\mathrm{deg}(\mathbb{T})\leq\mathrm{deg}(T)$. 
Note that the category $\Sigma$ of finite cardinals and partially-defined injections is naturally an object of $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ if one equips it with the endofunctor taking $n$ to $n+1$ and a morphism $f$ to the morphism defined by $1 \mapsto 1$ and $i \mapsto f(i-1)+1$ for $i\geq 2$, together with the natural transformation given by the collection of morphisms $\iota_n\colon n \to n+1$ defined by $\iota_n(i)=i+1$. We may therefore consider the slice category $(\ensuremath{\mathsf{Cat}_{\mathsf{s}}} \downarrow \Sigma)$. A lift of a functor $F \colon \mathcal{D} \to \ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ to the slice category is the same thing as a cocone on $F$ with $\Sigma\in\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ at its ``tip''. So if we fix a functor $F \colon \mathcal{D} \to (\ensuremath{\mathsf{Cat}_{\mathsf{s}}} \downarrow \Sigma)$, any twisted coefficient system on $\Sigma$ (i.e.\ functor $\Sigma\to\ensuremath{\mathsf{Ab}}$) automatically induces a uniformly-defined twisted coefficient system (i.e.\ functor $\mathcal{D}{\textstyle\int}F\to\ensuremath{\mathsf{Ab}}$) of the same or smaller degree. In particular, the functor $\mathcal{B}$ \eqref{eq:partial-braid-functor} naturally lifts to the slice category\footnote{\label{f:Whitney}One may see this claim as follows. The category $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}}$ has a \emph{cofinal} subcategory consisting of (collared, basepointed) Euclidean halfspaces of dimension $\geq 3$ in $\ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$, together with the one-point space $* \in \ensuremath{\mathsf{Top}_{\circ}}$. Cofinality of this subcategory follows from the Whitney Embedding Theorem, or, more precisely, its analogue for manifolds with collared boundary (see Lemma \ref{lem:Whitney-with-boundary}). 
The functor $\mathcal{B}$ sends this whole subcategory to the object $\Sigma$ (and its identity morphism) in $\ensuremath{\mathsf{Cat}_{\mathsf{s}}}$ (\textit{cf}.\ \S 2.4 of \cite{Palmer2018Twistedhomologicalstability}), thus automatically providing a lift of $\mathcal{B}$ to $(\ensuremath{\mathsf{Cat}_{\mathsf{s}}} \downarrow \Sigma)$.} (\textit{cf}.\ the construction of \eqref{eq:partial-braid-functor-v2} below), so a twisted coefficient system for $\Sigma$ induces a uniformly-defined twisted coefficient system for $\mathcal{B}$, and thence twisted coefficient systems for each $\mathcal{B}(M,X)$. \end{rmk} \subsection{Height.}\label{sss:height} The definition of the \emph{height} of a twisted coefficient system $T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}}$ requires a different structure on the source category $\mathcal{C}$. Recall from \S\ref{para:compare-two-heights} that $\mathcal{I}$ and $\Sigma$ have objects $0,1,2,\ldots$, morphisms $m \to n$ of $\Sigma$ are partially-defined injections $\ensuremath{\underline{m}} \to \ensuremath{\underline{n}}$ and morphisms of $\mathcal{I}$ are those partially-defined injections that are the identity wherever they are defined. Their automorphism groups are the symmetric groups $\Sigma_n$ and trivial respectively. Denote their endomorphism monoids by $\mathcal{P}_n = \mathrm{End}_\Sigma(n)$ and $\mathcal{I}_n = \mathrm{End}_\mathcal{I}(n)$. Note that $\mathcal{I}_n$ is the submonoid of $\mathcal{P}_n$ of all idempotent elements. It may also be described as the monoid of subsets of $\{1,\ldots,n\}$ under the operation $\cap$ with neutral element $\{1,\ldots,n\}$, or under the operation $\cup$ with neutral element $\varnothing$. 
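This monoid of subsets can be made concrete: writing $f_{n,S}$ for the partial injection that is undefined on $S$ and the identity on its complement, composition corresponds to union, $f_{n,S} \circ f_{n,T} = f_{n,S\cup T}$, with $f_{n,\varnothing}$ the identity. A quick check (Python; the dict encoding of partial injections is purely illustrative):

```python
def forget(n, S):
    """f_{n,S}: the partial injection undefined on S, identity elsewhere."""
    return {i: i for i in range(1, n + 1) if i not in S}

def compose(g, f):
    """Composition of partial injections encoded as dicts: first f, then g."""
    return {i: g[f[i]] for i in f if f[i] in g}

n = 5
for S, T in [({1, 4}, {2, 4}), (set(), {3}), ({5}, {1, 2})]:
    f_S, f_T = forget(n, S), forget(n, T)
    assert compose(f_S, f_T) == forget(n, S | T)  # composition <-> union
    assert compose(f_S, f_S) == f_S               # each f_{n,S} is idempotent
assert forget(n, set()) == {i: i for i in range(1, n + 1)}  # neutral element
```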
The latter identification is given by associating to a subset $S\subseteq\{1,\ldots,n\}$ the morphism $f_{n,S} \in \mathcal{I}_n$ that ``forgets'' $S$, in other words the partial injection from $\{1,\ldots,n\}$ to itself that is undefined on $S$ and the identity on $\{1,\ldots,n\} \smallsetminus S$. \begin{defn}\label{def:cati} Let $\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ be the category whose objects are small categories $\mathcal{C}$ equipped with functors $s\colon \mathcal{I} \to \mathcal{C}$ and $\pi \colon \mathcal{C} \to \Sigma$ such that $\pi \circ s$ is the inclusion, and such that the following two conditions are satisfied: \begin{itemizeb} \item The homomorphisms $\pi \colon \mathrm{End}_\mathcal{C}(s(n)) \to \mathcal{P}_n$ and $\pi \colon \mathrm{Aut}_\mathcal{C}(s(n)) \to \Sigma_n$ are surjective. \item (``Locality'') Fix $n\geq 0$ and $\phi\in\mathrm{End}_\mathcal{C}(s(n))$. For each $i$ there exists $j$ and for each $j$ there exists $i$ such that \begin{equation}\label{eq:locality} \phi \circ s(f_{n,\{i\}}) \;=\; s(f_{n,\{j\}}) \circ \phi. \end{equation} \end{itemizeb} Morphisms from $(\mathcal{C},s,\pi)$ to $(\mathcal{C}^\prime,s^\prime,\pi^\prime)$ are functors $f \colon \mathcal{C} \to \mathcal{C}^\prime$ such that $f \circ s = s^\prime$ and $\pi = \pi^\prime \circ f$. \end{defn} There is an analogue of the functor \eqref{eq:partial-braid-functor} for this setting, which we denote by the same letter, \begin{equation}\label{eq:partial-braid-functor-v2} \mathcal{B}\colon \ensuremath{\mathsf{Mfd}_{\mathsf{c}}} \times \ensuremath{\mathsf{Top}_{\circ}} \longrightarrow \ensuremath{\mathsf{Cat}_{\mathcal{I}}} , \end{equation} and which is defined as follows. Given objects $M \in \ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$ and $X \in \ensuremath{\mathsf{Top}_{\circ}}$, the category $\mathcal{B}(M,X)$ itself is defined as in \S\ref{sss:some-functors}. 
Now we define functors $s \colon \mathcal{I} \to \mathcal{B}(M,X)$ and $\pi \colon \mathcal{B}(M,X) \to \Sigma$. On objects, $s$ is the identity. If $f \colon m \to n$ is the morphism in $\mathcal{I}$ that is the identity on $S \subseteq \{1,\ldots,\mathrm{min}(m,n)\}$ and undefined elsewhere, define $s(f)$ to be the (homotopy class of the) constant path in $C_{\lvert S \rvert}(M,X)$ at the point $\{ (a_s,x_0) \mid s \in S \}$. The functor $\pi$ is also the identity on objects. A morphism $m \to n$ in $\mathcal{B}(M,X)$ is determined by a path of configurations from some subconfiguration of $\{a_1,\ldots,a_m\}$ to some subconfiguration of $\{a_1,\ldots,a_n\}$ (together with some labels, which we are ignoring). This induces a partial injection from $\ensuremath{\underline{m}}$ to $\ensuremath{\underline{n}}$, and the functor $\pi$ records precisely this information. The locality property \eqref{eq:locality} is satisfied since deleting the $i$th strand of a braid from one end corresponds to deleting the $j$th strand from the other end for some $j$. If there is no $i$th strand, according to the ordering at one end, then we may take $j$ to be a number such that there is no $j$th strand at the other end, so that both sides of \eqref{eq:locality} are equal to $\phi$. The surjectivity property holds since any (partial) injection may be realised by a (partial) braid on $M$, since manifolds $M \in \ensuremath{\mathsf{Mfd}_{\mathsf{c}}}$ are required to have dimension at least two. \paragraph*{An alternative viewpoint.} Since a category $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ in particular comes equipped with a functor $s \colon \mathcal{I} \to \mathcal{C}$, we have from \S\ref{sec:cross-effects} a definition of the \emph{height} \[ \mathrm{height}(T)\in\{-1,0,1,2,3,\ldots,\infty\} \] of any functor $T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}}$, as described in \S\ref{para:specialise-this-paper}. 
In particular, via the functor \eqref{eq:partial-braid-functor-v2}, it associates a \emph{height} to any functor $\mathcal{B}(M,X) \to \ensuremath{\mathsf{Ab}}$, and recovers the notion of \emph{height} used in \cite{Palmer2018Twistedhomologicalstability}. In the next section we describe the definition of the \emph{height} from a different viewpoint, which depends on the full structure of $\mathcal{C}$ as an object of $\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$, not just on the functor $s \colon \mathcal{I} \to \mathcal{C}$. This may be summarised as follows. An object $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ has associated categories and faithful functors $\mathcal{A} \to \mathcal{B} \subseteq \mathcal{C}$, together with an $\mathbb{N}$-grading of the objects of $\mathcal{A}$. We may therefore filter the category $\mathcal{A}$ by defining $\mathcal{A}^{>n} \subseteq \mathcal{A}$ to be the full subcategory on objects with grading more than $n$, for $n \in \{-1,0,1,2,\ldots,\infty\}$. Now given any functor $T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}}$, there is an associated functor $T^\prime \colon \mathcal{A} \to \ensuremath{\mathsf{Ab}}$ related to $T$ by the fact that the induced functor $\mathrm{Ind}_{\mathcal{A} \to \mathcal{B}}(T^\prime)$ is isomorphic to $T$ on the subgroupoid $\mathcal{B}^{\sim}$ (the underlying groupoid of $\mathcal{B}$). The \emph{height} of $T$ is then \[ \mathrm{height}(T) = \mathrm{min}\bigl\lbrace n \bigm| T^\prime \equiv 0 \, \text{ on } \mathcal{A}^{>n} \bigr\rbrace . \] The idea is that the functor $T^\prime \colon \mathcal{A} \to \ensuremath{\mathsf{Ab}}$ records all of the \emph{cross-effects} of $T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}}$ simultaneously, with the grading indicating which cross-effects correspond to which objects of $\mathcal{C}$. 
This viewpoint could perhaps be useful in generalising the notion of \emph{height} to more sophisticated situations, by allowing the structure of the category $\mathcal{A}$ indexing the cross-effects to be more complicated (here it is just a disjoint union of monoids). The details of this alternative viewpoint on the \emph{height} of a functor are given in \S\ref{sss:height-general}, using some facts about induction for representations of categories, which may be of interest in their own right and which we discuss in \S\S\ref{sss:induction}--\ref{sss:special-cases}. We explain in Remark \ref{rmk:two-defs-of-height-agree} why this alternative viewpoint agrees exactly with the definition from \S\ref{para:specialise-this-paper} above. \section{Induction for representations of categories}\label{sec:representations-of-categories} In this section we give details of the alternative viewpoint on the \emph{height} of a twisted coefficient system $T \colon \mathcal{C} \to \mathcal{A}$, when $\mathcal{C}$ is an object of the category $\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ defined in \S\ref{sss:height} immediately above. We begin with a detour through the notion of induction for representations of categories in \S\S\ref{sss:induction}--\ref{sss:special-cases}, and then return to the alternative definition of the \emph{height} of a twisted coefficient system in \S\ref{sss:height-general}. \subsection{Induction for representations of categories.}\label{sss:induction} We will take $\mathbb{Z}$ as our ground ring in this section, but everything works equally well over an arbitrary commutative unital ring. Suppose that we have a functor $f\colon \mathcal{A}\to \mathcal{B}$ and we wish to extend representations of $\mathcal{A}$, i.e., functors $g\colon \mathcal{A}\to\ensuremath{\mathsf{Ab}} = \mathbb{Z}\text{-mod}$, along $f$ to $\mathcal{B}$.
We will define a functor (``induction along $f$'') \begin{equation}\label{eq:induction-functor} \mathrm{Ind}_f \colon \mathsf{Fun}(\mathcal{A},\ensuremath{\mathsf{Ab}}) \longrightarrow \mathsf{Fun}(\mathcal{B},\ensuremath{\mathsf{Ab}}) \end{equation} that does this, and prove a few of its properties. We note that what we will be defining is simply the \emph{left Kan extension} operation along the functor $f$, but we would like to have explicit formulas for this, so we will give an elementary definition instead of using this universal characterisation. First we explain the notion of a category ring. Given any category $\mathcal{A}$, its \emph{category ring} $\mathbb{Z} \mathcal{A}$ is defined as follows: as an abelian group it is freely generated by the morphisms of $\mathcal{A}$, and the product of two morphisms is their composition if they are composable and zero otherwise.\footnote{More generally, there is a ring associated to any \emph{semigroup with absorbing element}, i.e.\ semigroup $S$ containing an element $\infty$ such that $s\infty = \infty s = \infty$ for all $s\in S$. This ring is $\mathbb{Z} S/\mathbb{Z}\{\infty\}$: the free ring without unit $\mathbb{Z} S$ generated by $S$ quotiented by the two-sided ideal $\mathbb{Z}\{\infty\}$ generated by $\infty$. A category $\mathcal{C}$ may be regarded as a \emph{partial semigroup} and then turned into a semigroup with absorbing element $\mathcal{C}^\circ$ by adjoining a new element $\infty$: any composition $fg$ that is undefined in $\mathcal{C}$ is defined to be $\infty$ in $\mathcal{C}^\circ$. This recovers the definition of \emph{category ring} given above. 
The construction is similar to that of \cite{Boettger2016Monoidswithabsorbing}, which associates a ring to any \emph{partial monoid}, going via a \emph{monoid with absorbing element}, called a \emph{binoid} in the cited paper.} Note that $\mathbb{Z}\mathcal{A}$ is unital if and only if $\mathcal{A}$ has finitely many objects, in which case the unit is given by the formal sum of the identities $1_a$ over all objects $a$ of $\mathcal{A}$. This definition was given by B.\ Mitchell in \S 7 of \cite{Mitchell1972Ringswithseveralobjects}; see also \S 2 of \cite{Webb2007introductionrepresentationscohomology}. (We note that the definition of Mitchell is more general: it associates a ring $[\mathcal{C}]$ to each \emph{preadditive} ($\ensuremath{\mathsf{Ab}}$-enriched) category $\mathcal{C}$; the above definition of $\mathbb{Z} \mathcal{A}$ is recovered as $[\mathcal{A}_{\ensuremath{\mathsf{Ab}}}]$, where $\mathcal{A}_{\ensuremath{\mathsf{Ab}}}$ denotes the free preadditive category generated by $\mathcal{A}$.) Now, to a functor $f\colon \mathcal{A}\to \mathcal{B}$ and an object $b$ of $\mathcal{B}$ we may associate the following right $\mathbb{Z} \mathcal{A}$-module: \[ \mathbb{Z} (f,b) = \mathbb{Z} \Bigl\langle (\beta,a) \bigm| a\in\mathrm{ob}(\mathcal{A}),\; \beta\colon f(a)\to b \text{ in } \mathcal{B} \Bigr\rangle \] with the $\mathbb{Z} \mathcal{A}$ action defined as follows: a morphism $\alpha\colon a_1 \to a_2$ sends $(\beta,a)$ to zero if $a\neq a_2$ and to $(\beta\circ f(\alpha),a_1)$ otherwise. (This could be written more compactly in terms of ``heteromorphisms'' \cite{Ellerman2007Adjointfunctorsheteromorphisms} as $\bigoplus_a \mathbb{Z}\mathrm{Het}_f(a,b)$, but this will not be relevant for us here.)
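As a small concrete example of the category ring construction: if $\mathcal{A}$ is the category with two objects $a_1, a_2$ and a single non-identity morphism $\alpha \colon a_1 \to a_2$, then $\mathbb{Z}\mathcal{A}$ is free abelian on $\{1_{a_1}, 1_{a_2}, \alpha\}$, with $1_{a_2} \cdot \alpha = \alpha = \alpha \cdot 1_{a_1}$ and all other products of distinct generators equal to zero. Sending $1_{a_1} \mapsto E_{11}$, $1_{a_2} \mapsto E_{22}$ and $\alpha \mapsto E_{21}$, where $E_{ij}$ denotes the corresponding elementary matrix, identifies $\mathbb{Z}\mathcal{A}$ with the ring of lower-triangular $2 \times 2$ integer matrices.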
Given a representation $g\colon \mathcal{A}\to\ensuremath{\mathsf{Ab}}$ we may define a left $\mathbb{Z} \mathcal{A}$-module: \[ g(\mathrm{ob} \mathcal{A}) = \bigoplus_{a\in\mathrm{ob}(\mathcal{A})}g(a), \] with $\alpha\colon a_1\to a_2$ sending $x\in g(a)$ to zero if $a\neq a_1$ and to $g(\alpha)(x)\in g(a_2)$ otherwise. We now define $\mathrm{Ind}_f(g)\colon \mathcal{B}\to\ensuremath{\mathsf{Ab}}$ as follows: \begin{equation}\label{eq:induction-fg} \begin{aligned} &\text{on objects:}\qquad& &\phantom{=}\mathrm{Ind}_f(g)(b) \;=\; \mathbb{Z} (f,b) \,\otimes_{\mathbb{Z} \mathcal{A}}\, g(\mathrm{ob} \mathcal{A}) \\ &\text{on morphisms:}\qquad& &\phantom{=}\mathrm{Ind}_f(g)(\gamma\colon b\to b^\prime) \colon (\beta,a)\otimes x \;\mapsto\; (\gamma\circ\beta,a) \otimes x. \end{aligned} \end{equation} We note that $\mathrm{Ind}_f(g)(b)$ is generated by elements of the form $(\beta,a)\otimes x$ with $x\in g(a)$.\footnote{This is because it is clearly generated by elements of this form with $x\in g(a^\prime)$ for $a^\prime$ possibly different to $a$, but if $a\neq a^\prime$ this element is in fact zero, since then $(\beta,a)\otimes x = (\beta,a)\otimes g(\mathrm{id}_{a^\prime})(x) = (\beta,a)\cdot \mathrm{id}_{a^\prime} \otimes x = 0\otimes x = 0$.} This defines the functor $\mathrm{Ind}_f$ on objects, i.e.\ representations of $\mathcal{A}$. Given a natural transformation $\tau\colon g\Rightarrow g^\prime$ we define the natural transformation $\mathrm{Ind}_f(\tau) \colon \mathrm{Ind}_f(g) \Rightarrow \mathrm{Ind}_f(g^\prime)$ by \begin{flalign*} &&& \mathrm{Ind}_f(\tau)_b \colon (\beta,a)\otimes x \;\mapsto\; (\beta,a)\otimes \tau_{a}(x). &&(\text{where } x\in g(a)) \end{flalign*} This completes the definition of the induction functor \eqref{eq:induction-functor}. 
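As a sanity check on the formula \eqref{eq:induction-fg}: when $\mathcal{A}$ is the terminal category (one object $*$ and only its identity) and $f \colon \mathcal{A} \to \mathcal{B}$ picks out an object $b_0$ of $\mathcal{B}$, we have $\mathbb{Z}\mathcal{A} = \mathbb{Z}$ and $\mathbb{Z}(f,b) = \mathbb{Z}\mathrm{Hom}_\mathcal{B}(b_0,b)$, so that $\mathrm{Ind}_f(g)(b) \cong \mathbb{Z}\mathrm{Hom}_\mathcal{B}(b_0,b) \otimes_{\mathbb{Z}} g(*)$. In particular, inducing the representation with $g(*) = \mathbb{Z}$ yields the representable functor $\mathbb{Z}\mathrm{Hom}_\mathcal{B}(b_0,-)$, as one expects of a left Kan extension along $b_0 \colon * \to \mathcal{B}$.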
As mentioned above, one can check that this explicit construction is left adjoint to the restriction functor $(-) \circ f$; in other words, it is the left Kan extension operation: $\mathrm{Ind}_f = \mathrm{Lan}_f$. \subsection{Comparison to induction for modules over category rings.}\label{ss:induction} The construction mentioned above, taking a representation $g\colon \mathcal{A}\to\ensuremath{\mathsf{Ab}}$ to the $\mathbb{Z} \mathcal{A}$-module $g(\mathrm{ob}\mathcal{A})$, in fact defines an embedding \[ \mathsf{Fun}(\mathcal{A},\ensuremath{\mathsf{Ab}}) \longrightarrow \mathbb{Z} \mathcal{A}\text{-mod} \] of the representation category of $\mathcal{A}$ as a full subcategory of the category of left $\mathbb{Z} \mathcal{A}$-modules. The image is the full subcategory on those $\mathbb{Z} \mathcal{A}$-modules $M$ such that for each element $m\in M$ the set $\{a\in\mathrm{ob}(\mathcal{A}) \mid 1_a \cdot m \neq 0 \}$ is finite. Hence if $\mathcal{A}$ has only finitely many objects, this is an equivalence of categories. This is Theorem 7.1 of \cite{Mitchell1972Ringswithseveralobjects}; see also Proposition 2.1 of \cite{Webb2007introductionrepresentationscohomology}. A functor $f\colon \mathcal{A}\to \mathcal{B}$ induces a homomorphism of abelian groups $\mathbb{Z} f\colon \mathbb{Z} \mathcal{A} \to \mathbb{Z} \mathcal{B}$ that is a homomorphism of (non-unital) \emph{rings} if and only if $f$ is \emph{injective on objects} (see Proposition 2.2.3 of \cite{Xu2007Representationscategoriesapplications} and Proposition 3.1 of \cite{Webb2007introductionrepresentationscohomology}). In this case $\mathbb{Z} \mathcal{B}$ may be considered as a right module over $\mathbb{Z} \mathcal{A}$ via the ring homomorphism $\mathbb{Z} f$ and hence there is an induction functor \begin{equation}\label{eq:induction-functor-2} \mathbb{Z} \mathcal{B} \otimes_{\mathbb{Z} \mathcal{A}} - \colon \mathbb{Z} \mathcal{A}\text{-mod} \longrightarrow \mathbb{Z} \mathcal{B}\text{-mod}. 
\end{equation} This agrees with our definition above: \begin{lem} When $f$ is injective on objects so that the right-hand vertical arrow below exists, the following square commutes up to natural isomorphism\textup{:} \begin{equation}\label{eq:comparing-induction-functors} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,15) {$\mathsf{Fun}(\mathcal{A},\ensuremath{\mathsf{Ab}})$}; \node (tr) at (40,15) {$\mathbb{Z} \mathcal{A}\text{\textup{-mod}}$}; \node (bl) at (0,0) {$\mathsf{Fun}(\mathcal{B},\ensuremath{\mathsf{Ab}})$}; \node (br) at (40,0) {$\mathbb{Z} \mathcal{B}\text{\textup{-mod}}$}; \draw[->] (tl) to node[left,font=\small]{$\mathrm{Ind}_f$} (bl); \draw[->] (tr) to node[right,font=\small]{$\mathbb{Z} \mathcal{B}\otimes_{\mathbb{Z} \mathcal{A}}-$} (br); \incl{(tl)}{(tr)} \incl{(bl)}{(br)} \end{tikzpicture} \end{split} \end{equation} \end{lem} \begin{proof} As a right $\mathbb{Z} \mathcal{B}$-module, $\mathbb{Z} \mathcal{B}$ itself is isomorphic to the direct sum $\bigoplus_b \mathbb{Z} \mathrm{Hom}_\mathcal{B}(\mathcal{B},b)$ where the sum is over all objects $b$ of $\mathcal{B}$ and the notation $\mathrm{Hom}_\mathcal{B}(\mathcal{B},b)$ denotes the disjoint union of the sets $\mathrm{Hom}_\mathcal{B}(b^\prime,b)$ over all objects $b^\prime$ of $\mathcal{B}$. This may be viewed as an isomorphism of right $\mathbb{Z} \mathcal{A}$-modules via $\mathbb{Z} f$. Moreover, under the hypothesis that $f$ is injective on objects, the right $\mathbb{Z} \mathcal{A}$-module $\mathbb{Z}(f,b)$ is isomorphic to $\mathbb{Z}\mathrm{Hom}_\mathcal{B}(\mathcal{B},b)$. Hence we have isomorphisms of right $\mathbb{Z} \mathcal{A}$-modules \[ \mathbb{Z} \mathcal{B} \;\cong\; \textstyle{\bigoplus_b}\, \mathbb{Z} (f,b). 
\] Now the result of sending a functor $g\colon \mathcal{A}\to\ensuremath{\mathsf{Ab}}$ clockwise around the diagram is $\mathbb{Z} \mathcal{B} \otimes_{\mathbb{Z} \mathcal{A}} g(\mathrm{ob}\mathcal{A})$ whereas the result of sending it anticlockwise around the diagram is $\bigoplus_b \mathbb{Z}(f,b) \otimes_{\mathbb{Z} \mathcal{A}} g(\mathrm{ob}\mathcal{A})$. Since tensor products commute with direct sums, the isomorphism of right $\mathbb{Z} \mathcal{A}$-modules above identifies these two results, naturally in $g$. \end{proof} It is not clear how to extend $\mathrm{Ind}_f$ to an induction functor $\mathbb{Z} \mathcal{A}\text{-mod} \to \mathbb{Z} \mathcal{B}\text{-mod}$ in the case when $f$ is not injective on objects. \begin{rmk} Under certain conditions (although certainly not in general) induction followed by restriction is isomorphic to the identity. More precisely, write $\mathrm{Res}_f(-) = (-)\circ f \colon \mathsf{Fun}(\mathcal{B},\ensuremath{\mathsf{Ab}}) \to \mathsf{Fun}(\mathcal{A},\ensuremath{\mathsf{Ab}})$, so that $\mathrm{Res}_f \circ \mathrm{Ind}_f$ is an endofunctor of $\mathsf{Fun}(\mathcal{A},\ensuremath{\mathsf{Ab}})$. Then there is a natural transformation $\mathrm{id} \Rightarrow \mathrm{Res}_f \circ \mathrm{Ind}_f$ (the unit of the adjunction $\mathrm{Ind}_f \dashv \mathrm{Res}_f$) with the property that for each $g\in\mathsf{Fun}(\mathcal{A},\ensuremath{\mathsf{Ab}})$ and $a\in \mathcal{A}$ its component $g(a) \to \mathrm{Ind}_f(g)(f(a))$ is surjective if $f$ is full and bijective if $f$ is also faithful. So when $f$ is fully faithful the composition $\mathrm{Res}_f \circ \mathrm{Ind}_f$ is isomorphic to the identity. We leave this assertion without proof since we will not use it (but see \S 3 of \cite{Webb2007introductionrepresentationscohomology}, especially Prop.\ 3.2(1), for further discussion). \end{rmk} \subsection{Special cases.}\label{sss:special-cases} We note that the formula \eqref{eq:induction-fg} for the induced functor simplifies in some special cases. Suppose first that $\mathcal{A}$ is a disjoint union of monoids, i.e., has no morphisms between distinct objects.
Then $\mathbb{Z} \mathcal{A}$ splits as a direct sum of rings $\bigoplus_a \mathbb{Z}\mathrm{End}_\mathcal{A}(a)$. Also, the right $\mathbb{Z} \mathcal{A}$-module $\mathbb{Z}(f,b)$ splits as a direct sum of modules $\bigoplus_a \mathbb{Z}\mathrm{Hom}_\mathcal{B}(f(a),b)$ and the left $\mathbb{Z} \mathcal{A}$-module $g(\mathrm{ob}\mathcal{A})$ splits as a direct sum of modules $\bigoplus_a g(a)$. The tensor product therefore splits in the same way, and we have: \[ \mathrm{Ind}_f(g)(b) \;\cong\; \textstyle{\bigoplus}_a \bigl( \mathbb{Z}\mathrm{Hom}_\mathcal{B}(f(a),b) \otimes_{\mathbb{Z}\mathrm{End}_\mathcal{A}(a)} g(a) \bigr). \] If the category $\mathcal{B}$ is also a disjoint union of monoids, then this simplifies further to \[ \mathrm{Ind}_f(g)(b) \;\cong\; \textstyle{\bigoplus}_{a\in f^{-1}(b)} \bigl( \mathbb{Z}\mathrm{End}_\mathcal{B}(b) \otimes_{\mathbb{Z}\mathrm{End}_\mathcal{A}(a)} g(a) \bigr). \] Under certain conditions, this may be written purely in terms of automorphism groups, rather than endomorphism monoids, using the following elementary lemma. \begin{lem}\label{l:induction-and-restriction-commute} Suppose that the square of submonoids \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$C$}; \node (tr) at (20,10) {$D$}; \node (bl) at (0,0) {$A$}; \node (br) at (20,0) {$B$}; \incl{(tl)}{(tr)} \incl{(bl)}{(br)} \incl{(bl)}{(tl)} \incl{(br)}{(tr)} \end{tikzpicture} \end{center} satisfies the following condition $(*)$\textup{:} there is a subset $X\subseteq B\times C$ such that the multiplication map $X\to D$ is surjective and whenever $b_1c_1=b_2c_2$ for $(b_i,c_i)\in X$ there exists $a\in A$ such that $b_1=b_2a$ and $ac_1=c_2$. Then for any $\mathbb{Z} C$-module $M$ there is an isomorphism of $\mathbb{Z} B$-modules\textup{:} \[ \mathbb{Z} D\otimes_{\mathbb{Z} C} M \;\cong\; \mathbb{Z} B\otimes_{\mathbb{Z} A} M. \] \end{lem} One may also write this as $\mathrm{Res}_B^D (\mathrm{Ind}_C^D (M)) \cong \mathrm{Ind}_A^B (\mathrm{Res}_A^C (M))$.
\begin{proof} There is an obvious $\mathbb{Z} B$-module homomorphism $i\colon \mathbb{Z} B\otimes_{\mathbb{Z} A} M \to \mathbb{Z} D\otimes_{\mathbb{Z} C} M$ given by $b\otimes m \mapsto b\otimes m$. To define an inverse, note that by property $(*)$ there is a well-defined function $D\times M \to \mathbb{Z} B\otimes_{\mathbb{Z} A} M$ given by sending $(d,m)$ to $b\otimes c\cdot m$, where $(b,c)\in X$ such that $bc=d$. This is linear in the second entry, and it sends $(dc,m)$ and $(d,c\cdot m)$ to the same element for any $c\in C$, so it induces a homomorphism $\mathbb{Z} D\otimes_{\mathbb{Z} C} M \to \mathbb{Z} B\otimes_{\mathbb{Z} A} M$. This is an inverse for $i$. \end{proof} The condition $(*)$ in Lemma \ref{l:induction-and-restriction-commute} will be valid in our setting by the following lemma. Let $\mathcal{P}_n$ be the monoid of partial bijections of $\{1,\ldots,n\}$ and write $\mathcal{P}_k \times \mathcal{P}_{n-k}$ for its submonoid of those partial bijections that preserve the partition into $\{1,\ldots,n-k\}$ and $\{n-k+1,\ldots,n\}$. Write $D^\sim$ for the underlying group of a monoid $D$, so for example $(\mathcal{P}_n)^\sim = \Sigma_n$ is the $n$th symmetric group. \begin{lem}\label{l:checking-property-star} Suppose that $\pi\colon D\to \mathcal{P}_n$ is a surjective monoid homomorphism such that the homomorphism of underlying groups $D^\sim \to \Sigma_n$ is also surjective. Define $C=\pi^{-1}(\mathcal{P}_k \times \mathcal{P}_{n-k})$. Then the square of submonoids \begin{equation}\label{eq:square-of-submonoids} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$C$}; \node (tr) at (20,10) {$D$}; \node (bl) at (0,0) {$C^\sim$}; \node (br) at (20,0) {$D^\sim$}; \incl{(tl)}{(tr)} \incl{(bl)}{(br)} \incl{(bl)}{(tl)} \incl{(br)}{(tr)} \end{tikzpicture} \end{split} \end{equation} satisfies condition $(*)$ of Lemma \ref{l:induction-and-restriction-commute}. \end{lem} \begin{proof} Write $n=k+l$ and $A=C^\sim$, $B=D^\sim$. 
First note that the square above is a pullback diagram, i.e., $A=C\cap B$, which follows from the fact that $\Sigma_k \times \Sigma_l = \Sigma_n \cap (\mathcal{P}_k \times \mathcal{P}_l)$. Define $X\subseteq B\times C$ as follows: $(b,c)\in X$ if and only if the partial bijection $\pi(b)$ is order-preserving on $\mathrm{im}(\pi(c))^\perp \coloneqq \{1,\ldots,n\} \smallsetminus \mathrm{im}(\pi(c))$. We need to show that (a) every $d\in D$ is of the form $bc$ for $(b,c)\in X$ and that (b) if $b_1 c_1 = b_2 c_2$ for $(b_i,c_i)\in X$ then $b_2 a=b_1$ and $ac_1=c_2$ for some $a\in A$. (a). Given any $d\in D$, the partial bijection $\pi(d)$ will not in general preserve the partition $\{1,\ldots,l\} \sqcup \{l+1,\ldots,n\}$, but we may find some permutation $\sigma\in\Sigma_n$ such that $\sigma^{-1}\pi(d)$ does preserve it, i.e., lies in the submonoid $\mathcal{P}_k \times \mathcal{P}_l$. Moreover, it does not matter how $\sigma^{-1}$ acts away from the image of $\pi(d)$ so we may assume that it is order-preserving on $\mathrm{im}(\pi(d))^\perp$. We assumed that the restriction of $\pi$ to underlying groups is surjective, so we may pick $b\in B=D^\sim$ such that $\pi(b)=\sigma$. Now define $c=b^{-1}d$, so of course $bc=d$. Since $\pi(c)=\sigma^{-1}\pi(d) \in \mathcal{P}_k \times \mathcal{P}_l$ we know that $c\in C$. It remains to show that $(b,c)$ is in $X$, i.e., that $\pi(b)$ is order-preserving on $\mathrm{im}(\pi(c))^\perp$. But we ensured that $\sigma^{-1}$ is order-preserving on $\mathrm{im}(\pi(d))^\perp$, which is equivalent to saying that $\sigma$ is order-preserving on $\mathrm{im}(\sigma^{-1}\pi(d))^\perp$, which is precisely the required condition. (b). Define $a=b_2^{-1}b_1 \in B$. It then immediately follows that $b_2 a=b_1$ and $ac_1 = c_2$, so we just have to show that $a\in A$. 
Since $A=C\cap B$ this means we just need to show that $a\in C$, in other words that $\pi(a)\in \mathcal{P}_k \times \mathcal{P}_l$ --- i.e.\ that $\pi(a)$ preserves the partition $\{1,\ldots,l\} \sqcup \{l+1,\ldots,n\}$. First, since $\pi(a)\pi(c_1)=\pi(c_2)$ with $\pi(c_i)$ both preserving the partition, it follows that $\pi(a)$ restricted to $\mathrm{im}(\pi(c_1))$ preserves the partition. We will now show that $\pi(a)$ restricted to $\mathrm{im}(\pi(c_1))^\perp$ is order-preserving --- which will imply that $\pi(a)$ preserves the partition on all of $\{1,\ldots,n\}$. By the definition of $X$, we know that $\pi(b_2)$ is order-preserving on $\mathrm{im}(\pi(c_2))^\perp$. Hence $\pi(b_2)^{-1}$ is order-preserving on \[ \pi(b_2)\bigl( \mathrm{im}(\pi(c_2))^\perp \bigr) = \mathrm{im}(\pi(b_2 c_2))^\perp = \mathrm{im}(\pi(b_1 c_1))^\perp = \pi(b_1)\bigl( \mathrm{im}(\pi(c_1))^\perp \bigr). \] Combined with the fact that $\pi(b_1)$ is order-preserving on $\mathrm{im}(\pi(c_1))^\perp$ this tells us that $\pi(a) = \pi(b_2)^{-1} \pi(b_1)$ is order-preserving on $\mathrm{im}(\pi(c_1))^\perp$, as required. This completes the proof of property (b) of $X\subseteq B\times C$, so the square of submonoids \eqref{eq:square-of-submonoids} satisfies condition $(*)$ of Lemma \ref{l:induction-and-restriction-commute}. \end{proof} \subsection{Returning to the definition of height.}\label{sss:height-general} Following on from \S\ref{sss:height}, we give details of the alternative definition of the \emph{height} of a twisted coefficient system with indexing category $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathcal{I}}}$. For the first step we define a subcategory $\mathcal{B} \subseteq \mathcal{C}$, a faithful functor $\mathcal{A} \to \mathcal{B}$ and an $\mathbb{N}$-grading of the objects of $\mathcal{A}$. 
For the second step, given a functor $T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}}$, we define the \emph{cross-effect functor} $T^\prime \colon \mathcal{A} \to \ensuremath{\mathsf{Ab}}$ associated to $T$, and show that $\mathrm{Ind}_{\mathcal{A} \to \mathcal{B}}(T^\prime)|_{\mathcal{B}^{\sim}} \cong\, T|_{\mathcal{B}^{\sim}}$, where $\mathcal{B}^\sim$ denotes the underlying groupoid of $\mathcal{B}$. As stated in \S\ref{sss:height}, the \emph{height} of $T$ is then the smallest $n$ such that $T^\prime$ is supported on the subcategory $\mathcal{A}^{\leq n} \subseteq \mathcal{A}$, in other words vanishes on the subcategory $\mathcal{A}^{>n} \subseteq \mathcal{A}$. In Remark \ref{rmk:two-defs-of-height-agree}, we explain why this agrees with the \emph{height} of $T$ as defined in \S\ref{para:specialise-this-paper}. \paragraph*{The first step.} Recall that $\mathcal{C}\in\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ comes equipped with functors $s\colon\mathcal{I} \to \mathcal{C}$ and $\pi\colon \mathcal{C}\to \Sigma$. The objects of $\mathcal{B}$ are non-negative integers and those of $\mathcal{A}$ are pairs of non-negative integers. Both are simply disjoint unions of monoids, i.e.\ they consist only of endomorphisms, so we just need to specify $\mathrm{End}_{\mathcal{A}}(k,l)$ and $\mathrm{End}_{\mathcal{B}}(n)$. As in \S\ref{sss:height} and in Lemma \ref{l:checking-property-star} above, let $\mathcal{P}_n$ denote the monoid $\mathrm{End}_\Sigma(n)$ of partial bijections of $\{1,\ldots,n\}$ and write $l=n-k$ for convenience. There is a submonoid isomorphic to $\mathcal{P}_k \times \mathcal{P}_l$ consisting of those partial bijections that respect the partition $\{1,\ldots,l\}\sqcup\{l+1,\ldots,n\}$ wherever they are defined. 
We now define \begin{align*} \mathrm{End}_{\mathcal{B}}(n) &= \mathrm{End}_\mathcal{C}(s(n)) \\ \mathrm{End}_{\mathcal{A}}(k,l) &= \text{preimage of } \mathcal{P}_k \times \mathcal{P}_l \text{ under the map } \pi \colon \mathrm{End}_{\mathcal{B}}(n) \longrightarrow \mathrm{End}_\Sigma(n) = \mathcal{P}_n. \end{align*} This completes the definitions of $\mathcal{B}$ and $\mathcal{A}$. The grading of the objects of $\mathcal{A}$ is given by setting $\mathrm{deg}((k,l)) = k$. There is an obvious faithful functor $\mathcal{A} \to \mathcal{B}$, given on objects by $(k,l)\mapsto k+l$, and an embedding of categories $\mathcal{B} \hookrightarrow \mathcal{C}$. \paragraph*{The second step.} Recall that the monoid $\mathcal{I}_n = \mathrm{End}_{\mathcal{I}}(n)$, which is the submonoid of $\mathcal{P}_n$ consisting of all idempotent elements, is isomorphic to the power set $\mathsf{P}(\{1,\ldots,n\})$, which is a commutative monoid via the operation $\cup$. The correspondence sends a subset $S\subseteq\{1,\ldots,n\}$ to the idempotent element $f_S\in\mathcal{P}_n$ that is undefined on $S$ and the identity elsewhere. Thus, given an object $\mathcal{C}\in\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ and a functor $T\colon \mathcal{C}\to\ensuremath{\mathsf{Ab}}$, we have a collection of idempotents \[ Ts(f_S) \colon T(s(n)) \longrightarrow T(s(n)) \qquad\text{for } S\subseteq \{1,\ldots,n\}. \] Write $l=n-k$ for convenience. In order to define the functor $T^\prime \colon \mathcal{A} \to \ensuremath{\mathsf{Ab}}$ we need to specify an $\mathrm{End}_{\mathcal{A}}(k,l)$-module for each pair $(k,l)$ of non-negative integers. As an abelian group, we define it to be \begin{equation}\label{eq:functorial-cross-effect} T^\prime(k,l) \;=\; \mathrm{im}\bigl( Ts(f_{\{1,\ldots,l\}}) \bigr) \cap \bigcap_{i=l+1}^n \mathrm{ker} \bigl( Ts(f_{\{i\}}) \bigr) \quad\leq\quad T(s(n)). 
\end{equation} The monoid $\mathrm{End}_\mathcal{C}(s(n))$ acts on $T(s(n))$ via the functor $T$, and it turns out (see the next paragraph) that each element $\phi$ of its submonoid $\mathrm{End}_{\mathcal{A}}(k,l)$ sends the subgroup $T^\prime(k,l)$ to itself. Hence $T^\prime(k,l)$ is an $\mathrm{End}_{\mathcal{A}}(k,l)$-module --- and so we have defined the functor $T^\prime \colon \mathcal{A} \to\ensuremath{\mathsf{Ab}}$. The claim in the previous paragraph follows from the fact that $\phi$ commutes with the element $s(f_{\{1,\ldots,l\}})$ and with the set of elements $\bigl\lbrace s(f_{\{l+1\}}),\ldots,s(f_{\{n\}})\bigr\rbrace$. This in turn follows from the fact that $\pi(\phi)\in \mathcal{P}_k \times \mathcal{P}_l$ together with the ``locality'' property \eqref{eq:locality} of $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathcal{I}}}$. It remains to show the following: \begin{prop} The functors $\mathrm{Ind}_{\mathcal{A} \to \mathcal{B}}(T^\prime)$ and $T$ are isomorphic on the subgroupoid $\mathcal{B}^{\sim}$. \end{prop} \begin{proof} Since $\mathcal{B}$ is a disjoint union of monoids, this is just a more elaborate way of saying that for each $n\geq 0$ there is an isomorphism of modules over $\mathrm{Aut}_\mathcal{C}(s(n)) = \mathrm{Aut}_{\mathcal{B}}(n)$: \begin{equation}\label{eq:cross-effect-decomp-generalised} T(s(n)) \;\cong\; \mathrm{Ind}_{\mathcal{A} \to \mathcal{B}} (T^\prime)(n). \end{equation} The proof of Proposition 3.5 of \cite{Palmer2018Twistedhomologicalstability} generalises verbatim to this setting to show that the left-hand side of \eqref{eq:cross-effect-decomp-generalised} is isomorphic to \[ \bigoplus_{k+l=n} \bigl( \mathbb{Z}\mathrm{Aut}_{\mathcal{B}}(n) \otimes_{\mathbb{Z}\mathrm{Aut}_{\mathcal{A}}(k,l)} T^\prime(k,l) \bigr).
\] The categories $\mathcal{A}$ and $\mathcal{B}$ are both disjoint unions of monoids, so, as remarked in \S\ref{sss:special-cases}, the right-hand side of \eqref{eq:cross-effect-decomp-generalised} may be written as follows: \[ \bigoplus_{k+l=n} \bigl( \mathbb{Z}\mathrm{End}_{\mathcal{B}}(n) \otimes_{\mathbb{Z}\mathrm{End}_{\mathcal{A}}(k,l)} T^\prime(k,l) \bigr). \] To finish the proof we will apply Lemma \ref{l:induction-and-restriction-commute}, so we need to know that for each $k+l=n\geq 0$ the square of submonoids \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$\mathrm{End}_{\mathcal{A}}(k,l)$}; \node (tr) at (40,10) {$\mathrm{End}_{\mathcal{B}}(n)$}; \node (bl) at (0,0) {$\mathrm{Aut}_{\mathcal{A}}(k,l)$}; \node (br) at (40,0) {$\mathrm{Aut}_{\mathcal{B}}(n)$}; \incl{(tl)}{(tr)} \incl{(bl)}{(br)} \incl{(bl)}{(tl)} \incl{(br)}{(tr)} \end{tikzpicture} \end{center} satisfies condition $(*)$ of that lemma. This will be given by Lemma \ref{l:checking-property-star} as long as the homomorphism \[ \pi\colon \mathrm{End}_{\mathcal{B}}(n) = \mathrm{End}_\mathcal{C}(s(n)) \longrightarrow \mathrm{End}_\Sigma(n) = \mathcal{P}_n \] (as well as its restriction to maximal subgroups) is surjective. But this is true by definition for any $\mathcal{C}\in\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ (see Definition \ref{def:cati}). \end{proof} \begin{rmk}\label{rmk:two-defs-of-height-agree} Finally, we note that this description of $\text{height}(T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}})$ for an object $\mathcal{C} \in \ensuremath{\mathsf{Cat}_{\mathcal{I}}}$ agrees with the definition of $\text{height}(T \colon \mathcal{C} \to \ensuremath{\mathsf{Ab}})$, given in \S\ref{para:specialise-this-paper}, for any category $\mathcal{C}$ equipped with a functor $\mathcal{I} \to \mathcal{C}$ (such as any object of $\ensuremath{\mathsf{Cat}_{\mathcal{I}}}$). In other words, it is just a different way of packaging the same definition. 
To see this: the height of $T$, as defined in this section, is the largest $k$ such that \eqref{eq:functorial-cross-effect} is non-zero for some value of $l$, whereas the height of $T$, as defined in \S\ref{para:specialise-this-paper}, is the largest $n$ such that \eqref{eq:subobject-of-Tsm-2} is non-zero for some value of $m-n$. But the objects \eqref{eq:functorial-cross-effect} and \eqref{eq:subobject-of-Tsm-2} are the same, with $k \leftrightarrow n$ and $l \leftrightarrow m-n$. \end{rmk} \renewcommand{\thesection}{A} \section[Appendix]{Whitney's embedding theorem for manifolds with collared boundary} The Whitney Embedding Theorem implies that any (paracompact) smooth manifold without boundary admits an embedding into some Euclidean space. In footnote \ref{f:Whitney} on page \pageref{f:Whitney}, the analogous statement for manifolds with collared boundary was used. We could not find an explicit reference for this in the literature, so we explain here briefly how to deduce the statement for manifolds with collared boundary from the statement for manifolds without boundary. \begin{lem}\label{lem:Whitney-with-boundary} Any (paracompact) smooth manifold $M$ equipped with a collar neighbourhood admits a neat embedding into some Euclidean half-space $\mathbb{R}^k_+ = \{ (s_1,\ldots,s_k) \in \mathbb{R}^k \mid s_k \geq 0 \}$. \end{lem} A \emph{collar neighbourhood} means a proper embedding $c \colon \partial M \times [0,1] \hookrightarrow M$ such that $c(p,0)=p$. An embedding $f \colon M \hookrightarrow \mathbb{R}^k_+$ is \emph{neat} if it takes $\partial M$ into $\mathbb{R}^{k-1} = \partial (\mathbb{R}^k_+)$ and the interior of $M$ into the interior of $\mathbb{R}^k_+$ and, moreover, there is $\varepsilon > 0$ such that for all $(p,t) \in \partial M \times [0,\varepsilon)$ we have $f(c(p,t)) = f(p) + t e_k$, where $e_k$ denotes the $k$th standard basis vector of $\mathbb{R}^k$.
\begin{proof} First, we may embed $M$ into a manifold without boundary, either by gluing two copies of $M$ together along their common boundary or simply by attaching an open collar to the boundary of a single copy of $M$. By Whitney's Embedding Theorem we therefore obtain an embedding $g \colon M \hookrightarrow \mathbb{R}^{k-1}$ for some $k$. Now choose a smooth embedding $(x,y) \colon [0,1] \hookrightarrow [0,1]^2$ such that \begin{itemizeb} \item for $0\leq t\leq \frac{1}{4}$ we have $x(t)=0$ and $y(t)=t$, \item for $\frac{3}{4} \leq t\leq 1$ we have $x(t)=t$ and $y(t)=1$. \end{itemizeb} We may then define the required neat embedding $f \colon M \hookrightarrow \mathbb{R}^k_+$ as follows. If $p \in M\smallsetminus\mathrm{image}(c)$ then $f(p) = (g(p),1)$. If $p \in \partial M$ and $t\in [0,1]$ then we define $f(c(p,t)) \;=\; (g(c(p,x(t))),y(t))$. In particular, for $0\leq t\leq \frac{1}{4}$ we have $f(c(p,t)) = (g(p),t)$, so $f$ takes $\partial M$ into $\mathbb{R}^{k-1} \times \{0\}$ and satisfies the neatness condition with $\varepsilon = \frac{1}{4}$, while for $\frac{3}{4} \leq t\leq 1$ we have $f(c(p,t)) = (g(c(p,t)),1)$, which agrees with the formula on $M\smallsetminus\mathrm{image}(c)$, so the two formulas patch together smoothly. \end{proof} The idea is that most of $M$ -- the part far away from its boundary -- is embedded into the affine hyperplane $\mathbb{R}^{k-1} \times \{1\}$, and its collar neighbourhood is bent smoothly downwards towards the linear hyperplane $\mathbb{R}^{k-1} \times \{0\}$, using the functions $x$ and $y$, such that the boundary of $M$ is on this hyperplane and the part of the collar neighbourhood closest to the boundary of $M$ is embedded so that it rises vertically upwards from the hyperplane. \phantomsection \addcontentsline{toc}{section}{References} \renewcommand{\bibfont}{\normalfont\small} \setlength{\bibitemsep}{0pt} \printbibliography \end{document}
\subsection*{#1} \lstinputlisting{#2} \vspace{2em} } \begin{document} \title{Answers to Two Questions on the DP Color Function} \author{Jeffrey A. Mudrock\footnotemark[1] and Seth Thomason\footnotemark[1]} \footnotetext[1]{Department of Mathematics, College of Lake County, Grayslake, IL 60030. E-mail: {\tt {jmudrock@clcillinois.edu}}} \maketitle \begin{abstract} DP-coloring is a generalization of list coloring that was introduced in 2015 by Dvo\v{r}\'{a}k and Postle. The chromatic polynomial of a graph is a notion that has been extensively studied since the early 20th century. The chromatic polynomial of graph $G$ is denoted $P(G,m)$, and it is equal to the number of proper $m$-colorings of $G$. In 2019, Kaul and Mudrock introduced an analogue of the chromatic polynomial for DP-coloring; specifically, the DP color function of graph $G$ is denoted $P_{DP}(G,m)$. Two fundamental questions posed by Kaul and Mudrock are: (1) For any graph $G$ with $n$ vertices, is it the case that $P(G,m)-P_{DP}(G,m) = O(m^{n-3})$ as $m \rightarrow \infty$? and (2) For every graph $G$, does there exist $p,N \in \mathbb{N}$ such that $P_{DP}(K_p \vee G, m) = P(K_p \vee G, m)$ whenever $m \geq N$? We show that the answer to both these questions is yes. In fact, we show the answer to (2) is yes even if we require $p=1$. \medskip \noindent {\bf Keywords.} graph coloring, list coloring, DP-coloring, chromatic polynomial, list color function \noindent \textbf{Mathematics Subject Classification.} 05C15, 05C30, 05C69 \end{abstract} \section{Introduction}\label{intro} In this note all graphs are nonempty, finite, simple graphs unless otherwise noted. Generally speaking we follow West~\cite{W01} for terminology and notation. The set of natural numbers is $\mathbb{N} = \{1,2,3, \ldots \}$. For $m \in \mathbb{N}$, we write $[m]$ for the set $\{1, \ldots, m \}$. Given a set $A$, $\mathcal{P}(A)$ is the power set of $A$. 
If $G$ is a graph and $S, U \subseteq V(G)$, we use $G[S]$ for the subgraph of $G$ induced by $S$, and we use $E_G(S, U)$ for the subset of $E(G)$ with one endpoint in $S$ and one endpoint in $U$. If an edge in $E(G)$ connects the vertices $u$ and $v$, the edge can be represented by $uv$ or $vu$. If $G$ and $H$ are vertex disjoint graphs, we write $G \vee H$ for the join of $G$ and $H$. The \emph{cone of graph $G$} is $K_1 \vee G$. \subsection{List Coloring and DP-Coloring} \label{basic} In the classical vertex coloring problem we wish to color the vertices of a graph $G$ with up to $m$ colors from $[m]$ so that adjacent vertices receive different colors, a so-called \emph{proper $m$-coloring}. The chromatic number of a graph $G$, denoted $\chi(G)$, is the smallest $m$ such that $G$ has a proper $m$-coloring. List coloring, a well-known variation on classical vertex coloring, was introduced independently by Vizing~\cite{V76} and Erd\H{o}s, Rubin, and Taylor~\cite{ET79} in the 1970s. For list coloring, we associate a \emph{list assignment} $L$ with a graph $G$ such that each vertex $v \in V(G)$ is assigned a list of colors $L(v)$ (we say $L$ is a list assignment for $G$). Then, $G$ is \emph{$L$-colorable} if there exists a proper coloring $f$ of $G$ such that $f(v) \in L(v)$ for each $v \in V(G)$ (we refer to $f$ as a \emph{proper $L$-coloring} of $G$). A list assignment $L$ is called a \emph{$k$-assignment} for $G$ if $|L(v)|=k$ for each $v \in V(G)$. The \emph{list chromatic number} of a graph $G$, denoted $\chi_\ell(G)$, is the smallest $k$ such that $G$ is $L$-colorable whenever $L$ is a $k$-assignment for $G$. We say $G$ is \emph{$k$-choosable} if $k \geq \chi_\ell(G)$. Since $G$ must be $L$-colorable whenever $L$ is a $\chi_\ell(G)$-assignment for $G$ that assigns the same list of colors to each element in $V(G)$, it is clear that $\chi(G) \leq \chi_\ell(G)$. 
This inequality may be strict since it is known that there are bipartite graphs with arbitrarily large list chromatic number (see~\cite{ET79}). In 2015, Dvo\v{r}\'{a}k and Postle~\cite{DP15} introduced a generalization of list coloring called DP-coloring (they called it correspondence coloring) in order to prove that every planar graph without cycles of lengths 4 to 8 is 3-choosable. DP-coloring has been extensively studied over the past 5 years (see e.g.,~\cite{B16,B17, BK17, BK182, BK18, KM19, KM20, KO18, KO182, LL19, LLYY19, Mo18, M18}). Intuitively, DP-coloring is a variation on list coloring where each vertex in the graph still gets a list of colors, but identification of which colors are different can change from edge to edge. Following~\cite{BK17}, we now give the formal definition. Suppose $G$ is a graph. A \emph{cover} of $G$ is a pair $\mathcal{H} = (L,H)$ consisting of a graph $H$ and a function $L: V(G) \rightarrow \mathcal{P}(V(H))$ satisfying the following four requirements: \vspace{5mm} \noindent(1) the set $\{L(u) : u \in V(G) \}$ is a partition of $V(H)$; \\ (2) for every $u \in V(G)$, the graph $H[L(u)]$ is complete; \\ (3) if $E_H(L(u),L(v))$ is nonempty, then $u=v$ or $uv \in E(G)$; \\ (4) if $uv \in E(G)$, then $E_H(L(u),L(v))$ is a matching (the matching may be empty). \vspace{5mm} Suppose $\mathcal{H} = (L,H)$ is a cover of $G$. We refer to the edges of $H$ connecting distinct parts of the partition $\{L(v) : v \in V(G) \}$ as \emph{cross-edges}. An \emph{$\mathcal{H}$-coloring} of $G$ is an independent set in $H$ of size $|V(G)|$. It is immediately clear that an independent set $I \subseteq V(H)$ is an $\mathcal{H}$-coloring of $G$ if and only if $|I \cap L(u)|=1$ for each $u \in V(G)$. We say $\mathcal{H}$ is \emph{$m$-fold} if $|L(u)|=m$ for each $u \in V(G)$. 
The \emph{DP-chromatic number} of $G$, $\chi_{DP}(G)$, is the smallest $m \in \mathbb{N}$ such that $G$ has an $\mathcal{H}$-coloring whenever $\mathcal{H}$ is an $m$-fold cover of $G$. Suppose $\mathcal{H} = (L,H)$ is an $m$-fold cover of $G$. We say that $\mathcal{H}$ has a \emph{canonical labeling} if it is possible to name the vertices of $H$ so that $L(u) = \{ (u,j) : j \in [m] \}$ and $(u,j)(v,j) \in E(H)$ for each $j \in [m]$ whenever $uv \in E(G)$.~\footnote{When $\mathcal{H}=(L,H)$ has a canonical labeling, we will always refer to the vertices of $H$ using this naming scheme.} Clearly, when $\mathcal{H}$ has a canonical labeling, $G$ has an $\mathcal{H}$-coloring if and only if $G$ has a proper $m$-coloring. Also, given an $m$-assignment, $L$, for a graph $G$, it is easy to construct an $m$-fold cover $\mathcal{H}'$ of $G$ such that $G$ has an $\mathcal{H}'$-coloring if and only if $G$ has a proper $L$-coloring (see~\cite{BK17}). It follows that $\chi(G) \leq \chi_\ell(G) \leq \chi_{DP}(G)$. The second inequality may be strict since it is easy to prove that $\chi_{DP}(C_n) = 3$ whenever $n \geq 3$, but the list chromatic number of any even cycle is 2 (see~\cite{BK17} and~\cite{ET79}). In some instances DP-coloring behaves similarly to list coloring, but there are some interesting differences. Molloy~\cite{Mo18} has shown that Kahn's~\cite{K96} result that the list edge-chromatic number of a simple graph asymptotically equals the edge-chromatic number holds for DP-coloring as well. Thomassen~\cite{T94} famously proved that every planar graph is 5-choosable, and Dvo\v{r}\'{a}k and Postle~\cite{DP15} observed that the DP-chromatic number of every planar graph is at most 5. Also, Molloy~\cite{M17} recently improved a theorem of Johansson by showing that every triangle-free graph $G$ with maximum degree $\Delta(G)$ satisfies $\chi_\ell(G) \leq (1 + o(1)) \Delta(G)/ \log(\Delta(G))$.
Bernshteyn~\cite{B17} subsequently showed that this bound also holds for the DP-chromatic number. On the other hand, Bernshteyn~\cite{B16} showed that if the average degree of a graph $G$ is $d$, then $\chi_{DP}(G) = \Omega(d/ \log(d))$. This is in striking contrast to the celebrated result of Alon~\cite{A00} that says $\chi_\ell(G) = \Omega(\log(d))$. It was also recently shown in~\cite{BK17} that there exist planar bipartite graphs with DP-chromatic number 4 even though the list chromatic number of any planar bipartite graph is at most 3~\cite{AT92}. A famous result of Galvin~\cite{G95} says that if $G$ is a bipartite multigraph and $L(G)$ is the line graph of $G$, then $\chi_\ell(L(G)) = \chi(L(G)) = \Delta(G)$. However, it is also shown in~\cite{BK17} that every $d$-regular graph $G$ satisfies $\chi_{DP}(L(G)) \geq d+1$. \subsection{Counting Proper Colorings, List Colorings, and DP-Colorings} In 1912 Birkhoff introduced the notion of the chromatic polynomial in hopes of using it to make progress on the four color problem. For $m \in \mathbb{N}$, the \emph{chromatic polynomial} of a graph $G$, $P(G,m)$, is the number of proper $m$-colorings of $G$. It can be shown that $P(G,m)$ is a polynomial in $m$ of degree $|V(G)|$ (see~\cite{B12}). For example, $P(K_n,m) = \prod_{i=0}^{n-1} (m-i)$, $P(C_n,m) = (m-1)^n + (-1)^n (m-1)$, $P(T,m) = m(m-1)^{n-1}$ whenever $T$ is a tree on $n$ vertices, and $P(K_1 \vee G, m) = m P(G, m-1)$ (see~\cite{W01}). The notion of chromatic polynomial was extended to list coloring in the 1990s. In particular, if $L$ is a list assignment for $G$, we use $P(G,L)$ to denote the number of proper $L$-colorings of $G$. The \emph{list color function} $P_\ell(G,m)$ is the minimum value of $P(G,L)$ where the minimum is taken over all possible $m$-assignments $L$ for $G$. 
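The closed forms above are easy to sanity-check by brute force; the following sketch (illustrative code, not from the original note; `P` is our naive counter) verifies the even-cycle formula and the identity $P(K_1 \vee G, m) = m P(G, m-1)$ for $G = C_4$:

```python
from itertools import product

def P(vertices, edges, m):
    """Number of proper m-colorings of the graph, by brute force."""
    return sum(
        all(f[u] != f[v] for u, v in edges)
        for f in (dict(zip(vertices, c))
                  for c in product(range(m), repeat=len(vertices)))
    )

C4 = (list(range(4)), [(0, 1), (1, 2), (2, 3), (3, 0)])
m = 4
assert P(*C4, m) == (m - 1)**4 + (m - 1)               # P(C_n,m) with n even
cone = (list(range(5)), C4[1] + [(4, v) for v in range(4)])  # K_1 join C_4, apex vertex 4
assert P(*cone, m) == m * P(*C4, m - 1)                # P(K_1 v G, m) = m P(G, m-1)
print(P(*C4, m), P(*cone, m))                          # 84 72
```

Exhaustive counting is exponential in $|V(G)|$, so this only serves as a check on very small graphs, but it confirms both identities at $m=4$.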
It is clear that $P_\ell(G,m) \leq P(G,m)$ for each $m \in \mathbb{N}$ since we must consider the $m$-assignment that assigns the same $m$ colors to all the vertices in $G$ when considering all possible $m$-assignments for $G$. In general, the list color function can differ significantly from the chromatic polynomial for small values of $m$. However, for large values of $m$, Wang, Qian, and Yan~\cite{WQ17} (improving upon results in~\cite{D92} and~\cite{T09}) showed the following in 2017. \begin{thm} [\cite{WQ17}] \label{thm: WQ17} If $G$ is a connected graph with $l$ edges, then $P_{\ell}(G,m)=P(G,m)$ whenever $m > \frac{l-1}{\ln(1+ \sqrt{2})}$. \end{thm} It is also known that $P_{\ell}(G,m)=P(G,m)$ for all $m \in \mathbb{N}$ when $G$ is a cycle or chordal (see~\cite{KN16} and~\cite{AS90}). Moreover, if $P_{\ell}(G,m)=P(G,m)$ for all $m \in \mathbb{N}$, then $P_{\ell}(K_n \vee G,m)=P(K_n \vee G,m)$ for each $n, m \in \mathbb{N}$ (see~\cite{KM18}). See~\cite{T09} for a survey of known results and open questions on the list color function. In 2019, Kaul and the first author introduced a DP-coloring analogue of the chromatic polynomial in hopes of gaining a better understanding of DP-coloring and using it as a tool for making progress on some open questions related to the list color function~\cite{KM19}. Specifically, suppose $\mathcal{H} = (L,H)$ is a cover of graph $G$. Let $P_{DP}(G, \mathcal{H})$ be the number of $\mathcal{H}$-colorings of $G$.
Then, the \emph{DP color function} of $G$, $P_{DP}(G,m)$, is the minimum value of $P_{DP}(G, \mathcal{H})$ where the minimum is taken over all possible $m$-fold covers $\mathcal{H}$ of $G$.~\footnote{We take $\mathbb{N}$ to be the domain of the DP color function of any graph.} It is easy to show that for any graph $G$ and $m \in \mathbb{N}$, $P_{DP}(G, m) \leq P_\ell(G,m) \leq P(G,m)$.~\footnote{To prove this, recall that for any $m$-assignment $L$ for $G$, an $m$-fold cover $\mathcal{H}'$ of $G$ such that $G$ has an $\mathcal{H}'$-coloring if and only if $G$ has a proper $L$-coloring is constructed in~\cite{BK17}. It is easy to see from the construction in~\cite{BK17} that there is a bijection between the proper $L$-colorings of $G$ and the $\mathcal{H}'$-colorings of $G$.} Note that if $G$ is a disconnected graph with components $H_1, H_2, \ldots, H_t$, then $P_{DP}(G, m) = \prod_{i=1}^t P_{DP}(H_i,m)$. So, we will only consider connected graphs from this point forward unless otherwise noted. As with list coloring and DP-coloring, the list color function and DP color function of certain graphs behave similarly. However, for some graphs there are surprising differences. For example, similar to the list color function, $P_{DP}(G,m) = P(G,m)$ for every $m \in \mathbb{N}$ whenever $G$ is chordal or an odd cycle~\cite{KM19}. On the other hand, we have the following two results. \begin{thm} [\cite{KM19}] \label{thm: evengirth} If $G$ is a graph whose girth is even, then there is an $N \in \mathbb{N}$ such that $P_{DP}(G,m) < P(G,m)$ whenever $m \geq N$. Furthermore, for any integer $g \geq 3$ there exists a graph $H$ with girth $g$ and an $N \in \mathbb{N}$ such that $P_{DP}(H,m) < P(H,m)$ whenever $m \geq N$. \end{thm} This result is particularly surprising since Theorem~\ref{thm: WQ17} implies that the list color function of any graph eventually equals its chromatic polynomial. The following is also known.
\begin{thm} [\cite{KM19}] \label{thm: asymptotic} For any graph $G$ with $n$ vertices, $$P(G,m)-P_{DP}(G,m) = O(m^{n-2}) \; \; \text{as $m \rightarrow \infty$.}$$ \end{thm} In studying the tightness of Theorem~\ref{thm: asymptotic}, the authors of~\cite{KM19} mentioned that if $G$ is a unicyclic graph~\footnote{A \emph{unicyclic graph} is a connected graph containing exactly one cycle.} on $n$ vertices that contains a cycle of length 4, then $P(G,m)-P_{DP}(G,m) = \Theta(m^{n-3})$. However, they stated that ``we do not have an example of a graph $G$ such that $P(G,m)-P_{DP}(G,m) = \Theta(m^{n-2})$." Motivated by a result of Bernshteyn, Kostochka, and Zhu~\cite{BK18} that says for any graph $G$ there exists an $N \leq 3|E(G)|$ such that $\chi_{DP}(K_p \vee G) = \chi(K_p \vee G)$ whenever $p \geq N$, the authors of~\cite{KM19} also studied $P_{DP}(K_p \vee G, m)$. Interestingly, it turns out that the question of whether there exist $p,N \in \mathbb{N}$ such that $P_{DP}(K_p \vee G, m) = P(K_p \vee G , m)$ whenever $m \geq N$ is related to the asymptotics of $P(G,m)-P_{DP}(G,m)$. In fact, the following two questions were both posed in~\cite{KM19}. These two questions are the focus of this note. \begin{ques} \label{ques: asymptotic} For any graph $G$ with $n$ vertices, is it the case that $P(G,m)-P_{DP}(G,m) = O(m^{n-3})$ as $m \rightarrow \infty$? \end{ques} \begin{ques} \label{ques: join} For every graph $G$, do there exist $p,N \in \mathbb{N}$ such that $P_{DP}(G \vee K_p, m) = P(G \vee K_p , m)$ whenever $m \geq N$? \end{ques} In~\cite{KM19} it is shown that if the answer to Question~\ref{ques: asymptotic} is yes, then the answer to Question~\ref{ques: join} must be yes. We will show that the answer to Question~\ref{ques: asymptotic} is yes. This of course implies that the answer to Question~\ref{ques: join} is yes, but we will show that its answer is yes even when $p$ is fixed at $1$. \subsection{Summary of Results} We begin by showing the following.
\begin{thm} \label{thm: general} Suppose $g$ is an odd integer with $g \geq 3$. If $G$ is a graph on $n$ vertices with girth $g$ or $g+1$, then $P(G,m) - P_{DP}(G,m) = O(m^{n-g})$ as $m \rightarrow \infty$. Consequently, $P(M,m) - P_{DP}(M,m) = O(m^{|V(M)|-3})$ as $m \rightarrow \infty$ for any graph $M$. \end{thm} When considering the third sentence of Theorem~\ref{thm: general}, recall that if the girth of a graph is infinite, then the graph is acyclic and therefore chordal, which means the DP color function of the graph is always equal to its chromatic polynomial. A result in~\cite{KM19} implies that if $G$ is a unicyclic graph on $n$ vertices with girth $2k+2$ where $k \in \mathbb{N}$, then $P(G,m) - P_{DP}(G,m) = \Theta(m^{n-2k-1})$. It can also be shown that for any odd integer $g$ with $g \geq 3$, if $G$ consists of a cycle on $g$ vertices and a cycle on $g+1$ vertices such that the cycles share exactly one vertex, then $P(G,m) - P_{DP}(G,m) = \Theta(m^{(2g)-g})$ (note that such a graph $G$ has $2g$ vertices). This demonstrates the tightness of Theorem~\ref{thm: general} for all possible girths. We end this note by proving the following. \begin{thm} \label{thm: cone} For any graph $G$, there is an $N \in \mathbb{N}$ such that $P_{DP}(K_1 \vee G,m) = P(K_1 \vee G,m)$ whenever $m \geq N$. \end{thm} Theorem~\ref{thm: cone} shows that the DP color function of $K_1 \vee G$ behaves like the list color function of $K_1 \vee G$ since the DP color function of $K_1 \vee G$ eventually equals the chromatic polynomial of $K_1 \vee G$. It is worth mentioning that in this note no attempt has been made to minimize the value of $N$ in Theorem~\ref{thm: cone}. It would be interesting to study the threshold at which $P_{DP}(K_1 \vee G,m) = P(K_1 \vee G,m)$ for a given graph $G$. \section{Proofs of Results} \label{main} The key to proving our results is generalizing the proof technique of the following classical result to the context of DP-coloring. \begin{thm} [\cite{W32}] \label{pro: base} Suppose $G$ is a graph.
Then, $$P(G,m) = \sum_{A \subseteq E(G)} (-1)^{|A|} m^{k_A}$$ where $k_A$ is the number of components of the spanning subgraph of $G$ with edge set $A$. \end{thm} The next four results will also be useful tools to keep in mind. \begin{pro} [\cite{W32}] \label{pro: coefficients2} Suppose $G$ is a graph on $n$ vertices. Then there are nonnegative integers $a_0, \ldots, a_n$ such that $P(G,m) = \sum_{i=0}^{n} (-1)^i a_i m^{n-i}$. Furthermore, if $G$ has $c$ components, then $a_0, \ldots, a_{n-c}$ are all positive integers, and $a_{n-c+1} = \cdots = a_n = 0$. \end{pro} \begin{pro} [\cite{W32}] \label{pro: coefficients} Suppose $G$ is a graph with $s$ edges and $n$ vertices having girth $g \in \mathbb{N}$. Suppose $P(G,m) = \sum_{i=0}^{n} (-1)^i a_i m^{n-i}$. Then, for $i = 0, 1, \ldots, g-2$ $$ a_i = \binom{s}{i} \; \; \text{and} \; \; a_{g-1} = \binom{s}{g-1} - t$$ where $t$ is the number of cycles of length $g$ contained in $G$. \end{pro} \begin{pro} [\cite{KM19}] \label{pro: tree} Suppose $T$ is a tree and $\mathcal{H} = (L,H)$ is an $m$-fold cover of $T$ such that $E_H(L(u),L(v))$ is a perfect matching whenever $uv \in E(T)$. Then, $\mathcal{H}$ has a canonical labeling. \end{pro} \begin{pro} \label{pro: obvious} Suppose that $\mathcal{H} = (L,H)$ is an $m$-fold cover of graph $G$ and $\mathcal{H}$ has a canonical labeling. Let $B_i = \{(v,i): v \in V(G) \}$ for each $i \in [m]$. Then, $I \subset V(H)$ satisfies: $|I \cap L(v)|=1$ for each $v \in V(G)$ and $H[I]$ is isomorphic to $G$ if and only if $I = B_j$ for some $j \in [m]$. \end{pro} \begin{proof} For each $i \in [m]$, it is clear that $|B_i \cap L(v)|=1$ for each $v \in V(G)$ and $H[B_i]$ is isomorphic to $G$. Conversely, suppose that $|I \cap L(v)|=1$ for each $v \in V(G)$, $H[I]$ is isomorphic to $G$, and $I \notin \{B_1, \ldots, B_m \}$. Since $|I \cap L(v)|=1$ for each $v \in V(G)$, $H[I]$ has fewer edges than $G$, contradicting the fact that $H[I]$ is isomorphic to $G$.
\end{proof} \subsection{Proof of Theorem~\ref{thm: general}} \label{general} We will now introduce some notation that will be used for the remainder of this note. Suppose that $G$ is a graph on $n \geq 3$ vertices with $|E(G)| \geq 3$. Let $s = |E(G)|$, and $E(G) = \{e_1, \ldots, e_s \}$. Also, for some $m \in \mathbb{N}$ suppose that $\mathcal{H}= (L,H)$ is an $m$-fold cover of $G$ satisfying $|E_H(L(u), L(v))| = m$ whenever $uv \in E(G)$. Let $\mathcal{U} = \{ I \subseteq V(H) : |L(v) \cap I| = 1 \text{ for each } v \in V(G) \}$. Clearly, $|\mathcal{U}| = m^n$. Now, for each $i \in [s]$, suppose $e_i=u_i v_i$, and let $S_i$ be the set consisting of each $I \in \mathcal{U}$ with the property that $H[I]$ contains an edge in $E_H(L(u_i), L(v_i))$. Also, for each $i \in [s]$ let $C_i = \mathcal{U} - S_i$. Clearly, $$P_{DP}(G, \mathcal{H}) = \left | \bigcap_{i=1}^s C_i \right |.$$ So, by the Inclusion-Exclusion Principle, we see that $$P_{DP}(G, \mathcal{H}) = |\mathcal{U}| - \left | \bigcup_{i=1}^s S_i \right | = m^n - \sum_{k=1}^s (-1)^{k-1} \left ( \sum_{1 \leq i_1 < \cdots < i_k \leq s} \left | \bigcap_{j=1}^k S_{i_j} \right| \right).$$ The following Lemma is the key to our proof of Theorem~\ref{thm: general}. \begin{lem} \label{lem: formulas2} Assuming the set up established above, suppose that $G$ is a graph of girth $g \in \mathbb{N}$. Then, the following three statements hold. \\ (i) For any $k \in [g-1]$ and $i_1, \ldots, i_k \in [s]$ satisfying $i_1 < \cdots < i_k$, $\left | \bigcap_{j=1}^k S_{i_j} \right| = m^{n-k}$. \\ (ii) If $e_{i_1}, \ldots, e_{i_g}$ are distinct edges in $G$, then $\left | \bigcap_{j=1}^g S_{i_j} \right| \leq m^{n-g+1}$. Moreover, $\left | \bigcap_{j=1}^g S_{i_j} \right| = m^{n-g}$ when $e_{i_1}, \ldots, e_{i_g}$ are not the edges of a $g$-cycle in $G$. \\ (iii) For any $k \geq g+1$ and $i_1, \ldots, i_k \in [s]$ satisfying $i_1 < \cdots < i_k$, $\left | \bigcap_{j=1}^k S_{i_j} \right| \leq m^{n-g}$. 
\end{lem} \begin{proof} For Statement (i), suppose that $G'$ is the spanning subgraph of $G$ with $E(G')= \{e_{i_1}, \ldots, e_{i_k} \}$. Since $G$ has girth $g$ and $k \in [g-1]$, $G'$ is an acyclic graph with $n-k$ components. Suppose the components of $G'$ are $W_1, \ldots, W_{n-k}$ (each component is a tree). Note that we can construct each element $I$ of $\bigcap_{j=1}^k S_{i_j}$ in $(n-k)$ steps as follows. For each $i \in [n-k]$ consider the component $W_i$. Suppose $V(W_i) = \{w_1, \ldots, w_l\}$. Choose one element from each of $L(w_1), \ldots, L(w_l)$ so that the subgraph of $H$ induced by the set containing these chosen elements is isomorphic to $W_i$. Then, place these chosen elements in $I$. By Propositions~\ref{pro: tree} and~\ref{pro: obvious}, this step can be done in $m$ ways (regardless of the choices made in previous steps). So, $\left | \bigcap_{j=1}^k S_{i_j} \right| = m^{n-k}$. For Statement (ii), the first part follows from Statement~(i) since $\left | \bigcap_{j=1}^g S_{i_j} \right| \leq \left | \bigcap_{j=1}^{g-1} S_{i_j} \right| = m^{n-g+1}$. So, suppose that $e_{i_1}, \ldots, e_{i_g}$ are not the edges of a $g$-cycle in $G$. Let $G''$ be the spanning subgraph of $G$ with $E(G'')= \{e_{i_1}, \ldots, e_{i_g}\}$. Clearly, $G''$ is an acyclic graph with $n-g$ components. We can obtain $\left | \bigcap_{j=1}^g S_{i_j} \right| = m^{n-g}$ by using an argument similar to the argument used for the proof of Statement~(i). For Statement (iii), notice that we can assume without loss of generality that $e_{i_1}, \ldots, e_{i_g}$ are not the edges of a $g$-cycle in $G$. So, by Statement~(ii), we see that $\left| \bigcap_{j=1}^k S_{i_j} \right| \leq \left | \bigcap_{j=1}^g S_{i_j} \right| = m^{n-g}.$ \end{proof} We are now ready to prove Theorem~\ref{thm: general}. \begin{proof} Suppose $s=|E(G)|$ and $t$ is the number of $g$-cycles in $G$ (note that $t=0$ in the case that $G$ has girth $g+1$). 
Since $g$ is odd, Propositions~\ref{pro: coefficients2} and~\ref{pro: coefficients} tell us that there is an $N \in \mathbb{N}$ such that $$P(G,m) \leq \left(\binom{s}{g-1} - t \right) m^{n-g+1} + \sum_{i=0}^{g-2} (-1)^i \binom{s}{i} m^{n-i}$$ whenever $m \geq N$. Suppose that $m$ is a fixed natural number satisfying $m \geq N$. Suppose that $\mathcal{H}= (L,H)$ is an $m$-fold cover of $G$ satisfying $P_{DP}(G, \mathcal{H}) = P_{DP}(G,m)$. Clearly, we may assume that $|E_H(L(u), L(v))| = m$ whenever $uv \in E(G)$. Now, assume we use the same notation described at the start of this Subsection. By Statement~(i) of Lemma~\ref{lem: formulas2}, we have that \begin{align*} &P_{DP}(G,m) \\ &= m^n + \sum_{k=1}^s (-1)^{k} \left ( \sum_{1 \leq i_1 < \cdots < i_k \leq s} \left | \bigcap_{j=1}^k S_{i_j} \right| \right) \\ &= \sum_{i=0}^{g-1} (-1)^i \binom{s}{i} m^{n-i} - \sum_{1 \leq i_1 < \cdots < i_g \leq s} \left | \bigcap_{j=1}^g S_{i_j} \right| + \sum_{k=g+1}^s (-1)^{k} \left ( \sum_{1 \leq i_1 < \cdots < i_k \leq s} \left | \bigcap_{j=1}^k S_{i_j} \right| \right). \end{align*} We see that Statement~(ii) of Lemma~\ref{lem: formulas2} implies that $$\sum_{1 \leq i_1 < \cdots < i_g \leq s} \left | \bigcap_{j=1}^g S_{i_j} \right| \leq tm^{n-g+1} + \left(\binom{s}{g} - t \right) m^{n-g}.$$ Furthermore, Statement~(iii) of Lemma~\ref{lem: formulas2} implies that $$\sum_{k=g+1}^s (-1)^{k} \left ( \sum_{1 \leq i_1 < \cdots < i_k \leq s} \left | \bigcap_{j=1}^k S_{i_j} \right| \right) \geq -2^s m^{n-g}.$$ These facts imply that $$P_{DP}(G,m) \geq \left(\binom{s}{g-1} - t \right) m^{n-g+1} + \sum_{i=0}^{g-2} (-1)^i \binom{s}{i} m^{n-i} - \left(\binom{s}{g} - t \right) m^{n-g} - 2^s m^{n-g}.$$ So, we see that $$P(G,m) - P_{DP}(G,m) \leq \left(\binom{s}{g} - t + 2^s \right) m^{n-g}.$$ The desired result immediately follows. 
\end{proof} \subsection{Proof of Theorem~\ref{thm: cone}} Notice that the result of Theorem~\ref{thm: cone} is obvious when $G$ is acyclic since the cone of such a graph is chordal. So, throughout this Subsection suppose that $G$ is a graph with $n-1$ vertices where $n \geq 4$ and $s \geq 3$ edges. Suppose that $E(G) = \{e_1, \ldots, e_s \}$. Also, suppose that $M = K_1 \vee G$, and $w$ is the vertex corresponding to the copy of $K_1$ used to form $M$. We use $e_{s+1}, \ldots, e_{s+n}$ to denote the edges in $E(M)$ that have $w$ as an endpoint. We want to show that $P_{DP}(M,m) = P(M,m)$ for sufficiently large $m$, or equivalently, $P_{DP}(M,m) \geq P(M,m)$ for sufficiently large $m$. Since $M$ is a graph with $n$ vertices and $n+s$ edges, Propositions~\ref{pro: coefficients2} and~\ref{pro: coefficients} tell us that $$P(M,m)= m^n - (n+s)m^{n-1} + \left(\binom{n+s}{2} - t \right)m^{n-2} - a_3 m^{n-3} + O(m^{n-4}) \text{ as } m \rightarrow \infty$$ where $t$ is the number of 3-cycles contained in $M$ (note that $t \geq s$). We now give a formula for $a_3$. Let $A = \{A_1, \ldots, A_q \}$ be the set of spanning subgraphs of $M$ with $(n-3)$ components. For each $i \in [q]$, it is straightforward to verify that $3 \leq |E(A_i)| \leq 6$. For $i \in \{3, 4, 5, 6 \}$, let $P_i = \{E(A_j) : A_j \in A, |E(A_j)|=i \}$. By Theorem~\ref{pro: base}, $a_3 = |P_3|-|P_4|+|P_5|-|P_6|$. In this Subsection we are interested in finding a lower bound for $P_{DP}(M,m)$. So, whenever $\mathcal{H} = (L,H)$ is an $m$-fold cover for $M$, we will assume that $|E_H(L(u),L(v))|=m$ for each $uv \in E(M)$. We will also suppose without loss of generality that $L(u) = \{(u,j): j \in [m] \}$ for each $u \in V(M)$, and $(w,j)(v,j) \in E(H)$ for each $v \in V(G)$ and $j \in [m]$ (this is permissible by Proposition~\ref{pro: tree} since the spanning subgraph of $M$ with edge set $\{e_{s+1}, \ldots, e_{s+n}\}$ is a tree). 
Also, for each $e_i \in E(G)$ we suppose that $e_i = u_iv_i$, and we let $x_{i, \mathcal{H}}$~\footnote{We will just write $x_i$ when $\mathcal{H}$ is clear from context.} be the number of edges in $E_H(L(u_i),L(v_i))$ that connect endpoints with differing second coordinates. Finally, we let $x_\mathcal{H} = \sum_{i=1}^s x_{i, \mathcal{H}}$. Clearly, if $x_{\mathcal{H}}=0$, then $\mathcal{H}$ has a canonical labeling and $P_{DP}(M, \mathcal{H})=P(M,m)$. Also, $x_\mathcal{H}$ is the number of cross edges in $H$ that connect vertices with differing second coordinates. For the next Lemma assume that $\mathcal{H} = (L,H)$ is an $m$-fold cover for $M$, and assume we are using the same notation as the beginning of Subsection~\ref{general} (with $M$ playing the role of $G$). \begin{lem} \label{lem: three} The following statements hold. \\ (i) $\sum_{1 \leq i_1 < \cdots < i_3 \leq n+s} \left | \bigcap_{j=1}^3 S_{i_j} \right| \leq tm^{n-2} - x_\mathcal{H}m^{n-3} + |P_3|m^{n-3}$, \\ (ii) $\sum_{1 \leq i_1 < \cdots < i_4 \leq n+s} \left | \bigcap_{j=1}^4 S_{i_j} \right| \geq |P_4|m^{n-3} - 2|P_4|x_\mathcal{H}m^{n-4}$, \\ (iii) $\sum_{1 \leq i_1 < \cdots < i_5 \leq n+s} \left | \bigcap_{j=1}^5 S_{i_j} \right| \leq |P_5|m^{n-3} + \left(\binom{n+s}{5} - |P_5| \right) m^{n-4}$, \\ (iv) $\sum_{1 \leq i_1 < \cdots < i_6 \leq n+s} \left | \bigcap_{j=1}^6 S_{i_j} \right| \geq |P_6|m^{n-3}- 2|P_6|x_\mathcal{H}m^{n-4}$, and \\ (v) For $k \geq 7$, $\sum_{1 \leq i_1 < \cdots < i_k \leq n+s} \left | \bigcap_{j=1}^k S_{i_j} \right| \leq \binom{n+s}{k} m^{n-4}$. \end{lem} \begin{proof} For Statement~(i), suppose $x$, $y$, and $z$ are distinct edges in $E(M)$. Let $M'$ be the spanning subgraph of $M$ with $E(M') = \{x,y,z\}$. If $x$, $y$, and $z$ form a 3-cycle in $M$ containing $w$, then $M'$ consists of this 3-cycle and $n-3$ isolated vertices. Notice that each 3-cycle in $M$ containing $w$ contains exactly one edge in $E(G)$.
So, suppose that $z = e_i$ for some $i \in [s]$; then it is clear that $|S_x \cap S_y \cap S_z| = m^{n-3}(m-x_i) = m^{n-2} - x_im^{n-3}$. In the case that $x$, $y$, and $z$ form a 3-cycle in $M$ not containing $w$, Lemma~\ref{lem: formulas2} implies that $|S_x \cap S_y \cap S_z| \leq m^{n-2}$. Finally, in the case that $x$, $y$, and $z$ do not form a 3-cycle in $M$ (note that there are $|P_3|$ such sets of three edges), Lemma~\ref{lem: formulas2} implies $|S_x \cap S_y \cap S_z| = m^{n-3}$. Statement~(i) now follows immediately from these facts. For Statement~(ii), suppose $a,x,y$, and $z$ are distinct edges in $E(M)$. Let $M'$ be the spanning subgraph of $M$ with $E(M') = \{a,x,y,z\}$. If $M'$ contains a cycle, then $M'$ contains one cycle and consists of $n-3$ components (note that there are $|P_4|$ sets of four edges for which this happens). Suppose that the components of $M'$ are $W_1, \ldots, W_{n-3}$, and assume that $W_1$ is the component of $M'$ containing the cycle. Also, suppose that $V(W_1) = \{w_1, \ldots, w_l \}$. Now, let $B_i = \{(w_j,i) : j \in [l] \}$ for each $i \in [m]$. If $H[B_i]$ is not isomorphic to $W_1$, then one of the elements in $B_i$ must be the endpoint of a cross edge in $H$ that connects vertices with differing second coordinates. Let $\mathcal{B}$ consist of each $B_i \in \{B_1, \ldots, B_m\}$ with the property that $H[B_i]$ is not isomorphic to $W_1$. Notice this means that for each $B_j \in \{B_1, \ldots, B_m\}-\mathcal{B}$, $H[B_j]$ is isomorphic to $W_1$, and there are at least $|\{B_1, \ldots, B_m\}-\mathcal{B}|$ ways to select one element from each of $L(w_1), \ldots, L(w_l)$ so that the subgraph of $H$ induced by the set containing these chosen elements is isomorphic to $W_1$. Let $\mathcal{E}$ be the set of cross edges in $H$ that connect vertices with differing second coordinates (note that $|\mathcal{E}|=x_{\mathcal{H}}$).
We can construct a function $\eta: \mathcal{B} \rightarrow \mathcal{E}$ that maps each $B_i \in \mathcal{B}$ to one of the edges in $\mathcal{E}$ that has an endpoint in $B_i$. Furthermore, if $B_i$, $B_j$, and $B_t$ are distinct elements of $\mathcal{B}$, then it is not possible for $\eta(B_i) = \eta(B_j) = \eta(B_t)$ since an edge only has two endpoints. Consequently, $|\{B_1, \ldots, B_m\}-\mathcal{B}| \geq (m-2x_\mathcal{H})$. So, $|S_a \cap S_x \cap S_y \cap S_z| \geq m^{n-4}(m-2x_\mathcal{H}) = m^{n-3} - 2x_\mathcal{H}m^{n-4}$. Statement~(ii) now immediately follows. For Statement~(iii), suppose $a, b, x, y$, and $z$ are distinct edges in $E(M)$. Let $M'$ be the spanning subgraph of $M$ with $E(M') = \{a,b,x,y,z\}$. Suppose $M'$ consists of $n-3$ components (note that there are $|P_5|$ sets of five edges for which this happens). Suppose that the components of $M'$ are $W_1, \ldots, W_{n-3}$. Note that we can construct each element $I$ of $(S_a \cap S_b \cap S_x \cap S_y \cap S_z)$ in $(n-3)$ steps as follows. For each $i \in [n-3]$ consider the component $W_i$. If $\{a,b,x,y,z \} \cap E(W_i) \neq \emptyset$, then $V(W_i)$ has at least 2 elements, say $V(W_i) = \{w_1, \ldots, w_l\}$, choose one element from each of $L(w_1), \ldots, L(w_l)$ so that the subgraph of $H$ induced by these chosen elements is isomorphic to $W_i$ (this can be done in at most $m$ ways~\footnote{To see why this is so, consider a spanning tree of $W_i$ and apply Propositions~\ref{pro: tree} and~\ref{pro: obvious}.}). Then, place these chosen elements in $I$. If $\{a,b,x,y,z \} \cap E(W_i) = \emptyset$, then $W_i$ is a single vertex, say $V(W_i) = \{w\}$, and we choose an element of $L(w)$ to place in $I$. Notice that in either case there are at most $m$ ways to complete the step. Consequently, $|S_a \cap S_b \cap S_x \cap S_y \cap S_z| \leq m^{n-3}$. A similar argument shows that when $M'$ has fewer than $n-3$ components, $|S_a \cap S_b \cap S_x \cap S_y \cap S_z| \leq m^{n-4}$. 
Statement~(iii) now follows from the fact that $\sum_{1 \leq i_1 < \cdots < i_5 \leq n+s} \left | \bigcap_{j=1}^5 S_{i_j} \right|$ has $\binom{n+s}{5}$ terms. For Statement~(iv), suppose $a,b,c,x,y$, and $z$ are distinct edges in $E(M)$. Let $M'$ be the spanning subgraph of $M$ with $E(M') = \{a,b,c,x,y,z\}$. Suppose $M'$ consists of $n-3$ components (note that there are $|P_6|$ sets of six edges for which this happens). It is easy to see that $M'$ must consist of a complete graph on four vertices and $n-4$ isolated vertices. Suppose that the components of $M'$ are $W_1, \ldots, W_{n-3}$, and assume that $W_1 = K_4$. Using an argument similar to the argument used to prove Statement~(ii), we obtain $|S_a \cap S_b \cap S_c \cap S_x \cap S_y \cap S_z| \geq m^{n-4}(m-2x_\mathcal{H}) = m^{n-3} - 2x_\mathcal{H}m^{n-4}$. Statement~(iv) now immediately follows. For Statement~(v), suppose $k \geq 7$ and $1 \leq i_1 < \cdots < i_k \leq n+s$. Let $M'$ be the spanning subgraph of $M$ with $E(M') = \{e_{i_1}, \ldots, e_{i_{k}}\}$. Then, $M'$ must consist of at most $n-4$ components. An argument similar to the argument used to prove Statement~(iii) then yields $\left | \bigcap_{j=1}^k S_{i_j} \right| \leq m^{n-4}$. Statement~(v) now immediately follows. \end{proof} We need one more Lemma before proving Theorem~\ref{thm: cone}. \begin{lem} \label{lem: lower} Suppose that $m \geq 2(|P_4|+|P_6|)$, and $\mathcal{H} = (L,H)$ is an $m$-fold cover for $M$ with $x_\mathcal{H} > 0$. Then, \begin{align*} &P_{DP}(M, \mathcal{H}) \\ &\geq m^n - (n+s)m^{n-1} + \left(\binom{n+s}{2} - t \right)m^{n-2} - a_3 m^{n-3} \\ &+ m^{n-3} - (2(|P_4|+|P_6|+2^{s-1}))m^{n-4}. 
\end{align*} \end{lem} \begin{proof} Using the notation established in Subsection~\ref{general} (with $M$ playing the role of $G$) along with Lemma~\ref{lem: formulas2}, we know that \begin{align*} &P_{DP}(M,\mathcal{H}) \\ &= m^n + \sum_{k=1}^{n+s} (-1)^{k} \left ( \sum_{1 \leq i_1 < \cdots < i_k \leq n+s} \left | \bigcap_{j=1}^k S_{i_j} \right| \right) \\ &= m^n - (n+s)m^{n-1} + \binom{n+s}{2} m^{n-2} - \sum_{1 \leq i_1 < \cdots < i_3 \leq n+s} \left | \bigcap_{j=1}^3 S_{i_j} \right| + \sum_{1 \leq i_1 < \cdots < i_4 \leq n+s} \left | \bigcap_{j=1}^4 S_{i_j} \right| \\ &- \sum_{1 \leq i_1 < \cdots < i_5 \leq n+s} \left | \bigcap_{j=1}^5 S_{i_j} \right| +\sum_{1 \leq i_1 < \cdots < i_6 \leq n+s} \left | \bigcap_{j=1}^6 S_{i_j} \right| + \sum_{k=7}^{n+s} (-1)^{k} \left ( \sum_{1 \leq i_1 < \cdots < i_k \leq n+s} \left | \bigcap_{j=1}^k S_{i_j} \right| \right). \end{align*} Then, Lemma~\ref{lem: three} yields: \begin{align*} &P_{DP}(M,\mathcal{H}) \\ &\geq m^n - (n+s)m^{n-1} + \binom{n+s}{2} m^{n-2} - \left(tm^{n-2} - x_\mathcal{H}m^{n-3} + |P_3|m^{n-3} \right)\\ & + |P_4|m^{n-3} - 2|P_4|x_\mathcal{H}m^{n-4}- \left(|P_5|m^{n-3} + \left(\binom{n+s}{5} - |P_5| \right) m^{n-4} \right) \\ &+|P_6|m^{n-3}- 2|P_6|x_\mathcal{H}m^{n-4} - \sum_{k=7}^{n+s} \binom{n+s}{k} m^{n-4} \\ &= m^n - (n+s)m^{n-1} + \left(\binom{n+s}{2} - t \right)m^{n-2} - a_3 m^{n-3} \\ &+ x_\mathcal{H}m^{n-3} - 2|P_4|x_\mathcal{H}m^{n-4} - 2|P_6|x_\mathcal{H}m^{n-4} - 2^{s}m^{n-4} \\ &\geq m^n - (n+s)m^{n-1} + \left(\binom{n+s}{2} - t \right)m^{n-2} - a_3 m^{n-3} \\ &+ m^{n-3} - (2(|P_4|+|P_6|+2^{s-1}))m^{n-4}. \end{align*} \end{proof} We now prove Theorem~\ref{thm: cone}. \begin{proof} We know that there must be $C, N_1 \in \mathbb{N}$ such that $P(M,m) \leq m^n - (n+s)m^{n-1} + \left(\binom{n+s}{2} - t \right)m^{n-2} - a_3 m^{n-3} + Cm^{n-4}$ whenever $m \geq N_1$. 
Also, when $m \geq 2(|P_4|+|P_6|)$ and $\mathcal{H} = (L,H)$ is an $m$-fold cover for $M$ with $x_\mathcal{H} > 0$, Lemma~\ref{lem: lower} tells us $P_{DP}(M, \mathcal{H}) \geq m^n - (n+s)m^{n-1} + \left(\binom{n+s}{2} - t \right)m^{n-2} - a_3 m^{n-3} + m^{n-3} - (2(|P_4|+|P_6|+2^{s-1}))m^{n-4}.$ Finally, there must be an $N_2 \in \mathbb{N}$ such that $m^{n-3} - (2(|P_4|+|P_6|+2^{s-1})+C)m^{n-4} \geq 0$ whenever $m \geq N_2$. Let $N = \max \{N_1, N_2, 2(|P_4|+|P_6|) \}$. If $m \geq N$ and $\mathcal{H} = (L,H)$ is an $m$-fold cover for $M$ with $x_\mathcal{H} > 0$, then $P_{DP}(M, \mathcal{H}) - P(M,m) \geq m^{n-3} - (2(|P_4|+|P_6|+2^{s-1})+C)m^{n-4} \geq 0$. Since we know that when $\mathcal{H} = (L,H)$ is an $m$-fold cover for $M$ with $x_\mathcal{H} = 0$, $P_{DP}(M, \mathcal{H})=P(M,m)$, we may conclude that $P_{DP}(M,m) = P(M,m)$ whenever $m \geq N$. \end{proof} {\bf Acknowledgment.} The authors would like to thank Hemanshu Kaul and Alexandr Kostochka for their guidance and encouragement.
\section{Introduction} The facts that motivated us to write this paper lie in fractional calculus theory. In particular, the careful study of a differential operator with a fractional derivative in the final terms \cite{firstab_lit:1Nakhushev1977}, \cite{kukushkin2019} has played an important role in our research. The main feature is that there exist various approaches to studying the operator, and one of them is based on the opportunity to represent it as a sum of a senior term and a lower term; here we should note that this method works if the senior term is selfadjoint or normal. Thus, in the case corresponding to a selfadjoint senior term, we can partially solve the problem by applying the results of the perturbation theory, within the framework of which the following papers are well known: \cite{firstab_lit:1Katsnelson}, \cite{firstab_lit:1Krein}, \cite{firstab_lit:2Markus}, \cite{firstab_lit:3Matsaev}, \cite{firstab_lit:1Mukminov}, \cite{firstab_lit:Shkalikov A.}. Note that to apply the results of the last paper we must have the representation mentioned above. In other cases we can use the methods of the paper \cite{firstab_lit(arXiv non-self)kukushkin2018}, which are relevant if we deal with non-selfadjoint operators and allow us to study spectral properties of operators. In the paper \cite{kukushkin Gen} we explore a special operator class for which a number of spectral theory theorems can be applied. Further, we construct an abstract model of a differential operator in terms of m-accretive operators and call it an m-accretive operator transform; we find conditions which, being imposed, guarantee that the transform belongs to the class. One of them is a compact embedding of the space generated by an m-accretive operator (infinitesimal generator) into the initial Hilbert space. 
Note that in the case corresponding to the second order operator with the Kiprianov operator in the final terms, we obtained the embedding mentioned above in the one-dimensional case only. In this paper we try to shed light on this problem, and the main result is a theorem establishing the equivalence of norms in function spaces, as a consequence of which we obtain a compact embedding of the space generated by the infinitesimal generator of the shift semigroup in a direction into the Lebesgue space. We should note that this result does not give us a useful concrete application within the theory built so far, for it is more of an abstract generalization. However, by virtue of the popularity and well-known applicability of the theory of Lebesgue spaces, this result deserves consideration in its own right. \section{Preliminaries} Let $ C,C_{i} ,\;i\in \mathbb{N}_{0},$ be real constants. We assume that the value of $C$ is positive and can differ in various formulas, while the values of $C_{i} $ are fixed. Everywhere further, if the contrary is not stated, we consider linear densely defined operators acting on a separable complex Hilbert space $\mathfrak{H}$. Denote by $ \mathcal{B} (\mathfrak{H})$ the set of linear bounded operators on $\mathfrak{H}.$ Denote by $\tilde{L}$ the {\it closure} of an operator $L.$ Denote by $ \mathrm{D} (L),\, \mathrm{R} (L),\,\mathrm{N}(L)$ the {\it domain of definition}, the {\it range}, and the {\it kernel} or {\it null space} of an operator $L,$ respectively. 
Consider a pair of complex Hilbert spaces $\mathfrak{H},\mathfrak{H}_{+};$ the notation $ \mathfrak{H}_{+}\subset\subset\mathfrak{ H} $ means that $\mathfrak{H}_{+}$ is dense in $\mathfrak{H}$ as a set of elements, that we have a bounded embedding provided by the inequality $ \|f\|_{\mathfrak{H}}\leq C_{0}\|f\|_{\mathfrak{H}_{+}},\,C_{0}>0,\;f\in \mathfrak{H}_{+}, $ and that, moreover, any set bounded with respect to the norm of $\mathfrak{H}_{+}$ is compact with respect to the norm of $\mathfrak{H}.$ An operator $L$ is called {\it bounded from below} if the following relation holds: $\mathrm{Re}(Lf,f)_{\mathfrak{H}}\geq \gamma_{L}\|f\|^{2}_{\mathfrak{H}},\,f\in \mathrm{D} (L),\,\gamma_{L}\in \mathbb{R},$ where $\gamma_{L}$ is called a lower bound of $L.$ An operator $L$ is called {\it accretive} if $\gamma_{L}=0.$ An operator $L$ is called {\it strictly accretive} if $\gamma_{L}>0.$ An operator $L$ is called {\it m-accretive} if the following relation holds: $(L+\zeta)^{-1}\in \mathcal{B}(\mathfrak{H}),\,\|(L+\zeta)^{-1}\| \leq (\mathrm{Re}\,\zeta)^{-1},\,\mathrm{Re}\,\zeta>0. 
$ Assume that $T_{t},\,(0\leq t<\infty),$ is a semigroup of bounded linear operators on $\mathfrak{H};$ by definition, put $$ Af=-\lim\limits_{t\rightarrow+0} \left(\frac{T_{t}-I}{t}\right)f, $$ where $\mathrm{D}(A)$ is the set of elements for which the last limit exists in the sense of the norm of $\mathfrak{H}.$ In accordance with the definition \cite[p.1]{Pasy}, \cite{firstab_lit:Yosida}, the operator $-A$ is called the {\it infinitesimal generator} of the semigroup $T_{t}.$ Using the notation of the paper \cite{firstab_lit:kipriyanov1960}, we assume that $\Omega$ is a convex domain of the $n$-dimensional Euclidean space $\mathbb{E}^{n}$, $P$ is a fixed point of the boundary $\partial\Omega,$ and $Q(r,\mathbf{e})$ is an arbitrary point of $\Omega;$ we denote by $\mathbf{e}$ the unit vector directed from $P$ to $Q,$ denote by $r=|P-Q|$ the Euclidean distance between the points $P,Q,$ and use the shorthand notation $T:=P+\mathbf{e}t,\,t\in \mathbb{R}.$ We consider the Lebesgue classes $L_{p}(\Omega),\;1\leq p<\infty,$ of complex-valued functions. 
For the function $f\in L_{p}(\Omega),$ we have \begin{equation}\label{1} \int\limits_{\Omega}|f(Q)|^{p}dQ=\int\limits_{\omega}d\chi\int\limits_{0}^{d(\mathbf{e})}|f(Q)|^{p}r^{n-1}dr<\infty, \end{equation} where $d\chi$ is an element of the solid angle of the unit sphere surface (the unit sphere belongs to $\mathbb{E}^{n}$), $\omega$ is the surface of this sphere, and $d:=d(\mathbf{e})$ is the length of the segment of the ray going from the point $P$ in the direction $\mathbf{e}$ within the domain $\Omega.$ We use the shorthand notation $P\cdot Q=P^{i}Q_{i}=\sum^{n}_{i=1}P_{i}Q_{i}$ for the inner product of the points $P=(P_{1},P_{2},...,P_{n}),\,Q=(Q_{1},Q_{2},...,Q_{n})$ which belong to $\mathbb{E}^{n}.$ Denote by $D_{i}f$ the weak partial derivative of the function $f$ with respect to the coordinate variable with index $1\leq i\leq n.$ We assume that all functions have a zero extension outside of $\bar{\Omega}.$ Everywhere further, unless otherwise stated, we use the notation of the papers \cite{firstab_lit:kato1980}, \cite{firstab_lit:kipriyanov1960}, \cite{firstab_lit:1kipriyanov1960}. \begin{lem}\label{L1} Assume that $A$ is a closed densely defined operator and the following condition holds: \begin{equation}\label{2} \|(A+t)^{-1}\|_{\mathrm{R} \rightarrow \mathfrak{H}}\leq\frac{1}{ t},\,t>0, \end{equation} where the notation $\mathrm{R}:=\mathrm{R}(A+t)$ is used. Then the operator $A$ is m-accretive. \end{lem} \begin{proof} Using \eqref{2}, consider $$ \|f\|^{2}_{\mathfrak{H}}\leq \frac{1}{t^{2}} \|(A+t)f \|^{2}_{ \mathfrak{H}};\, \|f\|^{2}_{\mathfrak{H}}\leq \frac{1}{ t^{2}} \left\{ \| A f\|^{2}_{ \mathfrak{H}}+2t \mathrm{Re}(Af,f)_{ \mathfrak{H}}+t^{2}\| f \|^{2}_{ \mathfrak{H}}\right\} ; $$ $$ t^{-1} \| A f \|^{2}_{ \mathfrak{H}}+2 \mathrm{Re}(A f,f)_{ \mathfrak{H}}\geq0,\,f\in \mathrm{D} (A). $$ Letting $t$ tend to infinity, we obtain \begin{equation}\label{3}\mathrm{Re}(A f,f)_{ \mathfrak{H}}\geq0 ,\,f\in \mathrm{D}(A). 
\end{equation} This means that the operator $A$ is accretive. Due to \eqref{3}, we have $\{\lambda \in \mathbb{C}:\,\mathrm{Re}\lambda<0\}\subset \Delta(A), $ where $\Delta(A)=\mathbb{C}\setminus \overline{\Theta (A)}.$ Applying Theorem 3.2 \cite[p.268]{firstab_lit:kato1980}, we obtain that $A-\lambda$ has a closed range and $\mathrm{nul} (A-\lambda)=0,\,\mathrm{def} (A-\lambda)=\mathrm{const},\,\forall\lambda\in \Delta(A).$ Let $\lambda_{0}\in \Delta(A) ,\;{\rm Re}\lambda_{0} <0.$ Note that in consequence of inequality \eqref{3}, we have \begin{equation}\label{4} {\rm Re} ( f,(A-\lambda )f )_{\mathfrak{H}}\geq - {\rm Re} \lambda \|f\|^{2}_{\mathfrak{H}},\,f\in \mathrm{D}(A). \end{equation} Since the operator $A-\lambda_{0}$ has a closed range, we have \begin{equation*} \mathfrak{H}=\mathrm{R} (A-\lambda_{0})\oplus \mathrm{R} (A-\lambda_{0})^{\perp} . \end{equation*} We remark that the intersection of the sets $\mathrm{D}(A)$ and $\mathrm{R} (A-\lambda_{0})^{\perp}$ is zero: indeed, if we assume the contrary, then applying inequality \eqref{4}, for an arbitrary element $f\in \mathrm{D}(A)\cap \mathrm{R} (A-\lambda_{0})^{\perp}$ we get \begin{equation*} - {\rm Re} \lambda_{0} \|f\|^{2}_{\mathfrak{H}} \leq {\rm Re} ( f,[A-\lambda_{0} ]f )_{\mathfrak{H}}=0, \end{equation*} hence $f=0.$ It implies that $$ \left(f,g\right)_{\mathfrak{H}}=0,\;\forall f\in \mathrm{R} (A-\lambda_{0})^{\perp},\;\forall g\in \mathrm{D}(A). $$ Since $ \mathrm{D}(A)$ is dense in $\mathfrak{H},$ it follows that $\mathrm{R} (A-\lambda_{0})^{\perp}=0.$ Hence ${\rm def} (A-\lambda_{0}) =0,$ and if we take into account Theorem 3.2 \cite[p.268]{firstab_lit:kato1980}, we come to the conclusion that ${\rm def} (A-\lambda )=0,\;\forall\lambda\in \Delta(A);$ hence the operator $A$ is m-accretive. The proof is complete. \end{proof} Assume that $\Omega\subset \mathbb{E}^{n}$ is a convex domain with a sufficiently smooth boundary (of class $C^{3}$) in $n$-dimensional Euclidean space. 
For the sake of simplicity, we assume that $\Omega$ is bounded. Consider the shift semigroup in a direction acting on $L_{2}(\Omega)$ and defined as follows: $ T_{t}f(Q)=f(Q+\mathbf{e}t), $ where $Q\in \Omega,\,Q=P+\mathbf{e}r.$ The following lemma establishes a property of the infinitesimal generator $-A$ of the semigroup $T_{t}.$ \begin{lem}\label{L2} We claim that $ A=\tilde{A_{0}},\,\mathrm{N}(A)=0, $ where $A_{0}$ is the restriction of $A$ to the set $ C^{\infty}_{0}( \Omega ).$ \end{lem} \begin{proof} Let us show that $T_{t}$ is a strongly continuous semigroup ($C_{0}$ semigroup). This can easily be established due to the property of continuity in mean. Using the Minkowski inequality, we have $$ \left\{\int\limits_{\Omega}|f(Q+\mathbf{e}t)-f(Q)|^{2}dQ\right\}^{\frac{1}{2}}\leq \left\{\int\limits_{\Omega}|f(Q+\mathbf{e}t)-f_{m}(Q+\mathbf{e}t)|^{2}dQ\right\}^{\frac{1}{2}}+ $$ $$ +\left\{\int\limits_{\Omega}|f(Q)-f_{m}(Q)|^{2}dQ\right\}^{\frac{1}{2}}+\left\{\int\limits_{\Omega}|f_{m}(Q)-f_{m}(Q+\mathbf{e}t)|^{2}dQ\right\}^{\frac{1}{2}}= $$ $$ =I_{1}+I_{2}+I_{3}<\varepsilon, $$ where $f\in L_{2}(\Omega),\,\left\{f_{n}\right\}_{1}^{\infty}\subset C_{0}^{\infty}(\Omega);$ here $m$ is chosen so that $I_{1},I_{2}< \varepsilon/3, $ and $t$ is chosen so that $I_{3}< \varepsilon/3.$ Thus, for arbitrarily small $\varepsilon>0,$ there exists a positive number $t_{0}$ such that $$ \|T_{t}f-f\|_{L_{2} }<\varepsilon,\,t<t_{0}. $$ Hence, in accordance with the definition, $T_{t}$ is a $C_{0}$ semigroup. Using the assumption that all functions have a zero extension outside $\bar{\Omega},$ we have $\|T_{t}\| \leq 1.$ Hence we conclude that $T_{t}$ is a $C_{0}$ semigroup of contractions (see \cite{Pasy}). Hence, by virtue of Corollary 3.6 \cite[p.11]{Pasy}, we have \begin{equation} \label{5} \|(\lambda+A)^{-1}\| \leq \frac{1}{\mathrm{Re} \lambda },\,\mathrm{Re}\lambda>0. \end{equation} Inequality \eqref{5} implies that $A$ is m-accretive. 
It is a well-known fact that an infinitesimal generator is a closed operator; hence $A_{0}$ is closable. It is not hard to prove that $ \tilde{A} _{0}$ is an m-accretive operator. For this purpose, let us rewrite relation \eqref{5} in the form \begin{equation*} \|(\lambda+ \tilde{A} _{0})^{-1}\|_{\mathrm{R}\rightarrow \mathfrak{H}} \leq \frac{1}{\mathrm{Re} \lambda },\,\mathrm{Re}\lambda>0; \end{equation*} applying Lemma \ref{L1}, we obtain that $ \tilde{A} _{0}$ is an m-accretive operator. Note that there does not exist an accretive extension of an m-accretive operator (see \cite{firstab_lit:kato1980}). On the other hand, it is clear that $\tilde{A} _{0}\subset A.$ Thus we conclude that $\tilde{A} _{0}= A.$ Consider the operator $$ B f(Q)=\!\int_{0}^{r}\!\!f(P+\mathbf{e }[r-t])dt,\,f\in L_{2}(\Omega). $$ It is not hard to prove that $B \in \mathcal{B}(L_{2}):$ applying the generalized Minkowski inequality, we get $$ \|B f\|_{L_{2} }\leq \int\limits_{0}^{\mathrm{diam\,\Omega}} \left(\int\limits_{\Omega}|f(P+\mathbf{e }[r-t])|^{2}dQ\right)^{1/2}dt\leq C\|f\|_{L_{2} }. $$ Note that the fact $A^{-1}_{ 0}\subset B $ follows from the properties of the one-dimensional integral defined on smooth functions. Using Theorem 2 \cite[p.555]{firstab_lit:Smirnov5} and the fact $\tilde{A} _{0}= A$ proved above, we deduce that $A^{-1} = \widetilde{A ^{-1}_{ 0}} ;$ hence $A^{-1} \subset B .$ The proof is complete. \end{proof} \section{Main results} Consider the linear space $ \mathbb{L}^{n}_{2}(\Omega):=\left\{f=(f_{1},f_{2},...,f_{n}),\,f_{i}\in L_{2}(\Omega)\right\}, $ endowed with the inner product $$ (f,g)_{\mathbb{L}^{n}_{2}}=\int\limits_{\Omega} (f, g)_{\mathbb{E}^{n}} dQ,\,f,g\in \mathbb{L}^{n}_{2}(\Omega). $$ It is clear that this pair forms a Hilbert space, and we use the same notation $\mathbb{L}^{n}_{2}(\Omega)$ for it. 
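For orientation, let us sketch the simplest one-dimensional realization of the objects appearing in Lemma \ref{L2}; this illustration is given for the reader's convenience only and is not used in the sequel. Take $n=1,$ $\Omega=(0,d),$ $P=0,$ $\mathbf{e}=1,$ so that $r=x;$ then
\begin{align*}
T_{t}f(x)&=f(x+t) \quad \text{(with the zero extension outside } \bar{\Omega}),\qquad Af=-f',\;f\in C^{\infty}_{0}(0,d),\\
Bf(x)&=\int\limits_{0}^{x}f(x-t)\,dt=\int\limits_{0}^{x}f(s)\,ds,\\
\|Bf\|^{2}_{L_{2}}&=\int\limits_{0}^{d}\left|\,\int\limits_{0}^{x}f(s)\,ds\right|^{2}dx\leq\int\limits_{0}^{d}x\,\|f\|^{2}_{L_{2}}\,dx=\frac{d^{2}}{2}\,\|f\|^{2}_{L_{2}},
\end{align*}
the last chain being an instance of the bound $B\in\mathcal{B}(L_{2})$ from the proof of Lemma \ref{L2}, obtained here via the Cauchy--Schwarz inequality instead of the generalized Minkowski inequality.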
Consider the sesquilinear form $$ t(f,g):=\sum\limits_{i=1}^{n}\int\limits_{\Omega} (f ,\mathbf{e_{i}})_{\mathbb{E}^{n}}\overline{(g,\mathbf{e_{i}})}_{\mathbb{E}^{n}} dQ,\,f,g\in \mathbb{L} ^{n}_{2} (\Omega), $$ where $\mathbf{e_{i}}$ corresponds to $P_{i}\in \partial\Omega,\,i=1,2,...,n.$ \begin{lem}\label{L3} The points $P_{i}\in \partial\Omega,\,i=1,2,...,n,$ can be chosen so that the form $t$ generates an inner product. \end{lem} \begin{proof} It is clear that we should only establish the implication $t(f,f)=0\,\Rightarrow f=0.$ Since $\Omega\subset \mathbb{E}^{n},$ without loss of generality we can assume that there exist $P_{i}\in \partial\Omega,\,i=1,2,...,n,$ such that \begin{equation}\label{6}\Delta= \left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|\neq0, \end{equation} where $P_{i}=(P_{i1},P_{i2},...,P_{in}).$ This becomes clear if we recall that in the contrary case, for an arbitrary set of points $P_{i}\in \partial\Omega,\,i=1,2,...,n,$ we have $$ P_{n}=\sum\limits_{k=1}^{n-1}c_{k}P_{k},\,c_{k}= \mathrm{const}, $$ whence it follows that we can consider $\Omega$ at least as a subset of $\mathbb{E}^{n-1}.$ Continuing this line of reasoning, we can find a dimension $p$ such that the corresponding $\Delta\neq0$ and further assume that $\Omega\subset \mathbb{E}^{p}. $ Consider the relation \begin{equation*} \sum\limits_{i=1}^{n}\int\limits_{\Omega}| (\psi,\mathbf{e_{i}})_{\mathbb{E}^{n}}|^{2} dQ =0,\,\psi\in \mathbb{L}^{n}_{2}(\Omega). \end{equation*} It follows that $\left(\psi(Q),\mathbf{e_{i}}\right)_{\mathbb{E}^{n}}=0$ a.e. 
$i=1,2,...,n.$ Note that every $P_{i}$ corresponds to the set $\vartheta_{i}:=\{Q\in \Omega :\;(\psi(Q),\mathbf{e_{i}})_{\mathbb{E}^{n}}\neq0 \}.$ Consider $\Omega'=\Omega\backslash\bigcup\limits_{i=1}^{n}\vartheta_{i};$ it is clear that $\mathrm{mess} \left(\bigcup\limits_{i=1}^{n}\vartheta_{i}\right)=0.$ Note that, due to this construction, we can rewrite the relation obtained above in the coordinate form \begin{equation*} \left\{ \begin{aligned} (P_{11}-Q_{1})\psi_{1}(Q)+(P_{12}-Q_{2})\psi_{2}(Q)+...+(P_{1n}-Q_{n})\psi_{n}(Q)=0 \\ (P_{21}-Q_{1})\psi_{1}(Q)+(P_{22}-Q_{2})\psi_{2}(Q)+ ... +(P_{2n}-Q_{n})\psi_{n}(Q)=0 \\ ...\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;...\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; ...\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \\ (P_{n1}-Q_{1})\psi_{1}(Q)+(P_{n2}-Q_{2})\psi_{2}(Q)+ ... +(P_{nn}-Q_{n})\psi_{n}(Q)=0 \end{aligned} \right.\;, \end{equation*} where $\psi=(\psi_{1},\psi_{2},...,\psi_{n}),\,Q=(Q_{1},Q_{2},...,Q_{n}),\, Q\in \Omega'.$ Therefore, if we prove that $$\Lambda(Q)= \left| \begin{array}{cccc} P_{11}-Q_{1}&P_{12}-Q_{2}&...&P_{1n}-Q_{n}\\ P_{21}-Q_{1}&P_{22}-Q_{2}&...&P_{2n}-Q_{n}\\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right|\neq0\;a.e., $$ then we obtain $\psi =0$ a.e. Assume the contrary, i.e., 
that there exists such a set $ \Upsilon \subset \Omega ,\,\mathrm{mess}\,\Upsilon \neq0,$ so that $\Lambda(Q)=0,\,Q\in \Upsilon .$ We have $$ \left| \begin{array}{cccc} P_{11}-Q_{1}&P_{12}-Q_{2}&...&P_{1n}-Q_{n}\\ P_{21}-Q_{1}&P_{22}-Q_{2}&...&P_{2n}-Q_{n}\\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right|= \left| \begin{array}{cccc} P_{11} &P_{12} &...&P_{1n} \\ P_{21}-Q_{1}&P_{22}-Q_{2}&...&P_{2n}-Q_{n}\\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right|- $$ $$ -\left| \begin{array}{cccc} Q_{1}& Q_{2}&...& Q_{n}\\ P_{21}-Q_{1}&P_{22}-Q_{2}&...&P_{2n}-Q_{n}\\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right|= \left| \begin{array}{cccc} P_{11} &P_{12} &...&P_{1n} \\ P_{21} &P_{22} &...&P_{2n} \\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right|- $$ $$ -\left| \begin{array}{cccc} P_{11} &P_{12} &...&P_{1n} \\ Q_{1}& Q_{2}&...& Q_{n}\\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right| -\left| \begin{array}{cccc} Q_{1}& Q_{2}&...& Q_{n}\\ P_{21} &P_{22} &...&P_{2n} \\ ...&...&...&...\\ P_{n1}-Q_{1}&P_{n2}-Q_{2}&...&P_{nn}-Q_{n} \end{array} \right|= $$ $$ =\left| \begin{array}{cccc} P_{11} &P_{12} &...&P_{1n}\\ P_{21} &P_{22} &...&P_{2n} \\ ...&...&...&...\\ P_{n1} &P_{n2} &...&P_{nn} \end{array} \right|-\sum\limits_{j=1}^{n} \Delta_{j}=0, $$ where $$ \Delta_{j}=\left| \begin{array}{cccc} P_{11} &P_{12} &...&P_{1n}\\ P_{21} &P_{22} &...&P_{2n} \\ ...&...&...&...\\ P_{j-1\,1} &P_{j-1\,2} &...&P_{j-1\,n}\\ Q_{1}& Q_{2}&...& Q_{n}\\ P_{j+1\,1} &P_{j+1\,2} &...&P_{j+1\,n}\\ ...&...&...&...\\ P_{n1} &P_{n2} &...&P_{nn} \end{array} \right|. 
$$ Therefore, we have $$ \sum\limits_{j=1}^{n} \Delta_{j}/ \Delta =1, $$ since $\Delta\neq0.$ Hence, we can treat the above matrix constructions in the way that gives us the following representation $$ \sum\limits_{j=1}^{n} \alpha_{j} P_{j} =Q,\,\sum\limits_{j=1}^{n} \alpha_{j} =1,\,\alpha_{j}=\Delta_{j}/ \Delta . $$ Now, let us prove that $\Upsilon$ belongs to a hyperplane in $\mathbb{E}^{n},$ we have $$ \left| \begin{array}{cccc} P_{11}-Q_{1}&P_{12}-Q_{2}&...&P_{1n}-Q_{n}\\ P_{21}-P_{11}&P_{22}-P_{12}&...&P_{2n}-P_{1n}\\ ...&...&...&...\\ P_{n1}-P_{n-1\,1}&P_{n2}-P_{n-1\,2}&...&P_{nn}-P_{n-1\,n} \end{array} \right|= \left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|- $$ $$ -\left| \begin{array}{cccc} Q_{1}& Q_{2}&...& Q_{n}\\ P_{21}-P_{11}&P_{22}-P_{12}&...&P_{2n}-P_{1n}\\ ...&...&...&...\\ P_{n1}-P_{n-1\,1}&P_{n2}-P_{n-1\,2}&...&P_{nn}-P_{n-1\,n} \end{array} \right| = $$ $$ = \left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|- \left| \begin{array}{cccc} \sum\limits_{j=1}^{n} \alpha_{j} P_{j1}&\sum\limits_{j=1}^{n} \alpha_{j} P_{j2}&...&\sum\limits_{j=1}^{n} \alpha_{j} P_{jn}\\ P_{21}-P_{11}&P_{22}-P_{12}&...&P_{2n}-P_{1n}\\ ...&...&...&...\\ P_{n1}-P_{n-1\,1}&P_{n2}-P_{n-1\,2}&...&P_{nn}-P_{n-1\,n} \end{array} \right| = $$ $$ =\left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|-\sum\limits_{j=1}^{n} \alpha_{j} \left| \begin{array}{cccc} P_{j1}& P_{j2}&...& P_{jn}\\ P_{21}-P_{11}&P_{22}-P_{12}&...&P_{2n}-P_{1n}\\ ...&...&...&...\\ P_{n1}-P_{n-1\,1}&P_{n2}-P_{n-1\,2}&...&P_{nn}-P_{n-1\,n} \end{array} \right| = $$ $$ =\left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|- \left| \begin{array}{cccc} 
P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|\sum\limits_{j=1}^{n} \alpha_{j} = $$ $$ =\left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right|- \left| \begin{array}{cccc} P_{11}&P_{12}&...&P_{1n}\\ P_{21}&P_{22}&...&P_{2n}\\ ...&...&...&...\\ P_{n1}&P_{n2}&...&P_{nn} \end{array} \right| =0. $$ Hence $\Upsilon$ belongs to a hyperplane generated by the points $P_{i},\,i=1,2,...,n.$ Therefore $ \mathrm{mess} \Upsilon=0,$ and we obtain $ \psi =0$ a.e. The proof is complete. \end{proof} Consider a pre Hilbert space $ \mathbf{L} ^{n}_{2}(\Omega):=\{f:\,f\in \mathbb{L}^{n}_{2}(\Omega)\}$ endowed with the inner product $$ (f,g)_{\mathbf{L} ^{n}_{2} }:=\sum\limits_{i=1}^{n}\int\limits_{\Omega} (f ,\mathbf{e_{i}})_{\mathbb{E}^{n}}\overline{(g,\mathbf{e_{i}})}_{\mathbb{E}^{n}} dQ,\,f,g\in \mathbb{L} ^{n}_{2} (\Omega), $$ where $\mathbf{e_{i}}$ corresponds to $P_{i}\in \partial\Omega,\,i=1,2,...,n,$ condition \eqref{6} holds. The following theorem establishes a norm equivalence. \begin{teo}\label{T1} The norms $\|\cdot\|_{ \mathbb{L}^{n}_{2} }$ and $\|\cdot\|_{\mathbf{L} ^{n}_{2} } $ are equivalent. \end{teo} \begin{proof} Consider the space $ \mathbb{L}^{n}_{2}(\Omega) $ and a functional $ \varphi(f):= \|f\|_{\mathbf{L} ^{n}_{2} } ,\,f\in \mathbb{L}^{n}_{2}(\Omega). 
$ Let us prove that $ \varphi(f)\geq C,\,f\in \mathrm{U}, $ where $\mathrm{U}:=\{f\in \mathbb{L}^{n}_{2}(\Omega),\,\|f\|_{\mathbb{L}^{n}_{2}}=1\}.$ Assume the contrary; then there exists a sequence $\{\psi_{k}\}_{1}^{\infty}\subset \mathrm{U}$ such that $\varphi(\psi_{k})\rightarrow0,\,k\rightarrow\infty.$ Since the sequence $\{\psi_{k}\}_{1}^{\infty}$ is bounded, we can extract a weakly convergent subsequence $\{\psi_{k_{j}}\}_{1}^{\infty}$ and claim that the weak limit $\psi$ of the sequence $\{\psi_{k_{j}}\}_{1}^{\infty}$ belongs to $\mathrm{U}.$ Consider a functional $$ \mathcal{L}_{g}(f):=\sum\limits_{i=1}^{n}\int\limits_{\Omega} (f ,\mathbf{e_{i}})_{\mathbb{E}^{n}}\overline{(g,\mathbf{e_{i}})}_{\mathbb{E}^{n}} dQ,\;f,g\in \mathbb{L}^{n}_{2}(\Omega). $$ Due to the following obvious chain of inequalities \begin{equation}\label{7} |\mathcal{L}_{g}(f)| \leq \sum\limits_{i=1}^{n} \left\{\int\limits_{\Omega}| (f ,\mathbf{e_{i}})_{\mathbb{E}^{n}}|^{2} dQ\right\}^{\frac{1}{2}}\left\{\int\limits_{\Omega}| (g ,\mathbf{e_{i}})_{\mathbb{E}^{n}}|^{2} dQ\right\}^{\frac{1}{2}} \leq n \|f\|_{\mathbb{L}^{n}_{2}}\|g\|_{\mathbb{L}^{n}_{2}}, \; f,g\in \mathbb{L}^{n}_{2}(\Omega), \end{equation} we see that $\mathcal{L}_{g}$ is a linear bounded functional on $\mathbb{L}^{n}_{2}(\Omega).$ Therefore, by virtue of the weak convergence of the sequence $\{\psi_{k_{j}}\},$ we have $\mathcal{L}_{g}(\psi_{k_{j}})\rightarrow \mathcal{L}_{g}(\psi),\,k_{j}\rightarrow \infty.$ On the other hand, since it was supposed that $\varphi(\psi_{k})\rightarrow0,\,k\rightarrow\infty,$ we have $\varphi(\psi_{k_{j}})\rightarrow0,\,k_{j}\rightarrow\infty.$ Hence, applying \eqref{7}, we conclude that $\mathcal{L}_{g}(\psi_{k_{j}})\rightarrow 0,\,k_{j}\rightarrow \infty.$ Combining the results given above, we obtain \begin{equation}\label{8} \mathcal{L}_{g}(\psi) =\sum\limits_{i=1}^{n}\int\limits_{\Omega} (\psi 
,\mathbf{e_{i}})_{\mathbb{E}^{n}}\overline{(g,\mathbf{e_{i}})}_{\mathbb{E}^{n}} dQ=0,\,\forall g \in \mathbb{L}^{n}_{2}(\Omega). \end{equation} Taking into account \eqref{8} and using the ordinary properties of a Hilbert space, we obtain \begin{equation*} \sum\limits_{i=1}^{n}\int\limits_{\Omega}| (\psi,\mathbf{e_{i}})_{\mathbb{E}^{n}}|^{2} dQ =0. \end{equation*} Hence, in accordance with Lemma \ref{L3}, we get $ \psi =0$ a.e., which contradicts the fact that $\|\psi\|_{\mathbb{L}^{n}_{2}}=1.$ Hence the following estimate is true: $ \varphi(f)\geq C,\,f\in \mathrm{U}. $ Having applied the Cauchy--Schwarz inequality to the Euclidean inner product, we can also easily obtain $ \varphi(f) \leq \sqrt{n}\|f\|_{\mathbb{L}^{n}_{2}},\,f\in \mathbb{L}^{n}_{2}(\Omega). $ Combining the above inequalities, we can rewrite these two estimates as follows: $ C_{0}\leq\varphi(f)\leq C_{1},\,f\in \mathrm{U}. $ To make the issue clear, we rewrite the previous inequality in the form \begin{equation}\label{9} C_{0}\|f\|_{\mathbb{L}^{n}_{2} }\leq\varphi(f)\leq C_{1}\|f\|_{\mathbb{L}^{n}_{2} },\,f\in \mathbb{L}^{n}_{2}(\Omega),\;C_{0},C_{1}>0. \end{equation} The proof is complete. \end{proof} Consider the pre-Hilbert space $\mathfrak{\widetilde{H}}^{n}_{ A },$ i.e., the set $C_{0}^{\infty}(\Omega)$ endowed with the inner product $$ (f,g)_{\mathfrak{\widetilde{H}}^{n}_{ A }}=\sum\limits_{i=1}^{n}(A_{i} f,A_{i} g)_{L_{2} },\,f,g\in C_{0}^{\infty}(\Omega), $$ where $-A_{i}$ is the infinitesimal generator corresponding to the point $P_{i}.$ Here we should point out that the form $(\cdot,\cdot)_{\mathfrak{\widetilde{H}}^{n}_{ A }} $ generates an inner product due to the fact $\mathrm{N}(A_{i})=0,\,i=1,2,...,n,$ proved in Lemma \ref{L2}. Let us denote the corresponding Hilbert space by $\mathfrak{H}^{n}_{A}.$ \begin{corol}\label{C1} The norms $\|\cdot\|_{\mathfrak{H}^{n}_{A}}$ and $\|\cdot\|_{H_{0}^{1}} $ are equivalent, and we have a compact embedding $$ \mathfrak{H}^{n}_{A}\subset\subset L_{2}(\Omega). 
$$ \end{corol} \begin{proof} Let us prove that \begin{equation*} Af = -(\nabla f ,\mathbf{e})_{\mathbb{E}^{n}},\,f\in C^{\infty}_{0}( \Omega ). \end{equation*} Using the Lagrange mean value theorem, we have $$ \int\limits_{\Omega}\left| \left(\frac{T_{t}-I}{t}\right)f(Q)-(\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}(Q )\right|^{2} dQ=\int\limits_{\Omega}\left| (\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}(Q_{\xi} )-(\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}(Q )\right|^{2} dQ, $$ where $Q_{\xi}=Q+ \mathbf{e} \xi,\,0<\xi<t.$ Since the function $(\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}$ is continuous on $\bar{\Omega},$ it is uniformly continuous on $\bar{\Omega}.$ Thus, for arbitrary $\varepsilon>0,$ a positive number $\delta$ can be chosen so that $$ \int\limits_{\Omega}\left| (\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}(Q_{\xi} )-(\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}(Q )\right|^{2} dQ<\varepsilon,\,t<\delta, $$ whence the desired result follows. Taking it into account, we obtain $$ \|Af\|_{L_{2}}=\left\{\int\limits_{\Omega}|(\nabla f ,\mathbf{e})_{\mathbb{E}^{n}}|^{2}dQ\right\}^{1/2}\leq \left\{\int\limits_{\Omega} \|\mathbf{e}\|^{2}_{\mathbb{E}^{n}}\sum\limits_{i=1}^{n}|D_{i}f|^{2} dQ\right\}^{1/2} =\|f\|_{H_{0}^{1}},\,f\in C^{\infty}_{0}(\Omega). $$ Using this estimate, we easily obtain $\|f\|_{\mathfrak{H}^{n}_{A}}\leq C \|f\|_{H_{0}^{1}},\,f\in C_{0}^{\infty}(\Omega).$ On the other hand, as a particular case of formula \eqref{9}, we obtain $ C_{0}\|f\|_{H_{0}^{1}}\leq\|f\|_{\mathfrak{H}^{n}_{A}} ,\,f\in C_{0}^{\infty}(\Omega). $ Thus, we can combine the previous inequalities and rewrite them as follows: $ C_{0}\|f\|_{H_{0}^{1}}\leq\|f\|_{\mathfrak{H}^{n}_{A}}\leq C \|f\|_{H_{0}^{1}},\,f\in C_{0}^{\infty}(\Omega). $ Passing to the limit on the left-hand and right-hand sides of the last inequality, we get \begin{equation*} C_{0}\|f\|_{H_{0}^{1}}\leq\| f\|_{\mathfrak{H}^{n}_{A}}\leq C \|f\|_{H_{0}^{1}},\,f\in H_{0}^{1}(\Omega). 
\end{equation*} Combining the fact $ H_{0}^{1}(\Omega)\subset\subset L_{2}(\Omega) $ (the Rellich--Kondrachov theorem) with the lower estimate in the previous inequality, we complete the proof. \end{proof} \vspace{0.5 cm} \noindent{\bf Uniformly elliptic operator in divergence form}\\ Consider a uniformly elliptic operator $$ \,-\mathcal{T}:=-D_{j} ( a^{ij} D_{i}\cdot),\, a^{ij}(Q) \in C^{2}(\bar{\Omega}),\, a^{ij}\xi _{i} \xi _{j} \geq \gamma_{a} |\xi|^{2} ,\, \gamma_{a} >0,\,i,j=1,2,...,n,\; $$ $$ \mathrm{D}( \mathcal{T} ) =H^{2}(\Omega)\cap H^{1}_{0}( \Omega ). $$ The following theorem gives us a key to applying the results of the paper \cite{kukushkin Gen}, in accordance with which a number of spectral theorems can be applied to the operator $-\mathcal{T}.$ Moreover, the conditions established below are formulated in terms of the operator $A,$ which reveals the mathematical nature of the operator $-\mathcal{T}.$ \begin{teo}\label{T2} We claim that \begin{equation}\label{10} -\mathcal{T}=\frac{1}{n}\sum\limits^{n}_{i=1} A_{i}^{\ast}G_{i}A_{i}, \end{equation} and the following relations hold: $$ -\mathrm{Re}(\mathcal{T}f,f)_{L_{2}}\geq C \|f\|^{2}_{\mathfrak{H}^{n}_{A}};\, |(\mathcal{T}f,g)_{L_{2}}|\leq C \|f\|_{\mathfrak{H}^{n}_{A}}\|g\|_{\mathfrak{H}^{n}_{A}},\;f,g\in C_{0}^{\infty}(\Omega), $$ where $G_{i}$ are some operators corresponding to the operators $A_{i}.$ \end{teo} \begin{proof} It is easy to prove that \begin{equation}\label{11} \|A_{i}f\|_{L_{2}}\leq C\|f\|_{H_{0}^{1}},\,f\in H_{0}^{1}(\Omega); \end{equation} for this purpose we should use the representation $A_{i}f(Q) =-(\nabla f ,\mathbf{e_{i}})_{\mathbb{E}^{n}},\, f\in C^{\infty}_{0}(\Omega).$ Applying the Cauchy--Schwarz inequality, we get $$ \|A_{i}f\|_{L_{2}}\leq\left\{\int\limits_{\Omega}|(\nabla f ,\mathbf{e_{i}})_{\mathbb{E}^{n}}|^{2}dQ\right\}^{1/2}\leq \left\{\int\limits_{\Omega}\|\nabla f\|^{2}_{\mathbb{E}^{n}} \|\mathbf{e_{i}}\|^{2}_{\mathbb{E}^{n}} dQ\right\}^{1/2}=\|f\|_{ H_{0}^{1}},\,f\in 
C^{\infty}_{0}(\Omega). $$ Passing to the limit on the left-hand and right-hand sides, we obtain \eqref{11}. Thus, we get $H_{0}^{1}(\Omega) \subset \mathrm{D}(A_{i}).$ Let us find a representation for the operator $G_{i}.$ Consider the operators $$ B_{i}f(Q)=\!\int_{0}^{r}\!\!f(P_{i}+\mathbf{e }[r-t])dt,\,f\in L_{2}(\Omega),\,i=1,2,...,n. $$ It is obvious that \begin{equation}\label{12} \int\limits_{\Omega } A_{i}\left(B_{i} \mathcal{T} f \cdot g\right) dQ =\int\limits_{\Omega } A_{i}B_{i}\mathcal{T}f \cdot g\, dQ +\int\limits_{\Omega } B_{i} \mathcal{T} f \cdot A_{i}g \,dQ ,\, f\in C^{2}(\bar{\Omega}),\,g\in C^{\infty}_{0}( \Omega ). \end{equation} Using the divergence theorem, we get \begin{equation}\label{13} \int\limits_{\Omega } A_{i}\left(B_{i}\mathcal{T}f \cdot g\right) \,dQ=\int\limits_{S}(\mathbf{e_{i}},\mathbf{n})_{\mathbb{E}^{n}}(B_{i}\mathcal{T}f\cdot g)(\sigma)d\sigma, \end{equation} where $S$ is the surface of $\Omega.$ Taking into account that $ g(S)=0$ and combining \eqref{12},\eqref{13}, we get \begin{equation}\label{14} -\int\limits_{\Omega } A_{i}B_{i}\mathcal{T} f\cdot \bar{g} \, dQ= \int\limits_{\Omega } B_{i}\mathcal{T} f\cdot \overline{A_{i} g} \, dQ,\, f\in C^{2}(\bar{\Omega}),\,g\in C^{\infty}_{0}( \Omega ). \end{equation} Suppose that $f\in H^{2}(\Omega);$ then there exists a sequence $\{f_{n}\}_{1}^{\infty}\subset C^{2}(\bar{\Omega})$ such that $ f_{n}\stackrel{ H^{2}}{\longrightarrow} f$ (see \cite[p.346]{firstab_lit:Smirnov5}). Using this fact, it is not hard to prove that $\mathcal{T}f_{n}\stackrel{L_{2}}{\longrightarrow} \mathcal{T}f.$ Therefore $A_{i}B_{i}\mathcal{T}f_{n}\stackrel{L_{2}}{\longrightarrow} \mathcal{T}f,$ since $A_{i}B_{i}\mathcal{T}f_{n}=\mathcal{T}f_{n}.$ It is also clear that $B_{i}\mathcal{T}f_{n}\stackrel{L_{2}}{\longrightarrow} B_{i}\mathcal{T}f,$ since $B_{i}$ is continuous (see the proof of Lemma \ref{L2}). 
Using these facts, we can extend relation \eqref{14} to the following: \begin{equation}\label{15} -\int\limits_{\Omega } \mathcal{T} f \cdot \bar{g} \, dQ= \int\limits_{\Omega } B_{i}\mathcal{T}f\, \overline{A_{i}g} \, dQ,\; f\in \mathrm{D}(\mathcal{T}),\,g\in C_{0}^{\infty}(\Omega). \end{equation} Note that it was previously proved that $A_{i}^{-1} \subset B_{i}$ (see the proof of Lemma \ref{L2}) and that $H_{0}^{1}(\Omega) \subset \mathrm{D}(A_{i}).$ Hence $G_{i} A_{i}f=B_{i}\mathcal{T} f,\, f\in \mathrm{D}(\mathcal{T}),$ where $G_{i}:=B_{i}\mathcal{T}B_{i}.$ Using this fact, we can rewrite relation \eqref{15} in the form \begin{equation}\label{16} -\int\limits_{\Omega } \mathcal{T} f \cdot \bar{g} \, dQ= \int\limits_{\Omega } G_{i} A_{i}f\, \overline{A_{i}g} \, dQ,\; f\in \mathrm{D}(\mathcal{T}),\,g\in C_{0}^{\infty}(\Omega). \end{equation} Note that, in accordance with Lemma \ref{L2}, we have $$ \forall g\in \mathrm{D}(A_{i}),\,\exists \{g_{n}\}_{1}^{\infty}\subset C^{\infty}_{0}( \Omega ),\, g_{n}\xrightarrow[ A_{i} ]{}g. $$ Therefore, we can extend relation \eqref{16} to the following: \begin{equation}\label{17} -\int\limits_{\Omega } \mathcal{T} f \cdot \bar{g} \, dQ= \int\limits_{\Omega } G_{i} A_{i}f \, \overline{A_{i}g} \, dQ,\; f\in \mathrm{D}(\mathcal{T}),\,g\in \mathrm{D}(A_{i}). \end{equation} Relation \eqref{17} indicates that $G_{i} A_{i}f\in \mathrm{D}( A_{i} ^{\ast}),$ and it is clear that $ -\mathcal{T}\subset A_{i} ^{\ast}G_{i}A_{i}.$ On the other hand, in accordance with Chapter VI, Theorem 1.2 \cite{firstab_lit: Berezansk1968}, the operator $-\mathcal{T}$ is closed. Using the divergence theorem, we get $$ - \int\limits_{\Omega}D_{j} ( a^{ij} D_{i}f) \bar{g} dQ= \int\limits_{\Omega} a^{ij} D_{i}f\, \overline{D_{j}g} dQ,\,f\in C^{2}(\Omega),\,g\in C^{\infty}_{0}(\Omega). 
$$ Passing to the limit on the left- and right-hand sides of the last equality, we can extend it to the following $$ - \int\limits_{\Omega}D_{j} ( a^{ij} D_{i}f)\, \bar{g} dQ= \int\limits_{\Omega} a^{ij} D_{i}f\, \overline{D_{j}g} dQ,\,f\in H^{2}(\Omega),\,g\in H^{1}_{0}(\Omega). $$ Therefore, using the uniform ellipticity of the operator $-\mathcal{T},$ we get \begin{equation}\label{18} -\mathrm{Re} \left(\mathcal{T}f, f \right)_{L_{2}} \geq \gamma_{a}\int\limits_{\Omega} \sum\limits_{i=1}^{n} | D_{i}f |^{2} \, dQ= \gamma_{a}\|f\|^{2}_{H_{0}^{1}},\,f\in \mathrm{D}(\mathcal{T}). \end{equation} Using the Poincar\'{e}-Friedrichs inequality, we get $ -\mathrm{Re} \left(\mathcal{T}f, f \right)_{L_{2}}\geq C\|f\|^{2}_{L_{2}},\,f\in \mathrm{D}(\mathcal{T}). $ Applying the Cauchy-Schwarz inequality to the left-hand side, we can easily deduce that the conditions of Lemma \ref{L1} are satisfied. Thus, the operator $-\mathcal{T}$ is m-accretive. In particular, this means that there does not exist a proper accretive extension of the operator $-\mathcal{T}.$ Let us prove that $A_{i} ^{\ast}G_{i}A_{i}$ is accretive; for this purpose, combining \eqref{16} and \eqref{18}, we get $ \left(G_{i} A_{i}f , A_{i}f \right)_{L_{2}} \geq 0,\, f \in C_{0}^{\infty} (\Omega). $ Due to the relation $\tilde{A}_{0}=A,$ proved in Lemma \ref{L2}, the previous inequality can easily be extended to $ \left(G_{i} A_{i}f , A_{i}f \right)_{L_{2}} \geq 0,\, f \in \mathrm{D}(G_{i} A_{i}). $ In turn, this implies that $ \left( A^{\ast}_{i}G_{i} A_{i}f ,f \right)_{L_{2}} \geq 0,\, f \in \mathrm{D}(A^{\ast}_{i}G_{i} A_{i}), $ and thus we have obtained the desired result. Therefore, taking into account the facts given above, we deduce that $-\mathcal{T}= A_{i} ^{\ast}G_{i}A_{i},\,i=1,2,\ldots,n,$ and obtain \eqref{10}.
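As a side illustration of the coercivity estimate \eqref{18}, consider the simplest special case $a^{ij}=\delta_{ij}$, $\Omega=(0,1)$, where $-\mathcal{T}$ reduces to the one-dimensional Dirichlet Laplacian. The following numerical sketch (our own illustration, not part of the proof) checks that a finite-difference discretization is accretive, with smallest Rayleigh quotient near the Poincar\'{e}--Friedrichs constant $\pi^{2}$.

```python
import numpy as np

# Discretize -d^2/dx^2 on (0,1) with Dirichlet boundary conditions:
# tridiagonal matrix (-1, 2, -1)/h^2 acting on n interior grid points.
n = 200
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# The smallest eigenvalue approximates pi^2, the Poincare-Friedrichs
# constant of the unit interval, so (-T f, f) >= C ||f||^2 with C > 0
# and the discretized operator is accretive.
lam_min = np.linalg.eigvalsh(A)[0]
print(lam_min)  # close to pi^2
```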
Applying the Cauchy-Schwarz inequality to the inner sums, then using Corollary \ref{C1}, we obtain \begin{equation*} \left|\int\limits_{\Omega } \mathcal{T} f \cdot \bar{g} \, dQ\right|=\left|\int\limits_{\Omega} a^{ij} D_{i}f\, \overline{D_{j}g}\, dQ\right|\leq a_{1} \int\limits_{\Omega} \|\nabla f\| _{\mathbb{E}^{n}}\, \|\nabla g\| _{\mathbb{E}^{n}} dQ \leq a_{1}\|f\|_{H_{0}^{1}}\|g\|_{H_{0}^{1}}\leq C\|f\|_{\mathfrak{H}^{n}_{A}}\|g\| _{\mathfrak{H}^{n}_{A}} ,\; f,g\in C_{0}^{\infty}(\Omega), \end{equation*} where \begin{equation*} a_{1}=\sup\limits_{Q\in \bar{\Omega}} \sqrt{\sum\limits_{i,j=1}^{n} |a^{ij}(Q)|^{2}}. \end{equation*} On the other hand, applying \eqref{11} and \eqref{18}, we get \begin{equation*} -\mathrm{Re}(\mathcal{T} f,f) \geq C\|f\|^{2}_{\mathfrak{H}^{n}_{A}},\,f\in C^{\infty}_{0}(\Omega). \end{equation*} The proof is complete. \end{proof} Thus, by virtue of Corollary \ref{C1} and Theorem \ref{T2}, we are able to claim that theorems $(\mathbf{A})-(\mathbf{C})$ of \cite{kukushkin Gen} can be applied to the operator $-\mathcal{T}.$ \section{Conclusions} In this paper we have established a norm equivalence in the Lebesgue space, which gives us an opportunity to reveal more fully the true mathematical nature of a differential operator. As a consequence of this equivalence, we obtain a compact embedding of the space generated by the infinitesimal generator of the shift semigroup in a direction into the Lebesgue space. The considered particular case corresponds to a uniformly elliptic operator which is not selfadjoint, under minimal assumptions on its coefficients. Thus, the opportunity to apply spectral theorems in a natural way becomes relevant, since there are few results devoted to the spectral properties of non-selfadjoint operators. Along with all this, by virtue of the popularity and well-known applicability of Lebesgue space theory, the norm equivalence result deserves to be considered in its own right.
\section{Hyperparameters} \label{app:hparams} \paragraph{MANGO} To save computational resources during \textbf{candidates selection}, we use a dynamic number of candidates $m$. We rescale the candidate scores $s_k$ to $[0, 1]$ and take at most $M=5$ candidates whose scores differ from the best score by at most a threshold $T=0.5$: $s_k \geq \max_j s_j - T$. We use $\lambda_{prob}=0.5$ in \Cref{eq:score}. \paragraph{White-Box Attacks} MANGO, Naive MANGO and GBDA use the loss function in \Cref{eq:loss} with the same parameters $\lambda_s=20$, $\lambda_f=1$, $\kappa=5$ (taken from \citet{gbda}) for all tasks, except Yelp, where they use $\lambda_s=10$. As the reference model $g$, we used the GPT-2 model downloaded from the official GBDA repository. We set $C=10$ for the initialization of the adversarial sample parameters. The number of optimization epochs is $S=100$ for all models, and the batch size in GBDA was set to 10. \paragraph{Black-Box Attacks} We take the TextFooler, BertAttack, and BAE implementations from TextAttack \cite{textattack} along with their original parameters. For a fair comparison, we set the USE similarity threshold to the lowest value (0.2) used among these methods. Following the GBDA paper, we slightly modify the BertAttack method to mitigate its problems with subtokens and extremely long attack times.
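For concreteness, the candidate selection rule above can be sketched in a few lines. This is our own minimal illustration (the helper name `select_candidates` is hypothetical, not part of the MANGO code), assuming the raw candidate scores are already computed.

```python
def select_candidates(scores, M=5, T=0.5):
    """Rescale scores to [0, 1] and keep at most M candidate indices
    whose rescaled score s_k satisfies s_k >= max_j s_j - T."""
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0  # all-equal scores: keep them all
    rescaled = [(s - lo) / span for s in scores]
    best = max(rescaled)
    kept = [k for k, s in enumerate(rescaled) if s >= best - T]
    # prefer the highest-scoring candidates when truncating to M
    kept.sort(key=lambda k: rescaled[k], reverse=True)
    return kept[:M]

# e.g. four candidates, two survive with M=2
select_candidates([0.1, 0.9, 0.5, 0.85], M=2)  # -> [1, 3]
```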
\section{Comparison Fairness} \label{app:biases} When comparing the results of optimization-based (MANGO, GBDA, Naive MANGO) and black-box methods (TextFooler, Bert-Attack, BAE), we should note that black-box methods stop perturbing text as soon as they fool the model, while optimization-based attacks minimize an adversarial loss (which encourages them to fool the model by some margin) for some fixed number of steps. The former improves similarity metrics (USE sim., BERTScore), while the latter strongly decreases the model's prediction on ground-truth labels (Adv. prob.), increasing the difficulty of the generated samples. Therefore, we believe that training accuracy under attack (Adv.) is the fairest metric for a direct comparison between optimization-based and classic black-box methods. \section{Attack Examples} \label{app:attack-examples} To draw some insights into MANGO's performance, we compared examples generated by BAE, GBDA and MANGO. We chose all the sentences from AG News and MNLI hypothesis that were successfully perturbed by the three considered methods and on which the methods obtained a USE cosine similarity score greater than 0.9. We then sampled two sentences from the AG News task and two from the MNLI hypothesis task. To avoid cherry-picking, we fixed a seed and sampled only once. Examples can be found in \cref{tab:attack-examples-ag-news} and in \cref{tab:attack-examples-mnli}. We are careful in drawing any conclusions from the qualitative results; however, there seems to be a trend consistent with the results from \cref{tab:all-results} and our observations from \cref{app:biases}: BAE perturbs fewer words than GBDA and MANGO, but also achieves lower confidence of the misclassified label. \begin{table*}[t!]
\centering \begin{tabular}{lcc} \toprule Method & Prediction & Sentence \\ \midrule \multicolumn{3}{c}{\textbf{AG News - Example no 1.}} \\ \midrule Original & world (100\%) & \multicolumn{1}{p{10cm}}{air india trial witness said motivated by revenge ( reuters ) reuters - a desire for revenge motivated a prosecution witness to tell the air india bombing trial he had been asked to carry an mysterious suitcase on to an airliner, defense lawyers charged on wednesday.} \\ \midrule BAE & sci/tech (61\%) & \multicolumn{1}{p{10cm}}{air india trial witness said motivated by revenge ( reuters ) \underline{website} - a desire for revenge motivated a prosecution witness to tell the air india \underline{company} \underline{s} he had been asked to carry an mysterious suitcase on to an \underline{account}, defense lawyers charged on wednesday.} \\ \midrule GBDA & business (99\%) & \multicolumn{1}{p{10cm}}{air india trial witness said motivated by revenge - \underline{today} \underline{investigative} \underline{reuters reporting} a desire for revenge motivated criminal prosecution \underline{witnesses} to tell the air \underline{canada} \underline{strike} trial he had been asked to carry an mysterious suitcase on to an airliner, defense lawyers charged on \underline{tuesday}.} \\ \midrule MANGO & business (100\%) & \multicolumn{1}{p{10cm}}{air indies trial witness said motivated by revenge ( reuters ) \underline{time} - a desire for revenge motivated a prosecution witness to tell the air \underline{america} \underline{arson} trial he had been asked to carry a mysterious suitcase on to an airliner, defense lawyers charged on \underline{monday}.} \\ \midrule \multicolumn{3}{c}{\textbf{AG News - Example no 2.}} \\ \midrule Original & business (91\%) & \multicolumn{1}{p{10cm}}{brazil passes bankruptcy reform brazilian congress gives the green light to a long awaited overhaul of bankruptcy laws, which it hopes will reduce business and credit costs. 
} \\ \midrule BAE & sci/tech (95\%) & \multicolumn{1}{p{10cm}}{brazil passes bankruptcy reform brazilian congress gives the green light to a long awaited overhaul of \underline{copyright} laws, which it hopes will reduce business and credit costs.} \\ \midrule GBDA & world (95\%) & \multicolumn{1}{p{10cm}}{ brazil passes bankruptcy reform brazilian congress gives the green light to a long awaited overhaul of \underline{privacy} laws, which it \underline{aims} will reduce \underline{tourism} and \underline{population} \underline{impacts}.} \\ \midrule MANGO & world (99\%) & \multicolumn{1}{p{10cm}}{brazil passes \underline{golf} reform brazilian congress gives the green light to a long awaited overhaul of \underline{elections} laws, which it hopes will reduce \underline{spending} and \underline{maintenance} costs.} \\ \bottomrule \end{tabular} \caption{\label{tab:attack-examples-ag-news} Attack examples sampled from the AG News dataset.} \end{table*} \begin{table*}[t!] \centering \begin{tabular}{lcc} \toprule Method & Prediction & Sentence \\ \midrule \multicolumn{3}{c}{\textbf{MNLI hypothesis - Example no 1.}} \\ \midrule Original & contradiction (96\%) & \multicolumn{1}{p{10cm}}{\textbf{premise}: the houses are built to a long - standing design and are filled with embroidery, lace, and crochet work.
\; \; \; \; \; \; \; \; \; \textbf{hypothesis}: there is no embroidery in the houses.} \\ \midrule BAE & neutral (45\%) & \multicolumn{1}{p{10cm}}{\textbf{hypothesis:} there is no \underline{fire} in the houses.} \\ \midrule GBDA & neutral (100\%) & \multicolumn{1}{p{10cm}}{\textbf{hypothesis:} there is \underline{liturgical} embroidery in the houses.} \\ \midrule MANGO & neutral (99\%) & \multicolumn{1}{p{10cm}}{\textbf{hypothesis:} there is no \underline{erosion} in the \underline{ruins}.} \\ \midrule \multicolumn{3}{c}{\textbf{MNLI hypothesis - Example no 2.}} \\ \midrule Original & contradiction (100\%) & \multicolumn{1}{p{10cm}}{\textbf{premise}: whether the service emerges as an adaptation from primary care or as an innovation from the ed is less important than whether it can be evaluated to the satisfaction of those who make key decisions about whether it becomes part of standard practice. \textbf{hypothesis:} key decision makers are not important to decided things.} \\ \midrule BAE & neutral (96\%) & \multicolumn{1}{p{10cm}}{\textbf{hypothesis:} \underline{consensus} decision makers are not important to \underline{first} \underline{things}.} \\ \midrule GBDA & neutral (98\%) & \multicolumn{1}{p{10cm}}{\textbf{hypothesis:} key decision makers are \underline{noted} \underline{fairchild} – \underline{emery} \underline{associates}.} \\ \midrule MANGO & neutral (99\%) & \multicolumn{1}{p{10cm}}{\textbf{hypothesis:} \underline{older} \underline{ahlers} are \underline{also} important \underline{in} \underline{this} \underline{regard}.} \\ \bottomrule \end{tabular} \caption{\label{tab:attack-examples-mnli} Attack examples sampled from MNLI hypothesis task.} \end{table*} \section{Gray MANGO} \label{app:black_mango} To circumvent the white-box nature of MANGO attack, we additionally develop Gray MANGO: a version of MANGO that can be used in the loosened black-box setting, which we call gray-box setting. 
\paragraph{Gray-Box Setting} Gray MANGO is not strictly a black-box attack, as it requires the attacked model to take probability vectors and needs access to the token vocabulary $V$. Transformer-based models satisfy these assumptions: they usually share the same $V$, and their embedding function $e$ can be used for both one-hot and probability vectors. However, to avoid misconceptions, we call this loosened black-box setting a gray-box setting. \paragraph{Zeroth-Order Optimization} Gray MANGO is based on Zeroth-Order Optimization (ZOO) \cite{zoo}. The idea of ZOO is to approximate the gradient using only zeroth-order loss values. In computer vision, \citet{zoo_images} developed a ZOO-based attack that significantly outperforms other black-box attacks. We believe that this success can be transferred to the NLP domain. \citet{dont-search} have proposed an NLP attack that uses a discrete version of ZOO, but the results were unsatisfactory. Our Gray MANGO method is the first to successfully adapt the continuous version of ZOO in NLP attacks. \paragraph{Formulation} The main modification with respect to MANGO is the use of a zeroth-order approximation of the gradient $\nabla_{\Theta'}\mathcal{L}(x')$ \cite{zoo_many}: \begin{equation*} \widetilde{\nabla}_{\Theta'} \mathcal{L}(x') = \frac{1}{K}\sum_{i=1}^{K} \frac{\mathcal{L}(\sigma(\Theta' + \mu u_i)) - \mathcal{L}(x')}{\mu}u_i, \end{equation*} where $u_i$ is noise sampled from the normal distribution, $\mu$ is a scale factor and $\sigma(\Theta' + \mu u_i)$ is $x'$ with noise $\mu u_i$ added to its parameters $\Theta'$. As $\widetilde{\nabla}_{\Theta'}\mathcal{L}(x')$ is unstable, we set $\lambda_{prob}=1$ and use the AMSGrad variant of Adam \cite{zoo_adamm} without resetting it after every quantization.
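A minimal NumPy sketch of this estimator may help. It illustrates the generic ZOO scheme rather than the actual Gray MANGO implementation, and `loss` stands in for the composite $\mathcal{L}(\sigma(\cdot))$.

```python
import numpy as np

def zoo_gradient(loss, theta, mu=0.1, K=10, rng=None):
    """Forward-difference zeroth-order gradient estimate: average of
    (loss(theta + mu*u_i) - loss(theta)) / mu * u_i over K Gaussian
    directions u_i."""
    rng = rng or np.random.default_rng(0)
    base = loss(theta)
    grad = np.zeros_like(theta)
    for _ in range(K):
        u = rng.standard_normal(theta.shape)
        grad += (loss(theta + mu * u) - base) / mu * u
    return grad / K

# Sanity check on a quadratic, whose true gradient is 2 * theta;
# the estimate is noisy, so K must be large for a close match.
theta = np.array([1.0, -2.0, 0.5])
g = zoo_gradient(lambda t: np.sum(t ** 2), theta, mu=1e-3, K=2000)
```

In the attack, each such estimate would then be fed to the AMSGrad update of $\Theta'$ rather than used directly.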
To reduce the high dimensionality of $x'$, which is an issue in ZOO \cite{high_dimension}, we disallow replacing the original token with tokens whose GloVe \cite{glove} embedding has a cosine similarity with the original below 0. \paragraph{Hyperparameters} We use almost the same parameters as for MANGO (see \cref{app:hparams}), but with $\lambda_{prob} = 1$, $S=140$ and $\lambda_{s}=80$. To save computational resources, we set $S=100$ for the IMDB and Yelp datasets. Based on a small grid search, we set the noise scaling parameter $\mu=0.1$. \paragraph{Results} We evaluated the Gray MANGO method and compared it to vanilla MANGO. Results can be found in \cref{tab:black_results}. Gray MANGO, which is the first method to incorporate continuous ZOO in an NLP attack, performs competitively with other black-box attacks in terms of training accuracy reduction, but struggles to keep adversarial examples similar to the original texts. We believe that the performance of Gray MANGO may be greatly improved by a more careful design of the ZOO components \cite{zoo_many}. This may be an interesting topic for future research. \subfile{tables/black_mango} \section{Introduction} \subfile{content/introduction} \section{Related Work} \subfile{content/background} \section{MANGO} \label{sec:mango} \subfile{content/mango} \subfile{content/tables/all_results} \section{Experiments} \label{sec:experiments} \subfile{content/experiments} \section{Ablation Study} \label{app:entropy} \subfile{content/ablations} \section{Visualization of Quantization Gap} \label{app:losses} \subfile{content/visualization} \section{Conclusion} \subfile{content/conclusion} \section*{Limitations} \subfile{content/limitations} \section*{Acknowledgements} The work of Klaudia Bałazy was carried out within the research project "Bio-inspired artificial neural network" (grant no.
POIR.04.04.00-00-14DE/18-00) within the Team-Net program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. Piotr Gaiński and Klaudia Bałazy are affiliated with Doctoral School of Exact and Natural Sciences at the Jagiellonian University.
\section*{Introduction} The discovery of topological insulators with bulk band gaps but robust topological surface states protected by time reversal symmetry triggered much research into topological phases of matter \cite{zhang2009topological,chen2009experimental}. Topological insulators are bulk insulators yet have conducting surface states as a consequence of the topologically non-trivial band structure, where a Dirac cone lies within the bulk gap \cite{Hre,TSCTI}. Subsequently, a number of semimetals with non-trivial topologies have also been discovered. In some cases such as Dirac and Weyl semimetals, the electronic structure is gapless and contains Dirac or Weyl points at the positions where the bands cross with linear dispersions and the excitations are well described as Dirac or Weyl fermions \cite{Liu,xiong2015evidence,borisenko2014experimental,CavaCd3As2,Weyl1,Weyl2,lv2015experimental}. Meanwhile in other topological semimetals, the band structure contains similar band inversion features to topological insulators, but the Fermi level does not lie within the band gap \cite{chadov2010tunable,liu2016observation}. The role of electronic correlations in topological systems has also become of particular interest after the proposal that SmB$_6$ is a topological Kondo insulator, where the surface state lies within the gap opened due to the Kondo interaction \cite{Dzero2010,kim2013surface,wolgast2013}. One striking feature is the presence of quantum oscillations in SmB$_6$ from de Haas-van Alphen effect (dHvA) measurements \cite{Li1208,tan2015unconventional}, despite the material being a bulk insulator at low temperatures, stimulating debate over whether these arise from the surface \cite{Li1208,erten2016kondo}, or if this is a bulk property \cite{tan2015unconventional,knolle2017excitons}. Furthermore, Tan \textit{et al}. 
found that the dHvA amplitude shows an anomalous increase at low temperatures below 1~K, and the origin of this highly unusual deviation is currently not resolved \cite{tan2015unconventional}. Recently, the $X$(Sb,Bi)($X$~=~lanthanide) series of semimetals with a cubic rocksalt structure have attracted considerable interest. These materials are commonly found to have an extremely large magnetoresistance (XMR), generally attributed to the nearly perfect compensation of electron and hole carriers \cite{LaSbCava,Cavatwo,LaBiPRB,he2016distinct,2016arXiv161102927G,NdSbPRB,YSbSr}. Meanwhile, non-trivial band topologies have been theoretically predicted \cite{zeng2015topological,calculation}, and evidence for topological states is found experimentally for some compounds from ARPES and transport measurements \cite{CeSbARPES,2016arXiv161102927G,YSbSr,NdSbIOP,Nayak2017Multiple,LaBiARPES,LaXFDL,XBiARPES}, while other members appear to be topologically trivial \cite{DHLaSb,he2016distinct,wu2017extremely}. In LaSb, it was suggested that the low-temperature plateau in the temperature dependence of the resistivity in applied magnetic fields was due to a topologically protected surface state, similar to that observed in SmB$ _6 $ \cite{LaSbCava}. However, ARPES measurements show conflicting results over whether the band topology is trivial \cite{DHLaSb,LaXFDL}, while the resistivity plateau can be explained by a three band model with nearly perfect electron-hole compensation \cite{DHLaSb}. Furthermore, by substituting for lanthanide elements with partially filled $4f$ electron shells, the effect of tuning the spin-orbit coupling, magnetism and electronic correlations can be studied. For example, CeSb shows evidence for Weyl fermions in the field-induced ferromagnetic state from both theoretical calculations and the measurement of a negative longitudinal magnetoresistance \cite{2016arXiv161102927G}. 
NdSb has also been proposed to be a topological semimetal from the observation of a Dirac-like semi-metal phase using ARPES, as well as from band structure calculations and analysis of the Landau indices \cite{NdSbIOP,NdSbHField}. In this work, we report magnetoresistance and quantum oscillation measurements of the antiferromagnet SmSb \cite{mullen1974}. Based on the analysis of Shubnikov-de Haas (SdH) oscillations, we found that the phase of all the harmonic oscillations corresponding to the $\alpha$-band is close to $\pi$, while for the other bands it is almost zero. These results provide evidence that the band topology of the $\alpha$-band is non-trivial. Moreover, striking differences are found between the temperature dependences of the quantum oscillation amplitudes from SdH and dHvA measurements. The amplitudes of the SdH oscillations show anomalous behavior at low temperatures, which may arise due to the presence of multiple Fermi surface sheets or a conduction channel related to the topological state. \section*{Results and Discussion} The temperature dependence of the electrical resistivity [$\rho(T)$] of SmSb is displayed in Fig.~1 \textbf{a}, where the data up to 300~K is shown in the inset. The $\rho(T)$ data show a small hump at around 65~K, which is explained as resulting from the splitting of the crystalline electric fields \cite{beeken1978intermediate}, as well as a sharp drop of $\rho(T)$ at the antiferromagnetic transition at $T_N=2.2$~K \cite{hulliger1978low}. While $T_N$ changes very little in applied magnetic fields \cite{ozeki1991haas}, an upturn appears in $\rho(T)$ at low temperatures, which becomes more pronounced as the field is increased. As shown in Fig.~1\textbf{a} and 1\textbf{b}, there is a strong increase of $\rho(T)$ in-field below $T_N$, before reaching a nearly constant value below around 1~K. 
Figure~1\textbf{c} shows the field dependence of the magnetoresistance [$\rho(H)/\rho(0)$] of SmSb up to 15~T at various temperatures down to 0.3~K. At 0.3~K $\rho(H)/\rho(0)=74000$, which decreases rapidly as the temperature is increased. Figure~1\textbf{d} displays measurements performed up to higher fields at 1.8~K, where the magnetoresistance increases quadratically to about 5558 at 60~T. The magnetoresistance of SmSb is one of the largest observed among the $X$Sb family of compounds \cite{DHLaSb,NdSbHField,2016arXiv161102927G,ye2017extreme,wu2017extremely}, and the quadratic field dependence suggests that the XMR most likely results from electron-hole carrier compensation \cite{PippardMR}. In this case, the increase of mobility below $T_N$ may lead to both the small low temperature values of $\rho(T)$ in zero-field, as well as the very large magnetoresistance and low-temperature plateau of $\rho(T)$ in high fields. While this low temperature plateau has also been observed in other $X$Sb materials \cite{LaSbCava,2016arXiv161102927G,YSbSr,NdSbPRB}, the appearance of the plateau at a lower temperature in SmSb is likely due to the comparatively low $T_N$. To examine the topology of the electronic structure of SmSb, we also analyzed the SdH oscillations in the resistivity for the two samples displayed in Fig.~1\textbf{d}. From the fast Fourier transformation analysis (see Supplementary Figure 1), the most prominent oscillation frequencies consist of three fundamental frequencies ($F_{\alpha}$, $F_{\beta}$ and $F_{\gamma}$), and two harmonic frequencies ($F_{2\alpha}$ and $F_{3\alpha}$). In the following, the data are analyzed with five oscillations, corresponding to these frequencies. Note that for the data measured in a pulsed magnetic field, the data measured at 15~K were analyzed, so as to avoid complications arising from strong harmonic oscillations. 
To obtain the oscillation phase factors ($\phi^i$) corresponding to each fundamental frequency, we have used Lifshitz-Kosevich (LK) theory to fit the data \cite{PRXAlex}: \begin{eqnarray} \Delta\rho/\rho_0 = \sum_{i=1}^{3}\sum_{r}a_{i,r}\sqrt{B/r}R_T^{i,r}R_D^{i,r}R_S^{i,r}{\rm cos}(2\pi r(F_i/B-1/2)-\phi^i). \label{equation1} \end{eqnarray} Here $a_{i,r}$ is a constant corresponding to the $r$th harmonic of the $i$th band, $\phi^i$ is the corresponding phase factor, and $R_T$, $R_D$ and $R_S$ are the temperature, Dingle and spin damping factors, respectively. These are given by $ R_T^{i,r}=(14.69rm_iT/B)/{\rm sinh}[(14.69rm_iT/B)] $, $ R_D^{i,r}={\rm exp} (-14.69rm_iT_D/B) $, and $ R_S^{i,r}={\rm cos}(\pi grm_i/m_e) $, where $m_i$ denotes the cyclotron mass for each band, $m_e$ is the free electron mass and $T_D$ is the Dingle temperature. The data were fitted using Eq.~\ref{equation1}, and it can be seen in Fig.~2 that this can well describe the SdH oscillations measured both in a static field up to 35~T (S2) and in a pulsed field up to 60~T (S6). Note that since the data are analyzed for a fixed temperature and field angle, $R^{i,r}_{S}$ is taken to be a constant value. Here $F_i$ for the five components were fixed to the values from the FFT analysis, and the $m_i$ for the $\alpha$- and $\beta$-bands were obtained from the temperature dependence of the oscillation amplitudes from dHvA measurements (see Fig.~5). On the other hand, it is difficult to obtain $m_i$ from analyzing the temperature dependence of oscillations corresponding to the $\gamma$-band, and therefore this was a fitted parameter. In addition, $T_D$ and $\phi^i$ for each band were also fitted parameters in the analysis, and the results are displayed in Supplementary Table I. The phase factor corresponding to the $\alpha$-band from both samples is 1.16$\pi$.
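For reference, the damping factors defined above can be evaluated directly. The sketch below is our own illustration using the quoted numerical prefactor 14.69 and an assumed $g=2$, with parameter values that are indicative rather than the fitted SmSb ones.

```python
import numpy as np

def lk_damping(T, B, m_ratio, T_D, r=1, g=2.0):
    """Temperature, Dingle and spin damping factors of the r-th
    harmonic in the Lifshitz-Kosevich formula; m_ratio = m_i / m_e.
    The value g = 2 is an assumption, not a fitted parameter."""
    x = 14.69 * r * m_ratio * T / B
    R_T = x / np.sinh(x)                          # -> 1 as T -> 0
    R_D = np.exp(-14.69 * r * m_ratio * T_D / B)  # disorder damping
    R_S = np.cos(np.pi * g * r * m_ratio)         # spin damping
    return R_T, R_D, R_S

# illustrative numbers: m ~ 0.26 m_e, B = 10 T, T_D = 5 K
R_T, R_D, R_S = lk_damping(T=2.0, B=10.0, m_ratio=0.26, T_D=5.0)
```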
Note that the phase is given by $\phi^i = \phi_D^i+\lambda^i$, where $\phi_D$ corresponds to the LK correction, and $\lambda$ is the sum of geometric and dynamical phases, which includes the Berry phase, the phase factor resulting from the orbital magnetic moment, and the Zeeman coupling phase factor \cite{PRXAlex}. For a two-dimensional cylinder-like Fermi surface, $\phi_D$ is zero, while for a three-dimensional Fermi surface pocket, $\phi_D$ is $\pi/4$ or $-\pi/4$ for oscillations corresponding to the minimum or maximum Fermi surface cross sections, respectively. Since the oscillation frequency is lowest when the field is applied along the [100] direction (see Supplementary Figure 2), this suggests that $\phi_D=\pi/4$, and hence $\lambda=0.91\pi$. This is close to the value of $\pi$ indicating a $\pi$ Berry phase, as expected for a topologically non-trivial electronic band \cite{mikitik1999manifestation,PhysRevLett93166402,shoenberg2009magnetic}, while the small deviation from $\pi$ suggests the possible influence of dynamical phases \cite{PRXAlex}. On the other hand, for fundamental oscillations corresponding to the $\beta$-band, values of $\lambda=-0.15\pi$ and $0.01\pi$ are obtained for samples S2 and S6, respectively, while the values for the $\gamma$-band are $-0.24\pi$ and $-0.17\pi$. These results suggest that both these bands have a near-zero Berry phase, where the influence of dynamical phase factors leads to a small deviation from zero. A Landau fan diagram was also utilized to extrapolate the dominant oscillation phase factor (Fig.~3), where Landau indices were assigned to the positions of the SdH oscillations which correspond to the deepest valleys, for both S2 and S6. The uncertainty of the valley positions is small, which does not significantly affect the results, and the periodicity of these valleys corresponds to that of the $\alpha$-band.
From linearly fitting the Landau index $n$ as a function of $1/B$, the extrapolated residual Landau indices are $n_0=0.53(5)$ for sample S2 and $n_0=0.52(3)$ for sample S6. These agree well with the above conclusions that the phase factor of the $\alpha$-band, which is the strongest frequency component observed in the quantum oscillations, is close to $\pi$. Though these results are consistent with those expected for a non-trivial band topology, in Ref.~\cite{PRXAlex} it was suggested that the phase obtained from quantum oscillation experiments cannot directly probe the Berry phase in three-dimensional centrosymmetric materials. In our measurements, the field is applied along the [100] direction, which corresponds to the $ C_4 $ rotation axis of the cubic crystal structure. Due to the symmetry constraints in this situation (mirror symmetry) \cite{PRXAlex}, the spin degeneracy of each band is preserved, and the overall phase factor will be quantized to either $\pi$ or zero, which is not a linear summation of the phase factors of both spin up and down bands, and thus cannot be used to directly obtain the Berry phase or determine the band topology. Here we note that the same phase factor is found from measurements above and below $ T_ N$. Since the magnetic structure of SmSb has not yet been reported, we are not able to determine if the same symmetry constraints are present in the magnetically ordered state. Electronic structure calculations for the case of zero applied field suggest that SmSb has a trivial band topology but is close to the boundary between trivial and non-trivial \cite{calculation}. However we note that these calculations assume a Sm valence of 3+, while a mixed valence of about 2.75 was reported from x-ray photoemission spectroscopy \cite{campagna1974}. As such it is important to perform further studies to reveal the origin of this nontrivial Berry phase. 
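The Landau-fan extrapolation itself amounts to a straight-line fit of the index $n$ against $1/B$. The schematic below uses synthetic valley positions (illustrative $F$ and $n_0$, not the measured data) to show how the residual index is extracted.

```python
import numpy as np

# Landau quantization gives n = F / B + n0, so the valley indices are
# linear in 1/B and the extrapolated intercept at 1/B -> 0 is the
# residual index n0; a value near 0.5 signals a phase close to pi.
F_true, n0_true = 334.0, 0.5            # illustrative values only
n = np.arange(10, 30, dtype=float)      # assigned Landau indices
inv_B = (n - n0_true) / F_true          # synthetic valley positions

slope, intercept = np.polyfit(inv_B, n, 1)
# slope recovers F; intercept recovers the residual index n0
```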
Evidence for a $\pi$ Berry phase was also found in isostructural LaBi \cite{LaBiIOP}, where both an odd number of band inversions and surface states with an odd number of massless Dirac cones were observed in ARPES measurements \cite{Nayak2017Multiple,LaBiARPES,LaXFDL,XBiARPES}. In addition, we performed field-dependent ac susceptibility measurements down to low temperatures, where the presence of dHvA oscillations is clearly observed. For comparison, SdH and dHvA oscillations are shown for various temperatures down to 0.3~K in Figs.~4\textbf{a} and 4\textbf{b} respectively, after subtracting the background contribution. The respective FFTs are shown in Figs.~4\textbf{c} and 4\textbf{d}, where two principal frequencies are observed, which remain unchanged below $T_N$, similar to previous reports \cite{ozeki1991haas}. The observed values are $F_{\alpha}\approx 334$~T and 328~T, and $F_{\beta}\approx 633$~T and 590~T for SdH and dHvA respectively, where the differences may be due to a small misalignment. The SdH oscillation amplitudes $\Delta\rho$ corresponding to both the $\alpha$ and $\beta$ bands increase significantly with decreasing temperature below 2.5~K, while in dHvA measurements the change is more gradual. Furthermore, the amplitude of the $\alpha$-band oscillations is maximum at around 0.8~K, and upon further decreasing the temperature, the amplitude decreases. Meanwhile, the angular dependence of $F_{\alpha}$ can be fitted with the equation for anisotropic three-dimensional ellipsoidal pockets \cite{3Dellips} (see Supplementary Figure 2), which can describe the anisotropic bullet-like Fermi surface pockets around the X-points, corresponding to the bulk Fermi surface of the $\alpha$-band \cite{calculation,LaBiPRB}. Figures~5\textbf{a} and 5\textbf{b} present the temperature dependence of the oscillation amplitudes from both SdH and dHvA measurements.
Since there is a rapid change of the resistivity with both temperature and field, the SdH oscillation amplitudes are displayed as $\Delta\rho/\rho_0$, so as to normalize by the background values. The temperature dependence of the dHvA amplitudes is well fitted across the whole temperature range (0.3 to 10~K) for both the $\alpha$ and $\beta$ bands with the amplitude $\propto R_T$ (Eq.~1), where the fitted values of the effective cyclotron masses are 0.26~$m_e$ and 0.28~$m_e$, respectively. These results are consistent between measurements of both the ac and dc susceptibility (samples S3 and S5, respectively). However, although the SdH and dHvA data coincide well at higher temperatures, below around 2.5~K there is a significant deviation which cannot be accounted for by the LK formula. Moreover, for the $\alpha$-band, the amplitude reaches a maximum at around 1.6~K, before beginning to decrease. We note that these results are highly repeatable, as shown by the measurements of the two samples displayed in Fig.~5, and when the SdH amplitudes are plotted as $\Delta\rho$, the deviation from LK behavior is even more pronounced (see Supplementary Figure 3). In addition, the Dingle temperature analysis does not show a significant change at low temperatures (Supplementary Figure 4), and as a result this behavior cannot be accounted for by the temperature dependence of $T_D$. One possible explanation for the deviation between dHvA and SdH measurements is that the unusual behavior of the SdH oscillations is related to the topologically non-trivial electronic structure which manifests a $\pi$ Berry phase. We note that dHvA measurements of floating-zone-grown crystals of the proposed Kondo insulator SmB$_6$ reveal a steep increase of the oscillation amplitude below 1~K which cannot be explained by LK theory \cite{tan2015unconventional,Hartstein2017}, while there is also evidence for a non-trivial Berry phase \cite{Li1208}.
Currently, there is considerable debate over whether the quantum oscillations in SmB$_6$ are from the insulating bulk, or the surface \cite{tan2015unconventional,Li1208}. Proposals such as the magnetic breakdown scenario give rise to quantum oscillations in the bulk of an insulator which deviate from LK behavior \cite{knolle2017excitons}, while it has also been suggested that the anomalous increase of dHvA quantum oscillation amplitudes in SmB$_6$ arises due to surface quantum criticality \cite{erten2016kondo}. In contrast to SmB$_6$, the amplitudes of our dHvA measurements of SmSb can be well fitted using the LK expression and the anomalous behavior is only seen in SdH measurements. This may suggest that the anomalous behavior of SmSb arises only from certain conduction channels with high mobility, which do not significantly contribute to dHvA measurements. Metallic edge states were proposed to explain the anomalous quantum oscillations of some charge transfer salts, where the deviation of SdH amplitudes from conventional behavior is not observed in the dHvA \cite{QHEAnom1,QHEAnom2}. However, these systems have highly conducting quasi-two-dimensional planes where the edge states are suggested to arise in the quantum Hall regime, and this is therefore quite a different scenario to that presented by SmSb. Meanwhile, a comparison between SdH and dHvA cannot be made for SmB$_6$, since SdH oscillations in magnetotransport measurements have not been reported. Another possibility is that the deviation between SdH and dHvA measurements arises due to the difficulty in disentangling the contributions to the conductivity from the different FS sheets. 
Although it is often assumed that $\Delta\rho/\rho_0$ is proportional to the density of states and that the temperature dependence is well described by the LK formula, for multiband systems the situation can be more complicated due to the different contributions to the conductivity from each band \cite{shoenberg2009magnetic}. This effect will not influence the dHvA results, which can be well described by the LK formula across the whole temperature range. We note that below around $T_N$, the normalized oscillation amplitude of the $\alpha$-band is anomalously low relative to the dHvA results, while for the $\beta$-band there is an increase. This suggests that with decreasing temperature, there may be changes of the relative contributions to the conductivity from the two bands. Therefore, for systems with multiple FS sheets and large magnetoresistance, analysis of the cyclotron masses using SdH measurements may be difficult. Another question posed by these results is why this anomalous temperature dependence of SdH amplitudes is not observed in other $X$Sb compounds. This may be related to the fact that in other $X$Sb materials, the rapid increase of the magnetoresistance occurs at considerably higher temperatures. For example, in LaSb and YSb this occurs at around 80~K and 100~K, respectively \cite{LaSbCava,YSbNew}, while the SdH oscillations are usually measured below 20~K for these materials, where the magnetoresistance shows relatively little change with temperature. However, as displayed in Fig.~1\textbf{b}, the magnetoresistance of SmSb changes strongly below 2~K, which may influence the oscillation amplitudes at low temperatures. Therefore the origin of the dramatic departure from conventional behavior in the SdH amplitudes is an open question, and in particular since this deviation onsets near $T_N$, the possible relationship between the antiferromagnetic state and the anomalous behavior also needs to be explored.
To summarize, we find evidence for a $\pi$ Berry phase in the $\alpha$-band of SmSb from analyzing high field measurements of SdH oscillations. Furthermore, our quantum oscillation measurements show that the amplitudes of dHvA oscillations are well described by the standard LK formula, while those from the SdH effect show anomalous behavior at low temperatures. The origin of this unusual behavior in SdH oscillations remains unclear, and therefore further studies are required to understand the origin of the Berry phase, as well as any relationship with the anomalous SdH oscillations. \section*{Methods} Single crystals of SmSb were synthesized using an Sn-flux method \cite{P1992Growth}. The elements were combined in a molar ratio Sm:Sb:Sn of 1:1:20 in an Al$_2$O$_3$ crucible before being sealed in evacuated quartz ampoules. These were slowly cooled from 1100$^\circ$C down to 800$^\circ$C, before centrifuging to remove the Sn flux. The resulting crystals are cubic with typical dimensions of around 3~mm. The residual resistivity ratio of $RRR = \rho(300\,{\rm K})/\rho(0.3\,{\rm K})\approx 4000$ indicates a high sample quality. Low temperature resistivity and ac susceptibility measurements were performed between 0.27~K and 9~K using an Oxford Instruments $^3$He system, in applied fields up to 15~T. The resistivity data in this system were measured using a Lakeshore 370 resistance bridge, with a current of 0.3~mA across the whole temperature range. Four Pt wires were attached to the sample by spot welding so that the current was applied along the [100] direction. dHvA results were measured using a commercial susceptometer for ac susceptibility measurements, which consists of three sets of coils: a drive coil, a pick-up coil, and a compensation coil.
Additional resistivity measurements in fields up to 9~T, and dc-susceptibility measurements in fields up to 13~T, were also performed using a Quantum Design Physical Property Measurement System (PPMS) from 300~K to 2~K, where the susceptibility was measured using a vibrating sample magnetometer insert. The high field magnetoresistance measurements were performed up to 60~T using a pulsed field magnet at the Los Alamos National Laboratory, USA, while measurements up to 32~T were carried out using the Cell 5 Water-Cooling Magnet at the High Magnetic Field Laboratory of the Chinese Academy of Sciences. \section*{Data availability statement} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Acknowledgments} We thank Y.~Liu, C.~Cao, Q.~Si, Y.~Zhou, F.~Steglich, P.~Li and Z.~Z.~Wu for interesting discussions and helpful suggestions. This work was supported by the Science Challenge Project of China (Project No.~TZ2016004), the National Key R\&D Program of China (Grants No.~2017YFA0303100 and No.~2016YFA0300202), the National Natural Science Foundation of China (Grants No.~U1632275, No.~11604291 and No.~11474251), and the Innovative Program of Development Foundation of Hefei Center for Physical Science and Technology (Project No.~2017FXCX001). A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No.~DMR-1644779 and the State of Florida. JS acknowledges support from the DOE BES program ``Science of 100 T''. \section*{Competing financial interests} The authors declare no competing financial or non-financial interests. \section*{Author contributions} The crystals were grown by F.W. F.W., C.Y.G., J.L.Z., Y.C. and J.S. performed the measurements, which were analyzed by F.W., C.Y.G., M.S., J.S. and H.Q.Y. The manuscript was written by F.W., C.Y.G., M.S. and H.Q.Y.
All authors participated in discussions. \section*{REFERENCES}
\section{Introduction} Turbulence can be roughly defined as a spatially and temporally complex state of fluid motion\cite{2014PNASbarenhi}. It can be found everywhere: in rapid streams, smog, airflow, heat convection in superfluid helium, stirred cold atoms, and so on. Because the interactions are nonlinear and involve many length and time scales, turbulence has become one of the biggest problems in modern science. Both classical turbulence and quantum turbulence are popular and active research directions. In particular, thanks to the great development of cooling and control techniques in recent decades, quantum turbulence has established itself in the turbulence community, shedding new light on both the quantum and classical aspects of turbulence. Vortices are essential objects in turbulence and play a crucial role in various phenomena. In quantum fluids, such as superfluid helium and atomic Bose-Einstein condensates, vortices are quantized topological defects. Reconnection of vortex lines in three-dimensional superfluids, or annihilation of a vortex and an anti-vortex in two-dimensional superfluids, are important phenomena which reduce the number of topological defects, dissipate energy and randomise the velocity field\cite{2011jltpbarenhi}. During the reconnection process, the time dependence of the separation distance $\delta (t)$ between two vortex lines has been intensively studied. It was first reported by de Waele and Aarts\cite{1994prlwaele} that \begin{equation} \delta(t)=(\kappa/2\pi)^{1/2}\sqrt{t_{0}-t}, \end{equation} where $\kappa$ is the circulation quantum and $t_{0}$ is the reconnection time. This numerical work is based on Schwarz's vortex filament model\cite{1988prbschwarz}, which assumes a very small vortex core. Thus the above result breaks down when $\delta (t)$ is smaller than the vortex size.
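This square-root form can be motivated by a short dimensional-analysis sketch, of the kind invoked in Refs.\cite{1994prlwaele,2017prfalberto}: near reconnection the only relevant dimensional parameter is the circulation quantum $\kappa$, with dimensions of $\mathrm{length}^{2}/\mathrm{time}$, so the only combination of $\kappa$ and $t_{0}-t$ with the dimension of length is \begin{equation} \delta(t)\sim\sqrt{\kappa\,(t_{0}-t)}, \end{equation} which fixes the exponent $1/2$ up to a dimensionless prefactor.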
This $(t_{0}-t)^{1/2}$ scaling was confirmed in He II experiments\cite{2008PNASbewley,2008pdnppaoletti} and by an approximate analytic solution of the Gross-Pitaevskii Equation (GPE)\cite{2003jltsergey}. However, many numerical studies report similar fitting functions with modified scaling exponents. For example, the fitting functions \begin{equation} \delta(t)=A_{1}(t_{0}-t)^{A_{2}}, \end{equation} and \begin{equation} \delta(t)=B_{1}(t_{0}-t)^{1/2}[1+B_{2}(t_{0}-t)] \end{equation} were reported in Refs.\cite{2011jltpbarenhi,2012prbbaggaley, 2012pfzuccher, 2014praallen,2017prfalberto}, where the scaling exponent $A_{2}$ was found to vary around $1/2$ and $B_{2}$ is small. All the above studies concern three-dimensional situations. The two-dimensional cases, such as oblate Bose-Einstein condensates where vortices annihilate rather than reconnect, are also important and interesting. Do they have similar behaviors and scaling exponents? In this paper, we address this question. We numerically simulate and study the annihilation process of a vortex and an anti-vortex based on holographic duality rather than the GPE. Holographic duality\cite{Maldacena1998top,gubser1998gauge,witten1998anti} is an alternative theoretical framework to deal with strongly coupled quantum many-body systems, which are encoded in a classical gravitational system with one extra dimension. The holographic superfluid model was established in Refs.\cite{20083h1,20083h2}, while soliton and vortex solutions were studied in Refs.\cite{2010prdkeranen1,2010prdkeranen2,2017prdlan}. Recently, holographic superfluid turbulence was studied in Refs.\cite{chesler2013holographic,Ewerz2014tua,du2014holographic,2016jheplan}. The holographic superfluid model is non-perturbative, thus allowing a first-principles investigation of the annihilation process. Moreover, it handles finite temperature naturally, while the GPE mainly handles the zero-temperature case.
Phenomenologically modified GPEs which include dissipative effects can handle finite-temperature cases\cite{1958pitaevskii}. This paper is organised as follows. The holographic superfluid model and the relevant numerics are introduced in Sec.\ref{sec2}. Numerical results and the attractive interaction between vortex and anti-vortex are analysed and discussed in Sec.\ref{sec3}. Sec.\ref{sec4} is devoted to conclusions and suggestions on future directions. \section{set up} \label{sec2} For a two-dimensional superfluid, a simple holographic model is a gravitational system in asymptotically AdS$_{4}$ spacetime. The corresponding bulk action can be written as\cite{20083h1,20083h2} \begin{equation} S=\frac{1}{16\pi{G}}\int_{\mathcal{M}}\mathnormal{d}^{4}x\sqrt{-g}(R+\frac{6}{L^{2}}+\frac{1}{q^{2}}\mathcal{L}_{matter}), \end{equation} where the matter Lagrangian reads \begin{equation} \mathcal{L}_{matter}=-\frac{1}{4}F_{ab}F^{ab}-|D\Psi|^{2}-m^{2}|\Psi|^{2}. \end{equation} Here $D=\nabla-iA$ with $\nabla$ as the covariant derivative compatible with the metric. $A_{a}$ is a dynamical U(1) gauge field and $\Psi$ is a complex scalar field with mass $m$ and charge $q$. To simplify the problem, one usually works in the probe limit, which decouples the matter fields from gravity. Thus the Schwarzschild black brane can be written in the infalling Eddington coordinates as \begin{equation} ds^{2}=\frac{L^{2}}{z^{2}}(-f(z)dt^{2}-2dt dz+dx^{2}+dy^{2}), \end{equation} where the factor $f(z)=1-(\frac{z}{z_{h}})^{3}$ with $z=z_{h}$ corresponding to the horizon and $z=0$ corresponding to the AdS boundary. Then the equations of motion for the matter fields can be written as \begin{equation} D_{a}D^{a}\Psi-m^{2}\Psi=0,\nabla_{a}F^{ab}=i(\overline{\Psi}D^{b}\Psi-\Psi\overline{D^{b}\Psi}). \end{equation} In what follows, we will take the units in which $L=1,16\pi Gq^{2}=1$, and $z_{h}=1$, and work with $m^{2}=-2$ in the standard quantization case\cite{breitenlohner1982}.
In the axial gauge $A_{z}=0$, the asymptotic solutions of $A$ and $\Psi$ near the AdS boundary can be expanded as \begin{equation}\label{asymp} A_{\mu}=a_{\mu}+b_{\mu}z+o(z),\Psi=z[\beta+\psi{z}+o(z)]. \end{equation} According to the holographic dictionary, the temperature, the expectation values of the conserved current $j^{\mu}$ and the condensate operator $O$ in the superfluid are given by\cite{2013jhepli} \begin{equation} T=\frac{3}{4\pi}, \end{equation} \begin{eqnarray}\label{current} \langle j^{\mu}\rangle=\frac{\delta{S_{ren}}}{\delta{a_{\mu}}}=\lim_{z\rightarrow 0}\sqrt{-g}F^{z\mu}, \end{eqnarray} \begin{eqnarray}\label{operator} \langle O\rangle&=&\frac{\delta{S_{ren}}}{\delta{\beta}}=-\lim_{z\rightarrow 0}z\sqrt{-\gamma}(n_a\overline{D^a\Psi}+\overline{\Psi})\nonumber\\ &=&\overline{\psi}-\dot{\overline{\beta}}-ia_{t}\overline{\beta}, \end{eqnarray} where the dot denotes the time derivative, and the renormalized action is given by \begin{equation} S_{ren}=S-\int_{\mathcal{B}}\sqrt{-\gamma}|\Psi|^{2} \end{equation} with the counter term added to make the original action finite. Here we switch off the sources of the operators by setting \begin{eqnarray} a_{x}=0,a_{y}=0,\beta=0. \end{eqnarray} Then the superfluid velocity is defined as \begin{eqnarray}\label{velocity} \boldsymbol{u}=\frac{\mathcal{J}}{|\psi|^{2}},\mathcal{J}=\frac{i}{2}(\overline{\psi}\boldsymbol{\partial}\psi-\psi\boldsymbol{\partial}\overline{\psi}), \end{eqnarray} and the winding number $\sigma$ of a vortex is \begin{equation}\label{omega} \sigma=\frac{1}{2\pi}\oint_{c}d\boldsymbol{x}\cdot\boldsymbol{u}, \end{equation} where $c$ denotes a counterclockwise oriented path surrounding a single vortex. In what follows we determine the position of the vortex by calculating the winding number of each point in the system.
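As an illustration of this winding-number diagnostic, here is a minimal sketch; the idealized test profiles and the loop discretization are assumptions, and the grid-based detection used in the actual simulation is more involved:

```python
import cmath
import math

def winding_number(psi, cx, cy, radius=1.0, steps=400):
    """Winding of the phase of a complex field psi(x, y) around a
    counterclockwise circle of given radius centered at (cx, cy)."""
    total = 0.0
    prev = cmath.phase(psi(cx + radius, cy))
    for k in range(1, steps + 1):
        th = 2 * math.pi * k / steps
        ph = cmath.phase(psi(cx + radius * math.cos(th),
                             cy + radius * math.sin(th)))
        d = ph - prev
        # unwrap the jump across the branch cut of the phase
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = ph
    return round(total / (2 * math.pi))

# idealized order-parameter profiles: a sigma = +1 vortex and a -1 anti-vortex
vortex = lambda x, y: complex(x, y)
anti_vortex = lambda x, y: complex(x, -y)
```

Accumulating the unwrapped phase increments around the loop is the discrete analogue of the contour integral in Eq.~(\ref{omega}).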
To investigate the annihilation process of a vortex pair with winding number $\sigma=\pm 1$, we consider a periodic $30\times30$ square box on which a pair of vortices evolve freely. The initial positions of the vortices are random. Moreover, a random perturbation velocity field is added to the system. As a result, the vortices have random initial velocities. By defining a new function $\Phi=\frac{\Psi}{z}$, the later-time behavior of the system is determined by the following rewritten equations of motion \begin{eqnarray}\label{eqphi} \partial_{t}\partial_{z}\Phi&=&iA_{t}\partial_{z}\Phi+\frac{1}{2}[i\partial_{z}A_{t}\Phi+f\partial^{2}_{z}\Phi+f'\partial_{z}\Phi\nonumber\\ &\,&+(\partial-iA)^{2}\Phi-z\Phi], \end{eqnarray} \begin{equation}\label{constraint} \partial_{z}(\partial_{z}A_{t}-\partial\cdot\boldsymbol{A})=i(\overline{\Phi}\partial_{z}\Phi-\Phi\partial_{z}\overline{\Phi}), \end{equation} \begin{eqnarray}\label{eqa} \partial_{t}\partial_{z}\boldsymbol{A}&=&\frac{1}{2}[\partial_{z}(\boldsymbol{\partial} A_{t}+f\partial_{z}\boldsymbol{A})+(\partial^{2}\boldsymbol{A}-\partial\boldsymbol{\partial}\cdot\boldsymbol{A})\nonumber\\ &\,&-i(\overline{\Phi}\partial\Phi-\Phi\partial\overline{\Phi})]-\boldsymbol{A}\overline{\Phi}\Phi, \end{eqnarray} \begin{eqnarray}\label{eqat} \partial_{t}\partial_{z}A_{t}&=&\partial^{2}A_{t}+f\partial_{z}\boldsymbol{\partial}\cdot\boldsymbol{A}-\partial_{t}\boldsymbol{\partial}\cdot\boldsymbol{A}-2A_{t}\overline{\Phi}\Phi\nonumber\\ &\,&+if(\overline{\Phi}\partial_{z}\Phi-\Phi\partial_{z}\overline{\Phi})-i(\overline{\Phi}\partial_{t}\Phi-\Phi\partial_{t}\overline{\Phi}). \end{eqnarray} The related numerical methods used in this paper include the pseudo-spectral method and the Runge-Kutta method. The pseudo-spectral method is used to represent the above functions with 25 Chebyshev modes in the $z$ direction and 241 Fourier modes in the $x$ and $y$ directions.
The Runge-Kutta method is used to evolve the equations in the time direction with the time step $\delta t=0.05$. For a detailed review of these methods, one can refer to Refs.\cite{chesler2013holographic,du2014holographic,2016jheplan,2016cqgguo}. The initial bulk configurations for $\Phi, \boldsymbol{A}, A_{t}$ can be found in Appendix A. \section{numerical results} \label{sec3} A vortex and an anti-vortex have the same configuration; the only difference is that one rotates clockwise while the other rotates counterclockwise. The static vortex solution can be obtained from the above equations of motion in cylindrical coordinates\cite{2010prdkeranen2}. When the chemical potential of the superfluid is chosen as $\mu=6>\mu_{c}$, the vortex configuration is shown in Fig.\ref{vradius}. We define the vortex radius $r$ such that $|\langle O (r)\rangle|=0.99|\langle O \rangle|_{max}$, where $|\langle O \rangle|_{max}$ denotes the condensate of the homogeneous superfluid solution. The vortex radius $r$ is found to be $2.05$. \begin{figure} \begin{center} \includegraphics[scale=0.6]{vortexradius.pdf} \end{center} \caption{The configuration of a vortex for chemical potential $\mu=6$. We define the vortex radius as $r$, such that $|\langle O (r)\rangle|=0.99|\langle O \rangle|_{max}$. Here $r=2.05$. }\label{vradius} \end{figure} \subsection{separation distance between vortex and anti-vortex} We perform long-time simulations of the vortex annihilation process and obtain 10 sets of data. As examples, the superfluid configurations at times $t=400,592,640,660$ are shown in Fig.\ref{vvst}. The snapshot at $t=592$ shows the moment when the vortex and anti-vortex just touch, with $\delta(t=592)=2r=4.1$. The initial relative velocity is small. The separation distance $\delta (t)$ decreases with time until the vortices are annihilated.
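The Runge-Kutta time stepping can be sketched generically; a classical fourth-order step is shown here on a scalar test equation with the paper's step size $\delta t = 0.05$ (the specific order and implementation used for the bulk equations are not detailed in the text, so this is an illustrative assumption):

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# accuracy check on dy/dt = y with dt = 0.05: after 20 steps, y(1) ~ e
y, t, dt = 1.0, 0.0, 0.05
for _ in range(20):
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
```

In the actual simulation the state $y$ would be the full set of bulk fields after the pseudo-spectral spatial discretization.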
\begin{figure} \begin{center} \includegraphics[scale=0.41]{t400.pdf} \includegraphics[scale=0.47]{t592.pdf} \includegraphics[scale=0.44]{t640.pdf} \includegraphics[scale=0.43]{t660.pdf} \end{center} \caption{The superfluid configurations at times $t=400,592,640,660$. The red arrow indicates the direction of motion. The separation distance $\delta(t)$ decreases with time until the vortices are annihilated. The snapshot at $t=592$ shows the moment when the vortex and anti-vortex just touch, that is $\delta(t=592)=2r=4.1$. }\label{vvst} \end{figure} $\delta(t)$ is shown in Fig.\ref{dist123}. The blue dots represent the separation distance between the vortices at different times. We want to study the interaction between vortex and anti-vortex without being affected by other effects, so the separation distance is recorded every $\Delta t$ from time $t=330$, when the initially added perturbation modes have basically dissipated. From $t=330$ to $t=600$, $\Delta t=2$, while for the rest of the time, $\Delta t=1$. It can be seen from the figure that the separation distance between the vortices becomes smaller with time, and the relative motion between them becomes faster and faster, as if there were a mutually attractive force. In particular, once they are in contact, $\delta(t)<2r=4.1$, the acceleration becomes larger, that is, the attractive force becomes stronger. As a result, this annihilation process can be divided into two stages: the first stage before the vortex pair contacts, and the second stage after the contact. In Fig.\ref{dist123}, the solid red line is the fitting curve for the first stage ($\delta(t)>4.1$), and the solid green line is the fitting curve for the second stage ($\delta(t)<4.1$).
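The log-log extraction of the scaling exponent used for these fits can be sketched as a least-squares slope; the points below are synthetic, generated from the stage-one fit itself rather than from the simulation data:

```python
import math

# synthetic stage-one data generated from delta(t) = 0.529*(649.6 - t)^(1/2)
t0, alpha, n_true = 649.6, 0.529, 0.5
ts = [330 + 2 * i for i in range(132)]       # t = 330, 332, ..., 592
xs = [math.log(t0 - t) for t in ts]
ys = [math.log(alpha) + n_true * math.log(t0 - t) for t in ts]

# least-squares slope of ln(delta) vs ln(t0 - t) recovers the exponent n
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
n_est = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
```

On real data the same slope estimate would carry scatter, which is why the average relative deviation is used in Appendix B to compare candidate exponents.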
\begin{figure} \begin{center} \includegraphics[scale=0.5]{distancevst3.pdf} \includegraphics[scale=0.5]{distancevst11.pdf} \includegraphics[scale=0.5]{distancevst2.pdf} \includegraphics[scale=0.5]{logdt.pdf} \end{center} \caption{The blue dots represent the real-time separation distance between the vortices. The solid red lines are the fitting curves ($\delta(t)=0.529(649.6 - t)^{1/2}$ and $y=0.5x-0.639$) for the first stage ($\delta(t)>4.1$), and the solid green lines are the fitting curves ($\delta(t)=0.799(649.6 - t)^{2/5}$ and $y=0.4x-0.227$) for the second stage ($\delta(t)<4.1$). It clearly shows that the process can be divided into two stages: the first stage has a scaling exponent $1/2$ and the second stage has a scaling exponent $2/5$. }\label{dist123} \end{figure} As shown in the first three graphs, the first stage is well fitted by the function \begin{equation}\label{disat} \delta (t)=A (t_{0} - t)^{1/2}, \end{equation} where $A=0.529$ is the fitting constant and $t_{0}=649.6$ is the time when the vortex pair annihilates. The second stage is well fitted by the function \begin{equation}\label{disbt} \delta (t)=B (t_{0} - t)^{2/5}, \end{equation} where $B=0.799$ is the fitting constant. The fourth graph is a log-log plot. The fitting function is \begin{equation}\label{fitfun2} \ln\delta(t)=n \ln(t_{0}-t)+\varpi. \end{equation} It clearly shows that the process can be divided into two stages: the first stage has a scaling exponent $n=0.5$ and the second stage has a scaling exponent $n=0.4$. The results of the fourth graph are consistent with those of the first three graphs. In the 10 sets of simulation experiments, the average value of $A$ is 0.529, and that of $B$ is 0.798. More detailed data analysis can be found in Appendix B. A comparison of our results with those of previous work is interesting. First of all, the scaling law $1/2$ of the separation distance between vortices for $\delta(t)>2r$ seems to be universal for the two-dimensional and three-dimensional cases.
It is predicted by simple dimensional analysis\cite{1994prlwaele,2017prfalberto}. It is also confirmed by experiments\cite{2008PNASbewley,2008pdnppaoletti} and by an approximate analytic solution \cite{2003jltsergey}, although some numerical studies report slightly modified scaling exponents\cite{2011jltpbarenhi,2012prbbaggaley, 2012pfzuccher, 2014praallen,2017prfalberto}. Secondly, when $\delta(t)<2r$, a novel scaling exponent $2/5$ is obtained in this paper. Due to the small vortex radius, minor differences between the two scaling exponents $1/2$ and $2/5$ may not yet have been observed in other works. \subsection{attractive interaction} Eq.(\ref{disat}) and Eq.(\ref{disbt}) can be rewritten in the following form, \begin{equation}\label{fitfun} \delta(t)=\alpha (t_{0}-t)^{n}. \end{equation} Then the velocity at which the vortices approach each other is \begin{eqnarray} v(t)=\frac{d\delta(t)}{dt}= -n\alpha(t_{0}-t)^{n-1}. \end{eqnarray} The velocity as a function of distance is \begin{eqnarray} v(\delta)= -n\alpha^{\frac{1}{n}}\delta^{1-\frac{1}{n}}. \end{eqnarray} The acceleration as a function of time is \begin{eqnarray} a(t)=\frac{d^{2}\delta(t)}{dt^{2}}= n(n-1)\alpha(t_{0}-t)^{n-2}. \end{eqnarray} The acceleration as a function of distance is \begin{eqnarray} a(\delta)= n(n-1)\alpha^{\frac{2}{n}}\delta^{1-\frac{2}{n}}. \end{eqnarray} If the vortices are loosely treated as particles, then according to the expression of the acceleration with respect to the separation distance, we can derive the attractive force between the vortex and anti-vortex as a function of separation distance, \begin{eqnarray} f(\delta)\propto |a(\delta)|= |n(n-1)\alpha^{\frac{2}{n}}\delta^{1-\frac{2}{n}}|. \end{eqnarray} For the first stage, $n = 1/2$, the attractive force is \begin{eqnarray} f(\delta>2r)\propto \frac{1}{\delta^{3}}.
\end{eqnarray} For the second stage, $n = 2/5$, the attractive force is \begin{eqnarray} f(\delta<2r)\propto \frac{1}{\delta^{4}}. \end{eqnarray} For the second stage, the acceleration as a function of time and as a function of separation distance are shown in Fig.\ref{atr12}. When the separation distance is large, the acceleration is extremely small, and the velocity increases very slowly. \begin{figure} \begin{center} \includegraphics[scale=0.4]{accelerationvst.pdf} \includegraphics[scale=0.4]{accelerationvsx.pdf} \end{center} \caption{For the second stage, the left graph shows $a(t)$ and the right graph shows $a(\delta)$. When the separation distance is large, the acceleration is extremely small. }\label{atr12} \end{figure} \subsection{ vortex annihilation rate in superfluid turbulence with low vortex density} It can be seen that only when the separation distance is very small, even less than the vortex size $ 2r = 4.1 $, does the acceleration become large. Here we present a theoretical interpretation of why the annihilation process of vortices in superfluid turbulence obeys the two-body decay law when the vortex density is low\cite{du2014holographic,2016jheplan,2018prabaggaley}. When the separation distance between vortices is large, the attractive force between them is small and the acceleration is small. Also, because of the chaotic motion of superfluid turbulence, the effect of the long-distance attractive force between vortex and anti-vortex is weakened. Therefore the average velocity of a vortex can be regarded as a constant $v_{0}$ for $\delta>2r$. So the annihilation probability for a pair of vortices should be proportional to the area ($v_{0}2r$) swept by one vortex per unit time, and inversely proportional to the area ($l^{2}/N(t)$) occupied by a single vortex. That is \begin{equation} P_{1}\propto \frac{v_{0}2r}{l^{2}/N(t)}.
\end{equation} Then the total annihilation rate should be \begin{eqnarray} \frac{dN(t)}{dt}= -N(t)P_{1}\propto -N(t)\frac{v_{0}2r}{l^{2}/N(t)}=-\frac{2r v_{0}}{l^{2}}N(t)^{2}, \end{eqnarray} or, rewritten in terms of the vortex number density, \begin{eqnarray} \frac{dn(t)}{dt}\propto -2r v_{0} n(t)^{2}=-C n(t)^{2}, \end{eqnarray} where $C$ is a constant. The above equation is the two-body decay law. $n(t)$ is obtained as \begin{eqnarray} n(t)=\frac{1}{C t+1/n_{0}}\propto (t+\alpha)^{\beta}, \end{eqnarray} where $\beta=-1$, which successfully interprets the results of Refs.\cite{du2014holographic,2016jheplan,2018prabaggaley}. \section{conclusion and discussion} \label{sec4} In this paper, the annihilation process of vortices in a two-dimensional superfluid is numerically simulated based on holographic duality. Firstly, the separation distance between vortex and anti-vortex as a function of time is recorded. The data are well fitted by $\alpha (t_{0}-t)^{n}$, where the scaling exponent $n=1/2$ for $\delta (t)>2r$, and $n=2/5$ for $\delta(t)<2r$. Thus the annihilation process can be divided into two stages separated by the vortex size $2r$. Secondly, the approach velocity as a function of time and as a function of separation distance is probed. When the separation distance is large, the velocity increases very slowly and the acceleration is extremely small. If the vortices are loosely treated as particles, we obtain the attractive force $f(\delta)\propto 1/\delta^{3}$ for the first stage, and $f(\delta)\propto 1/\delta^{4}$ for the second stage. Thirdly, according to the characteristics of the acceleration, we can reasonably assume that the average velocity of a vortex in the turbulent state is constant when $\delta>2r$. Then the annihilation rate is derived as $ \frac{dn(t)}{dt}\propto -C n(t)^{2}$, which is the two-body decay law.
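The two-body decay law and its $n(t)\propto(t+\alpha)^{-1}$ solution can be cross-checked numerically with a simple forward-Euler integration; the constants $C$, $n_0$ and the step size below are arbitrary illustrative values:

```python
# forward-Euler integration of dn/dt = -C*n^2 with illustrative constants
C, n0, dt, t_end = 0.5, 1.0, 1e-4, 10.0
n, t = n0, 0.0
for _ in range(int(round(t_end / dt))):
    n -= C * n * n * dt
    t += dt

# exact solution n(t) = 1/(C*t + 1/n0), i.e. n ~ t^(-1) at late times
exact = 1.0 / (C * t + 1.0 / n0)
```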
Thus we have explained why the annihilation process of vortices in superfluid turbulence obeys the two-body decay law when the vortex density is low\cite{du2014holographic,2016jheplan,2018prabaggaley}. In the end, we would like to emphasize that the scaling exponent $1/2$ for $\delta(t)>2r$ in the two-dimensional superfluid is the same as many results\cite{1994prlwaele,2008PNASbewley,2008pdnppaoletti,2003jltsergey,2017prfalberto} obtained in three-dimensional cases. This means that the dimensional-analysis argument of Refs.\cite{1994prlwaele,2017prfalberto} also works in this two-dimensional holographic superfluid case: the only relevant dimensional quantity is the circulation quantum, although the dimensionless prefactors are different. Moreover, another scaling exponent $2/5$ is obtained for $\delta(t)<2r$ in this paper. This new discovery may be examined by experiments with higher resolution. In this paper, we only consider the chemical potential $\mu=6$ case. The dependence of the power law on the chemical potential will be revealed in our next paper. All of the above results are based on holographic duality. The two-dimensional cases in other models deserve to be disclosed in future research; we will give the two-dimensional result within the GPE model in the near future. Investigation of the interaction between two vortices of the same kind is also interesting. By symmetry, the interaction should be repulsive. When the separation distance is large, the repulsive force should behave in the same way as the attractive one; when the separation distance is small, the two forces should behave differently. In any case, the final result deserves to be revealed.
\acknowledgments This research is supported by the National Natural Science Foundation of China (Grant Nos.~11847001, 11605082, 11747017), the Natural Science Foundation of Guangdong Province, China (Grant Nos.~2016A030310363, 2016A030307051, 2015A030313789), the Department of Education of Guangdong Province, China (Grant Nos.~2017KQNCX124, 2017KZDXM056) and the Lingnan Normal University Project ZL1931. \section*{ Appendix A: initial bulk configurations for $\Phi, \boldsymbol{A}, A_{t}$} At chemical potential $\mu=6$, a pair of vortex and anti-vortex is randomly placed on the static superfluid, and a random perturbation velocity field is added. To achieve this initial configuration, firstly the static superfluid solution\cite{20083h1} is multiplied with normalized vortex and anti-vortex solutions\cite{2010prdkeranen2}, e.g. $\Phi_{eq}(z)\frac{ \Phi_{vortice}(z,x-x_{1},y-y_{1})}{\Phi_{eq}(z)}\frac{ \Phi_{anti-vortice}(z,x-x_{2},y-y_{2})}{\Phi_{eq}(z)}$, where $(x_{1},y_{1})$ and $(x_{2},y_{2})$ are the coordinates of the vortex and anti-vortex. Secondly, based on Eq.(\ref{velocity}), the configuration $\Phi(z,x,y)$ obtained above can be multiplied with a random phase $e^{i \chi(x,y)}$ to add the perturbation velocity field. Here we take $\chi(x,y)= Re \gamma\Sigma_{k_{x}=-n_{x}\Lambda}^{n_{x}\Lambda}\Sigma_{k_{y}=-n_{y}\Lambda}^{n_{y}\Lambda}\xi(\boldsymbol{k})e^{i\boldsymbol{k}\cdot\boldsymbol{x}}$, where $\gamma$ is a small constant, $n_{x},n_{y}$ are small integers with $\Lambda=\frac{2\pi}{30}$, and $\xi(\boldsymbol{k})$ is a set of $\mathcal{O}(1)$ random complex coefficients\cite{2016jheplan}. In this paper, $\gamma$ is set to be $0.04$, and $n_{x}$ and $n_{y}$ are taken to have the same value, in the two cases $4$ and $10$. \section*{ Appendix B: data analysis of Fig.\ref{dist123}} In Fig.\ref{dist123}, the blue dots $(t_{m},\delta _{m})$ represent the real-time separation distance of the vortex pair. The data analysis starts from $(t_{1}=330,\delta _{1}=9.535)$.
When the separation distance is large, the attractive force is extremely small. Meanwhile, at early times, the initially added perturbation modes, which have not yet basically dissipated, affect the motion of the vortices. As a result, the behavior of $(t_{m},\delta _{m})$ before $t_{1}=330$ is not so regular, and this data is omitted. By applying the least-squares method, we obtain the red fitting curves for the first stage and the green fitting curves for the second stage. For the first three graphs, the fitting function is Eq.(\ref{fitfun}), i.e. $ \delta(t)=\alpha (t_{0}-t)^{n}$, which has three parameters $\alpha, t_{0}, n$. In the enlarged view, it can be determined that the range of $t_{0}$ is $(649.4-649.8)$. Then we use the average relative deviation to measure the quality of the fitting functions. The average relative deviation function is defined as \begin{eqnarray} \varepsilon(\alpha,t_{0},n)=\frac{\sum_{m=i_{1}}^{m=i_{2}}(|\alpha (t_{0}-t_{m})^{n}-\delta_{m}|/\delta_{m})}{i_{2}-i_{1}+1}\times 100\%. \end{eqnarray} To determine $t_{0}$, $i_{1}$ is taken as 137, where $t_{137}=601$, and $i_{2}$ is taken as 185, where $t_{185}=649$. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $t_{0}$&649.4 & 649.5& 649.6& 649.7& 649.8 \\ \hline $\alpha$ & 0.804 & 0.793& 0.783& 0.772& 0.763 \\ \hline n & 0.399& 0.402& 0.406& 0.409 & 0.412 \\ \hline $\varepsilon(t_{0})$ & 0.883\% & 0.675\%& 0.502\% & 0.517\% & 0.587\% \\ \hline \end{tabular} \end{center} \caption{For a given $t_{0}$, by applying the least-squares method, the fitting function $\delta(t)=\alpha (t_{0}-t)^{n}$ is obtained, i.e. the values of $\alpha,n$ are determined. Then the average relative deviation values $\varepsilon(t_{0})$ for different fitting functions are calculated.}\label{fitf1} \end{table} For a given $t_{0}$, by applying the least-squares method, the fitting function $\delta(t)=\alpha (t_{0}-t)^{n}$ is obtained, i.e. the parameters $\alpha,n$ are determined.
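The average relative deviation measure can be sketched as follows; the synthetic points here are generated exactly by the stage-two fit, so the deviation vanishes by construction:

```python
def avg_rel_dev(alpha, t0, n, data):
    """Average relative deviation (in percent) of the fit
    alpha*(t0 - t)^n over data points (t, delta)."""
    devs = [abs(alpha * (t0 - t)**n - d) / d for t, d in data]
    return 100.0 * sum(devs) / len(devs)

# synthetic points generated exactly by the stage-two fit of the text
data = [(t, 0.799 * (649.6 - t)**0.4) for t in range(601, 650)]
eps = avg_rel_dev(0.799, 649.6, 0.4, data)
```

On the simulation data this quantity is minimized over the candidate $(\alpha, t_0, n)$, yielding the values listed in the tables.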
Then the average relative deviation $\varepsilon(t_{0})$ is calculated for each fitting function; the results are listed in Table~\ref{fitf1}. One finds that $t_{0}=649.6$ is the best case, since its average relative deviation $\varepsilon(t_{0}=649.6)=0.502\%$ is the minimum. Moreover, the exponent is very well approximated by $n\approx 0.4$. The error of the exponent for the second stage can be defined as \begin{eqnarray} \eta_{12}=\frac{n(t_{0}=649.6)-0.4}{0.5-0.4}\times 100\%=\frac{0.406-0.4}{0.1}\times 100\%=6\%. \end{eqnarray} Thus we set $n=2/5$ and $t_{0}=649.6$. The corresponding fitting function is $\delta(t)=0.799 (649.6-t)^{2/5}$, with an average relative deviation of $\varepsilon=0.547\%$, which is very small. In this sense, the data is well fitted by the function. For the first stage, $i_{1}$ is taken as 1, where $t_{1}=330$, and $i_{2}$ is taken as 132, where $t_{132}=592$. The average relative deviations $\varepsilon(t_{0}=649.6,n)$ for different fitting functions $\delta(t)=\alpha (649.6-t)^{n}$ are listed in Table~\ref{fitf2}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline n & 0.48& 0.49& 0.50& 0.505 & 0.51 & 0.52 \\ \hline $\alpha$ & 0.588 & 0.558& 0.529& 0.515 & 0.501& 0.475 \\ \hline $\varepsilon(t_{0})$ & 1.110\% & 0.738\%& 0.565\% & 0.533\% & 0.539\% & 0.709\% \\ \hline \end{tabular} \end{center} \caption{In the first stage, for the fixed $t_{0}=649.6$ and a given $n$, applying the least-squares method yields the fitting function $\delta(t)=\alpha (649.6-t)^{n}$, i.e. the parameter $\alpha$. The average relative deviation $\varepsilon(t_{0}=649.6,n)$ is then calculated for each fitting function.}\label{fitf2} \end{table} One finds that $n=0.505$ is the best case, since its average relative deviation $\varepsilon(t_{0}=649.6,n=0.505)=0.533\%$ is the minimum.
Nevertheless, we set $n=1/2$ for the first stage, where the average relative deviation $\varepsilon(t_{0}=649.6,n=0.5)=0.565\%$ is also small, so the exponent is well approximated by $n=1/2$. The error of the exponent for the first stage can be defined as \begin{eqnarray} \eta_{11}=\frac{0.505-0.5}{0.5-0.4}\times 100\%=5\%. \end{eqnarray} The fourth graph is a log-log plot. The fitting function is Eq.(\ref{fitfun2}), i.e. $ \ln(\delta(t))=n \ln(649.6-t)+\varpi$, which has two parameters $n, \varpi$. Applying the least-squares method, the fitting functions are determined with $n=0.499, \varpi=-0.632$ for the first stage and with $n=0.402, \varpi=-0.232$ for the second stage. If we fix the value of $n$ first, the fitting function possesses only one parameter, $\varpi$. For the first stage, setting $n=1/2$, the fit gives $\varpi=-0.639$, with the corresponding average relative deviation $\varepsilon(n=1/2,\varpi=-0.639)=0.281\%$, which is small. For the second stage, setting $n=2/5$, the fit gives $\varpi=-0.227$, with the corresponding average relative deviation $\varepsilon(n=2/5,\varpi=-0.227)=0.587\%$, which is also small. So the data is well fitted and the exponents are well approximated. The error of the exponent for the first stage can then be defined as \begin{eqnarray} \eta_{21}=\frac{0.499-0.5}{0.5-0.4}\times 100\%=-1\%. \end{eqnarray} The error of the exponent for the second stage can be defined as \begin{eqnarray} \eta_{22}=\frac{0.402-0.4}{0.5-0.4}\times 100\%=2\%. \end{eqnarray} \section*{ Appendix C: 10 sets of simulation experiments} As discussed in Appendix A, the vortices are randomly located on the superfluid, and a random perturbation velocity field is added.
Therefore, the 10 sets of simulations have different initial conditions. The results are shown in Table~\ref{tenruns}, listing the vortex annihilation time $t_{0}$, the parameter $A$ in the fitting function $\delta(t)=A(t_{0}-t)^{1/2}$, the scaling exponent error $\eta_{11}$, the parameter $B$ in the fitting function $\delta(t)=B(t_{0}-t)^{2/5}$, and the scaling exponent error $\eta_{12}$. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \,& 1& 2& 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline $t_{0}$ & 649.6& 506.2& 482.2 & 670.5 & 489.4 & 1240.4 & 926.5 & 518.8 & 468.2 & 625.2 \\ \hline A & 0.529& 0.528& 0.528 & 0.529 & 0.530 & 0.531 & 0.527 & 0.528 & 0.528 & 0.529 \\ \hline $\eta_{11}$ & 5\%& 2\%& 9\% & 6\% & 8\% & 3\% & 6\% & 3\% & 5\% & 4\% \\ \hline B & 0.799& 0.794& 0.802 & 0.796 & 0.795 & 0.797 & 0.797 & 0.801 & 0.798 & 0.797 \\ \hline $\eta_{12}$ & 6\%& 9\%& 9\% & 7\% & 8\% & 4\% & 1\% & 6\% & 8\% & 4\% \\ \hline \end{tabular} \end{center} \caption{Results of the 10 sets of simulation experiments. $t_{0}$ is the vortex annihilation time, $A$ is the parameter in the fitting function $\delta(t)=A(t_{0}-t)^{1/2}$, $\eta_{11}$ is the error of the scaling exponent $n=1/2$, $B$ is the parameter in the fitting function $\delta(t)=B(t_{0}-t)^{2/5}$, and $\eta_{12}$ is the error of the scaling exponent $n=2/5$.}\label{tenruns} \end{table}
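The exponent errors $\eta_{11}$ and $\eta_{12}$ reported in the table are computed as in Appendix B: the deviation of the fitted exponent from the candidate scaling, normalized by the gap $0.5-0.4$ between the two candidate exponents. A minimal sketch:

```python
def exponent_error(n_fit, n_ref):
    """Scaling exponent error in percent, normalized by the gap 0.5 - 0.4
    between the two candidate scalings n = 1/2 and n = 2/5."""
    return (n_fit - n_ref) / (0.5 - 0.4) * 100.0

# The two examples worked out in Appendix B:
eta_12 = exponent_error(0.406, 0.4)   # second stage, candidate n = 2/5
eta_11 = exponent_error(0.505, 0.5)   # first stage,  candidate n = 1/2
```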
\section{Introduction} In this paper, we focus on extensions of the celebrated {\em probabilistic serial} (PS) mechanism~\cite{Bogomolnaia01:New} for the classical resource allocation problem~\cite{Abdulkadiroglu99:House,Bogomolnaia01:New,Chevaleyre06:Issues,Moulin95:Cooperative}, to the {\em multi-type resource allocation problem} (\mtap{})~\cite{Mackin2016:Allocating}. An~\mtap{} involves $n$ agents and $p\ge 2$ {\em types} of items which are not interchangeable, with $n$ items of each type and one unit of supply of each item. Each agent demands a {\em bundle} consisting of one item of each type, and has {\em strict preferences} over all bundles. \mtaps{} may involve {\em divisible} items, like land and water resources~\cite{Segal17:Fair}, or computational resources such as CPU, memory, and storage in cloud computing~\cite{Ghodsi11:Dominant,Ghodsi12:Multi,Grandl15:Multi}. Items may also be {\em indivisible}, where each item must be assigned fully to a single agent, like houses and cars~\cite{Sikdar18:Top,Sikdar2017:Mechanism}, or research papers and time slots in a seminar class~\cite{Mackin2016:Allocating}. Efficient and fair resource allocation for a single type of items ($p=1$) is well studied~\cite{Bogomolnaia01:New,Hylland79:Efficient,zhou1990conjecture,Abdulkadiroglu98:Random,Shapley74:Cores,Abdulkadiroglu99:House,Chevaleyre06:Issues,Moulin95:Cooperative,moulin2018fair}. Our work follows the line of research initiated by~\citet{Bogomolnaia01:New}, who proposed the probabilistic serial (PS) mechanism. The PS mechanism outputs a fractional assignment in multiple rounds: in each round, all agents simultaneously ``eat'' shares of their favorite remaining items at a uniform and equal rate, until one of the items is exhausted.
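The simultaneous eating procedure just described can be made concrete. The following is a minimal sketch of PS for a single type of items ($p=1$), assuming all agents rank the same item set; exact rational arithmetic keeps the round boundaries precise. It illustrates the eating dynamic only, not any of the multi-type mechanisms studied in this paper.

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Simultaneous eating algorithm (single-type PS).

    prefs[j] is agent j's strict ranking of the n items, best first;
    every agent is assumed to rank the same items.  Returns P with
    P[j][o] = fractional share of item o eaten by agent j.
    """
    n = len(prefs)
    remaining = {o: Fraction(1) for o in prefs[0]}   # unit supply per item
    P = [{o: Fraction(0) for o in prefs[0]} for _ in range(n)]
    eaten = Fraction(0)                              # time elapsed; each agent eats 1 in total
    while eaten < 1:
        # each agent points at her favorite item with supply left
        fav = [next(o for o in prefs[j] if remaining[o] > 0) for j in range(n)]
        eaters = {o: [j for j in range(n) if fav[j] == o] for o in set(fav)}
        # the round ends when the first pointed-at item is exhausted
        dt = min(min(remaining[o] / len(js) for o, js in eaters.items()),
                 1 - eaten)
        for o, js in eaters.items():
            for j in js:
                P[j][o] += dt
            remaining[o] -= dt * len(js)
        eaten += dt
    return P
```

For example, with two agents who both rank item `a` over item `b`, each agent ends up with a $1/2$ share of each item; with opposed rankings, each agent receives her favorite item in full.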
PS is a popular prototype for mechanism designers for the following reasons: \begin{enumerate*}[label=(\roman*)] \item {\em decomposability}: PS can be applied to allocate both divisible and indivisible items, since fractional assignments are always {\em decomposable{}} when $p=1$, as guaranteed by the Birkhoff-von Neumann theorem~\cite{Birkhoff1946:Three,Neumann1953:A-certain}. In other words, a fractional assignment can be represented as a probability distribution over ``discrete'' assignments, where no item is split among agents. \item {\em efficiency and fairness}: PS satisfies \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}, which are desirable efficiency and fairness properties, respectively, based on the notion of {\em stochastic dominance}: given a strict preference over the items, an allocation $p$ {\em stochastically dominates} $q$ if, at every item $o$, the total share of $o$ and of items strictly preferred to $o$ in $p$ is at least the total share of the same items in $q$. \end{enumerate*} Unfortunately, designing efficient and fair mechanisms for \mtaps{} with $p\ge 2$ types is more challenging, especially because direct applications of PS to \mtaps{} fail to satisfy the two desirable properties discussed above. First, decomposability{} (property (\romannumeral1) above) relies on the decomposability{} of fractional assignments, which does not always hold for \mtaps{}, as the following simple example shows. \begin{example} \label{eg:undecomposable} Consider the \mtap{} with two agents, $1$ and $2$, two types of items, food ($F$) and beverages ($B$), and two items of each type, $\{1_F,2_F\}$ and $\{1_B, 2_B\}$ respectively. We demonstrate how the fractional assignment $P$ below, where agent $1$ gets a $0.5$ share of $1_F1_B$ and a $0.5$ share of $2_F2_B$, is not decomposable{}.
\vspace{1em}\noindent\begin{minipage}{\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0.5 & 0 & 0 & 0.5 \\ 2 & 0 & 0.5 & 0.5 & 0 \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P'$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 1 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 1 \\\hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} Observe that $P'$ above is the only discrete assignment in which $1_F1_B$ is allocated fully to agent $1$. Since agent $1$ receives a $0.5$ share of $1_F1_B$ in $P$, $P'$ would have to be selected with probability $0.5$ in any decomposition of $P$, and accordingly agent $2$ would receive a $0.5$ share of $2_F2_B$ in $P$. However, agent $2$ is allocated no share of $2_F2_B$ in $P$. Thus $P$ is not decomposable{}. \end{example} A natural idea is to decompose an \mtap{} into $p$ single-type instances, one for each type of items, and then apply PS or other mechanisms separately to each of them. Unfortunately, this does not work: first, it is unclear how to decompose agents' combinatorial preferences over bundles into separable preferences over items of the same type; more importantly, even when there is a natural way to do so, e.g.~when agents' preferences are {\em separable} and {\em lexicographic}, meaning that each agent has an importance order over types used to compare bundles, the resulting mechanism may fail the desirable properties. The following example shows that the fairness and efficiency properties of (\romannumeral2) above no longer hold. \begin{example}\label{eg:ps} We continue with the \mtap{} above and assume that agents' preferences over $\{1_F,2_F\}\times\{1_B,2_B\}$ are as below.
\begin{center} \begin{tabular}{|c|c|} \hline Agent & Preferences \\\hline 1 & $1_F1_B\succ_11_F2_B\succ_12_F1_B\succ_12_F2_B$ \\ 2 & $1_F1_B\succ_22_F1_B\succ_21_F2_B\succ_22_F2_B$ \\ \hline \end{tabular} \end{center} We note that both agents prefer $1_F$ to $2_F$, and $1_B$ to $2_B$ (separable preferences). Agent $1$ considers $F$ to be more important than $B$, while agent $2$ considers $B$ to be more important. In this way we can decompose this \mtap{} into two single-type resource allocation problems, for $F$ and $B$ respectively. It is easy to see that for each single type the only \text{sd-}\allowbreak\text{efficient}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{free}{} assignment gives both agents $0.5$ shares of each item, yielding, by the mutual independence of the types, the decomposable{} fractional assignment $Q$. However, $Q$ is inefficient, as $Q'$ stochastically dominates $Q$ from both agents' perspectives: \vspace{1em}\noindent\begin{minipage}{\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$Q$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0.25 & 0.25 & 0.25 & 0.25 \\ 2 & 0.25 & 0.25 & 0.25 & 0.25 \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$Q'$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0.25 & 0.5 & 0 & 0.25 \\ 2 & 0.25 & 0 & 0.5 & 0.25 \\\hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} \end{example} As we have observed, the two desirable properties of PS for single-type resource allocation no longer obviously hold for \mtaps{}.
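The stochastic-dominance comparison in the example can be checked mechanically by accumulating shares over each agent's upper contour sets. Below is a minimal sketch; bundle names like `'1F2B'` are shorthand for the bundles above.

```python
def sd_dominates(pref, p, q):
    """True iff allocation p weakly stochastically dominates q under pref.

    pref is a strict ranking of the bundles (best first); p and q map each
    bundle to its fractional share.  The upper contour set of each bundle is
    a prefix of pref, so one pass over pref suffices.
    """
    cum_p = cum_q = 0.0
    for bundle in pref:
        cum_p += p[bundle]
        cum_q += q[bundle]
        if cum_p < cum_q - 1e-12:   # small tolerance for float arithmetic
            return False
    return True

# Example 2 above: Q' stochastically dominates Q for both agents.
pref1 = ['1F1B', '1F2B', '2F1B', '2F2B']   # agent 1's ranking
pref2 = ['1F1B', '2F1B', '1F2B', '2F2B']   # agent 2's ranking
Q = {b: 0.25 for b in pref1}               # both rows of Q are uniform
Q1p = {'1F1B': 0.25, '1F2B': 0.5, '2F1B': 0.0, '2F2B': 0.25}   # agent 1's row of Q'
Q2p = {'1F1B': 0.25, '1F2B': 0.0, '2F1B': 0.5, '2F2B': 0.25}   # agent 2's row of Q'
```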
Recently,~\citet{Wang19:Multi} proposed the {\em multi-type probabilistic serial (\text{MPS}{})} mechanism as an extension of PS for \mtaps{} with {\em divisible} items, and proved that \text{MPS}{} is \text{sd-}\allowbreak\text{efficient}{} for general partial preferences, \text{sd-}\allowbreak\text{envy-}\allowbreak\text{free}{} for CP-net preferences~\cite{Boutilier04:CP}, and \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proof}{} for CP-net preferences with a shared dependency graph. However, it is unclear whether \text{MPS}{} can be applied to the allocation of indivisible items, because its outcome may not be decomposable{}. This leaves the following natural open question: {\em How can we design efficient and fair mechanisms for \mtaps{} with indivisible and divisible items?}\footnote{Note that for indivisible items, the (fractional) output of a mechanism must be decomposable.} \vspace{0.5em} \paragraph{\bf Our Contributions.} For \mtaps{} with indivisible items, unfortunately, our impossibility theorem ({\bf\Cref{thm:imp}}) shows that no mechanism satisfying \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} is guaranteed to always output decomposable{} assignments if agents' preferences are allowed to be arbitrary linear orders over bundles. Fortunately, when agents' preferences are {\em lexicographic}, the impossibility theorem can be circumvented. To this end, we propose the {\em lexicographic probabilistic serial mechanism (LexiPS)} and prove that it retains many desirable properties of PS: it is guaranteed to output a decomposable{} assignment, it satisfies \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} ({\bf\Cref{thm:lps}}), and it satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} when agents do not lie about their importance orders over types ({\bf\Cref{thm:lpssp}}).
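To make the lexicographic domain restriction concrete: under lexicographic preferences, a bundle comparison is decided by the most important type on which the bundles differ. The hypothetical helper below (not the LexiPS mechanism itself, just an illustration of the domain) compares two bundles given an importance order over types and per-type item rankings.

```python
def lex_prefers(importance, item_rank, x, y):
    """True iff bundle x is lexicographically preferred to bundle y.

    importance   : list of types, most important first (e.g. ['F', 'B'])
    item_rank[t] : strict ranking of the items of type t, best first
    x, y         : bundles, mapping each type to one item
    """
    for t in importance:
        if x[t] != y[t]:
            # the most important differing type decides; later types are irrelevant
            return item_rank[t].index(x[t]) < item_rank[t].index(y[t])
    return False   # identical bundles: no strict preference

# Agent 1 of Example 2: F more important than B, 1_F over 2_F, 1_B over 2_B.
rank = {'F': ['1F', '2F'], 'B': ['1B', '2B']}
```

For instance, with agent $1$'s importance order, `lex_prefers(['F', 'B'], rank, ...)` ranks $1_F2_B$ above $2_F1_B$, reproducing the third and fourth entries of her preference in Example 2.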
For \mtaps{} with divisible items, we show that when agents' preferences are linear orders over all bundles of items, the \text{MPS}{} mechanism proposed by~\citet{Wang19:Multi} satisfies \text{lexi-}\allowbreak\text{efficiency}{} ({\bf\Cref{thm:mpslexopt}}), which is a stronger notion of efficiency than \text{sd-}\allowbreak\text{efficiency}{}. Indeed, we show that \text{lexi-}\allowbreak\text{efficiency}{} implies the {\em \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{}} condition, which is a sufficient condition for \text{sd-}\allowbreak\text{efficiency}{} (similarly to~\citet{Wang19:Multi}), but not a necessary one ({\bf\Cref{prop:gc}}). We also prove that {\em every} assignment satisfying \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} can be computed by some algorithm in the family of {\em eating} algorithms ({\bf\Cref{thm:eps}}), of which \text{MPS}{} is a member. Importantly, \text{MPS}{} retains \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} ({\bf\Cref{prop:mps}}), and when agents' preferences are further assumed to be lexicographic, \text{MPS}{} satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} ({\bf\Cref{thm:mpssp}}). Finally, we characterize \text{MPS}{} by \text{leximin-}\allowbreak\text{optimality}{} and \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{}, respectively ({\bf\Cref{thm:char}}). However, the output of \text{MPS}{} is not always decomposable{} (\Cref{rm:mpsnreal}), even under lexicographic preferences, making it unsuitable for \mtaps{} with indivisible items. \vspace{0.5em} \paragraph{\bf Related Work and Discussions.} To the best of our knowledge, our paper provides the first results on designing efficient and fair mechanisms for \mtaps{} {\em with indivisible items}.
Despite our impossibility theorem ({\bf\Cref{thm:imp}}), our LexiPS{} mechanism and its properties deliver the following positive message: {\em it is possible to design efficient and fair mechanisms for indivisible items under the natural domain restriction of lexicographic preferences}. Our results on the properties of \text{MPS}{} are complementary, and not directly comparable, to the results by~\citet{Wang19:Multi}, because we assume that agents' preferences are linear orders over bundles of items, whereas~\citet{Wang19:Multi} assumed partial orders. In particular, we prove that \text{MPS}{} satisfies \text{lexi-}\allowbreak\text{efficiency}{}, which is a stronger notion than \text{sd-}\allowbreak\text{efficiency}{}, for the unrestricted domain of linear orders, and we prove that \text{MPS}{} satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} when agents' preferences are lexicographic (w.r.t.~possibly different importance orders). In contrast,~\citet{Wang19:Multi} proved that \text{MPS}{} satisfies \text{sd-}\allowbreak\text{efficiency}{} for the unrestricted domain of partial orders, and satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} when agents' preferences are CP-nets with a common dependency structure. \mtaps{} were first introduced and discussed by~\citet{Moulin95:Cooperative}, and were recently explicitly formulated in the form presented in this paper by~\citet{Mackin2016:Allocating}, who provided a characterization of serial dictatorships satisfying strategyproofness, neutrality, and non-bossiness for \mtaps{}. In a similar vein, \citet{Sikdar18:Top,Sikdar2017:Mechanism}~considered multi-type housing markets~\cite{Moulin95:Cooperative}. A related problem setting is one where agents may demand multiple units of items.~\citet{Hatfield09:Strategy-proof} considered agents with multi-unit demands, but that setting has a different combinatorial structure from ours.
\citet{Fujita2015:A-Complexity}~considered the exchange economy with multi-unit consumption. However, in each of these works, items are never shared, and agents must be assigned a whole bundle. The work by~\citet{Ghodsi11:Dominant} is an exception, as they consider the problem of allocating shares of multiple types of divisible resources, but it is not comparable to ours because resources of the same type are indistinguishable in their setting. \mtaps{} with divisible items may also be viewed as a version of the cake-cutting problem~\cite{Steinhaus48:Problem,brams1996fair,robertson1998cake,even1984note,edmonds2006cake,procaccia2015cake}, with $p$ cakes and agents having ordinal preferences over combinations of pieces from each cake. Lexicographic preferences are a natural restriction on the preference domain in resource allocation~\cite{Sikdar18:Top,Sikdar2017:Mechanism,Fujita2015:A-Complexity} and combinatorial voting~\citep{Lang12:Aggregating,Xia10:Strategy,Booth10:Learning}.~\citet{Saban14:Note} showed that PS is efficient, envy-free, and strategyproof under lexicographic preferences over allocations.~\citet{Fujita2015:A-Complexity} considered the allocation problem that allows agents to receive multiple items, where agents rank groups of items lexicographically. Our work follows this research agenda of imposing natural domain restrictions on agents' preferences to circumvent impossibility results in guaranteeing efficiency and fairness. \mtaps{} belong to a more general line of research on mechanism design known as multi-agent resource allocation (see \citet{Chevaleyre06:Issues} for a survey), where the literature focuses on the problem where items are of a single type. Early research focused mainly on developing ``discrete'' mechanisms for indivisible items, where each item is assigned fully to a single agent~\cite{Gale62:College,Kojima09:Axioms,Shapley74:Cores,Roth77:Weak,Abdulkadiroglu99:House}.
However, discrete mechanisms often fail to simultaneously satisfy efficiency and fairness. Fractional mechanisms provide stronger efficiency and fairness guarantees simultaneously. For example, the random priority (RP) mechanism~\cite{Abdulkadiroglu98:Random} outputs fractional assignments, and satisfies \text{ex-post-}\allowbreak\text{efficiency}{}, \text{sd-}\allowbreak\text{weak-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}, and \text{sd-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}. Such fractional mechanisms can be applied to both divisible and indivisible items in the single-type setting ($p=1$), due to the Birkhoff-von Neumann theorem, which implies that every fractional assignment is decomposable{}. \citet{Bogomolnaia01:New}~proposed the PS mechanism, a fractional mechanism satisfying \text{sd-}\allowbreak\text{efficiency}{}, \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}, and \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}. PS uniquely possesses the important properties of \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} under the restriction of {\em bounded invariance}~\cite{Bogomolnaia12:Probabilistic,Bogomolnaia15:Random}, and is the only mechanism satisfying ordinal fairness and non-wastefulness~\cite{Hashimoto14:Two}. Moreover, \citet{Bogomolnaia15:Random}~characterized PS as leximin-maximizing the vector describing cumulative shares at each item~\cite{Aziz14:Generalization,bogomolnaia2005collective}, which reflects that PS is egalitarian in attempting to equalize agents' shares of their top-ranked choices. The remarkable properties of PS have encouraged several extensions: to the full preference domain allowing indifferences~\cite{Katta06:Solution,Heo15:Characterization}, to multi-unit demands~\cite{Heo14:Probabilistic}, and to housing markets~\cite{Athanassoglou11:House,Yilmaz2009:Random}.
In the settings above, PS usually retains only some of its original properties and loses strategyproofness~\cite{Katta06:Solution,Heo14:Probabilistic,Yilmaz10:Probabilistic,Athanassoglou11:House}. \vspace{0.5em} \paragraph{\bf Structure of the paper.} The rest of the paper is organized as follows. In Section~\ref{Preliminaries}, we define the \mtap{} problem and provide definitions of desirable efficiency and fairness properties. Section~\ref{Impossibility Result} presents the impossibility result for \mtaps{} with indivisible items. In Section~\ref{LPS}, we propose LexiPS{} for \mtaps{} with indivisible items under lexicographic preferences, which satisfies \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}, and is \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proof}{} when agents do not lie about their importance orders. In Section~\ref{MPS}, we establish the properties of \text{MPS}{} for \mtaps{} with divisible items under linear preferences, and provide two characterizations of \text{MPS}{}. In Section~\ref{Conclusion}, we summarize the contributions of our paper and discuss directions for future work. \section{Preliminaries} \label{Preliminaries} Let $N=\{1,\dots,n\}$ be the set of agents, and $M=D_1\cup\dots\cup D_p$ be the set of items. For each $i\le p$, $D_i$ is a set of $n$ items of type $i$, and for all $\hat{i}\neq i$, we have $D_i\cap D_{\hat{i}}=\emptyset$. There is one unit of {\em \text{supply}{}} of each item in $M$. We use $\mathcal{D}=D_1\times\dots\times D_p$ to denote the set of {\em bundles}. Each bundle ${\bf{x}}\in\mathcal{D}$ is a $p$-vector whose $i$-th component is an item of type $i$. We use $o \in {\bf{x}}$ to indicate that bundle ${\bf{x}}$ contains item $o$. In an \mtap{}, each agent demands, and is allocated, one unit of an item of each type.
A {\em preference profile} is denoted by $R=(\succ_j)_{j\le n}$, where $\succ_j$ represents agent $j$'s preference, a {\em strict linear order} over $\mathcal{D}$. Let $\mathcal{R}$ be the set of all possible preference profiles. A {\em \ram{} allocation} is a $|\mathcal{D}|$-vector describing the fractional share of each bundle allocated to an agent. Let $\Pi$ be the set of all possible \ram{} allocations. For any $p\in\Pi$ and ${\bf{x}}\in\mathcal{D}$, we use $p_{{\bf{x}}}$ to denote the share of ${\bf{x}}$ assigned by $p$. A {\em \ram{} assignment} is an $n\times|\mathcal{D}|$ matrix $P=[p_{j,{\bf{x}}}]_{j\le n, {\bf{x}}\in\mathcal{D}}$, where \begin{enumerate*}[label=(\roman*)] \item for each $j\le n$ and ${\bf{x}}\in\mathcal{D}$, $p_{j,{\bf{x}}}\in[0,1]$ is the fractional share of ${\bf{x}}$ allocated to agent $j$, \item for every $j\le n$, $\sum_{{\bf{x}}\in\mathcal{D}}p_{j,{\bf{x}}}=1$, fulfilling the demand of each agent, \item for every $o\in M$, letting $S_o=\{{\bf{x}}\in\mathcal{D}: o\in{\bf{x}}\}$, we have $\sum_{j\le n,{\bf{x}} \in S_o}p_{j,{\bf{x}}}=1$, respecting the unit supply of each item. \end{enumerate*} For each $j\le n$, row $j$ of $P$, denoted by $P_j$, represents agent $j$'s \ram{} allocation under $P$. We use $\mathcal{P}$ to denote the set of all possible \ram{} assignments. A {\em discrete assignment} $A$ is an assignment where each agent is assigned a unit share of a bundle, and each item is fully allocated to some agent\footnote{For indivisible items, discrete assignments correspond to the deterministic assignments in the literature on randomized assignment.}. It follows that a discrete assignment is represented by a matrix in which each element is either $0$ or $1$. We use $\mathcal{A}$ to denote the set of all discrete assignment matrices. A {\em mechanism} $f$ is a mapping from preference profiles to \ram{} assignments.
For any profile $R\in\mathcal{R}$, we use $f(R)$ to refer to the \ram{} assignment output by $f$, and for any agent $j\le n$ and any bundle ${\bf{x}}\in\mathcal{D}$, $f(R)_{j,{\bf{x}}}$ refers to the element of the matrix indexed by $j$ and ${\bf{x}}$. \subsection{\textbf{Desirable Properties}} \label{Properties} \noindent We use the notion of stochastic dominance to compare fractional assignments, and adopt the desired notions of efficiency and fairness in the \mtap{} setting from~\citet{Wang19:Multi}. \begin{definition}{\bf(stochastic dominance~\cite{Wang19:Multi})}\label{dfn:sd} Given a preference $\succ$ over $\mathcal{D}$, the {\em stochastic dominance} relation associated with $\succ$, denoted $\sd{\null}$, is a partial ordering over $\Pi$ such that for any pair of \ram{} allocations $p,q\in\Pi$, $p$ {\em weakly stochastically dominates} $q$, denoted $p\sd{\null} q$, if and only if for every ${\bf{x}}\in\mathcal{D}$, $\sum_{\hat{\bf{x}}\in\ucs(\succ,{\bf{x}})}p_{\hat{\bf{x}}}\ge\sum_{\hat{\bf{x}}\in\ucs(\succ,{\bf{x}})}q_{\hat{\bf{x}}}$, where $\ucs(\succ,{\bf{x}})=\{\hat{\bf{x}}:\hat{\bf{x}} \succ {\bf{x}}\}\cup\{{\bf{x}} \}$. \end{definition} The stochastic dominance order can also be extended to fractional assignments. For $P,Q\in\mathcal{P}$ and $j\le n$, we assume that agent $j$ only cares about her own allocations $P_j,Q_j$. If $P_j\sd{j} Q_j$, she weakly prefers $P$ to $Q$, i.e. $P\sd{j} Q$. Therefore, we say that $P$ weakly stochastically dominates $Q$, denoted $P\sd{\null}Q$, if $P\sd{j}Q$ for every $j\leq n$. It is easy to prove that $P\sd{j} Q$ and $Q\sd{j} P$ hold simultaneously if and only if $P_j=Q_j$. \begin{definition}{\bf(\text{sd-}\allowbreak\text{efficiency}{}~\cite{Wang19:Multi})}\label{dfn:sdopt} Given an \mtap{} $(N,M,R)$, a \ram{} assignment $P$ is \text{sd-}\allowbreak\text{efficient}{} if there is no other \ram{} assignment $Q\neq P$ such that $Q\sd{j}P$ for every $j\le n$.
Correspondingly, if for every $R\in\mathcal{R}$, $f(R)$ is \text{sd-}\allowbreak\text{efficient}{}, then mechanism $f$ satisfies \text{sd-}\allowbreak\text{efficiency}. \end{definition} \begin{definition}{\bf(\text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}~\cite{Wang19:Multi})}\label{dfn:sdef} Given an \mtap{} $(N,M,R)$, a \ram{} assignment $P$ is \text{sd-}\allowbreak\text{envy-}\allowbreak\text{free}{} if for every pair of agents $j,\hat j\le n$, $P_j\sd{j}P_{\hat j}$. Correspondingly, if for every $R\in\mathcal{R}$, $f(R)$ is \text{sd-}\allowbreak\text{envy-}\allowbreak\text{free}{}, then mechanism $f$ satisfies \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}. \end{definition} \begin{definition}{\bf(\text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}~\cite{Wang19:Multi})}\label{dfn:sdsp} Given an \mtap{} $(N,M,R)$, a mechanism $f$ satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} if for every profile $R\in\mathcal{R}$, every agent $j\le n$, and every $R'\in\mathcal{R}$ such that $R'=(\succ'_j,\succ_{-j})$, where $\succ_{-j}$ is the preferences of the agents in the set~$N\setminus\{j\}$, it holds that: \begin{equation*} f(R')\sd{j}f(R)\implies f(R')_j=f(R)_j. \end{equation*} \end{definition} Besides stochastic dominance, we introduce the {\em lexicographic dominance} relation to compare pairs of fractional allocations, by comparing the components of their respective vector representations one by one according to the agent's preference. \begin{definition} {\bf(lexicographic dominance)} Given a preference $\succ$ and a pair of allocations $p$ and $q$, $p$ lexicographically dominates $q$, denoted $p\ld{}q$, if and only if there exists a bundle ${\bf{x}}$ such that $p_{{\bf{x}}}>q_{{\bf{x}}}$, and for each $\hat{\bf{x}}\succ{\bf{x}}$, $p_{\hat{\bf{x}}}\ge q_{\hat{\bf{x}}}$.
\end{definition} Just like stochastic dominance, given assignments $P$ and $Q$, we say $Q \ld{} P$ if $Q_j \ld{j} P_j$ ($Q \ld{j} P$ for short) for every agent $j$. Note that stochastic dominance implies lexicographic dominance, but lexicographic dominance does not imply stochastic dominance. \begin{definition} {\bf(\text{lexi-}\allowbreak\text{efficiency}{})} Given a preference profile $R$, a fractional assignment $P$ is \text{lexi-}\allowbreak\text{efficient}{} if there is no $Q \in \mathcal{P}$ s.t. $Q \ld{} P$. A fractional assignment algorithm $f$ satisfies \text{lexi-}\allowbreak\text{efficiency}{} if $f(R)$ is \text{lexi-}\allowbreak\text{efficient}{} for every $R \in \mathcal{R}$. \end{definition} \text{Leximin-}\allowbreak\text{optimality}{} requires that the output of a mechanism leximin-maximizes the vector describing cumulative shares at each item~\cite{Aziz14:Generalization,bogomolnaia2005collective}, which reflects the egalitarian nature of the mechanism in attempting to equalize agents' shares of their top-ranked choices. \begin{definition}{\bf(\text{leximin-}\allowbreak\text{optimality}{})}\label{dfn:leximin} For any vector ${\bf{u}}$ of length $k$, let ${\bf{u}}^*=(u_1^*,u_2^*,\dots,u_k^*)$ be its transformation into the $k$-vector of ${\bf{u}}$'s components sorted in ascending order. Let $L$ be the {\em leximin relation}, where for any two vectors ${\bf{u}}, {\bf{v}}$, we say $({\bf{u}},{\bf{v}})\in L$ if ${\bf{u}}^*\ld{}{\bf{v}}^*$. For any \ram{} assignment $P$, let ${\bf{u}}^P=(u^P_{j,{\bf{x}}})_{j\le n, {\bf{x}}\in\mathcal{D}}$, where for each agent $j\le n$ and bundle ${\bf{x}}\in\mathcal{D}$, $u^P_{j,{\bf{x}}}=\sum_{\hat{\bf{x}}\in\ucs(\succ_j,{\bf{x}})}{p_{j,\hat{\bf{x}}}}$. A \ram{} assignment $P$ is {\em \text{leximin-}\allowbreak\text{optimal}{}} if, for every other assignment $Q$, $({\bf{u}}^P,{\bf{u}}^Q)\in L$.
\end{definition} \text{Item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{} involves the comparison of cumulative shares. In contrast to \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}, the upper contour sets in \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{} depend on the agents' respective preferences, and the bundles determining the sets need only share a certain item. \begin{definition} {\bf (\text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{})} A fractional assignment $P$ is \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fair}{} if the following holds: for any ${\bf{x}}$ with a positive share for agent $j$, i.e. $p_{j,{\bf{x}}}>0$, there exists $o\in{\bf{x}}$ such that for any $\hat{\bf{x}}$ containing $o$ and any agent $k$, if $p_{k,\hat{\bf{x}}}>0$, then $\sum_{{\bf{x}}'\in\ucs(\succ_k,\hat{\bf{x}})}{p_{k,{\bf{x}}'}}\le\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{p_{j,{\bf{x}}'}}$. A fractional assignment algorithm $f$ satisfies \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{} if $f(R)$ is \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fair}{} for every $R \in \mathcal{R}$. \end{definition} \section{Efficiency and Fairness for \mtaps{} with Indivisible Items} \label{Impossibility Result} In this section, we show an impossibility result in \Cref{thm:imp}: no mechanism satisfying \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} and \text{sd-}\allowbreak\text{efficiency}{} is guaranteed to output decomposable{} assignments. This is unlike the case of resource allocation problems with a single type of items, where by the Birkhoff-von Neumann theorem every \ram{} assignment is decomposable{}, i.e. every \ram{} assignment can be decomposed as \begin{equation*} P=\sum_{A_k\in\mathcal{A}}{\alpha_k\times A_k}. \end{equation*} Here each $A_k$ is a discrete assignment that assigns every item wholly to some agent.
Observe that $\sum{\alpha_k}=1$. It follows that such a decomposable{} assignment can be applied to allocating indivisible items by issuing a lottery over the discrete assignments, where $\alpha_k$ is the probability that $A_k$ is selected. This is no longer guaranteed in \mtaps{}, which leads to the impossibility result. \begin{theorem}\label{thm:imp} For any \mtaps{} with $p\ge 2$, where agents are allowed to submit any linear order over bundles, no mechanism that satisfies \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} always outputs decomposable{} assignments. \end{theorem} \begin{proof} Suppose for the sake of contradiction that there exists a mechanism $f$ satisfying \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} such that $f(R)$ is decomposable{} for every $R\in\mathcal{R}$. Let $R$ be the following preference profile and $Q=f(R)$. \begin{center} \centering \begin{tabular}{|c|c|} \hline Agent & Preferences \\\hline 1 & $1_F1_B\succ_11_F2_B\succ_12_F2_B\succ_12_F1_B$ \\ 2 & $1_F2_B\succ_22_F1_B\succ_21_F1_B\succ_22_F2_B$ \\\hline \end{tabular} \end{center} We show that if $Q$ is \text{sd-}\allowbreak\text{envy-}\allowbreak\text{free}{} and decomposable{}, it fails to satisfy \text{sd-}\allowbreak\text{efficiency}{}. There are only four discrete assignments, which assign $1_F1_B,1_F2_B,2_F1_B,2_F2_B$ to agent $1$ respectively. Since $Q$ is decomposable{}, it can be represented as the following assignment. We also provide an assignment $P$ which is not decomposable{}, since it does not satisfy the constraints imposed on $Q$.
\vspace{1em}\noindent \begin{minipage}{\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0.5 & 0 & 0 & 0.5 \\ 2 & 0 & 0.5 & 0.5 & 0 \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline \multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$Q$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & $v$ & $w$ & $y$ & $z$ \\ 2 & $z$ & $y$ & $w$ & $v$ \\ \hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} Here $v,w,y,z$ are the probabilities of the four discrete assignments, and $v+w+y+z=1$. Due to \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}, we have the following inequalities for agent $1$. It is easy to see that $\sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F1_B)}{q_{1,{\bf{x}}}}=1=\sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F1_B)}{q_{2,{\bf{x}}}}$. In addition, \begin{displaymath} \begin{split} \sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F1_B)}{q_{1,{\bf{x}}}}=&v \geq z=\sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F1_B)}{q_{2,{\bf{x}}}} \\ \sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F2_B)}{q_{1,{\bf{x}}}}=&v+w\geq z+y=\sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F2_B)}{q_{2,{\bf{x}}}}\\ \sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F2_B)}{q_{1,{\bf{x}}}}=&v+w+z\geq z+y+v=\sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F2_B)}{q_{2,{\bf{x}}}} \end{split} \end{displaymath} Similarly, for agent $2$, we have $y \geq w$ and $y+w+z\geq w+y+v$. Thus $w=y,v=z,v+w=y+z=0.5$. Because $Q$ is \text{sd-}\allowbreak\text{efficient}, $P\nsd{\null}Q$. Suppose that $P\nsd{1}Q$.
Therefore at least one of the following inequalities is true: \begin{equation} \label{prfir2} \begin{split} \sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F1_B)}{q_{1,{\bf{x}}}}=&v> 0.5 = \sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F1_B)}{p_{1,{\bf{x}}}} \\ \sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F2_B)}{q_{1,{\bf{x}}}}=&v+w> 0.5 = \sum_{{{\bf{x}}} \in \ucs(\succ_1,1_F2_B)}{p_{1,{\bf{x}}}} \\ \sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F2_B)}{q_{1,{\bf{x}}}}=&v+w+z> 1 = \sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F2_B)}{p_{1,{\bf{x}}}} \\ \sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F1_B)}{q_{1,{\bf{x}}}}=&v+w+z+y> 1 = \sum_{{{\bf{x}}} \in \ucs(\succ_1,2_F1_B)}{p_{1,{\bf{x}}}} \end{split} \end{equation} Since $v=z\leq v+w=y+z=0.5$, none of the inequalities in (\ref{prfir2}) holds. Thus we have $P\sd{1}Q$. By a similar analysis, we also obtain $P\sd{2}Q$. Therefore $P\sd{\null}Q$ and $P\neq Q$, which contradicts the \text{sd-}\allowbreak\text{efficiency}{} of $Q$.\qed \end{proof} \begin{remark}\label{rm:tightimp} \Cref{thm:imp} can be tightened: it still holds under LP-tree preferences \cite{Booth10:Learning,Sikdar18:Top}, a subdomain of the strict linear preferences, with \text{sd-}\allowbreak\text{efficiency}{} weakened to \text{sd-}\allowbreak\text{weak-}\allowbreak\text{efficiency}{} \cite{hashimoto2011characterizations,Bogomolnaia12:Probabilistic} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} weakened to \text{sd-}\allowbreak\text{weak-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} \cite{Bogomolnaia01:New}. In an LP-tree preference profile, each agent's preference is represented as a rooted directed tree in which each node is labeled by a type and a conditional preference table. On each path from the root to a leaf, every type occurs exactly once. For a node labeled by type $t$, its conditional preference table is a strict linear order over the items in $D_t$, and each outgoing edge is labeled by one of these items.
The \text{sd-}\allowbreak\text{weak-}\allowbreak\text{efficiency}{} property only requires that no two agents can improve their allocations by exchanging shares between themselves. The \text{sd-}\allowbreak\text{weak-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} property requires that no other agent's allocation sd-dominates an agent's own allocation unless the two allocations are equal. Here we give brief definitions of these two properties; the proof of the tighter result is in \Cref{sec:rm:tightimp}. {\bf \text{sd-}\allowbreak\text{weak-}\allowbreak\text{efficiency}{}:} The mechanism $f$ satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{efficiency}{} if for any $R\in\mathcal{R}$, there is no \ram{} assignment $P\neq f(R)$ such that $P\sd{j}f(R)$ for every $j\le n$ and $|\{j\in N: P_j\neq f(R)_j\}|\leq2$. {\bf \text{sd-}\allowbreak\text{weak-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}:} The mechanism $f$ satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} if for any $R\in\mathcal{R}$ with $P=f(R)$ and every pair of agents $j,\hat j\le n$, $P_{\hat j}\sd{j}P_j\Rightarrow P_{\hat j}=P_j$. \end{remark} \section{\mtaps{} with Indivisible Items and Lexicographic Preferences} \label{LPS} In this section, we first introduce lexicographic preferences, and then develop LexiPS{} as a specialized mechanism for \mtaps{} where the items are indivisible and agents' preferences are restricted to the lexicographic domain, and show that it retains the good properties of PS. Faced with the impossibility result of \Cref{thm:imp}, we ask whether it can be circumvented by some reasonable restriction on the preference domain; guided by the result in \Cref{rm:tightimp}, we choose lexicographic preferences. An agent with a lexicographic preference compares two bundles by comparing their items of each type one by one, following the importance order on types.
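The comparison just described can be sketched in code. This is a minimal illustration under our own naming conventions (bundles map each type to its item; smaller index means a better item within a type):

```python
def lex_prefers(bundle_a, bundle_b, importance, pref_in_type):
    """True iff bundle_a is strictly lexicographically preferred to bundle_b.

    Scan types in importance order; the first type whose items differ
    decides the comparison.
    """
    for t in importance:
        a, b = bundle_a[t], bundle_b[t]
        if a != b:
            return pref_in_type[t].index(a) < pref_in_type[t].index(b)
    return False  # identical bundles

# Importance order F before B, with 1F > 2F and 2B > 1B; this yields
# the ranking 1F2B > 1F1B > 2F2B > 2F1B.
importance = ["F", "B"]
pref_in_type = {"F": ["1F", "2F"], "B": ["2B", "1B"]}
```

Note that with these orders $1_F1_B$ beats $2_F2_B$ even though its item of type $B$ is worse, because the more important type $F$ decides.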
In short, the agent takes the importance of types into consideration while ranking bundles. We say that an \mtap{} is under the restriction of lexicographic preferences if for every agent $j\le n$, $\succ_j$ is a lexicographic preference. We give its formal definition below, using the notation $\type{i}{{\bf{x}}}$: given any bundle ${\bf{x}}\in\mathcal{D}$ and any type $i\le p$, $\type{i}{{\bf{x}}}$ refers to the item of type $i$ in ${\bf{x}}$. \begin{definition}\label{dfn:lex} (\textbf{lexicographic preference}) Given an \mtap{}, the preference relation $\succ_j$ of agent $j\le n$ is lexicographic if there exist \begin{enumerate*}[label=(\roman*)] \item an importance order, i.e. a strict linear order $\impord{j}$ over the $p$ types, and \item for each type $i\le p$, a strict linear order $\prefintype{j}{i}$ over $D_i$, \end{enumerate*} such that for every pair ${\bf{x}},\hat{\bf{x}} \in \mathcal{D}$, ${\bf{x}} \succ_j \hat{\bf{x}}$ if and only if there exists a type $i$ s.t. $\type{i}{{\bf{x}}} \prefintype{j}{i} \type{i}{\hat{\bf{x}}}$, and for all $\hat i \impord{j} i$, $\type{\hat i}{{\bf{x}}}=\type{\hat i}{\hat{\bf{x}}}$. \end{definition} For example, the preference $1_F2_B \succ 1_F1_B \succ 2_F2_B \succ 2_F1_B$ is a lexicographic preference represented by the importance order $F\impord{\null} B$ and the strict linear orders $1_F \prefintype{\null}{F} 2_F$ and $2_B \prefintype{\null}{B} 1_B$ for the two types. We also note that although lexicographic preference and lexicographic dominance look similar, lexicographic preference is used to rank bundles in agents' preferences, while lexicographic dominance is used to compare allocations or assignments consisting of shares of bundles. \subsection{\textbf{Algorithm for LexiPS{}}} Before going any further with LexiPS{}, we introduce some notation for ease of exposition. We use $\srd{P}{i}$ to denote the \ram{} assignment of items of type $i$ under $P$.
$\srd{P}{i}$ is a $|N| \times |D_i| $ matrix and for any $i\le p$, any $o\in D_i$, $\srd{p}{i}_{j,o}=\sum_{o\in {{\bf{x}}}, {{\bf{x}}}\in \mathcal{D}}p_{j,{\bf{x}}}$ represents the total share of bundles containing $o$ consumed by agent $j$. To distinguish them from single-type \ram{} assignments, we refer to the ones for \mtaps{} as multi-type \ram{} assignments. Besides, for $o\in D_i$, we use the upper contour set $\ucs(\prefintype{}{i},o)$ to refer to the items of type $i$ that are better than or equal to $o$ regarding $\prefintype{}{i}$. \begin{algorithm} \begin{algorithmic}[1] \State {\bf Input:} An \mtap{} $(N,M)$, a lexicographic preference profile $R$. \State For every $o\in M$, $\text{{\em supply}}(o)\gets 1$. For every $i \leq p$, $\srd{P}{i}\gets 0^{n\times |D_i|}$. $P\gets 0^{n\times|\mathcal{D}|}$. \Loop\ $p$ times \State {\bf Identify top type} $i_j$ for every agent $j\le n$. //For the $k$th loop, $i_j$ is the $k$th type regarding $\impord{j}$. \For{$i\le p$} \State $t \gets 0$. \State $N^i=\{j\le n|i_j=i\}$. \While{$t<1$} \State {\bf Identify top item} $\topb{j}{i}$ in type $i$ for every agent ${j\in N^i}$. \parState{ {\bf Consume.} \begin{itemize}[topsep=0em,partopsep=0em] \item[10.1:] For each $o\in D_i$, ${\con{o}{\null}\gets|\{j\in N^i:\topb{j}{i}=o\}|}$. \item[10.2:] $\prog{\null}\gets \min_{o\in D_i}\frac{\text{{\em supply}}(o)}{\con{o}{\null}}$. \item[10.3:] For each $j\in N^i$, $\srd{p}{i}_{j,\topb{j}{i}}\gets \srd{p}{i}_{j,\topb{j}{i}}+\prog{\null}$. \item[10.4:] For each $o\in D_i$, $\text{{\em supply}}(o)\gets\text{{\em supply}}(o)-{\prog{\null}\times \con{o}{\null}}$. \item[10.5:] $t \gets t + \prog{\null}$. \end{itemize}} \EndWhile \EndFor \EndLoop\vspace{-1em} \State For every $j\leq n,{\bf{x}}\in\mathcal{D}$, $p_{j,{\bf{x}}}=\prod_{o=\type{i}{{\bf{x}}},i\leq p}{\srd{p}{i}_{j,o}}$.
\State \Return $P$ \end{algorithmic} \caption{\label{alg:lps} LexiPS{}} \end{algorithm} In the LexiPS{} mechanism, agent $j$ always consumes her favorite available item $o_j$ in the current most important type. Agent $j$ does not stop consuming $o_j$ unless one of the following occurs: \begin{enumerate}[label=(\roman*),itemindent=2em,topsep=0em] \item $o_j$ is exhausted, and then agent $j$ will consume her next favorite available item according to $\prefintype{j}{i}$; \item $\sum_{o\in D_i}{\srd{p}{i}_{j,o}}=1,o_j \in D_i$, and then agent $j$ will turn to her next most important type $\hat i$ according to $\impord{j}$ and consume her favorite available item in that type. \end{enumerate} \hspace{1.5em}After consumption, we obtain $\srd{P}{i}$ for every type $i \leq p$. With the assumption that allocations among different types are independent, we construct $P$ by \begin{equation}\label{eq:lps:share} p_{j,{\bf{x}}}=\prod_{o=\type{i}{{\bf{x}}},i\leq p}{\srd{p}{i}_{j,o}}. \end{equation} We divide the execution of LexiPS{} into $p$ phases, in each of which every agent only consumes items in her current most important type $i_j$, and the time $t$ for each phase runs up to one unit. At the beginning of each phase, we set the timer $t=0$. During the consumption, agent $j$ first identifies her most preferred {\em unexhausted} item $\topb{j}{i}$ in her current most important type $i$ regarding $\prefintype{j}{i}$. Here we say an item $o$ is exhausted if its supply $\text{{\em supply}}(o)=0$. Each agent consumes her item at a uniform rate of one unit per unit of time. The consumption pauses whenever one of the items being consumed becomes exhausted. At that point, agent $j$'s share of $\topb{j}{i}$ is increased by $\prog{\null}$, the duration since the last pause, and the supply $\text{{\em supply}}(o)$ is decreased by $\prog{\null}\times \con{o}{\null}$, where $\con{o}{\null}$ is the number of agents $j$ with $\topb{j}{i}=o$.
In \Cref{alg:lps}, $\prog{\null}$ is computed as $\min_{o\in D_i}\frac{\text{{\em supply}}(o)}{\con{o}{\null}}$. After that we increase the timer $t$ by $\prog{\null}$, identify $\topb{j}{i}$ for each agent, and continue the consumption. The current phase ends when the timer $t$ reaches $1$, and the algorithm starts the next phase. We show in \Cref{thm:lps} that LexiPS{} is able to deal with indivisible items while retaining the good properties. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Fig1.png} \caption{Execution of LexiPS{} in \Cref{eg:lps}.} \label{fig:lps} \end{figure} \begin{example}\label{eg:lps} Consider an \mtap{} where $N=\{1,2,3\}$, $M=D_F\times D_B, D_F=\{1_F,2_F,3_F\}$, $D_B=\{1_B,2_B,3_B\}$, and the profile $R=\{\succ_1,\succ_2,\succ_3\}$. The preferences $\succ_1,\succ_2,\succ_3$ are as follows: \begin{center} \begin{tabular}{|c|c|}\hline Agent & Preferences \\\hline 1 & $F\impord{1}B, 1_F\prefintype{1}{F}2_F\prefintype{1}{F}3_F,1_B\prefintype{1}{B}2_B\prefintype{1}{B}3_B$ \\ 2 & $F\impord{2}B, 1_F\prefintype{2}{F}2_F\prefintype{2}{F}3_F,1_B\prefintype{2}{B}3_B\prefintype{2}{B}2_B$ \\ 3 & $B\impord{3}F, 1_F\prefintype{3}{F}2_F\prefintype{3}{F}3_F,2_B\prefintype{3}{B}3_B\prefintype{3}{B}1_B$\\\hline \end{tabular} \end{center} The execution of LexiPS{} is shown in {\Cref{fig:lps}}. In loop $1$, agents $1$ and $2$ consume items in $D_F$, while agent $3$ consumes alone in $D_B$. Therefore agent $3$ fully obtains her favorite item $2_B$ in $D_B$, and $1_B$ and $3_B$ are left. Since agents $1$ and $2$ have the same preference for $D_F$, they each obtain $0.5$ units of $1_F$ and $0.5$ units of $2_F$, and $3_F$ is left. Similarly, in loop $2$, agents $1$ and $2$ turn to type $B$ while agent $3$ turns to type $F$. Then agent $3$ gets the remaining item $3_F$, and agents $1$ and $2$ divide $1_B$ and $3_B$ evenly according to their preferences.
\begin{center} \centering \begin{tabular}{|c|ccccc|}\hline \multirow{2}{*}{Agent} & \multicolumn{5}{c|}{$P$}\\\cline{2-6} & $1_F1_B$ & $1_F3_B$ & $2_F1_B$ & $2_F3_B$ & $3_F2_B$\\\hline 1 & 0.25 & 0.25 & 0.25 & 0.25 & 0 \\ 2 & 0.25 & 0.25 & 0.25 & 0.25 & 0 \\ 3 & 0 & 0 & 0 & 0 & 1 \\\hline \end{tabular} \end{center} The consumption above results in the final assignment $P$, and it is easy to check that $P$ is decomposable{}. \end{example} \subsection{\textbf{Properties}} \Cref{thm:lps} shows that, as an extension of PS, LexiPS{} inherits efficiency and envy-freeness based on stochastic dominance in solving \mtaps{} with lexicographic preferences, and that its decomposable{} outputs make it able to deal with indivisible items. \begin{theorem}\label{thm:lps} For \mtaps{} with lexicographic preferences, LexiPS{} satisfies \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}. Moreover, LexiPS{} outputs decomposable{} assignments. \end{theorem} \begin{proof} Given an \mtap{} $(N,M)$ and a profile $R$ of lexicographic preferences, let $P=LexiPS(R)$. In the following proof, we use $\itemwt{i}$ to refer to an arbitrary item of type $i$, and $(\itemwt{i}*)$ to refer to any bundle containing $\itemwt{i}$. (1) (\textbf{\text{sd-}\allowbreak\text{efficiency}}) By contradiction, suppose there exists another assignment $Q$ with $Q\sd{}P$, i.e., every agent is better off or unaffected in $Q$, and an agent $j$ with $Q_j\neq P_j$. W.l.o.g., label the types according to $\impord{j}$ as $1 \impord{j} 2 \impord{j} \dots \impord{j} p$. We show that $Q_j=P_j$ by proving the following equation by induction: for any $i\leq p$ and any $\itemwt{1},\dots,\itemwt{i}$, \begin{equation}\label{eq:thm:lps:1} \sum_{{\bf{x}}=(\itemwt{1},\dots,\itemwt{i}*)}{p_{j,{\bf{x}}}}=\sum_{{\bf{x}}=(\itemwt{1},\dots,\itemwt{i}*)}{q_{j,{\bf{x}}}}. \end{equation} First, we prove (\ref{eq:thm:lps:1}) when $i=1$, i.e.
$\srd{Q}{1}_j=\srd{P}{1}_j$. If $\srd{Q}{1}\sd{j}\srd{P}{1}$ were false, there would exist $\hitemwt{1}$ such that, with $\hat{\bf{x}}$ the least preferred bundle containing $\hitemwt{1}$ regarding $\succ_j$, \begin{equation*} \sum_{{\bf{x}}\in\ucs(\succ_j,\hat{\bf{x}})}{p_{j,{\bf{x}}}}= \sum_{o\in\ucs(\prefintype{j}{1},\hitemwt{1})}{\srd{p}{1}_{j,o}}> \sum_{o\in\ucs(\prefintype{j}{1},\hitemwt{1})}{\srd{q}{1}_{j,o}}= \sum_{{\bf{x}}\in\ucs(\succ_j,\hat{\bf{x}})}{q_{j,{\bf{x}}}}, \end{equation*} a contradiction to the assumption $Q\sd{j}P$. Therefore suppose $\srd{Q}{1}\sd{j}\srd{P}{1}$ and $\srd{Q}{1}_j\neq\srd{P}{1}_j$. In other words, $\srd{P}{1}$ can be improved to $\srd{Q}{1}$ by transferring shares of bundles. Let $N_1$ denote the set of agents who consume items of type~$1$ in Phase~$1$. Let $\hitemwt{1}$ be the least preferred item agent $j$ gets in $\srd{P}{1}$ according to $\prefintype{j}{1}$. From \Cref{alg:lps}, agents in $N_1$ follow the rules of PS in Phase~$1$, and therefore their partial assignment $\srd{P}{1}_{N_1}=(\srd{P}{1}_j)_{j\in N_1}$ is \text{sd-}\allowbreak\text{efficient}{}. Besides, agents not in $N_1$ only get bundles ${\bf{x}}$ with $\type{1}{{\bf{x}}}$ not better than $\hitemwt{1}$. By assumption there exists $\itemwt{1}$ such that $\srd{p}{1}_{j,\itemwt{1}}<\srd{q}{1}_{j,\itemwt{1}}$ and $\sum_{o\prefintype{j}{1}\itemwt{1}}{\srd{p}{1}_{j,o}}= \sum_{o\prefintype{j}{1}\itemwt{1}}{\srd{q}{1}_{j,o}}$, and therefore the share transfer either: \begin{enumerate}[label=(\roman*),itemindent=2em,topsep=0em] \item only involves agents in $N_1$, which contradicts the \text{sd-}\allowbreak\text{efficiency}{} of $\srd{P}{1}_{N_1}$; or \item involves agents not in $N_1$, for whom the best possible item of type $1$ w.r.t. $\prefintype{j}{1}$ is $\hitemwt{1}$.
If $\itemwt{1}\neq\hitemwt{1}$, the extra share of $\srd{q}{1}_{j,\itemwt{1}}$ comes from agents in $N_1$, which contradicts \text{sd-}\allowbreak\text{efficiency}{} as in Case (\romannumeral1). Therefore $\itemwt{1}=\hitemwt{1}$ and we have $\sum_{o\in D_1}{\srd{q}{1}_{j,o}}\geq \sum_{o\in\ucs(\prefintype{j}{1},\itemwt{1})}{\srd{q}{1}_{j,o}}> \sum_{o\in\ucs(\prefintype{j}{1},\itemwt{1})}{\srd{p}{1}_{j,o}}= \sum_{o\in D_1}{\srd{p}{1}_{j,o}}=1$, which is a contradiction. \end{enumerate} Therefore we have $\srd{Q}{1}_j=\srd{P}{1}_j$. Then we prove (\ref{eq:thm:lps:1}) for type $i>1$ under the induction hypothesis that for any $\itemwt{1},\dots,\itemwt{i-1}$, \begin{equation}\label{eq:thm:lps:2} \sum_{{\bf{x}}=(\itemwt{1},\dots,\itemwt{i-1}*)}{p_{j,{\bf{x}}}}= \sum_{{\bf{x}}=(\itemwt{1},\dots,\itemwt{i-1}*)}{q_{j,{\bf{x}}}}. \end{equation} It is easy to see that here (\ref{eq:thm:lps:1}) is equivalent to $\srd{Q}{i}=\srd{P}{i}$. We first show that, as for type $1$, necessarily $\srd{Q}{i}\sd{j}\srd{P}{i}$. For any $\hitemwt{1},\dots,\hitemwt{i}$, let $\hat{\bf{x}}$ be the least preferred bundle containing them regarding $\succ_j$. Let \begin{equation*} \begin{split} \mathcal{D}_i=&\{(\hitemwt{1},\dots,\hitemwt{i-1},\itemwt{i}*)|\itemwt{i}\in\ucs(\prefintype{j}{i},\hitemwt{i})\}, \\ \mathcal{D}'_i=&\{(\itemwt{1},\dots,\itemwt{i-1}*)|\itemwt{1}\in\ucs(\prefintype{j}{1},\hitemwt{1}),\dots,\itemwt{i-2}\in\ucs(\prefintype{j}{i-2},\hitemwt{i-2}),\itemwt{i-1}\prefintype{j}{i-1}\hitemwt{i-1}\}.
\end{split} \end{equation*} By assumption $Q\sd{j}P$, we have that \begin{equation}\label{eq:thm:lps:3} \begin{split} \sum_{{\bf{x}}\in\mathcal{D}_i}{p_{j,{\bf{x}}}}+\sum_{{\bf{x}}\in\mathcal{D}'_i}{p_{j,{\bf{x}}}}= \sum_{{\bf{x}}\in\ucs(\succ_j,\hat{\bf{x}})}{p_{j,{\bf{x}}}} \leq \sum_{{\bf{x}}\in\ucs(\succ_j,\hat{\bf{x}})}{q_{j,{\bf{x}}}}= \sum_{{\bf{x}}\in\mathcal{D}_i}{q_{j,{\bf{x}}}}+\sum_{{\bf{x}}\in\mathcal{D}'_i}{q_{j,{\bf{x}}}}. \end{split} \end{equation} With (\ref{eq:thm:lps:2}) we have $\sum_{{\bf{x}}\in\mathcal{D}'_i}{p_{j,{\bf{x}}}}=\sum_{{\bf{x}}\in\mathcal{D}'_i}{q_{j,{\bf{x}}}}$, and deduce from (\ref{eq:thm:lps:3}) that $\sum_{{\bf{x}}\in\mathcal{D}_i}{p_{j,{\bf{x}}}}\leq \sum_{{\bf{x}}\in\mathcal{D}_i}{q_{j,{\bf{x}}}}$. Summing both sides over $\hitemwt{1},\dots,\hitemwt{i-1}$, we have that: \begin{equation*} \sum_{\hitemwt{1}}\dots\sum_{\hitemwt{i-1}}\sum_{{\bf{x}}\in\mathcal{D}_i}{p_{j,{\bf{x}}}}\leq \sum_{\hitemwt{1}}\dots\sum_{\hitemwt{i-1}}\sum_{{\bf{x}}\in\mathcal{D}_i}{q_{j,{\bf{x}}}}. \end{equation*} By the definition of lexicographic preference, this is equivalent to: \begin{equation*} \sum_{{\bf{x}}\in\{(\itemwt{i}*)|\itemwt{i}\in\ucs(\prefintype{j}{i},\hitemwt{i})\}}{p_{j,{\bf{x}}}}\leq \sum_{{\bf{x}}\in\{(\itemwt{i}*)|\itemwt{i}\in\ucs(\prefintype{j}{i},\hitemwt{i})\}}{q_{j,{\bf{x}}}}. \end{equation*} It can also be written as: \begin{equation*} \sum_{\itemwt{i}\in\ucs(\prefintype{j}{i},\hitemwt{i})}{\srd{p}{i}_{j,\itemwt{i}}}\leq \sum_{\itemwt{i}\in\ucs(\prefintype{j}{i},\hitemwt{i})}{\srd{q}{i}_{j,\itemwt{i}}}. \end{equation*} That means $\srd{Q}{i}\sd{j}\srd{P}{i}$. With $\srd{Q}{i}\sd{j}\srd{P}{i}$, next we show that $\srd{Q}{i}\neq\srd{P}{i}$ leads to a contradiction, as for type $1$. Let $N_i$ denote the set of agents who consume items of type~$i$ in Phase~$i$, and $N'_i$ the set of agents who consume items of type~$i$ after Phase~$i$.
Notice that agents who obtained items of type $i$ before Phase $i$ are not considered, because they can never benefit by trading shares with agents in $N_i\cup N'_i$. Let $\hitemwt{i}$ be the least preferred item agent $j$ gets in $\srd{P}{i}$ according to $\prefintype{j}{i}$. As we have shown in the proof for type $1$, $\srd{P}{i}_{N_i}=(\srd{P}{i}_j)_{j\in N_i}$, the partial assignment for agents in $N_i$, is \text{sd-}\allowbreak\text{efficient}{} regarding the available items of type $i$ in Phase $i$, and agents in ${N'_i}$ only get bundles ${\bf{x}}$ with $\type{i}{{\bf{x}}}$ not better than $\hitemwt{i}$. By assumption there exists $\itemwt{i}$ such that $\srd{p}{i}_{j,\itemwt{i}}<\srd{q}{i}_{j,\itemwt{i}}$ and $\sum_{o\prefintype{j}{i}\itemwt{i}}{\srd{p}{i}_{j,o}}= \sum_{o\prefintype{j}{i}\itemwt{i}}{\srd{q}{i}_{j,o}}$, and therefore the share transfer either: \begin{enumerate}[label=(\roman*'),itemindent=2em,topsep=0em] \item only involves agents in $N_i$, which contradicts the \text{sd-}\allowbreak\text{efficiency}{} of $\srd{P}{i}_{N_i}$; or \item involves agents in $N'_i$, for whom the best possible item of type $i$ w.r.t. $\prefintype{j}{i}$ is $\hitemwt{i}$. If $\itemwt{i}\neq\hitemwt{i}$, the extra share of $\srd{q}{i}_{j,\itemwt{i}}$ comes from agents in $N_i$, which contradicts \text{sd-}\allowbreak\text{efficiency}{} as in Case (\romannumeral1'). Therefore $\itemwt{i}=\hitemwt{i}$ and we have $\sum_{o\in D_i}{\srd{q}{i}_{j,o}}\geq \sum_{o\in\ucs(\prefintype{j}{i},\itemwt{i})}{\srd{q}{i}_{j,o}}> \sum_{o\in\ucs(\prefintype{j}{i},\itemwt{i})}{\srd{p}{i}_{j,o}}= \sum_{o\in D_i}{\srd{p}{i}_{j,o}}=1$, which is a contradiction. \end{enumerate} Therefore we have the result for type $i$. By induction, $p_{j,{\bf{x}}}=q_{j,{\bf{x}}}$ for every ${\bf{x}}$, which contradicts $Q_j\neq P_j$.
(2) (\textbf{\text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}}) As agents spend one unit of time on each type, we divide the execution of LexiPS{} into $p$ phases by type, and the following proof proceeds by phases. We first claim that agent $j$ does not envy other agents who have the same importance order. For convenience, we label the types according to $\impord{j}$. Let $N_i$ be the set of agents who consume items of type $i$ in Phase $i$. Phase $i$ can be viewed as the execution of PS for the single-type allocation problem with agents in $N_i$ and the available items left in $D_i$ at Phase $i$. By \cite{Bogomolnaia01:New}, we know that PS satisfies \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}. Therefore we have that for any $k \in N_i$, $\sum_{o'\in\ucs(\prefintype{j}{i},o)}{\srd{p}{i}_{j,o'}} \geq \sum_{o'\in\ucs(\prefintype{j}{i},o)}{\srd{p}{i}_{k,o'}}$ for any $o \in D_i$, which also means $\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{p_{j,{\bf{x}}'}} \geq \sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{p_{k,{\bf{x}}'}}$ for any ${\bf{x}}\in\mathcal{D}$. We next claim that agent $j$ does not envy agents who have different importance orders. Assume by contradiction that $k$ is such an agent and there exists ${\hat{\bf{x}}} \in \mathcal{D}$ which satisfies \begin{equation}\label{eq:lpsensup} \sum_{{\bf{x}} \in \ucs(\succ_j,\hat{\bf{x}})}p_{j,{\bf{x}}} < \sum_{{\bf{x}} \in \ucs(\succ_j,\hat{\bf{x}})}p_{k,{\bf{x}}}. \end{equation} We prove the claim by induction. First we prove that agent $k$'s most important type is $1$. If not, then agent $k$ would consume $o \in D_1$ in a later phase. However, the items left in $D_1$ at Phase $i\neq1$ are no more preferable than those consumed by agent $j$. Thus for any $\hitemwt{1},\itemwt{1}$ satisfying $\srd{p}{1}_{j,\hitemwt{1}}>0,\srd{p}{1}_{k,\itemwt{1}}>0$, we have $\hitemwt{1}\prefintype{j}{1}{\itemwt{1}}$ or $\hitemwt{1}=\itemwt{1}$. We discuss the two cases separately.
Case (\romannumeral1): If $\hitemwt{1} \prefintype{j}{1} \itemwt{1}$ for any $\hitemwt{1},\itemwt{1}$ satisfying $\srd{p}{1}_{j,\hitemwt{1}}>0,\srd{p}{1}_{k,\itemwt{1}}>0$, then we obtain that for any $\hat{\bf{x}},{\bf{x}}$ with $p_{j,\hat{\bf{x}}}>0,p_{k,{\bf{x}}}>0$, we have ${\hat{\bf{x}}}\succ_j{{\bf{x}}}$ since $\type{1}{\hat{\bf{x}}} \prefintype{j}{1} \type{1}{{\bf{x}}}$. Let $\hat{\bf{x}}$ be the least preferred bundle according to $\succ_j$ with $p_{j,\hat{\bf{x}}}>0$, and we have that for any ${\bf{x}}'\in\ucs(\succ_j,\hat{\bf{x}})$, \begin{equation}\label{eq:lpsef1} \sum_{{\bf{x}}\in \ucs(\succ_j,{\bf{x}}')}p_{j,{\bf{x}}} \geq \sum_{{\bf{x}}\in \ucs(\succ_j,{\bf{x}}')}p_{k,{\bf{x}}} = 0. \end{equation} For any ${\bf{x}}'$ which satisfies $\hat{\bf{x}}\succ_j{\bf{x}}'$, \begin{equation}\label{eq:lpsef2} \sum_{{\bf{x}} \in \ucs(\succ_j,{\bf{x}}')}p_{j,{\bf{x}}} =1 \geq \sum_{{\bf{x}} \in \ucs(\succ_j,{\bf{x}}')}p_{k,{\bf{x}}}. \end{equation} Combining (\ref{eq:lpsef1}) and (\ref{eq:lpsef2}), we obtain that $\sum_{{\bf{x}} \in \ucs(\succ_j,{\bf{x}}')}p_{j,{\bf{x}}}\geq\sum_{{\bf{x}} \in \ucs(\succ_j,{\bf{x}}')}p_{k,{\bf{x}}}$ for any ${\bf{x}}'$, a contradiction to (\ref{eq:lpsensup}). Case (\romannumeral2): If there exists $\hitemwt{1}$ satisfying $\srd{p}{1}_{j,\hitemwt{1}}>0,\srd{p}{1}_{k,\hitemwt{1}}>0$, then $\hitemwt{1}$ is the least preferred item consumed by agent $j$ and the most preferred item consumed by agent $k$, both regarding $\prefintype{j}{1}$. We obtain that \begin{equation}\label{eq:lpsef3} \sum_{o\prefintype{j}{1}\hitemwt{1}}{\srd{p}{1}_{j,o}} = 1 - \srd{p}{1}_{j,\hitemwt{1}} \geq \srd{p}{1}_{k,\hitemwt{1}} = \sum_{o \in \ucs(\prefintype{j}{1},\hitemwt{1})}{\srd{p}{1}_{k,o}}.
\end{equation} It means that for every $\hat{\bf{x}}=(\hitemwt{1}*)$, \begin{equation*} \sum_{{\bf{x}} \in \ucs(\succ_j,\hat{\bf{x}})}p_{j,{\bf{x}}} \geq \sum_{o\prefintype{j}{1}\hitemwt{1}}{\srd{p}{1}_{j,o}} \geq \sum_{o \in \ucs(\prefintype{j}{1},\hitemwt{1})}{\srd{p}{1}_{k,o}} \geq \sum_{{\bf{x}} \in \ucs(\succ_j,\hat{\bf{x}})}p_{k,{\bf{x}}}. \end{equation*} When $\hat{\bf{x}}\neq(\hitemwt{1}*)$, we have (\ref{eq:lpsef1}) for $\type{1}{\hat{\bf{x}}}\prefintype{j}{1}\hitemwt{1}$ and (\ref{eq:lpsef2}) for $\hitemwt{1}\prefintype{j}{1}\type{1}{\hat{\bf{x}}}$, which cover the remaining cases. Therefore agent $k$'s most important type is $1$. Then we prove that agent $k$'s $i$th most important type is $i$, given that her $i'$th most important type is $i'$ for every $i'<i$, i.e. $\srd{p}{i'}_{j,\itemwt{i'}}=\srd{p}{i'}_{k,\itemwt{i'}}$ for $i'<i$. Suppose it is false, and we discuss the cases as for type $1$. Because agent $k$ then consumes items of type $i$ later than agent $j$, we note that for any $\itemwt{i},\hitemwt{i}$ satisfying $\srd{p}{i}_{j,\itemwt{i}}>0,\srd{p}{i}_{k,\hitemwt{i}}>0$, we have $\itemwt{i}\prefintype{j}{i}\hitemwt{i}$ or $\itemwt{i}=\hitemwt{i}$. Case (\romannumeral1'): Suppose for any $\itemwt{i},\hitemwt{i}$ satisfying $\srd{p}{i}_{j,\itemwt{i}}>0,\srd{p}{i}_{k,\hitemwt{i}}>0$ and $\itemwt{i}\neq\hitemwt{i}$, we have $\itemwt{i} \prefintype{j}{i} \hitemwt{i}$. That means for ${\bf{x}}=(\itemwt{1},\dots,\itemwt{i-1},\itemwt{i}*),\hat{\bf{x}}=(\itemwt{1},\dots,\itemwt{i-1},\hitemwt{i}*)$ satisfying $p_{j,{\bf{x}}}>0,p_{k,\hat{\bf{x}}}>0$, we have that ${\bf{x}}\succ_j\hat{\bf{x}}$. For any $\pitemwt{1},\dots,\pitemwt{i}$, let ${\bf{x}}'=(\pitemwt{1},\dots,\pitemwt{i}*)$.
Then, by (\ref{eq:lps:share}), the way bundle shares are computed in LexiPS{}, we can obtain results similar to (\ref{eq:lpsef1}) and (\ref{eq:lpsef2}) in Case~(\romannumeral1): \begin{equation}\label{eq:lpsef7} \sum_{{\bf{x}}=(\pitemwt{1},\dots,\pitemwt{i-1}*), {\bf{x}}\in\ucs(\succ_j,{\bf{x}}')}{p_{j,{{\bf{x}}}}} \geq \sum_{{\bf{x}}=(\pitemwt{1},\dots,\pitemwt{i-1}*), {\bf{x}}\in\ucs(\succ_j,{\bf{x}}')}{p_{k,{{\bf{x}}}}}. \end{equation} In addition, we can decompose the cumulative share of the upper contour set as follows: \begin{equation}\label{eq:lpsef4} \begin{split} \sum_{{{\bf{x}}}\in \ucs(\succ_j,{\bf{x}}')}{p_{j,{{\bf{x}}}}} =& \sum_{{{\bf{x}}}\in\{(\itemwt{1}*)|\itemwt{1} \prefintype{j}{1} \pitemwt{1}\}}{p_{j,{{\bf{x}}}}} + \sum_{{{\bf{x}}}\in\{(\pitemwt{1},\itemwt{2}*)|\itemwt{2} \prefintype{j}{2} \pitemwt{2}\}}{p_{j,{{\bf{x}}}}} \\ &+\dots+ \sum_{{\bf{x}}=(\pitemwt{1},\dots,\pitemwt{i-1}*), {\bf{x}}\in\ucs(\succ_j,{\bf{x}}')}{p_{j,{{\bf{x}}}}} \end{split} \end{equation} By the computation of bundle shares (\ref{eq:lps:share}) and the hypothesis that $\srd{p}{\hat i}_{j,\itemwt{\hat i}}=\srd{p}{\hat i}_{k,\itemwt{\hat i}}$ for any $\hat{i}<i$, we have that \begin{equation}\label{eq:lpsef5} \sum_{{{\bf{x}}}\in\{(\pitemwt{1},\dots,\itemwt{\hat{i}}*)|\itemwt{\hat{i}} \prefintype{j}{\hat{i}} \pitemwt{\hat{i}}\}}{p_{j,{{\bf{x}}}}}= \sum_{{{\bf{x}}}\in\{(\pitemwt{1},\dots,\itemwt{\hat{i}}*)|\itemwt{\hat{i}} \prefintype{j}{\hat{i}} \pitemwt{\hat{i}}\}}{p_{k,{{\bf{x}}}}} \end{equation} Therefore, with (\ref{eq:lpsef7}), (\ref{eq:lpsef4}) and (\ref{eq:lpsef5}), we have $\sum_{{{\bf{x}}}\in \ucs(\succ_j,{\bf{x}}')}{p_{j,{{\bf{x}}}}}\geq\sum_{{{\bf{x}}}\in \ucs(\succ_j,{\bf{x}}')}{p_{k,{{\bf{x}}}}}$ for any ${\bf{x}}'$, a contradiction to (\ref{eq:lpsensup}). Case (\romannumeral2'): Suppose there exists $\hitemwt{i}$ satisfying $\srd{p}{i}_{j,\hitemwt{i}}>0,\srd{p}{i}_{k,\hitemwt{i}}>0$.
Then $\hitemwt{i}$ is the least preferred item consumed by agent $j$ and the most preferred item consumed by agent $k$, both regarding $\prefintype{j}{i}$. Let ${\bf{x}}'=(\pitemwt{1},\dots,\pitemwt{i-1},\pitemwt{i}*)$. We first consider $\pitemwt{i}=\hitemwt{i}$. Similar to (\ref{eq:lpsef3}), we have that \begin{equation}\label{eq:lpsef3.5} \sum_{o\prefintype{j}{i}\hitemwt{i}}{\srd{p}{i}_{j,o}} = 1 - \srd{p}{i}_{j,\hitemwt{i}} \geq \srd{p}{i}_{k,\hitemwt{i}} = \sum_{o \in \ucs(\prefintype{j}{i},\hitemwt{i})}{\srd{p}{i}_{k,o}}. \end{equation} With the computation of bundle shares (\ref{eq:lps:share}), we deduce that for $\alpha=\prod_{i'=1}^{i-1}{\srd{p}{i'}_{j,\pitemwt{i'}}}=\prod_{i'=1}^{i-1}{\srd{p}{i'}_{k,\pitemwt{i'}}}$, \begin{equation}\label{eq:lpsef6} \begin{split} &\sum_{{\bf{x}}=(\pitemwt{1},\dots,\pitemwt{i-1}*), {\bf{x}}\in\ucs(\succ_j,{\bf{x}}')}{p_{j,{{\bf{x}}}}}\\ \geq& \alpha\times\sum_{o\prefintype{j}{i}\hitemwt{i}}{\srd{p}{i}_{j,o}} \geq \alpha\times\sum_{o \in \ucs(\prefintype{j}{i},\hitemwt{i})}{\srd{p}{i}_{k,o}}\\ \geq& \sum_{{\bf{x}}=(\pitemwt{1},\dots,\pitemwt{i-1}*), {\bf{x}}\in\ucs(\succ_j,{\bf{x}}')}{p_{k,{{\bf{x}}}}}. \end{split} \end{equation} With (\ref{eq:lpsef4}) and (\ref{eq:lpsef5}), (\ref{eq:lpsef6}) means $\sum_{{{\bf{x}}}\in \ucs(\succ_j,{\bf{x}}')}{p_{j,{{\bf{x}}}}}\geq\sum_{{{\bf{x}}}\in \ucs(\succ_j,{\bf{x}}')}{p_{k,{{\bf{x}}}}}$. When $\pitemwt{i}\neq\hitemwt{i}$, we have (\ref{eq:lpsef7}) for ${\bf{x}}'$, and we can obtain the same results as in Case~(\romannumeral1'). Therefore agent $k$'s $i$th most important type is $i$. By induction, agent $k$'s importance order is the same as agent $j$'s, which is a contradiction. With the results above, agent $j$ envies no one. (3) (\textbf{decomposable{} outputs}) By the classical Birkhoff-von Neumann theorem, all single-type \ram{} assignments are decomposable{}.
For $i\leq p$, let $\srd{\mathcal{A}}{i}$ be the set of all single type deterministic assignments for type $i$, let $\srd{A}{i}_k$ be an arbitrary assignment in $\srd{\mathcal{A}}{i}$, and let $\alpha^i_k$ denote the probability of $\srd{A}{i}_k$. Therefore $\srd{P}{i}=\sum_k{(\alpha_k^i\times\srd{A}{i}_k)}$ for every $i\leq p$. We first state an obvious claim: the single type deterministic assignments of the $p$ types determine a unique multi-type deterministic assignment, and vice versa. Specifically, for any multi-type deterministic assignment $A\in\mathcal{A}$ and $\srd{A}{1},\dots,\srd{A}{p}$ comprising $A$, we have that for every ${\bf{x}}\in\mathcal{D}$ and $\itemwt{i}=\type{i}{{\bf{x}}}$, \begin{equation}\label{eq:lpsreal0} A_{j,{\bf{x}}}=\prod_{i=1}^{p}{\srd{A}{i}_{j,\itemwt{i}}}. \end{equation} That means agent $j$ is assigned bundle ${\bf{x}}$ if and only if she is assigned all the items in ${\bf{x}}$. We show an example with types $\{F,B\}$ and agents $\{1,2\}$ as follows: \begin{displaymath} \centering \begin{split} \begin{tabular}{|c|cc|cc|} \hline\multirow{2}{*}{Agent} & \multicolumn{2}{c|}{$\srd{P}{F}$} & \multicolumn{2}{c|}{$\srd{P}{B}$}\\\cline{2-5} & $1_F$ & $2_F$ & $1_B$ & $2_B$\\\hline 1 & 1 & 0 & 0 & 1 \\ 2 & 0 & 1 & 1 & 0 \\\hline \end{tabular} \hspace{1em} \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0 & 1 & 0 & 0 \\ 2 & 0 & 0 & 1 & 0 \\\hline \end{tabular} \end{split} \end{displaymath} Then by definition of LexiPS{} we obtain that for any ${\bf{x}}\in\mathcal{D}$ and any $\itemwt{i}=\type{i}{{\bf{x}}}$, \begin{equation}\label{eq:lpsreal1} \begin{split} p_{j,{\bf{x}}} & =\prod_{i\leq p}{\srd{p}{i}_{j,\itemwt{i}}}=\prod_{i\leq p}\sum_{\srd{A}{i}_k\in\srd{\mathcal{A}}{i}}(\srd{\alpha}{i}_k\times (\srd{A}{i}_k)_{j,\itemwt{i}}) \end{split} \end{equation} The result of (\ref{eq:lpsreal1}) is a product of $p$ polynomials, and we can write it as one polynomial and
expand it as follows: \begin{equation}\label{eq:lpsreal2} \begin{split} (\ref{eq:lpsreal1})&=\sum_{\srd{A}{1}_{k_1}\in\srd{\mathcal{A}}{1},\srd{A}{2}_{k_2}\in\srd{\mathcal{A}}{2},\dots,\srd{A}{p}_{k_p}\in\srd{\mathcal{A}}{p}}\prod_{i\leq p}({\alpha^i_{k_i}}\times (\srd{A}{i}_{k_i})_{j,\itemwt{i}}) \\ & =\sum_{\srd{A}{1}_{k_1}\in\srd{\mathcal{A}}{1},\srd{A}{2}_{k_2}\in\srd{\mathcal{A}}{2},\dots,\srd{A}{p}_{k_p}\in\srd{\mathcal{A}}{p}}(\prod_{i\leq p}{\alpha^i_{k_i}}\times \prod_{i\leq p}(\srd{A}{i}_{k_i})_{j,\itemwt{i}}) \end{split} \end{equation} Given $\srd{A}{1}_{k_1},\srd{A}{2}_{k_2},\dots,\srd{A}{p}_{k_p}$, let $A_k$ be the corresponding multi-type assignment consisting of them, and let $\alpha_k=\prod_{i\leq p}{\srd{\alpha}{i}_{k_i}}$ be the probability of $A_k$, where $\srd{\alpha}{i}_{k_i}$ is the probability of $\srd{A}{i}_{k_i}$ for $\srd{P}{i}$. With (\ref{eq:lpsreal0}) we have: \begin{equation*} (\ref{eq:lpsreal1})=\sum_{A_k\in\mathcal{A}}(\alpha_k\times (A_k)_{j,{\bf{x}}}) \end{equation*} Therefore $P$ is decomposable{}.\qed \end{proof} \begin{remark}\label{rm:lpsnlexopt} LexiPS{} does not satisfy \text{lexi-}\allowbreak\text{efficiency}{}. In \Cref{eg:lps}, agent $1$'s allocation in $P$, denoted $p$ here, is lexicographically dominated by the following allocation $q$: \begin{center} \begin{tabular}{|c|ccccc|} \hline Agent & $1_F1_B$ & $1_F3_B$ & $2_F1_B$ & $2_F3_B$ & $3_F2_B$\\\hline 1 & 0.5 & 0 & 0 & 0.5 & 0 \\\hline \end{tabular} \end{center} We note that $q$ can be obtained by reallocating the shares of items in $p$. \end{remark} However, like most extensions, LexiPS{} does not resist manipulation as well as PS. \Cref{thm:lpssp} shows that LexiPS{} is \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proof}{} provided that agents report their importance orders truthfully. We also show in \Cref{rm:lpsnsp} how an agent can cheat by misreporting her importance order.
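For intuition, the per-type structure of LexiPS{} used above can be sketched in code. This is our illustrative sketch, not part of the formal model: it assumes all agents share the same importance order (so LexiPS{} reduces to running PS independently per type) and combines per-type shares into bundle shares by the product rule of (\ref{eq:lps:share}); all function names are ours.

```python
from itertools import product

def ps(prefs):
    """One run of Probabilistic Serial for a single type.
    prefs: per-agent preference lists over this type's items."""
    n = len(prefs)
    supply = {o: 1.0 for o in prefs[0]}
    share = [{o: 0.0 for o in supply} for _ in range(n)]
    remaining = set(supply)
    while remaining:
        # each agent eats her top remaining item at rate 1
        top = [next(o for o in p if o in remaining) for p in prefs]
        eaters = {o: [j for j in range(n) if top[j] == o] for o in set(top)}
        dt = min(supply[o] / len(js) for o, js in eaters.items())
        for o, js in eaters.items():
            for j in js:
                share[j][o] += dt
            supply[o] -= dt * len(js)
        remaining = {o for o in remaining if supply[o] > 1e-9}
    return share

def bundle_share(per_type, j, bundle):
    """Product rule: share of a bundle = product of per-type item shares."""
    r = 1.0
    for i, o in enumerate(bundle):
        r *= per_type[i][j][o]
    return r

def lexips_bundle_shares(per_type_prefs):
    """Per-type PS runs, then bundle shares via the product rule."""
    per_type = [ps(p) for p in per_type_prefs]
    n = len(per_type_prefs[0])
    items = [list(p[0]) for p in per_type_prefs]
    return [{b: bundle_share(per_type, j, b) for b in product(*items)}
            for j in range(n)]
```

For instance, with two types where agent $1$ deterministically receives $1_F$ and $2_B$ per type (as in the small table above), the product rule gives her the bundle $1_F2_B$ with probability $1$.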
\begin{theorem}\label{thm:lpssp} For \mtaps{} with lexicographic preferences, LexiPS{} satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} when agents report importance orders truthfully. \end{theorem} \begin{proof} In the following proof, we use $\itemwt{i}$ to refer to an arbitrary item of type $i$, and $(\itemwt{i}*)$ to refer to an arbitrary bundle containing $\itemwt{i}$. Suppose agent $j$ misreports her preference as $\succ'_j$ for some types and acquires a better allocation. Label the types according to $\impord{j}$ and let $i$ be the most important type for which $j$ misreports. Let $P=LexiPS(R)$, $R'=(\succ'_j,\succ_{-j})$ and $Q=LexiPS(R')$. Then by assumption we have that $Q \sd{j} P$, $Q\neq P$, and $\srd{p}{\hat i}_{j,o}=\srd{q}{\hat i}_{j,o}$ for every $\hat i < i$. The phase in which agent $j$ consumes items in $D_i$ can be viewed as executing PS for type $i$. By \cite{Bogomolnaia01:New}, PS satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}, which means $\srd{Q}{i} \sd{j} \srd{P}{i}$ is false. Thus there exists $\hitemwt{i}$ which satisfies \begin{equation}\label{eq:lpssp1} \sum_{o \in \ucs(\prefintype{j}{i},\hitemwt{i})}\srd{p}{i}_{j,o}>\sum_{o \in \ucs(\prefintype{j}{i},\hitemwt{i})}\srd{q}{i}_{j,o} \end{equation} With (\ref{eq:lpsef4}) and (\ref{eq:lpsef5}) in \Cref{thm:lps}, we can simplify (\ref{eq:lpssp1}) and obtain that for $\hat{\bf{x}}=(\hitemwt{1},\dots,\hitemwt{i}*)$ \begin{equation*} \sum_{{{\bf{x}}}\in\{(\hitemwt{1},\dots,\hitemwt{i-1},\itemwt{i}*)|\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})\}}p_{j,{\bf{x}}} > \sum_{{{\bf{x}}}\in\{(\hitemwt{1},\dots,\hitemwt{i-1},\itemwt{i}*)|\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})\}}q_{j,{\bf{x}}}.
\end{equation*} This is equivalent to $\sum_{{\bf{x}} \in \ucs(\succ_j,\hat{\bf{x}})}p_{j,{\bf{x}}}> \sum_{{\bf{x}} \in \ucs(\succ_j,\hat{\bf{x}})}q_{j,{\bf{x}}}$, which means agent $j$ does not obtain a better allocation in $Q$, a contradiction.\qed \end{proof} \begin{remark}\label{rm:lpsnsp} When applying LexiPS{} to \mtaps{} with lexicographic preferences, an agent may get a better allocation by misreporting her importance order. \end{remark} \begin{proof} Consider a \mtap{} with lexicographic preferences with agents $1,2$ and types $F,B,T$. Both agents prefer $1_i$ to $2_i$ for $i \in \{F,B,T\}$, but their preferences over bundles differ due to their importance orders, as follows: \begin{center} \begin{tabular}{|c|c|} \hline Agent & Importance Order \\\hline 1 & $F\impord{1}B\impord{1}T$ \\ 2 & $T\impord{2}F\impord{2}B$ \\\hline \end{tabular} \end{center} LexiPS{} gives the \ram{} assignment denoted $P$. If agent $2$ misreports her importance order as $\impord{2}:F\impord{2}T\impord{2}B$, LexiPS{} gives another \ram{} assignment denoted $P'$.
Both $P$ and $P'$ are shown as follows (to save space, here and hereinafter we omit columns that contain only $0$): \vspace{1em}\noindent \begin{minipage}{\linewidth} \begin{minipage}{0.3\linewidth} \begin{center} \centering \begin{tabular}{|c|cc|} \hline\multirow{2}{*}{Agent} & \multicolumn{2}{c|}{$P$}\\\cline{2-3} & $1_F1_B2_T$ & $2_F2_B1_T$ \\\hline 1 & 1 & 0 \\ 2 & 0 & 1 \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.5\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P'$}\\\cline{2-5} & $1_F1_B2_T$ & $1_F2_B1_T$ & $2_F1_B2_T$ & $2_F2_B1_T$ \\\hline 1 & 0.5 & 0 & 0.5 & 0 \\ 2 & 0 & 0.5 & 0 & 0.5 \\\hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} We observe that compared with $P$, agent $2$ loses a $0.5$ share of $2_F2_B1_T$ but acquires a $0.5$ share of $1_F2_B1_T$ in $P'$. Since $1_F2_B1_T\succ_22_F2_B1_T$, we obtain that $P' \sd{2} P$ but $P \sd{2} P'$ is false, which means LexiPS{} does not satisfy \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} when an agent can misreport her importance order.\qed \end{proof} \section{\text{MPS}{} for \mtaps{} with divisible items} \label{MPS} In this section we only consider \mtaps{} with divisible items, which are not affected by \Cref{thm:imp}. We present a simplified version of \text{MPS}{}~\cite{Wang19:Multi} in {\bf\Cref{alg:mps}}, since we no longer need to deal with partial preferences. At a high level, in \text{MPS}{} agents consume bundles consisting of items, in contrast with PS where agents consume items directly.
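Before the formal algorithm, this bundle-level consumption can be sketched as a small simulation. This is our sketch, not the formal specification: it assumes uniform unit eating rates, strict linear preferences over bundles, and a balanced instance in which some bundle remains fully available until every item is exhausted; all names are illustrative.

```python
from collections import Counter, defaultdict

def mps(prefs, items):
    """Simulate bundle-level Probabilistic Serial: each agent eats her top
    available bundle, consuming every item in it at rate 1, until some
    item is exhausted; prefs lists bundles (tuples of items) per agent."""
    n = len(prefs)
    supply = {o: 1.0 for o in items}
    alloc = [defaultdict(float) for _ in range(n)]
    available = set(items)
    while available:
        # top available bundle of each agent (all its items unexhausted)
        tops = [next(b for b in prefs[j] if all(o in available for o in b))
                for j in range(n)]
        eaters = Counter(o for b in tops for o in b)
        dt = min(supply[o] / c for o, c in eaters.items())  # round length
        for j in range(n):
            alloc[j][tops[j]] += dt
        for o, c in eaters.items():
            supply[o] -= dt * c
        available = {o for o in available if supply[o] > 1e-9}
    return alloc
```

On the two-agent, two-type instance used later in this section (agent $1$: $1_F1_B\succ1_F2_B\succ2_F2_B\succ2_F1_B$; agent $2$: $1_F2_B\succ2_F1_B\succ1_F1_B\succ2_F2_B$), this simulation gives each agent $0.5$ shares of two bundles, matching the hand execution.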
Under strict linear preferences, we prove that \text{MPS}{} satisfies \text{lexi-}\allowbreak\text{efficiency}{}, which implies \text{sd-}\allowbreak\text{efficiency}{}, and provide two characterizations involving \text{leximin-}\allowbreak\text{optimality}{} and \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{} respectively. \subsection{\textbf{Algorithm for \text{MPS}{}}} Given an \mtap{}, \text{MPS}{} proceeds in multiple rounds as follows: At the beginning of each round, $M'$ contains all items that are unexhausted. Agent $j$ first determines her most preferred {\em available} bundle $\topb{j}{}$ according to $\succ_j$. A bundle ${\bf{x}}$ is available as long as every item $o\in{\bf{x}}$ is unexhausted. Then, each agent consumes her bundle by consuming all of the items in it at a uniform rate of one unit per unit of time. The round ends whenever one of the bundles being consumed becomes unavailable because an item being consumed has been exhausted. The algorithm terminates when all the items are exhausted. \begin{algorithm} \begin{algorithmic}[1] \State {\bf Input:} An \mtap{} $(N,M)$ and a preference profile $R$. \State For every $o\in M$, $\text{{\em supply}}(o)\gets 1$. $M' \gets M$. $P\gets 0^{n\times|\mathcal{D}|}$. \While{$M'\neq\emptyset$} \State {\bf Identify top bundle} $\topb{j}{}$ for every agent $j\le n$. \parState{ {\bf Consume.} \begin{itemize} \item[5.1:] For any $o\in M'$, $\con{o}{}\gets|\{j\in N:o\in\topb{j}{}\}|$. \item[5.2:] $\prog{}\gets \min_{o\in M'}\frac{\text{{\em supply}}(o)}{\con{o}{}}$. \item[5.3:] For each $j\le n$, $p_{j,\topb{j}{}}\gets p_{j,\topb{j}{}}+\prog{}$. \item[5.4:] For each $o\in M'$, $\text{{\em supply}}(o)\gets\text{{\em supply}}(o)-{\prog{}\times \con{o}{}}$.
\end{itemize} }\vspace{-1.5em} \State $\ex{}\gets\arg\min_{o\in M'}\frac{\text{{\em supply}}(o)}{\con{o}{}}$, $M'\gets M'\setminus \ex{}$ \EndWhile \State \Return $P$ \end{algorithmic} \caption{\label{alg:mps} \text{MPS}{} for \mtaps{} under strict linear preferences.} \end{algorithm} \begin{example}\label{eg:mps} The execution of \text{MPS}{} for the following instance of \mtap{} is shown in {\bf\Cref{fig:mps}}. \vspace{1em}\noindent \begin{minipage}{\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \begin{tabular}{|c|c|} \hline Agent & Preferences \\\hline 1 & $1_F1_B\succ_11_F2_B\succ_12_F2_B\succ_12_F1_B$ \\ 2 & $1_F2_B\succ_22_F1_B\succ_21_F1_B\succ_22_F2_B$ \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0.5 & 0 & 0 & 0.5 \\ 2 & 0 & 0.5 & 0.5 & 0 \\\hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} At round $1$, agent $1$'s top bundle is $1_F1_B$, and agent $2$'s top bundle is $1_F2_B$. Notice that both agents wish to consume $1_F$. Therefore, round $1$ ends as $1_F$ gets exhausted, with each agent getting a $0.5$ share of $1_F$. Agents $1$ and $2$ also consume $1_B$ and $2_B$ respectively at the same rate during round $1$. At the end of round $1$, agents $1$ and $2$ are assigned a $0.5$ share of $1_F1_B$ and $1_F2_B$ respectively. At the start of round $2$, there is a supply of $1$ unit of $2_F$ and $0.5$ units each of $1_B$ and $2_B$. Agent $1$'s top available bundle is $2_F2_B$, since $1_F1_B$ and $1_F2_B$ are unavailable due to the exhausted item $1_F$, and agent $2$'s top available bundle is $2_F1_B$ accordingly.
The agents consume the items of each type from their top bundles at a uniform rate, and at the end of the round all items are exhausted; agents $1$ and $2$ have consumed $0.5$ units each of $2_F2_B$ and $2_F1_B$ respectively. This results in the final assignment shown in {\bf\Cref{fig:mps}}, which is the undecomposable{} assignment $P$ in \Cref{eg:undecomposable}. Further, we show in \Cref{rm:mpsnreal} that even under lexicographic preferences, the output of \text{MPS}{} is not always decomposable{}. This means that \text{MPS}{} is only applicable to \mtaps{} with divisible items. \end{example} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{Fig2.png} \caption{An example of the execution of \text{MPS}{}.\label{fig:mps}} \end{figure} \begin{remark}\label{rm:mpsnreal} The output of \text{MPS}{} is not always a decomposable{} assignment, even under the restriction of lexicographic preferences. \end{remark} \begin{proof} For the \mtap{} in \Cref{eg:lps}, \text{MPS}{} outputs the following \ram{} assignment, denoted $P$: \begin{center} \centering \begin{tabular}{|c|ccccccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{7}{c|}{$P$}\\\cline{2-8} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ & $2_F3_B$ & $3_F2_B$ & $3_F3_B$\\\hline 1 & 1/3 & 0 & 1/6 & 1/6 & 0 & 1/12 & 1/4 \\ 2 & 1/3 & 0 & 1/6 & 0 & 1/6 & 0 & 1/3 \\ 3 & 0 & 1/3 & 0 & 1/3 & 0 & 1/12 & 1/4 \\\hline \end{tabular} \end{center} When items are indivisible, if agent $2$ gets $2_F3_B$, then agent $1$ gets $1_F1_B$ and agent $3$ gets $3_F2_B$, as $P$ indicates. However, $p_{1,1_F1_B}=1/3$, $p_{2,2_F3_B}=1/6$ and $p_{3,3_F2_B}=1/12$ are not equal, which is a contradiction.\qed \end{proof} \subsection{\textbf{Efficiency and Generalized Cycles}} Under the unrestricted domain of strict linear preferences, Theorem 2 in \cite{Wang19:Multi} implies that \text{MPS}{} satisfies \text{sd-}\allowbreak\text{efficiency}{}.
We prove in~\Cref{thm:mpslexopt} below that \text{MPS}{} satisfies \text{lexi-}\allowbreak\text{efficiency}{}, which is a stronger notion of efficiency than \text{sd-}\allowbreak\text{efficiency}{}, as we will prove in~\Cref{prop:gc} later. \begin{theorem}\label{thm:mpslexopt} \text{MPS}{} satisfies \text{lexi-}\allowbreak\text{efficiency}{}. \end{theorem} \begin{proof} Given an \mtap{} $(N,M)$ and a preference profile $R$, let $P=\text{MPS}(R)$, and suppose for contradiction that another assignment $Q$ satisfies $Q\ld{}P$. Let $\hat N$ be the set of agents whose allocations differ in $Q$. By assumption, for any $j\in\hat N$ there exists a bundle ${\bf{x}}$ such that $q_{j,{\bf{x}}}>p_{j,{\bf{x}}}$ and $q_{j,\hat{\bf{x}}}=p_{j,\hat{\bf{x}}}$ for any $\hat{\bf{x}}\succ_j{\bf{x}}$. W.l.o.g. let $j$ be the agent with the smallest $\sum_{\hat{\bf{x}}\succ_j{\bf{x}}}{p_{j,\hat{\bf{x}}}}$, and denote this value by $t$. However, when \text{MPS}{} has executed until time $t$, ${\bf{x}}$ is unavailable, which means that at least one item in ${\bf{x}}$ is exhausted and $p_{j,{\bf{x}}}$ cannot be increased anymore. Therefore, if agent $j$ gains more shares of ${\bf{x}}$ in $Q$, another agent must lose shares before time $t$, which is a contradiction.\qed \end{proof} We establish the relationship between \text{lexi-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{efficiency}{} in~\Cref{prop:gc} through the {\em \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{}} condition (\Cref{dfn:gc}), by showing that \text{sd-}\allowbreak\text{efficiency}{} is implied by the \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} condition, which in turn is implied by \text{lexi-}\allowbreak\text{efficiency}{}.
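The single-agent comparison underlying the relation $\ld{}$ can be sketched as follows. This helper is ours, not notation from the formal model: for one agent, allocation $q$ lexicographically dominates $p$ exactly when, at the most preferred bundle where the two allocations differ, $q$ gives strictly more; the assignment-level relation $Q\ld{}P$ then, roughly, requires this agent-level dominance for the agents whose allocations differ.

```python
def lexi_dominates(q, p, pref, eps=1e-9):
    """True iff allocation q lexicographically dominates p w.r.t. the
    agent's preference list `pref` (bundles, most preferred first).
    q, p map bundles to shares; missing bundles have share 0."""
    for b in pref:
        diff = q.get(b, 0.0) - p.get(b, 0.0)
        if abs(diff) > eps:
            return diff > 0  # decided at the first bundle where they differ
    return False  # identical allocations do not dominate
```

For example, shifting weight from a less preferred bundle to a more preferred one yields dominance, while the reverse shift does not.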
We begin by recalling the notion of a generalized cycle from~\citet{Wang19:Multi}, which is based on the relation $\tau$ and the notion of {\em improvable tuples}, defined as follows: \begin{definition}{\bf{(improvable tuples~\cite{Wang19:Multi})}} Given a fractional assignment $P$ and a profile $R=(\succ_j)_{j\le n}$, \begin{itemize} \item for any pair of bundles ${\bf{x}},\hat{{\bf{x}}}\in\mathcal{D}$, ${\bf{x}}\tau\hat{{\bf{x}}}$ $\iff$ there exists an agent $j\le n$ such that ${\bf{x}}\succ_j\hat{{\bf{x}}}$ and $p_{j,\hat{{\bf{x}}}}>0$; \item for any pair of bundles ${\bf{x}},\hat{{\bf{x}}}\in\mathcal{D}$, $({\bf{x}},\hat{{\bf{x}}})$ is an {\em improvable tuple} if and only if ${\bf{x}}\tau\hat{{\bf{x}}}$; and \item $Imp(P,R)$ is the set of all improvable tuples admitted by assignment $P$ w.r.t. the preference profile $R$. \end{itemize} \end{definition} For ease of exposition, we use $Imp(P)$ to refer to the set of all improvable tuples admitted by the fractional assignment $P$ when the profile is clear from the context. We are now ready to formally define the \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} condition. \begin{definition}{\bf{(\text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{}~\cite{Wang19:Multi})}}\label{dfn:gc} Given an \mtap{} $(N,M,R)$ and a \ram{} assignment $P$, a set $C \subseteq Imp(P,R)$ is a {\em generalized cycle} if it holds for every $o \in M$ that: if an improvable tuple $({\bf{x}}_1,\hat{\bf{x}}_1)\in C$ satisfies $o\in {\bf{x}}_1$, then there exists a tuple $({\bf{x}}_2,\hat{\bf{x}}_2)\in C$ such that $o\in \hat{\bf{x}}_2$. An assignment $P$ satisfies {\em \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{}} if it admits no generalized cycles. \end{definition} When $p=1$, \citet{Bogomolnaia01:New} proved that an assignment is \text{sd-}\allowbreak\text{efficient}{} if and only if the relation $\tau$ on it is acyclic, i.e.
there does not exist a cycle ${\bf{x}}_1\tau {\bf{x}}_2 \tau\cdots\tau{\bf{x}}_1$. However, this condition fails for \mtaps{}.~\Cref{eg:gc} shows an assignment that does not satisfy \text{sd-}\allowbreak\text{efficiency}{} and satisfies the acyclicity of $\tau$, yet admits a generalized cycle, which means the generalized cycle is more reliable for identifying \text{sd-}\allowbreak\text{efficient}{} assignments. \begin{example}\label{eg:gc} We illustrate generalized cycles with the following assignment $Q$ for the \mtap{} in \Cref{eg:mps}. Note that $Q$ is not \text{sd-}\allowbreak\text{efficient}{} because the assignment $P$ in \Cref{eg:mps} stochastically dominates $Q$. \vspace{1em}\noindent \begin{minipage}{\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$Q$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0.4 & 0 & 0 & 0.6 \\ 2 & 0.2 & 0.4 & 0.4 & 0 \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.4\linewidth} \centering \begin{center} \begin{tabular}{|c|c|} \hline Agent & Improvable Tuples \\\hline 1 & \tabincell{c}{$(1_F1_B,2_F2_B),(1_F2_B,2_F2_B),$\\$(2_F1_B,2_F2_B)$} \\\hline 2 & \tabincell{c}{$(1_F2_B,2_F1_B),(1_F2_B,1_F1_B),$\\$(2_F1_B,1_F1_B)$} \\\hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} It is easy to see that $\tau$ is acyclic on $Q$. However, one can verify that there is a generalized cycle on $Q$: $\{(1_F1_B,2_F2_B),\allowbreak(1_F2_B,2_F1_B),\allowbreak(2_F1_B,1_F1_B)\}$. Consider for example the items of type $B$: $1_B$ is present in both components of $(2_F1_B,1_F1_B)$, and $2_B$ is present in the left component of $(1_F2_B,2_F1_B)$ and the right component of $(1_F1_B,2_F2_B)$. A similar analysis can be performed for the items of type $F$.
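This check can be mechanized. The sketch below is ours: it verifies only the item-covering condition of the definition, assuming that $C\subseteq Imp(Q,R)$ (i.e. that every tuple is improvable) has been checked separately.

```python
def is_generalized_cycle(C, items):
    """C: collection of improvable tuples (x, x_hat), each bundle a tuple
    of items. Checks the covering condition: every item appearing in some
    left component also appears in some right component."""
    for o in items:
        in_left = any(o in x for x, _ in C)
        in_right = any(o in x_hat for _, x_hat in C)
        if in_left and not in_right:
            return False
    return bool(C)  # an empty C is not a meaningful cycle
```

Running this on the three tuples above over items $\{1_F,2_F,1_B,2_B\}$ confirms the cycle, while a single tuple such as $(1_F1_B,2_F2_B)$ alone fails the covering condition.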
\end{example} \Cref{prop:gc} reveals the relationship between \text{lexi-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{efficiency}{} vis-\`a-vis the \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} condition. Unlike~\citet{Bogomolnaia15:Random}, who point out that \text{lexi-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{efficiency}{} are equivalent in their setting where $p=1$ because both are equivalent to the acyclicity condition on the relation $\tau$, we show that this is no longer true for \mtaps{}. \begin{proposition}\label{prop:gc} Given a preference profile $R$ and a \ram{} assignment $P$, \begin{enumerate}[label=(\arabic*)] \item \label{prop:gcsdopt}$P$ is \text{sd-}\allowbreak\text{efficient}{} regarding $R$ if $P$ admits no generalized cycle; \item $P$ is \text{lexi-}\allowbreak\text{efficient}{} regarding $R$ only if $P$ admits no generalized cycle. \end{enumerate} \end{proposition} \begin{proof} (1) The proof is similar to the proof of Theorem 5, Claim (1) in \citet{Wang19:Multi}. A full proof is provided in \Cref{sec:prop:gc} for completeness. (2) Suppose for contradiction there is a \text{lexi-}\allowbreak\text{efficient}{} assignment $P$ which admits a generalized cycle $C$. For any agent $j$, let $({\bf{x}},\hat{\bf{x}})\in C$ be one of the tuples involving her, chosen such that ${\bf{x}}$ is top ranked among the bundles in all the tuples involving $j$. For every $o_k\in{\bf{x}}$, by definition we can find a tuple $({\bf{x}}_k,\hat{\bf{x}}_k)\in C$ which satisfies $o_k\in\hat{\bf{x}}_k$ and $p_{\hat j,\hat{\bf{x}}_k}>0$ for some $\hat j$. Then we extract an $\epsilon$ share of each $\hat{\bf{x}}_k$ for a small enough $\epsilon>0$, and assemble an $\epsilon$ share of ${\bf{x}}$ from the share of each $o_k$ taken from $\hat{\bf{x}}_k$. To keep her supply of bundles within her demand, agent $j$ gives out the same share of $\hat{\bf{x}}$.
The shares of the items in $\hat{\bf{x}}$ and the remaining items in each $\hat{\bf{x}}_k$ can be recombined into bundles with no share left over for single items, and we assign these bundles arbitrarily to the agents who donated shares of $\hat{\bf{x}}_k$ so as to meet their demands. We note that $\epsilon$ is chosen small enough that the shares of the bundles above are not used up, and therefore they remain available for other agents. Let $Q$ be the new assignment after performing the step above for all the agents. In this way, any agent $j$ with the chosen tuple $({\bf{x}},\hat{\bf{x}})$ gains shares of ${\bf{x}}$ and possibly other bundles, while losing shares of $\hat{\bf{x}}$ and other bundles ranked below ${\bf{x}}$. It follows that $Q_{j,{\bf{x}}}\ge P_{j,{\bf{x}}}+\epsilon>P_{j,{\bf{x}}}$ and, for any ${\bf{x}}'\succ_j{\bf{x}}$, $Q_{j,{\bf{x}}'}\ge P_{j,{\bf{x}}'}$ because agent $j$ gains but does not lose shares of these bundles, which means $Q\ld{}P$, a contradiction.\qed \end{proof} \begin{remark}\label{rm:gcnne} The \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} condition is not a necessary condition for \text{sd-}\allowbreak\text{efficiency}{}. Consider an \mtap{} with two agents where $\succ_1$ and $\succ_2$ are identical: $1_F1_B\succ_11_F2_B\succ_12_F1_B\succ_12_F2_B$. We find that the assignment $P$ in \Cref{eg:mps} is \text{sd-}\allowbreak\text{efficient}{} for this \mtap{} but admits a generalized cycle (not unique): $\{(1_F1_B,1_F2_B),\allowbreak(1_F2_B,2_F1_B),\allowbreak(2_F1_B,2_F2_B)\}$. The \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} condition is also not a sufficient condition for \text{lexi-}\allowbreak\text{efficiency}{}.
With the $\succ_1,\succ_2$ above, the following assignment admits no generalized cycle but is not \text{lexi-}\allowbreak\text{efficient}{}: \begin{center} \centering \begin{tabular}{|c|cccc|} \hline Agent & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1,2 & 0 & 0.5 & 0.5 & 0 \\\hline \end{tabular} \end{center} \end{remark} In~\Cref{thm:eps}, we characterize the entire set of assignments that do not admit generalized cycles by the family of {\em eating} algorithms for \mtaps{} (Algorithm~\ref{alg:eps}), which is a natural extension of the family of eating algorithms introduced in~\citet{Bogomolnaia01:New} for the single type setting. Each eating algorithm is specified by a collection of exogenous {\em eating speed functions} $\omega=(\omega_j)_{j\le n}$. An eating speed function $\omega_j$ specifies the instantaneous rate at which agent $j$ consumes bundles, consisting of an item of each type, at each instant $t\in[0,1]$, such that $\int_0^1\omega_j(t)\,\mathrm{d}t=1$. In an eating algorithm, in each round, each agent $j$ consumes her most preferred available bundle at the rate specified by her eating speed function $\omega_j$, until the supply of one of the items in one of the bundles being consumed is exhausted. Note that \text{MPS}{} is the special case of the family of eating algorithms with $\omega_j(t)=1$ for all $t\in[0,1]$ and all $j\le n$. \begin{comment} \begin{algorithm \begin{algorithmic}[1] \State {\bf Input:} An \mtap{} $(N,M)$ and a preference profile $R$. \State For every $o\in M$, $\text{{\em supply}}(o)\gets 1$. $M' \gets M$. $P\gets 0^{n\times|\mathcal{D}|}$. $t \gets 0$. \While{$M'\neq\emptyset$} \State {\bf Identify top bundle} $\topb{j}{}$ for every agent $j\le n$. \parState{ {\bf Consume.} For each $j\le n$, agent $j$ consumes the items in $\topb{j}{}$ with $\omega_j$. When $\text{{\em supply}}(o)=0$ for some item $o$ at time $t'$, \begin{itemize} \item[5.1:] Let $B=\{o\in M'|\text{{\em supply}}(o)=0\}$.
$M' \gets M' \setminus{} B$. \item[5.2:] For each $j\le n$, $\prog{}_j \gets \int_t^{t'}{\omega_j}$. $p_{j,\topb{j}{}}\gets p_{j,\topb{j}{}}+\prog{}_j$. \end{itemize} } \EndWhile\vspace{-1.5em} \State \Return $P$ \end{algorithmic} \caption{\label{alg:eps} The eating algorithm with $\{\omega_j\}_{j\leq n}$.} \end{algorithm} \end{comment} \begin{algorithm} \begin{algorithmic}[1] \State {\bf Input:} An \mtap{} $(N,M)$ and a preference profile $R$. \State {\bf Parameters:} Eating speed functions $\omega=(\omega_j)_{j\le n}$. \State For every $o\in M$, $\text{{\em supply}}(o)\gets 1$. $M' \gets M$. $P\gets 0^{n\times|\mathcal{D}|}$. $t \gets 0$. \While{$M'\neq\emptyset$ and $t<1$} \State {\bf Identify top bundle} $\topb{j}{}$ for every agent $j\le n$. \parState{ {\bf Consume.} \begin{itemize} \item[5.1:] For any $o\in M'$, $\con{o}{}\gets\{j\in N:o\in\topb{j}{}\}$. \item[5.2:] $\prog{}\gets \min\{\prog{}|\sum_{j\in\con{o}{}}\int_t^{t+\prog{}}\omega_j=\text{{\em supply}}(o),o\in M'\}$. \item[5.3:] For each $j\le n$, $p_{j,\topb{j}{}}\gets p_{j,\topb{j}{}}+\int_t^{t+\prog{}}\omega_j$. \item[5.4:] For each $o\in M'$, $\text{{\em supply}}(o)\gets\text{{\em supply}}(o)-{\sum_{j\in\con{o}{}}\int_t^{t+\prog{}}\omega_j}$. \end{itemize} }\vspace{-1.5em} \State $M'\gets M'\setminus \{o\in M'|\text{{\em supply}}(o)=0\}$. $t\gets t+\prog{}$. \EndWhile \State \Return $P$ \end{algorithmic} \caption{\label{alg:eps} Eating algorithms.} \end{algorithm} \begin{theorem}\label{thm:eps} An assignment satisfies \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} if and only if the assignment is the output of an eating algorithm (Algorithm~\ref{alg:eps}).
\end{theorem} \begin{proof} $(\Leftarrow)$ The proof that an assignment $P$ which is an output of an eating algorithm satisfies \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} is similar to the proof of \Cref{prop:gc}, Claim~\ref{prop:gcsdopt}. $(\Rightarrow)$ Let $R$ be an arbitrary preference profile, and let $P$ be an arbitrary assignment satisfying \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{} w.r.t. $R$. For convenience, we define some quantities to represent the state during the execution of a member of the family of eating algorithms at each round $s$. For ease of exposition, we use $s=0$ to represent the initial state before the start of execution. Let $M^0 = M$ and $\mathcal{D}^0 = \mathcal{D}$. Let $B^s = \{ o \in M^{s-1} : \text{there are no } {\bf{x}},\allowbreak \hat {\bf{x}} \in \mathcal{D}^{s-1} \text{ s.t. } o \in \hat {\bf{x}} \text{ and } ({\bf{x}}, \hat {\bf{x}}) \in Imp(P) \}$, $M^s = M^{s-1} \setminus B^s$, and let $\mathcal{D}^s = \{ {\bf{x}} \in \mathcal{D} : \text{for every } o \in {\bf{x}}, o \in M^s \}$ be the available bundles in $M^s$. Let $S = \min\{s : M^s = \emptyset\}$. Let $\omega_j(t)$ be the eating rate of agent $j$ at time $t$, and let $N({\bf{x}},\mathcal{D}^s)$ be the set of agents who most prefer ${\bf{x}}$ among the available bundles $\mathcal{D}^s$, for any ${\bf{x}} \in \mathcal{D}^s$. The following eating speed functions $\omega_j$ define the algorithm: \[ \forall s \leq S, \frac{s-1}{S} \leq t \leq \frac{s}{S}, \omega_j(t)\overset{\text{def}}{=} \left\{ \begin{aligned} S \times p_{j,{\bf{x}}}, \qquad & \exists o \in {\bf{x}}, o \in B^s \text{ and } j \in N({\bf{x}}, \mathcal{D}^{s-1}) \\ 0, \qquad & \text{otherwise.} \end{aligned} \right. \] From the design of the algorithm, we know that the items in $B^s$ determine which bundles in $\mathcal{D}^{s-1}$ are consumed in round $s$, and these items are not consumed after round $s$.
We claim that the algorithm specified by the eating speed functions $\omega=(\omega_j)_{j\le n}$ above outputs $P$ for the \mtap{} with preference profile $R$. Let $Q$ be the output of this eating algorithm. We prove $P = Q$ by induction: for any $s\in[1,S]$, $o \in B^s$, ${\bf{x}} \in \mathcal{D}^{s-1}$ s.t. $o \in {\bf{x}}$, and each $j \in N$, we show that $p_{j,{\bf{x}}} = q_{j,{\bf{x}}}$. The base case where $s=1$ is omitted here, and we provide a proof sketch for the inductive step in the following. The full proof is in \Cref{sec:thm:eps}. \noindent{\bf Inductive step.} Assuming that $p_{j,{\bf{x}}} = q_{j,{\bf{x}}}$ for any $o \in \bigcup_{k = 1}^s B^k$, ${\bf{x}} \in \mathcal{D}$ s.t. $o \in {\bf{x}}$ and any $j \in N$, we prove that: for any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}$ s.t. $o \in {\bf{x}}$ and any $j \in N$, $p_{j,{\bf{x}}} = q_{j,{\bf{x}}}$. If ${\bf{x}}$ is not in $\mathcal{D}^s$, then there is an item $\hat o \in \bigcup_{k = 1}^s B^k$ s.t. $\hat o \in {\bf{x}}$, and we have $p_{j,{\bf{x}}} = q_{j,{\bf{x}}}$ by the assumption. For $o\in B^{s+1}$ and ${\bf{x}}\in\mathcal{D}^{s}$ s.t. $o\in{\bf{x}}$, if ${\bf{x}}$ is not most preferred in $\mathcal{D}^s$ by some agent $j$, then $p_{j,{\bf{x}}}=0$: if $p_{j,{\bf{x}}}>0$, then let $\hat{\bf{x}}$ be the most preferred bundle by $j$ in $\mathcal{D}^s$, and we have $(\hat{\bf{x}},{\bf{x}})\in Imp(P)$ where $o\in{\bf{x}}$, a contradiction to the construction of $B^{s+1}$. Therefore we have the following equation by construction: \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{j\in N({\bf{x}},\mathcal{D}^s)}p_{j,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\in \mathcal{D}\setminus\mathcal{D}^s}\sum_{j \in N}p_{j,{\bf{x}}} = \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}}\sum_{j\in N}p_{j,{\bf{x}}} = 1. \] That implies for any $o\in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ s.t. $o \in {\bf{x}}$, if $j$ is not in $N({\bf{x}}, \mathcal{D}^s)$, then $p_{j,{\bf{x}}} = 0$.
For any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ s.t. $o \in {\bf{x}}$ and $j \in N({\bf{x}}, \mathcal{D}^s)$, we prove that agent $j$ consumes exactly $p_{j,{\bf{x}}}$ units of bundle ${\bf{x}}$. By the assumption, $o$ remains available during $\frac{s}{S} \leq t \leq \frac{s+1}{S}$, and agent $j$ consumes ${\bf{x}}$ during $\frac{s}{S} \leq t \leq \frac{s+1}{S}$. We know that $q_{j,{\bf{x}}}=0$ by time $\frac{s}{S}$, and by time $\frac{s+1}{S}$, $q_{j,{\bf{x}}} = \frac{1}{S} \times S \times p_{j,{\bf{x}}} = p_{j, {\bf{x}}}$. That means \begin{equation*} \begin{split} \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{j\in N({\bf{x}},\mathcal{D}^s)}q_{j,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}\setminus\mathcal{D}^s}\sum_{j \in N}q_{j,{\bf{x}}} = \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{j\in N({\bf{x}},\mathcal{D}^s)}p_{j,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}\setminus\mathcal{D}^s}\sum_{j \in N}p_{j,{\bf{x}}} = 1 \end{split} \end{equation*} That implies for any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ s.t. $o \in {\bf{x}}$, \begin{enumerate*}[label=(\roman*)] \item if $j \in N({\bf{x}}, \mathcal{D}^s)$, then $q_{j,{\bf{x}}} = p_{j, {\bf{x}}}$; \item if $j$ is not in $N({\bf{x}}, \mathcal{D}^s)$, then $q_{j,{\bf{x}}} = 0 = p_{j, {\bf{x}}}$. \end{enumerate*} Therefore, for any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ s.t. $o \in {\bf{x}}$ and any $j \in N$, $p_{j,{\bf{x}}} = q_{j,{\bf{x}}}$. \qed \end{proof} \subsection{\textbf{Fairness and Characterization}} Theorem~5 in~\citet{Wang19:Multi} showed that \text{MPS}{} satisfies \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} for CP-net preferences. Here the CP-net determines the dependence among the preferences of different types, which also reflects the importance of each type. Since the domains of CP-net preferences and strict linear preferences do not totally overlap, we provide \Cref{prop:mps} as a complement; the proof is in \Cref{sec:prop:mps}.
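The cumulative comparison behind \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} can be sketched numerically. This helper is ours, not part of the formal model: agent $j$ does not envy agent $k$ exactly when $j$'s cumulative share weakly exceeds $k$'s on every upper contour set of $\succ_j$, and since the upper contour sets are the prefixes of $j$'s preference list, a single running-sum pass suffices.

```python
def sd_no_envy(p_j, p_k, pref, eps=1e-9):
    """True iff allocation p_j stochastically dominates p_k w.r.t. the
    preference list `pref` (bundles, most preferred first).
    Allocations map bundles to shares; missing bundles have share 0."""
    cum_j = cum_k = 0.0
    for b in pref:  # prefixes of `pref` are exactly the upper contour sets
        cum_j += p_j.get(b, 0.0)
        cum_k += p_k.get(b, 0.0)
        if cum_j < cum_k - eps:
            return False  # k gets strictly more on this upper contour set
    return True
```

For example, two identical allocations never exhibit envy, while an agent holding only her least preferred bundle envies one holding her most preferred bundle.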
\begin{proposition}\label{prop:mps} \text{MPS}{} satisfies \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} for \mtaps{}. \end{proposition} By \cite{Wang19:Multi}, \text{MPS}{} is \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proof}{} if all the agents share the same CP-net, which means their importance orders are identical. We prove that under lexicographic preferences, \text{MPS}{} satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}, and in particular, the importance orders of agents can be different. \begin{theorem}\label{thm:mpssp} \text{MPS}{} satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} for \mtaps{} with lexicographic preferences. \end{theorem} \begin{proof} Consider an arbitrary \mtap{} $(N,M)$ and an arbitrary lexicographic preference profile $R$. Suppose for the sake of contradiction that an agent $j$ can obtain a better allocation by misreporting her preference as another lexicographic preference $\succ'_j$. Throughout, we use $P=\text{MPS}{}(R)$ and $Q=\text{MPS}{}(R')$, where $R'=(\succ'_j,\succ_{-j})$. By the assumption of beneficial misreporting, we have $Q\sd{j}P$. We will show that $Q_j=P_j$. We claim that $\srd{Q}{i}_j=\srd{P}{i}_j$ for every type $i \leq p$, where $\srd{P}{i}_j$ is agent $j$'s single-type allocation of type $i$ at $P$ and $\srd{Q}{i}_j$ the same at $Q$. W.l.o.g. let the types be labeled such that $1 \impord{j} \cdots \impord{j} p$. For convenience, for any type $i\le p$, we define $\itemwt{i}$ to be an item $o$ of type $i$, and $(\itemwt{i}*)$ to be an arbitrary bundle containing $\itemwt{i}$. \begin{clm}\label{cl:mpstops} Under lexicographic preferences, $(\text{MPS}{}(R))^i=PS(R^i)$, where ${R^i=(\prefintype{j}{i})_{j\leq n}}$. \end{clm} The claim is obtained by comparing the execution of \text{MPS}{} with PS in each type. The full proof of the claim is in \Cref{sec:cl:mpstops}.
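The claim above reduces \text{MPS}{} under lexicographic preferences to an independent run of PS within each type. For intuition, the single-type PS eating dynamics can be simulated directly; the following Python sketch (function and variable names are ours, not the paper's notation, and it assumes unit supply per item and unit eating speed) computes the PS shares for one type.

```python
from fractions import Fraction

def probabilistic_serial(prefs):
    """Probabilistic Serial (PS) for one type: each agent eats her favorite
    still-available item at unit speed until time 1 or exhaustion.

    prefs: list of preference lists, prefs[j] = items ordered best-first.
    Returns: dict mapping (agent, item) -> Fraction share."""
    n = len(prefs)
    supply = {o: Fraction(1) for p in prefs for o in p}
    shares = {(j, o): Fraction(0) for j in range(n) for o in supply}
    t = Fraction(0)
    while t < 1 and any(s > 0 for s in supply.values()):
        # each agent targets her most preferred available item
        target = {}
        for j in range(n):
            avail = [o for o in prefs[j] if supply[o] > 0]
            if avail:
                target[j] = avail[0]
        if not target:
            break
        # group agents eating the same item, advance to the next exhaustion
        eaters = {}
        for j, o in target.items():
            eaters.setdefault(o, []).append(j)
        dt = min(supply[o] / len(js) for o, js in eaters.items())
        dt = min(dt, 1 - t)
        for o, js in eaters.items():
            for j in js:
                shares[(j, o)] += dt
            supply[o] -= dt * len(js)
        t += dt
    return shares
```

For instance, two agents who both rank item `a` over item `b` split both items evenly, while agents with opposite favorites each receive their favorite whole.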
Since PS satisfies \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}~\cite{Bogomolnaia01:New}, we deduce from {\bf\Cref{cl:mpstops}} that for each type $i \leq p$, $\srd{Q}{i}\sd{j}\srd{P}{i} \Rightarrow \srd{Q}{i}_j=\srd{P}{i}_j$. Therefore it suffices to prove $\srd{Q}{i}\sd{j}\srd{P}{i}$ for each type, which we do by induction on $i$. For the base case, suppose for the sake of contradiction that $\srd{Q}{1} \nsd{j} \srd{P}{1}$. That means there exists $\hitemwt{1}$ such that $\sum_{o \in \ucs(\prefintype{j}{1},\hitemwt{1})}\srd{p}{1}_{j,o} > \sum_{o \in \ucs({\prefintype{j}{1},\hitemwt{1}})}{\srd{q}{1}_{j,o}}$. Let $\hat{\bf{x}}$ be the least preferred bundle containing $\hitemwt{1}$. It follows that $$\sum_{o \in \ucs(\prefintype{j}{1},\hitemwt{1})}\srd{p}{1}_{j,o}= \sum_{{\bf{x}}\in\ucs(\succ_j,\hat{\bf{x}})}{p_{j,{\bf{x}}}}> \sum_{{\bf{x}}\in\ucs(\succ_j,\hat{\bf{x}})}{q_{j,{\bf{x}}}}= \sum_{o \in \ucs({\prefintype{j}{1},\hitemwt{1}})}{\srd{q}{1}_{j,o}},$$ which contradicts our assumption that $Q\sd{j}P$. Thus $\srd{Q}{1}_j=\srd{P}{1}_j$. Next we prove that $\srd{Q}{i}_j=\srd{P}{i}_j$ for any type $i \leq p$, given that $\srd{Q}{\hat i}_j = \srd{P}{\hat i}_j$ for every $\hat i < i$. For arbitrary $\hitemwt{1},\hitemwt{2},\dots,\hitemwt{i}$, let $\hat{\bf{x}}=(\hitemwt{1},\dots,\hitemwt{i}*)$ be the least preferred bundle containing them.
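The argument above, and several that follow, compare cumulative shares over upper contour sets. A minimal numeric check of the sd relation for one agent (a Python sketch with hypothetical names; allocations are dicts from bundle labels to shares, and a small tolerance absorbs floating-point error) looks like:

```python
def sd_dominates(pref, q, p, tol=1e-12):
    """Return True iff allocation q weakly sd-dominates p w.r.t. the strict
    order `pref` (bundles listed best-first): for every upper contour set,
    the cumulative share under q is at least that under p."""
    cq = cp = 0.0
    for x in pref:
        cq += q.get(x, 0.0)  # cumulative share of q over the prefix
        cp += p.get(x, 0.0)  # cumulative share of p over the prefix
        if cq < cp - tol:
            return False
    return True
```

Applied to two allocations of the same agent, `sd_dominates` returning True for `(q, p)` but False for `(p, q)` witnesses a strict sd improvement.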
Because $Q\sd{j}P$, we have $\sum_{{{\bf{x}}}\in \ucs(\succ_j,\hat{\bf{x}})}{p_{j,{{\bf{x}}}}} \leq \sum_{{{\bf{x}}}\in \ucs(\succ_j,\hat{\bf{x}})}{q_{j,{{\bf{x}}}}}$, and we decompose the left-hand sum as \begin{equation}\label{sdspi0} \begin{split} &\sum_{{{\bf{x}}}\in\{(\itemwt{1}*)|\itemwt{1} \prefintype{j}{1} \hitemwt{1}\}}{p_{j,{{\bf{x}}}}} + \sum_{{{\bf{x}}}\in\{(\hitemwt{1},\itemwt{2}*)|\itemwt{2} \prefintype{j}{2} \hitemwt{2}\}}{p_{j,{{\bf{x}}}}}\\ &+\dots+\sum_{{{\bf{x}}}\in\{(\hitemwt{1},\hitemwt{2},\dots,\itemwt{i-1}*)|\itemwt{i-1} \prefintype{j}{i-1} \hitemwt{i-1}\}}{p_{j,{{\bf{x}}}}}+\sum_{{{\bf{x}}}\in S_i}{p_{j,{{\bf{x}}}}}\\ \end{split} \end{equation} Here we use $S_i$ to refer to the set $\{(\hitemwt{1},\dots,\hitemwt{i-1},\itemwt{i}*)|\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})\}$. {\bf\Cref{cl:eqwtype}} follows from the observation that agents consume bundles until they become unavailable. The full proof of the claim is in \Cref{sec:cl:eqwtype}. \begin{clm}\label{cl:eqwtype} For $P=\text{MPS}(R)$, given a fixed $i'$ and $\srd{Q}{\hat i}_j=\srd{P}{\hat i}_j$ for any ${\hat i} \le i'$, if {$Q\sd{j}P$}, then for $S_{\hat i}=\{(\hitemwt{1},\dots,\hitemwt{{\hat i}-1},\itemwt{\hat i}*)|\itemwt{\hat i}\in\ucs(\prefintype{j}{\hat{i}},\hitemwt{\hat i})\}, \sum_{{\bf{x}}\in S_{\hat i}}{p_{j,{{\bf{x}}}}}=\sum_{{\bf{x}}\in S_{\hat i}}{q_{j,{{\bf{x}}}}}$. \end{clm} With (\ref{sdspi0}) and {\bf\Cref{cl:eqwtype}} we obtain $\sum_{{{\bf{x}}}\in S_i}{p_{j,{{\bf{x}}}}} \leq \sum_{{{\bf{x}}}\in S_i}{q_{j,{{\bf{x}}}}}$. Summing this inequality over $\hitemwt{1},\hitemwt{2},\dots,\hitemwt{i-1}$ gives \begin{equation*} \sum_{\hitemwt{1}} \sum_{\hitemwt{2}} \dots \sum_{\hitemwt{i-1}} \sum_{{{\bf{x}}}\in S_i}{p_{j,{\bf{x}}}} \leq \sum_{\hitemwt{1}} \sum_{\hitemwt{2}} \dots \sum_{\hitemwt{i-1}} \sum_{{{\bf{x}}}\in S_i}{q_{j,{{\bf{x}}}}}.
\end{equation*} Then we have \begin{displaymath} \sum_{{{\bf{x}}}\in\{(\itemwt{i}*)|\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})\}}{p_{j,{{\bf{x}}}}} \leq \sum_{{{\bf{x}}}\in\{(\itemwt{i}*)|\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})\}}{q_{j,{{\bf{x}}}}} \end{displaymath} This is equivalent to $\sum_{\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})}{\srd{p}{i}_{j,\itemwt{i}}} \leq \sum_{\itemwt{i} \in \ucs(\prefintype{j}{i},\hitemwt{i})}{\srd{q}{i}_{j,\itemwt{i}}}$, which means $\srd{Q}{i} \sd{j} \srd{P}{i}$. We have proved that $\srd{Q}{i}_j=\srd{P}{i}_j$ for every $i \leq p$. By {\bf\Cref{cl:eqwtype}}, agent $j$ has the same share over each upper contour set and therefore $Q_j=P_j$. \qed \end{proof} In \Cref{thm:char} we provide two characterizations for \text{MPS}{}. The \text{leximin-}\allowbreak\text{optimality}{} reflects the egalitarian nature of the PS mechanism, which means that the mechanism always tries to balance the shares of top-ranked items/bundles among all the agents. \text{MPS}{} retains this nature for \mtaps{} with strict linear preferences. The \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{} is extended from \cite{Hashimoto14:Two}. We note that \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{} involves the cumulative shares regarding items, different from the version in \cite{Wang19:Multi} which only involves the share of each bundle. \begin{theorem}\label{thm:char} \text{MPS}{} is the unique mechanism which \begin{enumerate*}[label=(\Roman*)] \item satisfies \text{leximin-}\allowbreak\text{optimality}{}, and \item satisfies \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{}.\end{enumerate*} \end{theorem} \begin{proof} \noindent{\bf (\uppercase\expandafter{\romannumeral1}) \text{leximin-}\allowbreak\text{optimality}{}:} Let $R$ be any profile of strict linear preferences. Let $P=\text{MPS}{}(R)$ and ${\bf{u}}=(u_{j,{\bf{x}}})_{j\le n, {\bf{x}}\in\mathcal{D}}$.
For $j\le n$ and bundle ${\bf{x}}\in\mathcal{D}$, let $u_{j,{\bf{x}}}=\sum_{\hat{\bf{x}}\in\ucs(\succ_j,{\bf{x}})}{p_{j,\hat{\bf{x}}}}$. Suppose for the sake of contradiction that $P$ is not \text{leximin-}\allowbreak\text{optimal}{}. Then, there exists a \ram{} assignment $Q$, such that for ${\bf{v}}=(v_{j,{\bf{x}}})_{j\le n, {\bf{x}}\in\mathcal{D}}$, where $v_{j,{\bf{x}}}=\sum_{\hat{\bf{x}}\in\ucs(\succ_j,{\bf{x}})}{q_{j,\hat{\bf{x}}}}$, it holds that $({\bf{v}},{\bf{u}})\in L$, where $L$ is the leximin relation (\Cref{dfn:leximin}). Let ${\bf{u}}^*$ be constructed by sorting the components of ${\bf{u}}$ in ascending order, and let ${\bf{v}}^*$ be defined from ${\bf{v}}$ similarly. We prove by induction that ${\bf{v}}^*={\bf{u}}^*$, i.e. $v^*_k=u^*_k$ for all $k$, and that the corresponding assignments $P$ and $Q$ are identical. \noindent{\bf Base case.} As the basis of induction, we prove that $u^*_1=v^*_1$. Suppose for the sake of contradiction that $u^*_1<v^*_1$. We use the tuple $(j,{\bf{x}})$ as the index of the component $u_{j,{\bf{x}}}$, for every agent $j\le n$ and bundle ${\bf{x}}\in\mathcal{D}$. Let $S_1$ be the set of indices such that for each $(j,{\bf{x}})\in S_1$, $u_{j,{\bf{x}}} = u^*_1$. We consider the corresponding elements in ${\bf{v}}$ indicated by the set $S_1$. We note that for each $(j,{\bf{x}})\in S_1$, there are two possible cases: \begin{enumerate}[label={\em Case \arabic*}.,wide,labelindent=0pt,topsep=0pt]% \item\label{lexicase1} ${\bf{x}}$ is the most preferred bundle w.r.t. $\succ_{j}$. Then, $p_{j,{\bf{x}}}=\allowbreak\sum_{{\bf{x}}'\in\ucs({j},{{\bf{x}}})}{p_{j,{\bf{x}}'}}\allowbreak=\allowbreak u^*_1$, and $u^*_1<v^*_1$ implies that $p_{j,{\bf{x}}}=u^*_1<v^*_1\leq\sum_{{\bf{x}}'\in\ucs({j},{{\bf{x}}})}{q_{j,{\bf{x}}'}}=q_{j,{\bf{x}}}$. \item\label{lexicase2} ${\bf{x}}$ is {\em not} the most preferred bundle w.r.t. $\succ_{j}$. Then, for the most preferred bundle $\hat{\bf{x}}$ w.r.t.
$\succ_{j}$, there must exist $p_{j,\hat{\bf{x}}}=u^*_1$ as in {\em Case 1}, since $u^*_1\allowbreak\le\sum_{{\bf{x}}'\in\ucs({j},{\hat{\bf{x}}})}{p_{j,{\bf{x}}'}}\allowbreak\le\sum_{{\bf{x}}'\in\ucs({j},{{\bf{x}}})}{p_{j,{\bf{x}}'}}\allowbreak=u^*_1$. This implies that $p_{j,{\bf{x}}}=0\leq q_{j,{\bf{x}}}$. \end{enumerate} From the execution of \text{MPS}{}, ${\bf{x}}$ must be unavailable at time $u^*_1$ because some items in it are exhausted at that time. Let $B_1$ denote the set of the items exhausted at time $u^*_1$. For any $o\in B_1$, we have that $\sum_{(j',{\bf{x}}')\in S_1,o\in{\bf{x}}'}{p_{j',{\bf{x}}'}}=1$. With the inequalities in {\em Case 1} and {\em Case 2} we have $\sum_{(j',{\bf{x}}')\in S_1,o\in{\bf{x}}'}{q_{j',{\bf{x}}'}}>\sum_{(j',{\bf{x}}')\in S_1,o\in{\bf{x}}'}{p_{j',{\bf{x}}'}}=1$ for some $o\in B_1$, which is a contradiction. Having shown that $u^*_{1}=v^*_{1}$, we claim that the shares indicated by $S_1$ are equal in $P$ and $Q$, i.e. for any $(j,{\bf{x}})\in S_{1}$, $p_{j,{\bf{x}}}=q_{j,{\bf{x}}}$. Suppose for the sake of contradiction that there exists a tuple $(j,{\bf{x}})\in S_{1}$ such that $p_{j,{\bf{x}}}<q_{j,{\bf{x}}}$. Then for any $o\in{\bf{x}}$ such that $o\in B_1$, there must exist $(\hat j,\hat{\bf{x}})\in S_{1}$ such that $\hat{\bf{x}}$ contains $o$ and $p_{\hat j,\hat{\bf{x}}}>q_{\hat j,\hat{\bf{x}}}\geq 0$, because $\sum_{(j',{\bf{x}}')\in S_1,o\in{\bf{x}}'}{p_{j',{\bf{x}}'}}=1$ for any $o\in B_1$. By {\em Case 1} we know that $\hat{\bf{x}}$ is most preferred by agent $\hat j$, and therefore $u^*_1=\sum_{{\bf{x}}'\in\ucs(\succ_{\hat j},\hat{\bf{x}})}p_{\hat j,{\bf{x}}'}>\sum_{{\bf{x}}'\in\ucs(\succ_{\hat j},\hat{\bf{x}})}q_{{\hat j},{\bf{x}}'}$, which means $({\bf{u}},{\bf{v}})\in L$, a contradiction to the assumption. The claim also means that for all $k\le |S_1|$, $u^*_k = u^*_1 = v^*_k$.
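The base case hinges on how the leximin relation $L$ compares the two sorted vectors ${\bf{u}}^*$ and ${\bf{v}}^*$. The comparison itself is elementary and can be sketched in a few lines of Python (a minimal sketch, assuming the standard form of the relation: sort ascending, compare at the first differing component; names are ours):

```python
def leximin_prefers(v, u):
    """Return True iff vector v strictly leximin-dominates u: after sorting
    both in ascending order, v is strictly larger at the first index where
    the sorted vectors differ."""
    vs, us = sorted(v), sorted(u)
    for a, b in zip(vs, us):
        if a != b:
            return a > b
    return False  # identical sorted vectors: no strict dominance
```

For example, the vector `[0.5, 0.5]` leximin-dominates `[0.25, 0.75]`, since its smallest component is larger.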
\noindent{\bf Inductive step.} Next, we prove by induction that for any $k>1$ with $u^*_k>u^*_{k-1}$, we have $u^*_k=v^*_k$, given that $u^*_{l}=v^*_{l}$ for $l<k$ and $p_{j,{\bf{x}}}=q_{j,{\bf{x}}}$ for any $(j,{\bf{x}})\in S_{l},l<k$. Suppose for the sake of contradiction that $u^*_k<v^*_k$. Let $S_k$ be the set of indices such that for each $(j,{\bf{x}})\in S_k$, $u_{j,{\bf{x}}} = u^*_k$. For $(j,{\bf{x}})\in S_k$, let $\hat{\bf{x}}$ be the least preferred bundle in $\{{\bf{x}}'|{\bf{x}}'\succ_{j}{\bf{x}}\}$ w.r.t. $\succ_j$, with corresponding index $(j,\hat{\bf{x}})$. Then, we have $p_{j,{\bf{x}}}=u^*_k-u_{j,\hat{\bf{x}}}$. Let $u^*_{l}=u_{j,\hat{\bf{x}}}$. By our initial assumption that $({\bf{v}},{\bf{u}})\in L$, we claim that for all $(j,{\bf{x}})\in S_k$, $p_{j,{\bf{x}}}\leq q_{j,{\bf{x}}}$, with strict inequality for some indices, because \begin{enumerate}[label={\em Case \arabic*'}.,wide,labelindent=0pt,topsep=0pt] \item if $(j,\hat{\bf{x}})\notin S_k$, then we have that $l<k$ and $u^*_k>u^*_{l}=v^*_{l}$, and therefore $p_{j,{\bf{x}}}=u^*_k-u^*_{l}< v^*_k-v^*_{l}=q_{j,{\bf{x}}}$, \item if $(j,\hat{\bf{x}})\in S_k$, then $u^*_{l}=u^*_{k}$ and $p_{j,{\bf{x}}}=0\leq q_{j,{\bf{x}}}$. \end{enumerate} W.l.o.g. let ${\bf{x}}$ satisfy $p_{j,{\bf{x}}}< q_{j,{\bf{x}}}$. We know ${\bf{x}}$ is unavailable at time $u^*_k$ in the execution of \text{MPS}{} because of exhausted items in it. Let $B_k$ be the set of items exhausted at time $u^*_k$. For any $o\in B_k$, we have \begin{equation}\label{char:3} \begin{split} \sum_{(j',{\bf{x}}')\in \cup_{l<k}S_{l},o\in{\bf{x}}'}{p_{j',{\bf{x}}'}}&=\sum_{(j',{\bf{x}}')\in \cup_{l<k}S_{l},o\in{\bf{x}}'}{q_{j',{\bf{x}}'}}\\ \sum_{(j',{\bf{x}}')\in \cup_{l<k}{S_{l}},o\in{\bf{x}}'}{p_{j',{\bf{x}}'}}&+ \sum_{(j',{\bf{x}}')\in S_k,o\in{\bf{x}}'}{p_{j',{\bf{x}}'}}=1.
\end{split} \end{equation} Thus we have $\sum_{(j',{\bf{x}}')\in \cup_{l\le k}S_{l},o\in{\bf{x}}}{q_{j',{\bf{x}}'}}>\allowbreak\sum_{(j',{\bf{x}}')\in \cup_{l\le k}S_{l},o\in{\bf{x}}}{p_{j',{\bf{x}}'}}=1$ for some $o\in B_k$, which is a contradiction. Then we show that the shares indicated by $S_{k}$ in $P$ and $Q$ are equal. With $u^*_{k}=v^*_{k}$, we claim that \begin{equation} \text{for any }(j,{\bf{x}})\in S_{k}, p_{j,{\bf{x}}}=q_{j,{\bf{x}}}.\label{char:4} \end{equation} If a tuple $(j,{\bf{x}})\in S_k$ satisfies $p_{j,{\bf{x}}}<q_{j,{\bf{x}}}$, there must exist some $(\hat j,\hat{\bf{x}})\in S_k$ such that $p_{\hat j,\hat{\bf{x}}}>q_{\hat j,\hat{\bf{x}}}\ge 0$ by (\ref{char:3}), since $p_{j',{\bf{x}}'}=q_{j',{\bf{x}}'}$ for $(j',{\bf{x}}')\in S_{l},l<k$. Because $p_{\hat j,\hat{\bf{x}}}>0$, by {\em Case 1'} we know that $u^*_k=u_{\hat j,\hat{\bf{x}}}>v_{\hat j,\hat{\bf{x}}}$, which means $({\bf{u}},{\bf{v}})\in L$, a contradiction to the assumption. By the claim we also know that for all $k\le l< k+|S_k|$, $u^*_l = u^*_k$. By induction, we have $u^*_k=v^*_k$ for all $k\leq n|\mathcal{D}|$ and therefore ${\bf{u}}={\bf{v}}$. Besides, since $\bigcup_{k\le|{\bf{u}}|}{S_k}=N\times\mathcal{D}$, from (\ref{char:4}) we have that $p_{j,{\bf{x}}}=q_{j,{\bf{x}}}$ for any $j\in N,{\bf{x}} \in \mathcal{D}$. \vspace{1em} \noindent{\bf (\uppercase\expandafter{\romannumeral2}) \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{}:} We use the relation between time and consumption to prove satisfaction and uniqueness. Given any \mtap{} $(N,M)$ and preference profile $R$, let $P=\text{MPS}(R)$ in the following proof. \noindent{\bf Satisfaction:} For an arbitrary agent $j$ and any bundle ${\bf{x}}$, let $t=\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}p_{j,{\bf{x}}'}$. Suppose for the sake of contradiction that there exists an agent $k$ who gets shares of a bundle $\hat{\bf{x}}$ containing $o$, i.e.
$p_{k,\hat{\bf{x}}}>0$, and $\hat t=\sum_{{\bf{x}}'\in\ucs(\succ_k,\hat{\bf{x}})}p_{k,{\bf{x}}'}>t$. From the relation of time and consumption, we know that at time $t$, ${\bf{x}}$ is unavailable and therefore the supply of some item $o\in{\bf{x}}$ is exhausted. We note that $t$ is not necessarily the exact time when ${\bf{x}}$ becomes unavailable. We also know that at time $\hat t$, $\hat{\bf{x}}$ becomes unavailable, and therefore for $t<t'<\hat t$, $\hat{\bf{x}}$ is still available, which also means $o\in\hat{\bf{x}}$ is not exhausted, which is a contradiction. \noindent{\bf Uniqueness:} Suppose for the sake of contradiction that $Q$ is another \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fair}{} assignment. For some agent $j$, by comparing $P$ and $Q$ we can find a bundle ${\bf{x}}$ which satisfies $p_{j,{\bf{x}}}\neq q_{j,{\bf{x}}}$ and $p_{j,\hat{\bf{x}}}=q_{j,\hat{\bf{x}}}$ for $\hat{\bf{x}}\succ_j{\bf{x}}$. W.l.o.g. let agent $j$ and ${\bf{x}}$ just mentioned satisfy that $t=\min(\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{p_{j,{\bf{x}}'}},\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{q_{j,{\bf{x}}'}})$ is the smallest among all the agents and bundles. By the selection of $j$ and ${\bf{x}}$, we have $p_{k,\hat{\bf{x}}}=q_{k,\hat{\bf{x}}}$ for $(k,\hat{\bf{x}})\in S=\{(k,\hat{\bf{x}})|\sum_{{\bf{x}}'\in\ucs(\succ_k,\hat{\bf{x}})}{q_{k,{\bf{x}}'}}\le t\}$. We first claim that $p_{j,{\bf{x}}}> q_{j,{\bf{x}}}$, i.e. $t=\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{q_{j,{\bf{x}}'}}$. Otherwise, if $p_{j,{\bf{x}}}< q_{j,{\bf{x}}}$, then agent $j$ gets more shares of ${\bf{x}}$ in $Q$ and therefore demands more supply of some item contained in ${\bf{x}}$. Since this assumption also means $t=\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{p_{j,{\bf{x}}'}}$, we know that in $P$ there exists an item $o\in{\bf{x}}$ exhausted at $t$ which makes ${\bf{x}}$ unavailable, and the supply of $o$ is also used up in $Q$ for agents and bundles in $S$.
Therefore the extra demand of $o$ for agent $j$ on bundle ${\bf{x}}$ comes from some agent $k$ on a bundle $\hat{\bf{x}}$ containing $o$ such that $(k,\hat{\bf{x}})\in S$, which means $q_{k,\hat{\bf{x}}}<p_{k,\hat{\bf{x}}}$, a contradiction. Now $p_{j,{\bf{x}}}> q_{j,{\bf{x}}}$, i.e. $t=\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{q_{j,{\bf{x}}'}}$, which means agent $j$ gives up some shares of ${\bf{x}}$ in $Q$, and thus for every $o\in{\bf{x}}$ we can find some agent $k$ and $\hat{\bf{x}}$ containing $o$ such that $k$ gains more shares of $\hat{\bf{x}}$ in $Q$, i.e. $q_{k,\hat{\bf{x}}}>p_{k,\hat{\bf{x}}}\ge 0$. By the definition of $t$, we have $\sum_{{\bf{x}}'\in\ucs(\succ_k,\hat{\bf{x}})}{q_{k,{\bf{x}}'}}> t=\sum_{{\bf{x}}'\in\ucs(\succ_j,{\bf{x}})}{q_{j,{\bf{x}}'}}$. Therefore every $o\in{\bf{x}}$ fails the condition of \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{}, which contradicts the assumption that $Q\neq P$ is \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fair}{}. \qed \end{proof} \begin{remark}\label{rm:mpsnsp} \text{MPS}{} does not satisfy \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{} for the full domain of strict linear preferences. \end{remark} \begin{proof} Consider an \mtap{} with two agents where $\succ_1,\succ_2$ are as below: \begin{center} \begin{tabular}{|c|c|} \hline Agent & Preferences \\\hline 1 & $1_F2_B\succ_11_F1_B\succ_12_F1_B\succ_12_F2_B$ \\ 2 & $1_F1_B\succ_22_F1_B\succ_22_F2_B\succ_21_F2_B$ \\\hline \end{tabular} \end{center} \text{MPS}{} outputs $P$ for this preference profile. If agent $1$ misreports $\succ_1$ as $\succ_1':2_F1_B\succ_11_F1_B\succ_11_F2_B\succ_12_F2_B$, then \text{MPS}{} outputs $P'$.
Both $P$ and $P'$ are shown below: \vspace{1em}\noindent \begin{minipage}{\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0 & 0.5 & 0.25 & 0.25 \\ 2 & 0.5 & 0 & 0.25 & 0.25 \\\hline \end{tabular} \end{center} \end{minipage} \hspace{0.1\linewidth} \begin{minipage}{0.4\linewidth} \begin{center} \centering \begin{tabular}{|c|cccc|} \hline\multirow{2}{*}{Agent} & \multicolumn{4}{c|}{$P'$}\\\cline{2-5} & $1_F1_B$ & $1_F2_B$ & $2_F1_B$ & $2_F2_B$ \\\hline 1 & 0 & 0.5 & 0.5 & 0 \\ 2 & 0.5 & 0 & 0 & 0.5 \\\hline \end{tabular} \end{center} \end{minipage} \end{minipage}\vspace{1em} We have that $P' \sd{1} P$ and $P'_1 \neq P_1$, since agent $1$'s share is shifted from $2_F2_B$ to $2_F1_B$, which violates the requirement of \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proofness}{}, i.e. that $P'_1 = P_1$ whenever $P' \sd{1} P$.\qed \end{proof} \begin{comment} \section{A New Result} Given a \ram{} assignment $P$ for a certain \mtap{}, a set $C \subseteq Imp(P)$ is a generalized cycle if it holds for every $o \in M$ that: if an improvable tuple $({\bf{x}}_1,\hat{\bf{x}}_1)\in C$ satisfies that $o\in {\bf{x}}_1$, then there exists a tuple $({\bf{x}}_2,\hat{\bf{x}}_2)\in C$ such that $o\in \hat{\bf{x}}_2$. \text{MPS}{} is a special eating algorithm where the eating speed of agents is $\omega=1$ during $[0,1]$. The eating algorithm only restricts that $\int_0^1{\omega}=1$. \begin{theorem} An assignment admits no generalized cycle if and only if the assignment is the output of an eating algorithm. \end{theorem}\footnote{\color{green}haibin: The result hold if and only if we remove the "and vice versa" in the definition of generalized cycle.
(check other proofs about the generalized cycle after changing of the definition.)}\footnote{\color{green}haibin: The proof under strict linear preference is almost same with under acyclic CP-net except we need a proposition to show agents have the best bundles in any set of available bundles under acyclic CP-net.} \begin{proof} $\Leftarrow$ Suppose $P$ is the output of an eating algorithm given a preference profile $R$. Suppose for the sake of contradiction that $P$ admits a generalized cycle $C$. Let $({\bf{x}}, \hat {\bf{x}})$ be an arbitrary improvable tuple in $C$. Let \emph{Seq} be the partial order on $M$ which reflects the time when items are exhausted. Formally, for any pair of items $o$ and $\hat o$ exhausted at times $t$ and $\hat t$, if $t \leq \hat t$, then $o$ \emph{Seq} $\hat o$. Let $\hat M = \{ o \in M : o \in {\bf{x}}, ({\bf{x}}, \hat {\bf{x}}) \in C \}$, and let $\hat o \in \hat M$ be one of top items according to \emph{Seq}. By definition of generalized cycles, there is an improvable tuple $({\bf{x}}, \hat{\bf{x}})\in C$ such that $\hat o \in \hat{\bf{x}}$. Since $({\bf{x}}, \hat{\bf{x}}) \in Imp(P)$, there exists an agent $i\in N$ such that ${\bf{x}} \succ_i \hat{\bf{x}}$ and $p_{i,\hat{\bf{x}}} > 0$. Hence when agent $i$ starts to consume $\hat{\bf{x}}$, ${\bf{x}}$ is unavailable and there is an item $o\in {\bf{x}}$ such that $o$ is unavailable when $j$ starts to consume $\hat{\bf{x}}$. If we use $t(o)$ and $t(\hat o)$ to stand for the times when $o$ and $\hat o$ exhausted in the eating algorithm respectively, we have $t(o) < t(\hat o)$ since $p_{i,\hat{\bf{x}}} > 0$. Besides, since $({\bf{x}}, \hat{\bf{x}}) \in C$, we have $o\in \hat M$, a contradiction to that $\hat o$ is one of top items according to \emph{Seq} in $\hat M$. $\Rightarrow$ Given any assignment $P$ under a preference profile $R$ that admits no generalized cycle. Let $M^0 = M$, $\mathcal{D}^0 = \mathcal{D}$. 
Let $B^s = \{ o \in M^{s-1} : \not\exists {\bf{x}}, \hat {\bf{x}} \in \mathcal{D}^{s-1} \text{ s.t. } o \in \hat {\bf{x}} \text{ and } ({\bf{x}}, \hat {\bf{x}}) \in Imp(P) \}$, $M^s = M^{s-1} - B^s$ and $\mathcal{D}^s = \{ {\bf{x}} \in \mathcal{D} : \forall o \in {\bf{x}}, o \in M^s \}$ be the available bundles in $M^s$. For any $M^{s-1}\neq \emptyset$, $B^s \neq \emptyset$, otherwise, for any $o \in M^{s-1}$, there exists ${\bf{x}}, \hat {\bf{x}} \in \mathcal{D}^{s-1}$ s.t. $o \in \hat {\bf{x}}$ and $({\bf{x}}, \hat {\bf{x}}) \in Imp(P)$ which implies $Imp(P)$ admits a generalized cycle, a contradiction. Hence, $M^{np} = \emptyset$. Let $S = \min\{s : M^s = \emptyset\}$. Let $\omega_i(t)$ be the eating rate of agent $i$ at time $t$, $N({\bf{x}},\mathcal{D}^s)$ be the set of agents who prefer ${\bf{x}}$ best in the available bundles $\mathcal{D}^s$ for any ${\bf{x}} \in \mathcal{D}^s$. We show an eating algorithm here: \[ \forall s \leq S, \frac{s-1}{S} \leq t \leq \frac{s}{S}, \omega_i(t)\overset{\text{def}}{=} \left\{ \begin{aligned} S \times p_{i,{\bf{x}}}, \qquad & \exists o \in {\bf{x}}, o \in B^s \wedge i \in N({\bf{x}}, \mathcal{D}^{s-1}) \\ 0, \qquad & \text{otherwise.} \end{aligned} \right. \] Note that $i \in N({\bf{x}}, \mathcal{D}^{s-1})$ implies ${\bf{x}} \in \mathcal{D}^{s-1}$. Let $Q$ be the output of the eating algorithm. Next, we prove $P = Q$. For any $o \in B^1$, ${\bf{x}} \in \mathcal{D}$ where $o \in {\bf{x}}$, we prove $\forall i \in N, p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$. For any $o \in B^1$, suppose \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}p_{i,{\bf{x}}} < \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N}p_{i,{\bf{x}}} = 1. \] Then, there exists ${\bf{x}}, \hat {\bf{x}} \in \mathcal{D}^0$ and $\hat i \in N({\bf{x}}, \mathcal{D}^0)$ such that ${\bf{x}} \succ_{\hat i} \hat {\bf{x}}$, $o \in \hat {\bf{x}}$ and $p_{\hat i, \hat {\bf{x}}} > 0$, which implies $o \notin B^1$, a contradiction. 
Hence, we have \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}p_{i,{\bf{x}}} = \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N}p_{i,{\bf{x}}} = 1. \] That implies for any $o\in B^1$, ${\bf{x}} \in \mathcal{D}^0$ such that $o \in {\bf{x}}$, if $i \notin N({\bf{x}}, \mathcal{D}^0)$, $p_{i,{\bf{x}}} = 0$. For any $o \in B^1$, ${\bf{x}} \in \mathcal{D}^0$ such that $o \in {\bf{x}}$ and $i \in N({\bf{x}}, \mathcal{D}^0)$, agent $i$ consumes $p_{i,{\bf{x}}}$ fraction of bundles totally in $0 \leq t \leq \frac{1}{S}$. Hence, ${\bf{x}}$ is available in $0 \leq t \leq \frac{1}{S}$ and agent $i$ consumes ${\bf{x}}$ in $0 \leq t \leq \frac{1}{S}$. Therefore, $q_{i,{\bf{x}}} \geq \frac{1}{S} \times S \times p_{i,{\bf{x}}} = p_{i, {\bf{x}}}$. Hence, \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}q_{i,{\bf{x}}} \geq \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}p_{i,{\bf{x}}} = 1. \] Since, \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}q_{i,{\bf{x}}} \leq \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}}\sum_{i\in N}q_{i,{\bf{x}}} = 1, \] we have \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}q_{i,{\bf{x}}} = \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^0}\sum_{i\in N({\bf{x}},\mathcal{D}^0)}p_{i,{\bf{x}}} = 1. \] That implies: \begin{enumerate*}[label=(\roman*)] \item for any $o \in B^1$, ${\bf{x}} \in \mathcal{D}^0$ such that $o \in {\bf{x}}$ and $i \in N({\bf{x}}, \mathcal{D}^0)$, $p_{i,{\bf{x}}} = q_{i, {\bf{x}}}$; \item for any $o\in B^1$, ${\bf{x}} \in \mathcal{D}^0$ such that $o \in {\bf{x}}$ and $i \notin N({\bf{x}}, \mathcal{D}^0)$, $q_{i,{\bf{x}}} = 0$. \end{enumerate*} Therefore, for any $o \in B^1$, ${\bf{x}} \in \mathcal{D}$ such that $o \in {\bf{x}}$ and any $i \in N$, $p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$. 
Assume for any $o \in \bigcup_{k = 1}^s B^k$, ${\bf{x}} \in \mathcal{D}$ such that $o \in {\bf{x}}$ and any $i \in N$, \begin{equation}\label{equ:assumegc} p_{i,{\bf{x}}} = q_{i,{\bf{x}}}. \end{equation} Then, we prove for any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}$ where $o \in {\bf{x}}$ and any $i \in N$, $p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$. If ${\bf{x}} \notin \mathcal{D}^s$, there is $\hat o \in \bigcup_{k = 1}^s B^k$ such that $\hat o \in {\bf{x}}$ and we have $p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$ by assumption (\ref{equ:assumegc}). For any $o \in B^{s+1}$, suppose \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{i\in N({\bf{x}},\mathcal{D}^s)}p_{i,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\notin\mathcal{D}^s}\sum_{i \in N}p_{i,{\bf{x}}} < \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}}\sum_{i\in N}p_{i,{\bf{x}}} = 1. \] Then, there exists ${\bf{x}}, \hat {\bf{x}} \in \mathcal{D}^s$ and $\hat i \in N({\bf{x}}, \mathcal{D}^s)$ such that ${\bf{x}} \succ_{\hat i} \hat {\bf{x}}$, $o \in \hat {\bf{x}}$ and $p_{\hat i, \hat {\bf{x}}} > 0$, which implies $o \notin B^{s+1}$, a contradiction. Hence, we have \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{i\in N({\bf{x}},\mathcal{D}^s)}p_{i,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\notin\mathcal{D}^s}\sum_{i \in N}p_{i,{\bf{x}}} = \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}}\sum_{i\in N}p_{i,{\bf{x}}} = 1. \] That implies for any $o\in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ such that $o \in {\bf{x}}$, if $i \notin N({\bf{x}}, \mathcal{D}^s)$, $p_{i,{\bf{x}}} = 0$. For any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ such that $o \in {\bf{x}}$ and $i \in N({\bf{x}}, \mathcal{D}^s)$, agent $i$ consumes $p_{i,{\bf{x}}}$ fraction of bundles totally in $\frac{s}{S} \leq t \leq \frac{s+1}{S}$. 
By the assumption (\ref{equ:assumegc}), for any $\hat o \in M^s$, $\hat o$ remains at least $\sum_{\hat o \in \hat {\bf{x}}, \hat {\bf{x}} \in \mathcal{D}^s}\sum_{i\in N(\hat {\bf{x}}, \mathcal{D}^s)}p_{i,{\bf{x}}}$ at time $t = \frac{s}{S}$. Hence, ${\bf{x}}$ is available in $\frac{s}{S} \leq t \leq \frac{s+1}{S}$ and agent $i$ consumes ${\bf{x}}$ in $\frac{s}{S} \leq t \leq \frac{s+1}{S}$. Therefore, $q_{i,{\bf{x}}} \geq \frac{1}{S} \times S \times p_{i,{\bf{x}}} = p_{i, {\bf{x}}}$. Hence, \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{i\in N({\bf{x}},\mathcal{D}^s)}q_{i,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\notin\mathcal{D}^s}\sum_{i \in N}q_{i,{\bf{x}}} \geq \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{i\in N({\bf{x}},\mathcal{D}^s)}p_{i,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\notin\mathcal{D}^s}\sum_{i \in N}p_{i,{\bf{x}}} = 1. \] Since, \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{i\in N({\bf{x}},\mathcal{D}^s)}q_{i,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\notin\mathcal{D}^s}\sum_{i \in N}q_{i,{\bf{x}}} \leq \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}}\sum_{i\in N}q_{i,{\bf{x}}} = 1, \] we have \[ \sum_{o\in{\bf{x}},{\bf{x}}\in\mathcal{D}^s}\sum_{i\in N({\bf{x}},\mathcal{D}^s)}q_{i,{\bf{x}}} + \sum_{o\in{\bf{x}},{\bf{x}}\notin\mathcal{D}^s}\sum_{i \in N}q_{i,{\bf{x}}} = 1, \] That implies: \begin{enumerate*}[label=(\roman*)] \item for any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ such that $o \in {\bf{x}}$ and $i \in N({\bf{x}}, \mathcal{D}^s)$, $p_{i,{\bf{x}}} = q_{i, {\bf{x}}}$; \item for any $o\in B^{s+1}$, ${\bf{x}} \in \mathcal{D}^s$ such that $o \in {\bf{x}}$ and $i \notin N({\bf{x}}, \mathcal{D}^s)$, $q_{i,{\bf{x}}} = 0$. \end{enumerate*} Therefore, for any $o \in B^{s+1}$, ${\bf{x}} \in \mathcal{D}$ such that $o \in {\bf{x}}$ and any $i \in N$, $p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$. 
By induction, we have for any $o \in \bigcup_{k = 1}^S B^k$, ${\bf{x}} \in \mathcal{D}$ such that $o \in {\bf{x}}$ and any $i \in N$, $p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$. That is to say, for any ${\bf{x}} \in \mathcal{D}$ and $i \in N$, $p_{i,{\bf{x}}} = q_{i,{\bf{x}}}$.\qed \end{proof} \end{comment} \section{Conclusion and Future Work} \label{Conclusion} In this paper, we first point out that it is impossible to design \text{sd-}\allowbreak\text{efficient}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{free}{} mechanisms for \mtaps{} with indivisible items. However, under the natural assumption that agents' preferences are lexicographic, we propose LexiPS{} as a mechanism which can deal with indivisible items while satisfying the desired efficiency and fairness properties of \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}. For divisible items, we show that \text{MPS}{} satisfies the stronger efficiency notion of \text{lexi-}\allowbreak\text{efficiency}{} in addition to \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{} under the unrestricted domain of linear preferences, and is \text{sd-}\allowbreak\text{weak-}\allowbreak\text{strategy}\allowbreak\text{proof}{} under lexicographic preferences, which complements the results in~\cite{Wang19:Multi}. We also provide two separate characterizations for \text{MPS}{} by \text{leximin-}\allowbreak\text{optimality}{} and \text{item-wise }\allowbreak\text{ordinal }\allowbreak\text{fairness}{}. We also show that every assignment that satisfies \text{no-}\allowbreak\text{generalized-}\allowbreak\text{cycle}{}, a sufficient condition for \text{sd-}\allowbreak\text{efficiency}{}, can be computed by an eating algorithm. Characterizing the domain of preferences under which it is possible to design mechanisms for \mtaps{} with indivisible items that are simultaneously fair, efficient, and strategyproof is an exciting topic for future research.
For divisible items, characterizing mechanisms for \mtaps{} that satisfy \text{sd-}\allowbreak\text{efficiency}{} and \text{sd-}\allowbreak\text{envy-}\allowbreak\text{freeness}{}, in addition to other combinations of desirable properties, is an interesting topic for future work. In addition, developing efficient and fair mechanisms for natural extensions of the \mtap{} problem, such as settings with demands for multiple units of each type or with initial endowments, is also an exciting new avenue for future research.
\section{Introduction} With the detection of black hole mergers by LIGO~\cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2017vtc} and the continuing lack of direct detection of WIMPs, interest in primordial black holes (PBHs)~\cite{1966AZh....43..758Z,Hawking:1971ei,Carr:1975qj} as candidates for dark matter has recently received renewed attention \cite{Bird:2016dcv,Clesse:2016vqa,Sasaki:2016jop,Kawasaki:2016pql,Carr:2016drx,Inomata:2016rbd,Nakama:2016gzw,Kuhnel:2017pwq,Chiba:2017rvs,Carr:2017jsz}. PBHs form from density fluctuations that are comparable to or exceed order unity at horizon crossing. For PBHs to be the dark matter therefore requires a large amplification of the inflationary power spectrum between cosmic microwave background (CMB) scales and PBH mass scales. In \S\ref{sec:nogo}, we show that independently of the detailed model of inflation, this amplification requires at least an ${\cal O}(1)$ violation of slow roll in canonical single-field inflation. In the recent literature, this has led to confusion as to which inflaton potentials support PBH formation~\cite{Garcia-Bellido:2017mdw,Ezquiaga:2017fvi} and how best to calculate the power spectrum for cases that do \cite{Drees:2011hb}. Conversely, a much larger than ${\cal O}(1)$ violation is not required, and so the various slow-roll (SR) and optimized slow-roll (OSR) approximations for the power spectrum reviewed in Appendix~\ref{sec:D2fs} perform differently. Using the inflection (\S\ref{sec:im}), running mass (\S\ref{sec:rm}) and slow-roll step (\S\ref{sec:eps}) models as examples, we show that the potential slow-roll formalism can yield qualitatively incorrect results whereas the optimized Hubble slow-roll hierarchy~\cite{Motohashi:2015hpa,Motohashi:2017gqb} can provide accurate results in most of the cases relevant to PBH formation where slow roll is not grossly violated.
\section{No go for slow roll \label{sec:nogo}} Following the pioneering work of \cite{Carr:1975qj}, we model PBH formation using the Press-Schechter (PS) approach where the PBH fraction is determined by a collapse threshold $\delta_c$ in the density field smoothed with a Gaussian at the horizon reentry scale. In order to relate the collapse fraction directly to the curvature power spectrum, we approximate the result with the analogous probability of Gaussian curvature fluctuations lying above a threshold $\zeta_c$~\cite{Drees:2011hb,Harada:2013epa,Young:2014ana} \begin{align} \label{betaz} \beta &\equiv \f{\rho_{\rm PBH}}{\rho_{\rm tot}} = 2 \int^\infty_{\zeta_c} d\zeta\, \f{1}{\sqrt{2\pi }\Delta_\zeta} e^{-\zeta^2/(2\Delta_\zeta^2)} \notag\\ &= {\rm erfc} \mk{\f{\zeta_c}{\sqrt{2} \Delta_\zeta}} \approx \sqrt{\f{2}{\pi }}\, \f{\Delta_\zeta}{\zeta_c} e^{-\zeta_c^2/(2\Delta_\zeta^2)} , \end{align} where the approximation assumes $\zeta_c \gg \Delta_\zeta$ so that PBHs form from rare peaks. Here the density and curvature thresholds are related by assuming a nearly scale-invariant curvature power spectrum for a few efolds around horizon crossing \cite{Drees:2011hb,Young:2014ana} \be \zeta_c = \f{9}{2\sqrt{2}} \delta_c . \end{equation} In the following estimates, we take $\zeta_c=1.3$ based on studies that show a range of $\delta_c = 0.4-0.5$ \cite{Musco:2012au,Harada:2013epa}. The additional factor 2 in (\ref{betaz}) is the usual PS bookkeeping that accounts for locally under threshold regions collapsing in globally over threshold regions. Note that $\Delta_\zeta^2$ is the variance of the curvature field per logarithmic interval in $k$ and hence $\beta$ represents the collapse fraction per $d\ln k$ in the spectrum. A large collapse fraction requires a large amplitude of $\Delta_\zeta^2$, much larger than \begin{equation} \Delta_\zeta^2(k_0) \approx 2.2\times 10^{-9} \label{CMBnorm} \end{equation} measured by the CMB at $k_0 = 0.05$ Mpc$^{-1}$~\cite{Ade:2015xua}.
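As a quick numerical cross-check of \eqref{betaz} (not part of the original analysis), the exact erfc form and its rare-peak approximation can be compared directly; the helper names below are ours:

```python
import math

def collapse_fraction(delta_zeta2, zeta_c=1.3):
    """Exact collapse fraction per d ln k: beta = erfc(zeta_c / (sqrt(2) sigma))."""
    sigma = math.sqrt(delta_zeta2)
    return math.erfc(zeta_c / (math.sqrt(2.0) * sigma))

def collapse_fraction_rare(delta_zeta2, zeta_c=1.3):
    """Rare-peak approximation of (betaz), valid for zeta_c >> Delta_zeta."""
    sigma = math.sqrt(delta_zeta2)
    return math.sqrt(2.0 / math.pi) * (sigma / zeta_c) * math.exp(-zeta_c**2 / (2.0 * sigma**2))

# For Delta_zeta^2 ~ 0.02, the amplitude later shown to be relevant for PBH
# dark matter, beta ~ 3e-19 and the two expressions agree to about a percent.
```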
Since the leading-order SR approximation gives the curvature power spectrum as \begin{equation} \Delta_\zeta^2 = \frac{ H_I^2 }{8\pi^2 \epsilon_H}, \label{SRpower} \end{equation} where $\epsilon_H = -d\ln H_I/dN$ with $H_I$ as the Hubble parameter during inflation and $N$ as the efolds of inflation, the amplification of the power spectrum required for PBH formation can in principle be achieved by making $\epsilon_H$ small. One might think that the slow-roll approximation still holds so long as $\epsilon_H$ is small and the expansion is nearly de Sitter $H_I \approx$\,const. However, even if $\epsilon_H$ itself is small, if its fractional variation per efold violates \begin{equation} \left| \frac{\Delta \ln \epsilon_H}{\Delta N}\right| \ll 1 \label{eqn:slowrollviolation} \end{equation} then the slow-roll condition is broken. The various slow-roll approximations for the power spectrum reviewed in Appendix~\ref{sec:D2fs} all fail if (\ref{eqn:slowrollviolation}) is strongly violated for a sustained period and perform differently for ${\cal O}(1)$ or transient violation ~\cite{Kinney:2005vj,Dvorkin:2009ne,Namjoo:2012aa,Martin:2012pe,Motohashi:2014ppa,Motohashi:2017aob,Motohashi:2017vdc}. Indeed without specifying a specific inflationary model, we shall now show that for PBHs to be all of the dark matter there must be at least an ${\cal O}(1)$ violation of the slow-roll condition (\ref{eqn:slowrollviolation}) in single-field inflation. The collapse fraction required for all of the dark matter to be PBHs depends on their mass. Smaller masses enter the horizon earlier during radiation domination and redshift more slowly than radiation once they collapse. We define the PBH mass as $M =\gamma M_H$, where \begin{equation} M_H\equiv \f{4\pi \rho}{3H^3}=\f{1}{2GH} \end{equation} is the horizon mass and $\gamma$ accounts for the efficiency of collapse. 
During radiation domination, the Hubble parameter $H$ is given by \be \label{FeqRD} H^2 = \left( \frac{g_{*}}{g_{*0}}\right)^{-1/3} \Omega_r H_0^2 a^{-4}, \end{equation} where we have assumed that the effective degrees of freedom in the entropy and energy densities approximately coincide $g_{*S}\approx g_*$ as they do before electron-positron annihilation in the standard model. Here $g_{*0}=3.36$ is the value of the latter today and the radiation energy density today is given by $\Omega_r h^2 = 4.18\times 10^{-5}$ for $T_{\rm CMB}=2.725\,$K and $N_{\rm eff} = 3.046$. We then obtain $ M$ as a function of the horizon crossing epoch $a_H$ \be M = \frac{\gamma}{2 GH} = 4.84 \times 10^{24} \gamma \left( \frac{g_*}{g_{*0}}\right)^{1/6} a_H^2 M_\odot .\end{equation} After formation, the PBH density dilutes as matter so the relic density today in a mass range around $ M$ of $|d\ln M| = 2 d\ln k$ is given by \be \frac{d\Omega_{\rm PBH} h^2}{d\ln M} =\frac{1}{2} \beta(M)\left( \frac{g_*}{g_{*0}} \right)^{-1/3} \Omega_r h^2 a_H^{-1},\end{equation} where we ignore mass accretion and evaporation. If we define \be \bar \beta= \frac{a_H}{2}\int_M^\infty \frac{dM'}{M'}\frac{\beta}{a_H} ,\end{equation} we obtain the cumulative abundance $>M$ \be \label{PBHdensity} \Omega_{\rm PBH} h^2 = \bar \beta \left( \frac{g_*}{g_{*0}} \right)^{-1/3} \Omega_r h^2 a_H^{-1}. \end{equation} Note that if $\beta(M)=$\,const.\ as it is for a scale-invariant power spectrum, then $\bar \beta=\beta$. We can invert (\ref{PBHdensity}) to find the value of $\bar\beta$ required to produce a given relic density \be \label{betam} \bar \beta = 1.3 \times 10^{-9} \gamma^{-1/2} \left( \frac{g_*}{g_{*0}} \right)^{1/4} \left( \frac{\Omega_{\rm PBH} h^2}{0.12} \right) \left(\frac{ M}{M_\odot} \right)^{1/2} . \end{equation} Given the cold dark matter density $\Omega_c h^2 = 0.1199\pm 0.0022$~\cite{Ade:2015xua}, this gives the $\bar\beta$ required for all of the dark matter to be in PBHs above $M$.
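The coefficient in \eqref{betam} can be reproduced by chaining the horizon-mass relation with the inversion of \eqref{PBHdensity}; a sketch using the constants quoted above (function names are illustrative):

```python
import math

M_COEFF = 4.84e24     # M = M_COEFF * gamma * (g*/g*0)^(1/6) * a_H^2 M_sun, from the text
OMEGA_R_H2 = 4.18e-5  # radiation energy density today
G_STAR_0 = 3.36

def a_horizon(M_msun, gamma=1.0, g_star=106.75):
    """Scale factor at horizon entry for a PBH of mass M (solar masses)."""
    return math.sqrt(M_msun / (M_COEFF * gamma)) * (g_star / G_STAR_0) ** (-1.0 / 12.0)

def beta_required(M_msun, gamma=1.0, g_star=106.75, omega_pbh_h2=0.12):
    """bar-beta needed for a given Omega_PBH h^2, inverting Eq. (PBHdensity)."""
    aH = a_horizon(M_msun, gamma, g_star)
    return omega_pbh_h2 * (g_star / G_STAR_0) ** (1.0 / 3.0) * aH / OMEGA_R_H2
```

This reproduces the closed form \eqref{betam} to better than a percent, confirming the quoted $1.3\times10^{-9}$ prefactor.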
We can then set \eqref{betam} equal to \eqref{betaz} to obtain the necessary local value of $\Delta_\zeta^2$ corresponding to the chosen mass $M$. In order to determine whether the slow-roll condition (\ref{eqn:slowrollviolation}) is violated, we next need to estimate the change in efolds $\Delta N$ during inflation over which this enhancement occurs. The comoving scale corresponding to $M$ is $a_H H=a_{\rm exit} H_I$ and it exited the horizon during inflation $N$ efolds after the CMB scale $k_0=0.05$\,Mpc$^{-1} = a_0 H_I$ exited. Assuming that $H_I\approx$ const., we can estimate the efolds by the ratio of comoving scales \begin{align} \label{Nm} N &=\ln \left( \frac{a_{\rm exit}}{a_0} \right)= \ln \left( \frac{a_H H}{0.05\, {\rm Mpc}^{-1} }\right) \notag\\ &= 18.4 - \frac{1}{12} \ln \frac{g_*}{g_{*0}} + \frac{1}{2}\ln \gamma -\frac{1}{2} \ln \frac{ M}{M_\odot}. \end{align} This sets an upper bound on the duration of the change $\Delta N\le N$. To establish a lower bound on the variation $|\Delta \ln \epsilon_H/\Delta N|$, let us consider the largest $\Delta N$ or the smallest PBH mass that can compose the dark matter. Low-mass PBHs evaporate by Hawking radiation. Even if the mass of PBHs grows after formation by merging and accretion, in order to be the dark matter they must at least survive until matter radiation equality.
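Evaluating \eqref{Nm} numerically is straightforward; anticipating the evaporation-limited mass and required amplitude derived in the next paragraph, a sketch (with our own helper names) reproduces the $N\approx 42$ and $0.38$ figures quoted there:

```python
import math

G_STAR_0 = 3.36

def efolds_from_cmb(M_msun, gamma=1.0, g_star=106.75):
    """Efolds after the CMB scale at which the PBH scale exits, Eq. (Nm)."""
    return (18.4 - math.log(g_star / G_STAR_0) / 12.0
            + 0.5 * math.log(gamma) - 0.5 * math.log(M_msun))

# The evaporation-limited minimum mass (Mevap) and required amplitude (minenh),
# both quoted in the following paragraph, then bound the slow-roll variation:
N_max = efolds_from_cmb(1.5e-21)                 # ~ 42 efolds
slope_bound = math.log(0.021 / 2.2e-9) / N_max   # ~ 0.38, using Delta_zeta^2 ∝ 1/eps_H
```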
We therefore equate the time scale for evaporation by Hawking radiation \be t_{\rm ev} = \f{5120\pi G^2M^3}{\hbar c^4} = 6.6 \times 10^{74} \mk{\f{M}{M_\odot}}^3 ~{\rm s} \end{equation} to the time of equality using the radiation dominated estimate of \eqref{FeqRD} and $t \approx 1/2H$ to obtain \be \label{Mevap} M_{\rm min} = 1.5\times 10^{-21} \left(\frac{\Omega_m h^2}{0.14}\right)^{-2/3} M_\odot .\end{equation} With the Planck constraint $\Omega_m h^2 = 0.1426 \pm 0.0020$~\cite{Ade:2015xua}, this scale left the horizon at \be N \approx 42 +\frac{1}{2} \ln \gamma \end{equation} relative to CMB scales, where we have used $g_*=106.75$ which is appropriate for $M<1.2\times 10^{-6} \gamma M_\odot$ in the standard model. We can now put this together to place a lower bound on the level of slow-roll violation. Since $\gamma<1$ requires larger $\beta$ and larger $\Delta^2_\zeta$, we set $\gamma=1$ to be conservative and from \eqref{betaz} and \eqref{betam} obtain \be \label{minenh} \Delta_\zeta^2 = 0.021, \end{equation} as the level of locally scale-invariant power ($\bar\beta=\beta$) required at this mass. Given $\Delta_\zeta^2(k_0)$ on CMB scales (\ref{CMBnorm}), this requires \be\left| \f{\Delta\ln\epsilon_H}{\Delta N} \right| > 0.38 . \end{equation} We conclude that for PBHs to be the dark matter in single-field inflation, the slow-roll condition must be violated at least at ${\cal O}(1)$. Although the various approximations employed in this bound carry large uncertainties, including the choice of $\gamma$, the Gaussian approximation for rare peaks, etc., they all enter into $\Delta\ln \epsilon_H/\Delta N$ logarithmically. Even orders of magnitude changes in the mass scale and power spectrum amplification would not qualitatively change this result. \begin{figure}[h] \centering \includegraphics[width=0.92\columnwidth]{im.pdf} \caption{ Inflection model~\eqref{im-pot} with the parameter set $( a = 3/2, \beta = 4 \times 10^{-5}, \Delta N_{\rm SR} = 35)$~\cite{Garcia-Bellido:2017mdw}.
Potential (top), evolution of slow-roll parameters (middle), and curvature power spectrum (bottom). The curvature power spectrum is evaluated by numerical calculation (black, solid) and the potential slow-roll (SR-V) approximation (red, dashed). } \label{fig:im} \end{figure} \section{Inflection model \label{sec:im}} The inflection model was employed in \cite{Garcia-Bellido:2017mdw} to attempt to produce a large peak in the curvature power spectrum with a near-inflection point in the potential $V(\phi)$, at which $V'$ and $V''$ are close to zero. This means that \be \epsilon_V = \frac{1}{2} \left( \frac{V'}{V} \right)^2 \rightarrow 0. \end{equation} The field equation for the inflaton $\phi$ and the Friedmann equation can be written as \be \frac{\epsilon_V}{\epsilon_H} =\left( 1 + \frac{1}{2(3-\epsilon_H)} \frac{ d\ln \epsilon_H }{dN}\right)^2 \label{eqn:KG} \end{equation} and so given the slow-roll condition (\ref{eqn:slowrollviolation}) $\epsilon_V \approx \epsilon_H$. To map the potential and its derivative onto the power spectrum in $k$ space, the slow-roll approximation uses \be \epsilon_H = \frac{1}{2} \left( \frac{d\phi}{dN} \right)^2 \approx \epsilon_V \end{equation} to solve for $N(\phi)$ on the slow-roll attractor \be \f{d\phi}{dN} = -\f{V'}{V}. \label{eqn:phiNSR} \end{equation} In order to distinguish this solution from the exact relation, we call it $N_{\rm SR}(\phi)$. Since $V \approx 3H^2$, the scalar power spectrum is given by the slow-roll potential form (SR-V) \be \Delta_\zeta^2 = \f{V}{24\pi^2 \epsilon_V}\Big|_{k\eta(\phi)=1}, \end{equation} where $\eta$ is the conformal time to the end of inflation and $\eta \approx k_0^{-1} e^{-N_{\rm SR}(\phi)}$ under SR-V. An inflection model where $\epsilon_V\rightarrow 0$ would then predict a large enhancement of the power spectrum under SR-V but violations of (\ref{eqn:slowrollviolation}) can prevent this from occurring in practice.
\begin{figure}[t] \centering \includegraphics[width=0.92\columnwidth]{im-tune.pdf} \caption{ Evolution of inflaton and Hubble and potential slow-roll parameters for the inflection model~\eqref{im-pot} with various values of $\Delta N_{\rm SR}$ with the other parameters $( a = 3/2, \beta = 4 \times 10^{-5})$. While tuning $\Delta N_{\rm SR}$ provides a small window where $\epsilon_H$ is suppressed without the inflaton being stuck near the inflection for too many efolds, all cases fall short of the required $10^7$ reduction from the near scale-invariant limit at early times. } \label{fig:im-tune} \end{figure} The inflection potential explored in \cite{Garcia-Bellido:2017mdw} is given by \be \label{im-pot} V(\phi) = \f{\lambda v^4}{12} \f{x^2 (6 - 4 a x + 3 x^2)}{(1 + b x^2)^2} , \end{equation} where $x=\phi/v$. They define \begin{align} \beta &\equiv b - \kk{ 1 - \f{a^2}{3} + \f{a^2}{3} \mk{\f{9}{2a^2} - 1}^{2/3} } , \notag\\ \Delta N_{\rm SR} &\equiv \f{3\pi}{2} \f{v^2}{a^{7/4}} \f{(3/\sqrt{2}-a)^{1/2}}{\sqrt{\beta}} , \label{eqn:inflectionparams} \end{align} where $\beta=0$ means the existence of an inflection point $V''=0$ at real $x$ and $\Delta N_{\rm SR}$ is associated with the efolds required to cross the inflection under the slow-roll approximation (\ref{eqn:phiNSR}). The parameter $\lambda$ is determined by the CMB normalization (\ref{CMBnorm}). Given $\lambda$, the phenomenological parameters $(a,\beta,\Delta N_{\rm SR})$ fully define the potential. Note that a similar inflection potential is also considered in the context of critical Higgs inflation~\cite{Ezquiaga:2017fvi}. Following \cite{Garcia-Bellido:2017mdw}, we choose $(a, \beta, \Delta N_{\rm SR} ) = (3/2, 4 \times 10^{-5}, 35)$. As shown in Fig.~\ref{fig:im}, the potential has a plateau around the near-inflection point $\phi = v$. 
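The flatness of this plateau can be checked directly from \eqref{im-pot} and \eqref{eqn:inflectionparams}; a numerical sketch, with $b$ and $v$ fixed by the parameter relations, finite differences standing in for $V'$, and illustrative function names:

```python
import math

def make_inflection(a=1.5, beta=4e-5, dN_sr=35.0):
    """Inflection potential (im-pot) in units of lambda*v^4/12 (the prefactor
    drops out of eps_V); b and v follow from Eq. (eqn:inflectionparams)."""
    b = beta + (1.0 - a**2 / 3.0
                + (a**2 / 3.0) * (9.0 / (2.0 * a**2) - 1.0) ** (2.0 / 3.0))
    v2 = (dN_sr * 2.0 * a**1.75 * math.sqrt(beta)
          / (3.0 * math.pi * math.sqrt(3.0 / math.sqrt(2.0) - a)))
    V = lambda x: x**2 * (6.0 - 4.0 * a * x + 3.0 * x**2) / (1.0 + b * x**2) ** 2
    return V, math.sqrt(v2)

def eps_V(V, v, x, h=1e-6):
    """eps_V = (V'_phi / V)^2 / 2 with phi = v*x, so d/dphi = (1/v) d/dx."""
    dVdx = (V(x + h) - V(x - h)) / (2.0 * h)
    return 0.5 * (dVdx / (v * V(x))) ** 2

# Near x = phi/v = 1 the potential is extremely flat (eps_V ~ 1e-8),
# while away from the plateau eps_V is O(1).
```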
Under the slow-roll approximation the inflaton should slow down and take $\Delta N_{\rm SR} =35$ efolds to cross this point, which would significantly amplify the power spectrum. The SR-V quantities are shown in Fig.~\ref{fig:im} by red dashed curves. Here we have assumed that the CMB normalization scale $k_0=0.05\,{\rm Mpc}^{-1}$ exits the horizon at $\phi=\phi_0\approx 3.71$ when $N=0$. However, in reality the inflaton arrives at the near-inflection point with excess kinetic energy relative to the inflationary attractor (see Fig.~\ref{fig:im}). The maximum violation of slow roll $d\ln \epsilon_H/dN\approx -6$ occurs when $\epsilon_V \rightarrow 0$ in \eqref{eqn:KG} and the field consequently cannot slow down sufficiently quickly to stay on the attractor. Instead the field rolls past this point in about an efold $\Delta N\approx 1$ and ends inflation soon thereafter. Consequently $\epsilon_H \gg \epsilon_V$ at the near-inflection point and the inflaton never slows down below $\epsilon_H \sim 0.1$. Figure~\ref{fig:im} (bottom) compares the power spectra from the SR-V approximation and the exact numerical solution of the Mukhanov-Sasaki (MS) equation of motion for the curvature fluctuation. Whereas the SR-V approximation shows a large enhancement of the power spectrum, the true solution does not. In this case since even the SR approximation of (\ref{SRpower}) correctly predicts the lack of a power spectrum enhancement and no PBHs, we do not consider the other approximations of Appendix~\ref{sec:D2fs} further.
While the parameter set $(a, \beta, \Delta N_{\rm SR} ) = (3/2, 4 \times 10^{-5}, 35)$ does not sustain a long enough period for the inflaton to be trapped at the near-inflection point and reduce $\epsilon_H$, as shown in Fig.~\ref{fig:im-tune}, increasing the parameter $\Delta N_{\rm SR}$ in (\ref{eqn:inflectionparams}) to $\Delta N_{\rm SR} = 100, 123, 126$ allows the inflaton to traverse the near inflection in $\Delta N \sim 2, 15, 30$, specifically the efolds between entering into the inflection and the end of inflation. This does lead to $\epsilon_H$ reductions of $2.4, 1.1 \times 10^4, 3.2 \times 10^5$ compared to CMB scales at $N=0$. There is therefore a small and fine-tuned window in which the power spectrum is substantially enhanced without the inflaton getting stuck so that there are too many efolds between when CMB scales left the horizon and the end of inflation. Moreover, the net reduction in $\epsilon_H$ between the nearly scale-invariant region before the inflection to the minimum rapidly saturates at a level that falls short of the required $10^7$ enhancement for PBHs to be the dark matter\footnote{ Reference~\cite{Garcia-Bellido:2017mdw} aimed to produce $\sim 10^4$ amplification with the inflection model compared to the $\sim 10^7$ amplification taken here as required for PBHs, which originates from their threshold value $\zeta_c=0.086$.}. Although further increasing $\Delta N_{\rm SR}$ can enhance the change from $N=0$ to the minimum by shifting the curves in Fig.~\ref{fig:im-tune} to the left, it does so by placing the CMB scales near the maximum of $\epsilon_H$ where the power spectrum has an unacceptably large red tilt. We conclude that at least for these values of $a$ and $\beta$, it is impossible to make PBHs be all of the dark matter in the inflection model. \begin{figure}[t] \centering \includegraphics[width=0.92\columnwidth]{rm.pdf} \caption{Running mass model (\ref{rm-pot}) with the parameter set \eqref{rm-param}. 
Potential (top), evolution of slow-roll parameters (middle), and the curvature power spectrum with the relative error for various approximations (bottom). SR-V remains a poor approximation to the power spectrum enhancement whereas OSR2 provides accurate results. } \label{fig:rm} \end{figure} \section{Running mass model \label{sec:rm}} The logarithmically running mass model provides a more phenomenological approach for designing a potential that produces PBHs. Let us assume the potential \be V(\phi) = V_0 + \f{1}{2}m^2(\ln\phi)\phi^2 , \end{equation} has a local extremum at $\phi=\phi_*$, namely, $V'(\phi_*)=0$. Using a Taylor expansion of $m^2(\ln\phi)$ around $\ln \phi=\ln \phi_*$, we can express the potential as \be \label{rm-pot} V(\phi) = V_0 \kk{ 1 + \f{\tilde c}{8} \phi^2 \mk{ - 1 + 2 L + \tilde g L^2 } }, \end{equation} where $L\equiv \ln(\phi/\phi_*)$. Following \cite{Drees:2011hb}, we set \be \label{rm-param} \tilde c = -0.1711, \quad \tilde g = 0.09648, \quad L_0 = -0.756, \end{equation} where $L_0=\ln(\phi_0/\phi_*)$ and the pivot scale $k_0$ exits the horizon at $\phi=\phi_0\approx 0.47 \phi_*$. This choice provides a tilt on CMB scales of $n_s(k_0) = 0.964$ with an allowed running $\alpha_s(k_0) = 0.012$ and a large running of the running. This provides a sufficiently scale-invariant spectrum at CMB scales while allowing a large enhancement of the power spectrum at much higher $k$ for PBH formation. Since this model is based on a Taylor expansion around $\phi_*$, it only applies in the vicinity of $\phi_*$ and does not describe how inflation ends. Indeed since $\epsilon_H$ is continuously decreasing, if this truncation were exact, the power spectrum would be continuously amplified on small scales and overproduce PBHs. Hence, we should regard this model as a phenomenological description of inflation only between CMB scales and the onset of PBH formation where $\Delta_\zeta^2 = {\cal O}(10^{-2})$.
The potential, slow-roll parameters, and power spectrum are presented in Fig.~\ref{fig:rm}. As before, we normalize efolds as $\phi(N=0)=\phi_0$. In this model $d\ln \epsilon_H/dN \approx -1.2$ at its extremum, indicating an ${\cal O}(1)$ violation of (\ref{eqn:slowrollviolation}). Consequently, even though at the same field position $\epsilon_V(\phi) \approx \epsilon_H(N(\phi))$, the prolonged violation causes the slow-roll approximation (\ref{eqn:phiNSR}) to misestimate the efolds corresponding to a given field position, $N_{\rm SR}(\phi) \ne N(\phi)$. In the middle panel of Fig.~\ref{fig:rm}, we see that this leads to a strong deviation of $\epsilon_V(N_{\rm SR})$ from $\epsilon_H(N)$. The bottom panel of Fig.~\ref{fig:rm} depicts the numerical solution for the curvature power spectrum and the relative error for the various formulae given in Appendix~\ref{sec:D2fs}. The curvature power spectrum is enhanced to a value sufficient to form PBHs, $\Delta_\zeta^2 =0.02$, by $\ln(k/k_0)\approx35$. We see that SR-V underestimates the power spectrum by a factor of $8.4$ ($-88$\%) there and substantially more beyond this point. The SR approximation of (\ref{SRpower}) corrects $\phi(N)$ and $\epsilon_H$ and performs better with a maximal overestimate of a factor of 1.7 (70\%) at $N\approx 30$. Most of this improvement comes from the correction to $\phi(N)$; note that the SR-V case integrates (\ref{eqn:phiNSR}), an approach that is called ``exact'' in \cite{Drees:2011hb}. While this misestimate amounts mainly to a shift in efolds, the efolds from the CMB scale control the mass scale of the PBHs through (\ref{Nm}). On the other hand, the errors for the optimized first and second order Hubble slow roll approximations of Appendix~\ref{sec:D2fs}, OSR1 and OSR2, peak at about $40\%$ and $10\%$, respectively. Since $d\ln\epsilon_H/dN$ never greatly exceeds unity, the higher approximations improve the accuracy without adding much to the computational cost.
\begin{figure}[t] \centering \includegraphics[width=0.92\columnwidth]{tanhd10.pdf} \caption{Step model \eqref{tanh} with the parameter set $(C_1,C_2, C_3,N_s,d)=(-5.07,0.0194,8.7,40,10)$. Evolution of slow-roll parameter (top) and curvature power spectrum with the relative error for various approximations (bottom). The relatively wide width $d$ produces slow-roll violation that can be described to better than $\sim 4\%$ accuracy by the OSR2 approximation. } \label{fig:tanhd10} \end{figure} \section{Slow-roll Step Model \label{sec:eps}} In order to more systematically explore slow-roll violation in the PBH context, we can bypass the choice of a potential and directly model the critical function $\epsilon_H(N)$, as in an effective field theory of inflation approach. This approach also allows one to straightforwardly generalize these results to noncanonical inflation models in the Horndeski and Gleyzes-Langlois-Piazza-Vernizzi (GLPV) classes by replacing $\epsilon_H(N)$ with the general scalar source function~\cite{Motohashi:2017gqb}. CMB measurements place observational constraints on $\epsilon_H$ around $N=0$, the epoch when $k_0=0.05$ Mpc$^{-1}$ left the horizon during inflation. If the slow-roll approximation is satisfied on these scales $\epsilon_H$ should be well characterized locally by its Taylor expansion \be \ln \epsilon_H \approx C_1 + C_2 N , \quad (N\approx 0). \end{equation} The scalar tilt and the tensor-scalar ratio are given by \begin{align} \label{epsHparams} n_s-1&\approx -2\epsilon_H -\frac{d\ln \epsilon_H}{dN} = -2e^{C_1} -C_2 ,\notag\\ r&\approx 16\epsilon_H =16e^{C_1} . \end{align} The Planck CMB data imply \cite{Ade:2015lrj} \be n_s\approx 0.968, \quad r<0.10 . \end{equation} For definiteness, let us take the upper bound on $r$ to fix $C_1$, and then use $n_s$ to fix $C_2$: \be C_1 = -5.07,\quad C_2=0.0194. \end{equation} Next, for PBH formation we must change $\ln\epsilon_H$ by at least $\ln( 10^7 )\sim 16$ before the end of inflation.
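The quoted values of $C_1$ and $C_2$ follow from inverting \eqref{epsHparams} at the Planck numbers; a one-line check:

```python
import math

n_s, r_max = 0.968, 0.10

# Invert Eq. (epsHparams): r = 16 exp(C1) and n_s - 1 = -2 exp(C1) - C2.
C1 = math.log(r_max / 16.0)              # = -5.075..., quoted as -5.07
C2 = (1.0 - n_s) - 2.0 * math.exp(C1)    # = 0.0195,   quoted as 0.0194
```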
We therefore consider a steplike transition around $N_s$ \be \label{tanh} \ln \epsilon_H = C_1 + C_2 N - C_3 \kk{ 1 + \tanh \mk{\f{N-N_s}{d}} } . \end{equation} Similar to the running mass model, we assume that this form only parametrizes $N \lesssim N_s$ so that $\ln \epsilon_H$ undergoes another transition to end inflation near $N=60$. The step causes a change of $\Delta\ln \epsilon_H \sim -2C_3$ across $\sim d$ efolds. For definiteness, we take $C_3= 8.7$ and $N_s=40$ so that $d$ determines the amount by which (\ref{eqn:slowrollviolation}) is violated. In Fig.~\ref{fig:tanhd10}, we first consider a fairly wide step \be (C_1,C_2, C_3,N_s,d)=(-5.07,0.0194,8.7,40,10). \end{equation} Here $\ln \epsilon_H$ has its minimum at $N\approx 65.9$, and its difference from $N=0$ is $\Delta \ln \epsilon_H \approx 16.0$. The maximum amplitude of slow-roll violation is $d\ln\epsilon_H/dN\approx -0.85$. The relative errors for SR, OSR1, OSR2 are about $60\%$, $-20\%$, $-4\%$, respectively. For slow changes of $\ln \epsilon_H$, the optimized formula works extremely well. We next increase the slow-roll violation by decreasing the step width in Fig.~\ref{fig:tanhd4}, \be (C_1,C_2,C_3,N_s,d)=(-5.07,0.0194,8.7,40,4), \end{equation} for which $\ln \epsilon_H$ has its minimum at $N\approx 52.2$, and its difference from $N=0$ is $\Delta \ln \epsilon_H \approx 16.3$. The maximum amplitude of time variation is $d\ln\epsilon_H/dN\approx -2.2$. The relative errors for SR, OSR1, OSR2 are about $100\%$, $-80\%$, $-50\%$, respectively. While OSR2 still performs relatively well, the larger the violation of slow roll, the less the higher order optimizations improve the result. For the same level of slow-roll violation $C_3/d$, the approximations perform similarly if the amplitude $C_3$ is changed at fixed width $d$.
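The violation amplitudes quoted for the two widths can be read off analytically, since the slope of the tanh term peaks at $C_3/d$ at $N=N_s$; a quick numerical sketch (helper names are ours):

```python
import math

def ln_eps_H(N, C1=-5.07, C2=0.0194, C3=8.7, Ns=40.0, d=10.0):
    """Steplike history of ln eps_H, Eq. (tanh)."""
    return C1 + C2 * N - C3 * (1.0 + math.tanh((N - Ns) / d))

def peak_slope(C2=0.0194, C3=8.7, d=10.0):
    """d ln eps_H / dN at N = Ns, where the tanh derivative peaks at 1/d."""
    return C2 - C3 / d

# d = 10 gives peak_slope ~ -0.85, d = 4 gives ~ -2.2, matching the text;
# the net drop from N = 0 to the d = 10 minimum at N ~ 65.9 is ~ 16.0.
```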
\begin{figure}[t] \centering \includegraphics[width=0.92\columnwidth]{tanhd4.pdf} \caption{Step model \eqref{tanh} with the parameter set $(C_1,C_2, C_3,N_s,d)=(-5.07,0.0194,8.7,40,4)$. Evolution of slow-roll parameter (top) and curvature power spectrum with the relative error for various approximations (bottom). As the width $d$ decreases, the slow-roll violations increase, reducing the efficacy of higher order corrections. } \label{fig:tanhd4} \end{figure} \section{Conclusion \label{sec:conc}} For PBHs to be the dark matter in single-field inflation, the slow-roll approximation must be violated by at least ${\cal O}(1)$ in order to enhance the curvature power spectrum within the required number of efolds between CMB and PBH scales. As a consequence, power spectrum predictions which rely on the inflaton remaining on the slow-roll attractor can fail dramatically. Models like the inflection potential, which might seem to enhance the power spectrum under the potential slow-roll approximation, in fact provide no enhancement across most of their parameter space when calculated properly. Since the slow-roll approximation must fail by ${\cal O}(1)$ but is not required to fail by much more, approximations based on an optimized temporal evaluation of Hubble slow-roll parameters at various orders (OSR1,2) can perform much better at little extra computational cost. In particular the OSR2 approximation remains a good description of the power spectrum out to slow-roll violations of $|d\ln\epsilon_H/dN |<2$, which encompasses a wide range of PBH formation models where the $10^7$ amplification of power occurs in $\sim 10$ efolds or more. \vspace{5mm} \noindent {\bf Note added:} The analysis of the inflection model in \S\ref{sec:im} is based on the parameter set used in version 3 of \cite{Garcia-Bellido:2017mdw}.
In version 4 of \cite{Garcia-Bellido:2017mdw}, which appeared after this work was completed, a different set of parameters is used instead to produce a large amplification, for which the difference between SR-V and the exact calculation of the curvature power spectrum is reduced. As mentioned in the last two paragraphs of \S\ref{sec:im}, this type of modification requires tuning and still fails to produce the required $10^7$ amplification of power over the nearly scale-invariant portion of the curvature power spectrum. While this work was being completed, \cite{Kannike:2017bxn} appeared, where the violation of slow roll in the inflection model was mentioned in Sec.~2.3. Also, \cite{Germani:2017bcs} investigated the violation of the slow-roll approximation and the amplification of the curvature power spectrum during the ultra-slow-roll phase in the inflection models of~\cite{Garcia-Bellido:2017mdw,Ezquiaga:2017fvi}, reaching similar conclusions. \acknowledgment H.M.\ was supported in part by MINECO Grant SEV-2014-0398, PROMETEO II/2014/050, Spanish Grant FPA2014-57816-P of the MINECO, and European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreements No.~690575 and 674896. W.H.\ was supported by grants NSF PHY-0114422, NSF PHY-0551142, U.S.~Dept.\ of Energy contract DE-FG02-13ER41958, and NASA ATP NNX15AK22G.
\section{Introduction} \label{label:intro} In recent times, hardware designers are increasingly using \ac{COTS} third-party intellectual property (3PIP) for designing complex architectures \cite{rosetta}. For ease of automated design space exploration and functional verification, designers are adopting \ac{HLS} frameworks such as SystemC and synthesizable C/C++. As designs have become increasingly complex, chip designers build complex system-on-chips using 3PIPs of the required functionalities and integrate them in-house. Post integration, these system-on-chips are outsourced to off-shore fabrication facilities. This typical chip design flow has enabled 3PIP vendors to maliciously inject stealthy bugs at a higher abstraction level that can later be exploited by the attacker to cause malfunction. Thus, verifying the security aspects of such 3PIPs is critically important for the in-house integrator, apart from ensuring correct functionality. Hardware trojans in high-level synthesized designs have recently been explored in \cite{s3cbench}, where the attacker injects malicious functionality that either leaks crucial information or corrupts the final output of the \ac{DUT}. Although there exists a plethora of work on detecting hardware trojans in \ac{RTL} or gate-level designs, these detection mechanisms are biased towards certain types of gate-level trojans. Trojans inserted at a higher abstraction level are semantically meaningful and stealthy, making it difficult for the defender to precisely detect the abnormal functionality. An attacker can manifest a trojan by simply adding an \ac{RTL} statement that uses hardware blocks designed for other functionalities. This motivates performing security testing on high-level synthesized designs.
In this article, we propose a scalable trojan detection framework based on \ac{FuCE}: it combines greybox fuzzing \cite{afl} and concolic execution \cite{sen2007concolic} in a synergistic way to alleviate the downsides of the two approaches in standalone mode. We show that prior state-of-the-art trojan detection works are heavily confined to the type and functionality of trojans, and fail on subtly modified trojan behavior. Our primary contributions in this paper are two-fold: \begin{enumerate} \item a hybrid test generation approach combining the best of both worlds: greybox fuzzing and concolic testing --- our proposed framework complements both techniques, mitigating the problems associated with the standalone approaches; \item to the best of our knowledge, ours is the first work combining fuzzing with concolic testing to reach deeper segments of \ac{HLS} designs without hitting the scalability bottleneck. \end{enumerate} The rest of the paper is organized as follows: Section~\ref{sec:background} outlines the background and the prior related works in the area of trojan detection. Section~\ref{sec:limitAndChallenge} describes the current limitations and challenges of state-of-the-art techniques. In Section~\ref{sec:framework}, we propose our \ac{FuCE} framework and show the efficacy of our results in Section~\ref{sec:results}. Section~\ref{analysis} presents an empirical analysis of the results, and concluding remarks appear in Section~\ref{sec:conclusion}. \section{Background} \label{sec:background} \subsection{Overview of Hardware Trojan} \label{label:background_overview} For decades, the silicon chip was considered the root-of-trust of a complex system. The assumption was that the hardware blocks and modules are trustworthy and function exactly as specified in the design documentation.
However, at the turn of this millennium, researchers extended the attack surface to the underlying hardware and showed that it can be tampered with to gain privileged information and/or launch denial-of-service attacks. This has disrupted the root-of-trust assumption placed on hardware. The core philosophy of a hardware trojan is to insert malicious logic into the design that bypasses functional verification. The malicious logic is activated by an attacker-crafted input. Over the last decade, hardware trojan design and detection have been extensively studied in the field of hardware security. Security researchers have proposed numerous trojan threat models and novel ways of detecting them. In \cite{decadeTrojan}, the authors have shown that hardware trojans can be inserted at various levels of the chip design life-cycle. Thus, security testing of a design becomes extremely important before it is passed on to the next stage. \subsection{Threat model} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/FuceThreatModel_2.pdf} \caption{Threat model showing untrusted third-party IPs are used for developing customized complex system-on-chip modules} \label{fig:threatModel} \end{figure} We outline our threat model in \autoref{fig:threatModel}. We assume in-house designers and engineers are trusted entities who are primarily responsible for developing complex \ac{SoC} modules. A number of hardware IPs are procured from third-party vendors for developing such complex SoC designs. However, 3PIPs are untrusted (not developed in-house) and may have hidden backdoors and vulnerabilities. Therefore, it is important for the in-house designers to locate the presence of malicious functionalities in such 3PIPs. For validation purposes, we assume that the in-house designers have access to a functionally correct, cycle-accurate behavioural model of the SoC design (in the form of a C++/Simulink model).
Our threat model is in line with several earlier works such as~\cite{fuzzSystemC,symbolicSystemC}. \subsection{Security Testing} \label{label:background_secTesting} In the software community, security testing is one of the mandated steps adopted by practitioners to analyze and predict the behaviour of a system under unforeseen inputs. This has helped develop robust software that is immune to a variety of attacks like buffer overflow~\cite{bufferOverflow}, divide by zero~\cite{diveByzero}, and arithmetic overflow~\cite{arithmeticOverflow}. We next describe two well-known security testing methodologies widely used for this purpose: \subsubsection{Greybox fuzzing} Fuzz testing is a well-known technique in the software domain for detecting bugs. Greybox fuzzing~\cite{afl} involves instrumenting code segments at points of interest and generating interesting test vectors using evolutionary algorithms. Instrumentation injects markers into the design code which, post compilation and execution, track whether a test-case has reached the marker location. A fitness function is used to evaluate the quality of a test-vector. Typically, greybox fuzzing is used to improve branch-pair coverage~\cite{afl} of the design, so the code is annotated at every basic block. A test-vector is regarded as interesting if it reaches a previously unexplored basic block, or hits it a unique number of times. The fuzz engine maintains a history table for every basic block covered so far, and retains interesting test vectors for further mutation/crossover. The test generation process completes once the user-defined coverage goal is achieved. A plethora of works \cite{safl,driller,verifuzz} have improved the performance of greybox fuzzing by augmenting it with various program analysis techniques. Popular \ac{CGF} engines like \ac{AFL}~\cite{afl} have detected countless hidden vulnerabilities in well-tested software.
\subsubsection{Concolic Testing} Concolic testing~\cite{sen2007concolic} is a scalable way of performing symbolic execution on a program, with certain inputs considered concrete and the rest symbolic. Symbolic execution in general suffers from scalability issues, since the number of path constraints generated is exponential in the number of conditional statements. To avoid costly computations, concolic execution executes the program along the path dictated by the concrete input and forks execution at branch points. The path constraints generated in concolic execution have fewer clauses and variables, making them easier for solvers and allowing the engine to penetrate deep into complex program checks. \emph{Driller}~\cite{driller} and \ac{S2E}~\cite{s2e} are examples of engines adopting this approach. \subsection{Testing for Hardware Trojan detection} Testing-based trojan detection has been well studied in the recent past. In earlier works \cite{mero,banga2011odette,saha2015improved,bchowdhury2018,mers}, the authors assumed that the trojan netlist contains a \emph{rare} logic value and/or switching activity at certain nodes. Therefore, test generation was tuned to excite such nodes in a netlist. Later, researchers used concolic testing approaches to detect trojans in behavioural-level \ac{RTL} designs \cite{iccd2017_concolic,itc2018_concolic,vlsid2018_atpgModelChecking,date2019_concolic}. The objective of performing concolic testing is to penetrate deeper conditional statements of an HDL program to expose the trojan behaviour. Although these techniques seem to work well on trojan benchmarks \cite{2017_rtlBenchmarks}, employing concolic testing without an efficient search heuristic and a target of interest to cover results in multiple SAT-solver calls, taking considerable time for test-vector generation.
With the recent success of coverage-guided greybox fuzzing in the software domain, it has been adopted in \cite{rfuzz,fuzzSystemC,snpfuzzing} for detecting trojans in hardware designs. Concolic test generation for trojan detection in high-level designs has only recently been proposed in \cite{2018_concolic_sysC,symbolicSystemC}. We summarize works related to ours in~\autoref{table:priorWork}. \begin{table}[!h] \centering \caption{Works on Test-based Trojan Detection in Hardware designs} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}ccccc@{}} \toprule \textbf{Work} & \textbf{Abstraction level} & \textbf{Technique used} & \textbf{Benchmarks} & \textbf{Golden-model available?} \\ \midrule \multirow{2}{*}{Chakraborty \textit{et al.}~\cite{mero}} & \multirow{2}{*}{Gate level} & \multirow{2}{*}{Guided ATPG} & \multirow{2}{*}{ISCAS85, ISCAS89} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Banga \textit{et al.}~\cite{banga2011odette}} & \multirow{2}{*}{Gate level} & \multirow{2}{*}{Novel DFT} & \multirow{2}{*}{ISCAS89} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Saha \textit{et al.}~\cite{saha2015improved}} & \multirow{2}{*}{Gate level} & \multirow{2}{*}{Genetic algorithm + SAT formulation} & \multirow{2}{*}{ISCAS85, ISCAS89} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Chowdhury \textit{et al.}~\cite{bchowdhury2018}} & \multirow{2}{*}{Gate level} & \multirow{2}{*}{ATPG binning + SAT formulation} & \multirow{2}{*}{ISCAS85, ISCAS89, ITC99} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Huang \textit{et al.}~\cite{mers}} & \multirow{2}{*}{Gate level} & \multirow{2}{*}{Guided ATPG} & \multirow{2}{*}{ISCAS85, ISCAS89} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Liu \textit{et al.}~\cite{lyu2021maxsense}} & \multirow{2}{*}{Gate level} & \multirow{2}{*}{Genetic algorithm + SMT formulation} & \multirow{2}{*}{TrustHub~\cite{trusthub}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Ahmed \textit{et al.}~\cite{iccd2017_concolic}} & \multirow{2}{*}{Register transfer
level} & \multirow{2}{*}{Concolic testing} & \multirow{2}{*}{TrustHub~\cite{trusthub}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Ahmed \textit{et al.}~\cite{itc2018_concolic}} & \multirow{2}{*}{Register transfer level} & \multirow{2}{*}{Greedy concolic testing} & \multirow{2}{*}{TrustHub~\cite{trusthub}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Cruz \textit{et al.}~\cite{vlsid2018_atpgModelChecking}} & \multirow{2}{*}{Register transfer level} & \multirow{2}{*}{ATPG + Model checking} & \multirow{2}{*}{TrustHub~\cite{trusthub}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Liu \textit{et al.}~\cite{date2019_concolic}} & \multirow{2}{*}{Register transfer level} & \multirow{2}{*}{Parallelism + concolic testing} & \multirow{2}{*}{TrustHub~\cite{trusthub}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Pan \textit{et al.}~\cite{pan2021automated}} & \multirow{2}{*}{Register transfer level} & \multirow{2}{*}{Reinforcement learning} & \multirow{2}{*}{TrustHub~\cite{trusthub}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Le \textit{et al.}~\cite{fuzzSystemC}} & \multirow{2}{*}{HLS/SystemC} & \multirow{2}{*}{Guided greybox fuzzing} & \multirow{2}{*}{S3C~\cite{s3cbench}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{Bin \textit{et al.}~\cite{symbolicSystemC}} & \multirow{2}{*}{HLS/SystemC} & \multirow{2}{*}{Selective symbolic execution} & \multirow{2}{*}{S3C~\cite{s3cbench}} & \multirow{2}{*}{\cmark} \\ \multirow{2}{*}{\textbf{\textit{FuCE (Ours)}}} & \multirow{2}{*}{HLS/SystemC} & \multirow{2}{*}{Greybox fuzzing + Concolic Execution} & \multirow{2}{*}{S3C~\cite{s3cbench}} & \multirow{2}{*}{\cmark} \\ &&&& \\ \bottomrule \end{tabular} } \label{table:priorWork} \end{table} \subsection{Trojan detection in high-level design} With high-level synthesis becoming the new trend for designing customized hardware accelerators, only a few works~\cite{s3cbench,farimah2021} have studied hardware trojans and security vulnerabilities in \ac{HLS} designs and proposed preliminary
countermeasures to detect them. In prior work~\cite{fuzzSystemC}, a Trojan inserted in a high-level design is called a synthesizable hardware Trojan (SHT), as it manifests as a malicious backdoor in the hardware design. Therefore, the problem of Trojan detection in low-level RTL designs can be appropriately abstracted as finding SHTs in the high-level design. To date, \cite{fuzzSystemC} and \cite{symbolicSystemC} have systematically addressed the problem of Trojan detection in HLS designs. In \cite{fuzzSystemC}, the authors tuned the software fuzzer \ac{AFL}~\cite{afl} to the characteristics of the S3C Trojan benchmarks and showed that their technique outperforms vanilla \ac{AFL}. The modified \ac{AFL}, called \ac{AFL-SHT}, introduces a program-aware mutation strategy to generate meaningful test vectors. In \cite{symbolicSystemC}, the authors identify the additional overhead that concolic testing incurs from software libraries, and restrict the search space to the conditional statements of the design. They named their automated prototype, based on \ac{S2E}~\cite{s2e}, \ac{SCT-HTD}. In the next section, we study the Trojan characteristics embedded in high-level synthesized designs and evaluate the efficacy of existing detection techniques. \section{Limitations and challenges} \label{sec:limitAndChallenge} We present a motivating case-study on a real-time finite-state-machine implementation. We use a system controller design mimicking typical hardware functionality and highlight the limitations of existing techniques in discovering the trojan behavior.
\subsection{Motivating case-study} \lstinputlisting[caption=Motivating example,style=customc,linebackgroundcolor={\ifnum\value{lstnumber}=19\color{yellow}\fi\ifnum\value{lstnumber}=15\color{yellow}\fi\ifnum\value{lstnumber}=16\color{yellow}\fi\ifnum\value{lstnumber}=17\color{yellow} \fi\ifnum\value{lstnumber}=18\color{yellow} \fi},label={label:motivatingCode}]{codes/motivatingEg.c} In~\autoref{label:motivatingCode}, we show a code-snippet of a typical controller accepting state information \texttt{stateA} and \texttt{stateB}. The controller first checks whether \texttt{stateA} and \texttt{stateB} are set to the values $23978$ and $5678$, respectively (line~\ref{line:stateCheck}). Once the guard condition is satisfied, it enters a while loop and checks the values of \texttt{stateA}, \texttt{stateB} and \texttt{switchA}. On its first iteration, the loop satisfies {\it Branch 1} (line~\ref{line:cond1}) and swaps the values of \texttt{stateA} and \texttt{stateB}, setting \texttt{switchA} to TRUE. In subsequent iterations, {\it Branch 2} is always satisfied as \texttt{switchA} = TRUE (line~\ref{line:cond2}), resulting in updates to the values of \texttt{stateA} and \texttt{stateB}. It also checks whether \texttt{stateA} and \texttt{stateB} have reached the pre-defined values (line~\ref{line:cond3}). If so, \texttt{switchA} is set to FALSE. At the end, the controller accepts an input from the user for performing further action. \begin{figure}[t] \centering \includegraphics{images/state_diagram.pdf} \caption{State-transition diagram of the motivating example (Listing~\ref{label:motivatingCode})} \label{fig:state_diag} \end{figure} We insert Trojan code at line~\ref{line:trojan}, where the current value of \texttt{cycle} is compared with a very large number.
The attacker intends to skip the functionality of {\it Branch 2} while giving the user a false impression that the controller is working: it keeps accepting inputs but performs no operations. We explain this with the help of a state-transition diagram in~\autoref{fig:state_diag}. Here, we observe that as soon as \texttt{cycle} reaches the value $2^{20}-1$, \texttt{switchA} is maliciously set to FALSE and the values of \texttt{stateA} and \texttt{stateB} are swapped instead of the expected updates in {\it Branch 2}. Thus, the loop body repeatedly executes {\it Branch 1} and maliciously increments \texttt{cycle} after accepting input from the user. The Trojan closely resembles the ``ticking time-bomb'' or ``sequential Trojan'' behaviour, where the system malfunctions after running for a sufficiently large number of cycles. From a test and verification perspective, exhaustively simulating the design for a large number of cycles is a hard problem. \subsection{Evaluating AFL-SHT on this example} \label{subsec:afl-sht} Le \textit{et al.}~\cite{fuzzSystemC} envisioned hardware trojans in high-level SystemC designs as \ac{SHT}. They proposed a fuzzing-based test generation tool, \ac{AFL-SHT}, using \ac{AFL} as the backend engine. Initially, the authors evaluated the trojan detection capability of vanilla \ac{AFL} on \textit{S3C} benchmarks and identified the pitfalls of \ac{AFL} in detecting \ac{SHT}. They presented three major modifications in the mutation block of the fuzz-engine: 1) pump mutation, 2) format-aware trimming, and 3) design-aware \textit{interesting number} generation, to tune test-generation for trojan detection. We evaluated our motivating example with the authors' version of \ac{AFL-SHT} and found that it was unable to generate appropriate values of \texttt{input}, \texttt{stateA} and \texttt{stateB} to activate the trojan behaviour.
We observe that, in spite of aiding the fuzz-engine with coarse-grain information about the \textit{interesting numbers}, it was unable to explore beyond line~\ref{line:stateCheck}. This clearly shows the inability of the customized fuzz engine to generate \texttt{stateA}$ =23978$ and \texttt{stateB}$ = 5678$ and satisfy the branch constraints. Similarly, we modified the trigger conditions of trojans in the \textit{S3C} benchmarks and found that \ac{AFL-SHT} was unable to detect the trojans, although its performance was slightly better than vanilla \ac{AFL}'s. This indicates that \ac{AFL-SHT} is unable to utilize pump mutation and benchmark-derived interesting numbers effectively. We conclude our evaluation of \ac{AFL-SHT} with a simple takeaway message: fuzzing needs additional aid to explore code segments guarded by complex conditional checks. \subsection{Evaluating SCT-HTD on this example} A growing body of work on high-level synthesized designs has led to renewed interest in concolic testing for SystemC~\cite{2018_concolic_sysC,symbolicSystemC,2016_concolic}. The recent work \ac{SCT-HTD}~\cite{symbolicSystemC} proposed a scalable, selective and systematic exploration approach for concolic testing of SystemC designs to uncover stealthy trojans. The authors identified a crucial insight: concolic engines do not inherently distinguish between library code and design code, and therefore get stuck exploring irrelevant library code. The underlying assumption is that library code and pragmas are maintained by the SystemC specification and are therefore trusted elements. Thus, exploring different paths in library code does not hold much relevance, and hence the authors selectively restricted the state-space exploration to the design code. Additionally, while exploring the state-space, they prioritize states hitting uncovered conditions over states that only revisit already-explored conditions.
The authors evaluated their approach on the \textit{S3C} benchmarks and showed improvement in terms of the number of inputs generated for trojan detection. A direct advantage of using a concolic-based approach for test-generation is the systematic exploration of the state-space: it maintains a history of previously visited states and prioritizes its bandwidth towards exploring complex conditional checks. We re-implemented the concepts of \ac{SCT-HTD} in a selective concolic engine and ran it on our motivating example. We discovered that \ac{SCT-HTD} failed to trigger the Trojan condition for two particular reasons: 1) \ac{SCT-HTD} forked two states for every iteration of the while loop and soon ran out of memory on a machine with 16 GB RAM; 2) \ac{SCT-HTD} repeatedly invoked the SAT engine to generate a test-input satisfying the condition at line~\ref{line:cond1} when the condition at line~\ref{line:cond2} is True. At a high level, this shows that concolic engines require additional aid to explore the search space intelligently while leaving a smaller memory footprint. \subsection{Lessons learnt} Having evaluated the state-of-the-art techniques for trojan detection, we conclude that the challenges ahead involve two important issues: 1) detecting trojans in a faster and scalable manner, and 2) detecting extremely ``hard-to-trigger'' trojan logic. As a defender, one can never make a prior assumption about the possible location of trojans and tune the testing methodology accordingly. For a defender, confidence that a design is ``Trojan-free'' can only come from a generated test-set providing sufficient coverage of the design. Our evaluation with fuzzing and concolic testing shows that they can be combined in a synergistic way to accelerate trojan detection.
This can avoid the path explosion problem of symbolic execution by controlled forking in symbolic loops using the fuzzer-generated test cases, thereby reaching deeper code segments without generating a large number of states. \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{images/FuCE_flow_final_4.pdf} \end{center} \caption{\ac{FuCE} test generation framework. The \emph{Fuzz engine} is fed with initial test-cases. As coverage improvement ceases in the fuzz engine, the \emph{Concolic engine} starts execution with the fuzz-generated test cases. The \emph{Fuzz engine} and \emph{Concolic engine} execute sequentially to penetrate deep into hard-to-satisfy conditional checks of programs. The \emph{Trojan Detector} checks for trojans whenever \emph{FuCE} generates a new test-case.} \label{fig:flow_chrt} \end{figure} Combining these two mainstream security-testing techniques, \ac{FuCE} achieves high code coverage with faster Trojan detection, avoiding getting stuck in either fuzz testing or symbolic execution.
{\small \begin{algorithm}[!ht] \SetAlgoLined \KwData{Design Under Test $DUT$, User provided test-inputs $T_{initial}$, User defined time bound $time_{cut-off}$} \KwResult{{$T_{fuzzed}$} \Comment{Interesting test-inputs queue}} $T_{fuzzed} \gets T_{initial}$ \Comment{Initialization of AFL's test-inputs queue}\\ \While{time $ \leq time_{cut-off}$}{ \For{$\tau \in T_{fuzzed}$} { \Comment{Mutate $\tau$ to generate test-cases based on the energy parameter}\\ $K\gets$ CALCULATE\_ENERGY($\tau$)\\ \For{$i \in \{1,2,\dots,K\}$}{ $\tau'\gets$ MUTATE-SEED($\tau$) \Comment{$\tau'$ denotes the mutated test case} \\ \If{IS-INTERESTING($DUT, \tau'$)} { $T_{fuzzed}\gets$ $T_{fuzzed}\cup \tau'$ \Comment{$\tau'$ is interesting if it improves branch coverage}\\} } } } \Return $T_{fuzzed}$ \caption{FUZZER($DUT$, $T_{initial}$)} \label{algo:AFL_FUZZ} \end{algorithm} } \section{FuCE framework} \label{sec:framework} We present the workflow of \ac{FuCE} in~\autoref{fig:flow_chrt}. The \ac{FuCE} framework consists of three components: 1) greybox fuzzing, 2) concolic execution, and 3) a trojan detector. \subsection{Greybox Fuzzing by AFL} High-level synthesized designs, predominantly written in SystemC/C++, are first passed through the \texttt{LLVM}~\cite{LLVM} backend to generate the \ac{IR}. The \texttt{LLVM}-generated \acp{IR} are fed to \texttt{afl-clang-fast}, which is based on \texttt{clang}~\cite{clang}, a front-end compiler for programming languages like C, C++ and SystemC, among others. \texttt{afl-clang-fast} performs code instrumentation by automatically injecting, at every conditional statement, code that tracks control flow at run time, and generates an instrumented executable. The core insight is: Trojan logic must be embedded under one of the conditional statements, so covering all conditional statements during verification and testing provides sufficient confidence of triggering the Trojan logic.
The instrumented executable is then fed to our greybox fuzz-engine \ac{AFL} along with an initial test-set ($T_{initial}$) for fuzz testing. In~\autoref{algo:AFL_FUZZ}, we outline the overall flow of greybox fuzzing. First, we provide the high-level \ac{DUT} and a user-provided test-set $T_{initial}$ to the fuzzing framework. The \textit{CALCULATE\_ENERGY} function assigns energy to the initial seed $T_{initial}$ based on external features of the test case, such as its execution time, bitmap coverage, and depth in the fuzzing hierarchy. A test case that is fast, covers more branches, and sits deeper is given more energy. \ac{AFL} then decides the number of random fuzzing iterations for that test case. \ac{AFL} uses $T_{initial}$ to perform operations like \textit{deterministic mutations} and \textit{havoc mutations} to generate new test-cases with the help of the \textit{MUTATE-SEED} function. The deterministic mutation stage scans each byte of the test-case and mutates it to generate new test-cases. This includes bit flipping, byte flipping, arithmetic increments and decrements, and magic-value substitution. The number of child test-cases generated depends loosely on the size of the original test-case. Havoc mutation, in contrast, performs aggressive mutations such as overwriting bits/bytes at random locations with random values, and deleting or cloning subsequences, to generate new test-cases. \ac{AFL} uses the branch-pair as a fitness metric to determine the quality of a test-input. For each branch-pair, \ac{AFL} maintains a hash-table entry recording the number of times it is hit. The \textit{IS-INTERESTING} function checks whether the mutated test case is interesting. \ac{AFL} considers a test-input \textit{interesting} if it covers a branch-pair not hit so far, or hits a branch-pair a unique number of times compared to past observations. Interesting test-inputs are retained to form the next candidates for fuzzing.
The algorithm terminates either when no more interesting test cases can be found or when the user-defined $time_{cut-off}$ expires. \ac{AFL} maintains all interesting test-inputs in the queue $T_{fuzzed}$. \subsection{Concolic Execution by S2E} \label{label:s2e_engine} In our work, we use \ac{S2E} as the concolic execution engine for test-generation. \ac{S2E} has two main components: 1) a concolic virtual machine based on QEMU~\cite{qemu}, and 2) a symbolic execution engine based on KLEE, to switch back and forth between concrete execution and symbolic execution. We provide it with the high-level design \ac{DUT} and a set of test-cases $T_{initial}$. The \textit{CONC-EXEC} function executes the \ac{DUT} with all test-cases in $T_{initial}$, generating concrete execution traces. \ac{S2E} maintains an execution tree $DUT_{execTree}$ and identifies all the true and/or false edges of conditional nodes which are not covered by $T_{initial}$. \ac{S2E} then assigns symbolic values to those predicates. The \textit{COND-PREDICATE} function constructs the path constraints for the uncovered edge of a condition, forks a new thread, and invokes the SAT solver (\textit{CONSTRAINT-SOLVER}) to generate the test-case. \ac{S2E} selects paths for exploration in depth-first search order and, based on its coverage analyzer, heuristically selects the path that maximizes coverage. The final test-cases reported by \ac{S2E} should therefore ideally cover all conditions of $DUT_{execTree}$. We outline this approach in~\autoref{algo:concolic_execution}.
{\small \begin{algorithm}[t] \SetAlgoLined \KwData{Design Under Test $DUT$, User provided test-inputs $T_{initial}$} \KwResult{{$T_{concolic}$} \Comment{Set of test-cases generated by the concolic engine}} $DUT_{execTree} \gets \phi$ \Comment{Execution tree for DUT}\\ \For{$\tau \in T_{initial}$} { \Comment{Update DUT's execution tree with path traces obtained from concrete execution of initial test inputs} $P_{trace} \gets $ CONC-EXEC($DUT$,$\tau$)\\ $DUT_{execTree} \gets DUT_{execTree} \cup P_{trace}$ } \For{uncovered cond $c \in DUT_{execTree}$} { \Comment{Perform symbolic execution steps targeting uncovered conditional statements}\\ $p_{c} \gets$ COND-PREDICATE($c$) \\ $t_{new}$ $\gets$ CONSTRAINT-SOLVER($p_c$) \Comment{$t_{new}$ is newly generated test-case by the concolic engine}\\ $T_{concolic}$ $\gets$ $T_{concolic}$ $\cup$ $t_{new}$ } \Return $T_{concolic}$ \caption{CONCOL-EXEC($DUT$,$T_{initial}$)} \label{algo:concolic_execution} \end{algorithm} } \begin{algorithm}[!ht] \SetAlgoLined \KwData{Design Under Test ($DUT$), Initial test-inputs ($T_{initial}$), Time limit ($time_{cut-off}$), Threshold time limit for coverage improvement by Fuzzer ($time_{threshold}$), Time budget allocated for concolic execution ($time_{budget}$)} \KwResult{{$T_{FuCE}$} \Comment{Final test-cases generated by \ac{FuCE}} } \Comment{\textit{InvokeConcolic} performs concolic execution with fuzzing generated test-inputs}\\ \SetKwFunction{FInvokeConcolic}{InvokeConcolic} \SetKwProg{Fn}{Function}{:}{} \Fn{\FInvokeConcolic{$DUT_{execTree}$,$i$}}{ $time_{concolic} \gets 0$ \Comment{Monitor concolic execution runtime} \\ \For{uncovered cond $c \in DUT_{execTree}$} { $p_{c} \gets$ COND-PREDICATE($c$) \\ $t_{new}$ $\gets$ CONSTRAINT-SOLVER($p_c$) \Comment{$t_{new}$ is a newly generated test case}\\ $T_{FuCE}$ $\gets$ $T_{FuCE}$ $\cup$ $t_{new}$\\ Invoke Trojan detector with $t_{new}$ (\autoref{algo:htdetector}) \\ \If{$time_{concolic} > time_{budget}$}{ \textbf{break} } } } $i \gets 1$ 
\Comment{Phase ID of sequential execution}\\ $T_{FuCE} \gets T_{initial}$ \Comment{\ac{FuCE} gets initial test-cases $T_{initial}$}\\ $time_{coverage} \gets 0$ \Comment{$time_{coverage}$ monitors time elapsed since last test-case retained}\\ \Comment{$time$ denotes wall time of \ac{FuCE} }\\ \While{time $\leq time_{cut-off}$}{ \For{$\tau \in T_{FuCE}$} { $K\gets$ CALCULATE\_ENERGY($\tau$) \Comment{Mutate $\tau$ to generate test-cases based on the energy parameter}\\ \For{$j \in \{1,2,\dots,K\}$}{ $\tau'\gets$ MUTATE-SEED($\tau$) \Comment{$\tau'$ denotes the mutated test case}\\ \If{IS-INTERESTING($DUT, \tau'$)} { $T_{FuCE}\gets$ $T_{FuCE}\cup \tau'$\\ Invoke Trojan detector with $\tau'$ (\autoref{algo:htdetector})\\ Reset $time_{coverage}$\\ } \If{$time_{coverage} > time_{threshold}$} { InvokeConcolic($DUT_{execTree}$, $i$) \Comment{Invokes the concolic engine when the fuzzer gets stuck} ($DUT_{execTree}$ is the program execution tree of $DUT$) \\ \If{time $> time_{cut-off}$} { \Return $T_{FuCE}$ \Comment{User defined total time budget for FuCE is exhausted} } \Else{ $i = i+1$\\ \textbf{break} } } } } } $coverage_{FuCE} = \texttt{reportCoverage}(T_{FuCE}$) \Comment{Reports code coverage} \\ \Return ($T_{FuCE}, coverage_{FuCE}$) \caption{FuCE ($DUT$, $T_{initial}$, $time_{cut-off}$, $time_{threshold}$, $time_{budget}$)} \vspace{0.3cm} \label{algo:FuCE_1} \end{algorithm} \subsection{Fusing fuzzer with concolic execution (FuCE)} We leverage the power of concolic execution to alleviate the drawbacks of greybox fuzzing without hitting a scalability bottleneck. As shown in ~\autoref{fig:flow_chrt}, we first perform lightweight instrumentation on all conditional statements of \ac{DUT} and generate an instrumented executable. We start our fuzz-engine (\textit{FUZZER}) with a set of initial test-cases $T_{initial}$. The fuzz-engine generates interesting test-cases using genetic algorithm and explores various paths in the design. 
When there is no coverage improvement for a user-defined time period $time_{threshold}$, the concolic engine (\textit{CONCOL-EXEC}) is invoked to explore unseen paths. The concrete-execution function of the concolic engine generates $DUT_{execTree}$ for the DUT by replaying the fuzzer-generated test cases. \textit{CONCOL-EXEC} identifies uncovered conditions in $DUT_{execTree}$ and forks new threads for symbolic execution on such conditions using a depth-first search strategy. The concolic engine uses the fuzzed test-cases to generate new test-cases satisfying complex conditional statements. To avoid a scalability bottleneck, we limit the runtime of the concolic engine to $time_{budget}$. The test-cases generated by concolic execution are then fed back to the fuzzer, thereby allowing scalable exploration of deeper program segments. This process continues until a trojan is detected, which is the main objective of \ac{FuCE}. Whenever a new test case is generated by either the fuzz engine or the concolic engine, it is added to $T_{FuCE}$, the test-case queue of \ac{FuCE}, and the trojan detector is invoked with the latest test case added to $T_{FuCE}$. If a trojan is detected, \ac{FuCE} reports the detection and stops. We formally present our test-generation approach in \autoref{algo:FuCE_1}. There may be designs where trojans get detected before 100\% branch coverage is obtained. For the related goals of either generating test-cases achieving 100\% branch coverage or detecting trojans in the absence of a golden model, we can run \ac{FuCE} for a pre-defined $time_{cut-off}$ using a similar flow of switching between the fuzzer and the concolic engine as in~\autoref{fig:flow_chrt}. Instead of checking for trojan detection, we check for complete branch coverage of the design and stop \ac{FuCE} on attaining 100\% branch coverage or on time-out.
\begin{algorithm}[h] \SetAlgoLined \KwData{Design Under Test $DUT$, golden-model output $Out_{golden}$, new test-case $t$ generated by \ac{FuCE}} $out_{ref} \gets Out_{golden}(t)$ \\ $out_{DUT} \gets SIMULATOR(DUT,t)$ \\ \If{$out_{DUT} \neq out_{ref}$ }{ Trojan detected\\ \texttt{break} } $coverage_{FuCE} = \texttt{reportCoverage}(T_{FuCE}$) \Comment{Reports code coverage} \\ \Return ($T_{FuCE}, coverage_{FuCE}$) \caption{HT-DETECTOR ($DUT$, $Out_{golden}, t$)} \label{algo:htdetector} \end{algorithm} \subsection{Hardware Trojan Detection} \label{label:trojan_detector} Using the \ac{FuCE} test-generation framework, we localize trojans in a given $DUT$ (\autoref{algo:htdetector}). For every test-case \ac{FuCE} generates, either by \emph{FUZZER} or \emph{CONCOL-EXEC}, we run our design, collect the output response, and compare it with the reference response. The \textit{SIMULATOR} invokes the SystemC library, which provides predefined structures and a simulation kernel for simulating SystemC designs. The assumption is that any deviation of the $DUT$'s output (e.g., bit corruption at certain locations or leakage of internal state information) from the expected response is considered suspicious and attributed to possible trojan behaviour. \ac{FuCE} terminates as soon as a trojan behaviour is detected; otherwise it stops at the user-defined time-out $time_{cut-off}$ and reports coverage metrics for the test-cases generated. \subsection{Evaluating FuCE on our motivating example} We evaluated \ac{FuCE} on our motivating example to check the efficacy of our proposed framework. Our fuzzer quickly generated a considerable number of test-cases for \texttt{stateA} and \texttt{stateB} but was unable to explore beyond line~\ref{line:stateCheck}. The test-cases generated by the fuzzer were passed on to the concolic engine, which generated values of \texttt{stateA} and \texttt{stateB} satisfying the condition at line~\ref{line:stateCheck}.
These newly generated test-cases were fed back to the fuzzer, leading to faster exploration of the entire loop body. We observed that the fuzzer preserved the test-cases that covered the loop body a unique number of times. Finally, \ac{FuCE} reached the trojan location within 560s$\pm 15\%$, whereas neither \ac{AFL-SHT} nor \ac{SCT-HTD} could detect it within a two-hour run. \section{Experimental setup and design} \label{sec:results} \subsection{Experimental setup} We implement \ac{FuCE} using state-of-the-art software testing tools: \ac{AFL} (v2.52b)~\cite{afl} for greybox fuzzing and \ac{S2E}~\cite{s2e} for concolic execution. For robust coverage measurements, we cross-validated our results using a combination of coverage-measuring tools, namely \emph{afl-cov-0.6.1}~\cite{afl-cov}, \emph{lcov-1.13}~\cite{Lcov}, and \emph{gcov-7.5.0}~\cite{Gcov}. Experiments were performed on a 64-bit Linux machine with an i5 processor and 16 GB RAM, clocked at 3.20 GHz. \subsection{Benchmark characteristics} We evaluated \ac{FuCE} on the SystemC benchmark suite $S3CBench$~\cite{s3cbench_benchmark}, which contains trojan-infected SystemC designs. The trojans serve a wide spectrum of purposes: denial-of-service, information leakage, and corrupted functionality. The types of trojan, categorized by their trigger mechanism, are listed in~\autoref{tab:trojan_type}. For the trojan type where the payload has memory, the trojan remains active for a prolonged period of time even when the trigger condition is no longer active. We define the severity level based on this characteristic. $S3CBench$ is synthesizable to RTL using any commercial High-Level Synthesis (HLS) tool. \autoref{tab:bench_char} shows the characterization of the \textit{S3C} benchmarks, both for the original circuits and for the trojan-infected circuits.
The benchmarks considered have diverse characteristics: 1) image and signal processing: ADPCM, 2) cryptography: AES, 3) data manipulation: Bubble sort, 4) filters: Decimation, and 5) IP protocols: UART. The trojans in S3CBench are hard to detect with random seeds~\cite{s3cbench}. The benchmark suite provides certain test-cases that achieve high statement coverage but do not trigger the trojan behaviour. Two benchmarks in the S3C suite, namely \textit{sobel} and \textit{disparity}, which accept an image as file input, were not considered because the concolic engine was unable to generate test-cases for these input formats. However, we present our results on the largest benchmark, \emph{AES-cwom}, and on the most complex benchmark of the $S3C$ suite, i.e., \emph{interpolation-cwom}, which demonstrates the efficacy of our approach. \begin{table}[t] \centering \caption{Trojan types --- Combinational: Comb., Sequential: Seq.} \begin{tabular}{cccc} \toprule Trojan & Trigger & Payload & Severity\\ \midrule \rt{CWOM} & \rt{Comb.} & \rt{Comb.} & \rt{Low} \\ \bt{CWM} & \bt{Comb.} & \bt{Seq.} & \bt{High} \\ \mat{SWOM} & \mat{Seq.} & \mat{Comb.} & \mat{Low} \\ \ct{SWM} & \ct{Seq.} & \ct{Seq.} & \ct{High} \\ \bottomrule \end{tabular} \label{tab:trojan_type} \end{table} \subsection{Design of experiments} We perform two variants of experiments to evaluate the efficacy of \emph{FuCE}: 1) trojan detection and 2) achievable branch coverage. We compare the results with standard baseline techniques, namely a fuzz-testing-based approach (AFL) and symbolic execution (S2E). We now describe the experimental setup of each baseline: \textbf{Baseline 1~(Fuzz testing)}: We run \ac{AFL} on the S3C benchmarks using its default algorithmic settings. Initial seed inputs are randomly generated. \textbf{Baseline 2~(Symbolic execution)}: As in baseline 1, we run \ac{S2E} on the S3C benchmarks with its default configuration. Randomly generated seed inputs are provided to \ac{S2E}.
\begin{table}[t] \centering \caption{HLS synthesized hardware characterization of S3C benchmarks. Trojan types: \rt{CWOM}, \ct{SWM} and \mat{SWOM}} \begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{Benchmark} & \multirow{2}{*}{Type} & \multicolumn{3}{c}{SystemC characterization} & \multicolumn{4}{c}{HLS synthesized hardware} \\ \cmidrule(lr){3-5} \cmidrule(lr){6-9} & & {Branches} & {Lines} & {Functions} & {LUTs} & {Registers} & {Nets} & {Critical path(ns)} \\ \midrule \multirow{3}{*}{ADPCM} & orig & 26 & 186 & 6 & 121 & 87 & 346 & 3.94 \\ & \ct{SWM} & \ct{28} & \ct{186} & \ct{6} & \ct{120} & \ct{118} & \ct{394} & \ct{3.801} \\ & \mat{SWOM} & \mat{30} & \mat{187} & \mat{6} & \mat{163} & \mat{240} & \mat{588} & \mat{3.019} \\ \cmidrule(lr){1-9} \multirow{2}{*}{AES} & orig & 50 & 371 & 13 & 2782 & 4684 & 8809 & 7.589 \\ & \rt{CWOM} & \rt{68} & \rt{380} & \rt{13} & \rt{2886} & \rt{4772} & \rt{9039} & \rt{7.668} \\ \cmidrule(lr){1-9} \multirow{3}{*}{Bubble\_sort} & orig & 20 & 78 & 3 & 472 & 551 & 1219 & 8.944 \\ & \rt{CWOM} & \rt{22} & \rt{78} & \rt{3} & \rt{494} & \rt{551} & \rt{1219} & \rt{7.527} \\ & \ct{SWM} & \ct{22} & \ct{78} & \ct{3} & \ct{546} & \ct{584} & \ct{1400} & \ct{7.87} \\ \cmidrule(lr){1-9} \multirow{2}{*}{Filter\_FIR} & orig & 14 & 75 & 2 & 68 & 36 & 146 & 5.729 \\ & \rt{CWOM} & \rt{16} & \rt{75} & \rt{4} & \rt{89} & \rt{59} & \rt{213} & \rt{7.46} \\ \cmidrule(lr){1-9} Interpolation & orig & 10 & 108 & 3 & 984 & 654 & 212 & 7.45 \\ & \rt{CWOM} & \rt{30} & \rt{108} & \rt{3} & \rt{1071} & \rt{595} & \rt{212} & \rt{8.331} \\ & \ct{SWM} & \ct{30} & \ct{108} & \ct{3} & \ct{612} & \ct{570} & \ct{212} & \ct{8.321} \\ & \mat{SWOM} & \mat{30} & \mat{109} & \mat{3} & \mat{612} & \mat{569} & \mat{212} & \mat{8.321} \\ \cmidrule(lr){1-9} Decimation & orig & 88 & 304 & 3 & 3018 & 1696 & 634 & 8.702 \\ & \ct{SWM} & \ct{94} & \ct{304} & \ct{3} & \ct{3108} & \ct{1741} & \ct{634} & \ct{8.702} \\ \cmidrule(lr){1-9} Kasumi & orig & 36 & 288 & 12 & 1378 & 956 & 188 & 
8.016 \\ & \ct{SWM} & \ct{38} & \ct{288} & \ct{12} & \ct{1385} & \ct{958} & \ct{272} & \ct{8.016} \\ & \rt{CWOM} & \rt{38} & \rt{288} & \rt{12} & \rt{1431} & \rt{987} & \rt{273} & \rt{9.266} \\ \cmidrule(lr){1-9} UART & orig & 28 & 160 & 3 & 510 & 142 & 1336 & 3.137 \\ & \ct{SWM-1} & \ct{48} & \ct{164} & \ct{3} & \ct{549} & \ct{196} & \ct{1336} & \ct{2.766} \\ & \ct{SWM-2} & \ct{50} & \ct{164} & \ct{3} & \ct{566} & \ct{190} & \ct{1336} & \ct{4.367} \\ \bottomrule \end{tabular} \label{tab:bench_char} \end{table} \textbf{FuCE}: We run \ac{FuCE} on the S3C benchmarks using randomly generated input test-cases. We set $time_{threshold}=5$s and $time_{budget}=1800$s for our experiments, as per \ac{FuCE} (\autoref{algo:FuCE_1}). These are user-defined, configurable parameters. We evaluate \ac{FuCE} against our baseline test-generation techniques along two dimensions: 1) trojan detection capability and 2) branch coverage achievable within a pre-defined time limit. The first objective assumes the availability of input-output response pairs from a golden model to check whether trojans functionally corrupted the design. The second objective studies the efficacy of the test-generation framework in achieving complete branch coverage on the design. Test-cases covering all conditional statements in the design enhance a defender's confidence of capturing any anomalous behaviour in the absence of a golden model. For both experimental settings, we present case studies demonstrating the effectiveness of \ac{FuCE} on the S3C benchmarks. For an apples-to-apples comparison with prior state-of-the-art approaches~\cite{symbolicSystemC,fuzzSystemC}, we compare against the trojan-detection results reported in their published works. \section{Empirical analysis} \label{analysis} \subsection{Trojan detection} We first analyze the trojan detection capability of \ac{FuCE} on the S3C benchmarks.
Since \ac{FuCE} invokes the fuzzing and concolic engines alternately, we term each fuzzing phase $fuzz_{n}$ and each concolic execution phase $conc_{n}$, where $n$ denotes the phase ID in the \ac{FuCE} execution. For example, a trojan detected in phase $fuzz_{3}$ implies that the framework has gone through the test-generation phases $fuzz_{1}$-$conc_{1}$-$fuzz_{2}$-$conc_{2}$-$fuzz_{3}$ before detecting the trojan. We evaluated the trojan detection capability of \ac{FuCE} on the S3C benchmarks, the state-of-the-art suite of trojan-infected high-level designs. We report the results in \autoref{tab:trojan_detection} and \autoref{tab:trojanDetectionFuCE}. \autoref{tab:trojan_detection} shows the test-cases generated and the runtime of each execution phase of \ac{FuCE}, and compares the branch coverage obtained by \ac{FuCE} with that of \ac{AFL} and \ac{S2E}. In~\autoref{tab:trojanDetectionFuCE}, we report the total time taken, the number of test-cases generated for trojan detection, and the memory usage of \ac{FuCE}, and compare these with our baseline techniques (\ac{AFL} and \ac{S2E}) as well as with state-of-the-art test-based trojan detection approaches~\cite{fuzzSystemC,symbolicSystemC}. We set a timeout of two hours for trojan detection with each technique. \begin{table}[t] \centering \small \caption{Trojan Detection by {\it FuCE}. Trojan types: \rt{CWOM}, \ct{SWM} and \mat{SWOM}. Detection: Yes (\cmark), No (\xmark)} \begin{tabular}{cccccccc} \toprule \multirow{2}{*}{\textbf{Benchmarks}} & \multicolumn{2}{c}{\textbf{Testcases}} & \multicolumn{2}{c}{\textbf{Time(in s)}} & \multicolumn{3}{c}{\textbf{Branch cov.
(\%)}}\\ \cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-8} & {$fuzz_1$} & {$conc_1$} & {$fuzz_1$} & {$conc_1$} & AFL & S2E & FuCE\\ \midrule \multirow{2}{*}{ADPCM} & \ct{3} & - & \ct{38} & - & \ct{88.1(\cmark)} & \ct{88.9(\cmark)} &\ct{88.1(\cmark)} \\ & \mat{6} & - & \mat{15} & - & \mat{85.7(\cmark)} & \mat{86.1(\cmark)} &\mat{85.7(\cmark)} \\ \cmidrule(lr){1-8} AES & \rt{4} & \rt{1} & \rt{43} & \rt{4} & \rt{93.8(\cmark)} & \rt{81.5(\xmark)} &\rt{94.9(\cmark)} \\ \cmidrule(lr){1-8} \multirow{2}{*}{Bubble\_sort} & \rt{4} & \rt{8} & \rt{19} & \rt{192} & \rt{95.5(\xmark)} & \rt{95.5(\xmark)} &\rt{100(\cmark)} \\ & \ct{3} & \ct{3} & \ct{19} & \ct{124} & \ct{95.5(\xmark)} & \ct{95.5(\xmark)} &\ct{100(\cmark)} \\ \cmidrule(lr){1-8} Filter\_FIR & \rt{4} & \rt{2} & \rt{11} & \rt{13} & \rt{93.8(\cmark)} & \rt{93.8(\cmark)} & \rt{93.8(\cmark)} \\ \cmidrule(lr){1-8} \multirow{3}{*}{Interpolation} & \rt{63} & \rt{4} & \rt{11} & \rt{38} & \rt{45.1(\xmark)} & \rt{46(\xmark)} &\rt{76.1(\cmark)} \\ & \ct{3} & - & \ct{4} & - & \ct{57.5(\cmark)} & \ct{56.1(\cmark)} &\ct{57.5(\cmark)} \\ & \mat{3} & - & \mat{4} & - & \mat{57.5(\cmark)} & \mat{56.1(\cmark)} &\mat{57.5(\cmark)} \\\cmidrule(lr){1-8} Decimation & \ct{4} & - & \ct{24} & - & \ct{66.7(\cmark)} & \ct{66.7(\cmark)} &\ct{69.1(\cmark)} \\\cmidrule(lr){1-8} \multirow{2}{*}{Kasumi} & \rt{23} & - & \rt{5} & - & \rt{87.5(\cmark)} & \rt{84.3(\cmark)} & \rt{87.5(\cmark)} \\ & \ct{22} & - & \ct{5} & - & \ct{87.5(\cmark)} & \ct{84.3(\cmark)} &\ct{87.5(\cmark)} \\\cmidrule(lr){1-8} UART-1 & \ct{6} & \ct{3} & \ct{34} & \ct{234} & \ct{85.7(\cmark)} & \ct{81.2(\xmark)} &\ct{88.5(\cmark)} \\ UART-2 & \ct{2} & \ct{2} & \ct{32} & \ct{242} & \ct{79.4(\cmark)} & \ct{88.3(\cmark)} &\ct{88.3(\cmark)} \\ \bottomrule \end{tabular} \label{tab:trojan_detection} \end{table} \noindent{\textbf{1) Coverage obtained till Trojan detection:}} We report the branch coverage results of \ac{FuCE} in \autoref{tab:trojan_detection}. 
We present the number of test-cases and the time taken by \ac{FuCE} in each execution phase before trojan detection. One important point to note: \ac{FuCE} could detect all the trojans in the S3C benchmarks within the $conc_1$ execution phase. For designs where standalone \ac{AFL} detects the trojan in the $fuzz_1$ phase without exceeding the threshold time $time_{threshold}$, we do not invoke the $conc_1$ phase. We present the branch coverage obtained by \ac{FuCE} and the baseline techniques until trojan detection (or until reaching the pre-defined timeout limit).\\ \noindent{\textbf{Comparison with baselines:}} We compare the branch coverage obtained by \ac{FuCE} and the baseline techniques in \autoref{tab:trojan_detection} and observe that \ac{FuCE} outperforms the baselines both in trojan detection capability and in coverage achieved until trojan detection. For \textit{ADPCM}, \ac{FuCE} could detect the trojan in the $fuzz_1$ phase itself, so \ac{AFL} and \ac{FuCE} perform similarly in trojan detection for \textit{ADPCM}. \ac{S2E}, on the other hand, detects the trojan with slightly better coverage but takes longer; the timing comparison is shown in~\autoref{tab:trojanDetectionFuCE}. For \textit{AES}, \ac{FuCE} goes through the phases $fuzz_1$ and $conc_1$ to detect the trojan with better coverage than \ac{AFL} and \ac{S2E}. \ac{AFL} could not detect the trojan without exceeding the threshold time allotted for generating test inputs that hit a new branch in the design, so \ac{FuCE} shifts from the $fuzz_1$ phase to the $conc_1$ phase, detecting the trojan in less time than \ac{AFL} alone. For the benchmarks \textit{AES}, \textit{Bubble\_sort}, \textit{Filter\_FIR}, \textit{Interpolation} and \textit{UART}, \ac{FuCE} invokes $conc_1$, indicating that the fuzz engine stalled for $time_{threshold}$.
Similarly, \ac{S2E} could not detect all the trojans within the user-defined time limit, indicating that concolic execution is slow because of the timing overhead of the concolic engine's expensive SAT calls. An important advantage of using fuzzing alongside concolic execution is the automatic identification of the variable states responsible for complex checking operations. This effectively relieves the concolic engine of the burden of categorizing variables as symbolic or concrete before it is invoked. In the next subsection, we outline case studies on the S3C benchmarks describing how \ac{FuCE} alleviates the challenges of standalone techniques in exploring the design state-space without hitting a scalability bottleneck.\\ \noindent{\textbf{2) Analyzing timing improvement:}} We compare the wall-time for trojan detection of \ac{FuCE} with the baseline and state-of-the-art techniques. The reason is that the fuzz engine takes less CPU time for a given wall-time, as it is an IO-intensive process, whereas the concolic engine takes more CPU time for a given wall-time, as it forks multiple threads for test generation. For a fair comparison across a range of techniques, we therefore choose wall-time as the metric for comparison with previous works. The wall-time taken for trojan detection is presented in \autoref{tab:trojanDetectionFuCE} (Column 3). {\em TO} indicates that the trojan was not detected within the wall-time limit of two hours.\\ \noindent{\textbf{Comparison with baselines:}} From \autoref{tab:trojanDetectionFuCE}, we conclude that \ac{FuCE} takes less time than vanilla \ac{AFL} for half of the benchmark designs considered and outperforms \ac{S2E} on all designs except \textit{Decimation}. \ac{FuCE} avoids expensive path exploration by concolic execution using fuzzer-generated seeds, leading to faster and more scalable trojan detection.\\ \begin{table}[t] \centering \caption{Comparing Trojan detection using \emph{FuCE} with prior works~\cite{fuzzSystemC,symbolicSystemC}.
Trojan: \rt{CWOM}, \ct{SWM} and \mat{SWOM}.} \resizebox{\columnwidth}{!}{ \begin{tabular}{lcccccccccccccc} \toprule \multicolumn{1}{c}{\multirow{3}{*}{\bf{Benchmarks}}} & \multicolumn{5}{c}{\bf{Test-cases generated}} & \multicolumn{5}{c}{\bf{Wall-time taken (s)}} & \multicolumn{4}{c}{\bf{Memory footprint(MB)}} \\ \cmidrule(lr){2-6} \cmidrule(lr){7-11}\cmidrule(lr){12-14} & \begin{tabular}[c]{@{}l@{}}\bf{AFL}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\bf{AFL-}\\\textbf{SHT}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\\\bf{S2E}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\bf{SCT-}\\ \bf{HTD}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\\ \bf{FuCE}\end{tabular} & \bf{AFL} & \begin{tabular}[c]{@{}l@{}}\textbf{AFL-}\\ \textbf{SHT}\end{tabular} & \bf{S2E} & \begin{tabular}[c]{@{}l@{}}\bf{SCT-}\\ \bf{HTD}\end{tabular} & \bf{FuCE} & \begin{tabular}[c]{@{}l@{}}\bf{S2E}\\%\textrm{rand}\\{Seeds} \end{tabular} & \begin{tabular}[c]{@{}l@{}}\bf{SCT-}\\ \bf{HTD}\end{tabular} & \begin{tabular}[c]{@{}l@{}}\bf{FuCE}\\ \end{tabular} \\ \cline{1-15} \multirow{2}{*}{ADPCM} & \ct{3} & \ct{423} & \ct{14} & \ct{27} & \ct{3} & \ct{38} & \ct{1.17} & \ct{55} & \ct{157} & \ct{38} & \ct{3029} & \ct{3546} & \ct{51.44} \\ & \mat{6} & \mat{414} & \mat{23} & \mat{7} & \mat{6} & \mat{15} & \mat{1.67} & \mat{49} & \mat{31} & \mat{15} & \mat{3433} & \mat{1341} & \mat{51.53} \\ \cmidrule(lr){1-15} AES & \rt{7} & \rt{22} & \rt{2} & \rt{11} & \rt{5} & \rt{337} & \rt{0.04} & \rt{TO} & \rt{23} & \rt{47} & \rt{9879} & \rt{1386} & \rt{1051} \\ \cmidrule(lr){1-15} \multirow{2}{*}{Bubble\_sort} & \rt{30} & \rt{39} & \rt{160} & \rt{2} & \rt{12} & \rt{TO} & \rt{4.82} & \rt{TO} & \rt{8} & \rt{211} & \rt{4761} & \rt{1074} & \rt{2698} \\ & \ct{12} & \ct{108} & \ct{32} & \ct{4} & \ct{6} & \ct{TO} & \ct{337.36} & \ct{TO} & \ct{10} & \ct{143} & \ct{4759} & \ct{1106} & \ct{2618} \\ \cmidrule(lr){1-15} Filter\_FIR & \rt{11} & \rt{41} & \rt{5} & \rt{26} & \rt{6} & \rt{1184} & \rt{0.07} & \rt{41} & \rt{13} & \rt{24} & 
\rt{640} & \rt{1071} & \rt{480} \\ \cmidrule(lr){1-15} \multirow{3}{*}{Interpolation} & \rt{63} & \rt{2325402} & \rt{72} & - & \rt{67} & \rt{TO} & \rt{TO} & \rt{TO} & - & \rt{49} & \rt{16244} & - & \rt{3081}\\ & \ct{3} & \ct{47} & \ct{8} & - & \ct{3} & \ct{4} & \ct{0.16} & \ct{130} & - & \ct{4} & \ct{8790} & - & \ct{56.20} \\ & \mat{3} & \mat{47} & \mat{2} & - & \mat{3} & \mat{4} & \mat{0.16} & \mat{14} & - & \mat{4} & \mat{3326} & - & \mat{56.21}\\ \cmidrule(lr){1-15} Decimation & \ct{4} & - & \ct{2} & - & \ct{4} & \ct{22} & - & \ct{12} & - & \ct{24} & \ct{723} & - & \ct{55.57}\\ \cmidrule(lr){1-15} \multirow{2}{*}{Kasumi} & \rt{23} & \rt{316} & \rt{76} & - & \rt{23} & \rt{5} & \rt{1.32} & \rt{245} & - & \rt{5} & \rt{2908} & - & \rt{53.29}\\ & \ct{22} & \ct{414} & \ct{49} & - & \ct{22} & \ct{5} & \ct{1.32} & \ct{83} & - & \ct{5} & \ct{2976} & - & \ct{53.28}\\ \cmidrule(lr){1-15} UART-1 & \ct{7} & \ct{51} & \ct{15} & \ct{3} & \ct{9} & \ct{311} & \ct{0.18} & \ct{TO} & \ct{9} & \ct{268} & \ct{5929} & \ct{1071} & \ct{3164}\\ UART-2 & \ct{4} & - & \ct{2} & \ct{3} & \ct{4} & \ct{298} & - & \ct{730} & \ct{9} & \ct{274} & \ct{2651} & \ct{1070} & \ct{2618}\\ \bottomrule \end{tabular} } \label{tab:trojanDetectionFuCE} \end{table} \noindent{\textbf{Comparison with state-of-the-art approaches:}} We reached out to the authors of \cite{fuzzSystemC,symbolicSystemC} to obtain their test-generation frameworks for an independent evaluation and apples-to-apples comparison with \ac{FuCE}. However, the actual implementations were unavailable; we therefore compared against the timings reported in their papers and used a similar computing platform for our experiments. We found that \ac{FuCE} took longer than \ac{AFL-SHT} except in two cases, \textit{interpolation-cwom} and \textit{bubble\_sort-swm}, where \ac{FuCE} detects the trojan in 49s and 243s respectively. In these two cases, \ac{AFL-SHT} either took longer or timed out.
This exhibits a fundamental limitation of fuzzing, even when the fuzz engine is heavily customized to the benchmark characteristics. \ac{FuCE}, on the other hand, can easily identify when the fuzz engine is stuck and invoke the concolic engine to penetrate deeper program states. In a subsequent section, we describe in detail the trojan behaviour in the S3C benchmarks and \ac{FuCE}'s ability to detect it. \ac{FuCE} detected trojans more quickly than \ac{SCT-HTD} for \textit{ADPCM}, \textit{Interpolation}, \textit{Decimation} and \textit{Kasumi}, because \textit{SCT-HTD} took longer to solve the computationally intensive operations needed to generate the test-cases.\\ \noindent{\textbf{3) Analyzing test-case quality:}} As listed in \autoref{tab:trojanDetectionFuCE} (Column 2), \ac{FuCE} leverages \ac{S2E} with fuzz-generated test-cases to accelerate coverage over a defined period of time. The fuzzer-generated input seeds guide the symbolic engine to construct the execution tree along the execution paths triggered by existing test-cases and to generate new test-cases reaching unexplored conditional statements. For a fair comparison with \ac{FuCE}, we report the number of test-cases preserved by each technique until it reaches the user-defined timeout (or until trojan detection).\\ \noindent{\textbf{Comparison with baselines:}} Compared to \ac{AFL} and \ac{S2E}, the number of test-cases generated until trojan detection is comparable for all benchmarks. A closer analysis reveals that the number of test-cases generated by \ac{FuCE} is the same as \ac{AFL}'s in the cases where standalone \ac{AFL} was sufficient to detect the trojans without stalling for $time_{threshold}$. But in the cases where \ac{AFL} exceeded $time_{threshold}$ without generating a new test-case that improves coverage, \ac{FuCE} invoked the concolic engine to quickly generate high-quality test-cases. Overall, \ac{FuCE} was able to detect trojans using fewer test-cases than both \ac{AFL} and \ac{S2E}.
This indicates that \ac{FuCE} uses the time budget judiciously, creating effective test-cases that explore deeper program segments when fuzzing is stuck.\\ \noindent{\textbf{Comparison with state-of-the-art approaches}}: We compared the number of test-cases generated by state-of-the-art approaches with \ac{FuCE}. \ac{FuCE} outperforms \ac{SCT-HTD} for designs like \textit{ADPCM}, \textit{AES} and \textit{Filter-FIR} by generating fewer test-cases. Although \ac{SCT-HTD} is better than \ac{FuCE} at generating effective test-cases for designs like \textit{Bubble-sort} and \textit{UART}, we show in subsequent sections that this depends heavily on the exploration strategy the concolic engine selects to explore the search space. Since the branch coverage achieved by \ac{SCT-HTD} until trojan detection is unavailable, it is difficult to conclude whether \ac{SCT-HTD} generated fewer test-cases with better coverage, or quickly reached the trojan location and terminated before achieving substantial coverage on the rest of the design.\\ \noindent{\textbf{4) Analyzing memory footprint:}} The last column of \autoref{tab:trojanDetectionFuCE} reports the maximum memory footprint. We observe that \ac{FuCE} detects trojans with reasonable memory usage. We compare it with the concolic execution engine \ac{S2E} and with \ac{SCT-HTD} as reported in~\cite{symbolicSystemC}. \emph{AFL} is an input-output~(IO) bound process, whereas \ac{S2E} is a memory-intensive program that allocates large amounts of memory to run an application on a virtual machine. For \ac{AFL} and \ac{AFL-SHT}, we used the default configuration of 50MB of memory, which was sufficient for all the S3C benchmarks. From the results, one can see that fuzzing combined with concolic execution has, on average, a 50\% smaller memory footprint than standalone concolic execution.
\input{images/fig_branchCov} \subsection{Coverage Improvement} To analyze the effectiveness of the \ac{FuCE} framework, we measure the branch coverage obtained by running the baseline techniques and \ac{FuCE} with a time limit of two hours. \autoref{fig:coverageOfFuCE} and \autoref{tab:cov_improve} present a detailed coverage analysis over the entire time period for \ac{FuCE} and the baseline techniques. From the coverage data (\autoref{fig:coverageOfFuCE}), we can categorize the S3C benchmarks into two types from a testing perspective: simple and complex. For simple designs with a small number of nested conditional statements, such as \textit{Bubble-sort} and \textit{Kasumi}, \ac{FuCE} could achieve 100\% coverage within a short span of time with few test-cases. Complex benchmarks, such as \textit{ADPCM} and \textit{Interpolation}, have deeper levels of nested loops and conditional statements along with ternary operations. \ac{FuCE} achieved 100\% branch coverage on all S3C benchmarks except \textit{UART}, \textit{AES}, \textit{Filter\_FIR} and \textit{Decimation}. We dug deeper into the coverage analysis for the benchmarks where \ac{FuCE} could not achieve 100\% coverage and made an interesting observation: the original benchmark designs contain unreachable branch conditions. We analytically identified the uncovered branch conditions from our experiments and verified that no test-case can cover these code segments. We believe that these unreachable code segments will be optimized away during \ac{RTL}-level synthesis, so this is not a drawback of the \ac{FuCE} test-generation framework. \begin{table*} \centering \caption{Coverage improvement by {\it FuCE}.
Trojan types: \rt{CWOM}, \ct{SWM} and \mat{SWOM}.} \resizebox{0.85\columnwidth}{!}{ \begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{\textbf{Benchmarks}} & \multicolumn{3}{c}{\textbf{\#Testcases generated}} & \multicolumn{3}{c}{\textbf{Time(in s)}} & \multirow{2}{*}{\textbf{Phases}} & \multirow{2}{*}{\textbf{Branch cov. (\%)}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & {$fuzz_1$} & {$conc_1$} & {$fuzz_2$} & {$fuzz_1$} & {$conc_1$} & {$fuzz_2$} & & \\ \midrule \multirow{2}{*}{ADPCM} & \ct{3} & \ct{39} & \ct{3} & \ct{43} & \ct{1800} & \ct{1940} & \ct{$fuzz_1$-$conc_1$-$fuzz_2$} & \ct{100}\\ & \mat{6} & \mat{82} & \mat{4} & \mat{20} & \mat{1800} & \mat{1640} & \mat{$fuzz_1$-$conc_1$-$fuzz_2$} & \mat{100} \\ \cmidrule(lr){1-9} AES & \rt{4} & \rt{1} & - & \rt{43} & \rt{4} & - & \rt{$fuzz_1$-$conc_1$} & \rt{94.9} \\ \cmidrule(lr){1-9} \multirow{2}{*}{Bubble\_sort} & \rt{4} & \rt{8} & - & \rt{19} & \rt{192} & - & \rt{$fuzz_1$-$conc_1$} & \rt{100} \\ & \ct{3} & \ct{3} & - & \ct{19} & \ct{124} & - & \ct{$fuzz_1$-$conc_1$} & \ct{100} \\ \cmidrule(lr){1-9} Filter\_FIR & \rt{4} & \rt{2} & - & \rt{13} & \rt{11} & - & \rt{$fuzz_1$-$conc_1$} & \rt{93.8}\\ \cmidrule(lr){1-9} \multirow{3}{*}{Interpolation} & \rt{63} & \rt{460} & \rt{55} & \rt{11} & \rt{1800} & \rt{3880} & \rt{$fuzz_1$-$conc_1$-$fuzz_2$} & \rt{100}\\ & \ct{3} & \ct{1427} & \ct{28} & \ct{9} & \ct{270} & \ct{2217} & \ct{$fuzz_1$-$conc_1$-$fuzz_2$} & \ct{100}\\ & \mat{3} & \mat{663} & \mat{51} & \mat{9} & \mat{270} & \mat{3640} & \mat{$fuzz_1$-$conc_1$-$fuzz_2$} & \mat{100}\\ \cmidrule(lr){1-9} Decimation & \ct{4} & \ct{12} & \ct{5} & \ct{29} & \ct{126} & \ct{1429} & \ct{$fuzz_1$-$conc_1$-$fuzz_2$} & \ct{96.8}\\ \cmidrule(lr){1-9} \multirow{2}{*}{Kasumi} & \rt{23} & \rt{22} & - & \rt{10} & \rt{1221} & - & \rt{$fuzz_1$-$conc_1$} & \rt{100}\\ & \ct{22} & \ct{12} & - & \ct{10} & \ct{1682} & - & \ct{$fuzz_1$-$conc_1$} & \ct{100}\\ \cmidrule(lr){1-9} UART-1 & \ct{6} & \ct{3} & - & \ct{34} & \ct{234} & - & 
\ct{$fuzz_1$-$conc_1$} & \ct{88.5}\\ UART-2 & \ct{2} & \ct{2} & - & \ct{32} & \ct{242} & - & \ct{$fuzz_1$-$conc_1$} & \ct{88.3}\\ \bottomrule \end{tabular} } \label{tab:cov_improve} \end{table*} \subsection{Case Studies} Here, we dive deeper into \ac{FuCE}'s performance on four \emph{S3CBench} designs along two orthogonal directions: 1) trojan detection capability and 2) achievable branch coverage within a defined time limit.\\ \noindent{\textbf{1) Case Study I: Trojan Detection}} {\textit{Interpolation}} is a 4-stage interpolation filter. We consider the \textit{CWOM} trojan variant for our case study. Among all state-of-the-art techniques, only \ac{FuCE} could successfully detect this trojan (\autoref{tab:trojanDetectionFuCE}). We show the \textit{CWOM} trojan variant of interpolation in~\autoref{label:interpcode}. The trojan triggers once the output of the FIR filter's final stage ($SoP4$) matches a specific ``magic'' value. Line \ref{line:trigger} is the trigger logic and line \ref{line:payload} shows the payload circuit as an output write operation. The trigger activates for an input for which the sum-of-products of the fourth filter ($SoP4$) equals $-0.015985481441$. Trigger activation leads to execution of the payload circuit (line \ref{line:payload}), which writes the output $odata\_write = -0.26345$. Inputs not satisfying the trigger condition behave functionally equivalently to the trojan-free design. Fuzzing techniques like \ac{AFL} are unlikely to satisfy conditional checks against specific values (line \ref{line:trigger} of~\autoref{label:interpcode}) in a short time-span; \ac{AFL} failed to detect the trojan within the two-hour time limit. \ac{S2E} executing with random seeds also failed to generate inputs satisfying the trojan trigger condition. However, \ac{FuCE} leverages the strengths of both fuzzing and concolic execution: \ac{FuCE} passes the interesting inputs identified by the fuzzer in phase $fuzz_1$ to S2E in phase $conc_1$.
S2E traces each input generated by the fuzzer, discovering program states left unexplored by \ac{AFL} during phase $fuzz_1$. \ac{S2E} generates inputs satisfying complex branch conditions to explore the undiscovered states. Thus, \ac{FuCE} triggered the trojan payload using \ac{S2E} and successfully detected the trojan. \\ \lstinputlisting[caption={Interpolation - Trojan Logic}, style=customc,label={label:interpcode}]{codes/filt_interp.cpp} \vspace{0.8cm} {\textit{Advanced Encryption Standard}} (AES) is a symmetric block cipher algorithm. The plain text is 128 bits long and the key can be 128, 192 or 256 bits. \emph{S3CBench} contains an AES-128 design, and the trojan type is \textit{CWOM}. This trojan leaks the secret key for a specific plain-text input, corrupting the encryption and generating incorrect cipher-text. \\ AES-128 performs ten rounds of repetitive operations to generate the cipher-text $CT_{10}$. The trojan implementation performs an additional $n$ rounds to generate the cipher-text $CT_{10+n}$. Initially, the attacker tampers with the key $K_n$ to be used in round $R_{10+n}$ for the round operations \textit{SubBytes, ShiftRows, MixColumns, AddRoundKey}. Later, the key $K_{10}$ can be recovered from the plain text $P$ and the cipher-text $CT_{10+n}$. Using \ac{FuCE}, we compare the generated cipher-text with the expected cipher-text to detect the presence of a trojan. \lstinputlisting[caption={AES - Trojan Logic}, style=customc,label={label:aescode}]{codes/aescode.cpp} \vspace{0.5cm} We examined the performance of \ac{FuCE} on \textit{AES}. \ac{FuCE} successfully detected the trojan in \textit{AES-cwom} in phases $fuzz_1$-$conc_1$ with a $6.5\times$ speed-up compared to \ac{AFL}. From~\autoref{tab:trojan_detection}, we observe that \ac{FuCE} detected the trojan using a test-case generated by \ac{S2E} during phase $conc_1$. We supplied the four test-cases generated by \ac{AFL} during $fuzz_1$ to \ac{S2E}.
However, \ac{S2E} supplied with random test-cases was unable to detect the trojan (\autoref{tab:trojanDetectionFuCE}) and timed out with a branch coverage of 81.5\%. We indicate the trojan-infected portion of \textit{AES} in~\autoref{label:aescode} (lines 2 and 3). The trojan trigger condition for \emph{AES} is a rare combination of input values. \\ \noindent{\textbf{2) Case Study II: Coverage Improvement}} \lstinputlisting[caption={ADPCM - Encode and Decode Logic}, style=customc,label={label:adpcmcode}]{codes/adpcm_encode.cpp} \vspace{0.5cm} {\textit{Adaptive Differential Pulse Code Modulation}} (ADPCM) converts analog information to binary data. \emph{ADPCM} converts 16-bit Pulse Code Modulation (PCM) samples into 4-bit samples. The trojan considered is of \emph{SWM} type; it is triggered once a counter reaches a specific value, corrupting the modulation process. Although \ac{FuCE} detects the trojan successfully in phase $fuzz_1$ with a coverage of 88.1\%, it does not attain 100\% coverage at this phase. From the \emph{LCOV} report, we observed that \ac{FuCE} was unable to reach certain portions of code with nested conditional branch statements, as given in~\autoref{label:adpcmcode}. So \ac{FuCE} proceeds to phase $conc_1$ for further exploration with the input seeds generated in $fuzz_1$. In $conc_1$, \ac{S2E} traces the program along the same paths taken by the fuzzer. When \ac{S2E} arrives at the conditional check at line 2 of the encode logic (\autoref{label:adpcmcode}), it finds that the path was not covered by the fuzzer, so it produces an input satisfying the condition, which drives the execution to this new state transition. After phase $conc_1$, the coverage analyzer of the \ac{FuCE} framework found that the coverage had improved to 89.5\%, but the target coverage of 100\% was yet to be reached.
Since concolic execution is a slow process, \ac{S2E} fails to generate, within its time bound, test inputs that satisfy the nested conditional statements at line 11 of the decode logic (\autoref{label:adpcmcode}). So \ac{FuCE} moves to the next phase, $fuzz_2$, to look for the undiscovered paths. \ac{AFL} starts its execution with the input seeds generated by \ac{S2E} at phase $conc_1$, which guides the fuzzer to quickly penetrate the nested branch conditions and generate test cases that give 100\% branch coverage for \ac{FuCE}. Thus \emph{FuCE} achieves 100\% branch coverage following the phases $fuzz_1$-$conc_1$-$fuzz_2$ within the defined time budget of two hours. From the plot of \emph{ADPCM-SWM} in \autoref{fig:coverageOfFuCE} (a), we can see that \ac{FuCE} successfully achieves 100\% coverage in 3800 seconds. \ac{AFL}, on the other hand, achieves a coverage of 96.4\% in 6115 seconds and could not improve further within the test time of two hours. \ac{S2E} reaches a maximum coverage of 88.9\% at 310 seconds while running for two hours. The breakdown of the code-coverage results for \emph{ADPCM-SWM} by \emph{FuCE} can be found in~\autoref{tab:cov_improve}.\\ {\textit{Decimation Filter}} is a 5-stage filter consisting of five cascaded FIR filters. The trojan inserted here is of \textit{SWM} type, triggered for a particular value of a counter in the design, and is inserted in the final (fifth) stage. Results on this \emph{S3CBench} design have not been reported by either of the state-of-the-art prior works \ac{AFL-SHT} and \ac{SCT-HTD}. We have evaluated this benchmark successfully with \ac{FuCE}, \ac{AFL} and \ac{S2E}. In our experiments, all the techniques could successfully detect the trojan in the circuit, but \ac{FuCE} outperformed the others significantly with respect to achievable branch coverage.
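The $fuzz_1$-$conc_1$-$fuzz_2$ interleaving just described can be summarized as a simple driver loop. This is a schematic sketch, not the actual tool: \texttt{run\_fuzzer} and \texttt{run\_concolic} are hypothetical wrappers around AFL and S2E, each assumed to take a seed corpus and return the enlarged corpus together with the branch coverage it reached.

```python
# Schematic FuCE-style driver (hypothetical wrappers, not the real framework):
# alternate fuzzing and concolic phases, feeding each phase the seeds produced
# by the previous one, until full coverage or the time budget is exhausted.
import time

def fuce_loop(run_fuzzer, run_concolic, seeds, budget_s=7200, target_cov=100.0):
    phases = [run_fuzzer, run_concolic]      # fuzz_1, conc_1, fuzz_2, ...
    deadline = time.monotonic() + budget_s   # e.g. two-hour budget
    coverage, i = 0.0, 0
    while coverage < target_cov and time.monotonic() < deadline:
        phase = phases[i % 2]
        seeds, coverage = phase(seeds)       # each phase returns (corpus, coverage %)
        i += 1
    return seeds, coverage
```

The point of the hand-off is that each engine resumes from exactly the frontier the other one reached, which is what lets the fuzzer break through the nested branches that stalled the concolic engine, and vice versa.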
From the plot of \emph{decimation-SWM} in~\autoref{fig:coverageOfFuCE} (f), it is observed that the seeds generated by \ac{FuCE} attain a coverage of 96.8\% after running for 3289 seconds, whereas \ac{AFL} attains its maximum coverage of 69.1\% with seeds generated in the interval of 30--60 seconds, and \ac{S2E} reaches a coverage of 73\% in the interval of 100--300 seconds, beyond which coverage did not increase even within the two-hour time limit. \ac{FuCE} could cover almost all portions of the code, using its interlaced execution of phases $fuzz_1$-$conc_1$-$fuzz_2$, that \ac{AFL} and \ac{S2E} failed to cover individually. \label{label:exResults} \section{Conclusion} \label{sec:conclusion} In this work, we have identified the existing challenges of test-based hardware trojan detection techniques on high-level synthesized designs. To this end, we proposed an end-to-end test-generation framework penetrating into deeper program segments at the HLS level. Our results show faster detection of trojans with better coverage scores than earlier methods. The proposed \emph{FuCE} test-generation framework has the potential to reinforce automated detection of security vulnerabilities present in HLS designs~\cite{farimah2021}. We are investigating the ways in which trojans in RTL/gate-level netlists manifest in HLS using tools like VeriIntel2C and Verilator. Future work includes exploration of input-grammar-aware fuzzing and more focus on coverage metrics such as Modified Condition/Decision Coverage (MC/DC)~\cite{codeCoverage} and path coverage.
\section{Introduction} Non-abelions in solid state systems, such as Majorana fermions, parafermions or Fibonacci anyons, result in a topologically degenerate ground state characterized by non-Abelian statistics and provide paths to topological fault-tolerant quantum computing \cite{Kitaev2003,Nayak2008}. Exotic states with non-Abelian excitations are predicted to emerge in correlated states in the fractional quantum Hall regime in two-dimensional electron, bilayer and hole gases \cite{MooreRead, Review, ReadRezayiPRB99, BarkeshliWenNATPRL, PetersonDasSarmaPRB, PapicGoerbig12PRB, Simion2017}, in $p$-wave $^3$He \cite{Volovik}, and in hybrid superconductor/topological insulator \cite{FuKanePRLMajorana,Molenkamp2017} and superconductor/semiconductor \cite{DasSarmaMajoranaPRL10,Oreg10,Mourik2012,Rokhinson2012a} systems. Topological superconductors can be divided into two broad classes: one, in which disorder induced by impurities strongly suppresses the topological superconducting gap and can be detrimental to non-Abelions \cite{Lee2011,Huse2001,Stanescu2011,Lutchin2011,Brouwer2011,Brouwer2011L,Sau2012}, and the other, in which non-Abelian excitations are protected by a disorder-robust topological superconducting gap \cite{dasarma_chain_2012,Akhmerov_chain_2013,Vishveshwara,Franz,Nori}. In this work, we present a third class of topological superconductivity and Majorana fermions, which appear exclusively in the presence of disorder within an otherwise gapped energy spectrum. Observation and control of Majorana fermions and other non-Abelions often require a symmetry of an underlying system leading to a gap in a single-particle or a quasiparticle spectrum. An example is a quantum Hall system proximity-coupled to a superconductor, where Majorana fermions \cite{SternBerg2012}, parafermions \cite{Clarke2012}, and Fibonacci fermions \cite{Mong2013} are predicted to be formed in the presence of interacting counter-propagating edge channels.
Experiments, though, indicate strong level repulsion and opening of a large exchange gap for interacting edge channels with the same orbital quantum numbers \cite{Kazakov2016}. A promising alternative is a quantum Hall ferromagnetic transition, where coupled counter-propagating chiral states at the boundaries of ferromagnetic domains form helical domain walls \cite{falko00}, perfect precursors for the formation of topological channels in the presence of superconducting interactions. However, even when helical domain walls are formed from almost orthogonal states with different orbital quantum numbers and opposite spins in the integer and fractional QHE regimes, spin-orbit interactions open small spectral gaps in the bulk Landau level spectrum \cite{Kazakov2016} and in the spectra of electrostatically induced edge states \cite{Kazakov2017}. These gaps suppress electron transport at low temperatures \cite{eisenstein90,Kazakov2016}, but in short helical domain walls transport can be carried by the in-gap states \cite{Kazakov2017}. Here we demonstrate that disorder-induced in-gap states in electrostatically defined helical domain walls can lead to topological superconductivity when coupled to an s-wave superconductor. We solve a general quantum mechanical problem of impurity states in the presence of Landau quantization and spin-orbit interactions, and derive impurity states in the electrostatically induced domain wall. We then map the system of these states onto the generalized Kitaev chain \cite{kitaev_unpaired_2001,dasarma_chain_2012}, and calculate the phase diagram for the existence of topological superconductivity and Majorana fermions. Finally, we demonstrate that with local control of the QHFm transition it is possible to induce, move, exchange, fuse and braid Majorana modes.
We consider a specific case of a 2D electron gas formed in asymmetrically Mn-doped CdTe quantum wells, where local electrostatic control of a quantum Hall ferromagnetic transition and manipulation of a single helical domain wall have been recently reported \cite{Kazakov2017}. However, these experiments indicated the vital role of disorder in conductance through an electrostatically controlled individual domain wall in this system, and raised the question of whether this intrinsically precludes the generation of Majorana fermions and other non-Abelions in quantum Hall ferromagnets or whether non-Abelions are still feasible. We resolve this problem affirmatively, showing that disorder is crucial for generating non-Abelions. Our conclusions should be applicable to any system where a quantum Hall ferromagnetic transition can be locally controlled, such as, e.g., a 2D hole gas in Ge \cite{Lu2017}. Quantum Hall ferromagnetic transitions in the integer and fractional QHE regimes have been observed in 2D gases in many semiconductors, including GaAs \cite{eisenstein90,Koch1993}, AlAs \cite{DePoortere2000}, InSb \cite{Chokomakoua2004}, CdMnTe \cite{Jaroszynski2002,Betthausen2014}, Si \cite{Lai2006} and graphene \cite{Feldman2013}, and their electrostatic control has been shown \cite{Klitzing2002,Kazakov2016}. \section{Majorana modes in a helical domain wall} In Mn-doped CdTe quantum wells, an external magnetic field $B$ aligns spins of Mn$^{2+}$ ions and generates an additional exchange contribution to the electron spin splitting due to interactions between conduction electrons and d-shell electrons localized on Mn \cite{Wojtowicz1999}. This s-d exchange splitting has a sign opposite to the bare Zeeman splitting for electrons in the conduction band, leading to multiple level crossings at high magnetic fields \cite{Jaroszynski2002}.
The ferromagnetic transition of interest occurs at a crossing of states with opposite polarizations belonging to the first two Landau levels $(n=0,\uparrow)$ and $(n=1,\downarrow)$ at a filling factor $\nu=2$. In asymmetrically Mn-doped quantum wells the strength of the s-d exchange can be electrostatically controlled \cite{Kazakov2016}, and it is possible to form an unpolarized and a fully polarized state under different gates, as shown schematically in Fig.~\ref{fig:varg_edges}. \begin{figure}[h] \includegraphics[width=0.9\textwidth]{Edges.pdf} \caption{a) Electrostatic gates $V_1$ and $V_2$ control magnetization $\bf{M}_1$ and $\bf{M}_2$ caused by the electron exchange interactions with Mn impurities. A spatial gradient of magnetization $J_1$ and a potential gradient $\mathcal E_x$ result in the formation of edge-like states between the gates (red and blue). Vertical arrows along edge states show the spin polarization of electrons, which is opposite for the two edge-like states as a result of the quantum Hall ferromagnetic transition. Between the gates the edge-like states have opposite velocities. Hybridized, they form a helical domain wall. b) Energy profile of electron states in the absence of spin-orbit interactions. Due to the different polarization of red and blue states, the electron system at $\nu=2$, which also includes electrons in the ground Landau level (black), is unpolarized on the left and polarized on the right.} \label{fig:varg_edges} \end{figure} In order to describe a helical domain wall formed between the gates, we consider the edge-like states in a quantum Hall system induced by an electrostatic potential $V(z,x)$ uniform along the $y$-direction and varying between $V_1$ and $V_2$ in the $x$-direction between the two gates, Fig.~\ref{fig:varg_edges}.
The electron Hamiltonian is given by \begin{equation} \label{model} H=\frac{1}{2m^*}\left(-i\hbar \nabla-\frac{e{\bf{A}}}{c}\right)^2+e\mathcal E_x x+ \frac{1}{2} \sigma_z (g^*\mu_B B+J_0 + J_1 x) ~, \end{equation} where $\bf{A}$ is the vector potential of a magnetic field ${\bf B}=\nabla\times{\bf A}$, which is directed along negative $z$, $B= |B_z|$, $m^{*}$, $e$ and $g^*$ are the electron effective mass, charge, and g-factor, $\vec{\sigma}$ is the Pauli matrix vector, $\sigma_z$ is its $z$-component, $\mu_B$ is the Bohr magneton, and $\mathcal E_x = -\nabla_x \int \Psi^*(z) V(z,x)\Psi (z) dz$ is the electric field in the $x$-direction caused by the gradient of the gate-induced potential $V(z,x)$. In the mean field approximation, s-d exchange interactions are represented by a uniform part $J_0$ and a gate-induced variation of the s-d exchange $J_1 x$ \cite{sup1}. $J_1$ constitutes a spin-dependent electric field in the $x$-direction. As was demonstrated in \cite{Kazakov2016}, using a combination of front and back gates and in conditions of a non-uniform doping of the quantum well by Mn$^{2+}$ ions along the growth direction $z$, it is possible to achieve an almost uniform 2D electron density but induce a significant $J_1\gg e\mathcal E_x$ \cite{sup1}. While considering nonzero $\mathcal E_x$ will not change our essential results, we will keep only the effective spin electric field $J_1$ and take $\mathcal E_x=0$.
In this model, the electron eigenvalues and wavefunctions are \begin{eqnarray} \label{eq:eigen_e_w} E_{n,s,k_y}&=&\hbar\omega_c\left(n+\frac{1}{2}\right)+\hbar k_y v_{s}-\frac{m^*v_{s}^2}{2}-\frac{s}{2}(g^*\mu_B B+J_0)\\ \label{eq:eigen_w} \psi_{n,s,k_y}&=&u_n\left(x-k_y\ell^2+\frac{v_s}{\omega_c}\right) e^{ik_y y}\chi_s~, \end{eqnarray} where $\omega_c=eB/(m^*c)$ is the cyclotron frequency, $k_y$ is the $y$-component of the wavevector $\vec{k}$, $u_n$ are the Landau wavefunctions, $s=\pm1$ is for spins up and down, $\chi_{1}=\chi_{\uparrow}=(1,0)^{T}$ and $\chi_{-1}=\chi_{\downarrow}=(0,1)^{T}$. The spin-dependent drift velocity is $v_s=s\cdot v$, where $v=cJ_1/(2eB)$. At $\nu=2$ the edge-like states, Eq.~\ref{eq:eigen_e_w}, are localized near the spectral crossing of the $(n=0,\uparrow)$ and $(n=1,\downarrow)$ states and can propagate between the two gated regions with opposite velocities. Non-magnetic disorder cannot cause scattering between the two edge-like states (\ref{eq:eigen_w}) due to their opposite spins. However, two edges with opposite velocities originating from neighboring Landau levels are coupled by spin-orbit interactions, similar to the coupling of edges in a 2D topological insulator introduced by an in-plane Zeeman field. The specific mechanism of such coupling is the Rashba (but not the Dresselhaus) spin-orbit interaction, described by a 2D Hamiltonian $H_R= \gamma_R \mathcal E_z (\vec{k}\times \vec{\sigma})_z$. Here ${\mathcal{ E}}_z$ is the component of the electric field perpendicular to the 2D plane, and $\gamma_R$ is the Rashba coefficient. The resulting spin-orbit coupling $h_R= \int \psi^*_{0,1,k_y}H_R\psi_{1,-1,k_y}dxdy$ is given by \begin{equation} \label{eq:h_R} h_R= \sqrt{2}\frac{\gamma_R{\mathcal{ E}}_z}{\ell} e^{-\frac{m^2 \ell^2}{\hbar^2} v^2} \left[1-\frac{m^2 \ell^2}{\hbar^2}v^2\right]~, \end{equation} where $\ell=(eB/\hbar c)^{-1/2}$ is the magnetic length.
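The velocity dependence of the Rashba matrix element in Eq.~(\ref{eq:h_R}) is easy to check in dimensionless form: writing $h_R \propto e^{-\xi}(1-\xi)$ with $\xi=m^2\ell^2v^2/\hbar^2$, the inter-level coupling is maximal at $v=0$, vanishes at $\xi=1$, and changes sign beyond that point. A minimal numerical sketch of this profile (an illustration of the formula above, with prefactors dropped):

```python
import math

def h_r_profile(xi: float) -> float:
    """Dimensionless part of Eq. (h_R): exp(-xi) * (1 - xi),
    where xi = (m * l * v / hbar)**2 is the squared drift-velocity parameter."""
    return math.exp(-xi) * (1.0 - xi)

# Maximal coupling at v = 0, zero crossing at xi = 1 (m*l*v = hbar),
# sign change and exponential suppression beyond it.
```

The sign change at $\xi=1$ means the spin-orbit gap discussed below can in principle be tuned through zero by the gate-controlled drift velocity.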
In the presence of this spin-orbit coupling, the effective single-particle Hamiltonian in the basis of the $(n=0, \uparrow)$ and $(n=1, \downarrow)$ states (\ref{eq:eigen_e_w}) near their spectral crossing is given by \begin{equation} \label{eq:speffective} H_e=\hbar k_yv\sigma_z-h_R\sigma_x~. \end{equation} Thus, this single-particle system, which serves as a setting for the proximity-induced topological superconductivity, is rather unusual: in contrast to the nanowires and topological insulators, where spin-orbit interactions result in the level crossing and the Zeeman interaction provides a gap, here the Zeeman interaction is responsible for the crossing at $k=0$ while spin-orbit interactions open a gap in the spectrum. The states (\ref{eq:speffective}) exhibit a helical electron spin texture similar to N\'eel domain walls. We have calculated the texture numerically, Fig.~\ref{fig:spin_struct}, taking into account exchange interactions between electrons. \begin{figure} \vspace{-1cm} \includegraphics[width=0.7\textwidth]{Fig2a.pdf} \includegraphics[width=0.7\textwidth]{Fig2b.pdf} \caption{Panel a) Spin texture seen by the ground state in a system of two induced edge states originating from the LL1 spin-down and LL0 spin-up states, coupled by Rashba spin-orbit interactions near the spectral crossing. Exchange interactions between electrons are taken into account. Gated areas are shown in yellow, while the edge channels propagate through the green region. Panel b) Average spin projection on the $x$ (blue), $y$ (red) and $z$ (black) directions in the ground state.}\label{fig:spin_struct} \end{figure} In order to see how non-Abelian quasiparticles can emerge in the CdMnTe quantum Hall system, we consider superconductor proximity-induced electron pairing.
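The N\'eel-like texture of the states of Eq.~(\ref{eq:speffective}) can be verified directly by diagonalizing $H_e=\hbar k v\,\sigma_z - h_R\sigma_x$ at each $k$. A minimal sketch, assuming numpy and working in units where $\hbar v = h_R = 1$: the lower-band spin points along $+x$ at the crossing and rotates toward $\mp z$ as $|k|$ grows.

```python
import numpy as np

# Pauli matrices (real representations suffice here)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def lower_band_spin(k: float, hr: float = 1.0):
    """Ground state of H_e = k*sz - hr*sx (units hbar*v = 1);
    returns the expectation values (<sx>, <sz>)."""
    h = k * sz - hr * sx
    vals, vecs = np.linalg.eigh(h)   # eigh returns ascending eigenvalues
    g = vecs[:, 0]                   # lower band
    return float(g @ sx @ g), float(g @ sz @ g)
```

At $k=0$ the spin lies fully in-plane ($\langle\sigma_x\rangle=1$), while far from the crossing it aligns with $\mp z$, reproducing the rotation seen in Fig.~\ref{fig:spin_struct}.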
To illustrate the potential of this system for hosting Majorana modes, we will first assume that the Fermi level is outside the spin-orbit gap and crosses the edge-like states forming the helical domain wall. We then consider a proximity effect induced by superconducting Ohmic contacts directly coupling the edge-like states to an s-wave superconductor and inducing an order parameter $\Delta(x,y)$. Pairing the states of Hamiltonian (\ref{eq:speffective}) is described by the projected order parameter $\Delta_k= \int dxdy\, \psi_{0,\uparrow,k_y} \Delta(x,y) \psi_{1,\downarrow,k_y}$. Due to the opposite velocities of the coupled edge-like states, $\Delta_k$ is sizable even in the approximation of a constant $\Delta(x,y)=\Delta$, despite the different Landau indices for the two edges: \begin{equation} \label{Delta_k} \Delta_k=\Delta e^{ -\frac{m^2 \ell^2} {2\hbar^2}v^2 } \frac{\sqrt{2}m \ell}{\hbar }v. \end{equation} The corresponding Bogoliubov-de Gennes (BdG) equation ${\cal H }\boldsymbol{ \psi} (x,y)=E \boldsymbol {\psi} (x,y)$, where $\boldsymbol {\psi} (x,y)= (u_{\uparrow},u_{\downarrow},v_{\downarrow},-v_{\uparrow})^T$, is defined by \begin{equation} \label{eq:bdG_gen}{\cal H }= \left[ \begin{array}{cccc} \hbar kv -\mu& -h_R & \Delta_k & 0\\ -h_R &- \hbar kv -\mu & 0 & \Delta_k \\ \Delta_k^* & 0 & -\hbar kv +\mu & -h_R\\ 0 & \Delta_k^* & -h_R& \hbar kv+\mu \end{array} \right]~, \end{equation} where $\mu$ is the chemical potential measured from the crossing point energy in the absence of Rashba coupling. Its four eigenvalues are: \begin{equation} E_k=\pm \sqrt{\Delta_k^2+\mu^2+h_R^2+ \hbar^2 k^2v^2 \pm 2\sqrt{h_R^2\Delta_k^2+ h_R^2\mu^2 +\hbar^2 k^2v^2\mu^2}}~. \end{equation} The system becomes gapless for $k=0$ and $\Delta_{k=0}^2+\mu^2=h_R^2$, and at $|h_R|<\sqrt{\Delta_{k=0}^2+\mu^2}$, exhibits a topologically non-trivial superconducting phase.
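The gap-closing condition $\Delta_{k=0}^2+\mu^2=h_R^2$ can be confirmed numerically by diagonalizing the $4\times4$ BdG matrix of Eq.~(\ref{eq:bdG_gen}) directly (a sketch assuming numpy; parameter values are illustrative):

```python
import numpy as np

def bdg_spectrum(kv: float, hr: float, delta: float, mu: float):
    """Eigenvalues of the 4x4 BdG Hamiltonian of Eq. (bdG_gen),
    with kv standing for hbar*k*v and real delta."""
    h = np.array([
        [ kv - mu, -hr,       delta,    0.0     ],
        [-hr,      -kv - mu,  0.0,      delta   ],
        [ delta,    0.0,     -kv + mu, -hr      ],
        [ 0.0,      delta,   -hr,       kv + mu ],
    ])
    return np.linalg.eigvalsh(h)

# Gap closes at k = 0 exactly when hr**2 == delta**2 + mu**2:
delta, mu = 0.3, 0.4
gap_closed = min(abs(bdg_spectrum(0.0, np.hypot(delta, mu), delta, mu)))
gap_open = min(abs(bdg_spectrum(0.0, 0.2, delta, mu)))
```

With $h_R=\sqrt{\Delta^2+\mu^2}$ the smallest eigenvalue vanishes, while away from that line the spectrum is fully gapped, consistent with the phase boundary stated above.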
Formally, the emergence of a topological superconducting phase is somewhat similar to the case of a topological insulator in proximity to an s-wave superconductor \cite{Alicea2012}; but because here it is the Zeeman splitting that produces the level crossing and spin-orbit interactions that open the gap, the restriction on the topological phase is set by the value of the spin-orbit coupling rather than by the Zeeman splitting. It is important to notice that for the chemical potential outside the superconducting gap, i.e., $\mu>h_R$, the induced superconducting order is always topological. Furthermore, topological superconductivity exists even in the absence of spin-orbit coupling, at $h_R=0$. Majorana fermions are localized at the contacts between the s-wave superconductor and the domain wall area. This Majorana system can be affected by non-magnetic disorder: in contrast to the chiral states (\ref{eq:eigen_w}), eigenstates of the Hamiltonian (\ref{eq:speffective}) in the presence of the spin-orbit coupling are subject to backscattering, similarly to edge states of topological insulators in the presence of Zeeman spin splitting. Backscattering must lead to a reduction of the domain wall conductance compared to that of domain walls formed by the chiral states (\ref{eq:eigen_w}), as supported by experimental data on the resistance on the flanks of the quantum Hall $\nu=2$ plateau \cite{Kazakov2017}. Thus, for Majorana modes emerging in a helical domain wall with the Fermi level positioned outside the spin-orbit gap in the domain wall but inside the quantum Hall gap in the adjacent 2D regions, impurity scattering becomes detrimental in much the same way as for chiral states in semiconducting wires. Majorana fermions are then expected to arise only in very high mobility quantum Hall samples with small impurity scattering. However, even in this case, due to the rather narrow interval of relevant energies, the 2D regions show finite conduction at the lowest temperatures, complicating the Majorana setting.
For chemical potential $\mu$ inside the spin-orbit gap, there exists a significant distinction between the present setting and Majorana modes in topological insulators in the presence of Zeeman splitting. In topological insulators, a superconductor is often assumed to cover the whole area above the edge states at the sample boundary, as opposed to a small contact at the side of the domain wall envisioned here. Correspondingly, a certain proximity pairing effect exists throughout the topological insulator when $\mu$ is inside the gap, and it is characterized by a trivial superconducting phase. In the present setting, only a very small area, defined by the penetration of the wavefunction into the insulating gapped domain wall near the contact, can bear some trace of superconductivity, while the rest of the domain wall is generally an insulator. However, as we shall see, impurities drastically change this situation. \section{Topological superconductivity generated by disorder} In order to obtain a well-controlled Majorana setting, the electron transport has to be conducted exclusively along the helical domain wall. To achieve this, the quantum Hall ferromagnetic transition should be tuned very close to $\nu=2$, where the bulk 2D conduction vanishes. In this case $\mu$ lies inside the spin-orbit gap, and conduction in wide regions is exponentially suppressed at low temperatures. However, in short helical domain wall channels conduction remains finite, and it was concluded that the in-gap impurity states provide the conduction path \cite{Kazakov2017}. We now show that in the presence of the superconducting proximity effect, the helical domain walls with in-gap states can be mapped onto a generalized disordered Kitaev chain \cite{kitaev_unpaired_2001,dasarma_chain_2012}, in which a topologically non-trivial superconducting order and Majorana bound states emerge.
To consider the superconducting proximity effect in helical domain walls with the Fermi level inside the spin-orbit gap in the spectrum of edge states, we first solve a general quantum-mechanical problem of impurity-induced states in a magnetic field in the presence of spin-orbit interactions. We then find the impurity states in the domain wall in the presence of the mean-field gradient $J_1$ of exchange interactions between electrons and Mn ions. \subsection{Effect of spin-orbit coupling on Landau level impurity states} Our goal here is to obtain analytic results for the impurity-induced states. We model potential variations from remote ionized impurities as short-range potentials with a bound state energy $E_b$ at zero magnetic field. We then solve the system in which the impurity potential is added to the Hamiltonian Eq.(\ref{model}). Short-range impurities in a quantizing magnetic field were considered in \cite{azbel_impurities_1_1993, azbel_impurities_2_1994}. It is convenient to use the wavefunctions of an unbound electron in a uniform magnetic field in the symmetric gauge, \begin{eqnarray} \label{eq:wf_gen}\psi_{n,m,s}^0(r,\varphi)&=&\sqrt{\frac{n!}{(n+m)!2^{m+1}\pi \ell}} e^{\left( i m \varphi+ \frac{i}{4} \frac{r^2}{\ell ^2} \sin(2\varphi)-\frac{r^2}{4\ell ^2}\right)} \left(\frac{r}{\ell}\right)^m L_{n}^m\left(\frac{r^2}{2 \ell^2}\right)\chi_{s}~, \end{eqnarray} corresponding to degenerate states with energy $E_{n,m,s}^0=\hbar\omega_c \left(n+\frac{1}{2}\right)+s V_z$, $s=\pm 1$. Here $V_z$ is the spin splitting that includes the band Zeeman effect and the mean-field exchange splitting due to the electron spin interaction with Mn spins, $L_n^m$ denotes the Laguerre polynomials, $r$ and $\varphi$ are the polar coordinates, $n\geq 0$ and $m\geq-n$ are integers, and $\chi_1=\chi_{\uparrow}=(1,0)^{T}$ and $\chi_{-1}=\chi_{\downarrow}=(0,1)^{T}$ are the spinors.
Following \cite{azbel_impurities_1_1993, azbel_impurities_2_1994}, we begin by considering a single impurity at the origin in the presence of the Landau quantization. The short-range impurity does not affect states with $m\neq0$, as their wavefunctions vanish at the origin; all states with $m\ne 0$ are still described by the wavefunctions given by Eq. (\ref{eq:wf_gen}) and the corresponding eigenenergies $E_{n,m\ne 0,s}^0$. The states with $m=0$ are bound by the impurity, and the energies and wavefunctions of these states are \begin{eqnarray} \label{eq:en_bound}E_{n,0,s}^0&=&\hbar \omega_c\left(n+\frac{1}{2}-\delta_n\right)+s V_z\\ \psi_{n,0,s}^0&=&\frac{|\Gamma(-n+\delta_n)|}{\sqrt{\pi \Psi'(-n+\delta_n)}}\frac{(-1)^n}{r} \label{eq:wf_bound}W_{n+\frac{1}{2}-\delta_n,0}\left(\frac{r^2}{2\ell^2} \right)\chi_s~, \end{eqnarray} where $W$ is the Whittaker function and $\Psi$ is the digamma function. In the high magnetic field limit the impurity split-off $\delta_n$ is given by \begin{equation} \label{eq:deltan} \delta_n= \left|\Psi(n+1)-\ln\frac{|E_b|} {\hbar\omega_c}\right|^{-1}~. \end{equation} For states with $\delta_n \ll1$, the digamma function in Eq. (\ref{eq:deltan}) is much smaller than the logarithmic part, and $\delta_n=1/\ln( \hbar\omega_c/|E_b|)\equiv \delta$ is independent of $n$. To simplify our analysis, we will use this approximation; our conclusions, however, are quite general, and this restriction is not crucial. We now include the Rashba Hamiltonian $H_R$ using the basis set that consists of the orthonormalized wavefunctions Eq. (\ref{eq:wf_gen}) for $m\neq 0$ and the wavefunctions determined by Eq. (\ref{eq:wf_bound}) for $m=0$. The non-zero matrix elements (neglecting terms of order $\mathcal{O}(\beta \delta/\hbar\omega_c )$ and $\mathcal{O}(\delta^2)$) are \begin{equation} \bra{\psi_{n,m-1,\uparrow}}H_R\ket{\psi_{n-1,m,\downarrow} } = \frac{\beta \sqrt{2 n}} {\ell} =\Delta_{so}\sqrt{n} ~, \end{equation} where $\beta=\gamma_R\mathcal{ E}_z$.
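Since the digamma function at a positive integer argument reduces to harmonic numbers, $\Psi(n+1)=-\gamma+\sum_{k=1}^{n}1/k$, the weak $n$-dependence of the split-off $\delta_n$ is easy to examine numerically. A stdlib-only sketch (the ratio $|E_b|/\hbar\omega_c=10^{-4}$ is an illustrative value, not an experimental one):

```python
import math

EULER_GAMMA = 0.5772156649015329

def psi_int(n):
    """Digamma at integer argument: psi(n+1) = -gamma + H_n."""
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n + 1))

def delta_n(n, Eb_over_hwc):
    """Impurity split-off delta_n of Eq. (eq:deltan), high-field limit."""
    return 1.0 / abs(psi_int(n) - math.log(Eb_over_hwc))

# For |E_b| << hbar*omega_c the split-off is nearly n-independent:
delta = 1.0 / math.log(1.0e4)          # the n-independent approximation
print([round(delta_n(n, 1.0e-4), 4) for n in range(4)], round(delta, 4))
# [0.1158, 0.1038, 0.0987, 0.0955] 0.1086
```

The few-percent spread across $n$ illustrates why replacing $\delta_n$ by the common value $\delta$ is a mild approximation for deep in-gap levels.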
$\Delta_{so}$ coincides with $h_R$ given by Eq. (\ref{eq:h_R}) at $n=1$ when $J_1$ is neglected, i.e., at $v=0$. The effect of spin-orbit interaction on Landau electron states and impurity-bound states in a quantizing magnetic field is two-fold. First, for all states except the lowest $n=0$ Landau level with spin down, spin-orbit interaction leads to an additional repulsion of the Landau states $(n, [m\ne -1,0], \uparrow)$ and $(n+1, [m\ne -1,0],\downarrow)$ and results in the energy series \begin{equation} \label{free} E_{n,m,s}=\hbar \omega_c n +s \sqrt{\frac{ 2 n\beta^2}{ \ell^2} +\left( \frac{1}{2}\hbar\omega_c-V_z\right)^2}~,~m\neq -1,0~, \end{equation} where $n\ge 1$, and $s=\pm1$ labels the spin states. The nondegenerate state with $n=0$ has energy $E_0= \hbar\omega_c/2 -V_z$ and is the ground state for $\Delta_{so}\ll \hbar\omega_c$ considered here. In Eq.(\ref{free}), for a pair of states at a given $n$, $s=1$ characterizes the state with the higher energy, while the state with the lower energy is characterized by $s=-1$. The state $(n,s=1)$ originates from the state $(n,\downarrow)$ in the absence of spin-orbit coupling, while the state $(n, s=-1)$ originates from the state $(n-1,\uparrow)$. Except for the exclusion of states with $m=0$ and $m=-1$, this is the Rashba spectrum for conduction electrons \cite{rashba60}. The energy separation $\delta E= E_{1,m,+1}- E_{1,m,-1}$, arising from the cyclotron splitting as well as the spin splitting due to Zeeman, exchange and spin-orbit interactions, is given by \begin{equation} \delta E= 2 \sqrt{\frac{ 2\beta^2}{ \ell^2} +\left( \frac{1}{2}\hbar\omega_c-V_z\right)^2}. \end{equation} At Zeeman energy $V_z=g^*\mu_B B+J_0=\hbar \omega_c/2$, the energy states $(n, s=1)$ and $(n, s=-1)$, and in particular the $(n=1, s=1)$ and $(n=1,s=-1)$ states, are degenerate in the absence of spin-orbit interactions, but are split in its presence, with energies $E_{\pm}=\hbar\omega_c\pm \Delta_{so}$.
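At the compensation point $V_z=\hbar\omega_c/2$ the square root in Eq. (\ref{free}) reduces to $\sqrt{2n}\,\beta/\ell$, so the $n=1$ doublet is split by exactly $2\Delta_{so}$. A short numerical check (units $\hbar\omega_c=1$; the value of $\beta/\ell$ is illustrative):

```python
import math

def E_landau(n, s, hwc, beta_over_l, Vz):
    """Rashba-split Landau energies of Eq. (free), valid for m != -1, 0."""
    return hwc * n + s * math.hypot(math.sqrt(2 * n) * beta_over_l,
                                    hwc / 2 - Vz)

hwc, b = 1.0, 0.05
Dso = math.sqrt(2) * b                 # Delta_so = sqrt(2) * beta / ell
Vz = hwc / 2                           # compensation point
gap = E_landau(1, +1, hwc, b, Vz) - E_landau(1, -1, hwc, b, Vz)
print(abs(gap - 2 * Dso) < 1e-12)      # True: splitting equals 2*Delta_so
```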
Second, spin-orbit interaction couples the $(n-1, m=0, \uparrow)$ impurity-bound state with the $(n, m=-1, \downarrow)$ Landau level state, as well as the $(n, m=0, \downarrow)$ impurity-bound state with the $(n-1, m=-1,\uparrow)$ Landau level state. Such coupling introduces level repulsion within these pairs of coupled states, which results in the splitting of the $m=-1$ levels off the angular-momentum--degenerate Landau levels. Therefore, we now have two bound states for each spin-resolved Landau level, defined by two linear combinations of $m=0$ and $m=-1$ states; the exception is a single bound state associated with the lowest $(n=0,\downarrow)$ Landau level. The energies of the two series of bound states at $n>0$ are given by \begin{eqnarray} \label{eq:e_so_bound_1_bulk} E_{n,\varsigma}^{+}&=&\hbar \omega_c \left( n- \frac{\delta}{2} \right)+ \varsigma \sqrt{\left(\hbar \omega_c \frac{1-\delta }{2}-V_z\right)^2+\frac{ 2 n\beta^2}{ \ell^2}} ~, \\ \label{eq:e_so_bound_2_bulk} E_{n,\varsigma}^{-}&=&\hbar \omega_c \left( n- \frac{\delta}{2} \right)+ \varsigma \sqrt{\left(\hbar \omega_c \frac{1+\delta}{2}-V_z\right)^2+\frac{ 2 n\beta^2}{ \ell^2}} ~, \end{eqnarray} where $\varsigma=\pm 1$ denotes the two different superpositions of $m=0$ and $m=-1$ states for a given $n$ in each of the series. Impurity-bound states $E_{n,\varsigma}^{+}$ originate from the $(n, m=-1, \downarrow)$ Landau level states, and states $E_{n,\varsigma}^{-}$ originate from the $(n-1, m=-1, \uparrow)$ Landau level states. The electron and impurity-bound energy levels in a quantizing magnetic field in a quantum well in the presence of Rashba interactions are shown in Fig. \ref{FigSpectrumcross-section}. As follows from Eqs.
(\ref{eq:e_so_bound_1_bulk},\ref{eq:e_so_bound_2_bulk}), the splitting of levels with opposite $\varsigma$ in the same series, e.g., $\delta E^*$ in Fig.\ref{FigSpectrumcross-section}, is larger than the splitting $\delta E$ between coupled Landau levels, due to the additional level repulsion caused by the impurity split-off $\hbar\omega_c\delta$. \begin{figure} \includegraphics[width=0.95\textwidth]{FigBulk.pdf} \caption{Electron energy spectrum in a quantizing magnetic field in the presence of an attractive impurity center and spin-orbit interactions. Splitting of the $m$-degenerate $(n=1, s=1)$ and $(n=1, s=-1)$ levels is caused by the cyclotron splitting and the spin splitting due to Zeeman, exchange and spin-orbit interactions. Each impurity results in two energy levels given by Eqs. (\ref{eq:e_so_bound_1_bulk},\ref{eq:e_so_bound_2_bulk}) due to the two linear combinations of $m=0$ and $m=-1$ states for each of the spin-resolved Landau levels. The impurity-bound state $E_{1,1}^{+}$ is shown in blue, $E_{1,1}^{-}$ in red, $E_{1,-1}^{+}$ in magenta and $E_{1,-1}^{-}$ in green. Only one impurity-induced state, shown in black, is present for the $(n=0, \downarrow)$ Landau level, which is not affected by spin-orbit coupling.} \label{FigSpectrumcross-section} \end{figure} \begin{figure} \includegraphics[width=0.95\textwidth]{FigCompensation.pdf} \caption{Electron energy spectrum in a quantizing magnetic field in the presence of an attractive impurity center and spin-orbit interactions, in the case of compensation between the cyclotron splitting and the sum of the Zeeman and exchange interactions, leading to the degeneracy of the $(n=0, \uparrow)$ and $(n=1, \downarrow)$ Landau levels given by Eq.(10). The splitting $2\Delta_{so}$ of the $(n=1, s=1)$ and $(n=1, s=-1)$ unbound states is due to the Rashba coupling only. Impurity levels from the series + and - in Eqs.
(\ref{eq:e_so_bound_1_bulk},\ref{eq:e_so_bound_2_bulk}) become degenerate, so that $E_{1,1}^{+}=E_{1,1}^{-}$ and $E_{1,-1}^{+}=E_{1,-1}^{-}$ (shown as the coincidence of blue and red and the coincidence of magenta and green). The splitting $\delta E^*$ between the pairs of degenerate levels is due to both the Rashba coupling and the level repulsion caused by the impurity split-off $\hbar\omega_c\delta$.} \label{FigSpectrumCompensation} \end{figure} At Zeeman energy $V^*_z=g^*\mu_B B+J_0=\hbar \omega_c/2$, levels of the series $ E_{n,\varsigma}^{+}$ and $ E_{n,\varsigma}^{-}$ become degenerate. In particular, the doubly degenerate level \begin{equation} \label{main} E^{*}_{n=1, +}=\hbar\omega_c\left(1-\frac{\delta}{2}\right)+\frac{1}{2} \sqrt{4h_R^2+(\hbar\omega_c\delta)^2} \end{equation} corresponding to $\varsigma= 1$ lies in between the $(n=1,s=1)$ and $(n=1,s=-1)$ levels, and the doubly degenerate level at $\varsigma=- 1$ with energy \begin{equation} E^{*}_{n=1, -}=\hbar\omega_c\left(1-\frac{\delta}{2}\right)-\frac{1}{2} \sqrt{4h_R^2+(\hbar\omega_c\delta)^2} \end{equation} lies below the $(n=1,s=-1)$ level. Remarkably, the degenerate impurity-bound states with energy $E^{*}_{n=1, +}$ have opposite spins, and the states $E^{*}_{n=1, -}$ also have opposite spins. This is a consequence of the degeneracy between the $(n-1,\uparrow)$ and $(n,\downarrow)$ Landau levels when spin-orbit interactions are not included.
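The stated ordering of the doubly degenerate impurity levels relative to the unbound $(n=1,s=\pm1)$ levels $\hbar\omega_c\pm\Delta_{so}$ holds for any $\delta>0$. A quick numerical check in units $\hbar\omega_c=1$, with illustrative $\delta$ and $h_R$ (taking $h_R=\Delta_{so}$, i.e., $v=0$):

```python
import math

hwc, delta, hR = 1.0, 0.1, 0.05       # illustrative values

def E_star(sign):
    """Doubly degenerate impurity levels E*_{n=1,+-} of Eq. (main)."""
    return hwc * (1 - delta / 2) + sign * 0.5 * math.hypot(2 * hR, hwc * delta)

E_up, E_dn = hwc + hR, hwc - hR       # unbound (n=1, s=+-1) levels at v=0
print(E_dn < E_star(+1) < E_up)       # True: E*_{1,+} sits inside the gap
print(E_star(-1) < E_dn)              # True: E*_{1,-} lies below it
```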
Wavefunctions of the degenerate Landau levels and impurity split-off states in the bulk in the presence of spin-orbit interactions for arbitrary $n>0$ and $V_z=V^*_z$ can be written as \begin{eqnarray} \label{eq:wf_so_gen} \psi_{n,m,\varsigma}&=&\frac{1}{\sqrt{2}}\left( \psi^0_{n,m,1}-\varsigma \psi^0_{n-1,m+1,-1} \right) ~,~m\neq -1,0~,\\ \label{eq:wf_so_bound} \psi^1_{n,m,\varsigma}&=&\frac{1}{\sqrt{2}} \left( \varrho_{\beta,\delta}^{(-1)^m\varsigma,n} \psi^0_{n,m,1}-\varsigma \varrho_{\beta,\delta}^{(-1)^{m+1}\varsigma,n} \psi^0_{n-1,m+1,-1} \right) ~,~m=-1,0~, \end{eqnarray} where the wavefunctions $\psi^0_{n,m,s}$ are defined by Eq.(\ref{eq:wf_gen}), and \begin{equation} \varrho_{\beta,\delta}^{\pm 1,n}= \sqrt{1\pm\frac{ \hbar \omega_c \ell \delta} {\sqrt{\left( \hbar \omega_c \ell \delta\right)^2+ 8 n\beta^2 }}}. \end{equation} For $n=0$, the eigenstates are defined by Eqs. (10, 11): $\psi_{0,m,1}^1=\psi_{0,m,1}^0$ and $E_{0,m,1}^1=E_{0,m,1}^0$. \subsection{Impurity states in a helical domain wall} \begin{figure} \includegraphics[width=0.9\textwidth]{FigChannel.pdf} \caption{Electron spectrum in the presence of impurities in a helical domain wall of width $W$. In the presence of compensation between the cyclotron energy and the sum of the Zeeman and exchange energies for electrons in the helical domain wall, the splitting between the red-blue and green-magenta doublets of impurity levels in the center of the channel is due to spin-orbit interactions. Electron edge states there are also separated only by the spin-orbit coupling. Red and blue levels (and green and magenta levels) are separated in energy due to the angular momentum splitting arising from the effective spin-dependent electric field $J_1$. One impurity doublet (red-blue) falls into the spin-orbit gap between the edge states arising from the $(n=1, s=1)$ and $(n=1, s=-1)$ 2D Landau states (dashed lines). The other doublet (green-magenta) is below the spin-orbit gap.
Only a single non-degenerate level (black solid segment) is split off the $(n=0, \downarrow)$ Landau level shown by the lower black dashed line.} \label{FigChannellSpectrum} \end{figure} So far we have discussed the bulk Landau levels and the bulk impurity states in the presence of spin-orbit coupling. In the presence of the spin-dependent electric field $J_1$ in a narrow range of the coordinate $x$, which leads to the formation of a helical domain wall, these bulk states change in two ways. First, the Landau levels with multiple degeneracy in angular momenta (\ref{eq:wf_so_gen}) form linear combinations that correspond to edge-like states (\ref{eq:eigen_e_w}), which are gapped by spin-orbit interactions and described by the effective Hamiltonian (\ref{eq:speffective}). The two doublets of impurity states also evolve, Fig.\ref{FigChannellSpectrum}: the doublet with $\varsigma=+1$ falls into the gap between the spin-orbit-split edge-like states, and the doublet with $\varsigma=-1$ lies below the spin-orbit gap. The second effect of the effective spin-dependent electric field $J_1$ is the angular momentum splitting of the in-gap impurity states. The angular momentum splitting of the doubly degenerate level $E^{*}_{1, +}$ (\ref{main}), for an impurity centered at the origin in the area of the helical domain wall, is given by \begin{equation} \label{eq:en_so_v_bound} L= \frac{ 2\hbar^2 v^2}{ \ell \sqrt{ \left(\hbar \omega_c \delta \ell \right)^2+8\beta^2} }. \end{equation} The angular momentum splitting $L$ arises in second order in the effective spin-dependent electric field $J_1$, and is therefore quadratic in $v$. \subsection{Chain of impurity states} \begin{figure} \includegraphics[width=0.8\textwidth]{FigSchematicChannel_0.pdf} \vspace{-3cm} \caption{Schematic view of the conducting channel with proximity-induced superconductivity (blue contact) and attractive impurity potentials (red).} \label{FigChannell} \end{figure} Our goal is to study a chain of in-gap states, Fig.
\ref{FigChannell}. For impurity potentials centered at ${\bf{R}}_k=(X_k,Y_k)$, their separation along the $y$-direction is assumed to be much larger than the width of the helical domain wall. Therefore the chain can be considered one-dimensional, with ${\bf{R}}_k=(X_k=0,Y_k)$. Also, in a high magnetic field $|{\bf{R}}_k-{\bf{R}}_{k-1}|\gg \ell$. We will assume that impurity centers may have slightly different binding energies and therefore different impurity split-offs $\delta$, e.g., because of their varying $z$-coordinate in a doping layer and therefore varying separation from the quantum well. We will denote the split-off for an impurity centered at ${\bf{R}}_k$ by $\delta_k$. The angular momentum splittings $L_k$ for impurity sites centered at ${\bf{R}}_k$ also differ from site to site: \begin{equation} L_k= \frac{2\hbar^2 v^2}{ \ell \sqrt{ \left(\hbar \omega_c \delta_k \ell\right)^2+8 \beta^2} }. \end{equation} The wavefunctions of electrons bound to a single impurity are given by \begin{equation} \label{eq:wavechain} \psi_m^{(k)}({\bf{r}})=\psi_{1,m,-1} ({\bf{r}}-{\bf{R}}_k)e^{iX_ky/\ell^2}=\psi_{1,m,-1}({\bf{r}}-{\bf{R}}_k). \end{equation} Considering a chain, we orthogonalize these wavefunctions, assuming that only the overlap between wavefunctions of electrons centered on nearest neighbors is essential. The orthonormalized wavefunctions are \begin{eqnarray} \ket{\tilde \psi_{m}^{(k)}}&&\simeq\ket{\psi_m^{(k)}} - \frac{1}{2}\sum_{m_1=-1}^0\ket{\psi_{m_1}^{(k+1)}} S^{k+1,k}_{m_1,m}\nonumber\\ &&-\frac{1}{2}\sum_{m_2=-1}^0\ket{\psi_{m_2}^{(k-1)}} S^{k-1,k}_{m_2,m}~,\hspace{3mm} m=-1,0~, \end{eqnarray} where the overlap integrals of the electron wavefunctions on isolated centers located at ${\bf{R}}_{k^{\prime}}$ and ${\bf{R}}_k$ are given by \begin{equation} S^{k^{\prime},k}_{m^{\prime},m}=\bra{\psi_{m^{\prime}}^{(k^{\prime})}}\ket{\psi_{m}^{(k)}}.
\end{equation} We seek the wavefunctions of the Hamiltonian of the chain \begin{equation} H=\frac{1}{2m^*}\left(-i\hbar \nabla-\frac{e{\bf{A}}}{c}\right)^2+V_z \sigma_z + \sum_k U( {\bf r},{\bf{R}}_k) \end{equation} in the form \begin{equation} \Psi= \sum_{m,k} a_{mk} \ket{\tilde \psi_{m}^{(k)}}~,\hspace{3mm} m=-1,0~. \end{equation} Then the effective Hamiltonian $H_{mk,m^{\prime}k^{\prime}}$ acting on the coefficients $a_{mk}$ is defined by the renormalized single-impurity site energies $\tilde{E}_{mk} = \bra{\tilde\psi_{m}^{(k)}} H \ket{\tilde\psi_{m}^{(k)}}$ and the tunneling matrix elements $w^{m^{\prime},m}_{k^{\prime},k}=\bra{\tilde\psi_{m^{\prime}}^{(k^{\prime})}} H \ket{\tilde\psi_{m}^{(k)}}$. The leading contribution to tunneling arises from the matrix elements \begin{eqnarray} \label{eq:mat_el_tun_s} w_{k+1,k}^{m,m} &\simeq& \tilde \delta_{k+1,k} (-1)^{m+1} P_{k+1,k}^ {\beta} \frac{1}{4} \left(\frac{Y_{k+1,k}}{\sqrt{2}\ell}\right)^{2m+2} e^{-\frac{Y_{k,k+1}^2} {4\ell^2}}~,\\ \label{eq:mat_el_tun_o} w_{k+1,k}^{-1-m,m} &\simeq& \delta^{d}_{k+1,k} Q_{k+1,k}^{\beta} \frac{1}{4} \frac{Y_{k+1,k}}{\sqrt{2}\ell} e^{-\frac{Y_{k,k+1}^2}{4\ell^2}}~, \end{eqnarray} where \begin{eqnarray} P_{k+1,k} ^{\beta}&=& 1-\frac{\hbar \omega_c \ell \tilde \delta_{k+1,k}}{\sqrt{( \hbar \omega_c \tilde \delta_{k+1,k} \ell)^2+8 \beta^2}}~,\\ Q_{k+1,k}^{\beta}&=& \frac{\beta}{\sqrt{( \hbar \omega_c \tilde \delta_{k+1,k} \ell)^2+8 \beta^2}}~, \end{eqnarray} $\tilde \delta_{k+1,k}=(\delta_{k}+\delta_{k+1})/2$ is the average split-off of the neighboring impurity centers, $\delta^{d}_{k+1,k}=\delta_{k}-\delta_{k+1}$, and $Y_{k+1,k}=Y_{k+1}-Y_{k}$. These expressions are obtained by expanding the overlap matrix elements and keeping only the leading terms in $1/Y_{k+1,k}$, $e^{-Y_{k+1,k}^2}$ and $ \delta^{d}$.
\subsection{Superconducting coupling} We project the electron interactions due to the proximity-induced superconducting pairing, $H_{\Delta}=\Delta \int d{\bf r}\, {\hat\psi^{\dagger}_{\uparrow} \hat\psi^{\dagger}_{\downarrow}}+ h.c.$, onto the Hilbert space of the bound states $\psi^{(k)}_{m}$. As we are interested here in a single superconducting contact to a quantum Hall system, the phase of the order parameter is unimportant, and we take $\Delta>0$ without loss of generality. The effective Hamiltonian for the superconducting pairing with a chain of impurity states then reads \begin{equation} \label{eq:gen Delta}H_{\Delta}\simeq \sum_{k}\tilde\Delta_k c_{k,0}^{\dagger} c_{k,-1}^{\dagger}+ \sum_{m,m'=-1,0}\Delta_{k,k+1}^{m,m'} c_{k,m}^{\dagger} c_{k+1,m'}^{\dagger} + h.c.~, \end{equation} where \begin{eqnarray} \label{eq:eff_d_single} \tilde\Delta_k&=&\Delta\frac{1-\gamma_0\delta_k}{\sqrt{8}}\\ \label{eq:eff_d_same} \Delta_{k,k+1}^{m,m}&=& \Delta i (4m+3)\left(\frac{Y_{k,k+1} }{\sqrt{2} \ell} \right) ^{2m+1} Q_{k+1,k}^{\beta}e^{-\frac{Y_{k+1,k}^2}{8 \ell^2}}\\ \label{eq:eff_d_diff} \Delta_{k,k+1}^{-1-m,m}&=& \Delta (-1)^m (4m+3) \left(\frac{Y_{k,k+1}} {\sqrt{2} \ell}\right) ^{2} \left(P_{k+1,k}^{\beta}+m-\frac{1}{2}\right)e^{-\frac{Y_{k+1,k}^2}{8 \ell^2}}~, \end{eqnarray} $\gamma_0\simeq 1.89258$ is a numerical constant, and $m=-1,0$. \subsection{Single impurity site in the presence of superconducting pairing} In order to address topological superconductivity and Majorana fermions in a chain of impurity states, we first consider a single site in the presence of superconducting coupling within the Bogoliubov-de Gennes formalism. We restrict the Hilbert space to $\psi^{1}_{1,0,-1}$ and $\psi^{1}_{1,-1,-1}$ near the impurity site $k$ with coordinates ${\bf{R}}_k$. We denote the electron creation operators for these states by $c_{k,+1}^{\dagger}$ and $c_{k,-1}^{\dagger}$.
Then the effective Hamiltonian is given by \begin{equation} H_{k}=\sum_{i,j}{(\varepsilon_k+L_k\sigma_z)_{i,j} \hat c_{k, i}^{\dagger} \hat c_{k, j}+i \tilde\Delta_k \hat c_{k, i}^{\dagger} \left(\sigma_{y}\right)_{i,j} \hat c_{k, j}^{\dagger}-i \tilde\Delta_k \hat c_{k, i} \left(\sigma_{y}\right)_{i,j} \hat c_{k, j}}~, \end{equation} where the on-site energies are \begin{equation} \varepsilon_k=-\hbar \omega_{c} \frac{\delta_{k}}{2} + \frac{1}{2} \sqrt{ \left(\hbar \omega_{c} \delta_{k} \right)^2+8 \left(\frac{\beta}{\ell}\right)^2}-\mu~, \end{equation} $\mu$ is the chemical potential, and $\tilde \Delta_k$ is defined by Eq.~(\ref{eq:eff_d_single}). We diagonalize this Hamiltonian using the Bogoliubov transformation \begin{equation} \label{eq:a_bog_pm} \hat a_{k,\pm} = \frac{e^{i\frac{\pi}{4}}}{\sqrt{2}}\left(\pm \sqrt{1+ \frac{\varepsilon_k}{\sqrt{\varepsilon_k^2+|\tilde\Delta_k|^2}}} \, \hat c_{k,\pm 1}+ \sqrt{1-\frac{\varepsilon_k} {\sqrt{\varepsilon_k^2+|\tilde \Delta_k|^2}}} \, \hat c_{k,\mp 1}^{\dagger}\right), \end{equation} which gives the eigenvalues $\mu_{k}\pm L_k$, where \begin{equation} \mu_{k}=\sqrt{\tilde \Delta_k^2+\varepsilon_{k}^2}. \end{equation} \subsection{Topological superconductivity in a chain of impurity-bound states} We now study a chain of impurity-bound sites placed at ${\bf{R}}_{k}=(0,Y_k)$. We denote ${\bf{R}}_{k,k+1}={\bf{R}}_{k+1} -{\bf{R}}_k$. The Hamiltonian of the chain is defined by the single-site energies, the superconducting coupling and the inter-site tunneling: \begin{equation} \label{eq:all_chain} H_{c}=\sum_k H_k+\sum_{k,i,j} w_{k+1,k}^{i,j} \hat c_{k+1,i}^{\dagger} \hat c_{k,j}+\sum_{k,i,j} \Delta_{k+1,k}^{i,j} \hat c_{k+1,i}^{\dagger} \hat c_{k,j}^{\dagger} +h.c~, \end{equation} where $w_{k+1,k}^{i,j}$ are given by Eqs. (\ref{eq:mat_el_tun_s}), (\ref{eq:mat_el_tun_o}) and $\Delta_{k+1,k}^{i,j}$ are given by Eqs. (\ref{eq:eff_d_same}) and (\ref{eq:eff_d_diff}).
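The single-site eigenvalues $\mu_k\pm L_k$ can be cross-checked by diagonalizing the corresponding $4\times 4$ BdG matrix in the Nambu basis $(c_{k,+1},c_{k,-1},c^{\dagger}_{k,+1},c^{\dagger}_{k,-1})$. A minimal sketch with illustrative $\varepsilon_k$, $L_k$, $\tilde\Delta_k$, assuming the pairing-amplitude normalization for which $\mu_k=\sqrt{\varepsilon_k^2+\tilde\Delta_k^2}$:

```python
import numpy as np

def single_site_bdg(eps, L, Dt):
    """BdG matrix of one impurity site; Dt is the on-site pairing."""
    h = np.diag([eps + L, eps - L])          # particle block
    d = np.array([[0.0, Dt], [-Dt, 0.0]])    # antisymmetric pairing block
    return np.block([[h, d], [-d, -h]])

eps, L, Dt = 0.7, 0.2, 0.5                   # illustrative values
E = np.sort(np.linalg.eigvalsh(single_site_bdg(eps, L, Dt)))
mu = np.hypot(eps, Dt)                       # mu_k = sqrt(eps^2 + Dt^2)
print(np.allclose(E, np.sort([mu + L, mu - L, -mu + L, -mu - L])))  # True
```

The lower branch $\mu_k-L_k$ is the one retained in the projection onto the Kitaev chain below.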
Analogously to \cite{Akhmerov_chain_2013}, we project the Hamiltonian $H_c$ onto the subspace of fermionic excitations given by $a_{k,-}$ on each site. These excitations are defined by Eq.(\ref{eq:a_bog_pm}). The effective Hamiltonian is then \begin{equation} \label{eq:gen_kitaev} H=\sum_k\left[\left(\mu_{k}-L_k\right) \hat a_{k,-}^{\dagger} \hat a_{k,-}+t_k \hat a_{k+1,-}^{\dagger} \hat a_{k,-}+ \bar\Delta_k \hat a_{k+1,-}^{\dagger} \hat a_{k,-}^{\dagger}\right]+h.c.~, \end{equation} where in the leading approximation \begin{eqnarray} t_k&=&\Delta \frac{\sqrt{2}}{4}\left(\frac{Y_{k+1,k}}{\sqrt{2}\ell}\right)^2 r_{k,\delta}\sqrt{1+r_{k,\delta}^2}\left(P_{k+1,k}^{\beta}-\frac{3}{4}\right) e^{-\frac{Y_{k+1,k}^2}{8 \ell^2}}~,\\ \bar\Delta_k&=&\Delta \frac{3}{16}\left(\frac{Y_{k+1,k}}{\sqrt{2}\ell}\right)^3 \sqrt{1+r_{k,\delta}^2} \left(\sqrt{1+r_{k,\delta}^2}-1\right)Q_{k+1,k}^{\beta}e^{-\frac{Y_{k+1,k}^2}{8 \ell^2}} ~, \end{eqnarray} $\mu_{k+1,k}=\left(\mu_k+\mu_{k+1} \right)/2$ and $r_{k,\delta}=\tilde\Delta_k/\mu_{k+1,k}$. The term proportional to $\bar\Delta_k$ constitutes a $p$-type superconducting pairing. We have therefore arrived at a generalized version \cite{dasarma_chain_2012,Akhmerov_chain_2013} of the Kitaev chain \cite{kitaev_unpaired_2001}. Except possibly for the $\left(P_{k+1,k}^{\beta}-\frac{3}{4}\right)$ factor appearing in the definition of the effective tunneling amplitude $t_k$, the amplitudes $t_k$ and the superconducting pairing $\bar\Delta_k$ do not change sign from site to site. $t_k$ vanishes when $\hbar\omega_c\tilde\delta_{k+1,k}=2\sqrt{2}\beta/(\sqrt{15}\,\ell)$. However, $\mu$ can be adjusted so that $t_k>0$ in a chain of impurity sites. \begin{figure} \includegraphics[width=0.7\textwidth]{FigSpectrumw1.pdf} \caption{Spectra of 100 realizations of a chain of 5 localized states with superconducting coupling. The total length of the chain is 200 nm. Bound-state energies lie in $[\mu-k_BT,\mu+k_BT]$, where $T=0.1 {\rm{K}}$.
Minimal separation of the centers of the localized states is 25 nm, $\Delta=0.1\,{\rm{meV}}$, $\gamma_R=0.44\,{\rm{meV\,nm}}$, $\mu=32\,{\rm{\mu eV}}$. In-gap states that disappear with increasing velocity (transition from red to green) signify the existence of the Majorana bound states (red).} \label{FigSpectrumw1} \end{figure} We thus arrive at the realization of a sign-ordered Kitaev chain \cite{dasarma_chain_2012} that supports two localized Majorana modes at its ends if $|\mu_k-L_k|<\max(t_{k+1},\bar\Delta_{k+1})$. Although this criterion creates the impression that it could be satisfied even at $L_k=0$, it is important to keep in mind that a non-zero $L_k>k_BT$ is an important factor that prevents fermion doubling. $L_k$ separates the two angular momentum/spin species of in-gap states proximity-coupled to a superconductor. This constitutes a difference between the setting of in-gap Majorana modes and Majorana modes in a topological insulator. In a topological insulator proximity-coupled to a superconductor, Majorana modes can emerge at zero Zeeman splitting because fermion doubling in a topological insulator is removed by the chiral character of the spin edge states. However, the in-gap electron states in our setting do not propagate and are not characterized by a wavevector. In the absence of the gradient $J_1$ defining the velocity $v$ of the edge states, the states are degenerate in angular momentum and spin simultaneously, which leads to fermion doubling. However, the gradient of exchange interactions results in the angular momentum splitting $L_k$ that removes fermion doubling and leads to the emergence of topological superconductivity. In Fig. \ref{FigSpectrumw1}, we present numerically calculated spectra of a short chain of localized states with proximity-induced superconducting coupling. At small $J_1$ and $v$ ($\hbar v/\ell<0.3$) the chain is characterized by zero modes, but for larger $J_1$ and $v$ ($\hbar v/\ell > 0.3$) the states inside the superconducting gap disappear.
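The appearance of Majorana end modes in such a generalized chain can be illustrated by diagonalizing the BdG matrix of an open Kitaev chain with site-dependent parameters. This is a toy sketch, not the parameters of the expressions above: the randomly drawn $t_k$, $\bar\Delta_k$ and on-site energies (playing the role of $\mu_k-L_k$) are purely illustrative:

```python
import numpy as np

def kitaev_bdg(onsite, t, Dp):
    """BdG matrix of an open Kitaev chain with site-dependent parameters."""
    n = len(onsite)
    h = np.diag(np.asarray(onsite, dtype=float))
    d = np.zeros((n, n))
    for k in range(n - 1):
        h[k, k + 1] = h[k + 1, k] = t[k]          # hopping
        d[k, k + 1], d[k + 1, k] = Dp[k], -Dp[k]  # p-wave pairing
    return np.block([[h, d], [-d, -h]])

rng = np.random.default_rng(0)
n = 30
t  = 1.0 + 0.05 * rng.standard_normal(n - 1)   # sign-ordered: all t_k > 0
Dp = 1.0 + 0.05 * rng.standard_normal(n - 1)
onsite = 0.1 * rng.standard_normal(n)          # |mu_k - L_k| << t_k: topological
E = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(onsite, t, Dp))))
print(E[1] < 1e-8 < E[2])   # True: two zero modes well below the bulk gap
```

Increasing the on-site energies beyond the hopping scale removes the two near-zero eigenvalues, mimicking the loss of in-gap states at large $v$ in Fig. \ref{FigSpectrumw1}.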
The condition $\hbar v/\ell \approx 0.3$ corresponds to the topological phase transition. Tuning the angular momentum splitting, we can bring the system in and out of the topological phase, creating and destroying Majorana modes at the ends of the chain. $L_k$, in contrast to the settings described in \cite{dasarma_chain_2012, Akhmerov_chain_2013}, is unrelated to the value of the magnetic field, but is defined by the velocities $v$ of the gapped edge channels, which are controlled by electrostatic gates. \begin{figure} \includegraphics[width=\textwidth]{MajoranaGenMove.pdf} \caption{Creating and moving a Majorana pair: a) Setting voltage differences between top and bottom gates to $V+\delta V$ yields trivial superconductivity in all domain walls; b) Setting the voltage to $V$ on the second bottom gate drives the system into the topological phase in the domain wall above that gate and induces the Majorana modes at the ends of the domain wall; c) Setting the voltage to $V$ on the third bottom gate extends the topological region to the domain wall above that gate and moves one of the Majorana modes to a new boundary between the topological and non-topological states; d) Setting the voltage to $V+\delta V$ on the second bottom gate moves the first of the Majorana modes to the right. Blue areas are s-wave superconductors, yellow areas are top gates. The difference of voltages between two neighboring yellow gates defines the presence of a domain wall and the type of the superconducting order parameter. Red domain walls are in the topological superconducting state, and green domain walls are non-topological superconductors. Grey areas correspond to voltage differences between neighboring gates insufficient to create a domain wall.
} \label{fig:Majorana_move} \end{figure} \begin{figure} \includegraphics[width=0.8\textwidth]{MExch.pdf} \caption{Exchanging a pair of Majorana modes using the method of moving the Majorana pair.} \label{fig:exch} \end{figure} \begin{figure} \includegraphics[width=0.9\textwidth]{MFuse.pdf} \caption{Fusion and recreation of Majorana modes using the method of moving the Majorana pair.} \label{fig:fuse} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{MBraid.pdf} \caption{Braiding of Majorana modes achieved using the method of moving the Majorana pair and a $T$-junction of domain walls in the topological superconducting state. } \label{MBraid} \end{figure} \subsection{Control of Majorana modes} Numerical simulations are performed for the heterostructures studied in \cite{Kazakov2017}, assuming $\Delta=0.1$ meV, $\gamma_R=0.44$ meV$\cdot$nm, and $\mu=32$ $\mu$eV. We estimate that the voltage difference between the gates $V=V_1-V_2\sim 129$ mV corresponds to the topological condition $\hbar v/\ell < 0.3$ with Majorana fermions formed at the ends of the chain, while an additional voltage $\delta V\sim 1$ mV (total voltage difference $V+\delta V$) brings the system to the normal superconducting proximity state. Thus, using electrostatic gates, we can move Majorana modes, and create and annihilate them. Furthermore, reduction of the difference of voltages on the electrostatic gates on the sides of the domain wall area to below 10 mV (in theory, to zero) erases the domain wall altogether, and can also serve as an instrument for manipulating a reconfigurable network of topological superconductors. Figs. \ref{fig:Majorana_move}, \ref{fig:exch}, \ref{fig:fuse} and \ref{MBraid} demonstrate the creation, moving, exchange, fusion and braiding of Majorana modes. In these figures, blue areas are s-wave superconductors and yellow areas are top gates.
The difference of voltages between two neighboring yellow gates defines the presence of a domain wall and the type of the superconducting order parameter. Red domain walls are in the topological superconducting state, and green domain walls are non-topological superconductors, while grey areas correspond to voltage differences between neighboring gates insufficient to create a domain wall. Braiding of Majorana fermions is achieved using a structure containing a $T$-junction of domain walls in the topological superconducting state, Fig. \ref{MBraid}. By moving Majorana modes, two pairs of such modes are brought to a $T$-junction as in panel d). Then a $T$-junction link is cut by increasing the voltage by $\delta V$ on the gate controlling that link. Gate voltages are then brought back to the initial configuration. We underscore that all manipulations are expected to be produced by voltage pulses. The calculated parameters and requirements for the scheme are realistic and feasible for experiments in the near future. We note that in the schemes of Figs.~\ref{fig:Majorana_move}-\ref{MBraid}, the superconducting pairing potential $\Delta$ is assumed spatially uniform in the domain wall areas. In real settings with superconducting contacts on the sides of the domain walls, the induced superconducting gap is expected to be spatially dependent, decreasing from the contact area into the sample. A spatially dependent $\Delta(y)$ will re-define the boundaries between topological and non-topological superconducting regions. These boundaries, and the Majorana modes residing at them, can be moved with adjusted gate voltages, when the applied gate voltage exceeds the critical value in an area with lower $\Delta$ but is smaller than the critical value in the area closer to the contact. \section{Conclusion} In this work we considered Majorana modes in a hybrid system of an s-superconductor and a filling factor $\nu=2$ quantum Hall ferromagnet domain wall.
We discovered that when the Fermi level is pinned to a gap between anticrossing spin-orbit coupled edge states, the impurity disorder in short domain walls generates proximity-induced topological superconductivity and the Majorana zero modes. Thus, in this case not only is topological superconductivity robust against disorder, but it emerges exclusively due to impurity disorder. Hybrid structures of an s-superconductor with fractional quantum Hall edge states were suggested as a possible realization of parafermions, which could bring such settings closer to fault-tolerant quantum computing. Quantum Hall ferromagnet domain walls at fractional filling factors proximity-coupled to an s-type superconductor can also potentially produce parafermions, making studies of helical domain walls an important area of the field of topological quantum computing. \begin{acknowledgments} G.S., A.K., L.P.R., T.W. and Y.B.L-G. acknowledge support by the Department of Defense Office of Naval Research Award N000141410339. T.W. was partially supported by the National Science Centre (Poland) through Grant No. DEC-2012/06/A/ST3/00247 and by the Foundation for Polish Science through the IRA Program co-financed by EU within SG OP. \end{acknowledgments}
\section{INTRODUCTION} One approach to QCD confinement is that of centre vortices, where the key degrees of freedom are those in the centre of the gauge group, Z($N$) in the case of SU($N$). Numerical results for the case of SU(2) (also treated here) show that this is fruitful. The main technique is that of centre projection~\cite{dDFGO97}: fixing to a gauge where the links are as close as possible to a centre element, then projecting to that element, leaving a lattice of Z(2) links; negative plaquettes are called P-vortices and are interpreted as the source of confinement. Here we examine two related issues. \section{RANDOM VORTICES} A random gas of infinitely long vortices will cause linear confinement. This is too simplistic, but perhaps it can teach us something: indeed it gives about the right string tension from measured vortex densities. Viewed in four dimensions the vortices are defined by closed surfaces; confinement survives only so long as this surface percolates throughout the space-time manifold, and hence deconfinement may be due to loss of percolation~\cite{ELRT99}. This has all been argued from the point of view of taking SU(2) and reducing the degrees of freedom to the bare essentials. Here we shall attempt the opposite: to construct an (approximately) random vortex picture. Truly random vortices are difficult because of the strong coupling of adjacent plaquettes via the links, even with no gauge coupling present. Our lattice and observables are as in the projected Z(2) theory. We use the following procedure: \begin{itemize} \item Create a random set of links, either $\pm1$ with 50\% probability (`random start') or set to unity (`frozen start'). \item Let $v = \textrm{density of negative plaquettes}$ (corresponding to vortices); initially $v\sim0.5$ or $v\equiv0$. Pick a target $v=v_T$ chosen to correspond to the mean density of P-vortices in SU(2). At $\beta=2.3$, $v_T\approx0.0945$; at $\beta=2.4$, $v_T\approx0.0602$.
\item Pick a link at random in the lattice. Flip the sign of this link either (i) if it does not alter $v$ or (ii) if it alters $v$ towards $v_T$. \item Continue until $v_T$ is achieved. Because of condition (i) it is useful to attempt to flip links already considered. In the case of the frozen start, we have tried further to make the vortices independent by making sets of flips which do not affect the overall vortex density. \item Generate many configurations of this sort and analyse them as a Monte Carlo sample. Note that here there is no Markov process, and hence no fluctuating action; in a sense our ensemble is microcanonical. \end{itemize} There is a bias in this procedure because we flip links attached to sets of plaquettes predominantly of one sign, hence our vortices are not truly random. We could instead have chosen the target $v$ to correspond to the SU(2) string tension on the assumption of truly random vortices. Our actual choice reflects a desire to look at the cluster properties of vortices. \subsection{Results} \begin{table} \begin{tabular}{r|lll} \hline & $\beta=2.3$ & Random & Frozen \\ \hline (a) & $0.0945(6)$ & $0.0945$ & $0.0945$ \\ (b) & $0.0727(9)$ & $0.01060(11)$ & $0.9876(2)$ \\ (c) & $10890(60)$ & $11631(1)$ & $146(2)$ \\ (d) & $0.1362(2)$ & $0.189$ & $0.189$ \\ (e) & $0.145(7)$ & $0.41(3)$ & $0$ \\ \hline \hline & $\beta=2.4$ \\ \hline (a) & $0.06015(6)$ & $0.0602$ & $0.0602$ \\ (b) & $0.1125(12)$ & $0.00494(6)$ & $0.99742(3)$ \\ (c) & $20600(100)$ & $23553(1)$ & $61.0(6)$ \\ (d) & $0.0708(11)$ & $0.12$ & $0.12$ \\ (e) & $0.079(7)$ & $0.164(3)$ & 0 \\ \hline \end{tabular} \smallskip \caption{Comparison between SU(2) and quasi-random vortices.
The columns show SU(2), and quasi-random vortices starting from random or frozen links; the rows are (a) vortex density $v$ (fixed for the quasi-random vortices), (b) fraction of P-plaquettes \textit{not} in the largest cluster, (c) the number of P-plaquettes in the largest cluster, (d) the string tension based on large scale simulations of full SU(2)~\protect\cite{BSS} or purely random vortices, and (e) that actually measured from vortices, see text.} \label{fig:qrres} \vskip -4ex \end{table} Table~\ref{fig:qrres} shows results on bulk lattices, $12^4$ for $\beta=2.3$ and $16^4$ for $\beta=2.4$. The string tension is shown both for the two ideal cases (from a large scale run in full SU(2) and for fully random vortices) and as measured from vortices. In the quasi-random case with the random start, Creutz ratios show a string tension which for small loops lies near the expected value ($2v$) but which increases for larger loops. The results shown are from a full potential calculation where this increase tends to level out, although with some curvature, giving a rather larger string tension; the functional form fitted is necessarily somewhat ad hoc and here we have included a quadratic part. Furthermore, in the frozen start the vortices lack confinement and hence show in effect a repulsion. These are sizeable effects; a more truly random method will be needed for a more realistic comparison. An effective action would also presumably help~\cite{ER99}. Nonetheless, we examine cluster properties by methods similar to ref.~\cite{BFGO99}, dividing vortices into two clusters where the surfaces touch only along an edge. This difference between touching and joining is a lattice effect which makes a noticeable impact --- almost tripling the number of vortices not in the largest (percolating) cluster for SU(2) at $\beta=2.3$ with the random start, and increasing the largest cluster size dramatically for the frozen start.
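The link-flip procedure described above can be sketched in a few lines. The following is a toy 2D Z(2) version (the study itself uses 4D lattices); the lattice size, random seed and stopping tolerance are chosen purely for illustration:

```python
import random

random.seed(1)
L = 8  # toy 2D lattice; links[mu][x][y] = +/-1 ("random start")
links = [[[random.choice([1, -1]) for _ in range(L)]
          for _ in range(L)] for _ in range(2)]

def plaq(x, y):
    """Sign of the plaquette whose lower-left corner is (x, y)."""
    return (links[0][x][y] * links[1][(x + 1) % L][y]
            * links[0][x][(y + 1) % L] * links[1][x][y])

def neg_density():
    """Density v of negative plaquettes (P-vortices)."""
    return sum(plaq(x, y) < 0 for x in range(L) for y in range(L)) / L**2

def touching(mu, x, y):
    # the two plaquettes (in 2D) that contain link (mu, x, y)
    return [(x, y), (x, (y - 1) % L)] if mu == 0 else [(x, y), ((x - 1) % L, y)]

v_T = 0.0945  # target density, as for SU(2) at beta = 2.3
v = neg_density()
while abs(v - v_T) > 1.0 / L**2:
    mu, x, y = random.randrange(2), random.randrange(L), random.randrange(L)
    before = sum(plaq(*p) < 0 for p in touching(mu, x, y))
    links[mu][x][y] *= -1                       # trial flip
    after = sum(plaq(*p) < 0 for p in touching(mu, x, y))
    new_v = v + (after - before) / L**2
    if after == before or abs(new_v - v_T) < abs(v - v_T):
        v = new_v                               # rule (i) or rule (ii): keep flip
    else:
        links[mu][x][y] *= -1                   # otherwise undo the flip
print(v)
```

Note that on a 2D torus each link enters exactly two plaquettes, so a flip changes the negative-plaquette count in steps of 0 or $\pm2$ and the count stays even; with $8^2=64$ plaquettes the loop therefore stops at $v=6/64\approx0.094$, just inside the tolerance.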
Of course we would prefer to detect vortices directly with their physical size. \subsection{The deconfining transition} We have also examined a lattice in the deconfined phase, using Polyakov loops $L$ as the order parameter, although it is perhaps unlikely that homogeneous random vortices alone can be sufficient to explain deconfinement. The lattice results show that $\langle\left| L\right|\rangle$ goes to 1 for small vortex density, but this is expected simply due to the fact that neighbouring loops are effectively Wilson loops with an area equal to the finite temperature extent of the lattice, and hence correlated by the vanishing string tension. There is no sign of a phase transition, nor finite size scaling behaviour. It may well be important to have the vortex surface orientated predominantly parallel to, and hence not piercing, temporal Wilson loops; it is not clear such an effect can come from just the Z(2) degrees of freedom. \section{PROBING WILSON LOOPS} The plaquette-sized P-vortices are expected to have a topological effect on Wilson loops, depending only on whether a vortex pierces the loop. We investigate this by looking at the correlations between P-vortices and Wilson loops. \begin{figure} \begin{center} \psfig{file=probe.eps,width=2.5in,angle=270} \end{center} \vskip -5ex \caption{Placement of a plaquette-sized probe in a Wilson loop, showing the diagonal distance $r$; for a loop size $L$ with $L$ odd, $r = (L+1)/2$ is just outside the loop.} \label{fig:pwl} \vskip -2ex \end{figure} Our method is the following (fig.~\ref{fig:pwl}). We take a plaquette $P$ on the centre-projected lattice within a Wilson loop $W$, a certain distance from the centre of the loop. For present purposes we shall simply take the distance $r$ to be the number of plaquettes diagonally from the centre of the loop, as in the diagram. If $P=1$, we ignore $W$ and pass on to the next one; if $P=-1$ we examine the value of $W$.
After sampling over many configurations, we can form an average $\langle W_{P=-1}(r)\rangle$. Note that in examining $P$ we take no account whatsoever of other centre plaquettes inside (or outside) $W$; the effect is purely the correlation between the Wilson loop and a centre vortex at the given position, whether or not the loop is pierced by other vortices. To achieve sufficiently large correlations we are restricted to loops of sizes that have $\mathcal{O}(1)$ vortices inside. Clearly, if there is no correlation, $\langle W_{P=-1}(r)\rangle=\langle W\rangle$. As a control, we have performed the same experiment replacing $P$ with the sign of a gauge plaquette $G$ located in the same place. \begin{figure} \vskip -2ex \begin{center} \psfig{file=probe2.4.ps,width=2.8in} \end{center} \vskip -5ex \caption{Probes $W_{P=-1}$ (circles) and $W_{G=-1}$ (squashy squares) for $\beta=2.4$, $16^4$ lattice, for (top) $1\times1$, $3\times3$, $5\times5$ (bottom) loops.} \label{fig:pwlres} \vskip -3ex \end{figure} The results (fig.~\ref{fig:pwlres}) show that $\langle W_{P=-1}(r)\rangle$ is rather flat inside the loop, but with a significant correlation. In contrast, the values of $\langle W_{G=-1}(r)\rangle$ vary much more widely over the inside of the loop. This is a sign that the dominant effect of the vortex is given by whether or not it pierces the loop, regardless of where it does so, an effect not expected and not shown by the sign of the full gauge plaquette. Both probes become uncorrelated very quickly when outside the loops. For gauge plaquettes this can be understood from strong coupling; such plaquettes only appear in quite high order. For P-plaquettes the natural interpretation is that vortices not piercing the Wilson loop have no effect on it. However, if the vortices really correspond to extended physical objects, it is not clear why the change from inside to outside should be so sharp; this raises questions about the size of the vortex core.
\section{Introduction} Leptogenesis\cite{lep1,lep2,lep3,lep4,lep5,lep6,lep7} is a simple mechanism to explain the observed baryon asymmetry of the universe\cite{Planck}. The right handed (RH) heavy neutrinos, which are introduced in the Standard Model (SM) to generate light neutrino masses (Type-I seesaw), decay CP asymmetrically to create a lepton asymmetry which is then converted to a baryon asymmetry via Sphaleron transitions\cite{Kuzmin:1985mm}. When it comes to the testability of leptogenesis, there are subtleties. If the heavy neutrino masses are not protected by any symmetry\cite{Altarelli:2010gt}, it is quite natural to assume that they are hierarchical in nature like any other family of SM fermions. In that case, the lightest RH mass scale is bounded from below, $M\gtrsim 10^9$ GeV\cite{Davidson:2002qv}, which is beyond the reach of present collider experiments. Nonetheless, colliders and other low energy neutrino experiments can still probe leptogenesis mechanisms that do not constitute hierarchical RH neutrinos--starting from $\mathcal{O}$(TeV) to $\mathcal{O}$(MeV) scale heavy neutrinos\cite{Ars,Ham,Pila,dev1}. A shift of attention from collider experiments to Gravitational Wave (GW) physics is no less interesting in terms of testing leptogenesis. In this new cosmic frontier, opened up after the discovery of GWs from black hole mergers by the LIGO and Virgo collaborations\cite{gw1,gw2}, plenty of effort is being made to detect primordial GWs from the Early Universe (EU) within a wide range of frequencies--starting from the Pulsar Timing Arrays (PTAs, $\sim$ nHz) to LIGO ($\sim 25$ Hz). A network of cosmic strings\cite{cs1,cs2,cs3}, which is a generic consequence of breaking symmetries such as $U(1)$, is one of the prominent sources of strong stochastic primordial gravitational waves which can be tested in a complementary way in most of the planned GW detectors.
Numerical simulations based on the Nambu–Goto action\cite{ng1,ng2} indicate that cosmic string loops lose energy dominantly via GW radiation, if the underlying broken symmetry corresponds to a local gauge symmetry. In the context of seesaw, this sounds like music to the ears, since such a gauge symmetry is $U(1)_{B-L}$\cite{Davidson:1978pm,moha1,moha2}, the breaking of which could be responsible for the dynamical generation of the heavy RH masses and hence the lepton number violation, as well as the creation of a network of cosmic strings. Given this set-up, there could be two categories to look for the GWs as a probe of leptogenesis. {\it Category A:} A scale separation between the RH masses and the typical Grand Unified Theory (GUT) scale ($\sim 10^{16}$ GeV), imposed by the seesaw perturbativity condition and the neutrino oscillation data\cite{nuosc}, implies that residual symmetries like $U(1)_{B-L}$ protect the RH neutrinos from getting mass at the GUT scale. Therefore, breaking of that symmetry at a later stage and the consequent emission of GWs from cosmic strings are natural probes of the scale of leptogenesis. In this case, it is the amplitude (GW energy density normalised by the critical energy density) of the GWs that matters as a probe, and this approach has been taken in Refs.\cite{lepcs1,lepcs2}. {\it Category B:} To make the testability more robust, along with the amplitudes, one can associate leptogenesis also with the spectral shapes of the GWs\cite{lepcs3,lepcs4}. Cosmic string loops that originate and decay in the radiation domination exhibit a flat plateau on the amplitude-frequency plane at the higher frequencies. This spectral shape may show an upward or a downward trend if something other than radiation dominates the energy density of the EU before the onset ($T_*$) of the most recent radiation domination prior to the BBN ($T\sim 5$ MeV)\cite{bbn,bbn1,bbn2}.
Such a non-standard cosmic history, which is responsible for this spectral break that one aims to claim as a probe along with the GW amplitude, should therefore be a natural, well-motivated scenario from the perspective of leptogenesis. Two well-known scenarios in this context can be opted for. {\it Category B1:} A matter domination ($\omega=0<1/3$)\cite{matdom1,matdom2}. {\it Category B2:} Scenarios such as kination ($\omega=1>1/3$)\cite{kin1,kin2}. For the former (latter), one finds a spectral break followed by a downward (upward) going GW amplitude\cite{spb0,spb1,spb2,spb3}. Two leptogenesis mechanisms in {\it Category B1}--a low-scale leptogenesis and a leptogenesis from ultralight primordial black holes ($M_{PBH} \lesssim 13$ g)--have been studied in Ref.\cite{lepcs3} and Ref.\cite{lepcs4} respectively. In this article, we discuss a scenario that falls in {\it Category B2}, i.e., interpreting a flat spectrum, then a spectral break followed by a rising GW amplitude, as a signature of leptogenesis. Note that two crucial ingredients for this typical signal are of course the cosmic string network itself and a non-standard equation of state ($\omega=1$ in our discussion). In the context of leptogenesis from decays\cite{lep1}, though the former is a natural consequence in the sense of {\it Category A}\cite{lepcs1}, a stiffer equation of state is not an indispensable criterion. However, in seesaw models, even when the Lagrangian is minimally coupled to gravity, through massive RH neutrino mediation one can generate an operator of the form $\partial_\mu R j^\mu/M^2$ at two-loop level\cite{grav1,grav2,grav3} (see also Refs.\cite{lepcs2,grav4} for a flavour generalisation and Ref.\cite{grav5} for a recent review), where $R$ is the Ricci scalar and $j^\mu$ is the lepton current.
This is a well-studied operator\cite{grav6,grav7,Mohanty:2005ud,grav8}, with the corresponding mechanism dubbed ``gravitational lepto/baryogenesis'', and it produces a final baryon asymmetry proportional to $\dot{R}\propto (1-3\omega)$. Interestingly, note now that the two primary ingredients of the GW signal are also natural requirements to obtain non-zero lepton asymmetry, i.e., the symmetry breaking which gives rise to massive RH neutrinos (mediating in the loops\cite{grav3}) as well as cosmic strings, and then an equation of state $\omega\neq 1/3$\cite{wsm}. We shall discuss later on that indeed a stiffer equation of state is needed to efficiently produce lepton asymmetry. Plateau amplitudes corresponding to $G\mu\lesssim 10^{-12}$, with $G$ being the Newton constant and $\mu$ being the string tension, with a post-LISA spectral break supplemented by a potential test in neutrino-less double beta decay experiments, make the scenario generally robust. The above introduction summarises the basic idea and the main results of this paper. The next sections are dedicated to a more detailed description and technicalities. \section{gravitational waves from cosmic strings} Cosmic strings may originate as the fundamental or composite objects in string theory\cite{cssu1,cssu2} as well as topological defects from spontaneous symmetry breaking (SSB) when the vacuum manifold $\mathcal{M}$ has a non-trivial first homotopy group $\pi_1(\mathcal{M})$. A theory with spontaneous breaking of a $U(1)$ symmetry exhibits a string solution\cite{cs2,cs3}, since $\pi_1(\mathcal{M})=\mathbb{Z}$. An example of a field theory containing a string-like solution is a theory of a $U(1)$-charged complex scalar field $\phi$ that in the context of seesaw could be a SM scalar singlet $\phi_{B-L}$ which is responsible for the dynamical generation of RH neutrino masses.
After the formation, strings get randomly distributed in space and form a network of horizon-size long strings\cite{csrev1,csrev2} characterised by a correlation length $L=\sqrt{\mu/\rho_\infty}$, where $\mu $--the string tension or energy per unit length is in general constant (however, e.g., in case of global strings\cite{Chang:2021afa} and recently introduced melting strings\cite{Emond:2021vts} $\mu\sim f(T)$) and typically taken to be the square of the symmetry breaking scale $\Lambda_{CS}$ and $\rho_\infty$ is the long string energy density. When two segments of long strings cross each other they inter-commute and form loops with a probability $P=1$\cite{incom1} (exceptions\cite{incom2}). A string network may interact strongly with thermal plasma and thereby its motion gets damped\cite{fric}. After the damping stops, the strings oscillate and enter a phase of scaling evolution that constitute two competing dynamics namely the stretching of the correlation length due to the cosmic expansion and fragmentation of the long strings into loops which oscillate independently and produce particle radiation or gravitational waves\cite{looprad1,looprad2,looprad3}. Out of these two competing dynamics, there is an attractor solution called the scaling regime\cite{scl1,scl2,scl3} in which the characteristic length scales as $L\sim t$. This implies, for constant string tension, $\rho_\infty\propto t^{-2}$. Therefore, the network tracks any cosmological background energy density $\rho_{bg}\propto a^{-3(1+\omega)}\propto t^{-2}$ with the same equation of state and hence cosmic strings do not dominate the energy density of the universe like any other defects. 
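The tracking statement can be checked in one line: for a constant equation of state $\omega\neq -1$, the FRW scale factor grows as $a\propto t^{2/3(1+\omega)}$, so that

```latex
\rho_{bg}\;\propto\; a^{-3(1+\omega)}\;\propto\; t^{-2},
\qquad
\rho_\infty \;=\; \frac{\mu}{L^2}\;\propto\; t^{-2}
\quad (L\sim t,\ \mu={\rm const}),
```

i.e., in the scaling regime the long-string energy density redshifts at exactly the same rate as whatever fluid dominates the background, which is why the network never comes to dominate.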
The loops radiate GWs at a constant rate, which sets up the time evolution of a loop of initial size $l_i=\alpha t_i$ as $l(\tilde{t})=\alpha t_i-\Gamma G\mu(\tilde{t}-t_i)$, where $\Gamma\simeq 50$\cite{looprad1,looprad3} and the initial loop size parameter $\alpha\simeq 0.1$--a value preferred by numerical simulations\cite{nusim1,nusim2}. The total energy loss from a loop is decomposed into a set of normal-mode oscillations with frequencies $f_k=2k/l=a(t_0)/a(\tilde{t})f$, where $k=1,2,3...k_{max}$ ($k_{max}$ is for numerical purposes, otherwise $\infty$) and $f$ is the frequency observed today. Given the loop number density $n\left(\tilde{t},l_k\right)$, the present time gravitational wave density parameter is given by $\Omega_{GW}(t_0,f)\equiv f\rho_c^{-1}d\rho_{GW}/df=\sum_k\Omega_{GW}^{(k)}(t_0,f)$, with the $k$th mode amplitude $\Omega_{GW}^{(k)}(t_0,f)$ as\cite{nusim1} \begin{eqnarray} \Omega_{GW}^{(k)}(f)=\frac{2kG\mu^2 \Gamma_k}{f\rho_c}\int_{t_{osc}}^{t_0} \left[\frac{a(\tilde{t})}{a(t_0)}\right]^5 n\left(\tilde{t},l_k\right)d\tilde{t}.\label{gwf1} \end{eqnarray} The quantity $\Gamma_k$ depends on the small scale structures of the loop and is given by $\Gamma^{(k)}=\frac{\Gamma k^{-\delta}}{\zeta(\delta)}$, e.g., $\delta=4/3$ and $5/3$ for cusps and kinks\cite{cuki}. The integration in Eq.\ref{gwf1} is subjected to a Heaviside function $\Theta\equiv \Theta(t_i-t_{osc})\Theta(t_i-\frac{l_{cric}}{\alpha})$, with $t_{osc}= {\rm Max}~\left[{\rm network~formation~time}~(t_F),{\rm end~of~damping}~(t_{fric})\right]$ and $l_{cric}$ the critical length below which massive particle radiation dominates over GWs\cite{partrad1,partrad2}. Both these $\Theta$ functions set a high-frequency cut-off in the spectrum (a systematic analysis can be found in Ref.\cite{spb2}).
The most important aspect in obtaining the GW spectrum is the computation of the loop number density $n\left(\tilde{t},l_k\right)$, which we calculate from the Velocity-dependent-One-Scale (VOS) model\cite{vos1,vos2,vos3} which assumes the loop production function to be a delta function, i.e., all the loops are created with the same fraction of the horizon size with a fixed value of $\alpha$. Given a general equation of state parameter $\omega$, the number density $n_\omega\left(\tilde{t},l_k\right)$ is computed as \begin{eqnarray} n_\omega(\tilde{t},l_{k}(\tilde{t}))=\frac{A_\beta}{\alpha}\frac{(\alpha+\Gamma G \mu)^{3(1-\beta)}}{\left[l_k(\tilde{t})+\Gamma G \mu\tilde{t}\right]^{4-3\beta}\tilde{t}^{3\beta}},\label{genn0} \end{eqnarray} where $\beta=2/3(1+\omega)$ and we take $A_\beta$ to change as a step function across cosmological epochs, with $A_\beta =29.6~(\omega=1)$, $5.4~(\omega=1/3)$ and $0.39~(\omega=0)$\cite{vos3}. The most interesting feature of GWs from cosmic strings is that the amplitude increases with the symmetry breaking scale $\Lambda_{CS}$. This can be seen by computing $\Omega_{GW}^{(1)}$, considering loop production as well as decay in the radiation domination, which gives an expression for a flat plateau at higher frequencies (see AUX A for an exact formula) \begin{eqnarray} \Omega_{GW}^{(1)}(f)=\frac{128\pi G\mu}{9\zeta(\delta)}\frac{A_r}{\epsilon_r}\Omega_r\left[(1+\epsilon_r)^{3/2}-1\right], \label{flp1} \end{eqnarray} where $\epsilon_r=\alpha/\Gamma G\mu$ and $\Omega_r\simeq 9\times 10^{-5}$. Such strong GWs as a consequence of a very high scale symmetry breaking thus serve as an outstanding probe of particle physics models\cite{pmcs1,pmcs2,pmcs3,pmcs4,pmcs5,pmcs6,Bian:2021vmi}.
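As a rough numerical illustration of Eq.~\ref{flp1} (a sketch, not a result of the paper: the benchmark $G\mu=10^{-11}$ and the cusp choice $\delta=4/3$ are assumptions here, while $\alpha$, $\Gamma$, $A_r$ and $\Omega_r$ take the values quoted in the text):

```python
import math

alpha, Gamma, A_r, Omega_r = 0.1, 50.0, 5.4, 9e-5  # values quoted in the text

# zeta(4/3) for cusps: truncated series plus an integral estimate of the tail
N = 100_000
zeta_43 = sum(k ** (-4.0 / 3.0) for k in range(1, N + 1)) + 3.0 * N ** (-1.0 / 3.0)

def plateau(Gmu):
    """Flat-plateau amplitude of Eq. (flp1) for string tension G*mu."""
    eps_r = alpha / (Gamma * Gmu)
    return (128 * math.pi * Gmu / (9 * zeta_43)
            * (A_r / eps_r) * Omega_r * ((1 + eps_r) ** 1.5 - 1))

print(f"Omega_plt(Gmu=1e-11) ~ {plateau(1e-11):.1e}")
# since eps_r >> 1, the bracket ~ eps_r^(3/2), so Omega_plt scales as sqrt(G*mu):
print(plateau(1e-10) / plateau(1e-11))
```

The second print shows the well-known $\Omega_{GW}^{\rm plt}\propto\sqrt{G\mu}$ scaling of the plateau: raising $G\mu$ by a decade raises the amplitude by roughly $\sqrt{10}$.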
Possibly the most important recent development is the finding of a stochastic common spectrum process across 45 pulsars by the NANOGrav PTA\cite{NANOGrav:2020bcs}, which, if interpreted as GWs, corresponds to a strong amplitude and is better fitted with cosmic strings\cite{lepcs2,csfit1,csfit2} than the single value power spectral density as predicted by supermassive black hole models. Let us also mention that a very recent analysis by PPTA\cite{ppta} is in agreement with the NANOGrav result. In the presence of an additional early kination era, the entire GW spectrum is determined by four dynamics. I) A peak at a lower frequency--caused by the loops which are produced in the radiation era and decay in the standard matter era. II) The flat plateau, $\Omega_{GW}^{\rm plt}$, as mentioned while describing Eq.\ref{flp1}. III) A spectral break at $f_*=\sqrt{\frac{8}{\alpha\Gamma G\mu}}t_*^{-1/2}t_0^{-2/3}t_{\rm eq}^{1/6}$--the so-called turning point frequency\cite{spb1,spb2,lepcs4}--followed by a rising GW amplitude $\Omega_{GW}^{(1)}(f>f_*)\simeq \Omega_{GW}^{\rm plt}\left(f/f_*\right) $, caused by the modified redshifting of the GWs during the kination era. IV) A second turning point frequency $f_\Delta$ after which the GW amplitude falls, e.g., due to particle production below $l<l_{cric}=\beta_m\frac{\mu^{-1/2}}{(\Gamma G\mu)^m}$, with $\beta_m\sim $ $\mathcal{O}(1)$ and $m=1,2$ for loops with kinks or cusps\cite{partrad1,partrad2}. If the falling is caused by thermal friction, then one needs to consider the damping of the smaller loops along with the long-string network for $t<t_{fric}$, discarding any GW production by the smaller loops, i.e., the entire dynamics is completely frozen until $t_{fric}$\cite{fric}.
In fact, in our computation we do not take into account any GWs produced from smaller loops prior to $t_{fric}$ and consider that the falling is due to particle production, which sets a high-frequency cut-off that is much stronger (appears at lower frequencies) than the friction cut-off\cite{spb2}. Note also that if the two turning-point frequencies are close to each other, the GW detectors could potentially see a small bump after the flat plateau with a peak amplitude $\simeq \Omega_{GW}^{\rm plt}\left(f_\Delta/f_*\right)$. Nevertheless, as we show in the next section, given a successful leptogenesis, the second turning-point frequency as well as small bumps are most likely to lie outside the frequency range of the GW detectors. Before concluding the section, we note two important points. Firstly, the VOS model overestimates the number density of the loops by an order of magnitude compared to the numerical simulations\cite{nusim1}. This is due to the fact that the VOS model considers all the loops to be of the same size at production. However, there could be a distribution of $\alpha$. Numerical simulation finds that only 10$\%$ of the energy of the long-string network goes to the large loops ($\alpha\simeq 0.1$) while the remaining $90\%$ goes to the highly boosted smaller loops that do not contribute to the GWs. This fact is taken into account by including a normalisation factor $\mathcal{F}_\alpha\sim 0.1$ in Eq.\ref{genn0}\cite{vos3}. Secondly, the amplitude beyond $f_*$ goes as $f^1$ even after taking into account high-$k$ modes (see AUX A), unlike the case of an early matter domination where the same changes from $f^{-1}\rightarrow f^{-1/3}$ for cusp-like structures\cite{lepcs3,lepcs4,spb2}.
\section{Gravitational leptogenesis, results and discussion} The idea behind gravitational leptogenesis\cite{grav7} is that a C- and CP-violating operator $\mathcal{L}_{CPV}\sim b\partial_\mu R j^\mu\sim b\partial_\mu R\bar{\ell}\gamma^\mu\ell$, with $b$ a real effective coupling, corresponds to a chemical potential $\mu=b\dot{R}$ for the lepton number in the theory. Therefore, the normalised (by photon density $n_\gamma\sim T^3$) equilibrium lepton asymmetry (using standard Fermi-Dirac statistics with energies $E_\pm=E\pm\mu$) is given by $N_{B-L}^{eq}\sim \frac{b\dot{R}}{T}$. Interestingly, $\mathcal{L}_{CPV}$ can be generated in a UV framework using the seesaw Lagrangian even when it is minimally coupled to gravity (see, e.g., Ref.\cite{grav2} for an in-depth discussion and Sec.~II of Ref.\cite{lepcs2} for a brief summary). As a computational insight, one calculates an effective $\ell\ell h$ vertex corresponding to the operator $\mathcal{L}_{CPV}$ using a conformally flat metric $g_{\mu\nu}=(1+h)\eta_{\mu\nu}$ with $R=-3\partial^2h$, capitalising on the fact that the coupling `$b$' is independent of the choice of background. In the seesaw model, a similar $\ell\ell h$ vertex that manifests the $\mathcal{L}_{CPV}$ operator can be constructed at two-loop level, where the Higgs and the RH masses mediate the loops. Then, simply comparing the coefficients of both vertices up to linear order in $h$, the coupling $b$ can be calculated in terms of the Yukawa coupling $f$ (where $f_{\alpha i}\bar{\ell}_{L\alpha}\tilde{H} N_{Ri}$ is the Yukawa interaction in seesaw, with $\ell_{L\alpha}$, $H$ and $N_R$ being the lepton doublet, Higgs and RH fields respectively) and the RH neutrino masses $M_i$. The expression for the equilibrium asymmetry then reads \begin{eqnarray} N_{B-L}^{eq}=\frac{\pi^2\dot{R}}{36 (4\pi )^4}\sum_{j>i}\frac{{\rm Im}\left[k_{ij}^2\right]}{\zeta (3)T M_i M_j}{\rm ln }\left(\frac{M_j^2}{M_i^2}\right),\label{gl1} \end{eqnarray} where $k_{ij}=(f^\dagger f)_{ij}$.
The above expression could be modulated by a factor $(M_j^2/M_i^2)^\gamma$, where $\gamma=0,1$. However, $\gamma=0$ appears to be the most natural solution, which can be calculated exactly\cite{grav2,grav3}. In any case, even if one considers $\gamma=1$ or the `hierarchical enhancement', by tuning the complex part in $k_{ij}^2$ the correct baryon asymmetry can always be reproduced. The most important part is that $N_{B-L}\propto \dot{R}\propto 1-3\omega$, which still vanishes in radiation domination at high temperatures with the SM-QCD thermodynamic potential\cite{wsm}. Therefore, a general cosmological background other than radiation, which is now quite a natural call, always yields a non-vanishing equilibrium asymmetry unless the Yukawa couplings are real or purely imaginary. In the EU, any dynamically produced lepton asymmetry tracks $N_{B-L}^{eq}$ if the interaction that causes the asymmetry production is strong enough. When the interaction rate becomes weaker (compared to the Hubble expansion), the asymmetry freezes out with the potential to reproduce the correct baryon asymmetry $N_{B-L}\sim 6\times 10^{-8}$\cite{Planck}. In the seesaw model, $\Delta L=2$ interactions\cite{dell2} play this role. The general evolution equation that governs the entire dynamics is given by \begin{eqnarray} \frac{{ d N_{B-L}}}{ dz}=-\left(\frac{\kappa}{z^p}+W_{\rm ID}\right)\left[N_{B-L}-\frac{\beta}{z^q}\right],\label{beg0} \end{eqnarray} where $z=M_1/T$, $W_{\Delta L=2}(z)=\frac{\kappa}{z^p}$ with $p=\frac{5-3\omega}{2}$, $N_{B-L}^{eq}=\frac{\beta}{z^q}$ with $q=\frac{7+9 \omega}{2}$, and $W_{\rm ID}$ represents the inverse decay $\ell H\rightarrow N_1$ rate. The parameters $\kappa\sim f_\kappa(m_i,M_1)z_*^{\frac{1}{2}(1-3\omega)}$ and $N_{B-L}^{eq}\propto\beta\sim f_\beta(m_i,M_1, {\rm Im}[f_{ij}])(1-3\omega)z_*^{\frac{3}{2}(3\omega-1)}$, where $z_*=M_1/T_*$ and $m_i$ is the $i$-th light neutrino mass with $i=1,2,3$. All the exact expressions can be found in AUX B.
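A quick sanity check of the $\omega=1$ freeze-out behaviour ($p=1$, $q=8$) can be done by integrating Eq.~\ref{beg0} numerically with the inverse-decay term $W_{\rm ID}$ switched off (it only supplies the late-time exponential factor); $\kappa$, $\beta$ and $z_{\rm in}$ below are illustrative dimensionless numbers, not fitted values:

```python
import math

# Toy integration of dN/dz = -(kappa/z) [N - beta/z^8] from z_in, with N(z_in)=0.
# kappa, beta, z_in are illustrative numbers only; W_ID is neglected here.
kappa, beta, z_in, z_end = 0.01, 1.0, 0.1, 10.0

def dN_du(u, N):
    # change of variable u = ln z:  dN/du = -kappa * (N - beta * exp(-8u))
    return -kappa * (N - beta * math.exp(-8.0 * u))

# classical fourth-order Runge-Kutta in u = ln z
steps = 20000
u, N = math.log(z_in), 0.0
du = (math.log(z_end) - u) / steps
for _ in range(steps):
    k1 = dN_du(u, N)
    k2 = dN_du(u + du / 2, N + du * k1 / 2)
    k3 = dN_du(u + du / 2, N + du * k2 / 2)
    k4 = dN_du(u + du, N + du * k3)
    N += du * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    u += du

analytic = kappa * beta / (8 * z_in**8)   # Eq. (master) with the K_1 washout -> 0
print(N / analytic)                        # close to 1 for kappa << 8
```

The ratio comes out slightly below unity (the exact solution carries a mild $(z_{\rm in}/z)^\kappa$ suppression), confirming that the frozen-out asymmetry scales as $\kappa\beta/(8 z_{\rm in}^8)$ in the weak-washout regime.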
Before proceeding further, let us mention that we do not include the charged-lepton flavour effects in this analysis, for simplicity. Nonetheless, a systematic description including flavour issues can be found in Ref.\cite{grav4}, along with a finer description in Ref.\cite{lepcs2}. The process consists of two distinct temperature regimes. At a higher temperature $T_{in}\sim \Lambda_{CS}$, as soon as the symmetry breaks, the RH neutrinos become massive and Eq.\ref{beg0} starts acting without $W_{\rm ID}$, which is negligible in this regime. In this gravitational leptogenesis scenario, typically, $z_{in}(=M_1/T_{in})$ can be constrained with the so-called weak-field condition as $z_{in}\ge\sqrt{M_1/\tilde{M}_{Pl}}$, where $\tilde{M}_{Pl}$ is the reduced Planck mass. Once the asymmetry freezes out, at lower temperatures it faces a washout by the inverse decays, which are strongly active at $T\sim M_1$. The final asymmetry is therefore of the form $N_{B-L}^f=N_{B-L}^{G0}e^{-\int_0^\infty W_{\rm ID} (z) dz}$, where $N_{B-L}^{G0}$ is the frozen-out asymmetry after the $\Delta L=2$ interactions have ceased, and the exponential term represents a late-time washout by the inverse decays. A general solution of Eq.\ref{beg0} is complicated and depends on the properties of {\it incomplete Gamma functions}. However, for $\omega=1$, which corresponds to $p=1$ and $q=8$, a simpler solution can be obtained. \begin{figure} \includegraphics[scale=.42]{m1Vsnbl.pdf} \caption{EoS: $\omega=1$. The yellow, red and green lines correspond to the lightest RH mass $M_1=10^{8,7,6}$ GeV. For $M_1=10^{7,6}$ GeV we do not show the lines corresponding to $z_*=10^1$. We take $M_3=M_1/z_{\rm in}$, $M_2=10^{-1}M_3$, $x_{ij}=\pi/4$, $y_{ij}=10^{-1}$, and the two mass-squared differences are at their best-fit values.
}\label{fig1} \end{figure} We find the expression for the final asymmetry $N_{B-L}^f(z\rightarrow \infty)$ to be \begin{eqnarray} N_{B-L}^f\simeq\frac{\kappa\beta}{8z_{\rm in}^8}{\rm Exp}\left[-\frac{4K_1}{z_*}\right],\label{master} \end{eqnarray} where the dimensionless washout/decay parameter $K_1$ is a function of the Yukawa couplings. Eq.\ref{master}, which matches the numerical solutions of Eq.\ref{beg0} to quite a high accuracy, is the master equation which we use to present all the results. Prior to the explanation of Fig.\ref{fig1}, let us introduce a parametrisation of the Yukawa matrix as $m_D=U\sqrt{m}\Omega\sqrt{M}$, where $m_D=fv$ with $v=174$ GeV, $U$ is the leptonic mixing matrix and $\Omega$ is a $3\times 3$ complex orthogonal matrix with a standard parametrisation in terms of three complex rotation matrices\cite{lepcs2} with complex angles $\theta_{ij}=x_{ij}+i y_{ij}$. In general, $\Omega$ is a completely `free' matrix unless one invokes additional symmetries to fix the flavour structure of the theory. A plethora of works is dedicated to this direction\cite{Altarelli:2010gt}. With this orthogonal parametrisation it is easy to show that the equilibrium asymmetry is independent of $U$. Therefore, as far as the seesaw parameters are concerned, the light and heavy neutrino masses and the orthogonal matrix take part in the process. The decay parameter can also be expressed in terms of these parameters as $K_1=m_*^{-1}\sum_k m_k|\Omega_{k1}|^2$, with $m_*\simeq 10^{-3}$ eV being the equilibrium neutrino mass\cite{lep4}. In Fig.\ref{fig1}, we show the variation of the produced asymmetry with the lightest neutrino mass for three benchmark values, $M_1=10^{6,7,8}$ GeV, with a fixed orthogonal matrix and different values of $z_*$. The basic nature of the curves is quite interesting. Let us focus on the $z_*=10^3$ curve (yellow) for $M_1=10^8$ GeV. It shows a plateau until $m_1\simeq 10^{-2}$ eV, then an increase followed by a downfall at large $m_1$ values.
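The master formula and the decay parameter are simple enough to evaluate directly. The sketch below uses placeholder light-neutrino masses and a hypothetical first column of $\Omega$ (all numbers illustrative, not taken from the figure):

```python
import math

def decay_parameter(m_light, Omega_col1, m_star=1e-3):
    """K_1 = (1/m_*) * sum_k m_k |Omega_{k1}|^2; masses in eV, m_* ~ 1e-3 eV."""
    return sum(m * abs(O)**2 for m, O in zip(m_light, Omega_col1)) / m_star

def final_asymmetry(kappa, beta, z_in, z_star, K1):
    """Master formula, Eq. (master), valid for omega = 1 (p = 1, q = 8)."""
    return kappa * beta / (8 * z_in**8) * math.exp(-4 * K1 / z_star)
```

As $m_1$ grows, $K_1$ grows with it, and the exponential factor suppresses the frozen-out asymmetry, which is the downfall at large $m_1$ noted above.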
First of all, for $\omega=1$ the parameter $\kappa\sim z_*^{-1}$, and therefore for large values of $z_*$ the strength of the $\Delta L=2$ process becomes so weak that the asymmetry instantly freezes out without tracking the equilibrium number density. The coefficient $f_\kappa$ does not change much until $m_1\sim 10^{-2}$ eV and then increases for $m_1\gtrsim 10^{-2}$ eV\cite{lepcs2}. This increase in $f_\kappa$ pushes the asymmetry more towards equilibrium, and hence the overall magnitude of $N_{B-L}$ increases for $m_1\gtrsim 10^{-2}$ eV. The downfall at large $m_1$ is caused by the exponential term in Eq.\ref{master}. The washout is in fact modulated by two parameters, $K_1$ and $z_*$. However, for large values of $m_1$ the parameter $K_1$ becomes huge, and therefore, even if one has a large $z_*$, the frozen-out asymmetry is completely washed out. On the other hand, when $z_*$ is small, e.g., $z_*=10^2$, the overall magnitude of $N_{B-L}$ decreases, since $\beta\sim z_*^3 $. In this case, however, the $z_*^{-1}$ suppression in $\kappa$ is less significant than in the previous case. Until $m_1\sim 10^{-2}$ eV the curve shows the constant behaviour due to the mentioned nature of $f_\kappa$; at large $m_1$ values, however, the $\Delta L=2$ interaction becomes strong enough to maintain the asymmetry in equilibrium for a period of time. The downfall is then dominated by this equilibrium-asymmetry tracking and not by the late-time washout. \begin{figure} \includegraphics[scale=.42]{TstarVsGmu.pdf} \caption{$G\mu$ vs. $T_*$ plot against the sensitivities of various GW detectors.}\label{fig2} \end{figure} Note that for $\omega<1/3$, for a fixed value of $z_*$, $\kappa$ increases (causing delayed freeze-out and hence dilution of the asymmetry $N_{B-L}^{G0}$) and $\beta$ decreases (causing a decrease in $N_{B-L}^{eq}$). A concrete example is matter domination, i.e., $\omega=0$, where $\kappa\sim \sqrt{z_*}$ and $\beta\sim z_*^{-3/2}$.
Moreover, these kinds of scenarios involve a late-time entropy production which dilutes the produced asymmetry significantly\cite{matdom2,lepcs4}. Therefore, $\omega<1/3$ scenarios are utterly inefficient. This possibly strengthens the claim that, should the GW detectors find a flat and then a rising signal in the future, RH-neutrino-induced gravitational leptogenesis with a stiffer equation of state is a natural mechanism to associate with it, since both successful leptogenesis and the GW signal are triggered by common theoretical ingredients. In Fig.\ref{fig2}, we show the future sensitivities of GW detectors such as LISA\cite{LISA}, BBO\cite{BBO}, CE\cite{CE} and ET\cite{ET} on the $G\mu-T_*$ plane. In the case of strong GW amplitudes, the most stringent constraint comes from the effective number of neutrino species, which reads $\int df f^{-1}\Omega_{GW}(f)h^2<5.6\times 10^{-6}\Delta N_{eff}$. Considering $\Delta N_{eff}\le 0.2$, the peak of the spectrum at $f_\Delta$, and taking into account contributions from the infinite number of modes, which give a factor of $\zeta(7/3)$ amplification compared to the fundamental mode, the BBN constraint translates to $G\mu<T_*^{4/7}\left(1.72\times 10^{-22}\right)^{4/7}$. This is shown as the blue exclusion region. On the other hand, to observe two spectral breaks (at $f_*$ and $f_\Delta$) distinctly, one should have $f_\Delta>f_*$, which translates to the constraint $G\mu>T_*^{4/5}\left(2.88\times 10^{-20}\right)^{4/5}$, where we consider particle production from cusps\cite{partrad2}. The corresponding region is shaded in red. We have ignored the variation of the effective relativistic degrees of freedom, even when $T_*$ is below the QCD phase transition; a proper treatment of its temperature dependence would modify the results by a factor of 1.5-3.
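The two bounds above delimit an allowed window on the $G\mu-T_*$ plane and are trivial to evaluate; the sketch below simply encodes them (with $T_*$ in GeV, and the numerical coefficients taken verbatim from the text):

```python
def gmu_bbn_upper(T_star):
    """BBN / Delta N_eff upper bound: G mu < (T_* * 1.72e-22)^(4/7), T_* in GeV."""
    return (T_star * 1.72e-22) ** (4.0 / 7.0)

def gmu_two_break_lower(T_star):
    """Lower bound for two distinct spectral breaks: G mu > (T_* * 2.88e-20)^(4/5)."""
    return (T_star * 2.88e-20) ** (4.0 / 5.0)

def allowed(G_mu, T_star):
    """True if both constraints quoted in the text are satisfied."""
    return gmu_two_break_lower(T_star) < G_mu < gmu_bbn_upper(T_star)
```

For example, $G\mu=10^{-12}$ satisfies both constraints at $T_*=10^2$ GeV but violates the BBN bound at $T_*=10^{-1}$ GeV.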
Since we focus entirely on gravitational leptogenesis (to motivate $\omega\neq 1/3$), we take $M^{max}_1\sim 10^8$ GeV so that the contribution from the decays is negligible. This gives an upper bound on $T_{in}(\Lambda_{CS})$ that corresponds to $G\mu\lesssim 10^{-12}$. Therefore, the mechanism can be tested with reasonably strong GW amplitudes even for the flat part (Eq.\ref{flp1}). For strong amplitudes, the spectral breaks are likely to occur at high-frequency GW detectors like CE and ET, while bump-like signals ($f_*$ and $f_\Delta$ close to each other) in general lie outside those detectors. In Fig.\ref{fig2}, the black point represented by $\spadesuit$ ($\clubsuit$) should (should not) appear as a signal (see the supplementary Fig.\ref{fig3}). \begin{figure} \begin{center} \includegraphics[scale=.65]{omgVsf.pdf} \caption{EoS: $\omega=1$. The curve in blue (red) is a valid (invalid) signal of leptogenesis. The curves are generated with $G\mu=10^{-12}$ and $T_*=10^{-1}$ GeV (red) and $T_*=10^{2}$ GeV (blue). A fall at high frequency is due to the particle production from cusps for $l<l_{\rm crit}=\frac{\mu^{-1/2}}{(\Gamma G\mu)^2}$\cite{spb2,partrad2}. We have shown the spectrum only for the fundamental mode.}\label{fig3} \end{center} \end{figure} We shall end the discussion with a `Neutrino-Gravitational Waves Complementarity (NGWC)', or, more generally, how this type of GW signal could be supplemented by low-energy neutrino experiments. NGWC depends on $z_*$ and the flavour structure of the theory or, more precisely, on the orthogonal matrix. From Fig.\ref{fig1} it can be seen that, depending on the RH neutrino mass (hence $G\mu$), various $z_*$ values are sensitive to the neutrinoless double beta decay experiments (the $N_{B-L}$ curves intersect $N_{B-L}^{Obs}$ while at the same time falling within the vertical green region).
For the parameter set in Fig.\ref{fig1}, the NGWC points unfortunately fall in the red region and, moreover, lie well outside the reach of the GW detectors (shown by the $\heartsuit$ and $\diamondsuit$ points in Fig.\ref{fig2}). However, if one decreases $y_{ij}$, then to produce the correct $N_{B-L}$ at a fixed value of $G\mu$ one needs larger values of $z_*$, meaning that the NGWC points would move towards the left, i.e., towards smaller values of $T_*$. The entire picture can be encapsulated within the triangle drawn on the test parameter space shaded in green in Fig.\ref{fig2}. The red horizontal arm represents the constant-$G\mu$ line, along which the entries of $\Omega$ decrease as one goes from larger to smaller $T_*$. The yellow arm represents the constant-$z_*$ line: as one goes along the line towards smaller $G\mu$ values, the entries of $\Omega$ increase. The blue arm represents a constant (already predicted) orthogonal matrix: as one goes towards higher values of $G\mu$, $z_*$ decreases or, in other words, $T_*$ increases. The blue arm is of great interest. If one has a completely determined orthogonal matrix, the NGWC points can be determined from Fig.\ref{fig1} with the sets of $M_1$ and $T_*$. This means the blue arm is a line of predictions for the GW experiments, i.e., we can predict at which amplitude and at which frequency the spectral break would occur. The triangle as a whole can be pushed towards larger $T_*$ values by increasing $y_{ij}$. This implies that seesaw models which exhibit an orthogonal matrix with large imaginary-part entries would likely show the spectral break at higher frequencies and therefore may not be tested with the planned detectors. These models are dubbed `boosted' seesaw models, where the light-neutrino basis vectors and heavy-neutrino basis vectors are strongly misaligned\cite{boost}.
On the other hand, models with flavour structures close to `form dominance'\cite{Chen:2009um}, which typically predict a real orthogonal matrix ($\Omega=P$, where $P$ is a permutation matrix), would show a spectral break within the frequency range of the current or planned GW detectors. \\ {\it Acknowledgements:} RS is supported by the MSCA-IF IV FZU - CZ.02.2.69/0.0/0.0/$20\_079$/0017754 project and acknowledges the European Structural and Investment Fund and the Czech Ministry of Education, Youth and Sports. RS thanks Graham M. Shore and Pasquale Di Bari for useful discussions on gravitational leptogenesis and boosted seesaw models, respectively, Kai Schmitz for a helpful chat on Ref.\cite{lepcs3}, and Sabir Ramazanov for discussions on cosmic strings in general.
\section{Introduction} Observations and climate-model simulations show a pronounced land-ocean warming contrast in response to a positive radiative forcing, with land temperatures increasing more than ocean temperatures \citep{manabe_1991, sutton_2007, byrne_ogorman_2013}. A land-ocean contrast is also found for the response of near-surface relative humidity in climate-model simulations, with small increases in relative humidity over ocean and larger decreases in relative humidity over continents \citep{ogorman_muller_2010,laine_et_al_2014,fu_feng_2014}. This land-ocean contrast in changes in relative humidity is clearly evident in Fig. \ref{fig:delta_hurs_CMIP5} for simulations from the Coupled Model Intercomparison Project 5 (CMIP5) that will be discussed in detail in sections 3 and 4. However, the long-term observational trends in near-surface relative humidity are not yet clear. Based on observations over 1975-2005, \citet{dai_2006} found a decreasing trend in surface relative humidity over ocean, but no significant trend over land. Later studies have found a sharp decrease in land relative humidity since 2000 \citep{simmons_et_al_2010, willett_et_al_2014}, and this is more consistent with the long-term climate-model projections. Changes in land relative humidity are important for the land-ocean warming contrast \citep{byrne_ogorman_2013,byrne_ogorman_2013b} and for modulating changes in precipitation over land under global warming \citep{chadwick_et_al_2013, byrne_ogorman_2015}, and they may affect projected increases in heat stress \citep[e.g.,][]{sherwood_huber_2010}. Despite this importance, a clear understanding of what controls land relative humidity is lacking. Here, we introduce a conceptual model based on boundary-layer moisture balance to analyze changes in land relative humidity, and we apply this model to idealized and full-complexity general circulation model (GCM) simulations.
\begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{delta_RH_lat_lon_and_zon_avgs_2_panel.eps} \caption{Multimodel-mean changes in surface-air relative humidity between the historical (1976-2005) and RCP8.5 (2070-2099) CMIP5 simulations, normalized by the global- and multimodel-mean surface-air temperature changes [(a) and (b)]. For (b), the zonal averages over all ocean (blue) and land (red) gridpoints are shown at each latitude. } \label{fig:delta_hurs_CMIP5} \end{figure} We first review the energy balance argument for the small increase in relative humidity over ocean \citep{held_soden_2000, schneider_2010} and why it does not apply over land. Ocean evaporation is strongly influenced by the degree of sub-saturation of near-surface air, and changes in ocean relative humidity with warming may be estimated from the changes in evaporation using the bulk formula for evaporation, provided that the air-surface temperature disequilibrium and the changes in the exchange coefficient and surface winds are negligible. \citet{schneider_2010} used this approach, together with an energetic estimate for changes in evaporation, to yield an increase in ocean relative humidity with warming of order $1\%\,\rm{K}^{-1}$ (here and throughout this chapter, relative humidity changes are expressed as absolute rather than fractional changes). The simulated increases over ocean are generally smaller (Fig.~\ref{fig:delta_hurs_CMIP5}), indicating that effects such as changes in surface winds must also play a role \citep[e.g.,][]{richter_xie_2008}. This approach to understanding the increases in ocean relative humidity under warming relies on there being a simple energetic constraint on changes in evaporation, and these evaporation changes being easily related to changes in temperature and surface-air relative humidity. 
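The order-of-magnitude estimate can be sketched numerically. Assuming the bulk formula reduces to $E\propto(1-H_{\rm O})\,q_{\rm sat}(T_{\rm O})$ (neglecting air-surface disequilibrium and changes in winds and exchange coefficient), and taking an illustrative, assumed energetically constrained evaporation increase of about $2\%\,{\rm K}^{-1}$ (this number is our placeholder, not from the text), the sub-saturation $1-H_{\rm O}$ must shrink because $q_{\rm sat}$ grows at the faster Clausius-Clapeyron rate:

```python
import math

def q_sat(T, p=1.0e5):
    """Saturation specific humidity (kg/kg) from a Tetens-type fit; T in K, p in Pa."""
    e_s = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))  # sat. vapor pressure, Pa
    return 0.622 * e_s / p

def ocean_rh_change(T_O, dT, H_O=0.8, evap_increase_per_K=0.02):
    """Ocean RH change per the bulk-formula argument:
    E ~ (1 - H_O) * q_sat(T_O), with E rising ~2% per K (assumed value)."""
    subsat = ((1.0 - H_O) * (1.0 + evap_increase_per_K * dT)
              * q_sat(T_O) / q_sat(T_O + dT))
    return (1.0 - subsat) - H_O
```

With these illustrative numbers the implied increase is just under $1\%\,{\rm K}^{-1}$, consistent with the order-of-magnitude estimate quoted above.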
These two conditions are generally not valid over land, where the moisture supply for evapotranspiration is limited and varies greatly across continents \citep{de_jeu_et_al_2008}. The spatially inhomogeneous response of soil moisture to global warming, in addition to changes in land use and stomatal conductance under elevated $\rm{CO}_2$ concentrations \citep[e.g.,][]{sellers_et_al_1996,piao_et_al_2007,cao_et_al_2010,andrews_et_al_2011,cronin_2013}, leads to land evapotranspiration changes with substantial spatial structure \citep{laine_et_al_2014}, and the near-surface relative humidity is merely one of many factors influencing evapotranspiration changes. To understand the simulated decreases in land relative humidity under global warming, we take a different approach following \citet{rowell_jones_2006}, \citet{simmons_et_al_2010}, \citet{ogorman_muller_2010} and \citet{sherwood_fu_2014}, who discuss how the land boundary-layer specific humidity is influenced by the moisture transport from the ocean. Under global warming, as continents warm more rapidly than oceans, the rate of increase of the moisture transport from ocean to land cannot keep pace with the faster increase in saturation specific humidity over land, implying a drop in land relative humidity. This explanation is attractive because it relies on robust features of the global warming response, namely the small changes in relative humidity over ocean and the stronger surface warming over land. Indeed, the most recent Intergovernmental Panel on Climate Change (IPCC) report cites this argument to explain both observed and projected land relative humidity decreases with warming \citep[][see Section 12.4.5.1 therein]{ipcc_ar5_wg1_chap_12}. However, this explanation has not been investigated quantitatively using either observations or climate models.
Thus, it is not clear to what extent changes in land relative humidity can be understood as a simple consequence of the land-ocean warming contrast and changes in moisture transport from ocean to land. Indeed, changes in evapotranspiration resulting from soil moisture decreases \citep{berg_et_al_2016} and stomatal closure \citep{cao_et_al_2010} have been shown to influence land relative humidity, though such effects are not considered in the simple argument outlined above. \begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{delta_RH_L_schematic_one_theory.eps} \caption{Schematic diagram of the processes involved in the moisture budget of the boundary layer above a land surface [see text and equation (\ref{eqn:box_model_1}) for definitions of the various quantities].} \label{fig:schematic_theory_1} \end{figure} Changes in land surface properties that affect evapotranspiration may affect relative humidity through induced changes in surface-air temperature as well as through changes in specific humidity. Previous studies have shown that soil drying or decreases in stomatal conductance lead to an increase in surface temperature \citep[e.g.,][]{sellers_et_al_1996,seneviratne_et_al_2010,cao_et_al_2010,andrews_et_al_2011, seneviratne_et_al_2013}, and this is typically argued to be a result of decreased evaporative cooling of the land surface. But it is difficult to make a quantitative theory for the increase in temperature from the surface energy budget because the surface energy fluxes depend on multiple factors over land, and the effect of increased surface sensible heat flux on surface-air temperature cannot be estimated without taking into account atmospheric processes such as convection.
The changes in the land surface-air temperature may instead be related in a straightforward way to changes in surface relative humidity under climate change by using the fact that atmospheric processes constrain the surface-air equivalent potential temperature \citep{byrne_ogorman_2013,byrne_ogorman_2013b}. In particular, changes in surface-air temperature and relative humidity combine to give approximately equal increases in equivalent potential temperature over land and ocean. This link between land and ocean is a result of atmospheric dynamical constraints on vertical and horizontal temperature gradients in the atmosphere [see also \citet{joshi_2008_2}], and it implies that there can be a strong feedback over land between decreases in relative humidity and increases in surface-air temperature. This temperature-relative humidity feedback is distinct from soil moisture-temperature or soil moisture-precipitation feedbacks that may also be operating \citep[e.g.,][]{seneviratne_et_al_2010}, and here we assess its importance for decreases in land relative humidity using the atmospheric dynamic constraint discussed above. We first derive a conceptual box model for the moisture balance of the land boundary layer (section 2). We apply the box model to idealized GCM and Coupled Model Intercomparison Project 5 (CMIP5) simulations, using first a simplified ocean-influence version of the box model (section 3) and then taking into account evapotranspiration (section 4). We then discuss the feedback between temperature and relative humidity changes over land (section 5), before summarizing our results (section 6). \section{Box model of the boundary-layer moisture balance over land} The box model is of the moisture balance of the atmospheric boundary layer above land (see schematic, Fig. \ref{fig:schematic_theory_1}).
The specific humidity of the boundary layer is assumed to be determined by three processes: (i) horizontal mixing with the boundary layer over ocean (e.g., via mean-wind advection, diurnal sea breeze), (ii) vertical mixing with the free troposphere (via large-scale vertical motion, turbulent entrainment, shallow and deep convection), and (iii) evapotranspiration. The time evolution of the land boundary-layer specific humidity, $q_{\rm{L}}$, can then be written as: \begin{equation} l h \frac{d q_{\rm{L}}}{dt} = h v_1 (q_{\rm{O}} - q_{\rm{L}}) + l v_2 (q_{\rm{FT}} - q_{\rm{L}}) + \frac{l}{\rho_a}E_{\rm{L}}, \label{eqn:box_model_1} \end{equation} where $l$ is the horizontal length scale of the land, $h$ is the depth of the boundary layer over land, $v_1$ and $v_2$ are horizontal and vertical mixing velocities, respectively, $q_{\rm{O}}$ is the specific humidity of the ocean boundary layer, $q_{\rm{FT}}$ is the specific humidity of the free troposphere, $\rho_a$ is the density of air, and $E_{\rm{L}}$ is the evapotranspiration from the land surface. For convenience, we define $\tau_1 = l/v_1$ and $\tau_2 = h/v_2$ as horizontal and vertical mixing timescales, respectively. Lateral advection in the free-troposphere is assumed to lead to equal $q_{\rm{FT}}$ over land and ocean. We further assume that the free-tropospheric specific humidity is proportional to the ocean boundary-layer specific humidity, i.e. $q_{\rm{FT}} = \lambda q_{\rm{O}}$, where $\lambda$ is the constant of proportionality. 
Taking the steady-state solution of (\ref{eqn:box_model_1}) then gives \begin{equation} q_{\rm{L}} = \underbrace{\frac{\lambda \tau_1 + \tau_2}{\tau_1 + \tau_2}}_{\gamma} q_{\rm{O}} + \underbrace{\frac{\tau_1 \tau_2}{\rho_a h (\tau_1 + \tau_2)} E_{\rm{L}}}_{q_{\rm{E}}}, \label{eqn:new_box_model_2} \end{equation} where we have defined the parameter $\gamma = (\lambda \tau_1 + \tau_2)/(\tau_1 + \tau_2)$ to quantify the influence of ocean specific humidity on land specific humidity, and where $q_{\rm{E}} = (\tau_1 \tau_2) E_{\rm{L}} /(\rho_a h (\tau_1 + \tau_2))$ represents the influence of evapotranspiration on land specific humidity. \begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{iGCM_continent_schematic.eps} \caption{Continental configuration in the idealized GCM simulations. A subtropical continent spans $20^{\circ}\rm{N}$ to $40^{\circ}\rm{N}$ and $0^{\circ}\rm{E}$ to $120^{\circ}\rm{E}$, with a slab ocean elsewhere.} \label{fig:continent_schematic} \end{figure} Other idealized mixed-layer models, generally more complicated than the box model described above, have been previously developed to study land-atmosphere interactions \citep[e.g.,][]{brubaker_entekhabi_1995,betts_2000,joshi_2008_2}. Our box model must be taken as a time average over the diurnal cycle in the boundary layer [see discussion in \citet{betts_2000}]. A more complicated model could include the strong diurnal cycle over land, as well as explicitly accounting for the effects of the difference in maximum boundary layer depth over land and ocean \citep[e.g.,][]{von_engeln_teixeira_2013}. \section{Ocean-influence box model} For the simplest version of our box model, the ``ocean-influence box model'', we assume that the influence of evapotranspiration on the boundary-layer moisture balance over land is negligible. Setting $E_{\rm{L}}=0$ in (\ref{eqn:new_box_model_2}), we find: \begin{equation} q_{\rm{L}} \approx \gamma q_{\rm{O}}. 
\label{eqn:box_model_2} \end{equation} The parameter $\gamma$ may remain approximately constant under climate change even if there are changes in the mixing time scales $\tau_1$ and $\tau_2$. For example, if the overall tropical circulation and convective mass fluxes slow down with climate warming \citep[e.g.,][]{held_soden_2006,vecchi_soden_2007} such that both mixing time scales increase by the same factor, then this will not cause $\gamma$ to change. Assuming negligible changes in $\gamma$, we can write: \begin{equation} \delta q_{\rm{L}} \approx \gamma \delta q_{\rm{O}}, \label{eqn:box_model_3} \end{equation} where $\delta$ denotes the change in climate. The same results would also follow (with a different definition of $\gamma$) if the influence of evapotranspiration on land specific humidity, $q_{\rm{E}}$, is not neglected but is instead assumed to scale with land specific humidity. Note that $q_{\rm{E}}$ scaling with land specific humidity is not the same as the evapotranspiration rate scaling with land specific humidity. In particular, $q_{\rm{E}}$ depends on $(\tau_1 \tau_2)/(\tau_1 + \tau_2)$, and this factor would change even if both $\tau_1$ and $\tau_2$ increase by the same factor. The ocean-influence box model suggests a straightforward hypothesis: that the ratio of land to ocean specific humidity remains approximately constant as the climate changes or, equivalently, that fractional changes in specific humidity over land and ocean are equal: \begin{equation} \frac{\delta q_{\rm{L}}}{q_{\rm{L}}} \approx \frac{\delta q_{\rm{O}}}{q_{\rm{O}}}. \label{eqn:constant_q_ratio} \end{equation} By contrast, the fractional changes in saturation specific humidity depend on the local temperature change and will be larger over land than over ocean. To the extent that (\ref{eqn:constant_q_ratio}) holds, it is clear that if land warms more than ocean, and ocean relative humidity does not change greatly, then the land relative humidity will decrease.
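Both the steady state (\ref{eqn:new_box_model_2}) and its consequence for land relative humidity are simple enough to check numerically. The sketch below first verifies the closed-form steady state against a direct time integration of (\ref{eqn:box_model_1}), then imposes equal fractional humidity changes, (\ref{eqn:constant_q_ratio}), with fixed ocean relative humidity and a Tetens-type Clausius-Clapeyron fit; all parameter values are hypothetical placeholders:

```python
import math

def q_land_steady(q_O, lam, tau1, tau2, E_term=0.0):
    """Closed-form steady state, Eq. (2): q_L = gamma*q_O + q_E.
    E_term stands for E_L/(rho_a*h); set to 0 for the ocean-influence model."""
    gamma = (lam * tau1 + tau2) / (tau1 + tau2)
    q_E = tau1 * tau2 / (tau1 + tau2) * E_term
    return gamma * q_O + q_E

def q_land_evolved(q_O, lam, tau1, tau2, E_term=0.0, dt=0.01, t_end=200.0):
    """Forward-Euler integration of Eq. (1), divided through by l*h."""
    q_FT = lam * q_O  # free troposphere tied to the ocean boundary layer
    q_L, t = 0.0, 0.0
    while t < t_end:
        dq_dt = (q_O - q_L) / tau1 + (q_FT - q_L) / tau2 + E_term
        q_L += dt * dq_dt
        t += dt
    return q_L

def q_sat(T, p=1.0e5):
    """Saturation specific humidity (kg/kg) from a Tetens-type fit; T in K, p in Pa."""
    e_s = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))
    return 0.622 * e_s / p

def land_rh_change(T_L, dT_L, T_O, dT_O, H_L=0.7):
    """Land relative humidity change implied by delta q_L/q_L = delta q_O/q_O,
    with ocean relative humidity held fixed (so q_O scales with q_sat(T_O))."""
    frac_dq_O = q_sat(T_O + dT_O) / q_sat(T_O) - 1.0
    q_L_new = H_L * q_sat(T_L) * (1.0 + frac_dq_O)
    return q_L_new / q_sat(T_L + dT_L) - H_L
```

With, for instance, 1.5 K of land warming for every 1 K of ocean warming, the implied land relative humidity change is roughly $-2\%$, illustrating how amplified land warming alone drives the decrease.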
We now assess the applicability of this ocean-influence box model result to idealized and comprehensive GCM simulations. We use (\ref{eqn:box_model_3}) to estimate the change in land relative humidity under climate change given the changes in land temperature and ocean specific humidity, calculating $\gamma$ as the ratio of land to ocean specific humidities in the control climate. \begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{delta_q_L_and_delta_RH_L_vs_T_ocean_iGCM_sim_and_theory_1.eps} \caption{Changes over land in (a) surface-air specific humidity and (b) surface-air relative humidity between pairs of idealized GCM simulations with a subtropical continent. The relative humidity changes are normalized by the land surface-air temperature change. Solid black lines denote the simulated changes and the dashed lines represent the estimated changes using the ocean-influence box model (\ref{eqn:box_model_3}). Pseudo relative humidities are shown (see text), but the blue line in (b) shows the change in the mean of the actual relative humidity for comparison. The red line in (a) indicates what the change in surface-air land specific humidity would be if the land pseudo relative humidity did not change (i.e. for each pair of simulations, the land specific humidity change if the land pseudo relative humidity is fixed at its value in the colder simulation).} \label{fig:delta_q_L_and_RH_L_theory_1} \end{figure} \subsection{Application of ocean-influence box model to idealized GCM simulations} \label{sect:iGCM_analysis_theory_1} The ocean-influence box model is first applied to idealized GCM simulations over a wide range of climates. The idealized GCM is similar to that of \citet{frierson_2006} and \citet{frierson_2007}, with specific details as in \citet{byrne_ogorman_2013} and \citet{ogorman_2008}.
It is based on a spectral version of the GFDL dynamical core, with a two-stream gray radiation scheme, no cloud or water vapor radiative feedbacks, and the simplified moist convection scheme of \citet{frierson_2007}. The simulations have a subtropical continent spanning $20^{\circ}\rm{N}$ to $40^{\circ}\rm{N}$ and $0^{\circ}\rm{E}$ to $120^{\circ}\rm{E}$, with a slab ocean elsewhere (Fig. \ref{fig:continent_schematic}). The land surface hydrology is simulated using a simple bucket model \citep{manabe_1969} and all other land surface properties are identical to those of the slab ocean. We vary the climate over a wide range of global-mean surface-air temperatures (between 260K and 317K) by changing the longwave optical thickness, which is analogous to varying the concentrations of $\rm{CO}_2$ and other greenhouse gases. The longwave optical thickness is specified by $\tau = \alpha \tau_{\rm{ref}}$, where $\tau_{\rm{ref}}$ is a reference optical thickness distribution, and we analyze simulations with 10 different values of the parameter $\alpha$\footnote{Simulations are performed with the following $\alpha$ values: 0.2, 0.4, 0.7, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, and 6.0.}. We present results based on time averages over 4000 days. When applying the box model to the simulations, we assume that the specific humidity is well-mixed in the boundary layer and use the surface-air specific humidities to represent the entire boundary layer. In the case of the idealized GCM, surface-air quantities are taken to be those of the lowest atmospheric level, $\sigma = 0.989$, where $\sigma=p/p_s$, and $p$ and $p_s$ are the pressure and surface pressure, respectively. The results are qualitatively similar when specific humidities are instead averaged between the surface and $\sigma = 0.9$ (not shown). Land values are averaged (with area weighting) over the entire subtropical continent, and the ocean averages are taken over neighboring ocean at the same latitudes, i.e. 
from $20^{\circ}\rm{N}$ to $40^{\circ}\rm{N}$ and $120^{\circ}\rm{E}$ to $360^{\circ}\rm{E}$.\footnote{Our results are almost identical if we calculate ocean averages using the ``control'' Southern Hemisphere as in \citet{byrne_ogorman_2013}, i.e. ocean values averaged over $20^{\circ}\rm{S}$ to $40^{\circ}\rm{S}$ and $0^{\circ}\rm{E}$ to $120^{\circ}\rm{E}$. We choose to average over neighboring ocean in this study because the box model involves advection of moisture from ocean to land, and this naturally suggests averaging over ocean adjacent to the land continent.} \begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{gamma_and_epsilon_vs_T_ocean_nonlinear_new.eps} \caption{Parameters for the box models applied to the idealized GCM simulations. (a) The $\gamma$ parameter for the ocean-influence box model (solid black line) and the full box model (dashed black line). (b) The $q_{\rm{E}}$ parameter for the full box model (dashed black line) and the surface-air land specific humidity (red line) scaled by a factor of 0.25 so that it roughly matches the magnitude of $q_{\rm{E}}$.} \label{fig:gamma_and_epsilon} \end{figure} To apply the ocean-influence box model (\ref{eqn:box_model_3}), we calculate the $\gamma$ parameter (the ratio of land to ocean specific humidities) for each simulation (except the warmest). We then estimate the change in surface-air land specific humidity between pairs of nearest-neighbor simulations as a function of $\gamma$ and the changes in ocean specific humidity, where $\gamma$ is set to its value in the colder of the two simulations and assumed to be constant as the climate changes. Land surface-air specific humidity changes between the pairs of idealized GCM simulations, along with the estimates of these changes using (\ref{eqn:box_model_3}), are plotted against the mid-point ocean temperature for each pair in Fig. \ref{fig:delta_q_L_and_RH_L_theory_1}.
The simulated specific humidity changes are well-captured by the ocean-influence box model (\ref{eqn:box_model_3}) over the full range of climates (Fig. \ref{fig:delta_q_L_and_RH_L_theory_1}a). The increases in specific humidity, $\delta q_{\rm{L}}$, generally increase in magnitude as the climate warms (Fig. \ref{fig:delta_q_L_and_RH_L_theory_1}a), and they are generally smaller than what would occur if land relative humidity remained constant (see the red line in Fig. \ref{fig:delta_q_L_and_RH_L_theory_1}a). The small deviations from the prediction of the ocean-influence box model (\ref{eqn:box_model_3}) could be due to the influence of evapotranspiration, changes in circulation patterns, or changes in the ratio $\lambda$ of free-tropospheric to surface-air ocean specific humidity, which is assumed to be constant in the box model. The parameter $\lambda$ might be expected to increase with warming because the fractional rate of increase in saturation vapor pressure with temperature is higher at the lower temperatures that occur further up in the atmosphere, and because there is enhanced warming aloft at low latitudes in simulations of global warming \citep[e.g.,][]{santer_et_al_2005}; such effects could be included in a more complicated box model. The $\gamma$ parameter is relatively constant over the wide range of climates simulated (Fig. \ref{fig:gamma_and_epsilon}a), consistent with our neglect of changes in $\gamma$ when deriving (\ref{eqn:box_model_3}), with a mean value of 0.61 and minimum and maximum values of 0.56 and 0.68, respectively. Thus, for the subtropical continent in this idealized GCM, land specific humidity is approximately 60\% of the neighboring ocean specific humidity. The box model (\ref{eqn:box_model_3}) predicts the changes in mean specific humidity, which must be combined with the mean temperatures to estimate the relative humidity changes.
However, because of the nonlinearity of the Clausius-Clapeyron relation, it is not possible to reproduce the mean relative humidity using the mean temperature and mean specific humidity. We instead use a pseudo relative humidity, defined in terms of the mean temperature, specific humidity, and pressure as $H(\overline{T},\overline{p},\overline{q})$ where the bars denote time and spatial means, and $H(T,p,q)$ is the thermodynamic relationship between relative humidity, temperature, pressure and specific humidity. For convenience we will refer to this pseudo relative humidity as the relative humidity, but we also show the actual relative humidity changes for comparison in Figs. \ref{fig:delta_q_L_and_RH_L_theory_1}b and \ref{fig:delta_q_RH_CMIP5_sim_theory}b. The box-model estimate of the relative humidity changes is somewhat less accurate than for specific humidity, but the decreases in relative humidity with warming and the decreasing magnitude of these changes as the climate warms are both captured (Fig. \ref{fig:delta_q_L_and_RH_L_theory_1}b). Given the simplicity of the ocean-influence box model, its ability to describe the behavior of land relative humidity in this idealized GCM is impressive. However, the geometry and surface properties of Earth's land masses are more varied and complex than the idealized continent considered, and factors such as orography or cloud feedbacks that are not included in the idealized GCM could alter the surface humidity response. Therefore, to investigate the changes in land relative humidity further, we turn to more comprehensive simulations from the CMIP5 archive. 
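The pseudo relative humidity $H(\overline{T},\overline{p},\overline{q})$ can be sketched as follows. This is a hedged illustration only: it assumes a Bolton-type saturation vapor pressure formula, whereas the GCMs analyzed here may use different formulations (which is precisely the complication the pseudo quantity sidesteps).

```python
import math

# Sketch of the pseudo relative humidity H(T_mean, p_mean, q_mean), assuming a
# Bolton (1980)-style saturation vapor pressure; the models' exact formulations differ.

def saturation_vapor_pressure(T_celsius):
    """Approximate saturation vapor pressure over water in hPa."""
    return 6.112 * math.exp(17.67 * T_celsius / (T_celsius + 243.5))

def pseudo_relative_humidity(T_mean, p_mean, q_mean):
    """Pseudo RH from time/spatial-mean temperature (K), pressure (hPa),
    and specific humidity (kg/kg)."""
    eps = 0.622                                  # ratio of gas constants R_d / R_v
    e_s = saturation_vapor_pressure(T_mean - 273.15)
    q_sat = eps * e_s / (p_mean - (1.0 - eps) * e_s)
    return q_mean / q_sat

# Illustrative means: 298.15 K, 1000 hPa, 10 g/kg gives a pseudo RH near 50%
rh = pseudo_relative_humidity(298.15, 1000.0, 0.010)
```

Because $q_{\rm{sat}}$ is a convex function of temperature, this quantity evaluated at the means generally differs from the time mean of the instantaneous relative humidity, which is why the two are compared explicitly in the figures.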
\begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{delta_SH_land_and_delta_RH_CMIP5_and_estimates_vs_lat.eps} \caption{Multimodel-mean changes between the historical and RCP8.5 simulations in zonal and time mean (a) surface-air land specific humidity and (b) surface-air land relative humidity normalized by the global-mean surface-air temperature change. Solid black lines denote the simulated changes and dashed lines denote the estimated changes using the box model (\ref{eqn:box_model_3}). For (a), the red line indicates the change in surface-air land specific humidity for constant land pseudo relative humidity (i.e. for land pseudo relative humidity fixed at the values in the historical simulations). Pseudo relative humidities are shown, but the blue line in (b) shows the simulated mean changes for the relative humidity variable outputted by the models for comparison.} \label{fig:delta_q_RH_CMIP5_sim_theory} \end{figure} \subsection{Application of ocean-influence box model to CMIP5 simulations} \label{sect:CMIP5_analysis_theory_1} We apply the ocean-influence box model to changes in land surface-air relative humidity between 30 year time averages in the historical (1976-2005) and RCP8.5 (2070-2099) simulations from the CMIP5 archive \citep{taylor_et_al_bams_2012}. We analyze 19 models in total\footnote{The CMIP5 models considered are: ACCESS1-0, ACCESS1-3, BCC-CSM1-1, BCC-CSM1-1-M, BNU-ESM, CanESM2, CNRM-CM5, CSIRO-Mk3-6-0, GFDL-CM3, GFDL-ESM2M, INMCM4, IPSL-CM5A-LR, IPSL-CM5A-MR, IPSL-CM5B-LR, MIROC-ESM, MIROC-ESM-CHEM, MIROC5, MRI-CGCM3, and NorESM1-M. The variables used in this paper have the following names in the CMIP5 archive: evaporation (\textit{evspsbl}), surface-air specific humidity (\textit{huss}), surface-air temperature (\textit{tas}), and surface-air relative humidity (\textit{hurs}).}, and in each case the r1i1p1 ensemble member is used. 
As for the idealized GCM analysis, we assume moisture is well mixed in the boundary layer and take surface-air specific humidity to be representative of the boundary layer (using the average specific humidity between the surface and 900hPa gives similar results). The specific humidities in the box model are identified with the zonal and time mean specific humidities (over land or ocean) for each latitude and for each of the (12) months of the year in the CMIP5 simulations. We calculate $\gamma$ as the ratio of the mean land and ocean specific humidities at each latitude and for each month of the year in the historical simulations. By computing $\gamma$ in this way, we are assuming that the horizontal exchange of moisture between land and ocean, described by the box model, is taking place predominantly in the zonal direction. Using the diagnosed $\gamma$, and assuming it does not change as the climate warms, changes in mean surface-air land specific humidity are estimated for each latitude and each month of the year using (\ref{eqn:box_model_3}) and the changes in mean ocean specific humidity. \begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{gamma_epsilon_zeta_vs_lat_both_theories_CMIP5.eps} \caption{Multimodel means in the CMIP5 simulations of the (a) $\gamma$ parameter for the ocean-influence box model (solid black line) and for the regression approach including evapotranspiration (dashed black line), and (b) the regression coefficient $\epsilon$. The parameter $\gamma$ is evaluated based on the historical simulations for the ocean-influence box model, and $\gamma$ and $\epsilon$ are evaluated using equation \ref{eqn:regression} for the regression approach.} \label{fig:gamma_eps_zeta_vs_lat} \end{figure} The simulated and estimated annual mean changes in land specific humidity at each latitude are shown in Fig. 
\ref{fig:delta_q_RH_CMIP5_sim_theory}a, and the magnitude and latitudinal variations of the changes are reasonably well captured by the ocean-influence box model, including the flat region in the Northern Hemisphere midlatitudes. The magnitude of the increases is underestimated at most latitudes, which, as discussed for the idealized GCM simulations, could be partly due to increases in the parameter $\lambda$ relating free-tropospheric specific humidity to ocean surface specific humidity; other aspects of the ocean-influence box model, such as neglecting the influence of evapotranspiration, are also likely to play a role. The parameter $\gamma$ (i.e. the ratio of land and ocean specific humidities) is shown in Fig.~\ref{fig:gamma_eps_zeta_vs_lat}a. It has a global, annual, and multimodel mean value of 0.74, which is somewhat larger than the value found in the idealized GCM simulations. This is not surprising given that the land in the idealized simulations is a subtropical continent, which is generally drier relative to neighboring oceans than continents at lower or higher latitudes. Together with the simulated changes in monthly-mean surface-air land temperature, the estimated changes in specific humidity are used to estimate the land pseudo relative humidity changes. As for the idealized GCM analysis, it is necessary to compare pseudo relative humidities because of the difficulty in converting time and zonal mean specific humidities estimated by the box model to relative humidities. The use of pseudo relative humidities also avoids the complication that different climate models use different saturation vapor pressure formulations. The changes in pseudo relative humidity are calculated for each month of the year before taking the annual mean, both for the simulated changes and for the changes estimated by the box model.
The changes in pseudo relative humidity and model-outputted relative humidity are somewhat similar at lower latitudes but quite different at higher latitudes (cf. blue and black solid lines in Fig. \ref{fig:delta_q_RH_CMIP5_sim_theory}b), where the differing computations of saturation vapor pressure over ice in the various models become important and there is larger variability. Nonetheless, the pseudo relative humidity is a useful measure of subsaturation, and we will refer to pseudo relative humidity as relative humidity for simplicity. The simulated changes in (pseudo) land relative humidity are quite well described by the ocean-influence box model in the Southern Hemisphere and at lower latitudes (Fig. \ref{fig:delta_q_RH_CMIP5_sim_theory}b). The estimated and simulated global-mean land relative humidity changes in the various climate models are also correlated, with a correlation coefficient of 0.66. Due to the general underestimation of the specific humidity increases by the ocean-influence box model (Fig. \ref{fig:delta_q_RH_CMIP5_sim_theory}a), the relative humidity decreases are overestimated (Fig. \ref{fig:delta_q_RH_CMIP5_sim_theory}b), with a large discrepancy in the mid- to high-latitudes of the Northern Hemisphere. At these latitudes, there is more land than ocean, and it is likely that changes in ocean specific humidity have a weak influence on the specific humidity in the interior of large continents, or that meridional moisture transports from ocean at other latitudes become more important. Changes in relative humidity in these inner continental regions are likely to be more strongly influenced by local evapotranspiration changes, and they could be influenced by shifts in the iceline or changes in soil moisture or vegetation, effects which are not considered in the ocean-influence box model.
\section{Influence of evapotranspiration} \label{sect:full_model_RH_land} The ocean-influence box model captures much (but not all) of the behavior in vastly more complex GCMs. However, the moisture balance of the land boundary layer is also affected by evapotranspiration, and changes in land surface properties, such as soil moisture or stomatal conductance, can affect evapotranspiration in the absence of any changes in the overlying atmosphere. For example, changes in stomatal conductance under elevated $\rm{CO}_2$ conditions have been shown to reduce both evapotranspiration and land relative humidity without changes in ocean humidity \citep{andrews_et_al_2011}. \begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{delta_q_L_and_delta_RH_L_vs_T_ocean_iGCM_sim_and_theory_2_nonlinear_new.eps} \caption{As in Figure \ref{fig:delta_q_L_and_RH_L_theory_1}, but here showing estimates of the surface-air specific and relative humidity changes from the full box model (\ref{eqn:new_box_model_3}). The contributions due to ocean specific humidity changes (blue dashed lines) and evapotranspiration changes (green dashed lines) are also shown. The contributions to changes in relative humidity are calculated using (\ref{eqn:appendix_3}). Pseudo relative humidities are shown in this figure (the changes in actual relative humidity are shown by the blue line in Fig.~\ref{fig:delta_q_L_and_RH_L_theory_1}b.)} \label{fig:delta_q_L_and_RH_L_theory_2} \end{figure} We turn to the full box model (\ref{eqn:new_box_model_2}), which includes the effects of evapotranspiration. We assume once more that changes in $\gamma$ are negligible, such that: \begin{equation} \delta q_{\rm{L}} \approx \gamma \delta q_{\rm{O}} + \delta q_{\rm{E}}.
\label{eqn:new_box_model_3} \end{equation} There are two terms contributing to changes in $q_{\rm{L}}$ in (\ref{eqn:new_box_model_3}): the term arising from changes in ocean specific humidity, $\gamma \delta q_{\rm{O}}$, and an additional land evapotranspiration term, $\delta q_{\rm{E}}$. We next assess their relative importance in controlling changes in land humidity in the idealized GCM and CMIP5 simulations. \subsection{Application of full box model to idealized GCM simulations} \label{sect:theory_2_simulation_analysis} We first examine the idealized GCM simulations with a subtropical continent. In contrast to the ocean-influence box model (\ref{eqn:box_model_3}), for which the single parameter $\gamma$ could be easily estimated in the control simulation in each case, the full model (\ref{eqn:new_box_model_3}) has two parameters to be estimated, $\gamma$ and $q_{\rm{E}}$. To estimate these parameters, we perform an additional set of simulations with the same longwave optical thicknesses as in the 10 simulations described previously but with the evapotranspiration set to zero over land. Specifying the land evapotranspiration in this way is analogous to drying out the soil. (Note that the change in evapotranspiration affects both the humidity of the atmosphere and the surface energy balance.) Using these additional simulations with $E_{\rm{L}} = 0$, we can estimate $\gamma$ for each climate using (\ref{eqn:new_box_model_2}): $\gamma = q_{\rm{L},E_{\rm{L}}=0} / q_{\rm{O},E_{\rm{L}}=0}$. The $\gamma$ values obtained are smaller than those calculated from the control climate for the ocean-influence box model (Fig. \ref{fig:gamma_and_epsilon}a) because the contribution of evapotranspiration to the land specific humidity is now also taken into account. We then use these $\gamma$ values to estimate $q_{\rm{E}}$ for the original simulations with dynamic land surface hydrology: $q_{\rm{E}} = q_{\rm{L}} - \gamma q_{\rm{O}}$.
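The two-step parameter estimation described above can be sketched as follows; the humidity values are hypothetical, chosen only to make the arithmetic concrete.

```python
# Sketch of the two-step parameter estimation for the full box model.
# Step 1 uses the companion simulation with land evapotranspiration set to zero;
# step 2 uses the original simulation. All humidity values here are hypothetical.

def estimate_gamma_and_qE(q_land_noevap, q_ocean_noevap, q_land, q_ocean):
    gamma = q_land_noevap / q_ocean_noevap   # with E_L = 0, q_L = gamma * q_O
    q_E = q_land - gamma * q_ocean           # residual attributed to evapotranspiration
    return gamma, q_E

# Illustrative specific humidities in g/kg:
gamma, q_E = estimate_gamma_and_qE(q_land_noevap=4.0, q_ocean_noevap=10.0,
                                   q_land=6.0, q_ocean=10.0)
# gamma = 0.4 and q_E = 2.0 for these illustrative values
```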
The values of $q_{\rm{E}}$ increase with warming except in hot climates (Fig. \ref{fig:gamma_and_epsilon}b). Interestingly, the influence of evapotranspiration on land specific humidity as measured by $q_{\rm{E}}$ roughly scales with the land specific humidity except in hot climates (compare the dashed black and solid red lines in Fig. \ref{fig:gamma_and_epsilon}b), and this helps to explain why the ocean-influence box model is accurate even though evapotranspiration affects land specific humidity. We then estimate the changes in land specific humidity between pairs of nearest-neighbor simulations from (\ref{eqn:new_box_model_3}) with $\gamma$ assumed to be constant as the climate changes. The simulated and estimated changes in surface-air land specific humidity, along with the contributions due to changes in ocean specific humidity and land evapotranspiration are shown in Fig. \ref{fig:delta_q_L_and_RH_L_theory_2}a. The full box model captures the basic behavior of the land specific humidity changes as a function of temperature, although it is less accurate in hot climates. The contribution from ocean specific humidity changes, $\gamma \delta q_{\rm{O}}$, is larger than the contribution from land evapotranspiration for all climates (Fig. \ref{fig:delta_q_L_and_RH_L_theory_2}a). The changes in simulated land (pseudo) relative humidity are also well captured by the full box model (Fig. \ref{fig:delta_q_L_and_RH_L_theory_2}b). Because relative humidity depends on temperature as well as specific humidity, there is no unique way to use the box model result (\ref{eqn:new_box_model_3}) to decompose changes in land relative humidity into contributions due to ocean specific humidity and land evapotranspiration. However, a decomposition derived in appendix A (equation \ref{eqn:appendix_3}) has several desirable properties. 
According to the decomposition, the contributions to the change in land relative humidity from evapotranspiration and ocean specific humidity are weighted according to their contribution to land specific humidity in the control climate. The change in ocean specific humidity leads to a decrease in land relative humidity if the fractional increase in ocean specific humidity is less than the fractional increase in saturation specific humidity over land. Similarly, evapotranspiration contributes to a decrease in land relative humidity if the fractional increase in $q_{\rm{E}}$ is less than the fractional increase in saturation specific humidity over land. (Note that the fractional change in $q_{\rm{E}}$ is generally different from the fractional change in evapotranspiration.) Using this decomposition of the change in land relative humidity, we find that the land evapotranspiration contribution is of comparable importance to the ocean specific humidity contribution for the idealized GCM simulations (Fig. \ref{fig:delta_q_L_and_RH_L_theory_2}b). By contrast, we found that the contribution of ocean specific humidity was more important than land evapotranspiration when land specific humidity changes were considered. The discrepancy arises because, according to the decomposition (\ref{eqn:appendix_3}), it is not the magnitude of a particular contribution to the change in specific humidity that matters for its contribution to the change in relative humidity, but rather how its fractional changes compare to the fractional changes in saturation specific humidity and how much it contributes to the land specific humidity in the control climate. \subsection{Influence of evapotranspiration in CMIP5 simulations} We now investigate how land evapotranspiration contributes to specific humidity changes in the CMIP5 simulations. 
We need to estimate both $\gamma$ and $q_{\rm{E}}$ for the full box model, but there are no CMIP5 simulations analogous to the zero-evapotranspiration simulations with the idealized GCM described above. Instead, we estimate the influence of evapotranspiration and ocean specific humidity on land specific humidity using a multiple linear regression approach based on the intermodel scatter across the CMIP5 models. We use the regression relationship \begin{equation} \delta q_{\rm{L}} = \gamma \delta q_{\rm{O}} + \epsilon \delta E_{\rm{L}} + \zeta, \label{eqn:regression} \end{equation} which is motivated by the full box model (\ref{eqn:new_box_model_2}), but note that $\epsilon \delta E_{\rm{L}}$ will not generally equal $\delta q_{\rm{E}}$ to the extent that parameters such as $\tau_1$ and $\tau_2$ change with climate, and changes in these parameters will contribute to the remainder term $\zeta$. The variables $\delta q_{\rm{L}}$, $\delta q_{\rm{O}}$, and $\delta E_{\rm{L}}$ are identified as the zonal and time mean for each latitude and month of the year in each model. The regression coefficients $\gamma$, $\epsilon$, and $\zeta$ are then estimated using ordinary least squares regression for each latitude and month of the year. The annual means are shown for $\gamma$ and $\epsilon$ in Fig.~\ref{fig:gamma_eps_zeta_vs_lat} and for $\zeta$ in Fig.~\ref{fig:delta_SH_land_CMIP5_theory_2}. The regression coefficient $\gamma$ has a similar magnitude and latitudinal structure to the $\gamma$ parameter calculated for the ocean-influence box model (Fig.~\ref{fig:gamma_eps_zeta_vs_lat}a). The coefficient $\epsilon$ is positive at all latitudes (Fig.~\ref{fig:gamma_eps_zeta_vs_lat}b), indicating that enhanced evapotranspiration increases the land specific humidity, while the remainder term $\zeta$ (Fig.~\ref{fig:delta_SH_land_CMIP5_theory_2}) is negative at most latitudes.
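The regression step can be sketched as below. The data are synthetic stand-ins for the CMIP5 intermodel scatter at a single latitude and month (the real fit uses the 19 models' diagnosed changes); with noise-free synthetic data, ordinary least squares recovers the prescribed coefficients.

```python
import numpy as np

# Sketch of the regression delta q_L = gamma*delta q_O + eps*delta E_L + zeta,
# fit by ordinary least squares across models. The data here are synthetic
# stand-ins for the intermodel scatter, not CMIP5 output.

rng = np.random.default_rng(0)
n_models = 19
dq_ocean = rng.normal(1.0, 0.2, n_models)      # synthetic delta q_O values
dE_land = rng.normal(0.5, 0.1, n_models)       # synthetic delta E_L values
gamma_true, eps_true, zeta_true = 0.74, 0.3, -0.1
dq_land = gamma_true * dq_ocean + eps_true * dE_land + zeta_true

# Design matrix with columns [delta q_O, delta E_L, intercept]
A = np.column_stack([dq_ocean, dE_land, np.ones(n_models)])
gamma_hat, eps_hat, zeta_hat = np.linalg.lstsq(A, dq_land, rcond=None)[0]
```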
\begin{figure}[t] \centering \noindent\includegraphics[width=19pc,angle=0]{delta_SH_land_and_estimates_vs_lat_theory_2_v4.eps} \caption{Multimodel-mean changes in zonal- and time-mean surface-air land specific humidity (solid black line) in the CMIP5 simulations, and the contributions to these changes due to ocean specific humidity changes ($\gamma \delta q_{\rm{O}}$; blue dashed line), land evapotranspiration changes ($\epsilon \delta E_{\rm{L}}$; green dashed line), and the remainder term ($\zeta$; red dashed line) as estimated using the regression relation (\ref{eqn:regression}).} \label{fig:delta_SH_land_CMIP5_theory_2} \end{figure} By construction, the regression relationship (\ref{eqn:regression}) is exactly satisfied in the multimodel mean. Based on this relationship, the annual-mean contributions to changes in land specific humidity from changes in ocean specific humidity, changes in land evapotranspiration, and the remainder term are shown in Fig. \ref{fig:delta_SH_land_CMIP5_theory_2}. At all latitudes, changes in land specific humidity are dominated by the ocean specific humidity contribution. The contribution due to changes in land evapotranspiration is positive and has its largest values in the Northern Hemisphere where the land fraction is greatest. It is not possible to estimate the individual contributions to changes in land relative humidity for the CMIP5 simulations, as we did for the idealized GCM simulations. This is because the decomposition of relative humidity changes discussed in appendix A involves the individual contributions to land specific humidity in the control climate, and these are difficult to calculate using a regression approach. However, the results from the idealized GCM simulations suggest that evapotranspiration could be important for the changes in land relative humidity in the CMIP5 simulations, even though it is a second order influence for changes in land specific humidity. 
It would be worthwhile to estimate the land evapotranspiration contribution for full-complexity GCMs by performing simulations with specified land evapotranspiration rates, as was done for the idealized GCM in this study. \section{Feedback between temperature and relative humidity changes over land} Throughout this paper, we have calculated changes in land relative humidity by first estimating the specific humidity changes and then combining these estimates with the temperature changes, which we have taken as given. However, changes in land relative humidity can be expected to lead to changes in surface-air temperature, and this can be quantified through the atmospheric dynamic constraint linking changes in temperature and relative humidity over land and ocean \citep{byrne_ogorman_2013, byrne_ogorman_2013b}. In the tropics, this constraint is based on convective quasi-equilibrium in the vertical and weak gradients of free-tropospheric temperatures in the horizontal; extensions to the extratropics are also discussed in \citet{byrne_ogorman_2013b}. As a result, land temperatures and relative humidities must change in tandem as the climate warms such that the change in surface-air equivalent potential temperature ($\theta_e$) is approximately the same over land and ocean ($\delta \theta_{e,\rm{L}} \approx \delta \theta_{e,\rm{O}}$). Because this constraint follows from atmospheric dynamical processes, we will refer to it as the ``dynamic constraint'' on surface-air temperatures and humidities. By contrast, we will refer to the link between surface-air humidities over land and ocean that arises from moisture transport between them (as formulated in the box model in this paper) as the ``moisture constraint'' ($\delta q_{\rm{L}} = \gamma \delta q_{\rm{O}} + \delta q_{\rm{E}}$).
\begin{figure}[t] \centering \noindent\includegraphics[trim=0.0in 0.0in 0.0in 0.0in,clip,width=19pc,angle=0]{RH_T_feedback_schematic_updated.eps} \caption{Schematic diagram describing the feedback between changes in temperature and relative humidity over land and ocean (assuming, for simplicity, that ocean relative humidity remains constant). The ``dynamic constraint'' arises from atmospheric processes that link temperatures and relative humidities over land and ocean. The ``moisture constraint'' is due to the limited supply of moisture from the ocean to the land boundary layer.} \label{fig:RH_T_feedback} \end{figure} A feedback loop is used to conceptualize the interaction between changes in temperature and relative humidity over land and ocean (Fig. \ref{fig:RH_T_feedback}). Land is drier than ocean in the control climate, and as a result the dynamic constraint implies that land temperatures increase more than the ocean temperature in response to a positive radiative forcing \citep{byrne_ogorman_2013}. The moisture constraint then implies that the enhanced land warming leads to a land relative humidity decrease because of the limited supply of moisture from the ocean. According to the dynamic constraint, a decrease in land relative humidity enhances the land warming further. The feedback loop can also be entered via a non-radiative forcing that causes a decrease in humidity, such as the physiological forcing from reduced stomatal conductance in an elevated-$\rm{CO}_2$ world or a local decrease in soil moisture. We assess the strength of the feedback between relative humidity and temperature changes over land for the case in which a forcing alters the specific humidity over land, while the ocean temperature and ocean specific humidity are assumed to remain constant. 
Considering the relative humidity to be a function of specific humidity and temperature and linearizing, we can write the total change in land relative humidity as the sum of contributions from changes in specific humidity at constant temperature (the ``forced'' component) and changes in temperature at constant specific humidity (the ``temperature feedback'' component): \begin{equation} \delta H_{\rm{L},total} = \delta H_{\rm{L},forced} + \left. \frac{\partial H_{\rm{L}}} {\partial T_{\rm{L}}} \right|_{q_{\rm{L}}} \delta T_{\rm{L}}, \label{eqn:feedback_loop_1} \end{equation} where $\partial H_{\rm{L}}/\partial T_{\rm{L}}|_{q_{\rm{L}}}$ is the sensitivity of relative humidity to warming at constant specific humidity, and $\delta T_{\rm{L}}$ is the change in land temperature that arises because of the change in land relative humidity. The land temperature change is then related to the relative humidity change by the dynamic constraint: \begin{equation} \delta T_{\rm{L}} = \left. \frac{\partial T_{\rm{L}}} {\partial H_{\rm{L}}} \right|_{\theta_{e,\rm{L}}} \delta H_{\rm{L},total}, \label{eqn:feedback_loop_2} \end{equation} where $\partial T_{\rm{L}}/\partial H_{\rm{L}}|_{\theta_{e,\rm{L}}}$ is the sensitivity of temperature to changes in relative humidity at constant equivalent potential temperature ($\theta_{e,\rm{L}}$). The land equivalent potential temperature remains constant because the dynamic constraint requires that changes in equivalent potential temperature are the same over land and ocean [see \citet{byrne_ogorman_2013b}], and we are assuming that the ocean temperatures and humidities (and therefore ocean equivalent potential temperature) are not changing in this example. Combining (\ref{eqn:feedback_loop_1}) and (\ref{eqn:feedback_loop_2}), we can express the total land relative humidity change as: \begin{equation} \delta H_{\rm{L},total} = \frac{\delta H_{\rm{L},forced}}{1 - \left. 
\frac{\partial H_{\rm{L}}} {\partial T_{\rm{L}}} \right|_{q_{\rm{L}}} \left. \frac{\partial T_{\rm{L}}} {\partial H_{\rm{L}}} \right|_{\theta_{e,\rm{L}}}}. \label{eqn:total_RH_change} \end{equation} We next evaluate (\ref{eqn:total_RH_change}) for a simple numerical example. For constant specific humidity, a 1K temperature increase leads to approximately a $6\%$ \textit{fractional} reduction in relative humidity. Assuming a land relative humidity of $50\%$, this corresponds to a $3\%$ \textit{absolute} reduction in relative humidity: $\partial H_{\rm{L}} / {\partial T_{\rm{L}}}|_{q_{\rm{L}}} \approx -3\,\%\,\rm{K}^{-1}$. The sensitivity of land temperature to changes in land relative humidity at fixed equivalent potential temperature, $\partial T_{\rm{L}}/\partial H_{\rm{L}}|_{\theta_{e,\rm{L}}}$, can be estimated using the thermodynamic relation $T(\theta_e, H, p)$. For a land relative humidity of $50\%$, a land surface temperature of $298\,\rm{K}$, and a surface pressure of 1000hPa, we find $\partial T_{\rm{L}}/\partial H_{\rm{L}}|_{\theta_{e,\rm{L}}} \approx -0.2 \, \rm{K}\,\%^{-1}$, where we use the \citet{bolton_1980} formulation of $\theta_e$ [note that $\partial T_{\rm{L}} / \partial H_{\rm{L}}|_{\theta_{e,\rm{L}}}$ is plotted as a function of ocean surface temperature in Fig. 3a of \citet{byrne_ogorman_2013}]. Using (\ref{eqn:total_RH_change}) and the values for the partial derivatives given above, we find that a forced decrease in land relative humidity of $1\%$ results in a total land relative humidity decrease of $2.5\%$ when the temperature feedback is included for this illustrative example, implying an amplification by a factor of 2.5 in the decrease in relative humidity. (For a higher control relative humidity of 70\% over land, the amplification would be by a factor of 3.)
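The illustrative amplification factor can be checked numerically, using the two sensitivity values quoted in the example:

```python
# Numerical check of the amplification factor from the feedback derivation:
# amplification = 1 / (1 - dH/dT|_q * dT/dH|_theta_e), using the quoted values.

dH_dT = -3.0   # % per K: RH sensitivity to warming at fixed specific humidity
dT_dH = -0.2   # K per %: temperature sensitivity to RH at fixed theta_e

amplification = 1.0 / (1.0 - dH_dT * dT_dH)   # 1 / (1 - 0.6) = 2.5

forced = -1.0                                  # forced RH change (%)
total = forced * amplification                 # total RH change of -2.5%
```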
Thus, feedbacks between temperature and relative humidity over land strongly amplify forced changes in relative humidity due to, say, changes in stomatal conductance or changes in soil moisture. A further implication is that more than half of the total change in relative humidity comes from the change in temperature rather than the changes in specific humidity, and this holds true for land surface temperatures above 292K in this illustrative example with control land relative humidity of 50\%. \section{Conclusions} We have introduced a conceptual box model to investigate the response of near-surface land relative humidity to changes in climate. Neglecting the contribution of evapotranspiration to the moisture balance over land (or assuming that $q_{\rm{E}}$ scales with land specific humidity), the simplest version of our box model suggests a purely oceanic control on land boundary-layer humidity, with equal fractional changes in specific humidity over land and ocean. Together with enhanced warming over land relative to ocean and small changes in ocean relative humidity, this simple box model implies a decrease in land relative humidity as the climate warms. The ocean-influence box model captures many features of the humidity response in idealized GCM and CMIP5 simulations, supporting the hypothesis of a strong oceanic influence on boundary-layer specific humidity over land. The full box model, incorporating evapotranspiration, is applied to the idealized GCM simulations using additional simulations with specified evapotranspiration rates, and to the CMIP5 simulations using a linear regression approach. Compared to moisture transport from the ocean, evapotranspiration has only a secondary influence on the land specific humidity and its changes. 
However, according to a decomposition of the relative humidity change that weights the different contributions according to their importance in the control climate, evapotranspiration plays an important role for the changes in land relative humidity in the idealized GCM simulations. In addition, feedbacks between temperature and relative humidity changes over land, associated with the dynamic constraint on the land-ocean warming contrast and the moisture constraint described in this paper, can strongly amplify relative humidity changes. We have derived an expression for the strength of this amplification, and we have given a simple example in which the relative humidity change resulting from a moisture forcing is amplified by a factor of $2.5$ when changes in temperature are taken into account. For sufficiently high control-climate temperatures, the majority of the change in land relative humidity comes from the induced change in temperature rather than the change in specific humidity. This amplification is consistent with the strong influence of reduced stomatal conductance or decreases in soil moisture on land relative humidity found in previous studies \citep[e.g.,][]{cao_et_al_2010,andrews_et_al_2011,berg_et_al_2016}. As mentioned in section 1, the pattern of relative humidity changes influences the response of the water cycle to climate change. In particular, spatial gradients of fractional changes in surface-air specific humidity ($\delta q/q$) contribute a negative tendency to precipitation minus evapotranspiration ($P-E$) over continents as the climate warms \citep{byrne_ogorman_2015}. The ocean-influence box model predicts that $\delta q/q$ is spatially uniform, implying no effect of spatial gradients in this quantity on $P-E$ changes over land. 
However, the CMIP5 simulations do show spatial gradients in $\delta q/q$, and thus a more detailed understanding of the pattern of relative humidity changes is needed for the purpose of understanding $P-E$ changes over land. Future work could investigate the controls on the detailed pattern of $\delta q/q$ over land in order to better understand the $P-E$ response over land. Further investigation of the temperature-relative humidity feedback over land in comprehensive models would also be valuable. Finally, it is of interest to determine if the box models discussed here can be adapted for application to shorter-term variability, and in particular to the sharp decrease in global-mean land relative humidity that is seen in observations between 2000 and 2008 \citep{simmons_et_al_2010,willett_et_al_2014}. \acknowledgments We thank Alexis Berg, Bill Boos, Jack Scheff, Sonia Seneviratne, and Bjorn Stevens for helpful discussions. We acknowledge the World Climate Research Programme's Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups for producing and making available their model output. For CMIP, the U.S. Department of Energy's Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. We acknowledge support from NSF grant AGS-1148594.
\section{Introduction} Neutral hydrogen gas ({H\,{\textsc i}}) is a primary ingredient for star formation and is a crucial component in understanding the physical processes that convert gas into stars and govern galaxy formation and evolution. At low and high redshift, the majority of the {H\,{\textsc i}}\ gas (by mass) is located in the neutral gas disks in galaxies with column densities exceeding $N_{\rm HI}\ge2\times10^{20}$~cm$^{-2}$ \citep{wolfe86}. At the characteristic column densities of galactic {H\,{\textsc i}}\ disks, the ultraviolet Ly$\alpha$ transition is damped and the observed absorption systems are referred to as damped Ly$\alpha$ systems \citep[DLAs;][]{wolfe05}. A complete census of DLAs and their properties allows us to investigate the neutral gas content in our Universe and sheds light on the role these gas reservoirs play in the formation and evolution of galaxies over cosmic time. Absorption against redshifted quasars allows us to study the abundance and evolution of {H\,{\textsc i}}\ gas residing within intervening absorption systems located along the line of sight to the bright background source. While DLAs have been studied extensively beyond $z\gtrsim1.6$, where the Ly$\alpha$ ultraviolet transition becomes redshifted into optical wavelengths, DLA studies below $z\lesssim1.6$ require UV observations with spectrographs on the Hubble Space Telescope. The expense of space-based observations coupled with the rarity of DLAs in random lines of sight makes it impossible to undertake large-scale surveys comparable with large ground-based (optical) surveys such as the Sloan Digital Sky Survey (SDSS). The hyperfine transition of hydrogen in the radio regime (21~cm; 1420.405752~MHz) opens an alternate window into studying neutral gas systems at all redshifts.
Flux limitations confine 21~cm emission surveys to the local universe (i.e., $z\lesssim0.2$), making previous high redshift studies only possible with 21~cm absorption line surveys\footnote{Future {H\,{\textsc i}}\ emission studies with the Square Kilometre Array (SKA) and its pathfinder telescopes, ASKAP and MeerKAT, will enable deeper {H\,{\textsc i}}\ emission studies beyond $z\gtrsim0.2$ \citep[e.g.,][]{staveley-smith15}.}. The absorption signal is not distance dependent and depends only on the column density, spin temperature, and background source flux. Finally, unlike the saturated lines of the Ly$\alpha$ ultraviolet transition, 21~cm absorption profiles are usually optically thin owing to a very low Einstein rate coefficient for spontaneous emission. Large observational studies of {H\,{\textsc i}}\ 21~cm absorption could thus provide a comprehensive observational census of the physical conditions of cold neutral {H\,{\textsc i}}\ gas. Despite these benefits, surveys for 21~cm absorption encounter obstacles that make new detections especially challenging. Known redshifted 21~cm absorbers are rare in the universe, with fewer than 50 intervening absorbers known to date above $z>0.1$ \citep{curran16}. The 21~cm line has a low observed optical depth, requiring long integration times to achieve high sensitivity for detection. Radio frequency interference (RFI) at sub-GHz frequencies contaminates much of redshift space, making it difficult to confirm potentially weak 21~cm absorption lines. The small bandwidth coverage of radio receivers forces small redshift intervals to be searched, making surveys over large redshift ranges very time consuming. Finally, high spin temperatures in the absorbing cloud coupled with low covering factors of the background radio source decrease sensitivity to low-column density systems.
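The mapping between absorber redshift and observed 21~cm frequency, which determines the receiver bands searched in such surveys, follows directly from the rest frequency quoted above. A minimal sketch (helper names are ours, not from the survey pipeline):

```python
# Conversion between absorber redshift and observed 21 cm frequency.
# Illustrative helpers; names are ours, not from the survey pipeline.
HI_REST_MHZ = 1420.405752  # rest frequency of the HI hyperfine transition

def observed_freq_mhz(z):
    """Observed frequency (MHz) of the 21 cm line at redshift z."""
    return HI_REST_MHZ / (1.0 + z)

def redshift_from_freq(nu_obs_mhz):
    """Redshift at which the 21 cm line is observed at nu_obs_mhz (MHz)."""
    return HI_REST_MHZ / nu_obs_mhz - 1.0
```

For example, an absorber at $z=1$ appears near 710~MHz, which is why sub-GHz receivers, and their RFI environment, dominate these searches.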
The neutral hydrogen column density frequency distribution, $f(N_{\rm HI},X)$, and its moments quantify the overall {H\,{\textsc i}}\ content of the universe and constrain the evolution of neutral hydrogen absorption systems over cosmic time \citep{prochaska05}. $f(N_{\rm HI},X)$ describes the projected column density distribution of {H\,{\textsc i}}\ gas in galaxies per comoving absorption distance path length $dX$ \citep{wolfe95, prochaska09}. $f(N_{\rm HI},X)$ is easily constrained from intervening absorption surveys, as has been done at high redshift with Ly$\alpha$ \citep{prochaska09, noterdaeme09, noterdaeme12} and at low redshift with 21~cm emission \citep{zwaan05} and 21~cm absorption \citep{darling11, allison20} studies. These prior studies find that the shape of $f(N_{\rm HI},X)$ is invariant whereas the overall normalization decreases with redshift. This may signify a conversion of gas into stars \citep{prochaska09}. The gas mass density of neutral gas $\Omega_{\rm HI}$, the first moment of $f(N_{\rm HI},X)$, can be leveraged to show the evolution of the {H\,{\textsc i}}\ gas content associated with high column density {H\,{\textsc i}}\ absorption systems. Studies of the evolution of neutral gas in DLAs have begun to shed light on the census of the neutral gas reservoirs, revealing that the high redshift Universe, as measured with Ly$\alpha$ studies \citep[e.g., ][]{storrie-lombardi00, rao00, prochaska05, prochaska09, noterdaeme09, noterdaeme12, bird17}, may contain twice as much neutral hydrogen gas as the local Universe, as measured with 21~cm studies \citep[e.g., ][]{zwaan05, lah07, martin10, delhaize13, hoppmann15, rhee16, rhee18, mjones18, bera19, hu19}. The {H\,{\textsc i}}\ content is less constrained at intermediate redshifts $0.2\lesssim z\lesssim2$, where there is a dearth of measurements because Ly$\alpha$ cannot be observed at optical wavelengths and 21~cm is not yet viable for emission studies.
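The comoving absorption distance $X$ entering $f(N_{\rm HI},X)$ is obtained by integrating $dX/dz = (1+z)^2\,H_0/H(z)$. A hedged sketch, assuming the flat $\Lambda$CDM parameters adopted later in this paper ($\Omega_m=0.27$, $\Omega_\Lambda=0.73$); this is illustrative, not the authors' code:

```python
import numpy as np

# Comoving absorption path X(z): dX/dz = (1 + z)^2 * H0 / H(z).
# Assumes flat LCDM with the parameters adopted later in the paper.
OMEGA_M, OMEGA_L = 0.27, 0.73

def dX_dz(z):
    """Differential absorption path per unit redshift."""
    e_z = np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)  # H(z) / H0
    return (1.0 + z) ** 2 / e_z

def absorption_path(z_min, z_max, n=10001):
    """Trapezoid-rule integral of dX/dz over [z_min, z_max]."""
    z = np.linspace(z_min, z_max, n)
    y = dX_dz(z)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))
```

At $z=0$ the differential path reduces to $dX/dz=1$, and it grows with redshift, which is why high-redshift sightlines contribute disproportionately to the total search path $\Delta X$.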
Studies in this redshift range often show discrepant results in the measured values of $\Omega_{\rm HI}$ and introduce controversy over the exact form and evolution of $\Omega_{\rm HI}$ from high redshift to today \citep{rao06, rao17, lah07, kanekar16}. The difficulty in understanding the redshift evolution of DLAs and the possible disconnect between low redshift 21~cm studies and high redshift Ly$\alpha$ studies is complicated by the different methods used in these different redshift regimes. Observations from $0.2\lesssim z\lesssim2$ require space-based observatories to observe Ly$\alpha$, which result in smaller sample sizes and correspondingly poorer statistical constraints due to the expensive nature of observations from space. Possible selection effects between low and high redshift methods highlight the need to carry out blind surveys across all redshift ranges. 21~cm absorption surveys may be the next step necessary to bridge the gap between $z\sim0$ and $z>2$ observations and constrain the redshift evolution of $\Omega_{\rm HI}$ and the role neutral gas systems play in galaxy and stellar evolution \citep[see e.g., recent work by][]{sadler20}. The work here will help pave the way for future large-scale 21~cm absorption line surveys to place strong statistical constraints on the cosmological evolution of {H\,{\textsc i}}\ gas at intermediate redshifts with the Square Kilometre Array and its pathfinder telescopes, including the First Large Absorption Survey in {H\,{\textsc i}}\ \citep[FLASH;][]{allison2016_mnras} with the Australian Square Kilometre Array Pathfinder \citep[ASKAP;][]{johnston07} and the South African MeerKAT Absorption Line Survey \citep[MALS;][]{gupta16}. Observations with Ly$\alpha$ and 21~cm absorption can in principle provide the necessary information to measure the spin temperature of the absorbing gas from the total neutral hydrogen {H\,{\textsc i}}\ column density.
This requires the assumption that Ly$\alpha$ and 21~cm absorption trace the same sight line, which may not always be valid \citep[see, e.g.,][]{kanekar07}. The few intervening absorbers with observations of both 21~cm absorption and Ly$\alpha$ absorption reveal a broad range in the measured spin temperature of the absorbing gas, from 100~K to nearly 10,000~K \citep[e.g., ][]{wolfe81, kanekar03, york07, nroy13, kanekar14_mnras, dutta17, curran19_mnras}. Combining our large intervening {H\,{\textsc i}}\ 21~cm absorption study with prior Ly$\alpha$ and 21~cm emission studies potentially provides the ability to constrain the largely uncertain spin temperature and covering factor of the absorbers by means of comparison with prior measurements of $f(N_{\rm HI},X)$ and $\Omega_{\rm HI}$. Measurements of $f(N_{\rm HI},X)$ and $\Omega_{\rm HI}$ are temperature independent when constrained via Ly$\alpha$ and 21~cm emission surveys. The unknown spin temperature for 21~cm absorbers can be tuned to constrain the temperature range such that the observations are consistent with the Ly$\alpha$ and/or 21~cm emission measurements. Large surveys such as this one can potentially supply adequate coverage to provide a physically motivated constraint for a range of spin temperatures. As an added benefit, 21~cm absorption lines, combined with the hydroxyl (OH) 18~cm transition line and other millimetre rotational lines, allow for accurate measurements of fundamental physical constants, such as the fine structure constant $\alpha \equiv e^2/\hbar c$, the proton-electron mass ratio $\mu \equiv m_{\rm p}/m_{\rm e}$, and the proton g-factor $g_{\rm p}$ \citep{darling04, kanekar05}. It is possible to constrain the cosmological evolution of these constants with these molecular absorption systems. Redshifted OH 18~cm absorption systems are very rare, with only five systems known, all limited to $z<1$ \citep{chengalur99, kanekar03a, darling04, kanekar05}.
The few known redshifted OH absorbers are always found in systems with 21~cm absorption. For this reason, studies aimed at observing new 21~cm absorbers quite often tune to the frequencies of 18~cm absorption as well, as we do in this study. The blind 21~cm absorption line survey presented here was established to have the sensitivity to detect all DLA absorption systems in a large number of sightlines from $z=0$ to $z=2.74$ with just minutes of observing time per source. Our survey aims at detecting new intervening {H\,{\textsc i}}\ 21~cm and molecular OH 18~cm absorption systems in a redshift-independent fashion with no prior knowledge of known absorption systems in order to provide adequate statistics to help constrain factors of cosmological importance. We demonstrate that, despite the lack of new detections, largely due to severe RFI along much of redshift space, we are able to re-detect all known absorbers present in our survey, and that our large sample size and redshift coverage make it possible to place meaningful limits on the spin temperature of {H\,{\textsc i}}\ gas with the column density frequency distribution function $f(N_{\rm HI},X)$ (Section \ref{sec:fN}) and the evolution of the cosmological neutral gas mass density $\Omega_{\rm HI}$ (Section \ref{sec:omega}). Throughout this paper, we adopt a $\Lambda$CDM cosmology of $H_\circ = 71$~{\hbox{km s$^{-1}$}}~Mpc$^{-1}$, $\Omega_m = 0.27$, and $\Omega_{\Lambda} =0.73$. \section{Sample Selection} We select 260 radio sources to search for new intervening 21~cm lines along their sight lines, with no requirement placed on the source type. The sources are selected from the \citet{white92} catalog with requirements of a known optical redshift, a declination north of $\delta \gtrsim -30$~deg, and a continuum flux density of $S>0.3$~Jy at 780~MHz (Figure~\ref{fig:sources}). No prior knowledge about possible absorption systems, intervening or intrinsic, was considered for source selection.
Requiring an optical redshift does impose some bias, selecting the brightest, least obscured, and UV-bright radio sources at all redshifts. Figure~\ref{fig:sources} shows the redshift distribution of the sources and the rest-frame radio continuum at 780~MHz for the sources in the sample. The majority of our sources are classified as Flat-Spectrum Radio Sources (FSRSs), radio sources characterized by a double-peaked synchrotron/Compton spectral energy distribution and possibly associated with a blazar or with the compact core of a radio galaxy \citep{fugmann88}. Our sample also includes Gigahertz-Peaked Spectrum (GPS) sources, radio sources that are believed to be intrinsically small (not foreshortened by projection effects; \citealt{fanti90}) and exhibit a low-frequency turn-over in their spectra, attributed to synchrotron self-absorption and thermal bremsstrahlung absorption \citep{jones74, menon83, odea97}. \begin{figure} \includegraphics[scale=0.43]{SourcesvsZ_merge-eps-converted-to.pdf} \vspace{-15pt} \caption{Left: Histogram of the number of sources versus redshift for the 260 sources in this survey, with the lookback time listed on the top axis. The average redshift is 1.244. Right: Flux density of each source at 780~MHz. \label{fig:sources}} \end{figure} \section{Observations and Data Analysis} We observed 89 sources in January 2004 (GBT program 02C--008) for intervening {H\,{\textsc i}}\ 1420.405752~MHz absorption from $z=0.6$ to $z=1.1$ at the Green Bank Telescope\footnote{The National Radio Astronomy Observatory (NRAO) is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} (GBT). We chose GBT's Prime Focus 1 receiver (PF1-800), covering 680--880~MHz ($0.6<z<1.1$), due to the relatively minimal RFI present at these frequencies.
These observations are complemented with observations of an additional 182 radio sources between September 2004 and August 2005 (GBT program 04C--018) at the GBT for {H\,{\textsc i}}\ 21~cm absorption in intervening {H\,{\textsc i}}\ systems between $z = 0$ and $z = z_{\rm source}$. The redshift coverage is not always complete for each source. 11 sources appeared in both observing programs, resulting in a total of 260 unique radio sources in this survey. Observations were divided among the GBT receivers as follows: 147 sources were observed with the 1.4~GHz L-band receiver ($0<z<0.235$), 126 with the 1~GHz Prime Focus (PF) 2 receiver ($0.150<z<0.570$), 204 with the 800~MHz PF1-800 receiver ($0.536<z<1.10$; including both GBT programs), two with the 600~MHz PF1-600 receiver ($1.06<z<1.79$), and 12 sources with the 450~MHz PF1-450 receiver ($1.58<z<2.74$). These receivers were selected to cover the full spectral path for each target from $z=0$ to the redshift of the source. Six intermediate frequencies (IFs) were employed per receiver for both PF2 and PF1-800; PF1-600 and PF1-450 employed four IFs, and L-band employed eight IFs. All IFs have a bandpass of 50~MHz, divided into 4098 channels (channel spacing of 12.21~kHz). The measured noise levels for identical integration times are comparable for the L-band, PF2 (where the bandpass is not heavily impacted by RFI), and PF1-800 receivers, but are greatly increased for the PF1-600 and PF1-450 receivers. The redshift coverage of the L-band is the cleanest, with minimal RFI presence. Most of the redshift coverage of the 1~GHz receiver (PF2), 600~MHz receiver (PF1-600), and 450~MHz receiver (PF1-450) is unusable due to severe RFI. Table \ref{tab:obs} lists the total on-source integration time for each source and receiver, the typical noise for each receiver, and the total redshift coverage $\Delta z$ searched per source.
All observations were conducted in position-switched mode in two linear polarizations with 9-level sampling, and a range of 1.5-second to 15-second records. For all observations, a calibration diode signal was injected for half of each record. Each flattened and calibrated spectral record was inspected and flagged for RFI. All records were then averaged in time and polarization for each observed source and Hanning smoothed to a resolution of 12.2~kHz per channel (a resolution of 4.7~{\hbox{km s$^{-1}$}}\ per channel at 780~MHz). RFI features were interactively removed and the continuum at each RFI feature was replaced with an average of the noise from the surrounding five channels, to aid in measuring the noise across the spectral region and in baseline-flattening the spectrum. Each spectrum was baseline-flattened using a polynomial fit, typically of fifth order. Final mean spectra were inspected for absorption lines in each 50~MHz bandwidth region. The actual range of each region was typically determined by the RFI conditions of each spectrum. In cases of non-detections, spectral noise measurements are recorded for the maximum range possible. All data reduction was performed using GBTIDL\footnote{GBTIDL (\url{http://gbtidl.nrao.edu/}) is the data reduction package produced by NRAO and written in the IDL language for the reduction of GBT data.}. Feed resonances at 1263, 924, 817, and 769~MHz, introduced by the GBT receivers, are removed by fitting the resonance using another observation in the same observing session as a template and subtracting the feed shape, flattening the baseline across the feed resonance. The 21~cm absorber toward 2351+456 occurred at the edge of the 796~MHz feed resonance, and was removed in this fashion with a template supplied by another source in the same observing run, as demonstrated by \citet{darlingetal04}.
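The conversion between channel spacing in frequency and velocity resolution used above is $\Delta v = c\,\Delta\nu/\nu$. A small illustrative check (function name is ours):

```python
# Velocity width of a spectral channel: dv = c * dnu / nu.
# Illustrative check of the quoted channel numbers; name is ours.
C_KMS = 299792.458  # speed of light in km/s

def channel_width_kms(dnu_hz, nu_hz):
    """Velocity width (km/s) of a channel of width dnu_hz at frequency nu_hz."""
    return C_KMS * dnu_hz / nu_hz
```

A 12.21~kHz channel at 780~MHz corresponds to roughly 4.7~{\hbox{km s$^{-1}$}}, consistent with the smoothed resolution quoted above; the same spacing at lower observing frequencies spans a larger velocity interval.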
\section{Results}\label{sec:results} \subsection{Flux Density Values}\label{sec:cntm} Low-frequency ($\lesssim1$~GHz) spectra frequently contain negative continuum values, slopes, steps, and other unnatural features caused by instrumental effects. As a result, the flux density at the frequency of the 21~cm observations for each object cannot be measured directly from our spectra. We obtain estimates of the flux density at the desired frequency by interpolating between extant continuum measurements in the NASA Extragalactic Database (NED) using a single spectral index power-law fit (a linear fit in log~$S_{\nu}$--log~$\nu$ space). In order to cope with the heterogeneous literature continuum measurements, we try whenever possible to select values published from the same catalog sources using the same pass bands, most often 1410, 1340, 750, 635, 408, and 365~MHz observations \citep{laing80, ficarra85, large81, white92, douglas96, rengelink97, condon98, stanghellini98, stanghellini05, orienti07, petrov08}. This treatment neglects potentially significant time variation in radio source fluxes. The continuum flux densities are listed in Table~\ref{tab:obs}. \subsection{Line Search Results} \subsubsection{HI 21~cm Intervening Line Observations}\label{sec:interveningD} Out of the 260 sources in this survey, eight sources remain indeterminate for absorption systems due to persistent and irremovable RFI across the entire observed spectral range (OJ+287, 3C308, HB89~1602$-$001, 1743+173, 1800+440, 1921$-$293, 1958$-$179, and 2007+777). We have searched the remaining 252 partially RFI-free sources for intervening {H\,{\textsc i}}\ 21~cm absorption and we find intervening 21~cm absorbers to be present in the spectra toward nine radio sources. All of the absorption systems in this survey are re-detections, shown in Figure~\ref{fig:detects}.
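The single power-law interpolation described above amounts to a linear least-squares fit in log-log space. A sketch with synthetic flux densities (the spectral index, frequencies, and function names are illustrative, not values from the paper):

```python
import numpy as np

# Single power-law fit S_nu = A * nu^alpha, i.e. a linear fit in
# log S -- log nu space, used to interpolate literature continuum
# measurements to the redshifted 21 cm line frequency.
def fit_powerlaw(nu_mhz, s_jy):
    """Fit S = A * nu^alpha; returns (alpha, A)."""
    alpha, log_a = np.polyfit(np.log10(nu_mhz), np.log10(s_jy), 1)
    return alpha, 10.0 ** log_a

def interp_flux(nu_mhz, alpha, a):
    """Evaluate the fitted power law at nu_mhz."""
    return a * nu_mhz ** alpha

# Hypothetical source measured at three literature pass bands:
nu = np.array([365.0, 750.0, 1410.0])       # MHz
s = 2.0 * (nu / 1000.0) ** -0.8             # synthetic alpha = -0.8 spectrum
alpha, a = fit_powerlaw(nu, s)
s_780 = interp_flux(780.0, alpha, a)        # flux density at 780 MHz
```

As noted in the text, this neglects source variability: the literature measurements being interpolated were generally taken at different epochs.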
Table~\ref{tab:detections} lists the properties of the detected absorption lines, fitted Gaussian components, and comparison to the prior 21~cm detections of the absorption systems. The exact value of the spin temperature $T_s$ for intervening 21~cm absorbers and the covering factor $f$ of their background radio source are unconstrained. We therefore report all column density measurements without an assumption on $T_s/f$, where possible. For our analysis in Section~\ref{sec:fN} and Section~\ref{sec:omega}, in order to compare our work to prior literature measurements that do not depend on spin temperature assumptions, we normalize all prior {H\,{\textsc i}}\ column density values to a range of values commonly adopted in the literature: $T_s/f=100, 250, 500$, and 1000~K \citep[e.g., ][]{curran05, curran17b, curran17c, darling11, allison20}. Under the assumption that radio and ultraviolet illumination trace the same line of sight, it is possible to constrain the spin temperature of {H\,{\textsc i}}\ absorbers with measurements of both Ly$\alpha$ and 21~cm absorption. \citet{kanekar14_mnras} use these temperature constraints, combined with very long baseline interferometric observations to constrain the covering factor, to report estimates of $T_s/f$ for four of our intervening absorbers\footnote{A caveat to consider is the relative sizes of the {H\,{\textsc i}}\ absorber and background radio source; at radio wavelengths, the source may be significantly larger than the spatial distribution of the {H\,{\textsc i}}\ absorber, influencing the inferred spin temperature for the absorber \citep{curran05, braun12}. }, listed in Table~\ref{tab:detections}. These four absorbers show a range of $T_s/f = 560$--$965$~K.
\citet{kanekar14_mnras} find a strong redshift dependence of the spin temperature for absorbers at high redshift ($z>2.4$); high redshift absorbers are more likely to exhibit spin temperatures above 1000~K compared to low redshift absorbers \citep[see, however,][where the spin temperature may inversely trace the star formation rate density, with $T_s$ increasing towards low redshifts]{curran19_mnras}. As all of our absorbers have measured $T_s/f<1000$~K and are located at redshifts below $z=2.4$, $T_s/f=1000$~K represents a realistic upper limit in this study. \begin{figure*} \includegraphics[width=\linewidth]{detections_merge-eps-converted-to.pdf} \vspace{-15pt} \caption{Sources detected in {H\,{\textsc i}}\ 21~cm absorption. The dotted lines indicate Gaussian fits, numbered by component as listed in Table \ref{tab:detections}. Upper spectra show the residual spectrum, offset for clarity. Spectral regions lost to radio frequency interference are not shown. 3C216 is an {H\,{\textsc i}}\ absorber associated with its host galaxy ($z_{\rm absorption} = z_{\rm source}$) and is excluded from all plots and analysis related to the intervening absorption and from the calculations for $\Omega_{\rm HI}$ and $f(N_{\textrm {HI}},X)$. The Gaussian fit to the absorption line in PKS~1830$-$211 is not shown due to the presence of RFI. }\label{fig:detects} \end{figure*} \subsubsection{HI 21~cm Intrinsic Line Observations}\label{sec:intrinsicD} 97 sources have spectral coverage of the redshift of the host galaxy, enabling a search for intrinsic 21~cm absorption. This paper focuses only on the search for intervening systems and we disregard all spectral measurements within $\pm$3000~{\hbox{km s$^{-1}$}}\ of the systemic redshift of the host galaxy, as any absorbers found there may be influenced by the AGN of the host galaxy and can potentially impact the cosmological calculations (Sections \ref{sec:fN} and \ref{sec:omega}).
We re-detect a previously known 21~cm intrinsic absorption system in 3C216, shown in Figure~\ref{fig:detects}, along with the nine re-detections of intervening 21~cm absorption systems. The intrinsic {H\,{\textsc i}}\ 21~cm absorbers are treated separately (Section~\ref{sec:interveningD}) and are further discussed in \citet{grasha19_apjs} in the context of searching for neutral gas intrinsic to these radio sources. There are an additional five sources with spectral coverage at the redshift of the host galaxy that were not included in the associated {H\,{\textsc i}}\ 21~cm absorption study \citep{grasha19_apjs}: HB89~0312$-$034, 3C124, 4C$-$05.60, HB89~1437+624, and 3C356. Of these five sources, three remain indeterminate due to RFI. We place upper limits on the intrinsic {H\,{\textsc i}}\ column density for the remaining two sources, HB89~0312$-$034 and 4C$-$05.60. These two systems have no prior 21~cm absorption limits and this marks the first time these sources have been searched for intrinsic 21~cm absorption (as well as OH absorption; Section~\ref{sec:OH}) associated with the radio source. The non-detection spectra of these two intrinsic {H\,{\textsc i}}\ systems are shown in Figure~\ref{fig:OH_nonD} and the $3\sigma$ column density limits are reported in Table~\ref{tab:OH}. \begin{figure*} \includegraphics[scale=0.65]{OHplots.pdf} \vspace{-15pt} \caption{ Spectra of all non-detection observations in the two intrinsic 21~cm absorption searches (HB89~0312$-$034, 4C$-$05.60; Section~\ref{sec:intrinsicD}) and the OH line search of known intervening 21~cm absorbers. The velocity scale is in the rest-frame of each source. All spectral regions lost to RFI have been omitted. While we search each system for all four OH lines (when observations are available), we only show the redshifted OH line responsible for deriving the OH column density (preferentially the 1667~MHz line, only using the 1612 or 1720~MHz lines if 1667~MHz is unusable due to RFI).
} \label{fig:OH_nonD} \end{figure*} \subsubsection{HI 21~cm Non-Detection Observations} Our final sample consists of 2147 individual spectral measurement regions in 252 partially RFI-free sources. Each 50~MHz spectral region is typically limited by the presence of RFI. 99.5\% of our individual spectral regions (2137 regions) show no absorption lines. The 3$\sigma$ upper limit to the {H\,{\textsc i}}\ column density in each spectral region not detected in 21~cm absorption (and not lost to RFI) is calculated with the average spectral $rms$ noise measured across the region. Figure~\ref{fig:NHI} shows the {H\,{\textsc i}}\ 21~cm line strength ($N_{\rm HI}/(T_s/f)$) for each individual spectral measurement in our survey as a function of redshift, as well as the redshifts and measured column densities of the re-detections. \begin{figure} \includegraphics[scale=0.45]{new_NHI_merge_v2_log-eps-converted-to.pdf} \caption{{H\,{\textsc i}}\ column density $N_{\rm HI}/(T_s/f)$ versus $\log(1+z_{\rm abs})$ for spectral regions where we search for {H\,{\textsc i}}\ 21~cm absorption (redshift on the top axis). We show either the detected {H\,{\textsc i}}\ column density for systems detected at the 21~cm redshift (pink solid circles with accompanying 1$\sigma$ uncertainty in the measured column density) or its upper limit in the case of non-detections (black lines). The black horizontal bars on the 21~cm non-detections represent the 3$\sigma$ upper limit to the column density and the width of each bar indicates the redshift search region, with an implied downward-pointing arrow (not shown for clarity). There are a total of 2147 individual regions searched for 21~cm absorption, generally determined by RFI conditions. 58\% of the 3$\sigma$ upper limits to the {H\,{\textsc i}}\ column density lie below the weakest absorber, indicating that we reached sufficient sensitivity to detect new systems had they been present in our survey.
\label{fig:NHI}} \end{figure} Although this survey was designed with the sensitivity to be able to detect DLA systems, RFI affected large redshift regions, lowering our ability to search the full potential redshift range observed toward each source (almost all spectral measurements are rendered unusable due to RFI for large regions of the 1~GHz receiver (PF2) and for most of the 600~MHz (PF1-600) and 450~MHz (PF1-450) receivers). Absorption lines that might appear in these regions therefore remain undetected. Additionally, increased rms noise levels at lower frequencies further hampered our sensitivity to low-column density systems. The non-detections reach an average optical depth of $\tau_{3\sigma}=0.0146$. This corresponds to 93\% of our upper limit measurements being below the DLA column density limit of $N_{\rm HI}=2\times10^{20}$~cm$^{-2}$ under the assumption $T_s/f=100$~K. While we cannot rule out the presence of additional DLAs in our survey, we are confident we would have detected any major 21~cm absorbing feature, as 58\% of our 3$\sigma$ column density upper limits lie below the weakest absorber ($N_{\rm HI}/(T_s/f)=6.5\times10^{17}$~cm$^{-2}$~K$^{-1}$) and 100\% of our 3$\sigma$ column density upper limits lie below the strongest absorber ($N_{\rm HI}/(T_s/f) = 3.2\times10^{19}$~cm$^{-2}$~K$^{-1}$), barring regions lost to RFI. Figure~\ref{fig:delz} highlights the strong absorbers ($N_{\rm HI}/(T_s/f)> 10^{19}$~cm$^{-2}$~K$^{-1}$) re-detected in the survey that lie significantly above the non-detections, demonstrating the sensitivity of our survey. Our low rate of detections is consistent with prior surveys of intervening {H\,{\textsc i}}\ 21~cm absorption \citep{darling11, allison14, allison20}.
Despite this, the large coverage of our survey allows us to attempt to place meaningful constraints on both the spin temperature of the {H\,{\textsc i}}\ gas (Section~\ref{sec:fN}) and the cosmological evolution of the mass density of cold neutral gas as traced with the {H\,{\textsc i}}\ 21~cm transition (Section~\ref{sec:omega}). \begin{figure} \includegraphics[scale=0.45]{new_delz_merge_v2_detnond-eps-converted-to.pdf} \caption{ Redshift search path versus the 21~cm column density sensitivity for the nine intervening detections (pink) and the non-detections (black). Each column density bin indicates the lower bound on the column density detectable toward the sources in that bin. Our sensitivity for non-detections peaks at $N_{\rm HI}/(T_s/f) \sim 3 \times 10^{17}$~cm$^{-2}$~K$^{-1}$, corresponding to $N_{\rm HI, 100K} \sim 5 \times 10^{19}$~cm$^{-2}$ under the assumption of $T_s/f=100$~K, well below the DLA limit of $N_{\rm HI} \geq 2\times10^{20}$~cm$^{-2}$. At $T_s/f=100$~K, 93\% of the upper limits lie below the DLA column density threshold. A value of $T_s/f=1000$~K corresponds to our survey peaking at a column density sensitivity of $N_{\rm HI, 1000K} \sim 3 \times 10^{20}$~cm$^{-2}$. At a value of $T_s/f=1000$~K, 15\% of the 3$\sigma$ upper limits lie below the DLA column density limit. \label{fig:delz}} \end{figure} \subsubsection{OH 18~cm Line Observations}\label{sec:OH} We search for the four OH 18~cm absorption lines at 1612.231, 1665.4018, 1667.35903, and 1720.5300~MHz in 11 sources associated with known 21~cm intervening absorbers as well as in the five systems where the redshift of the host galaxy falls in our spectral coverage that were not included in the \citet{grasha19_apjs} intrinsic absorption dataset.
Redshifted OH absorption systems are very rare and are always found in systems that also exhibit 21~cm absorption, and a strong correlation is observed between the full width at half maximum (FWHM) of the OH and {H\,{\textsc i}}\ absorption profiles \citep{curran07}. We thus only search for OH lines in the 21~cm absorber systems. Two sources were unusable due to RFI at the frequencies of all four OH lines. Of the 14 sources with usable spectra, two have prior OH measurements and we supply the first OH observations for the remaining 12 systems. We use the 1667~MHz line to derive the OH column density because it places the strongest upper limit on the OH column density among all of the OH 18~cm lines. The continuum flux density refers to the redshifted 1667~MHz line frequency, except in the cases where the 1667~MHz line was lost to RFI and the 1612 or 1720~MHz transition was used instead, and was obtained from power-law fits to literature radio continuum measurements. 1667~MHz OH absorption is re-detected in the spectrum of PKS~1830$-$211, a known 21~cm and molecular absorber at $z=0.885$. Figure~\ref{fig:OH} shows this sole OH absorber in our sample with the corresponding 21~cm absorption line. We measure an OH column density of $N_{\rm OH}/T_x = (60 \pm 3) \times10^{13}$~cm$^{-2}$~K$^{-1}$, compared to the value of $N_{\rm OH}/T_x = 40 \times10^{13}$~cm$^{-2}$~K$^{-1}$ first reported by \citet{chengalur99}. We do not re-detect the OH 1665 or 1667~MHz absorption lines toward 1413+135 at $z_{\rm abs}=0.24671$ despite reaching a smoothed spectral noise at a resolution similar to that of \citet{kanekar02}. They report an OH column density of $N_{\rm OH}/T_x = 5.1 \times10^{13}$~cm$^{-2}$~K$^{-1}$ whereas we place an upper limit of $N_{\rm OH}/T_x < 3.8 \times10^{13}$~cm$^{-2}$~K$^{-1}$. We are unable to re-detect the satellite OH lines at 1612 and 1720~MHz detected by \citet{darling04} and \citet{kanekar04_phrvl} due to noise at those frequencies.
Excluding our re-detection of OH 1667~MHz toward PKS~1830$-$211, no new OH emission or absorption lines are detected in the remaining 13 sources with usable, RFI-free spectra in at least one of the OH transitions. The lack of detected OH absorbers is not surprising given that we also do not detect new {H\,{\textsc i}}\ 21~cm absorbers; the optical selection of our DLA targets may bias against selecting obscured, molecular-rich absorbers along the sight-line to the DLA \citep{curran11b}. The RFI-free spectra for non-detections are shown in Figure~\ref{fig:OH_nonD}. Table~\ref{tab:OH} lists the results of the OH search, with the optical depths and column density limits quoted at the $3\sigma$ confidence level using the 1667~MHz line observations, when available. \begin{figure} \includegraphics[scale=0.45]{new_OH-eps-converted-to.pdf} \vspace{-15pt} \caption{OH 18~cm 1667~MHz absorption toward PKS 1830$-$211 (black) overlaid with the corresponding 21~cm absorber (red) at the same redshift, shifted to the frequency of the 1667~MHz OH absorption line. Both continua have been arbitrarily normalized. The features to the left of the OH absorption line, as well as the spike centered in the 21~cm absorption profile, are RFI.} \label{fig:OH} \end{figure} \subsection{Column Density Measurements}\label{sec:NHI} \subsubsection{HI Column Density}\label{sec:columnHI} Column density measurements are derived from the integrated optical depths of absorption lines.
For optically thin systems ($\tau \ll 1$), the {H\,{\textsc i}}\ column density of each absorption system is related to the optical depth $\tau$ of the line and the spin temperature $T_s$ of the absorbing system as \citep{wolfe75} \begin{equation}\label{eq:NHI} N{\rm _{HI}} = 1.8 \times 10^{18} \ \textrm{cm}^{-2} \ \frac{T_s}{f} \int \tau \ {\rm d}v, \end{equation} where $f$ is the covering factor, the fraction of the background continuum flux occulted by the absorber, $T_s$ is the spin temperature of the 21~cm absorption line in Kelvin, and $\int \tau \ dv$~({\hbox{km s$^{-1}$}}) is the line-integrated optical depth. Lines with Gaussian absorption profiles can be integrated as $\int \tau \ dv = \sqrt{\pi/\ln 2}\ \tau_{\rm max}\,\Delta v/2$, where $\tau_{\rm max}$ is the peak optical depth of a line with a FWHM of $\Delta v$. The column densities for the ten re-detections (Section~\ref{sec:interveningD}) are listed in Table~\ref{tab:detections}. For non-detections in the optically thin regime ($\tau \la 0.1$, which applies to all our sources) the $3\sigma$ upper limit column density can be approximated by assuming a Gaussian absorption profile, where the integrated optical depth reduces to $\int \tau \ dv \approx 1.0645 \ \tau_{3\sigma} \ dv$, with $\tau_{3\sigma}$ the $3\sigma$ peak optical depth for an assumed line FWHM of $dv$. Our $3\sigma$ upper limit column density measurements are approximated as \begin{equation} N{\rm _{HI, 3\sigma}}/(T_s/f) < 1.8\times 10^{18} \ \textrm{cm}^{-2} \times \ 1.0645 \ \tau_{3\sigma} \ {\rm d}v, \end{equation} for the sources not detected in 21~cm absorption, where $\tau_{3\sigma} \approx 3\sigma/S$, $\sigma$ is the measured rms spectral noise, and $S$ is the continuum flux density (obtained in Section \ref{sec:cntm}) of the background source at the redshifted line frequency. We assume a FWHM width of 30~{\hbox{km s$^{-1}$}}\ for all sources not detected in 21~cm absorption.
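The $3\sigma$ limit above follows directly from the spectral noise, the continuum flux density, and the assumed line width. As a minimal sketch (in Python, with illustrative numbers; the function and variable names are ours, not from the survey pipeline):

```python
# 3-sigma HI column density upper limit for a 21 cm non-detection,
# following N_HI/(Ts/f) = 1.8e18 * 1.0645 * tau_3sigma * dv  [cm^-2 K^-1]
def nhi_limit(rms_jy, flux_jy, fwhm_kms=30.0):
    """N_HI/(Ts/f) 3-sigma upper limit in cm^-2 K^-1.

    rms_jy  : measured rms spectral noise at the redshifted line frequency
    flux_jy : background continuum flux density at that frequency
    fwhm_kms: assumed Gaussian FWHM of the (undetected) line
    """
    tau_3sigma = 3.0 * rms_jy / flux_jy   # optically thin: tau ~ 3*sigma/S
    return 1.8e18 * 1.0645 * tau_3sigma * fwhm_kms

# e.g. 5 mJy noise on a 1 Jy source with a 30 km/s line width:
limit = nhi_limit(rms_jy=0.005, flux_jy=1.0)   # ~8.6e17 cm^-2 K^-1
```

Multiplying such a limit by an assumed $T_s/f$ (e.g.\ 100~K) converts it to an {H\,{\textsc i}}\ column density limit in cm$^{-2}$.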
For all column density measurements and limits, we make no assumption on the covering factor or spin temperature of the gas. \subsubsection{OH Column Density} For the sources where we have spectral coverage to search for OH 18~cm absorption lines, we measure the column density using the 1667~MHz OH transition, which places the strongest upper limit on the derived column densities in our OH search, given by \begin{equation} N{\rm _{OH, 3\sigma}} < X_{1667~ \rm MHz} \ \textrm{cm}^{-2} \times 1.0645 \ \frac{T_x}{f} \ \tau_{3\sigma} \ {\rm d}v, \end{equation} where $X_{1667~ \rm MHz} = 2.38 \times 10^{14}$ \citep{curran08}, $T_x$ is the excitation temperature of the OH line, and the integrated optical depth $\tau$ ($\tau_{3\sigma}$ for non-detections) is calculated in the same way as for the {H\,{\textsc i}}\ 21~cm detections (or non-detections). When the 1667~MHz line is unobservable due to RFI or is located outside our observing frequencies, we compute the OH column density using the 1612 or 1720~MHz transition, with $X_{1720~ \rm{MHz}} = X_{1612~ \rm{MHz}} = 9 \times X_{1667~ \rm MHz} = 2.14 \times 10^{15}$. The observational details for the OH search toward known 21~cm absorbers are listed in Table~\ref{tab:OH}. No assumption is made about the excitation temperature $T_x$. We assume the OH absorbing systems have the same 30~{\hbox{km s$^{-1}$}}\ width as the {H\,{\textsc i}}\ absorption profiles. \section{Analysis}\label{sec:analysis} \subsection{The {H\,{\textsc i}}\ Column Density Frequency Distribution Function, $f(N_{\textrm {HI}},X)$}\label{sec:fN} The parameter $f(N_{\textrm {HI}},X)$ describes the distribution of {H\,{\textsc i}}\ column density in galaxies per comoving `absorption distance' path length $\Delta X$.
$f(N_{\textrm {HI}},X)$ is calculated \citep{prochaska05} as \begin{equation}\label{eq:fN} f(N_{\textrm {HI}},X) = \frac{\lambda_{\rm absorbers}}{\Delta N_{\textrm {HI}} \Delta X} , \end{equation} where $\lambda_{\rm absorbers}$ is the total number of absorption systems within each column density interval $N$ to $N + dN$ and $\Delta X$ is the comoving redshift path over which the absorption systems can be detected for a given column density sensitivity. The absorption distance $\Delta X$ is calculated \citep{wolfe05} as \begin{equation}\label{eq:X} \Delta X = \int_{\rm z_{min}}^{\rm z_{max}} \frac{\mathrm{d}z \ (1+z)^2}{\sqrt{(1+z)^2~(1+z \ \Omega_m) - z~(z+2)~\Omega_{\Lambda}}~}, \end{equation} integrated from $z_{\rm min}$ to $z_{\rm max}$ over every redshift interval searched. $\Delta X$ is crucial in estimating the frequency distribution per unit column density and the cosmological mass density in neutral gas from the damped systems we are sensitive to in our survey. Our survey has a total redshift region searched of $\Delta z=88.64$, corresponding to a comoving absorption distance $\Delta X=159.5$. We calculate $f(N_{\textrm {HI}},X)$ for the nine detected intervening 21~cm absorbers with a column density $N_{\rm HI}$ in interval $\Delta N_{\rm HI}$ using Eq.~\ref{eq:fN}. We assume that $\Delta X$ does not vary with $N_{\rm HI}$. Table \ref{tab:fN} lists the values of $f(N_{\textrm {HI}},X)$ for $T_s/f=100, 250, 500$, and 1000~K along with the comoving absorption path length $\Delta X$ to which we are sensitive in each column density interval bin. The intrinsic detection 3C216 is not included in the $f(N_{\textrm {HI}},X)$ calculation.
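The absorption distance in Eq.~\ref{eq:X} is readily evaluated numerically. A minimal Python sketch, assuming illustrative flat-cosmology parameters ($\Omega_m=0.3$, $\Omega_\Lambda=0.7$) that may differ from the survey's adopted values:

```python
def delta_x(z_min, z_max, omega_m=0.3, omega_l=0.7, n=10000):
    """Comoving absorption distance Delta X over [z_min, z_max],
    evaluated with the trapezoidal rule."""
    def integrand(z):
        # For a flat universe the denominator equals E(z) = sqrt(Om*(1+z)^3 + OL)
        e_z = ((1 + z) ** 2 * (1 + z * omega_m) - z * (z + 2) * omega_l) ** 0.5
        return (1 + z) ** 2 / e_z

    h = (z_max - z_min) / n
    zs = [z_min + i * h for i in range(n + 1)]
    ys = [integrand(z) for z in zs]
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

# Delta X grows faster than Delta z: for z = 0 -> 1, Delta X ~ 1.7
```

Summing such segments over every RFI-free redshift interval searched yields the survey total (here, $\Delta X = 159.5$ from $\Delta z = 88.64$).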
For the case of non-detections, limiting column density sensitivity estimates become upper limits, calculated as \begin{equation} f(N_{\textrm {HI}},X) < \frac{\lambda_{\rm max}}{\Delta N_{\textrm {HI}} \Delta X}, \end{equation} where $\lambda_{\rm max}$ is the Poisson upper limit on the detection rate of absorption systems in a given column density limit bin. The 95\% confidence upper limit on the Poisson rate is $\lambda_{\rm max} = 3.0$ for column density sensitivity bins with no detections \citep{gehrels86}. Figure~\ref{fig:fN} shows our upper limits on the column density frequency distribution function, consistent with previous studies of $f(N_{\textrm {HI}},X)$ in high redshift DLAs \citep{prochaska09, noterdaeme12}, low/intermediate redshift DLAs \citep{rao17}, and low redshift 21~cm emission \citep{zwaan05}. Due to the unconstrained nature of the hydrogen gas spin temperature, we compare the $f(N_{\textrm {HI}},X)$ distribution calculated at spin temperatures of 100, 250, 500, and 1000~K against these four prior empirical measurements to enable a statistical constraint on the spin temperature to source covering fraction ratio $T_s/f$. An increase in the spin temperature for the {H\,{\textsc i}}\ column density limit measurements shifts the data to both larger column densities and smaller $f(N_{\rm HI},X)$ values. \begin{figure*} \includegraphics[scale=0.93]{new_fNplot_4temps_merge_consttemp-eps-converted-to.pdf} \vspace{-10pt} \caption{ The column density frequency distribution function $f(N_{\textrm {HI}},X)$ versus {H\,{\textsc i}}\ column density for varying values of spin temperature and covering factor $T_s/f$ for the detections and upper limits. Black points represent the measurements with an assumed spin temperature of $T_s/f=100$~K, pink represents $T_s/f=250$~K, orange represents $T_s/f=500$~K, and purple represents $T_s/f=1000$~K.
The median value of $T_s/f$ for the detections with known measurements in the survey is $T_s/f=800$~K (Table~\ref{tab:detections}). The horizontal bars indicate the column density range spanned by each value. Open symbols represent column density bins with at least one 21~cm absorber detected and bins with a downward pointing arrow are non-detections with upper limits to the measured column density. The upper limits show the 95\% Poisson confidence level ($\lambda_{\rm max}=3.0$) for bins with no absorption lines. The short dotted black line (PW09) indicates the double power-law fit by \citet{prochaska09} for absorption systems at $z=2-5$, the long dashed black line (N12) indicates the empirical relation for $z=2-3.5$ DLAs from SDSS-III DR9 by \citet{noterdaeme12}, the gray dotted line (R17) shows the power-law fit to ultraviolet measurements of Ly$\alpha$ from $0.11 < z < 1.65$ by \citet{rao17}, and the solid gray line (Z05) is the $\Gamma$-function fit by \citet{zwaan05} to 21~cm emission observations at $z=0$. Our results for $f(N_{\textrm {HI}},X)$ are consistent with earlier $f(N_{\textrm {HI}},X)$ estimates at both low and high redshifts. We use our detections and upper limit measurements to constrain the range of the spin temperature of the absorbing gas. Our column density measurements and limits are consistent with {H\,{\textsc i}}\ spin temperatures up to 1000~K given prior literature measurements in the same redshift range. Our observations support lower $T_s/f$ values in the highest column density bins in order to reproduce the power-law break in $f(N_{\textrm {HI}},X)$ at $N_{\rm HI}>10^{21}$~cm$^{-2}$ seen in prior literature measurements. \label{fig:fN}} \end{figure*} Despite being dominated by non-detections, our 21~cm observations are able to accurately describe the $f(N_{\rm HI},X)$ distribution even with the ill-constrained nature of the spin temperature.
The $f(N_{\rm HI},X)$ measurements for low column systems ($N_{\rm HI}<10^{21.5}$~cm$^{-2}$) are consistent with prior estimates of $f(N_{\textrm {HI}},X)$ over a range of temperatures from $T_s/f=100-1000$~K. The upper limits best agree with prior fits to $f(N_{\textrm {HI}},X)$ for gas with spin temperatures warmer than $T_s/f=500$~K. For sub-DLA systems ($N_{\rm HI}<2\times10^{20}$~cm$^{-2}$), our limits and measurements are consistent with the low redshift 21~cm emission measurements from \citet{zwaan05}. While overlapping with the redshift range of our survey, the observations from \citet{rao17} do not extend to sub-DLA absorbers and place no additional constraint on $T_s/f$. By contrast, in order to reproduce the turnover in $f(N_{\textrm {HI}},X)$ above $N_{\rm HI}>10^{22}$~cm$^{-2}$, our data do not support these high column absorbers residing in systems with spin temperatures significantly warmer than $\sim250$~K. These high column absorbers are inconsistent with prior measurements from \citet{zwaan05, prochaska09, noterdaeme12} at all values above $T_s/f=250$~K; the same motivating constraint for adopting a spin temperature in these absorbers of $T_s/f\sim250$~K is reflected in our constraint of $\Omega_{\textrm {HI}}$ (Section~\ref{sec:omega}) as well. While the assumption of a constant and universal spin temperature across all of our absorbers is likely too simplified, \citet{darling11} shows that a random distribution of temperatures between $100-1000$~K gives results very similar to a constant value of $T_s/f=550$~K. This temperature range is consistent with the temperatures we find for agreement with prior empirical fits for absorbers and upper limits with column densities below $N_{\rm HI}<10^{21.5}$~cm$^{-2}$.
Our results suggest that an increase in column density sensitivity, along with larger redshift coverage, should place more meaningful constraints on the spin temperature of {H\,{\textsc i}}\ absorbing systems, especially toward the lower column systems where there still remains little constraint on $T_s/f$ from our data. \subsection{The Cosmological Mass Density of Neutral Gas, $\Omega_{\textrm {HI}}$}\label{sec:omega} The {H\,{\textsc i}}\ gas mass density $\Omega_{\textrm {HI}}$ describes the total {H\,{\textsc i}}\ column density per unit absorption distance, obtained from the first moment of the column density frequency distribution $f(N_{\textrm {HI}},X)$. The {H\,{\textsc i}}\ gas mass density $\Omega_{\textrm {HI}}$ is a useful parameter to express how the {H\,{\textsc i}}\ gas content of galaxies evolves as a function of cosmic time. We estimate $\Omega_{\textrm {HI}}$ from 21~cm detections \citep{prochaska05} as \begin{equation} \Omega_{\textrm {HI}} \equiv \frac{H_{\circ} \ m_{\rm H}}{c \rho_{crit}} \int_{N_{\rm min}}^{N_{\rm max}} N_{\rm HI} \ f(N_{\rm HI},X) \ \rm{d}N, \end{equation} where $f(N_{\rm HI},X)$ is described and calculated in Section~\ref{sec:fN}, $m_{\textrm H}$ is the mass of the hydrogen atom, $\rho_{crit}$ is the critical mass density, $c$ is the speed of light, and $H_{\circ}$ is the Hubble constant. The integral can be evaluated discretely, most commonly by setting the limits to $N_{\rm min}=2\times10^{20}$~cm$^{-2}$ and $N_{\rm max}=\infty$ to find the cosmological mass density of neutral gas associated with DLAs, $\Omega_{\rm DLA}$. $\Omega_{\rm DLA}$ may be significantly less than $\Omega_{\textrm {HI}}$ if absorbers below the damped Ly$\alpha$ threshold contribute to the global {H\,{\textsc i}}\ mass density \citep{peroux03}.
Sub-DLAs ($19.0 \leq \log N_{\rm HI} < 20.3$) contribute only 10--20\% of the total neutral gas content of the universe at redshifts $2 < z < 5$, and thus these sub-DLAs are a small contribution to the total {H\,{\textsc i}}\ budget of the universe over this range \citep{tberg19}. $\Omega_{\textrm {HI}}$ integrated from damped Ly$\alpha$ systems thus best defines the predominantly neutral gas reservoir that is available for star formation at high redshift. At the smallest value of $T_s/f=100$~K that we consider in this study, two of our intervening 21~cm absorbers do not meet the classical definition of a DLA. For consistency, we refer to the neutral mass density of gas as $\Omega_{\textrm {HI}}$, the total gas mass density associated with {H\,{\textsc i}}\ atoms regardless of the column density of the absorbing system from which they arise. The integral for the mass density associated with all {H\,{\textsc i}}\ atoms within our 21~cm absorbing systems is integrated from the lowest column density of our nine intervening absorbers ($N_{\rm HI}=0.65\times10^{20}$~cm$^{-2}$) to $N_{\rm max}=\infty$. The discrete limit of $\Omega_{\textrm {HI}}$ in our survey is estimated as: \begin{equation}\label{eq:omega} \Omega_{\textrm {HI}} = \frac{H_{\circ} \ m_{\textrm H}}{c \rho_{crit}} \frac{\sum_i N_{i, \textrm {HI}}}{\Delta X}, \end{equation} where $\sum_i N_{i, \textrm {HI}}$ is the sum of the column density measurements for each system in a given redshift interval $\Delta X$ over which each absorber $N_{i, \textrm {HI}}$ could be detected. The intrinsic detection in 3C216 is not included in the $\Omega_{\rm HI}$ calculation. We assume no contribution from helium or molecular hydrogen in our calculations of $\Omega_{\rm HI}$. To address the statistical error, which is dominated by the small number of absorbers in each redshift bin, we calculate 1$\sigma$ errors in $\Omega_{\rm HI}$ from 10,000 bootstrap samples.
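Since $\rho_{crit} = 3H_\circ^2/(8\pi G)$, the prefactor in Eq.~\ref{eq:omega} reduces to $8\pi G m_{\rm H}/(3 H_\circ c)$. A hedged sketch of the discrete estimator, using illustrative cgs constants and an assumed $H_\circ = 70$~km\,s$^{-1}$\,Mpc$^{-1}$ (which may differ from the survey's adopted cosmology):

```python
import math

# Illustrative constants in cgs units; H0 = 70 km/s/Mpc is an assumption here.
G   = 6.674e-8                    # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.6735e-24                  # hydrogen atom mass [g]
C   = 2.9979e10                   # speed of light [cm/s]
H0  = 70.0 * 1.0e5 / 3.0857e24    # Hubble constant [s^-1]

def omega_hi(column_densities_cm2, delta_x):
    """Discrete Omega_HI = (H0 m_H / c rho_crit) * sum(N_HI) / Delta X,
    with rho_crit = 3 H0^2 / (8 pi G), so the prefactor is 8 pi G m_H / (3 H0 c)."""
    prefactor = 8.0 * math.pi * G * M_H / (3.0 * H0 * C)   # ~1.4e-23 cm^2
    return prefactor * sum(column_densities_cm2) / delta_x

# e.g. two absorbers of 5e20 and 3e21 cm^-2 over Delta X = 80:
# omega_hi([5e20, 3e21], 80.0)  ->  ~6e-4
```

In practice the $1\sigma$ uncertainty would come from bootstrap resampling of the absorber list, as described above.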
Despite the evolution of $\Omega_{\rm HI}$ with redshift being rather uncertain, prior measurements of $\Omega_{\rm HI}$ at $z\sim0$ show remarkable consistency with each other and serve as anchor points for our low redshift observations. We therefore physically motivate a range of $T_s/f$ for our calculations of $\Omega_{\rm HI}$ by leveraging prior literature measurements of $\Omega_{\rm HI}$. The assumed value of $T_s/f$ affects the overall normalization of $\Omega_{\textrm {HI}}$ and is discussed further below. We show our measurements of $\Omega_{\rm HI}$ with select prior values from low redshift 21~cm emission surveys, 21~cm spectral stacking, and Ly$\alpha$ studies of DLAs \citep{zwaan05, rao06, lah07, noterdaeme12, hoppmann15, neeleman16, bird17, rhee18, hu19} in Figure~\ref{fig:omega}. We remove the mean molecular weight correction of $\mu=1.3$ (accounting for the 25\% contribution to the neutral gas from helium) from literature values where applicable. Despite only nine intervening absorption systems, we supply the first measurement of $\Omega_{\rm HI}$ from a blind intervening 21~cm absorption survey with continuous redshift coverage over a lookback time of 11~Gyr (Figure~\ref{fig:omega}). Our sensitivity to low-column systems (Figure~\ref{fig:delz}), coupled with a large total redshift region searched of $\Delta z=88.64$ ($\Delta X=159.5$; Eqn.~\ref{eq:X}), allows our measurement of $\Omega_{\rm HI}$ to be competitive with previous studies even with our low detection rate. To trace the evolution of neutral gas over cosmic time, we split our sample into two redshift bins at $z=0.69$. Five detections fall into the lower redshift bin and four detections lie beyond $z=0.69$. In the low-redshift interval of $0<z<0.69$ we calculate $\Omega_{\rm HI}/(T_s/f) = (0.21 \pm 0.10) \times 10^{-5}$~K$^{-1}$.
We then compare our $\Omega_{\rm HI}/(T_s/f)$ values with the average of prior $\Omega_{\textrm {HI}}$ measurements obtained from previous low redshift {H\,{\textsc i}}\ studies to anchor our $\Omega_{\rm HI}/(T_s/f)$ values and constrain the unknown quantity $T_s/f$. Previous 21~cm emission studies and {H\,{\textsc i}}\ spectral stacking \citep[e.g.,][]{zwaan05, hoppmann15, rhee18, hu19} show consistent values of $\Omega_{\rm HI} \sim (0.3-0.4) \times 10^{-3}$. Using the average of these previous low redshift $\Omega_{\rm HI}$ studies, we compare our derived values of $\Omega_{\rm HI}/(T_s/f)$ and constrain a spin temperature to covering factor ratio of $T_s/f \sim 175$~K. At a value of $T_s/f=175$~K, we measure $\Omega_{\rm HI} = (0.37 \pm 0.13) \times 10^{-3}$ for the low redshift interval $0<z<0.69$. In the high-redshift interval of $0.69<z<2.74$ we calculate $\Omega_{\rm HI}/(T_s/f) = (0.69 \pm 0.45) \times 10^{-5}$~K$^{-1}$. We use the same $T_s/f=175$~K constrained from prior low redshift $\Omega_{\rm HI}$ studies to calculate $\Omega_{\rm HI}$ in our high redshift bin. We report $\Omega_{\rm HI} = (1.2 \pm 0.6) \times 10^{-3}$ for $0.69<z<2.74$. This constrained value of $T_s/f=175$~K falls in the range of $T_s/f=100-1000$~K we considered for the column density frequency distribution $f(N_{\textrm {HI}},X)$ analysis (Section~\ref{sec:fN}) in this study. The small number of absorbers results in poor statistical constraints on our measured $\Omega_{\rm HI}$ and we have large uncertainties in comparison with prior observations, especially within the higher redshift interval of $0.69<z<2.74$ where our two strongest absorbers reside. We additionally measure $\Omega_{\textrm {HI}}$ at the same four spin temperatures of $T_s/f=100, 250, 500,$ and 1000~K that we used for the $f(N_{\rm HI},X)$ calculations, listed in Table~\ref{tab:fN} for each redshift bin.
This allows us to estimate upper limits to the average $T_s/f$ value of our survey consistent with prior studies in the same redshift range. In our low redshift interval, we remain consistent within the errors with prior UV-selected DLA studies \citep{rao06, rao17} for spin temperatures up to $T_s/f=500$~K ($\Omega_{\rm HI} = (1.05 \pm 0.33) \times 10^{-3}$; Table~\ref{tab:fN}). In the high redshift interval of $0.69<z<2.74$, compared against prior measurements of $\Omega_{\rm HI}$ near $z\sim1$, the median redshift of the absorbers in this interval, we can constrain the spin temperature up to a value of $T_s/f\sim250$~K while remaining consistent with prior measurements, giving $\Omega_{\rm HI} = (1.7 \pm 0.9) \times 10^{-3}$. While our value of $\Omega_{\rm HI}$ at $T_s/f=250$~K is larger than prior $\Omega_{\rm HI}$ measurements at the same redshift, it remains consistent given the large uncertainties resulting from few absorbers. The two largest column absorbers reside in this high redshift bin, and the upper constraint of $T_s/f=250$~K from prior DLA $\Omega_{\rm HI}$ measurements agrees with the constraint we placed on $T_s/f$ from the prior $f(N_{\textrm {HI}},X)$ measurements in Section~\ref{sec:fN}. \begin{figure*} \includegraphics[scale=0.93]{new_omega10absorbersB_colors_merge_v2_200K-eps-converted-to.pdf} \vspace{-20pt} \caption{ The cosmological mass density of neutral gas $\Omega_{\textrm {HI}}$ over the past 11 Gyr as a function of redshift (lookback time on the top axis), shown as pink diamonds in two redshift bins over $0<z<2.74$ for $T_s/f=175$~K; this temperature is consistent with $\Omega_{\textrm {HI}}$ obtained from prior low redshift measurements. At $T_s/f=175$~K, we measure $\Omega_{\rm HI} = (0.37 \pm 0.13) \times 10^{-3}$ for the low redshift interval $0<z<0.69$ and $\Omega_{\rm HI} = (1.2 \pm 0.6) \times 10^{-3}$ in the high-redshift interval of $0.69<z<2.74$.
The vertical error bars represent the 1$\sigma$ uncertainties of $\Omega_{\textrm {HI}}$ from 10,000 bootstrap samples and the horizontal error bars indicate the redshift bin sizes. We compare estimates of the cosmological mass density of neutral {H\,{\textsc i}}\ to prior surveys: the down-pointing triangle (Z05) at $z=0$ represents the HIPASS {H\,{\textsc i}}\ 21~cm emission survey \citep{zwaan05}, the triangles (R06) at $0<z<1.7$ represent a survey for DLAs selected as strong Mg II absorbers with the HST \citep{rao06}, the X-symbol (L07) represents a survey of {H\,{\textsc i}}\ emission from star-forming galaxies at $z=0.24$ \citep{lah07}, the right-pointing triangles (N09) at $2.2<z<5$ represent a DLA survey from the SDSS-DR9 \citep{noterdaeme12}, the hexagons (Z13) at $1.5<z<5$ are VLT/UVES measurements of DLAs \citep{zafar13}, the circle (H15) represents {H\,{\textsc i}}\ spectral stacking from the Arecibo Ultra Deep Survey at $z=0$ \citep{hoppmann15}, the left-pointing triangle (N16) represents a DLA survey with the HST from $0<z<1.5$ \citep{neeleman16}, the $+$ symbols (B17) from $2<z<5$ represent a DLA survey from the SDSS-DR12 \citep{bird17}, the star (R18) at $z=0.32$ is from {H\,{\textsc i}}\ stacking with the WSRT \citep{rhee18}, and the square (H19) at $0<z<0.11$ represents a survey to identify new DLA and Lyman-limit systems with spectral stacking from the WSRT \citep{hu19}. We have corrected all measurements to a consistent definition of $\Omega_{\textrm {HI}}$. Table~\ref{tab:fN} reports the calculations of $\Omega_{\textrm {HI}}$ for a range of $T_s/f$. \label{fig:omega}} \end{figure*} \section{Discussion}\label{sec:discussion} 21~cm emission line measurements at low redshift provide mutually consistent constraints on the {H\,{\textsc i}}\ mass density in the local Universe \citep{zwaan05, martin10, hoppmann15}.
DLA surveys at redshifts $z>2$ suggest an increase in the {H\,{\textsc i}}\ gas mass density, implying significant evolution of {H\,{\textsc i}}\ gas from $z\sim2$ to $z=0$ \citep{storrie-lombardi00, peroux03, prochaska05, noterdaeme09, noterdaeme12, crighton15, bird17}. Although measurements at $z \gtrsim 2$ consistently tend to show larger values of $\Omega_{\rm HI}$ than those at low redshifts, $\Omega_{\rm HI}$ measurements at intermediate redshift $0.4\lesssim z \lesssim2$ are inconsistent and there is still much controversy over the exact form and evolution of $\Omega_{\rm HI}$ over cosmic time \citep[see, e.g.,][and references therein]{crighton15, rhee18}. Some of the discrepancies in this redshift range likely come from the different techniques used for target acquisition and subsequent measurement of $\Omega_{\textrm {HI}}$ in high redshift DLA surveys, such as potential biases arising from target pre-selection. Pre-selecting sightlines based on the strength of the metal absorption is commonly done at intermediate redshifts to improve the DLA detection rate with space-based observatories; however, this may bias the detection rate of {H\,{\textsc i}}\ absorbers, especially against those with low column densities and metallicities \citep[][and references therein, but see Rao et al. 2017]{dessauges-zavadsky09, neeleman16, berg17}. Despite uncertainty in the spin temperature of intervening {H\,{\textsc i}}\ 21~cm absorbers, our measurements allow us to bridge the redshifted DLA surveys with low redshift 21~cm survey results and to place constraints on $\Omega_{\textrm {HI}}$ when anchored to $z\sim0$ {H\,{\textsc i}}\ 21~cm surveys \citep[e.g., ][]{zwaan05, hoppmann15, mjones18}.
We find agreement between our low redshift $\Omega_{\textrm {HI}}$ value from $0<z<0.69$ and previous $\Omega_{\textrm {HI}}$ measurements in the same redshift range for spin temperatures in the range 100~K to 500~K (prior low redshift $\Omega_{\textrm {HI}}$ measurements constrain our survey to $T_s/f=175$~K), supporting a mild evolution in $\Omega_{\rm HI}$ over the redshift range from $z=0$ to $z\sim2$. Several of our absorbers have a constraint on $T_s/f$ from Ly$\alpha$ observations (Table~\ref{tab:detections}), with a median value of $T_s/f\sim 800$~K. For the lower redshift absorbers ($0<z<0.69$), this corresponds to $\Omega_{\rm HI} = (1.7 \pm 0.5) \times 10^{-3}$, which remains consistent with the larger values of $\Omega_{\textrm {HI}}$ at the low redshift range \citep{rao06, lah07}. A spin temperature of $T_s/f\sim 800$~K for the high redshift bin gives a measurement of $\Omega_{\rm HI} = (5.4 \pm 2.7) \times 10^{-3}$, significantly larger than prior $\Omega_{\textrm {HI}}$ observations for absorbers at $z>1$. Theoretical understanding of the evolutionary trend of $\Omega_{\textrm {HI}}$ remains incomplete, and our survey demonstrates the ability of 21~cm absorption surveys to assist in resolving this tension and shaping future theoretical models of the evolution of $\Omega_{\textrm {HI}}$ \citep[e.g.,][]{kimhs15, dave17, lagos18, diemer19, hu20}. Despite DLAs tracing the bulk of neutral gas at $2<z<3$ \citep[e.g.,][]{prochaska05,noterdaeme09, crighton15} and expectations that the abundance of hydrogen is higher during the era of cosmic noon than in the present-day universe \citep{peroux01}, 21~cm absorption line surveys at high redshift do not have the detection rate seen in low redshift surveys. There are several factors that negatively impact the detection rates of 21~cm absorbers. The biggest detriment to the detection of new 21~cm absorbing systems in our survey is irremovable RFI in our single-dish observations.
RFI generally worsens toward lower frequencies and renders large areas of the radio spectrum unusable, especially toward high redshift. Elevated spectral noise, coupled with radio-faint sources, results in weak constraints on our measured integrated optical depths, lowering our sensitivity to detect absorption systems. While there is no easy way to alleviate RFI in low-frequency spectral observations with single-dish telescopes such as the GBT, increased redshift coverage and sample size, as well as targeting regions with minimal RFI impact, will help future surveys to reach the sensitivity necessary to detect new 21~cm absorption systems. Interferometric data are in general less susceptible to RFI than single-dish data. ASKAP Band 1 (700--1000~MHz) is essentially free of RFI and will make large blind surveys, such as FLASH \citep{allison16, allison20}, more effective than previous searches. The spin temperature to covering factor degeneracy $T_s/f$ (Eqn.~\ref{eq:NHI}) may further impact the sensitivity of observations, worsening toward high redshifts. Poor covering of the background radio source decreases the sensitivity to the absorber, negatively impacting the detection of new absorbers. Absorbers at higher redshifts suffer more from geometric effects, which lower the incidence of the absorber covering the background source \citep{curran12a}. Likewise, the spin temperature may also be systematically higher at higher redshifts, decreasing the measured absorber optical depth with increasing redshift \citep{kanekar03, srianand12, nroy13, curran19_mnras}. Neither the spin temperature of the absorber nor the covering factor of the background source is constrained for most {H\,{\textsc i}}\ 21~cm absorption systems.
As the spin temperature cannot be constrained with observations of {H\,{\textsc i}}\ 21~cm absorption alone, studies typically assume a canonical value in the range $T_{s}/f=100-1000$~K, and these appear to be reasonable estimates for intervening {H\,{\textsc i}}\ absorption systems at redshifts $z\lesssim2$ \citep{kanekar14_mnras}. Our intervening 21~cm absorbers support this range in $T_{s}/f$ values through comparison of our $f(N_{\textrm {HI}},X)$ results (Figure~\ref{fig:fN}) with expectations from prior temperature-independent methods. Our two largest absorbers do not have prior constraints on $T_{s}/f$ and are more consistent with prior $f(N_{\textrm {HI}},X)$ measurements for lower spin temperature values of $T_s/f\sim250$~K. This suggests that the $T_s/f$ distribution may depend on the {H\,{\textsc i}}\ column density of the absorber, where absorbers with the largest {H\,{\textsc i}}\ column densities contain a larger contribution of cooler gas. Deviations toward higher temperatures and/or lower covering factors decrease sensitivity and bias against detection of lower column systems, and are an important factor in our lack of new detections. Lastly, source selection criteria may bear some of the responsibility, worsening at high redshifts, where optical biases select galaxies that are less likely to be obscured by gas and dust. Future ``blind'' 21~cm absorption surveys along dusty sightlines may alleviate the bias of selecting against UV-faint and dusty sources, where cold gas is more likely along those lines of sight, otherwise missed by optical/UV surveys \citep[e.g., ][]{curran11b, yan12, yan16, glowacki19}. Such surveys may be more fruitful in selecting cold gas systems readily available for detection in 21~cm absorption, as well as rarer molecular absorption systems, in order to provide adequate statistics to help constrain factors of cosmological importance.
\section{Summary and Conclusions}\label{sec:conclusions} We present the results of a blind survey for intervening {H\,{\textsc i}}\ 21~cm absorption along the line of sight to 260 radio sources in the redshift range $0<z<2.74$. We make 2147 individual RFI-free spectral measurements and we re-detect ten previously known 21~cm absorbers in our sample (9 intervening, 1 intrinsic), with no new detections of intervening 21~cm absorption systems. We place a $3\sigma$ upper limit on the {H\,{\textsc i}}\ column density for all searchable spectra not detected in 21~cm absorption in 2137 individual spectral regions across the sample for a total redshift searched path of $\Delta z=88.64$ (comoving path of $\Delta X=159.5$). We record a mean 21~cm optical depth for the non-detections of $\tau_{3\sigma}=0.0146$, corresponding to 93\% of upper limit spectral measurements lying below the DLA column density threshold under the assumption of a spin temperature and covering factor for the absorbers of $T_s/f=100$~K. At $T_s/f=1000$~K, 15\% of upper limit spectral measurements are below the DLA column density limit. 99\% of our spectral measurements are constrained to $z<1$. Sixteen of our sources are searched for OH 18~cm absorption at the redshift of the known 21~cm absorbing systems. We re-detect the 1667~MHz line toward PKS~1830$-$211. We do not re-detect the 1665 or 1667~MHz absorption features toward 1413+135 as detected by \citet{kanekar02}, despite smoothing our spectral measurements to the same resolution. We do not have adequate signal and sensitivity to re-detect the two satellite OH lines toward 1413+135 at 1612 and 1720~MHz detected at the same redshift by \citet{darling04} and \citet{kanekar04_phrvl}. We place upper limits on the OH column density for the remaining 12 sources with usable OH data.
While not all of our OH column density limits are significant due to RFI and/or bad spectral measurements, they provide a reference point for the sensitivity necessary in future observations to detect possible OH absorbing systems. We place measurements and upper limit estimates on the {H\,{\textsc i}}\ column density frequency distribution function $f(N_{\rm HI},X)$. We estimate $f(N_{\rm HI},X)$ at spin temperatures of $T_s/f = 100, 250, 500$, and 1000~K. Through comparison with prior empirical measurements from 21~cm emission line surveys at $z\sim0$ \citep{zwaan05}, ultraviolet DLA surveys at $z<1.65$ \citep{rao17}, and optical DLA surveys at $z>2$ \citep{prochaska09, noterdaeme12}, we constrain the spin temperature of the {H\,{\textsc i}}\ absorption systems and place limits on the spin temperature of our non-detection systems. We find that our absorbers and upper limit measurements are consistent with prior measurements in the range $T_s/f = 100-1000$~K over the redshift range of our survey. We find that the largest {H\,{\textsc i}}\ column density absorbers ($N_{\rm HI} > 10^{22}$~cm$^{-2}$) require spin temperatures $T_s/f\lesssim250$~K in order to reproduce the turnover in the $f(N_{\rm HI},X)$ distribution at high {H\,{\textsc i}}\ column densities observed in prior temperature-independent observations. This suggests that our non-detections and low {H\,{\textsc i}}\ column density absorbers have larger fractional contributions of warmer gas than the high {H\,{\textsc i}}\ column density absorbers. We estimate the cosmological mass density of neutral gas, $\Omega_{\rm HI}$; ours is the largest blind survey of redshifted {H\,{\textsc i}}\ 21~cm absorbers to constrain $\Omega_{\rm HI}$ continuously over the redshift range $0<z<2.74$. 
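The $\Omega_{\rm HI}$ determination rests on two standard ingredients: the comoving absorption path, $\mathrm{d}X/\mathrm{d}z = (1+z)^2\,H_0/H(z)$, and the mass density implied by the summed column densities over that path. A minimal sketch, assuming a flat $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $H_0=70$~km\,s$^{-1}$\,Mpc$^{-1}$ cosmology (the adopted parameters may differ) and a purely hypothetical column density sum:

```python
# Sketch (assumed cosmology and constants; not the paper's exact pipeline):
#   Omega_HI = (m_H * H0) / (c * rho_crit) * sum(N_HI) / Delta_X,
# with no helium correction. The column density sum is illustrative.
import math

MPC_CM = 3.0857e24                   # cm per Mpc
H0 = 70.0 * 1.0e5 / MPC_CM           # Hubble constant in s^-1 (assumed)
G = 6.674e-8                         # gravitational constant (cgs)
M_H = 1.6726e-24                     # hydrogen atom mass (g)
C = 2.9979e10                        # speed of light (cm/s)
RHO_CRIT = 3.0 * H0**2 / (8.0 * math.pi * G)  # critical density (g cm^-3)

def dX_dz(z, om=0.3, ol=0.7):
    """Comoving path per unit redshift: dX/dz = (1+z)^2 / E(z)."""
    return (1.0 + z)**2 / math.sqrt(om * (1.0 + z)**3 + ol)

def omega_hi(sum_nhi_cm2, delta_X):
    """Neutral hydrogen mass density from summed N_HI over path Delta X."""
    return (M_H * H0 / (C * RHO_CRIT)) * sum_nhi_cm2 / delta_X

# Each 21 cm-derived N_HI scales linearly with the assumed Ts/f, so
# Omega_HI at another spin temperature is a simple rescaling:
base = omega_hi(7.0e21, 100.0)       # hypothetical sum at Ts/f = 100 K
print(base, base * 175.0 / 100.0)    # rescaled to Ts/f = 175 K
```

The linear $T_s/f$ dependence is what allows anchoring to temperature-independent low-redshift $\Omega_{\rm HI}$ measurements to yield a statistical mean spin temperature.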
We estimate $\Omega_{\rm HI}$ at spin temperatures of $T_s/f = 100, 250, 500$, and 1000~K in two redshift bins, $0<z<0.69$ and $0.69<z<2.74$, to constrain the relative change of $\Omega_{\rm HI}$ with redshift and infer a possible evolution between the low and high redshift absorbers. We place statistical constraints on $T_s/f$ by comparing our low redshift $\Omega_{\rm HI}$ value with prior 21~cm emission and 21~cm absorption stacking studies. With previous $z<0.69$ studies of $\Omega_{\rm HI}$ serving as anchor points, we estimate a mean $T_s/f=175$~K for our calculations of $\Omega_{\rm HI}$. At $T_s/f=175$~K, we measure $\Omega_{\rm HI} = (0.37 \pm 0.13) \times 10^{-3}$ for the low redshift interval $0<z<0.69$ and $\Omega_{\rm HI} = (1.2 \pm 0.6) \times 10^{-3}$ for the high redshift interval $0.69<z<2.74$. Our values remain consistent with prior studies up to $T_s/f = 250-500$~K. Although our estimate of the {H\,{\textsc i}}\ mass density at high redshift ($0.69<z<2.74$) carries large uncertainties, it agrees with previous estimates, owing to the large redshift search path and sensitivity to sub-DLA absorption systems. Our results support an overall decrease in the neutral gas density over the past $\sim$11~Gyr between the two redshift bins we use to constrain $\Omega_{\rm HI}$. This demonstrates that a redshifted blind 21~cm absorption survey can complement and connect high redshift Ly$\alpha$ studies to the 21~cm emission studies that anchor $\Omega_{\rm HI}$ at $z\sim0$, and will be insightful for forthcoming SKA pathfinder surveys. \section*{Acknowledgements} We are grateful for the valuable comments on this work by an anonymous referee, which improved the scientific outcome and quality of the paper. The authors thank the staff members at the Green Bank Telescope for their assistance and support. 
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. This research has made use of NASA's Astrophysics Data System Bibliographic Services. KG gratefully acknowledges the support of Lisa Kewley's ARC Laureate Fellowship (FL150100113). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{astropy13, astropy18}. The authors thank the invaluable labor of the maintenance and clerical staff at their institutions, whose contributions make scientific discoveries a reality. Facility: {GBT.}\\ Software: {ASTROPY \citep{astropy13, astropy18}, GBTIDL \citep{marganian06}.}\\ Data availability: The data underlying this article are available in the National Radio Astronomy Observatory Archive at https://archive.nrao.edu/archive/advquery.jsp, and can be accessed with project codes 02C-008 and 04C-018. \begin{landscape} \begin{table} \tiny \caption{Journal of Observations. Columns list the source name, right ascension, declination (J2000 coordinates), optical redshift of the background radio object, total integration time spent on the reduced spectrum for each receiver bandpass, the average measured spectral noise level for each receiver, the flux and column density limits $N_{\rm HI} / (T_s/f)$ measured as close as possible to 1420, 1000, 800, and 450~MHz (exact frequency determined by RFI conditions), and the total redshift searched per source over all receivers. The L-band covers the range of $0<z<0.235$ (1150-1420~MHz), PF2 covers the range of $0.150<z<0.570$ (905-1235~MHz), PF1-800 covers $0.536<z<1.10$ (675-925~MHz), PF1-600 covers $1.06<z<1.79$ (510-690~MHz), and PF1-450 covers $1.58<z<2.74$ (380-550~MHz). 
\newline $a$ -- Only two sources were observed with the GBT's PF1-600 receiver, 0016+731 and SBS~0804+499 with 516 and 596 seconds, respectively. The entire spectral range for 0016+731 was affected by RFI; the average measured spectral noise level for SBS~0804+499 was 16~mJy, with a column density limit of $N_{\rm HI} < 3.6\times10^{20}$~cm$^{-2}$ at 600~MHz. \newline $b$ -- This source was observed in the 800~MHz band in both GBT programs. The observations from the 02C--008 program suffer less RFI and are used for this band throughout this paper. \newline $c$ -- This source was observed in the 800~MHz band in both GBT programs. The observations from the 04C--018 program suffer less RFI and are used for this band throughout this paper. \newline This table is available in its entirety in machine-readable form. } \label{tab:obs} \hskip-1.5cm\begin{tabular}{lllccccccccccccccccccccc} \hline & & & & \multicolumn{4}{c}{1.4 GHz} & & \multicolumn{4}{c}{1.0 GHz} & & \multicolumn{4}{c}{800 MHz} & & \multicolumn{4}{c}{450 MHz} & \\ \cline{5-8} \cline{10-13} \cline{15-18} \cline{20-23} \noalign{\smallskip} & & & & \multicolumn{4}{c}{$z=0-0.235$} & & \multicolumn{4}{c}{$0.150-0.570$} & & \multicolumn{4}{c}{$0.536-1.10$} & & \multicolumn{4}{c}{$1.58-2.74$} & \vspace{8pt}\\ Source & $\alpha$ & $\delta$ & $z$ & Time & $\overline{rms}$ & S$_{\rm 1.4 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 1.0 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.8 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.45 GHz}$ & $N_{\rm HI}$ & $\Delta z_{\rm total}$ \\ & (J2000) & (J2000) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & \\ \hline 0016+731$^a$ & 00 19 45.79 & +73 27 30.0 & 1.781 & 0 & ... & ... & ... 
& & 392 & 5.4 & 1.162 & $<$0.65 & & 268 & 12 & 1.058 & $<$2.1 & & 0 & ... & ... & ... & 0.31 \\ 3C013 & 00 34 14.5 & +39 24 17 & 1.351 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 4.8 & 3.476 & $<$0.22 & & 0 & ... & ... & ... & 0.39 \\ 3C014 & 00 36 06.5 & +18 37 59 & 1.4690(10) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1494 & 4.6 & 3.483 & $<$0.21 & & 0 & ... & ... & ... & 0.4147 \\ B3 0035+413 & 00 38 24.84 & +41 37 06.0 & 1.353(3) & 329 & 5.1 & 0.692 & $<$1.6 & & 398 & 4.3 & 0.614 & $<$1.2 & & 278 & 6.6 & 0.589 & $<$1.7 & & 0 & ... & ... & ... & 0.40 \\ LBQS 0106+0119$^b$ & 01 08 38.77 & +01 35 00.3 & 2.099(5) & 279 & 6.2 & 2.314 & $<$0.59 & & 0 & ... & ... & ... & & 1464 & 4.8 & 2.715 & $<$0.27 & & 564 & 9.6 & 2.829 & $<$0.58 & 0.8304 \\ UM310 & 01 15 17.10 & $-$01 27 04.6 & 1.365 & 260 & 4.7 & 1.052 & $<$0.85 & & 388 & 4.3 & 1.031 & $<$0.66 & & 448 & 8.2 & 1.016 & $<$1.3 & & 0 & ... & ... & ... & 0.61 \\ 0113$-$118 & 01 16 12.52 & $-$11 36 15.4 & 0.67 & 258 & 5.5 & 1.787 & $<$0.64 & & 360 & RFI & ... & ... & & 398 & 4.5 & 1.977 & $<$0.37 & & 0 & ... & ... & ... & 0.27 \\ 3C036 & 01 17 59.5 & +45 36 22 & 1.301 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1487 & 5.4 & 2.342 & $<$0.36 & & 0 & ... & ... & ... & 0.3654 \\ 0119+041 & 01 21 56.86 & +04 22 24.7 & 0.637 & 277 & 5.3 & 1.44 & $<$0.72 & & 382 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.22 \\ HB89 0119$-$046 & 01 22 27.9 & $-$04 21 27 & 1.9250(10) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1482 & 5.1 & 1.99 & $<$0.41 & & 0 & ... & ... & ... & 0.3934 \\ UM321$^b$ & 01 25 28.84 & $-$00 05 55.9 & 1.07481(15) & 259 & 4.4 & 1.543 & $<$0.59 & & 0 & ... & ... & ... & & 1490 & 4.5 & 1.334 & $<$0.54 & & 0 & ... & ... & ... & 0.6189 \\ 0133+476 & 01 36 58.59 & +47 51 29.1 & 0.859 & 398 & 15 & 2.294 & $<$0.93 & & 398 & 6.8 & 2.061 & $<$0.52 & & 293 & 6.5 & 1.883 & $<$0.54 & & 0 & ... & ... & ... 
& 0.71 \\ 3C048 & 01 37 41.30 & +33 09 35.1 & 0.367 & 557 & 6.7 & 17.4 & $<$0.058 & & 378 & 4.9 & 21.4 & $<$0.037 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.18 \\ 0138$-$097 & 01 41 25.83 & $-$09 28 43.7 & 0.5 & 298 & 5.1 & 0.661 & $<$1.5 & & 0 & ... & ... & ... & & 298 & 9.5 & 0.765 & $<$1.9 & & 0 & ... & ... & ... & 0.35 \\ 0149+218 & 01 52 18.06 & +22 07 07.7 & 1.32 & 687 & 5.7 & 1.151 & $<$1.2 & & 687 & 4.8 & 1.454 & $<$0.58 & & 283 & 6.5 & 1.528 & $<$0.94 & & 0 & ... & ... & ... & 0.41 \\ 4C+15.05 & 02 04 50.41 & +15 14 11.0 & 0.405 & 359 & 5.2 & 4.182 & $<$0.17 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.26 \\ 0202+319 & 02 05 04.92 & +32 12 30.1 & 1.466 & 398 & 9.8 & 0.656 & $<$3.3 & & 586 & 5.8 & 0.695 & $<$1.3 & & 283 & RFI & ... & ... & & 0 & ... & ... & ... & 0.32 \\ 0212+735 & 02 17 30.81 & +73 49 32.6 & 2.367 & 0 & ... & ... & ... & & 270 & 6 & 2.04 & $<$0.46 & & 283 & RFI & ... & ... & & 528 & 13 & 1.65 & $<$1.2 & 0.13 \\ 3C066A & 02 22 39.61 & +43 02 07.8 & 0.444 & 380 & 8.7 & 2.226 & $<$0.72 & & 299 & 8.3 & 2.83 & $<$0.58 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.16 \\ B2 0218+357 & 02 21 05.47 & +35 56 13.7 & 0.68466(4) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 299 & 7.1 & 1.453 & $<$0.98 & & 0 & ... & ... & ... & 0.025 \\ 3C065 & 02 23 43.2 & +40 00 52 & 1.176 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 1.8 & 5.445 & $<$0.52 & & 0 & ... & ... & ... & 0.4015 \\ 0221+067 & 02 24 28.43 & +06 59 23.3 & 0.511 & 298 & 8 & 0.827 & $<$2.3 & & 296 & 5.9 & 0.961 & $<$1.0 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.22 \\ 4C+34.07 & 02 26 10.3 & +34 21 30 & 2.910(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1493 & 5.5 & 2.891 & $<$0.30 & & 0 & ... & ... & ... & 0.3853 \\ 0229+131$^b$ & 02 31 45.89 & +13 22 54.7 & 2.0590(10) & 0 & ... & ... & ... & & 299 & 9.5 & 1.515 & $<$0.93 & & 1461 & 3.8 & 1.713 & $<$0.35 & & 0 & ... & ... & ... 
& 0.5757 \\ 3C068.1 & 02 32 28.9 & +34 23 47 & 1.238 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1478 & 4.4 & 4.351 & $<$0.16 & & 0 & ... & ... & ... & 0.3926 \\ 3C068.2 & 02 34 23.8 & +31 34 17 & 1.575 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1485 & 5 & 2.45 & $<$0.32 & & 0 & ... & ... & ... & 0.3925 \\ 0234+285 & 02 37 52.40 & +28 48 09.0 & 1.213 & 268 & 10 & 1.282 & $<$1.8 & & 299 & 11 & 1.37 & $<$1.9 & & 258 & 11 & 1.45 & $<$1.2 & & 0 & ... & ... & ... & 0.32 \\ 0235+164 & 02 38 38.93 & +16 36 59.3 & 0.94 & 557 & 6.5 & 0.979 & $<$1.1 & & 298 & 7.6 & 1.028 & $<$1.2 & & 281 & 7.5 & 1.058 & $<$1.0 & & 0 & ... & ... & ... & 0.48 \\ 0237$-$027 & 02 39 45.47 & $-$02 34 40.9 & 1.116 & 298 & 8.8 & 0.315 & $<$5.1 & & 277 & RFI & ... & ... & & 383 & 5.6 & 0.287 & $<$2.9 & & 0 & ... & ... & ... & 0.41 \\ 0237$-$233 & 02 40 08.17 & $-$23 09 15.7 & 2.223 & 259 & 5.1 & 6.94 & $<$0.36 & & 0 & ... & ... & ... & & 398 & 6.2 & 5.14 & $<$0.19 & & 716 & RFI & ... & ... & 0.63 \\ PKS 0239+108 & 02 42 29.17 & +11 01 00.7 & 2.68 & 398 & 8.5 & 1.531 & $<$1.2 & & 263 & RFI & ... & ... & & 286 & 9.6 & 1.437 & $<$0.95 & & 322 & 15 & 1.406 & $<$1.0 & 0.35 \\ 0248+430 & 02 51 34.54 & +43 15 15.8 & 1.31 & 0 & ... & ... & ... & & 256 & 5.8 & 0.795 & $<$1.1 & & 272 & 6.6 & 0.784 & $<$1.9 & & 0 & ... & ... & ... & 0.29 \\ 0306+102 & 03 09 03.62 & +10 29 16.3 & 0.863 & 254 & 7.8 & 0.512 & $<$2.0 & & 273 & 8.4 & 0.502 & $<$2.7 & & 297 & 6.7 & 0.48 & $<$2.6 & & 0 & ... & ... & ... & 0.44 \\ HB89 0307+444 & 03 10 31.0 & +44 35 48 & 1.165 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 4.4 & 2.5 & $<$0.28 & & 0 & ... & ... & ... & 0.3738 \\ TXS 0311+430 & 03 14 43.6 & +43 14 05 & 2.87 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 6.3 & 2.712 & $<$0.37 & & 0 & ... & ... & ... & 0.3928 \\ HB89 0312$-$034 & 03 15 22.7 & $-$03 16 46 & 1.072 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1688 & 5.8 & 2.505 & $<$0.37 & & 0 & ... & ... & ... 
& 0.4233 \\ NGC 1275 & 03 19 48.16 & +41 30 42.1 & 0.01765(4) & 278 & 4.9 & 23.3 & $<$0.037 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.033 \\ TXS 0319+131 & 03 21 53.1 & +12 21 14 & 2.662 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1543 & 3.2 & 2.242 & $<$0.22 & & 0 & ... & ... & ... & 0.3686 \\ PKS 0327$-$241 & 03 29 54.07 & $-$23 57 08.8 & 0.895 & 267 & 5.2 & 0.682 & $<$0.93 & & 252 & RFI & ... & ... & & 278 & 6.4 & 0.902 & $<$1.3 & & 0 & ... & ... & ... & 0.44 \\ 0333+321 & 03 36 30.11 & +32 18 29.3 & 1.258 & 279 & 5.8 & 2.68 & $<$0.61 & & 299 & 9.2 & 2.8 & $<$0.57 & & 299 & 9.2 & 2.9 & $<$0.46 & & 0 & ... & ... & ... & 0.83 \\ 0336$-$019 & 03 39 30.94 & $-$01 46 35.8 & 0.852 & 597 & 7.7 & 2.098 & $<$0.61 & & 268 & 8.1 & 2.146 & $<$0.64 & & 290 & 6.8 & 2.264 & $<$0.47 & & 0 & ... & ... & ... & 0.35 \\ 0346$-$279 & 03 48 38.14 & $-$27 49 13.6 & 0.991 & 268 & 5.3 & 0.837 & $<$1.1 & & 298 & 4.8 & 0.957 & $<$0.95 & & 297 & 9.7 & 1.032 & $<$1.6 & & 0 & ... & ... & ... & 0.31 \\ 0405$-$123 & 04 07 48.43 & $-$12 11 36.7 & 0.5726(2) & 298 & 6.8 & 2.912 & $<$0.63 & & 286 & 20 & 3.99 & $<$0.89 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ 3C108 & 04 12 43.6 & +23 05 05 & 1.215 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 6.5 & 1.83(10) & $<$0.56 & & 0 & ... & ... & ... & 0.3005 \\ 0414$-$189 & 04 16 36.54 & $-$18 51 08.3 & 1.536(2) & 259 & 8 & 1.258 & $<$1.1 & & 299 & 5.2 & 1.109 & $<$0.69 & & 263 & 6.4 & 1.064 & $<$1.5 & & 0 & ... & ... & ... & 0.49 \\ 0420$-$014 & 04 23 15.80 & $-$01 20 33.1 & 0.91609(18) & 298 & 8.9 & 2.587 & $<$0.48 & & 278 & 8.6 & 2.097 & $<$0.68 & & 298 & 13 & 1.901 & $<$1.6 & & 0 & ... & ... & ... & 0.33 \\ 0422+004 & 04 24 46.84 & +00 36 06.3 & 0.31 & 398 & 9.2 & 0.494 & $<$3.7 & & 298 & 7.1 & 0.516 & $<$2.4 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.11 \\ 3C124 & 04 41 59.1 & +01 21 02 & 1.083 & 0 & ... & ... & ... & & 0 & ... & ... & ... 
& & 1494 & 4.7 & 2.058(8) & $<$0.36 & & 0 & ... & ... & ... & 0.4163 \\ 0440$-$003 & 04 42 38.66 & $-$00 17 43.4 & 0.844 & 255 & 5.8 & 3.342 & $<$0.28 & & 254 & 5.5 & 2.763 & $<$0.31 & & 274 & 6.1 & 2.372 & $<$0.53 & & 0 & ... & ... & ... & 0.38 \\ 4C+02.16 & 04 44 40.5 & +02 47 53 & 1.43 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1494 & 5.5 & 1.979(5) & $<$0.44 & & 0 & ... & ... & ... & 0.4146 \\ PKS 0454$-$234 & 04 57 03.18 & $-$23 24 52.0 & 1.003 & 268 & 8.5 & 1.654 & $<$0.97 & & 265 & 11 & 1.589 & $<$0.85 & & 298 & 12 & 1.564 & $<$3.8 & & 0 & ... & ... & ... & 0.41 \\ 0458$-$020$^b$ & 05 01 12.81 & $-$01 59 14.3 & 2.286 & 298 & 5.1 & 2.274 & $<$0.34 & & 299 & 4 & 2.302 & $<$0.30 & & 1499 & 3.9 & 2.37(4) & $<$0.26 & & 680 & 33 & 2.436 & $<$2.2 & 0.5969 \\ PKS 0500+019 & 05 03 21.20 & +02 03 04.7 & 0.58457(2) & 398 & 7.3 & 2.335 & $<$0.71 & & 299 & 6.8 & 1.736 & $<$0.62 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.33 \\ PKS 0511$-$220 & 05 13 49.11 & $-$21 59 16.1 & 1.296 & 269 & 7.2 & 0.648 & $<$2.6 & & 273 & 5.8 & 0.626 & $<$1.6 & & 297 & 7.5 & 0.619 & $<$1.9 & & 0 & ... & ... & ... & 0.36 \\ 3C138 & 05 21 09.88 & +16 38 22.1 & 0.759 & 278 & 5.7 & 10.14 & $<$0.078 & & 279 & RFI & ... & ... & & 291 & 8.9 & 12.86 & $<$0.14 & & 0 & ... & ... & ... & 0.35 \\ 0528$-$250 & 05 30 07.96 & $-$25 03 29.3 & 2.778(7) & 298 & 5.6 & 1.19 & $<$0.70 & & 298 & 18 & 1.284 & $<$3.3 & & 280 & 7.9 & 1.139 & $<$1.2 & & 687 & 26 & 1.454 & $<$4.2 & 0.82 \\ PKS 0528+134 & 05 30 56.42 & +13 31 55.1 & 2.06 & 361 & 4.8 & 1.829 & $<$0.23 & & 277 & 6.7 & 1.579 & $<$0.55 & & 298 & 8.8 & 1.414 & $<$1.0 & & 0 & ... & ... & ... & 0.50 \\ TXS 0529+483 & 05 33 15.86 & +48 22 52.8 & 1.162 & 0 & ... & ... & ... & & 298 & 7.6 & 0.514 & $<$1.9 & & 298 & 8.8 & 0.549 & $<$2.3 & & 0 & ... & ... & ... & 0.14 \\ PKS 0539$-$057 & 05 41 38.08 & $-$05 41 49.4 & 0.839(3) & 398 & 5.5 & 0.882 & $<$1.0 & & 250 & RFI & ... & ... & & 298 & 7.6 & 0.915 & $<$1.5 & & 0 & ... & ... & ... 
& 0.36 \\ 3C147 & 05 42 36.14 & +49 51 07.2 & 0.545 & 361 & 7.3 & 21.44 & $<$0.059 & & 298 & 26 & 29.35 & $<$0.16 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.17 \\ 0552+398 & 05 55 30.80 & +39 48 49.2 & 2.365(5) & 0 & ... & ... & ... & & 299 & 8.3 & 0.997 & $<$1.6 & & 298 & 9.7 & 0.89 & $<$2.1 & & 716 & 6.8 & 0.447 & $<$3.0 & 0.15 \\ 0605$-$085 & 06 07 59.70 & $-$08 34 50.0 & 0.872 & 398 & 6.6 & 2.942 & $<$0.58 & & 299 & 9.8 & 2.862 & $<$0.62 & & 516 & 7.8 & 2.828 & $<$0.51 & & 0 & ... & ... & ... & 0.53 \\ 0607$-$157 & 06 09 40.95 & $-$15 42 40.7 & 0.3226(2) & 299 & 9.5 & 2.261 & $<$0.61 & & 299 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.21 \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table} \tiny \contcaption{Journal of Observations. } \hskip-1.5cm\begin{tabular}{lllccccccccccccccccccccc} \hline & & & & \multicolumn{4}{c}{1.4 GHz} & & \multicolumn{4}{c}{1.0 GHz} & & \multicolumn{4}{c}{800 MHz} & & \multicolumn{4}{c}{450 MHz} & \\ \cline{5-8} \cline{10-13} \cline{15-18} \cline{20-23} \noalign{\smallskip} & & & & \multicolumn{4}{c}{$z=0-0.235$} & & \multicolumn{4}{c}{$0.150-0.570$} & & \multicolumn{4}{c}{$0.536-1.10$} & & \multicolumn{4}{c}{$1.58-2.74$} & \vspace{8pt}\\ Source & $\alpha$ & $\delta$ & $z$ & Time & $\overline{rms}$ & S$_{\rm 1.4 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 1.0 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.8 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.45 GHz}$ & $N_{\rm HI}$ & $\Delta z_{\rm total}$ \\ & (J2000) & (J2000) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & \\ \hline 0642+449 & 06 46 32.02 & +44 51 16.6 & 3.3960(10) & 0 & ... & ... & ... 
& & 286 & 4.7 & 0.52 & $<$1.5 & & 298 & 6.7 & 0.51 & $<$1.6 & & 693 & 12 & 0.475 & $<$4.9 & 0.33 \\ 4C+68.08 & 07 13 14.0 & +68 52 09 & 1.141(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 4.7 & 2.788(9) & $<$0.27 & & 0 & ... & ... & ... & 0.4038 \\ PKS 0727$-$11 & 07 30 19.11 & $-$11 41 12.6 & 1.591(3) & 299 & 6.1 & 2.765 & $<$0.29 & & 299 & 6.2 & 2.457 & $<$0.33 & & 365 & 6.4 & 2.359 & $<$0.36 & & 0 & ... & ... & ... & 0.49 \\ 0735+178 & 07 38 07.39 & +17 42 19.0 & 0.424 & 362 & 4.4 & 2.143 & $<$0.31 & & 298 & 7.3 & 2.16 & $<$0.72 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.25 \\ 0736+017 & 07 39 18.03 & +01 37 04.6 & 0.189410(9) & 349 & 6.6 & 2.53 & $<$0.54 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.17 \\ HB89 0738+313 & 07 41 10.70 & +31 12 00.2 & 0.6320(4) & 0 & ... & ... & ... & & 289 & 5.4 & 1.861 & $<$0.37 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.18 \\ 3C186 & 07 44 17.4 & +37 53 17 & 1.0670(19) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1477 & 5.9 & 2.698(2) & $<$0.35 & & 0 & ... & ... & ... & 0.3938 \\ 0743$-$006 & 07 45 54.08 & $-$00 44 17.5 & 0.994 & 398 & 5.6 & 0.688 & $<$1.7 & & 597 & 4 & 0.643 & $<$1.2 & & 278 & 5.3 & 0.608 & $<$2.8 & & 0 & ... & ... & ... & 0.74 \\ PKS 0745+241 & 07 48 36.11 & +24 00 24.1 & 0.4099(4) & 0 & ... & ... & ... & & 299 & 4.9 & 1.039 & $<$1.4 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.055 \\ 0746+483 & 07 50 20.43 & +48 14 53.6 & 1.9562(6) & 0 & ... & ... & ... & & 298 & 4.8 & 0.699 & $<$1.1 & & 298 & 6.5 & 0.686 & $<$1.5 & & 0 & ... & ... & ... & 0.40 \\ 0748+126 & 07 50 52.04 & +12 31 04.8 & 0.889 & 398 & 4.8 & 1.495 & $<$0.38 & & 283 & 4.3 & 1.49 & $<$0.41 & & 283 & 8.7 & 1.48 & $<$0.75 & & 0 & ... & ... & ... & 0.64 \\ 0754+100 & 07 57 06.64 & +09 56 34.9 & 0.2660(10) & 398 & 4.9 & 1.121 & $<$0.91 & & 299 & 6.7 & 1.03 & $<$0.76 & & 298 & 7.5 & 0.998 & $<$1.2 & & 0 & ... & ... & ... 
& 0.51 \\ FBQS J0804+3012 & 08 04 42.1 & +30 12 38 & 1.4513(16) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1487 & 5.8 & 2.09 & $<$0.44 & & 0 & ... & ... & ... & 0.4174 \\ 3C191 & 08 04 47.9 & +10 15 23 & 1.956 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1482 & 6.3 & 3.459 & $<$0.29 & & 0 & ... & ... & ... & 0.3977 \\ 0805$-$077 & 08 08 15.53 & $-$07 51 09.9 & 1.837 & 385 & 6.8 & 1.523 & $<$0.48 & & 281 & 5.5 & 1.726 & $<$0.48 & & 398 & 9.7 & 1.905 & $<$0.83 & & 0 & ... & ... & ... & 0.75 \\ SBS 0804+499$^a$ & 08 08 39.66 & +49 50 36.5 & 1.4352(5) & 0 & ... & ... & ... & & 266 & 6 & 0.952 & $<$0.74 & & 277 & 7.4 & 0.847 & $<$1.2 & & 0 & ... & ... & ... & 0.48 \\ 0808+019 & 08 11 26.71 & +01 46 52.2 & 1.148 & 398 & 4.7 & 0.612 & $<$1.0 & & 299 & 5.7 & 0.589 & $<$1.3 & & 298 & 6.3 & 0.566 & $<$4.2 & & 0 & ... & ... & ... & 0.57 \\ FBQS J082455.4+391641 & 08 24 55.5 & +39 16 42 & 1.2157(11) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1475 & 5.9 & 1.951 & $<$0.48 & & 0 & ... & ... & ... & 0.4211 \\ PKS 0823+033 & 08 25 50.34 & +03 09 24.5 & 0.506 & 298 & 5.6 & 1.368 & $<$0.68 & & 262 & 6.2 & 1.293 & $<$0.75 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.39 \\ PKS 0823$-$223 & 08 26 01.57 & $-$22 30 27.2 & 0.91 & 994 & 5 & 0.519 & $<$1.2 & & 277 & 6.3 & 0.573 & $<$2.2 & & 298 & 8.5 & 0.63 & $<$2.1 & & 0 & ... & ... & ... & 0.50 \\ 0828+493 & 08 32 23.22 & +49 13 21.0 & 0.548 & 361 & 5.1 & 0.798 & $<$1.1 & & 256 & 7.7 & 0.939 & $<$1.1 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ 0834$-$201 & 08 36 39.21 & $-$20 16 59.5 & 2.752 & 298 & 7.5 & 2.052 & $<$0.57 & & 269 & 7.9 & 2.473 & $<$0.92 & & 298 & 6.4 & 2.672 & $<$0.96 & & 252 & 9.9 & 3.457 & $<$0.46 & 0.50 \\ SBS 0833+585 & 08 37 22.41 & +58 25 01.8 & 2.101 & 0 & ... & ... & ... & & 298 & 5.4 & 0.723 & $<$1.3 & & 299 & 7.6 & 0.743 & $<$1.8 & & 716 & RFI & ... & ... & 0.36 \\ 3C205 & 08 39 06.4 & +57 54 17 & 1.534 & 0 & ... & ... & ... & & 0 & ... & ... & ... 
& & 1498 & 6.4 & 3.95 & $<$0.26 & & 0 & ... & ... & ... & 0.3929 \\ 0836+710 & 08 41 24.36 & +70 53 42.2 & 2.1720(10) & 0 & ... & ... & ... & & 298 & 6.9 & 4.248 & $<$0.21 & & 256 & 6.5 & 4.41 & $<$0.22 & & 520 & 11 & 4.794 & $<$0.39 & 0.40 \\ OJ+287 & 08 54 48.87 & +20 06 30.6 & 0.30560(10) & 0 & ... & ... & ... & & 270 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & RFI \\ 3C210 & 08 58 09.9 & +27 50 52 & 1.169 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1493 & 7.9 & 3.2 & $<$0.39 & & 0 & ... & ... & ... & 0.3649 \\ 0859$-$140 & 09 02 16.83 & $-$14 15 30.9 & 1.3324(2) & 398 & 5.3 & 3.343 & $<$0.29 & & 228 & 6.5 & 3.574 & $<$0.26 & & 398 & 9.8 & 3.743 & $<$0.35 & & 0 & ... & ... & ... & 0.39 \\ 0859+470$^b$ & 09 03 03.99 & +46 51 04.1 & 1.4650(5) & 0 & ... & ... & ... & & 299 & 5.1 & 2.34 & $<$0.39 & & 1491 & 4.8 & 2.2 & $<$0.34 & & 0 & ... & ... & ... & 0.6165 \\ 0906+015 & 09 09 10.09 & +01 21 35.6 & 1.0249(4) & 0 & ... & ... & ... & & 259 & 5.5 & 1.36 & $<$0.64 & & 298 & 7.1 & 1.32 & $<$0.89 & & 0 & ... & ... & ... & 0.34 \\ 3C216 & 09 09 33.49 & +42 53 46.1 & 0.6699(4) & 259 & 4.8 & 4.05 & $<$0.14 & & 248 & 6.6 & 5.4 & $<$0.20 & & 271 & 8.1 & 6.4 & $<$0.18 & & 0 & ... & ... & ... & 0.32 \\ 0917+449 & 09 20 58.46 & +44 41 54.0 & 2.1864(7) & 0 & ... & ... & ... & & 289 & 6 & 1.02 & $<$0.96 & & 398 & 8.3 & 0.99 & $<$2.6 & & 577 & RFI & ... & ... & 0.32 \\ 0919$-$260 & 09 21 29.35 & $-$26 18 43.4 & 2.3 & 299 & 4.3 & 1.296 & $<$0.42 & & 299 & RFI & ... & ... & & 597 & 8.8 & 1.06 & $<$1.3 & & 597 & 4.2 & 0.864 & $<$0.91 & 0.47 \\ TXS 0917+624 & 09 21 36.23 & +62 15 52.2 & 1.446(3) & 0 & ... & ... & ... & & 299 & 5.4 & 1.1 & $<$0.77 & & 298 & 7 & 1.2 & $<$1.0 & & 0 & ... & ... & ... & 0.41 \\ 0923+392 & 09 27 03.01 & +39 02 20.9 & 0.6953(3) & 893 & 4.8 & 2.728 & $<$0.15 & & 299 & 7.1 & 3.013 & $<$0.39 & & 298 & 9.4 & 3.131 & $<$0.39 & & 0 & ... & ... & ... 
& 0.37 \\ 0925$-$203 & 09 27 51.82 & $-$20 34 51.2 & 0.34741(15) & 526 & 5.1 & 0.778 & $<$1.1 & & 298 & 7.8 & 0.887 & $<$1.3 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.19 \\ 3C220.2 & 09 30 33.5 & +36 01 24 & 1.1577(14) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1496 & 5 & 2.363 & $<$0.33 & & 0 & ... & ... & ... & 0.3452 \\ FBQS J094855.3+403944 & 09 48 55.3 & +40 39 45 & 1.2491(13) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1493 & 1.9 & 2.07 & $<$0.14 & & 0 & ... & ... & ... & 0.39 \\ 3C230 & 09 51 58.8 & +00 01 27 & 1.487 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 2.6 & 6.23 & $<$0.066 & & 0 & ... & ... & ... & 0.3951 \\ 0954+556 & 09 57 38.18 & +55 22 57.8 & 0.8996(4) & 0 & ... & ... & ... & & 291 & 5.9 & 4.102 & $<$0.24 & & 268 & 8.6 & 4.354 & $<$0.32 & & 0 & ... & ... & ... & 0.20 \\ 0954+658 & 09 58 47.24 & +65 33 54.8 & 0.368 & 0 & ... & ... & ... & & 292 & 6.8 & 0.764 & $<$1.4 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.12 \\ 1004+141 & 10 07 41.50 & +13 56 29.6 & 2.707(5) & 297 & RFI & ... & ... & & 282 & 8.2 & 1.027 & $<$0.99 & & 228 & 8.4 & 1.087 & $<$1.2 & & 519 & 9.4 & 1.305 & $<$1.5 & 0.30 \\ 3C239 & 10 11 45.4 & +46 28 20 & 1.781 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1690 & 5.1 & 3.049 & $<$0.26 & & 0 & ... & ... & ... & 0.3706 \\ PKS 1012+232 & 10 14 47.06 & +23 01 16.6 & 0.5664(4) & 258 & 3.4 & 1.208 & $<$0.48 & & 256 & 6 & 1.366 & $<$0.63 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.22 \\ 3C241 & 10 21 54.5 & +21 59 30 & 1.617 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1497 & 7.9 & 3.125 & $<$0.40 & & 0 & ... & ... & ... & 0.4205 \\ 1034$-$293 & 10 37 16.08 & $-$29 34 02.8 & 0.312 & 596 & 6.8 & 1.108 & $<$1.1 & & 298 & 4.4 & 1.06 & $<$0.62 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.15 \\ 1039+811 & 10 44 23.06 & +80 54 39.4 & 1.26 & 263 & 4.5 & 0.829 & $<$0.91 & & 285 & 5.2 & 0.78 & $<$1.1 & & 299 & 6.8 & 0.744 & $<$1.5 & & 0 & ... & ... & ... 
& 0.52 \\ 1045$-$188 & 10 48 06.62 & $-$19 09 35.7 & 0.595 & 596 & 4.7 & 1.154 & $<$0.59 & & 299 & 7 & 1.394 & $<$0.76 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.35 \\ WMAP J1047+7143 & 10 48 27.62 & +71 43 35.9 & 1.15 & 298 & 6 & 0.743 & $<$1.4 & & 259 & 6.7 & 0.756 & $<$1.3 & & 298 & 5.8 & 0.761 & $<$1.4 & & 0 & ... & ... & ... & 0.36 \\ 1049+215 & 10 51 48.79 & +21 19 52.3 & 1.300(5) & 297 & 4.4 & 1.237 & $<$0.61 & & 248 & 5.9 & 1.359 & $<$0.69 & & 299 & 6.3 & 1.466 & $<$0.59 & & 0 & ... & ... & ... & 0.47 \\ 1055+018 & 10 58 29.60 & +01 33 58.8 & 0.890(5) & 546 & 5.9 & 3.315 & $<$0.20 & & 597 & 5.9 & 3.544 & $<$0.26 & & 243 & 13 & 3.759 & $<$0.54 & & 0 & ... & ... & ... & 0.33 \\ 3C252 & 11 11 33.1 & +35 40 42 & 1.1 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 4.9 & 2.67 & $<$0.29 & & 0 & ... & ... & ... & 0.3992 \\ HB89 1109+437 & 11 12 39.4 & +43 25 47 & 1.6819(19) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1492 & 2.3 & 2.737 & $<$0.13 & & 0 & ... & ... & ... & 0.3863 \\ 1116+128$^b$ & 11 18 57.30 & +12 34 41.7 & 2.1257(4) & 595 & 5 & 2.3 & $<$0.40 & & 291 & 8.3 & 2.7 & $<$0.57 & & 1386 & 2.9 & 3 & $<$0.13 & & 567 & RFI & ... & ... & 0.8691 \\ 3C255 & 11 19 25.2 & $-$03 02 52 & 1.355 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1489 & 7.3 & 3.813 & $<$0.30 & & 0 & ... & ... & ... & 0.4185 \\ 3C256 & 11 20 43.0 & +23 27 55 & 1.819 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 5.7 & 2.7 & $<$0.33 & & 0 & ... & ... & ... & 0.3945 \\ 3C257 & 11 23 09.2 & +05 30 19 & 2.474 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1486 & 6.9 & 2.997 & $<$0.37 & & 0 & ... & ... & ... & 0.3919 \\ 4C+33.26 & 11 26 23.7 & +33 45 27 & 1.23 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1482 & 2 & 2.1 & $<$0.15 & & 0 & ... & ... & ... & 0.3819 \\ PKS 1124$-$186 & 11 27 04.39 & $-$18 57 17.4 & 1.05 & 526 & 5 & 0.538 & $<$1.6 & & 272 & 11 & 0.5 & $<$3.0 & & 249 & 8.4 & 0.475 & $<$2.6 & & 0 & ... & ... & ... 
& 0.31 \\ 1127$-$145 & 11 30 07.05 & $-$14 49 27.4 & 1.184 & 596 & 6.3 & 5.59 & $<$0.17 & & 245 & 9.9 & 5.538 & $<$0.27 & & 262 & 7.5 & 5.496 & $<$0.20 & & 0 & ... & ... & ... & 0.49 \\ B2 1128+38 & 11 30 53.28 & +38 15 18.5 & 1.7405(5) & 556 & 5.2 & 0.703 & $<$1.2 & & 299 & 8 & 0.692 & $<$1.2 & & 299 & 8.3 & 0.679 & $<$1.6 & & 0 & ... & ... & ... & 0.48 \\ 3C266 & 11 45 43.4 & +49 46 08 & 1.275 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1488 & 5.1 & 2.465 & $<$0.29 & & 0 & ... & ... & ... & 0.0589 \\ B3 1144+402 & 11 46 58.30 & +39 58 34.3 & 1.0901(5) & 529 & 4.5 & 0.325 & $<$2.3 & & 299 & 6.9 & 0.431 & $<$3.3 & & 262 & 7.5 & 0.539 & $<$1.9 & & 0 & ... & ... & ... & 0.46 \\ 3C267 & 11 49 56.5 & +12 47 19 & 1.14 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 7.1 & 4.137 & $<$0.27 & & 0 & ... & ... & ... & 0.3985 \\ LBQS 1148$-$0007$^b$ & 11 50 43.87 & $-$00 23 54.2 & 1.9801(5) & 596 & 7.2 & 3.28 & $<$0.37 & & 289 & 5.5 & 3.32 & $<$0.25 & & 1481 & 2 & 3.36 & $<$0.093 & & 0 & ... & ... & ... & 0.6398 \\ 1150+812 & 11 53 12.50 & +80 58 29.2 & 1.25 & 298 & 6.1 & 1.34 & $<$0.81 & & 299 & 9.5 & 1.35 & $<$1.2 & & 268 & 7.9 & 1.36 & $<$1.1 & & 0 & ... & ... & ... & 0.48 \\ FBQS J1159+2914 & 11 59 31.83 & +29 14 43.8 & 0.7247(4) & 596 & 4.7 & 1.85 & $<$0.39 & & 263 & 7.5 & 2.086 & $<$0.57 & & 298 & 9.1 & 2.25 & $<$0.59 & & 0 & ... & ... & ... & 0.38 \\ 1202$-$262 & 12 05 33.21 & $-$26 34 04.5 & 0.789 & 556 & 10 & 1.455 & $<$1.5 & & 261 & 8.4 & 1.856 & $<$0.79 & & 298 & 8.4 & 2.209 & $<$0.36 & & 0 & ... & ... & ... & 0.26 \\ PKS B1206$-$238 & 12 09 02.44 & $-$24 06 20.8 & 1.299 & 544 & 6.3 & 0.423 & $<$1.7 & & 253 & 9.1 & 0.361 & $<$5.3 & & 299 & 9 & 0.328 & $<$6.2 & & 0 & ... & ... & ... & 0.42 \\ 3C268.4 & 12 09 13.5 & +43 39 18 & 1.3978(14) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 2.3 & 3.52 & $<$0.10 & & 0 & ... & ... & ... & 0.3795 \\ B2 1219+28 & 12 21 31.69 & +28 13 58.5 & 0.102 & 596 & 7.7 & 0.875 & $<$1.7 & & 0 & ... & ... & ... 
& & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.11 \\ 1222+037 & 12 24 52.42 & +03 30 50.3 & 0.9550(4) & 596 & 6.6 & 1.292 & $<$0.53 & & 299 & 7.8 & 1.046 & $<$0.80 & & 261 & 6.7 & 0.882 & $<$1.2 & & 0 & ... & ... & ... & 0.54 \\ PG 1222+216 & 12 24 54.46 & +21 22 46.4 & 0.4320(10) & 546 & 5.9 & 1.715 & $<$0.49 & & 299 & 7.7 & 2.048 & $<$0.59 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.30 \\ 3C273 & 12 29 06.70 & +02 03 08.6 & 0.15834(7) & 595 & 7.7 & 45.1 & $<$0.028 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.18 \\ 1243$-$072 & 12 46 04.23 & $-$07 30 46.6 & 1.286 & 298 & 7.3 & 0.55 & $<$1.3 & & 266 & 7.5 & 0.637 & $<$1.8 & & 299 & 9.3 & 0.724 & $<$2.1 & & 0 & ... & ... & ... & 0.57 \\ 1244$-$255 & 12 46 46.80 & $-$25 47 49.3 & 0.633 & 447 & 6.8 & 1.17 & $<$0.89 & & 261 & 7.1 & 1.185 & $<$0.92 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ PKS 1245$-$19 & 12 48 23.90 & $-$19 59 18.6 & 1.275 & 893 & 6.7 & 5.324 & $<$0.15 & & 299 & 7.5 & 6.052 & $<$0.20 & & 277 & 9.3 & 6.793 & $<$0.22 & & 0 & ... & ... & ... & 0.52 \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table} \tiny \contcaption{Journal of Observations. 
} \hskip-1.5cm\begin{tabular}{lllccccccccccccccccccccc} \hline & & & & \multicolumn{4}{c}{1.4 GHz} & & \multicolumn{4}{c}{1.0 GHz} & & \multicolumn{4}{c}{800 MHz} & & \multicolumn{4}{c}{450 MHz} & \\ \cline{5-8} \cline{10-13} \cline{15-18} \cline{20-23} \noalign{\smallskip} & & & & \multicolumn{4}{c}{$z=0-0.235$} & & \multicolumn{4}{c}{$0.150-0.570$} & & \multicolumn{4}{c}{$0.536-1.10$} & & \multicolumn{4}{c}{$1.58-2.74$} & \vspace{8pt}\\ Source & $\alpha$ & $\delta$ & $z$ & Time & $\overline{rms}$ & S$_{\rm 1.4 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 1.0 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.8 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.45 GHz}$ & $N_{\rm HI}$ & $\Delta z_{\rm total}$ \\ & (J2000) & (J2000) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & \\ \hline 3C279 & 12 56 11.16 & $-$05 47 21.5 & 0.5362(14) & 298 & 13 & 9.71 & $<$0.23 & & 299 & 5.2 & 10.7 & $<$0.097 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.27 \\ PG 1302$-$102 & 13 05 33.01 & $-$10 33 19.4 & 0.2784(4) & 298 & 7.4 & 0.713 & $<$1.3 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.25 \\ 4C+09.45 & 13 05 36.0 & +08 55 14 & 1.409(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1491 & 7.2 & 2.804 & $<$0.40 & & 0 & ... & ... & ... & 0.3416 \\ 1308+326 & 13 10 28.66 & +32 20 43.8 & 0.9980(5) & 565 & 4.7 & 1.158 & $<$0.84 & & 0 & ... & ... & ... & & 228 & 12 & 1.168 & $<$1.3 & & 0 & ... & ... & ... & 0.31 \\ MG J132701+2210 & 13 27 00.86 & +22 10 50.2 & 1.4 & 596 & 4.4 & 0.852 & $<$1.0 & & 0 & ... & ... & ... & & 266 & 10 & 0.78 & $<$1.7 & & 0 & ... & ... & ... & 0.32 \\ 3C286 & 13 31 08.29 & +30 30 33.0 & 0.8499(4) & 557 & 8.8 & 14.7 & $<$0.10 & & 0 & ... & ... & ... & & 788 & 9.2 & 19.3 & $<$0.068 & & 0 & ... & ... 
& ... & 0.21 \\ 1334$-$127 & 13 37 39.78 & $-$12 57 24.7 & 0.539 & 247 & 8.7 & 2.676 & $<$0.44 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.19 \\ 4C+24.28 & 13 48 14.8 & +24 15 52 & 2.879(6) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1496 & 5.8 & 2.21 & $<$0.42 & & 0 & ... & ... & ... & 0.3994 \\ 1354+195 & 13 57 04.43 & +19 19 07.4 & 0.72 & 892 & 4.3 & 2.226 & $<$0.36 & & 0 & ... & ... & ... & & 298 & 7.1 & 3.133 & $<$0.38 & & 0 & ... & ... & ... & 0.23 \\ 1354$-$152 & 13 57 11.24 & $-$15 27 28.8 & 1.89 & 287 & 5.6 & 0.689 & $<$1.1 & & 272 & RFI & ... & ... & & 275 & 5.6 & 0.635 & $<$1.7 & & 0 & ... & ... & ... & 0.31 \\ 3C294 & 14 06 44.0 & +34 11 25 & 1.779 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1462 & 6.3 & 2.441 & $<$0.41 & & 0 & ... & ... & ... & 0.4197 \\ 4C$-$05.60 & 14 13 48.5 & $-$05 59 56 & 1.094(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1478 & 2.3 & 2.55 & $<$0.14 & & 0 & ... & ... & ... & 0.3969 \\ 1413+135 & 14 15 58.82 & +13 20 23.7 & 0.24671(10) & 596 & 4.2 & 1.085 & $<$0.58 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.25 \\ 3C297 & 14 17 24.0 & $-$04 00 48 & 1.406 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1488 & 5.5 & 2.575 & $<$0.34 & & 0 & ... & ... & ... & 0.3903 \\ 3C298 & 14 19 08.2 & +06 28 35 & 1.4373(16) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 2.2 & 12.45 & $<$0.028 & & 0 & ... & ... & ... & 0.3951 \\ 1418+546 & 14 19 46.60 & +54 23 14.8 & 0.1526(3) & 298 & 5.1 & 0.919 & $<$0.82 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.17 \\ 3C300.1 & 14 28 31.3 & $-$01 24 08 & 1.159 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 2 & 5.32 & $<$0.059 & & 0 & ... & ... & ... & 0.3407 \\ HB89 1437+624 & 14 38 44.9 & +62 11 55 & 1.0935(17) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 8.9 & 2.61 & $<$0.54 & & 0 & ... & ... & ... & 0.2824 \\ 3C305.1 & 14 47 09.5 & +76 56 22 & 1.132 & 0 & ... 
& ... & ... & & 0 & ... & ... & ... & & 1489 & 2.7 & 2.51 & $<$0.17 & & 0 & ... & ... & ... & 0.3849 \\ 3C308 & 14 54 08.3 & +50 03 31 & 2.8490(10) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1497 & RFI & ... & ... & & 0 & ... & ... & ... & RFI \\ 4C+04.49 & 14 58 59.35 & +04 16 13.8 & 0.3915(4) & 297 & 6.3 & 1.064 & $<$1.3 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.17 \\ 3C309.1 & 14 59 07.58 & +71 40 19.9 & 0.905 & 298 & 6.5 & 8.72 & $<$0.11 & & 0 & ... & ... & ... & & 285 & 13 & 12.2 & $<$0.16 & & 0 & ... & ... & ... & 0.31 \\ 1502+106 & 15 04 24.98 & +10 29 39.2 & 1.8383(15) & 272 & 5.3 & 1.484 & $<$0.63 & & 0 & ... & ... & ... & & 287 & 17 & 1.448 & $<$1.9 & & 0 & ... & ... & ... & 0.37 \\ 1504$-$166 & 15 07 04.79 & $-$16 52 30.3 & 0.876(4) & 227 & 4.7 & 3.45 & $<$0.17 & & 0 & ... & ... & ... & & 298 & 9.7 & 2.82 & $<$0.55 & & 0 & ... & ... & ... & 0.26 \\ HB89 1508$-$055 & 15 10 53.6 & $-$05 43 08 & 1.185 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1491 & 2 & 5.53 & $<$0.057 & & 0 & ... & ... & ... & 0.3909 \\ 1510$-$089 & 15 12 50.53 & $-$09 05 59.8 & 0.36 & 297 & 5.6 & 3.56 & $<$0.22 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.18 \\ 1511$-$100 & 15 13 44.89 & $-$10 12 00.3 & 1.513 & 282 & 5.8 & 0.624 & $<$1.4 & & 0 & ... & ... & ... & & 277 & 9.9 & 0.761 & $<$3.4 & & 0 & ... & ... & ... & 0.29 \\ 3C322 & 15 35 01.2 & +55 36 53 & 1.681 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 2.7 & 3.418 & $<$0.12 & & 0 & ... & ... & ... & 0.3574 \\ 1538+149 & 15 40 49.49 & +14 47 45.9 & 0.605 & 297 & 6.7 & 1.43 & $<$0.77 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.22 \\ 1546+027 & 15 49 29.44 & +02 37 01.2 & 0.4144(4) & 276 & 5.1 & 0.853 & $<$0.84 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.18 \\ 1548+056 & 15 50 35.27 & +05 27 10.5 & 1.422 & 285 & 7.8 & 2.85 & $<$0.56 & & 0 & ... & ... & ...
& & 290 & 7.5 & 2.75 & $<$0.44 & & 0 & ... & ... & ... & 0.39 \\ 1555+001 & 15 57 51.43 & $-$00 01 50.4 & 1.77 & 297 & 6.3 & 2.12 & $<$0.49 & & 0 & ... & ... & ... & & 272 & 10 & 1.49 & $<$1.3 & & 0 & ... & ... & ... & 0.29 \\ TXS 1600+335 & 16 02 07.2 & +33 26 53 & 1.1 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1562 & 3 & 2.941 & $<$0.16 & & 0 & ... & ... & ... & 0.3939 \\ HB89 1602$-$001 & 16 04 56.1 & +00 19 07 & 1.629(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1489 & RFI & ... & ... & & 0 & ... & ... & ... & RFI \\ 4C+10.45 & 16 08 46.20 & +10 29 07.8 & 1.2260(10) & 285 & 5.9 & 1.484 & $<$0.44 & & 0 & ... & ... & ... & & 271 & 7.5 & 2.058 & $<$0.65 & & 0 & ... & ... & ... & 0.29 \\ 1611+343 & 16 13 41.06 & +34 12 47.9 & 1.3991(5) & 297 & 5.2 & 4.05 & $<$0.16 & & 0 & ... & ... & ... & & 266 & 8.6 & 3.53 & $<$0.35 & & 0 & ... & ... & ... & 0.23 \\ HB89 1622+158 & 16 25 14.4 & +15 45 22 & 1.406 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1485 & 2.6 & 1.783 & $<$0.23 & & 0 & ... & ... & ... & 0.3465 \\ PKS 1622$-$253 & 16 25 46.89 & $-$25 27 38.3 & 0.786 & 297 & 5.7 & 2.521 & $<$0.30 & & 0 & ... & ... & ... & & 277 & 9 & 2.447 & $<$0.55 & & 0 & ... & ... & ... & 0.26 \\ HB89 1624+416 & 16 25 57.7 & +41 34 41 & 2.55 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1491 & 2.7 & 1.84 & $<$0.23 & & 0 & ... & ... & ... & 0.3918 \\ PKS 1622$-$29 & 16 26 06.02 & $-$29 51 27.0 & 0.815 & 288 & 8.2 & 2.278 & $<$0.67 & & 0 & ... & ... & ... & & 266 & 9.3 & 2.83 & $<$0.50 & & 0 & ... & ... & ... & 0.18 \\ HB89 1629+120 & 16 31 45.2 & +11 56 03 & 1.795 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1390 & 4 & 2.53 & $<$0.25 & & 0 & ... & ... & ... & 0.1978 \\ 1633+382 & 16 35 15.49 & +38 08 04.5 & 1.8131(5) & 287 & 4.5 & 2.332 & $<$0.24 & & 0 & ... & ... & ... & & 286 & 9.9 & 2.349 & $<$0.64 & & 0 & ... & ... & ... & 0.14 \\ 1637+574 & 16 38 13.45 & +57 20 24.0 & 0.7506(10) & 296 & 4.7 & 0.792 & $<$0.96 & & 0 & ... & ... & ... 
& & 298 & 7.3 & 0.925 & $<$0.99 & & 0 & ... & ... & ... & 0.34 \\ 1638+398 & 16 40 29.63 & +39 46 46.0 & 1.66 & 292 & 5 & 1.111 & $<$0.82 & & 0 & ... & ... & ... & & 298 & 7 & 0.954 & $<$1.1 & & 0 & ... & ... & ... & 0.36 \\ 1642+690 & 16 44 07.85 & +68 56 39.8 & 0.751 & 295 & 6.3 & 1.433 & $<$0.60 & & 0 & ... & ... & ... & & 297 & 9.6 & 1.675 & $<$0.72 & & 0 & ... & ... & ... & 0.29 \\ 3C345 & 16 42 58.81 & +39 48 37.0 & 0.5928(4) & 297 & 5 & 7.3 & $<$0.11 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.11 \\ 1656+477 & 16 58 02.78 & +47 37 49.2 & 1.622 & 297 & 4.7 & 0.858 & $<$0.82 & & 0 & ... & ... & ... & & 299 & 6.5 & 0.884 & $<$1.2 & & 0 & ... & ... & ... & 0.48 \\ 1655+077 & 16 58 09.01 & +07 41 27.5 & 0.621 & 284 & 6.1 & 1.688 & $<$0.42 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ 1656+053 & 16 58 33.45 & +05 15 16.4 & 0.879 & 255 & 5.4 & 1.525 & $<$0.45 & & 0 & ... & ... & ... & & 282 & 9.4 & 1.926 & $<$0.77 & & 0 & ... & ... & ... & 0.26 \\ HB89 1702+298 & 17 04 07.2 & +29 46 59 & 1.927 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 3.2 & 2.17 & $<$0.23 & & 0 & ... & ... & ... & 0.2828 \\ 3C356 & 17 24 19.0 & +50 57 40 & 1.079 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1587 & 7.9 & 2.842 & $<$0.44 & & 0 & ... & ... & ... & 0.2849 \\ HB89 1729+501 & 17 31 03.6 & +50 07 34 & 1.107 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1590 & 7.4 & 1.82 & $<$0.64 & & 0 & ... & ... & ... & 0.3721 \\ 1730$-$130 & 17 33 02.70 & $-$13 04 49.5 & 0.902 & 289 & 4.8 & 5.97 & $<$0.15 & & 0 & ... & ... & ... & & 284 & 9.8 & 6.66 & $<$0.33 & & 0 & ... & ... & ... & 0.33 \\ HB89 1732+160 & 17 34 42.6 & +16 00 31 & 1.296 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1599 & 4.7 & 2.63 & $<$0.27 & & 0 & ... & ... & ... & 0.2656 \\ 1739+522 & 17 40 36.98 & +52 11 43.4 & 1.375 & 596 & 7.1 & 0.836 & $<$1.5 & & 0 & ... & ... & ... & & 298 & 7.9 & 0.989 & $<$1.2 & & 0 & ... & ... & ... 
& 0.37 \\ 1741$-$038 & 17 43 58.85 & $-$03 50 04.6 & 1.054 & 297 & 5.2 & 1.41 & $<$0.47 & & 0 & ... & ... & ... & & 271 & 14 & 1.4 & $<$1.6 & & 0 & ... & ... & ... & 0.28 \\ 1743+173 & 17 45 35.21 & +17 20 01.4 & 1.702 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 263 & RFI & ... & ... & & 0 & ... & ... & ... & RFI \\ 3C362 & 17 47 07.0 & +18 21 10 & 2.281 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1590 & 7.2 & 1.804 & $<$0.63 & & 0 & ... & ... & ... & 0.369 \\ 1749+701 & 17 48 32.84 & +70 05 50.8 & 0.77 & 269 & 5 & 1.209 & $<$0.55 & & 0 & ... & ... & ... & & 278 & 7.8 & 1.307 & $<$0.78 & & 0 & ... & ... & ... & 0.36 \\ 1803+784 & 18 00 45.68 & +78 28 04.0 & 0.68 & 298 & 4.8 & 2.221 & $<$0.37 & & 0 & ... & ... & ... & & 298 & 6 & 2.105 & $<$0.45 & & 0 & ... & ... & ... & 0.26 \\ 1800+440 & 18 01 32.31 & +44 04 21.9 & 0.663 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 298 & RFI & ... & ... & & 0 & ... & ... & ... & RFI \\ 4C+13.66 & 18 01 38.9 & +13 51 24 & 1.450(5) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1510 & 9.2 & 3.05 & $<$0.47 & & 0 & ... & ... & ... & 0.2025 \\ 3C368 & 18 05 06.3 & +11 01 33 & 1.131(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 8.2 & 2.47 & $<$0.51 & & 0 & ... & ... & ... & 0.1187 \\ 1823+568 & 18 24 07.07 & +56 51 01.5 & 0.6640(10) & 271 & 6 & 1.457 & $<$0.80 & & 384 & 6.2 & 1.613 & $<$0.86 & & 288 & 5.7 & 1.668 & $<$0.15 & & 0 & ... & ... & ... & 0.36 \\ PKS 1830$-$211 & 18 33 39.89 & $-$21 03 39.8 & 2.507(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 458 & 13 & 9.9 & $<$0.24 & & 0 & ... & ... & ... & 0.16 \\ 3C380 & 18 29 31.80 & +48 44 46.6 & 0.692(2) & 298 & 7.3 & 13.8 & $<$0.089 & & 384 & 5.3 & 19.3 & $<$0.044 & & 597 & 15 & 21.7 & $<$0.098 & & 0 & ... & ... & ... & 0.39 \\ TXS 1849+670 & 18 49 16.07 & +67 05 41.7 & 0.6570(10) & 0 & ... & ... & ... & & 384 & RFI & ... & ... & & 283 & 6.2 & 0.624 & $<$1.6 & & 0 & ... & ... & ... 
& 0.15 \\ HB89 1857+566 & 18 58 26.7 & +56 45 57 & 1.595 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 9.9 & 1.82 & $<$0.86 & & 0 & ... & ... & ... & 0.2469 \\ PKS B1908$-$201 & 19 11 09.65 & $-$20 06 55.1 & 1.119 & 0 & ... & ... & ... & & 384 & 5.2 & 2.703 & $<$0.31 & & 298 & 8.5 & 2.701 & $<$0.59 & & 0 & ... & ... & ... & 0.32 \\ 1921$-$293 & 19 24 51.05 & $-$29 14 30.1 & 0.35263(18) & 0 & ... & ... & ... & & 398 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & RFI \\ 1928+738 & 19 27 48.49 & +73 58 01.6 & 0.3021(3) & 260 & 7.2 & 3.925 & $<$0.31 & & 398 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ 1936$-$155 & 19 39 26.66 & $-$15 25 34.1 & 1.657 & 0 & ... & ... & ... & & 384 & 5.7 & 0.624 & $<$1.4 & & 285 & 6.3 & 0.636 & $<$2.0 & & 0 & ... & ... & ... & 0.35 \\ 1954+513$^b$ & 19 55 42.74 & +51 31 48.5 & 1.22 & 298 & 7 & 1.79 & $<$0.62 & & 0 & ... & ... & ... & & 1490 & 9.5 & 3.24 & $<$0.46 & & 0 & ... & ... & ... & 0.6637 \\ 1958$-$179 & 20 00 57.09 & $-$17 48 57.7 & 0.65 & 0 & ... & ... & ... & & 384 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & RFI \\ 2007+777 & 20 05 30.93 & +77 52 43.1 & 0.342 & 0 & ... & ... & ... & & 394 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & RFI \\ HB89 2003$-$025 & 20 06 08.5 & $-$02 23 35 & 1.457 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1495 & 6.7 & 2.89 & $<$0.35 & & 0 & ... & ... & ... & 0.4081 \\ 2005+403 & 20 07 44.94 & +40 29 48.6 & 1.736 & 245 & 6.3 & 2.473 & $<$0.41 & & 377 & 5.9 & 2.26 & $<$0.42 & & 277 & 7.1 & 2.091 & $<$0.74 & & 0 & ... & ... & ... & 0.28 \\ \hline \end{tabular} \end{table} \end{landscape} \begin{landscape} \begin{table} \tiny \contcaption{Journal of Observations. 
} \hskip-1.5cm\begin{tabular}{lllccccccccccccccccccccc} \hline & & & & \multicolumn{4}{c}{1.4 GHz} & & \multicolumn{4}{c}{1.0 GHz} & & \multicolumn{4}{c}{800 MHz} & & \multicolumn{4}{c}{450 MHz} & \\ \cline{5-8} \cline{10-13} \cline{15-18} \cline{20-23} \noalign{\smallskip} & & & & \multicolumn{4}{c}{$z=0-0.235$} & & \multicolumn{4}{c}{$0.150-0.570$} & & \multicolumn{4}{c}{$0.536-1.10$} & & \multicolumn{4}{c}{$1.58-2.74$} & \vspace{8pt}\\ Source & $\alpha$ & $\delta$ & $z$ & Time & $\overline{rms}$ & S$_{\rm 1.4 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 1.0 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.8 GHz}$ & $N_{\rm HI}$ & & Time & $\overline{rms}$ & S$_{\rm 0.45 GHz}$ & $N_{\rm HI}$ & $\Delta z_{\rm total}$ \\ & (J2000) & (J2000) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & & (sec) & (mJy) & (Jy) & (10$^{18}$ ($Ts/f$) cm$^{-2}$) & \\ \hline 2008$-$159 & 20 11 15.71 & $-$15 46 40.3 & 1.18 & 266 & 6.2 & 0.568 & $<$1.4 & & 384 & 5.4 & 0.569 & $<$1.4 & & 280 & 7 & 0.569 & $<$2.4 & & 0 & ... & ... & ... & 0.39 \\ COINS J2022+6136 & 20 22 09.68 & +61 36 58.8 & 0.227 & 258 & 7 & 2.234 & $<$0.44 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ PKS 2022+031 & 20 25 09.63 & +03 16 44.5 & 2.21 & 0 & ... & ... & ... & & 384 & 5.7 & 0.454 & $<$2.1 & & 281 & 7.2 & 0.481 & $<$2.3 & & 0 & ... & ... & ... & 0.29 \\ 3C418 & 20 38 37.03 & +51 19 12.7 & 1.686 & 298 & 6.8 & 6.042 & $<$0.14 & & 396 & 7.2 & 6.836 & $<$0.21 & & 398 & 15 & 7.48 & $<$0.30 & & 0 & ... & ... & ... & 0.34 \\ PKS 2055+054 & 20 58 28.8 & +05 42 51 & 1.381 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1566 & 7.7 & 1.94 & $<$0.63 & & 0 & ... & ... & ... & 0.4114 \\ 2059+034 & 21 01 38.83 & +03 41 31.3 & 1.013 & 0 & ... & ... & ... & & 384 & 5.6 & 0.401 & $<$2.3 & & 242 & 7 & 0.326 & $<$3.4 & & 0 & ... & ... & ... 
& 0.26 \\ 3C432 & 21 22 46.2 & +17 04 38 & 1.7850(10) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1488 & 7.5 & 2.919 & $<$0.41 & & 0 & ... & ... & ... & 0.3232 \\ 2121+053 & 21 23 44.52 & +05 35 22.1 & 1.941 & 298 & 10 & 0.943 & $<$1.8 & & 365 & 5.5 & 0.851 & $<$0.87 & & 298 & 6.9 & 0.789 & $<$1.4 & & 0 & ... & ... & ... & 0.59 \\ 2128$-$123 & 21 31 35.26 & $-$12 07 04.8 & 0.501 & 297 & 5.2 & 1.98 & $<$0.46 & & 384 & 6.1 & 1.87 & $<$0.52 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.39 \\ 2131$-$021$^c$ & 21 34 10.31 & $-$01 53 17.2 & 1.285 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 252 & 5.6 & 1.94 & $<$0.54 & & 0 & ... & ... & ... & 0.25 \\ 2134+004 & 21 36 38.58 & +00 41 54.2 & 1.9446(4) & 298 & 11 & 3.293 & $<$0.57 & & 398 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.25 \\ 2145+067 & 21 48 05.46 & +06 57 38.6 & 0.99 & 298 & 8.2 & 3.145 & $<$0.45 & & 0 & ... & ... & ... & & 299 & 14 & 3.355 & $<$0.69 & & 0 & ... & ... & ... & 0.23 \\ HB89 2149+212 & 21 51 45.9 & +21 30 14 & 1.5385(8) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1499 & 5.6 & 1.78 & $<$0.50 & & 0 & ... & ... & ... & 0.313 \\ HB89 2150+053 & 21 53 24.7 & +05 36 19 & 1.9670(10) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1486 & 8.6 & 1.877 & $<$0.72 & & 0 & ... & ... & ... & 0.3258 \\ 2155$-$152 & 21 58 06.28 & $-$15 01 09.3 & 0.672 & 0 & ... & ... & ... & & 370 & 6.2 & 2.799 & $<$0.33 & & 298 & 5.9 & 2.681 & $<$0.35 & & 0 & ... & ... & ... & 0.22 \\ HB89 2156+297 & 21 58 41.9 & +29 59 08 & 1.753 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1495 & 8.8 & 2.123 & $<$0.66 & & 0 & ... & ... & ... & 0.4085 \\ 2201+315 & 22 03 14.97 & +31 45 38.3 & 0.29474(9) & 297 & 6.5 & 1.869 & $<$0.59 & & 398 & 9.5 & 1.927 & $<$0.72 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.22 \\ 2203$-$188 & 22 06 10.42 & $-$18 35 38.7 & 0.6185(14) & 594 & 8 & 6.818 & $<$0.22 & & 398 & 5.4 & 7.476 & $<$0.089 & & 0 & ... & ... & ... & & 0 & ... & ... & ... 
& 0.24 \\ HB89 2207+374 & 22 09 21.4 & +37 42 18 & 1.493(4) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1602 & 8.7 & 2.57 & $<$0.54 & & 0 & ... & ... & ... & 0.3936 \\ 2210$-$257 & 22 13 02.50 & $-$25 29 30.1 & 1.833 & 258 & 6.6 & 1.201 & $<$0.68 & & 343 & RFI & ... & ... & & 256 & RFI & ... & ... & & 0 & ... & ... & ... & 0.24 \\ 2216$-$038 & 22 18 52.04 & $-$03 35 36.9 & 0.901 & 277 & 7 & 1.065 & $<$1.1 & & 398 & 4.8 & 1.212 & $<$0.60 & & 281 & 6.9 & 1.348 & $<$0.79 & & 0 & ... & ... & ... & 0.63 \\ HB89 2223+210 & 22 25 38.0 & +21 18 06 & 1.959 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1499 & 6.5 & 2.7 & $<$0.40 & & 0 & ... & ... & ... & 0.3904 \\ 3C446$^b$ & 22 25 47.26 & $-$04 57 01.4 & 1.404 & 298 & 4.2 & 6.51 & $<$0.061 & & 0 & ... & ... & ... & & 1491 & 8.8 & 9.11 & $<$0.15 & & 0 & ... & ... & ... & 0.746 \\ 2227$-$088 & 22 29 40.08 & $-$08 32 54.4 & 1.5605(4) & 298 & 12 & 0.964 & $<$1.9 & & 373 & 8.3 & 1.026 & $<$1.3 & & 262 & 12 & 1.084 & $<$4.1 & & 0 & ... & ... & ... & 0.48 \\ 2230+114 & 22 32 36.41 & +11 43 50.9 & 1.037 & 298 & 8.6 & 7.379 & $<$0.23 & & 0 & ... & ... & ... & & 298 & 7.2 & 7.867 & $<$0.16 & & 0 & ... & ... & ... & 0.22 \\ 2234+282 & 22 36 22.47 & +28 28 57.4 & 0.795 & 257 & 9.1 & 0.709 & $<$2.4 & & 344 & RFI & ... & ... & & 298 & 8.2 & 0.597 & $<$2.1 & & 0 & ... & ... & ... & 0.30 \\ HB89 2243$-$032 & 22 46 11.3 & $-$03 00 38 & 1.347(5) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1472 & 5.8 & 1.998 & $<$0.46 & & 0 & ... & ... & ... & 0.4659 \\ 2243$-$123 & 22 46 18.23 & $-$12 06 51.3 & 0.632 & 298 & 5.7 & 1.884 & $<$0.51 & & 398 & RFI & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.22 \\ 3C454.1 & 22 50 32.9 & +71 29 19 & 1.841 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1488 & 3.7 & 3.13 & $<$0.19 & & 0 & ... & ... & ... & 0.3713 \\ 3C454.3 & 22 53 57.75 & +16 08 53.6 & 0.859 & 262 & 6.2 & 13.27 & $<$0.097 & & 0 & ... & ... & ... & & 252 & 9.2 & 13.87 & $<$1.1 & & 0 & ... & ... & ... 
& 0.26 \\ HB89 2251+244 & 22 54 09.3 & +24 45 24 & 2.328(5) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1493 & 5.9 & 2.385 & $<$0.39 & & 0 & ... & ... & ... & 0.2953 \\ 4C+41.45 & 22 57 22.1 & +41 54 17 & 2.150(10) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1482 & 8.6 & 2.85 & $<$0.848 & & 0 & ... & ... & ... & 0.391 \\ 2255$-$282 & 22 58 05.96 & $-$27 58 21.3 & 0.92584(15) & 258 & 5 & 1.251 & $<$0.72 & & 372 & 4.2 & 1.174 & $<$0.56 & & 298 & 6.1 & 1.114 & $<$0.8 & & 0 & ... & ... & ... & 0.34 \\ 3C459 & 23 16 35.23 & +04 05 18.1 & 0.22012(3) & 292 & 5.8 & 4.858 & $<$0.17 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.23 \\ 2318+049 & 23 20 44.85 & +05 13 50.0 & 0.622 & 245 & 5.1 & 0.543 & $<$1.3 & & 381 & 4.1 & 0.582 & $<$1.2 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.28 \\ 4C$-$05.96 & 23 25 19.6 & $-$04 57 37 & 1.188(2) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1426 & 4.6 & 2.86 & $<$0.25 & & 0 & ... & ... & ... & 0.3583 \\ HB89 2338+042 & 23 40 57.9 & +04 31 16 & 2.594 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1490 & 3.4 & 3.05 & $<$0.18 & & 0 & ... & ... & ... & 0.3721 \\ 2344+092 & 23 46 36.84 & +09 30 45.5 & 0.677 & 277 & 7.3 & 2.122 & $<$0.60 & & 391 & 6.8 & 2.186 & $<$0.57 & & 298 & RFI & ... & ... & & 0 & ... & ... & ... & 0.48 \\ 2345$-$167 & 23 48 02.61 & $-$16 31 12.0 & 0.576 & 260 & 5.1 & 1.912 & $<$0.36 & & 392 & 5.7 & 2.049 & $<$0.44 & & 0 & ... & ... & ... & & 0 & ... & ... & ... & 0.42 \\ 2351+456$^b$ & 23 54 21.68 & +45 53 04.2 & 1.992(3) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1726 & 5.2 & 2.016 & $<$0.40 & & 0 & ... & ... & ... & 0.3763 \\ 3C469.1 & 23 55 23.3 & +79 55 20 & 1.336 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1182 & 2.8 & 3.021 & $<$0.15 & & 0 & ... & ... & ... & 0.3957 \\ HB89 2354+144 & 23 57 18.6 & +14 46 07 & 1.8142(17) & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1498 & 4 & 1.8 & $<$0.35 & & 0 & ... & ... & ... 
& 0.3902 \\ 2355$-$106 & 23 58 10.88 & $-$10 20 08.6 & 1.6363(5) & 259 & 5.1 & 0.772 & $<$1.1 & & 351 & 5.6 & 0.596 & $<$1.5 & & 277 & 8.1 & 0.518 & $<$2.3 & & 0 & ... & ... & ... & 0.33 \\ 3C470 & 23 58 23.3 & +44 04 39 & 1.653 & 0 & ... & ... & ... & & 0 & ... & ... & ... & & 1372 & 3.8 & 4.14 & $<$0.15 & & 0 & ... & ... & ... & 0.4085 \\ \hline \end{tabular} \end{table} \end{landscape} \begin{table*} \scriptsize \caption{Sources with {H\,{\textsc i}}\ 21~cm Absorption. Sources in which {H\,{\textsc i}}\ absorption has been detected. Columns list the (1) source name, (2) numbered Gaussian component (from Figure \ref{fig:detects}), (3) central frequency of each component, (4) redshift of each absorption feature, (5) continuum flux density, (6) observed width of the line (kHz), (7) rest-frame line width ({\hbox{km s$^{-1}$}}), (8) depth of the absorption line, (9) peak optical depth, (10) derived HI column density $N_{\rm HI} / (T_s/f)$, (11) measured column density $N_{\rm HI} / (T_s/f)$ from previous 21~cm observations with respective references; and (12) measured $T_s/f$ values constrained from Ly$\alpha$ and VLA observations from \citet{kanekar14_mnras}. Numbers in parentheses show uncertainties in the final digit(s) of listed quantities (when available). References for original detection of the 21~cm absorber and measured column densities: 1 -- \citet{carilli93}; 2 -- \citet{briggs89}; 3 -- \citet{lane01}; 4 -- \citet{wolfe85}; 5 -- \citet{lane98}; 6 -- \citet{vermeulen03}; 7 -- \citet{brown73}; 8 -- \citet{chengalur99}; 9 -- \citet{darlingetal04}. \newline $a$ -- Intrinsic 21~cm absorption line. This system is not included in our cosmological measurements of the $f(N_{\rm HI},X)$ distribution or $\Omega_{\rm HI}$. } \label{tab:detections} \begin{tabular}{lccccccccccc} \hline Source & Comp. & $\nu$ & $z$ & Cntm. & FWHM & FWHM & Depth & $\tau$ & $N_{\rm {HI}}$ & Prev. 
$N_{\rm {HI}}$ & $T_s/f$ \\ & & (MHz) & & (Jy) & (kHz) & ({\hbox{km s$^{-1}$}}) & (mJy) & (10$^{-2}$) & \multicolumn{2}{c}{($10^{18} \ (T_s/f)$~cm$^{-2}$~K$^{-1}$)} & (K) \\ \hline B2 0218+357 & 1 & 843.134(7) & 0.684675(14)& 1.484(4) & 74(11) & 26(4) & 85(9) & 5.9(7) & 3.0(6)&... & ... \\ & 2 & 843.209(12) & 0.68452(2) & 1.484(4) & 76(18) & 27(6) & 51(9) & 3.5(6) & 1.8(5)&... & ... \\ & Tot & 843.16(8) & 0.68468(16) & 1.484(4) & 116(20) & 41(7) & 95(4) & 7.2(7) & 5.7(1.1) & 4.0$^1$ & ... \\ 0248+430 & 1 & 1018.8618(18)& 0.39411(3) & 0.795(3) & 71(6) & 21.0(1.8) & 97(6) & 13.0(3) & 5.2(6)&... & ... \\ & 2 & 1018.940(2) & 0.394003(4) & 0.795(3) & 29(4) & 8.5(1.4) & 10(4) & 10.5(5) & 1.7(4)&... & ... \\ & 3 & 1019.107(17) & 0.39378(4) & 0.795(3) & 80(5) & 23.5(1.2) & 79(9) & 13(5) & 0.6(2)&... & ... \\ & Tot & 1018.88(3) & 0.39409(2) & 0.795(3) & 150(10) & 44(3) & 109(6) & 7.9(5) & 6.7(6) & 5.4(6)$^3$ & ... \\ 0458$-$020 & 1 & 467.332(3) & 2.03939(2) & 2.45(5) & 34(8) & 22(5) & 218(46) & 9.0(2) & 3.8(1.3) & 10$^{2,4}$& 560 \\ HB89 0738+313& 1 & 1163.0764(13)& 0.22125(14) & 1.9453(10)& 35(2) & 9.1(7) & 76(5) & 4.0(2) & 0.70(7) & 0.64(14)$^5$ & 870 \\ 3C216$^a$ & 1 & 851.05(12) & 0.6690(3) & 6.39(4) & 811(90) & 285(32) & 18(2)& 0.29(3) & 1.6(2)&... & ... \\ & 2 & 850.30(6) & 0.67047(12) & 6.37(4) & 671(95) & 237(33) & 30(4) & 0.47(6)& 2.1(4)&... & ... \\ & Tot & 850.56(5) & 0.66996(14) & 6.39(4) & 682(98) & 240(35) & 27(5) & 0.40(6) & 1.8(4) & 1.2$^6$ & ... \\ 1127$-$145 & 1 & 1082.130(2) & 0.312602(2) & 5.548(3) & 20(2) & 5.5(6) & 115(40) & 2.1(7) & 0.22(8)&... & ... \\ & 2 & 1082.08(3) & 0.31266(4) & 5.548(3) & 41(3) & 11.4(8) & 512(42) & 9.7(8) & 2.1(2)&... & ... \\ & 3 & 1082.032(6) & 0.312721(7) & 5.548(3) & 24(4) & 6.71(1.1) & 472(63) & 8.90(12) & 1.1(2)&... & ... \\ & 4 & 1082.001(5) & 0.312758(6) & 5.548(3) & 9.3(1.1)& 2.6(3) & 136(22) & 2.5(4) & 0.12(2)&... & ...
\\ & 5 & 1081.96(5) & 0.31281(6) & 5.548(3) & 71(5) & 19.7(1.4) & 276(11) & 5.1(2) & 1.92(16)&... & ... \\ & 6 & 1081.89(7) & 0.31289(8) & 5.548(3) & 73(11) & 20(3) & 22(3) & 0.40(5) & 0.15(3)&... & ... \\ & Tot & 1082.04(5) & 0.31272(6) & 5.548(3) & 170(17) & 47(5) & 543(8) & 6.8(7) & 6.2(2) & 5.1(9)$^5$ & 820 \\ 1243$-$072 & 1 & 988.635(5) & 0.436734(7) & 0.64(2) & 14.4(1.0)& 4.4(3) & 179(10) & 33(2)& 2.7(3) & 1.7(6)$^3$ & ... \\ 3C286 & 1 & 839.4074(9) & 0.6921530(18)& 18.64(9)& 32.3(1.4)& 11.5(5) & 274(9) & 1.48(5) & 0.327(18)&... & ... \\ & 2 & 839.39(2) & 0.69219(5) & 18.64(9) & 348(64) & 124(23) & 23(4) & 0.12(2) & 0.29(7)&... & ... \\ & Tot & 839.41(3) & 0.69215(6) & 18.64(9) & 24(2) & 8.6(7) & 296(6) & 3.9(3) & 0.65(7) & 2.6$^7$ & 965 \\ PKS 1830$-$211 & 1 & 753.5(3) & 0.8851(8) & 9.95(4) & 450(40) & 179(16) & 1030(40) & 10.9(5) & 37(4) & 30$^8$ &... \\ 2351+456 & 1 & 798.334(9) & 0.77921(2) & 2.006(2) & 14.4(8) & 5.4(3) & 253(8) & 13.5(5) & 1.4(9)&... & ... \\ & 2 & 798.240(6) & 0.77942(13) & 2.006(2) & 123(8) & 46(3) & 545(34) & 32(2) & 28(3)&... & ... \\ & 3 & 798.23(3) & 0.77944(6) & 2.006(2) & 13.0(9) & 4.9(3)& 236(12)& 12.5(7) & 1.17(10)&... & ... \\ & 4 & 798.142(12) & 0.77964(3) & 2.006(2) & 115(11) & 43(4) & 242(22)& 12.9(1.2)&10.6(1.5)&... & ... \\ & Tot & 798.22(3) & 0.77948(7) & 2.006(2) & 115(11) & 43(4) & 638(8) & 38(4) & 32(3) & 24$^9$ & ... \\ \hline \end{tabular} \end{table*} \begin{landscape} \begin{table} \scriptsize \caption{Properties of the HI 21~cm and OH 18~cm Line Observations. Columns list the source name and redshift of known intervening 21~cm absorbers, or the redshift of the host galaxy in the case of intrinsic absorbers with a redshift lying in our spectral coverage, the frequency, range searched (in {\hbox{km s$^{-1}$}}), measured rms noise over the searched region, continuum flux density, and the derived $3\sigma$ upper limit to the {H\,{\textsc i}}\ column density for the 21~cm absorption search.
We list the same parameters for the OH 1612, 1667, and 1720~MHz search at known intervening and intrinsic absorbers. The OH continuum flux density and OH column density limit are computed for the 1667~MHz line unless stymied by RFI. Numbers in parentheses show uncertainties in the final digit(s). \newline $a$ -- Sources searched for possible intrinsic HI and OH absorption. \newline $b$ -- The corresponding 21~cm absorber at the same redshift remains undetected because the expected 21~cm line either lies outside the redshift coverage or is irrecoverable due to RFI. \newline $d$ -- \citet{chengalur99} measures an OH column density of $N_{\rm {OH}}/ T_x=40\times10^{13}$~cm$^{-2}$~K$^{-1}$. \newline $e$ -- We do not reach sufficient signal to noise to detect the known OH satellite lines at 1612 and 1720~MHz toward 1413+135 at $z_{abs}=0.24671$ \citep{darling04, kanekar04_phrvl}. We also do not detect the 1665 or 1667~MHz OH absorption at the same redshift toward this system \citep{kanekar02}, which has a reported column density of $N_{\rm {OH}}/ T_x = 5.1 \times10^{13}$~cm$^{-2}$~K$^{-1}$.} \label{tab:OH} \begin{tabular}{lcccccccccccccccccccc} \hline & & \multicolumn{5}{c}{HI 1420 MHz} & & \multicolumn{3}{c}{OH 1612 MHz} & & \multicolumn{3}{c}{OH 1667 MHz} & & \multicolumn{3}{c}{OH 1720 MHz} & & \\ \cline{3-7} \cline{9-11} \cline{13-15} \cline{17-19} \noalign{\smallskip} Source & $z_{\rm abs}$ & $\nu$ & $\Delta v$ & $rms$ & Cntm. & $N_{\rm {HI}}/(T_s/f)$ & & $\nu$ & $\Delta v$ & $rms$ & & $\nu$ & $\Delta v$ & $rms$ & & $\nu$ & $\Delta v$ & $rms$ & Cntm. & $N_{\rm {OH}}/ T_x$ \\ & & (MHz) & ({\hbox{km s$^{-1}$}}) & (mJy) & (Jy) & ($10^{18} \,(T_s/f)$~cm$^{-2}$~K$^{-1}$) & & (MHz) & ({\hbox{km s$^{-1}$}}) & (mJy) & & (MHz) & ({\hbox{km s$^{-1}$}}) & (mJy) & & (MHz) & ({\hbox{km s$^{-1}$}}) & (mJy) & (Jy) & ($10^{13}$~cm$^{-2}$~K$^{-1}$) \\ \hline 0235+164$^b$ & 0.524 & 932.02 & ... & ... & ... & ... && 1057.9 & $-$313, 735 & 5.4 & & 1094.1 & ... & RFI & & 1129.0 & ...
& RFI &1.79(17) & $<$55\\ 0248+430 & 0.39409 & 1018.88(3) & $-$3654, 2581 & 5.3 & 0.795(3) & 6.7(6) && 1156.5 & $-$5118, 1467 & 5.4 && 1196.0 &$-$4386, 2357 & 4.2 & & 1234.1 & ... & ... & 0.801(4) & $<$11.4\\ HB89 0312$-$034$^a$ & 1.072 & 685.52 & $-$5456, 3465 & 8.1 & 2.741(5) & $<$0.52 && 778.1 & $-$1656, 1757 & 5.4 & & 804.71 & $-$5017, 2719 & 4.6 && 830.4 & $-$7708, 3704 & 2.4 & 2.328(4) & $<$3.7\\ 3C124$^a$ & 1.083 & 681.90 & ... & RFI &... &... && 773.99 & $-$275, 1874 & 4.7 && 800.46 & $-$5137, 1770 & 4.2 && 825.99 & $-$2295, 2115 & 2.8 & 1.935(7) & $<$4.0\\ 0458$-$020 & 2.03939 & 467.332(3) & $-$1316, 846 & 33 & 2.45(5) & 3.8(1.3)&& 530.45 & ... & RFI && 548.58 & ... & RFI && 566.08 &... & ... &... &... \\ SBS 0804+499$^b$ & 1.4073 & 590.04 & ... & ... & ... & ... && 669.73 & ... & RFI & & 692.63 & ... & RFI & & 714.71 & $-$355, 1159 & 8.8 & 0.813(13) & $<$217\\ 3C216 & 0.66996 & 850.56(5) & $-$737, 3580 & 8.1 & 6.39(4) & 1.8(4) && 965.43 & $-$3015, 1649 & 7.1 && 998.44 & $-$4701, 2073 & 6.2 & & 1030.3 & $-$3904, 606 & 9.3 &5.52(4)&$<$2.3\\ 4C$-$05.60$^a$ & 1.094(2) & 678.32 & $-$621, 363 & 3.7 & 2.80(14) &$<$0.23 && 769.93 & $-$1658, 1713 & 2.3 && 796.26 & $-$5429, 4907 & 2.1 && 821.65 & $-$3063, 544 & 2.2 & 2.41(11)& $<$1.6\\ 1127$-$145 & 0.31272 &1082.04(5) & $-$2734, 1722 & 7.6 & 5.548(3) & 6.2(2) && 1228.2 & $-$3427, 1187 & 4.9 && 1270.2 &$-$2064, 2633 & 6.1 & &1310.7 & $-$1885, 2507 & 7.8 & 5.574(3) & $<$2.3 \\ 1243$-$072 & 0.436734 & 988.635(5) & $-$909, 3347 & 7.9 & 0.64(2) & 2.7(3) && 1122.2 & ... & RFI && 1160.5 & $-$9759, 2149 & 7.1 & & 1197.5 & $-$2922, 2085 & 5.5 & 0.60(3) & $<$27\\ 1413+135$^{b,e}$ & 0.24671& 1139.3 & ... & ... & ... & ... & & 1293.2 & $-$8024, 8807 & 6.1 && 1337.4 & $-$8113, 2445 & 2.2 & & 1380.1 & $-$6374, 4075& 5.6 & 1.1240(6) &$<$3.8 \\ HB89 1437+624$^a$ & 1.0935(17) & 678.48 & ... & RFI & ... & ... 
&& 770.11 & $-$1577, 1306 & 8.9 && 796.45 & $-$1883, 280 & 4.6 && 821.84 & $-$1060, 576 & 4.3 & 2.59(12) & $<$3.3\\ 3C356$^a$ & 1.079 & 683.22 & ... & RFI & ... & ... && 775.48 & $-$2362, 2193 & 6.1 && 802.00 & $-$6344, 4516 & 6.1 && 827.58 & $-$1195, 1960 & 6.6 & 2.644(6) & $<$4.3\\ PKS 1830$-$211 & 0.8851& 753.5(3) & $-$3585, 3447 & 15 & 9.95(4) & 37(4) && 855.25 & $-$740, 5258 & 10 && 884.49 & $-$1629, 309 & 9.5 && 912.70 & ... & ... & 9.76(4) & 60(3)$^d$ \\ 2351+456 & 0.77948 & 798.22(3) & $-$4191, 6957 & 10 & 2.006(2) & 32(3) && 906.02 & ... & RFI && 937.0 & ... & ... && 966.88 & ... & ... & ... & ...\\ 2355$-$106$^b$ & 1.1726 & 653.78 & ... & ... & ... & ... && 742.07& $-$13560, 5573 & 5.5 && 767.45 & ... & RFI && 791.92 & $-$3929, 4960 & 4.4 & 0.54(5) & $<$147\\ \hline \end{tabular} \end{table} \end{landscape} \begin{table*} \caption{$f(N_{\rm HI},X)$ and $\Omega_{\rm HI}$ Measurements. Columns list the (1) $\Delta \log N_{\textrm {HI}}$ bin, (2) number of detections made in each $\Delta \log~N_{\rm HI}$ bin, (3) total number of observational samples in each column density sensitivity bin, (4) total $\Delta X$ searched for which a system with that column density could have been detected, (5) calculated column density frequency distribution for the entire sample in each bin, and (6) measured $\Omega_{\rm HI}$ for each $\log N_{\textrm {HI}}=0.5$~dex bin for our redshift bin below/above the redshift cut at $z=0.69$. We repeat all calculations for $T_s/f$ values of 100, 250, 500, and 1000~K. Numbers in parentheses indicate uncertainties in the final digit(s) of listed quantities, when available.} \label{tab:fN} \begin{tabular}{lcccccc} \hline $\log N_{\textrm {HI}}$ & Detections & Samples & $\Delta X$ & log~$f(N_{\rm HI},X)$ & \multicolumn{2}{c}{$\Omega_{\rm HI}\times10^{-3}$} \\ (cm$^{-2}$) & & & & & $0<z<0.69$ & $0.69<z<2.74$ \\ \hline \hline \multicolumn{7}{c}{$T_s/f = 100$~K} \vspace{3pt} \\ \hline 18.25 & 0 & 9 & 0.740 & $<-$17.73 & ... 
& ...\\ 18.75 & 0 & 84 & 7.23 & $<-$19.22 & ... & ...\\ 19.25 & 0 & 484 & 56.03 & $<-$20.61 & ... & ...\\ 19.75 & 2 & 1013 & 121.2 & $-$21.6(7) & ... & ...\\ 20.25 & 1 & 506 & 155.2 & $-$22.5(1.0) & ... & ...\\ 20.75 & 4 & 48 & 159.1 & $-$22.4(5) & ... & ...\\ 21.25 & 0 & 1 & 159.2 & $<-$23.06 & ... & ...\\ 21.75 & 2 & 2 & 159.5 & $-$23.7(9) & ... & ...\\ \hline Total & 9 & 2147 & 159.5 & ... & 0.21(10) & 0.69(45) \vspace{3pt}\\ \hline \hline \multicolumn{7}{c}{$T_s/f = 250$~K} \vspace{3pt} \\ \hline 18.75 & 0 & 15 & 1.03 & $<-$18.37 & ... & ...\\ 19.25 & 0 & 113 & 10.6 & $<-$19.88 & ... & ...\\ 19.75 & 0 & 652 & 73.0 & $<-$21.22 & ... & ...\\ 20.25 & 2 & 977 & 132.1 & $-$21.2(7) & ... & ...\\ 20.75 & 2 & 364 & 156.9 & $-$22.7(7) & ... & ... \\ 21.25 & 3 & 24 & 159.2 & $-$23.1(6) & ... & ... \\ 21.75 & 2 & 2 & 159.5 & $-$23.7(7) & ... & ... \\ \hline Total & 9 & 2147 & 159.5 & ... & 0.53(16) & 1.7(9) \vspace{3pt} \\ \hline \hline \multicolumn{7}{c}{$T_s/f = 500$~K} \vspace{3pt} \\ \hline 18.75 & 0 & 2 & 0.085 & $<-$17.29 & ... & ...\\ 19.25 & 0 & 37 & 3.37 & $<-$19.39 & ... & ...\\ 19.75 & 0 & 263 & 28.2 & $<-$20.81 & ... & ...\\ 20.25 & 0 & 909 & 101.4 & $<-$21.86 & ... & ...\\ 20.75 & 2 & 777 & 147.9 & $-$22.7(7) & ... & ...\\ 21.25 & 4 & 155 & 159.1 & $-$22.9(5) & ... & ...\\ 21.75 & 1 & 2 & 159.2 & $-$24.0(1.0) & ... & ...\\ 22.25 & 2 & 2 & 159.5 & $-$24.2(7) & ... & ...\\ \hline Total & 9 & 2147 & 159.5 & ... & 1.05(33) & 3.4(1.6) \vspace{3pt} \\ \hline \hline \multicolumn{7}{c}{$T_s/f = 1000$~K} \vspace{3pt} \\ \hline 19.25 & 0 & 9 & 0.740 & $<-$18.73 & ... & ...\\ 19.75 & 0 & 84 & 7.23 & $<-$20.22 & ... & ...\\ 20.25 & 0 & 484 & 56.03 & $<-$21.61 & ... & ...\\ 20.75 & 2 & 1013 & 121.2 & $-$22.6(7) & ... & ...\\ 21.25 & 1 & 506 & 155.2 & $-$23.5(1.0) & ... & ...\\ 21.75 & 4 & 48 & 159.1 & $-$23.4(5) & ... & ...\\ 22.25 & 0 & 1 & 159.2 & $<-$24.1 & ... & ...\\ 22.75 & 2 & 2 & 159.5 & $-$24.7(8) & ... & ...\\ \hline Total & 9 & 2147 & 159.5 & ...
& 2.1(5) & 6.9(2.7) \vspace{3pt} \\ \hline \end{tabular} \end{table*} \bibliographystyle{mnras}
\section*{Introduction} Thermodynamic uncertainty relations are a remarkable set of inequalities in Stochastic Thermodynamics that bound the coefficient of variation of empirical currents in terms of their averages and of the entropy production rate (see, e.g., the research papers \cite{barato2015thermodynamic,GiRoHo17} and/or the monograph \cite{PePi20} for an overview). In a nutshell, they intimate that achieving very accurate currents, with a very small coefficient of variation, requires in general a minimal cost in terms of entropy production. Their name is evocative of the uncertainty relations of quantum physics, which set a bound on the accuracy with which the position of a particle and its velocity can be evaluated. It is interesting to remark that an analogy between the inequality expressing Heisenberg's uncertainty relations and a similar one which applies to diffusion processes like Brownian motion was pointed out in a remarkable paper by Reinhold Fürth in 1933. The paper, which points to a profound analogy between the uncertainty arising from quantum fluctuations and that due to random forces acting on a diffusing particle, opened the way in some sense to more recent developments like Nelson's stochastic mechanics approach~\cite{Nelson85} to quantum mechanics and the stochastic quantization approach championed by Parisi and Wu~\cite{parisi1981perturbation,Nam92}. We hope to be helpful to the community by providing a translation of this comparatively little-known paper. The translation is preceded by a brief biographical sketch of its author and is followed by some remarks on the translation and on some early developments of the approach initiated by the paper. Notice that the author's references appear as footnotes, as in the original paper. The references due to the curator appear in brackets and are listed at the end. \section*{Reinhold Fürth} Reinhold Fürth was born in Prague (then Austria-Hungary) in 1893.
He obtained a doctorate from the German Charles-Ferdinand University in Prague in 1916, where he became a professor of experimental physics in 1931. After the German takeover in 1938-39 he moved to Scotland and became a research fellow at the University of Edinburgh. After having been elected fellow of the Royal Society in 1943, he moved in 1947 to Birkbeck College, London, to become professor of theoretical physics there. He died in 1979. His name is well known as the editor of a collection of papers by Albert Einstein on the theory of Brownian movement~\cite{FurEin22}, whose English translation~\cite{FurEin56} is widely read. His lecture on the physics of social equilibrium~\cite{Fur51}, read before the British Association for the Advancement of Science in Edinburgh in 1951, can be considered one of the earliest examples of the approach known as sociophysics. \section*{Text} \begin{center} From the Physics Institute of the German University in Prague\\ \textbf{\large On certain relations between classical Statistics and Quantum Mechanics.}\\ By Reinhold Fürth in Prague.\\ With 4 figures. (Received on January 19, 1933.) \end{center} \begin{center} \textbf{Abstract} \end{center} \begin{quote} The formal analogy is highlighted between the differential equations for the probability distribution of the position of a mechanical system according to classical Statistics and to Quantum Mechanics, which can also be interpreted as equations for the motion of a cluster of identical particles, i.e., a diffusion. The physical origin of such diffusion will be ascribed in the classical case to the collisions with the molecules of the surrounding matter, in the case of Quantum Mechanics to the uncertainty relations. In the latter case, diffusion in the absence of forces is discussed and a simple derivation of the uncertainty relations is given on this basis.
The line of reasoning can be carried over to classical diffusion and it is possible to derive an inequality for the variance of the position and the velocity which is in strict analogy with \so{Heisenberg}'s uncertainty relations. The relation found can also be applied to a single particle and more generally to an arbitrary mechanical system, since it states that the simultaneous measurement of the position and of the corresponding velocity is possible only up to a maximal accuracy in consequence of the \so{Brown}ian motion. The relation of this finding with the problem of [determining] with which accuracy it is possible to measure a physical quantity with a mechanical measurement device is discussed, and as a result it turns out that there exists also here, in analogy [with Quantum Mechanics], an accuracy limit which cannot be overcome. Finally, light is shed from the point of view of wave Mechanics on the question why the classical diffusion equation holds for a real density function with a real diffusion coefficient in contrast to the Schr\"odinger equation [which holds] for a complex function with an imaginary coefficient, and [this fact] is related to the problem of the observability of physical quantities and of the reversibility versus irreversibility of natural processes. \end{quote} \section*{} \label{sec:intro} In what follows there shall be a discussion of certain relations between classical statistics -- the classical diffusion theory and the theory of \so{Brown}ian motion -- on the one hand, and Quantum Mechanics on the other, [discussion] which arises from formal reasons and [which], although it might be already known to some, to the best of my knowledge has not yet been addressed in this context.
In particular it is possible to show that \so{Heisenberg}'s uncertainty relations carry over to processes which are governed by classical Statistics and that it is thus possible to bring about new perspectives on the often-addressed question of the limit of measurability with a measurement device. It is furthermore attempted to make precise the physical meaning of the aforementioned similarities and differences. \section{}\label{sec:1} The classical theory of diffusion is governed~\footnote{See, e.g., Frank-Mises, \textsl{Differential-u. Integralgleichungen d. math. Physik} 2, 248} by the generalised diffusion equation \begin{align} \frac{\partial u}{\partial t}= D \,\laplace u-\operatorname{div}(u \mathfrak{v}) \label{diffeq} \end{align} where $ u(x,y,z,t) $ denotes the density as a function of position and time, $ D $ (assumed constant) the diffusion coefficient and $ \mathfrak{v} $ the velocity vector of the convection current occasioned by external forces. The solution of this equation under given boundary conditions determines the distribution of the density at any future instant of time if the distribution is known in the present. If one interprets the diffusion experiment as a collective experiment with a spatial ensemble of many identical particles then $ u \mathrm{d}V $ is the relative frequency with which any element of the ensemble is found in the volume element $ \mathrm{d}V $ at time $ t $ during the collective experiment, provided that $ u $ satisfies the normalization condition \begin{align} \iiint\,u\,\mathrm{d}V=1 \label{norm} \end{align} for all $ t $.
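In modern terms, equation (\ref{diffeq}) is an advection--diffusion (Fokker--Planck) equation, and the preservation of the normalization (\ref{norm}) under the evolution can be checked in a few lines of code. The following sketch (a modern addition by the curator, not part of the original text; all parameter values are arbitrary choices) integrates the one-dimensional version of (\ref{diffeq}) with a constant convection velocity on a periodic grid.

```python
import numpy as np

# Explicit finite-difference step for the 1-d version of (diffeq),
# du/dt = D * u_xx - v * u_x (constant convection velocity v), on a
# periodic grid. The scheme conserves sum(u)*h by construction, mirroring
# the normalization condition (norm). D, v, grid and dt are arbitrary.
D, v = 0.1, 0.4
n, L, dt = 400, 20.0, 0.001
h = L / n
x = np.arange(n) * h - L / 2

u = np.exp(-x**2)              # Gaussian initial density
u /= u.sum() * h               # normalize, cf. (norm)

for _ in range(5000):
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2   # diffusion term
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)          # convection term
    u = u + dt * (D * u_xx - v * u_x)

print(u.sum() * h)             # still ~ 1: total probability is conserved
```

The cloud drifts with velocity $v$ while spreading diffusively, exactly the superposition of convection and diffusion described in the text.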
The replacement of the spatial ensemble with a virtual ensemble turns the diffusion equation (\ref{diffeq}) into an equation for the ``probability density'' $ u $ of the position of an individual particle which can be computed as a function of time when it is known at time zero: \so{Smoluchowski}'s differential equation for \so{Brown}ian motion of an individual particle under the action of external forces~\footnote{M. v. Smoluchowski, \textit{Ann. d. Phys.} 43, 1105, 1915}. It is possible to show that \so{Smoluchowski}'s equation is a special case of another differential equation which can be derived under very general conditions for the \so{Brown}ian motion of an arbitrary mechanical system and [which] is usually referred to as the \so{Fokker-Planck} equation~\footnote{See, among others, F. Zernike, \textsl{Handb. d. Phys. Bd. III,} S. 457}. Following \so{Schr\"odinger}~\footnote{E. Schr{\"{o}}dinger, \textit{Ann. de l'Inst. H. Poincar\'e} 1931, S. 296ff. \textit{S. Ber. Berl. Akad.} 1931, S. 148; see also J. Metadier, \textit{C. R.} \textbf{193,} 1173, 1931.}, [the Fokker-Planck equation] can be written as \begin{align} \frac{\partial u}{\partial t}= F u \label{FPeq} \end{align} where $ F $ denotes a certain differential operator which, in agreement with (\ref{diffeq}), reduces to $ F=D \Delta- \operatorname{div}\mathfrak{v} $ in the case when the system is a particle under the action of a force. The differential equation (\ref{FPeq}) is, as \so{Schr\"odinger}~\footnote{E. Schr{\"{o}}dinger, \textit{Ann. de l'Inst. H. Poincar\'e} 1931, S. 296ff. \textit{S. Ber. Berl. Akad.} 1931, S. 148; see also J. Metadier, \textit{C. R.} \textbf{193,} 1173, 1931. 
[sic]} also already pointed out, formally identical to the time-dependent \so{Schr\"odinger} differential equation of wave mechanics for the wave function $ \psi $ which is usually written in the form \begin{align} \label{Seq} \frac{\partial \psi}{\partial t}= H \psi \end{align} where $ H $ denotes the \so{Hamilton} operator for the mechanical problem of interest. According to the statistical formulation of wave mechanics this equation too is a ``probability equation'', inasmuch as it allows one to compute this quantity at any arbitrary later instant of time from the knowledge of $ \psi(q) $ at time zero, and the ``probability amplitude'' $ \psi $ is linked to the probability density for the sojourn of the system in a certain volume element of the $ q $-space by the relation \begin{align} \label{Born} w=\psi\psi^{*} \end{align} ($ \psi^{*} $ is the complex conjugate of $ \psi $) so long as $ \psi $ satisfies the normalization condition \begin{align} \iiint\,\psi\psi^{*}\,\mathrm{d}V=1 \label{psinorm} \end{align} By reversing the line of reasoning, one can also construe the quantity $ w $ defined via (\ref{Born}) as the phase point density of a large number of identical non-interacting systems in $ q $-space. Equation (\ref{Seq}) then determines the evolution of such a distribution density and permits one to compute the density at any further time if the density function is assigned at time zero. In the especially important case of an individual point system of mass $ m $ subject to the action of a force which can be derived from a potential $ U $, equation (\ref{Seq}) reads \begin{align} \label{} -\frac{h}{2\,\pi\, i }\frac{\partial \psi}{\partial t}=-\frac{h^{2}}{8\,\pi^{2}\,m}\laplace\psi+U\,\psi \end{align} The discussion of this equation teaches, as \so{Ehrenfest} first showed~\footnote{P. Ehrenfest, \textit{ZS. f.
Phys.}\textbf{45}, 455, 1927}, that the centre of mass of a cluster of particles obeying the conditions expounded afore moves in the usual three-dimensional space according to the prescription of classical mechanics when the assigned forces act on the particle, but also that the cluster of particles spreads around the centre of mass via a sort of diffusion. We therefore encounter here a convection current with a diffusion overlaid on it, in analogy with the motion of a cluster of particles according to the classical theory of diffusion. As we are interested only in the latter phenomenon, we wish in what follows to set the external force to zero. The equations (\ref{Seq}) and (\ref{diffeq}) then become formally identical, namely \begin{align} \label{freediff} \frac{\partial u}{\partial t} = D\, \laplace\,u \end{align} and \begin{align} \label{freeS} \frac{\partial \psi}{\partial t}=\epsilon \laplace\,\psi \end{align} where the shortcut \begin{align} \label{sc} \epsilon=\frac{ i \,h}{4\,\pi\,m} \end{align} is used. Subject to the same boundary and initial conditions the solutions of (\ref{freediff}) and (\ref{freeS}) hence read completely the same. To be sure, a substantial difference arises from the fact that in the case of Quantum Mechanics not the (in general complex) function $ \psi $ but rather, according to (\ref{Born}), its squared modulus plays the role of density function, and that according to (\ref{sc}) the diffusion coefficient is here purely imaginary. We return to the physical meaning of this fact below. \section{}\label{sec:2} The deeper reason for the analogy emerging in the comparative presentation of \S~1 between the motion of a cluster of particles according to the classical theory of diffusion and Quantum Mechanics resides in the fact that in both cases the velocities of individual particles in the cluster differ and obey a statistical law.
In the first case, this (phenomenon) stems from the fact that the particles undergo irregular collisions with the molecules of the surrounding medium, whereby the particles' momenta are continuously varied in intensity and direction in such a way that there is no relation between the changes of momenta of distinct particles. This [fact] becomes manifest when considering an individual particle in its irregular \so{Brown}ian motion, and, when considering a particle cluster, in the fact that for an assigned initial state of the cluster and initially vanishing ``macroscopically'' measured velocity, the particles actually possess velocities irregularly distributed across the cluster and that in the course of time the initial distribution varies in the way characteristic of a diffusion. In the case of Quantum Mechanics, the very assumption of an initial density distribution implies that the condition of vanishing initial velocity of all the particles cannot be strictly satisfied. According to \so{Heisenberg}'s fundamental uncertainty relations governing Quantum Mechanics, a complete assignment of the initial velocity of the particles would be possible only in the presence of a complete uncertainty about the initial positions. As a certain information about the initial position of the particles is conveyed by the assignment of the initial distribution, one must admit a certain blurring of the initial velocities, i.e., a certain statistical distribution of the initial velocities of the cluster particles. But a necessary consequence of this is that a variation of the initial density distribution as well as a diffusion of the cluster must have occurred after a certain time. That the uncertainty on the value of the position of the particles of the diffusing cluster really satisfies \so{Heisenberg}'s uncertainty relations together with the uncertainty about the value of the velocity (momentum) has been shown by \so{Heisenberg}~\footnote{W. Heisenberg, \textit{ZS. f.
Phys.} \textbf{43,} 172, 1927.} and \so{Kennard}~\footnote{E. H. Kennard, {loc. cit.} \textbf{44,} 326, 1927.} among others. A brief derivation may be given here for the one-dimensional case, which, without resorting to the theory of transformations, makes use~\footnote{I need to thank here Mr. K. \so{L{\"o}wner}, Prague, for some hints.} only of equation (\ref{freeS}) and its complex conjugate, taking in one dimension the form \begin{align} \left. \begin{array}{l} \dfrac{\partial \psi}{\partial t} = \epsilon \dfrac{\partial^{2} \psi}{\partial x^{2}} \\[0.3cm] \dfrac{\partial \psi^{*}}{\partial t} = -\epsilon \dfrac{\partial^{2} \psi^{*}}{\partial x^{2}} \end{array} \right \} \label{system} \end{align} Let $ x_{0} $ be the initial position of one particle of the cluster, $ v $ its initial velocity and $ x $ its position after a time $ t $; then \begin{align} \label{linear} x=x_{0}+v\,t \end{align} holds. If the centre of mass at time zero is located in the origin of the coordinates and its velocity is zero, i.e. $ \overline{x_{0}}=0 $ and $ \bar{v}=0 $, then according to (\ref{linear}) it is also clear that $ \bar{x}=0 $ for all $ t $. Taking the mean square of (\ref{linear}) one obtains \begin{align} \label{quadratic} \overline{x^{2}}=\overline{x_{0}^{2}}+2\,\overline{x_{0}\,v}\,t+\overline{v^{2}}\,t^{2} \end{align} By definition \begin{align} \overline{x^{2}}=\int_{-\infty}^{+\infty}x^{2}\psi\psi^{*}\mathrm{d}x \label{defx2} \end{align} holds true.
Using equation (\ref{system}) and under the assumption that $ \psi $ vanishes sufficiently fast at infinity, one obtains after a simple calculation \begin{align} \label{der1} \frac{\mathrm{d}}{\mathrm{d} t}\overline{x^{2}}=2\,\epsilon\,\int_{-\infty}^{\infty} x \left(\psi\frac{\partial \psi^{*}}{\partial x }-\psi^{*}\frac{\partial \psi}{\partial x }\right)\mathrm{d}x \end{align} \begin{align} \label{der2} \frac{\mathrm{d}^{2}}{\mathrm{d} t^{2}}\overline{x^{2}}=-8\,\epsilon^{2}\,\int_{-\infty}^{\infty} \frac{\partial \psi^{*}}{\partial x }\frac{\partial \psi}{\partial x }\mathrm{d}x \end{align} \begin{align} \label{der3} \frac{\mathrm{d}^{3}}{\mathrm{d} t^{3}}\overline{x^{2}}=0 \end{align} From (\ref{der3}) it follows that $ \overline{x^{2}}$ must be a quadratic function of time, in agreement with (\ref{quadratic}); it also follows from (\ref{der2}) that $ \overline{v^{2}} $, as the coefficient of $ t^{2} $ in (\ref{quadratic}), satisfies \begin{align} \label{v2} \overline{v^{2}}=\frac{1}{2}\frac{\mathrm{d}^{2}}{\mathrm{d} t^{2}}\overline{x^{2}}=-\,4\,\epsilon^{2}\,\int_{-\infty}^{\infty} \left |\frac{\partial \psi}{\partial x }\right |^{2}\mathrm{d}x \end{align} According to \so{Heisenberg}~\footnote{W. \so{Heisenberg}, \textsl{Die physikalischen Prinzipien der Quantentheorie.} Leipzig 1930.
Page 13 and following.}, it now follows from the self-evident inequality \begin{align} \label{ineq1} \left |\frac{x}{2\,\overline{x^{2}}}\psi+\frac{\partial \psi}{\partial x}\right |^{2}\,\geq\,0 \end{align} with use of (\ref{psinorm}) and (\ref{defx2}) \begin{align} \nonumber \int_{-\infty}^{\infty} \left |\frac{\partial \psi}{\partial x }\right |^{2}\mathrm{d}x\,\geq\,\frac{1}{4\,\overline{x^{2}}} \end{align} whence from (\ref{v2}) \begin{align} \label{Heisenberg1} \overline{x^{2}}\,\overline{v^{2}}\,\geq\,-\epsilon^{2} \end{align} If one introduces the uncertainties on the position and momentum of the particle cluster under consideration by means of the relations \begin{align} \label{uncertain} \left . \begin{array}{l} \Delta x = \sqrt{\overline{x^{2}}} \\[0.3cm] \Delta p = m \sqrt{\overline{v^{2}}} \end{array} \right \} \end{align} then, using (\ref{sc}), \so{Heisenberg}'s relation \begin{align} \Delta x \, \Delta p \,\geq\, \frac{h}{4\,\pi} \label{Heisenberg} \end{align} follows from (\ref{Heisenberg1}) for their product. The equality holds here if and only if the inequality (\ref{ineq1}) turns into an equality. The integration of the latter yields for $ \psi $ \begin{align} \psi=\mathrm{Const.} \,e^{-x^{2}/4\,(\Delta x)^{2}} \label{Gpsi} \end{align} and hence for the density of the particle cluster (\ref{Born}), in consideration of (\ref{psinorm}), the Gaussian distribution \begin{align} \label{Gauss} w=\frac{1}{\sqrt{2\,\pi}\Delta x}\,e^{-x^{2}/2\,(\Delta x)^{2}} \end{align} If $ \psi $ takes at time $ t=0 $ the special form (\ref{Gpsi}) then it follows from (\ref{der1}) that $ \frac{\mathrm{d}}{\mathrm{d} t} \overline{x^{2}}=0$, and as a consequence the coefficient of $ t $ disappears from (\ref{quadratic}).
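The saturation of the bound (\ref{Heisenberg}) by the \so{Gauss}ian amplitude (\ref{Gpsi}) can be checked numerically. The sketch below (a modern addition by the curator, not part of the original text; the width $\sigma$ and the grid are arbitrary choices) works in units with $h/2\pi=1$, in which the bound reads $\Delta x\,\Delta p\geq 1/2$.

```python
import numpy as np

# Discretize the Gaussian amplitude (Gpsi) and evaluate Delta x and Delta p.
# Units: hbar = h/(2 pi) = 1, so Heisenberg's bound reads Dx * Dp >= 1/2.
sigma = 0.7                                   # packet width, arbitrary choice
x = np.linspace(-20.0, 20.0, 40_001)
h = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))          # real Gaussian amplitude (Gpsi)
psi /= np.sqrt((psi**2).sum() * h)            # normalization, cf. (psinorm)

var_x = (x**2 * psi**2).sum() * h             # <x^2>, cf. (defx2)
var_p = (np.gradient(psi, h)**2).sum() * h    # <p^2> = int |psi'|^2 (hbar = 1)

print(np.sqrt(var_x), np.sqrt(var_p), np.sqrt(var_x * var_p))
# Delta x = sigma, Delta p = 1/(2 sigma): the product is 1/2
# up to discretization error, independently of sigma.
```

Repeating the run with a different $\sigma$ changes $\Delta x$ and $\Delta p$ individually but leaves their product at the minimum value.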
If the particle cluster under consideration has a corresponding initial distribution of the positions, then at the same time $ \overline{x_{0}\,v}=0 $ [holds], i.e., the dispersions of the positions and of the initial velocities of the single particles are statistically independent of one another. Conversely, the existence at time zero of the density distribution (\ref{Gauss}) by no means implies by itself the statistical independence of positions and velocities and hence the vanishing of the linear term in (\ref{quadratic}). \section{}\label{sec:3} In accordance with what was said at the beginning of section~\ref{sec:2}, it is natural to apply the above reasoning, which in the quantum mechanical case is based on the \so{Heisenberg} uncertainty relation, to the case of classical diffusion. Also here we restrict ourselves to the one-dimensional case with vanishing convection current, and so we start from equation (\ref{freediff}), which in one dimension reads \begin{align} \label{3:freediff} \frac{\partial u}{\partial t}=D \frac{\partial^{2} u}{\partial x^{2}} \end{align} where, in agreement with (\ref{norm}), $ u $ satisfies the condition \begin{align} \int_{-\infty}^{\infty} u\,\mathrm{d}x=1 \label{3:norm} \end{align} We define the uncertainty of the particle cluster by means of the quantity $ \overline{x^{2}} $ as \begin{align} \overline{x^{2}}=\int_{-\infty}^{\infty}x^{2}\, u\,\mathrm{d}x \label{3:var} \end{align} At $ t=0 $ the centre of mass of the cluster lies again in the origin of the coordinates, so that $ \overline{x_{0}}=0 $. To start with, we derive the analog of equation (\ref{quadratic}), which expresses how the uncertainty $ \overline{x^{2}} $ initially present in the diffusing particle cluster grows in the course of time.
\begin{align} \frac{\mathrm{d}}{\mathrm{d} t}\overline{x}=\frac{\mathrm{d}}{\mathrm{d} t}\int_{-\infty}^{\infty}x\, u\,\mathrm{d}x = \int_{-\infty}^{\infty}x\, \frac{\partial u}{\partial t}\,\mathrm{d}x\\ = D\int_{-\infty}^{\infty}x\, \frac{\partial^{2} u}{\partial x^{2}}\,\mathrm{d}x=0 \nonumber \end{align} The centre of mass of the cluster constantly remains at rest, as the absence of a convection current immediately shows, so that $ \overline{x}=0 $ for all times. From (\ref{3:var}) it follows in an analogous way that \begin{align} \label{3:dvar} \frac{\mathrm{d}}{\mathrm{d} t}\overline{x^{2}}= \int_{-\infty}^{\infty}x^{2}\, \frac{\partial u}{\partial t}\,\mathrm{d}x = D\int_{-\infty}^{\infty}x^{2}\, \frac{\partial^{2} u}{\partial x^{2}}\,\mathrm{d}x= 2\,D \end{align} and therefore that $ \overline{x^{2}} $ is a linear function of time of the form \begin{align} \overline{x^{2}}=\overline{x_{0}^{2}}+2\,D\,t \label{3:quadratic} \end{align} The comparison of (\ref{3:quadratic}) with (\ref{quadratic}) shows that in both cases the uncertainty over the position grows indefinitely over a sufficiently long time and thus that a diffusion of the cluster occurs. Whereas here, however, the growth of $ \overline{x^{2}} $ occurs independently of $ \overline{x_{0}^{2}} $ and \textit{linearly} in time, there the growth in time is \textit{quadratic} and, in consequence of (\ref{Heisenberg1}), is itself dependent upon $ \overline{x_{0}^{2}} $ (it takes place in a particularly sudden way if $ \overline{x_{0}^{2}} =0$, inasmuch as $ \overline{v^{2}} $ becomes infinitely large); finally, if the linear term in $ t $ is non-vanishing, so that the dispersions of the positions and of the velocities are not statistically independent at time zero, it may be that the cluster first undergoes a contraction to a minimum and only afterwards a spreading. The formal causes for the aforementioned differences have been already discussed at the end of \S~1.
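The spreading law (\ref{3:quadratic}) lends itself to a direct Monte Carlo illustration. In the sketch below (a modern addition by the curator; $D$, the initial spread and the time step are arbitrary choices) a cluster of independent \so{Brown}ian particles is propagated and the final dispersion is compared with $\overline{x_{0}^{2}}+2\,D\,t$.

```python
import numpy as np

# Monte Carlo check of (3:quadratic): Var[x](t) = Var[x](0) + 2 D t
# for a cluster of independent Brownian particles. Parameters arbitrary.
rng = np.random.default_rng(0)
D, dt, n_steps, n_particles = 0.5, 0.01, 200, 100_000

x = rng.normal(0.0, 0.3, size=n_particles)   # initial cluster, rms spread 0.3
var0 = x.var()
for _ in range(n_steps):                     # each step adds 2*D*dt of variance
    x += rng.normal(0.0, np.sqrt(2 * D * dt), size=n_particles)

t = n_steps * dt
print(x.var(), var0 + 2 * D * t)             # the two values agree
```

As the text notes, the classical growth is linear in $t$ and simply adds to the initial dispersion, with no cross term.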
The differences are physically explained by the fact that in the case of classical diffusion there is no ``initial velocity'' of the particles and therefore no equation of the form (\ref{linear}) exists, and that furthermore the instantaneous speed of the particles is due to the collisions with the molecules, as already mentioned~\footnote{If one, following \so{Schr{\"o}dinger} (Ber. Ber. 1930, S. 296, Nr. 19), sets out to choose the value of $ \overline{x_{0}^{2}} $ such that $ \Delta x $ is as small as possible and the product $ \Delta x \Delta p =\frac{h}{4\,\pi} $ holds true, so that the initial distribution (\ref{Gpsi}) is satisfied, then in (\ref{quadratic}) the second term on the right hand side vanishes whereas the first and the third become equal upon setting $ \overline{v^{2}} = - \frac{\epsilon^{2}}{\overline{x_{0}^{2}}}$. This implies for $ \overline{x^{2}} $ \begin{align} \overline{x^{2}}=\frac{h}{2\,\pi\,m}t \label{3:note} \end{align} In this case, the formal analogy with (\ref{3:quadratic}) is strikingly evident when one thereby replaces $ D $ with the absolute value of the ``imaginary diffusion coefficient'' $ \epsilon $ according to (\ref{sc}). }. On the basis of the statistical independence, in the classical case, of the dispersion process and of the initial distribution, one can write down equation (\ref{3:quadratic}) immediately, since it expresses that the ``mean square error'' of $ x $, being due to two causes, the initial spread and the diffusion, is the sum of these two contributions, the second being the well-known \so{Einstein} law for the mean square displacement of \so{Brown}ian motion. In order to find the analog of the uncertainty relations (\ref{Heisenberg1}) to (\ref{Heisenberg}), we first need to find a suitable definition of velocity for classical diffusion. From the above it is clear that this role can by no means be played by the microscopic velocity produced by the molecular collisions.
Likewise, as we have already seen, the macroscopic velocity of the cluster regarded as a single entity, or strictly speaking the velocity of its centre of mass, [is not a good candidate since it] vanishes. A suitable quantity comes about from the consideration of the ``diffusion current'', i.e., the quantity of diffusing matter crossing in the unit of time a fixed section of the diffusion domain. As is well known~\footnote{Compare ref.~1.}, the vector $ \mathfrak{Q} $ of the diffusion current is a local function in the diffusion domain, connected with the scalar $ u $ by the relation \begin{align} \label{3:j} \mathfrak{Q} = - D \operatorname{grad}u \end{align} Based on the fact that $ u $ is nothing else than the density of the diffusing matter, we find the corresponding velocity vector $ \mathfrak{v} $ according to \begin{align} \mathfrak{v}=\frac{1}{u}\mathfrak{Q}=- D \frac{1}{u}\operatorname{grad}u \label{3:cv} \end{align} which in the one-dimensional case becomes \begin{align} \label{3:cv1d} v=- D \frac{1}{u}\frac{\partial u}{\partial x} \end{align} If we now compute the mean value of $ v $ over the particle cluster at a certain time instant, we obtain by definition, using (\ref{3:cv1d}), \begin{align} \label{3:meancv} \overline{v}=\int_{-\infty}^{\infty}\,v\, u\, \mathrm{d}x =- D\int_{-\infty}^{\infty}\frac{\partial u}{\partial x} \mathrm{d}x =0 \end{align} as it must be, since $ \overline{v} $ is nothing else than the macroscopic velocity of the centre of mass.
For the mean value of $ v^{2} $ one finds \begin{align} \label{3:cvvar} \overline{v^{2}}=\int_{-\infty}^{\infty}v^{2} \,u \,\mathrm{d}x= D^{2}\int_{-\infty}^{\infty} \frac{1}{u}\left(\frac{\partial u}{\partial x}\right)^{2}\mathrm{d}x \end{align} By a straightforward application of the reasoning of \S~2 one can derive an inequality for the product $ \overline{v^{2}}\,\overline{x^{2}} $, by proceeding once again from the self-evident inequality \begin{align} \label{3:obvious} \left(\frac{1}{u}\frac{\partial u}{\partial x}+\frac{x}{\overline{x^{2}}}\right)^{2}\,\geq\,0 \end{align} whence, upon multiplying by $ u $ and expanding the product, it follows that \begin{align} \frac{1}{u}\left(\frac{\partial u}{\partial x}\right)^{2}\,\geq\,-2\frac{x}{\overline{x^2}}\frac{\partial u}{\partial x}-\frac{x^{2}\,u}{(\overline{x^2})^{2}} \nonumber \end{align} Upon integrating, a simple calculation making use of (\ref{3:norm}) and (\ref{3:var}) yields \begin{align} \nonumber \int_{-\infty}^{\infty}\frac{1}{u}\left(\frac{\partial u}{\partial x}\right)^{2}\mathrm{d}x\,\geq\,\frac{1}{\overline{x^2}} \end{align} whence finally, according to (\ref{3:cvvar}), \begin{align} \overline{x^{2}}\,\,\overline{v^{2}}\,\geq\,D^{2} \label{fuerth} \end{align} As one can see, the inequality (\ref{fuerth}) has the same form as the inequality (\ref{Heisenberg1}), which turns into (\ref{fuerth}) if one again replaces the absolute value of $ \epsilon $ with $ D $. Introducing the notation $ \Delta x $ and $ \Delta v =\sqrt{\overline{v^{2}}} $ in analogy with (\ref{uncertain}), we write our uncertainty relation in the simpler form \begin{align} \label{3:main} \Delta x \, \Delta v \,\geq\, D \end{align} stating that in a classically diffusing particle cluster the position and the velocity of the particles at any instant of time cannot be \textit{simultaneously} determined with arbitrary accuracy and that furthermore the product of the uncertainties must always be larger than the diffusion coefficient $ D $.
The lower bound is attained, i.e., the inequality turns into an equality, if and only if (\ref{3:obvious}) [also] holds as an equality. The solution of the differential equation obtained in this way immediately yields \begin{align} u=\frac{1}{\sqrt{2\,\pi}\Delta x}\,e^{-x^{2}/2(\Delta x)^{2}} \label{3:Gauss} \end{align} having taken (\ref{3:norm}) into account, and [is] therefore again the \so{Gauss}ian distribution, as in the quantum mechanical case, in formal agreement with (\ref{Gauss}). Whereas in the present case the equality $ \Delta x \, \Delta v \,=\, D $ necessarily follows from the occurrence of the distribution (\ref{3:Gauss}), the occurrence of the distribution (\ref{Gauss}) is there only a necessary but not sufficient condition for the product $ \Delta x \, \Delta p $ to attain its minimum. Furthermore, whereas in a cluster of particles left to itself and satisfying at time zero the minimum uncertainty condition this condition continues to hold at any later time (because the distribution (\ref{3:Gauss}) preserves its form under the diffusion), in the quantum mechanical case the minimum condition is only instantaneously satisfied, e.g., at time zero and later no more (since the form of the distribution (\ref{Gpsi}) is not preserved by the motion of the particles). Finally, it should be emphasized that in the classical case one can always think of a cluster of particles satisfying the minimum condition as one brought about by the diffusion of a cluster which at a certain instant of time was completely concentrated in the origin of the coordinates. In order to see this, one needs only to make use in (\ref{3:Gauss}) of (\ref{3:quadratic}), where one inserts the abbreviation $ \overline{x_{0}^{2}}=2\,D\,t_{0} $; one then obtains \begin{align} u=\frac{1}{2\sqrt{\pi \,D\,(t+t_{0})}}\,e^{-x^{2}/4\,D\,(t+t_{0})} \label{3:diff} \end{align} which entails that indeed, for $ t=-t_{0} $, $ u $ vanishes everywhere except at $ x=0 $.
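The minimum-uncertainty property of the \so{Gauss}ian density (\ref{3:Gauss}) can also be verified numerically: with the velocity field (\ref{3:cv1d}) one has $v=D\,x/(\Delta x)^{2}$ for a \so{Gauss}ian $u$, and the product $\overline{x^{2}}\,\overline{v^{2}}$ comes out equal to $D^{2}$, saturating (\ref{fuerth}). A minimal sketch (a modern addition by the curator; $D$ and the width are arbitrary choices):

```python
import numpy as np

# For the Gaussian density (3:Gauss), the velocity field (3:cv1d) gives
# sqrt(<x^2> <v^2>) = D exactly, saturating (fuerth). Parameters arbitrary.
D, width = 0.8, 1.5
x = np.linspace(-12.0, 12.0, 240_001)
h = x[1] - x[0]

u = np.exp(-x**2 / (2 * width**2)) / (np.sqrt(2 * np.pi) * width)
v = -D * np.gradient(u, h) / u               # diffusion-current velocity (3:cv1d)
var_x = (x**2 * u).sum() * h                 # <x^2> = width^2
var_v = (v**2 * u).sum() * h                 # <v^2> = D^2 / width^2

print(np.sqrt(var_x * var_v))                # = D up to discretization error
```

Changing the width rescales $\Delta x$ and $\Delta v$ in opposite directions while their product stays pinned at $D$.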
In the quantum mechanical case this reduction, as we have already seen, is not possible. \section{}\label{sec:4} In the two preceding paragraphs we discussed the application of uncertainty relations to a spatial aggregate of identical particles in the quantum and in the classical case. As is well known, the fundamental significance of the uncertainty relation in Quantum Mechanics appears, however, when it is applied to an individual system. It teaches that the simultaneous measurement of the position and the momentum of a force-free particle can be performed at most with the accuracy $ h/4\pi $ predicted by formula (\ref{Heisenberg}), since the process of measuring one of the two quantities disturbs the other to such an extent that the product of the uncertainties of both quantities cannot fall below the aforementioned value. One can reformulate the statement for a general mechanical system by saying that the simultaneous measurement of a coordinate $q$ and of the impulse canonically conjugated to it is only possible with an uncertainty of the order of magnitude of $ h $. We can now apply, in a straightforward way, the relation (\ref{3:main}) obtained in \S~3 to the problem of the simultaneous measurement of the position and speed of a particle which is under the action of irregular impacts and therefore performs a \so{Brown}ian motion. Our relation teaches that the product of the uncertainties of a simultaneous measurement of position and velocity cannot be lower than the value $ D $, whereby velocity must be understood as the macroscopic speed of the particle, i.e.\ the quantity $ \delta x /\delta t $ (assuming that $ \delta t $ is large compared to the time between two successive molecular collisions of the particle).
One sees that, as in the quantum mechanical case, there is an actual impossibility of a simultaneous, precise measurement of position and velocity, which, however, is not, as in Quantum Mechanics, determined by the process of measurement itself and governed by a universal constant, but is rather caused by the influence of the environment on the observed system, and as a consequence is clearly not of universal nature (for example, by lowering the temperature, which determines the value $ D $, [the effect] can be made arbitrarily small). The following argument shows that formula (\ref{3:main}) holds true also in the case of the measurement of an individual particle: we consider a force-free particle which at time zero is located at the origin of the coordinates and has vanishing macroscopic velocity. If we measure the position of the particle after a short time $ t $, then the expected value $ \overline{x^{2}} $ satisfies Einstein's formula \begin{align} \overline{x^{2}}=2\,D\,t \label{4:Einstein} \end{align} whence it follows \begin{align} \frac{\mathrm{d}}{\mathrm{d} t}\frac{\overline{x^{2}}}{2}=D \nonumber \end{align} If we now exchange the order of time differentiation and expectation value, we get furthermore \begin{align} \label{4:Stratonovich} \overline{\left(\frac{\mathrm{d}}{\mathrm{d} t}\frac{x^{2}}{2}\right)} = \overline{\left(x\frac{\mathrm{d}}{\mathrm{d} t}x\right)}=D \end{align} Here $ x $ is evidently the uncertainty over the position of the particle (we assumed $ x=0 $ at time zero) caused by the \so{Brown}ian motion, and similarly $ \mathrm{d} x/\mathrm{d} t $ is the uncertainty over the velocity (which we assumed vanishing at time zero) brought about by the very same causes. The product $ \overline{\left(x\frac{\mathrm{d}}{\mathrm{d} t}x\right)}$ thus specifies the value to be expected by averaging over many measurements of the uncertainty product $ \Delta x \Delta v$, which according to equation (\ref{4:Stratonovich}) is equal to $ D $.
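\so{Einstein}'s formula (\ref{4:Einstein}) is easy to check by simulation. The following sketch (our own illustration; the values of $D$ and of the discretization are hypothetical) lets a cluster of particles diffuse from the origin and compares the empirical mean of $x^{2}$ with $2\,D\,t$:

```python
import numpy as np

# Sketch: free diffusion from the origin; each time step adds a Gaussian
# increment of variance 2*D*dt.  All numerical values are illustrative.
rng = np.random.default_rng(0)
D, dt, n_steps, n_particles = 0.5, 0.005, 200, 20_000

steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, n_particles))
x = np.cumsum(steps, axis=0)          # one trajectory x(t) per column

t = dt * np.arange(1, n_steps + 1)
msd = (x**2).mean(axis=1)             # empirical mean of x^2 at each time

# Einstein's formula predicts msd close to 2*D*t; at the final time 2*D*t = 1.
print(msd[-1])
```

The agreement improves as the number of particles grows, with relative fluctuations of order $1/\sqrt{\text{n\_particles}}$.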
The fact that we obtained here exactly the minimum value instead of the inequality (\ref{3:main}) is due to the fact that we evaluated the mean value over repeated measurements of a particle which we assumed always to have the same starting position and starting velocity at time zero. It is immediately obvious that without this assumption the uncertainty can in any case only increase, so that the product $ \Delta x \Delta v $ is actually larger than $ D $, as required by the relation (\ref{3:main}). Our relation states that an increase in the measurement accuracy of the position of a \so{Brown}ian particle reduces the accuracy of a simultaneous measurement of the velocity, and vice versa. The physical meaning of this statement can be visualized with the help of Figs.~1--4: Fig.~1 plots the position $ x(t) $ as a function of time of a particle falling under the effect of gravity in a liquid, observed at a certain magnification, and Fig.~2 represents the function $ v(t)=\dot{x}(t) $ obtained from it. Fig.~3 shows the beginning of Fig.~1, plotted at a stronger magnification, and Fig.~4 again the velocity curve obtained therefrom. \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\textwidth]{furthFig1} \end{center} \caption{Plot of the position of a \so{Brown}ian particle as a function of time (stylized).} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\textwidth]{furthFig2} \end{center} \caption{Velocity $ v $ of the particle as a function of time computed from Fig.~1 (dashed line: mean value $ \overline{v} $). 
} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\textwidth]{furthFig3} \end{center} \caption{5-times magnification of the beginning of the plot in Fig.~1 (stylized).} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\textwidth]{furthFig4} \end{center} \caption{Velocity $ v $ computed from Fig.~3 (dashed line: mean value $ \overline{v} $).} \end{figure} One can immediately see how increasing the accuracy in the determination of the position by increasing the magnification necessarily increases the uncertainty in the simultaneous determination of the velocity. Our relation thus expresses in an exact way the fact, known to everyone familiar with \so{Brown}ian motion, that the trajectory of a \so{Brown}ian particle exhibits more discontinuities with increasing magnification. Exactly as in the case of Quantum Mechanics, we can extend the uncertainty relation (\ref{3:main}) to any mechanical system in contact with a surrounding temperature bath. Then, to every degree of freedom is associated the \so{Brown}ian motion of the corresponding coordinate, which we denote again by $ x $. The \so{Fokker-Planck} equation (\ref{FPeq}) takes the place of the differential equations (\ref{3:freediff}) or (\ref{freediff}). It is plausible that also in this general case an uncertainty relation of the form \begin{align} \Delta x\, \Delta v \approx D \label{4:general} \end{align} holds true, where $ v $ is the velocity associated to the coordinate $ x $, and $ D $ denotes the coefficient of the term $ \frac{\partial^{2} u}{\partial x^{2}} $ on the right hand side of (\ref{freediff}) and expresses the characteristic constant of this \so{Brown}ian motion. The relation states that the simultaneous measurement of the coordinate $ x $ and of its associated speed $ v $ is possible only with an uncertainty of order~$ D $.
\section{}\label{sec:5} We can also extend the domain of validity of formula (\ref{4:general}) to any non-mechanical quantity, since any physical quantity, even of non-mechanical nature, is measured using mechanical measurement instruments; for example, a current [is measured] using a galvanometer, itself consisting of mechanical components. We assume that the ``deflection'' $ x $ of the mechanical instrument in use be proportional to the quantity $ J $ to be measured (for example the deflection of a galvanometer [is proportional] to the intensity of the current). When this is not the case from the start, one can always apply a compensation method in order to realize the desired condition to strict accuracy. Let $ \dot{J} $ be the speed of variation of $ J $. Then it holds true that \begin{equation} \begin{split} &J=a\,x\hspace{1.7cm}\dot{J}=a\,\dot{x}=a\,v \\ & \Delta J=a\,\Delta x\hspace{1.0cm}\Delta\dot{J}=a\,\Delta\dot{x}=a\,\Delta v \end{split} \end{equation} whence with the help of (\ref{4:general}) \begin{align} \Delta J\,\Delta\dot{J}\approx a^{2}\,D \label{5:general} \end{align} The relation (\ref{5:general}) teaches that although one can arbitrarily increase the measurement accuracy by choosing an appropriate measurement device, specifically by reducing $ a $, the precision of a simultaneous measurement of the quantity $ J $ and its speed of variation cannot be improved beyond a certain value by simply increasing the reading accuracy of the pointer, owing to the \so{Brown}ian motion of the measuring instrument. One can thus reduce $ a $ by reinforcing the magnetic field in a moving coil galvanometer with given mechanical properties, and as a consequence enhance the accuracy of the current measurement, at least in principle, arbitrarily; one cannot, however, achieve any reduction of the product $\Delta J\,\Delta\dot{J} $ by a simple increase of the reading accuracy, for example by magnifying the deflection using a microscopic pointer reading~\footnote{G. 
Ising, \textit{Ann. d. Phys.} \textbf{14,} 755, 1932.} or using a thermal relay~\footnote{N. Moll and N. Burger, \textit{Phil. Mag.} \textbf{1,} 624, 1925.} or a photoelectric relay~\footnote{L. Bergmann, \textit{Phys. ZS.} \textbf{32,} 688, 1931.}. The problem of the limits of measurement accuracy due to the \so{Brown}ian motion of instruments, in particular galvanometers, has recently been repeatedly discussed by several authors~\footnote{G. Ising, \textit{Phil. Mag.} \textbf{1,} 827, 1926; \textit{Ann. d. Phys.} \textbf{8,} 911, 1931; \textbf{14,} 755, 1932; F. Zernike, \textit{ZS. f. Phys.} \textbf{40,} 628, 1926; \textbf{79,} 516, 1932; R. Gans, \textit{Schriften d. K{\"o}nigsberger Gel. Ges.} \textbf{7,} 177, 1930; M. Czerny, \textit{Ann. d. Phys.} \textbf{12,} 993, 1932.}, and it has been thoroughly discussed with which procedures one can perform the most accurate possible measurement of a quantity of interest with an instrument of a given type. In my opinion, these discussions have always overlooked an important point. The task of the experimentalist is certainly that of recording the quantity $ J $ of interest as a function of time, i.e.\ the function $ J(t) $, with the highest accuracy possible. If one restricts [the attention] to a short interval of time, this requirement is equivalent to the task of \emph{determining the quantity $ J $ and its speed of variation $ \dot{J} $ at a given instant of time with the highest possible accuracy}. The relation (\ref{5:general}) teaches that with a given instrument this is possible only with a certain uncertainty, completely independent of any procedure to increase the reading accuracy of the pointer.
The procedure suggested by many authors to increase the measurement accuracy of $ J $ despite the \so{Brown}ian motion, namely by taking many readings and averaging them, which should then be more precise than an individual measurement, or by using an integrating measuring instrument, makes sense only when one knows in advance that the quantity of interest is exactly constant. But how can one know this without having first performed a corresponding measurement to ascertain this? If one really tries this, then one would obtain, by repeated observation or by continuous recording, a time dependence of the [pointer] deflection (because of the \so{Brown}ian motion), from which it is certainly not possible to determine whether the observed quantity remains constant or whether it varies in time within the limit of accuracy of the recorded oscillations. This circulus vitiosus is the reason why the proposed method to increase measurement accuracy is not really feasible. Actually we can even say with certainty that the requirement of constancy of $ J $ implied by the mentioned procedure is certainly not satisfied, because any macroscopically defined quantity which can be measured by a macroscopic measurement instrument undergoes oscillations. For instance, in reality there is certainly no constant electromotive force, even if the power source is protected from external interference with all possible refinement, because of the occurrence of spontaneous potential oscillations induced by the thermal motion of electrons, as has been experimentally shown by several researchers over the last years~\footnote{J. B. Johnson, \textit{Phys. Rev.} \textbf{29,} 367, 1927; \textbf{32,} 97, 1928; N. H. Williams, \textit{ibidem} \textbf{40,} 121, 1932; L. S. Ornstein, H. C. Burger, J. Taylor and W. Clarkson, \textit{Proc. Roy. Soc. London (A)} \textbf{115,} 391, 1927.}.
Thus to measure an electromotive force with the highest possible accuracy obviously means to record its time dependence as precisely as possible, or, in a short time interval, to measure simultaneously and as precisely as possible the electromotive force and its speed of variation. But, as we have shown above, this accuracy has, because of the \so{Brown}ian motion, an upper limit which is independent of the way the measurement is performed. \section{}\label{sec:6} The results reported in the previous paragraphs are, as has been repeatedly mentioned, due to the formal analogy between the fundamental differential equations of classical diffusion theory and quantum mechanics, a fact which becomes particularly evident when contrasting the equations (\ref{freediff}) and (\ref{freeS}) of \S~1. Already there, however, we pointed out essential formal differences between the two equations. We now want to try to understand the physical origins of these differences. The following considerations should at the same time contribute to clarifying certain ambiguities which have recently been highlighted by \so{Ehrenfest}, with the invitation to physicists to tackle these problems. Classical diffusion can be regarded as a current which, as we saw in \S~1, is governed by a differential equation of the form (\ref{freediff}), where $ F $ is a real differential operator and $ u $ is a real function of position and time, representing the density of the diffusing element. It follows that it must be possible, from the assignment of $ u $ at any instant of time, to compute the density distribution at any later (and of course also earlier) instant of time. In contrast to a problem of ordinary hydrodynamics, the diffusion current in the system under consideration is thus completely determined by the assignment, at an arbitrary instant of time, of the density as a function of the coordinates, without simultaneously requiring the knowledge of the current velocity as a function of the coordinates.
This is due to the fact that the current velocity defined by equation (\ref{3:cv1d}) is a function of $ u $ and the coordinates alone and does not depend on the history of the system. Thus if $u(x, y, z)$ is known, then it also specifies $ v(x, y, z) $, and therefore the evolution of the system in the following time step is completely determined in the sense of classical hydrodynamics. We also note that a time reversal operation, i.e.\ an exchange of $ t $ with $ -t $ in equation (\ref{freediff}), is not possible, because $ D $, the diffusion coefficient, owing to its molecular theoretical meaning, is positive. The diffusion process is therefore ``irreversible''. This is also evident from the fact that the current velocity is, for given $ u $, a pure function of the position, so the initial velocities are not reversible and are determined solely by the collisions with the surrounding molecules. The situation is quite different in the quantum mechanical case. Since the particle motion is not disturbed here by collisions with the molecules of a surrounding medium, the motion of the particle cluster is essentially determined by the initial positions and the initial velocities of the particles. It is therefore clear that there cannot be a differential equation for the density function $ w $ in the same way as occurs for classical diffusion. That on the contrary an equation of the form (\ref{freeS}) holds can be most easily seen from the point of view of wave mechanics. From this point of view, the particle cluster forms a ``wave packet'', i.e., a superposition of harmonic partial waves of the form \begin{align} \psi_{k}=\varphi_{k}e^{2\,\pi\, i \,E_{k}\,t/h} \nonumber \end{align} the number whereof has the cardinality of the continuum for the boundary conditions considered here.
Here $ \varphi_{k} $ stands for the ``\emph{amplitude function}'', a complex function of the position of the form \begin{align} \varphi_{k}=A_{k}\,e^{ i \,S_{k}} \nonumber \end{align} containing two real functions of the position, the amplitude $ A_{k} $ and the phase $ S_{k} $. The assignment of all the $ A_{k} $'s and $ S_{k} $'s as functions of the position fully specifies the $ \psi $ of the wave packet under consideration at a given instant of time, as well as for every later (or earlier) instant of time, in consequence of the differential equation (\ref{freeS}); this is physically obvious, since the fate of each partial wave is determined by the specification of amplitude and phase at time zero, and thus so is the fate of the wave packet created by interference from the partial waves. So it is immediately comprehensible that for the description of the state of the wave field two real scalars, or one complex function, the \so{Schr\"odinger} function, are necessary. Since the density of the cluster under consideration (now considered from the corpuscular point of view) is specified solely by $ |\psi| $ according to equation (\ref{Born}), the assignment of $ \psi $ as a function of the position entails more detailed information than the distribution of the particles' \emph{positions} at a certain instant of time. According to what was said above, as the fate of the cluster is determined by $ \psi $, it is evident that the assignment of $ \psi $ contains information also about the distribution of the \emph{velocities} at a certain instant of time. If, conversely, the initial velocities are not known, then it is not possible from the initial distribution alone to make predictions about the motion of the particle cluster. In fact there cannot be a differential equation for $ |\psi| $. Nevertheless, only the density $ w=\psi\psi^{*} $, or, interpreted as a virtual entity, the probability density of the position, is observable, and not $ \psi $ itself.
This paradoxical state of affairs can be immediately explained as a consequence of the uncertainty relations. Were $ \psi $ indeed observable, then according to our discussion the position and velocity distributions would be simultaneously assigned for our particle cluster, which is not possible! The fact that the coefficient on the left side of equation (\ref{freeS}), or equivalently the diffusion coefficient $ \epsilon $ in (\ref{sc}), must be purely imaginary can be seen as follows: if at an arbitrary instant of time the phases $ S_{k} $ of all the partial waves are reversed by $ 180^{\circ} $, then every $ \varphi_{k} $ turns into $ \varphi_{k}^{*} $ and therefore $ \psi $ into $ \psi^{*} $. At the same time, however, the reversal of all phases means turning all wave processes in the opposite direction, or the complete reversal of the motion of the wave packet. The exchange of $ \psi $ with its complex conjugate $ \psi^{*} $ means nothing else than a time reversal, and the differential equation (\ref{freeS}), which $ \psi $ satisfies, must therefore remain unchanged under the simultaneous replacement of $ \psi $ with $ \psi^{*} $ and of $ t $ with $ -t $. This is actually only possible, provided that the \so{Hamilton} operator $ H $ is time independent, if the coefficient of $ \frac{\partial \psi}{\partial t} $ is purely imaginary. The occurrence of the imaginary diffusion coefficient means, as \so{Schr\"odinger} has already pointed out~\footnote{Erwin \so{Schr{\"o}dinger},~\textit{loc. cit.}, ref~5.}, simply the reversibility of the quantum mechanical ``diffusion'' in contrast to the classical one, a discrepancy that was already emphasized in \S~2 and 3 in the [discussion of the] differences between equations (\ref{quadratic}) and (\ref{3:quadratic}). \vspace{0.5cm} Prague, January 1933.
\section{Introduction and preliminaries} Differential games and pursuit--evasion problems have been investigated by many authors; fundamental contributions are due to Isaacs \cite{isa} and Petrosyan \cite{pet}. Ibragimov and Salimi \cite{is} study a differential game of optimal approach of countably many pursuers to one evader in an infinite dimensional Hilbert space with integral constraints on the controls of the players. Ibragimov et al.\ \cite{isaa} study an evasion problem from many pursuers in a simple motion differential game with integral constraints. In \cite{sal} Salimi et al.\ investigate a differential game in which countably many dynamical objects pursue a single one. All the players perform simple motions. The duration of the game is fixed. The controls of a group of pursuers are subject to integral constraints and the controls of the other pursuers and the evader are subject to geometric constraints. The payoff of the game is the distance between the evader and the closest pursuer when the game is terminated. They construct optimal strategies for players and find the value of the game. In the present paper, we solve a pursuit--evasion differential game with geometric constraints on the controls of the players; in other words, we study the pursuit of one player by a finite number of dynamical players. In the Hilbert space $\ell_2 = \{\alpha = (\alpha_k)_{k \in \mathbf{N}} \in \mathbf{R}^{\mathbf{N}} : \sum_{k=1}^\infty\alpha_k^2<\infty\}$ with inner product $(\alpha,\beta)=\sum_{k=1}^\infty\alpha_k\beta_k$, the motions of the pursuers $P_i$ and the evader $E$ are defined by the equations: \begin{equation}\label{ab0} \begin{split} &(P_i):\dot{x}_i=u_i(t), \quad x_i(0)=x_{i0}, \quad i=1,2,\ldots,m\\ &(E):\dot{y}=v(t), \quad y(0)=y_0, \end{split} \end{equation} where $x_i,x_{i0},y,y_0\in \ell_2,$ $u_i=(u_{i1},u_{i2},\ldots,u_{i\zeta},\ldots)$ is the control parameter of the pursuer $P_i$, and $v=(v_1,v_2,\ldots,v_\zeta,\ldots)$ is that of the evader $E$.
In the following definitions $i=1,2,\ldots,m$. \begin{dfn} A function $u_i(\cdot),$ $u_i:[0,\infty)\to \ell_2,$ such that $u_{i\zeta}:[0,\infty)\to R^1,$ $\zeta=1,2,\ldots,$ are Borel measurable functions and $$ \|u_i(t)\|=\Big( \sum_{\zeta=1}^{\infty} u_{i\zeta}(t)^{2} \Big)^{\frac{1}{2}} \leq 1, \quad t\geq 0, $$ is called an \textit{admissible control of the $i$-th pursuer}. \end{dfn} \begin{dfn} A function $v(\cdot),$ $v:[0,\infty)\to \ell_2,$ such that $v_\zeta:[0,\infty)\to R^1,$ ${\zeta=1,2,\ldots,}$ are Borel measurable functions and $$ \|v(t)\|=\Big(\sum_{\zeta=1}^{\infty} v_{\zeta}(t)^{2} \Big)^{\frac{1}{2}} \leq 1, \quad t\geq 0, $$ is called an \textit{admissible control of the evader}. \end{dfn} Once the players' admissible controls $u_i(\cdot)$ and $v(\cdot)$ are chosen, the corresponding motions $x_i(\cdot)$ and $y(\cdot)$ of the players are defined as $$ x_i(t)=(x_{i1}(t),x_{i2}(t),\ldots,x_{i\zeta}(t),\ldots), \quad y(t)=(y_1(t),y_2(t),\ldots,y_\zeta(t),\ldots), $$ \begin{equation*} x_{i\zeta}(t)=x_{i\zeta0}+\int \limits_0^tu_{i\zeta}(s)\,ds, \quad y_\zeta(t)=y_{\zeta0}+\int\limits_0^tv_\zeta(s)\,ds. \end{equation*} \begin{dfn} A function $U_i(t,x_i,y,v),$ $U_i:[0,\infty)\times \ell_2\times \ell_2\times \ell_2\to \ell_2,$ such that the system \begin{equation*} \begin{split} &\dot{x}_i=U_i(t,x_i,y,v), \quad x_i(0)=x_{i0},\\ &\dot{y}=v, \quad y(0)=y_0, \end{split} \end{equation*} has a unique solution $(x_i(\cdot),y(\cdot))$ for an arbitrary admissible control $v=v(t),$ $0\le t<\infty,$ of the evader $E,$ is called a \textit{strategy of the pursuer~$P_i$}. A strategy $U_i$ is said to be \textit{admissible} if each control formed by this strategy is admissible.
\end{dfn} \begin{dfn} A function $V(t,x_1,\ldots,x_m,y),$ $V:[0,\infty)\times \underbrace{\ell_2\times\ldots\times \ell_2}_{m+1}\to \ell_2,$ such that the system of equations \begin{equation*} \begin{split} &\dot{x}_i=u_i, \quad x_i(0)=x_{i0},\\ &\dot{y}=V(t,x_1,\ldots,x_m,y), \quad y(0)=y_0, \end{split} \end{equation*} has a unique solution $(x_1(\cdot),\ldots,x_m(\cdot),y(\cdot))$ for arbitrary admissible controls $u_i=u_i(t),$ $0\le t<\infty,$ of the pursuers $P_i,$ is called a \textit{strategy} of the evader $E.$ If each control formed by a strategy $V$ is admissible, then the strategy $V$ itself is said to be \textit{admissible}. \end{dfn} \section{Pursuit problem and its solution} \begin{dfn} If $x_i(\tau)=y(\tau)$ for some $i$ and some $\tau>0$, then pursuit is considered complete. \end{dfn} \begin{thm} Suppose that the initial positions of the pursuers and the evader in the game (\ref{ab0}) are distinct and that for any non-zero vector $p\in \ell_2$ there is $k\in \{1,2,\ldots,m\}$ such that $(y_0-x_{k0},p)<0$. Then pursuit is complete. \end{thm} \begin{proof} We define the pursuers' strategy as follows: \begin{equation}\label{a} u_i(t)=v(t)-\left(v(t),e_i\right)e_i+e_i\left(1-\|v(t)\|^2+\left(v(t), e_i\right)^2\right)^{1/2}, \end{equation} where $e_i=\dfrac{y_0-x_{i0}}{\|y_0-x_{i0}\|}$, $i=1,2,\ldots,m$. The above strategy is admissible. Indeed, since $\big(v(t)-(v(t),e_i)e_i,\,e_i\big)=0$, \begin{equation*} \begin{split} \|u_i(t)\|^2&=\|v(t)-(v(t),~e_i)e_i\|^2+2\big(v(t)-(v(t),~e_i)e_i,~e_i\big(1-\|v(t)\|^2+(v(t),~e_i)^2\big)^{1/2}\big)\\ &+1-\|v(t)\|^2+(v(t),~e_i)^2\\ &=\|v(t)\|^2-2(v(t),~e_i)^2+(v(t),~e_i)^2+(v(t),~e_i)^2+1-\|v(t)\|^2= 1. \end{split} \end{equation*} By (\ref{a}), we have $y(t)-x_i(t)=e_i \Omega_i(t)$, where \begin{equation*} \Omega_i(t)=\|y_0-x_{i0}\|-\int_0^t\left(\left(1-\|v(s)\|^2+\left(v(s),e_i\right)^2\right)^{1/2}-\left(v(s),e_i\right)\right) \,ds. 
\end{equation*} We are going to show that $\Omega_i(\tau)=0$ for some $i=1,2,\ldots,m$ and $\tau>0$.\\ It is clear that $\Omega_i(0)=\|y_0-x_{i0}\|>0$ for $i=1,2,\ldots,m$. Let $\Omega(t)=\sum_{i=1}^{m}\Omega_i(t)$; thus \begin{equation*} \Omega(t)=\sum_{i=1}^{m}\|y_0-x_{i0}\|-\int_{0}^{t}\sum_{i=1}^{m}\left(\left(1-\|v(s)\|^2+(v(s),e_i)^2\right)^{1/2}-(v(s),e_i)\right)\,ds. \end{equation*} Obviously \begin{equation*} \Lambda(v)=\sum_{i=1}^{m}\left(\left(1-\|v\|^2+(v,e_i)^2\right)^{1/2}-(v,e_i)\right)\geq0. \end{equation*} Define \begin{equation*} \Theta:=\inf_{\|v\|\leq1}\Lambda(v), \end{equation*} so that either $\Theta>0$ or $\Theta=0$. We show that $\Theta\neq0$. Assume by contradiction that $\Theta=0$. Then there exists a minimizing sequence $\{v_n\}_n\subset \ell_2$ with $\|v_n\|\leq 1$ for the value $\Theta=0$, i.e., \begin{align}\label{elso-osszef} \lim_{n\to \infty}\Lambda(v_n)=0. \end{align} On the one hand, since the unit ball $B=\{v\in \ell_2:\|v\|\leq 1\}$ is weakly compact (but not strongly compact, due to the fact that $\ell_2$ is an infinite dimensional Hilbert space), we may extract a subsequence (denoted in the same way) from $\{v_n\}$ which converges weakly to an element $v_0\in B$, i.e. $$v_n \xrightarrow[]{*} v_0\ {\rm as}\ n \xrightarrow[]{} \infty.$$ In particular, this fact implies that \begin{equation}\label{masodik-osszef} \lim_{n\to \infty}(v_n,w)= (v_0,w),\ \forall w\in \ell_2. \end{equation} On the other hand, since the real-valued sequence $\{\|v_n\|\}_n$ is bounded, up to some subsequence, we may assume that it converges to an element $c_0\in [0,1]$. We claim that $c_0=1$.
Assume that $c_0<1.$ Then, we have by (\ref{elso-osszef}) and (\ref{masodik-osszef}) that \begin{eqnarray*} 0 &=& \lim_{n\to \infty}\Lambda(v_n)\\&=&\lim_{n\to \infty}\sum_{i=1}^m\left((1-\|v_n\|^2+(v_n,e_i)^2)^\frac{1}{2}-(v_n,e_i)\right) \\ &=& \sum_{i=1}^m\left((1-c_0^2+(v_0,e_i)^2)^\frac{1}{2}-(v_0,e_i)\right)\\&>&0, \end{eqnarray*} a contradiction. Therefore, $c_0=1,$ i.e., $\lim_{n\to \infty}\|v_n\|=1.$ Now, we come back again to (\ref{elso-osszef}), obtaining that \begin{eqnarray*} 0 &=& \lim_{n\to \infty}\Lambda(v_n)\\&=&\lim_{n\to \infty}\sum_{i=1}^m\left((1-\|v_n\|^2+(v_n,e_i)^2)^\frac{1}{2}-(v_n,e_i)\right) \\ &=& \sum_{i=1}^m\left(((v_0,e_i)^2)^\frac{1}{2}-(v_0,e_i)\right)\\&=&\sum_{i=1}^m\left(|(v_0,e_i)|-(v_0,e_i)\right). \end{eqnarray*} Since every term under the sum is non-negative, we necessarily have $|(v_0,e_i)|=(v_0,e_i)$ for all $i\in \{1,...,m\}.$ Therefore, $$(v_0,e_i)=|(v_0,e_i)|\geq 0,\ \forall i\in \{1,...,m\},$$ which is inconsistent with the hypothesis of the theorem; therefore $\Theta>0$. So \begin{equation*} \Omega(t)\leq \Omega(0)-\int_{0}^{t}\Theta \,ds=\Omega(0)-\Theta t, \end{equation*} and hence, by time $\eta=\frac{\Omega(0)}{\Theta}$, we have $\Omega(\eta)=\sum_{i=1}^{m}\Omega_i(\eta)\leq0$, so that $\Omega_i(\tau)=0$ for some $i=1,2,\ldots,m$ and $\tau \in(0,\eta]$, and pursuit is complete. \end{proof} \section{Evasion problem and its solution} \begin{dfn} If there exists a strategy of the evader such that $x_i(t)\neq y(t)$ for all $i$ and all $t>0$, then evasion is said to be possible. \end{dfn} \begin{thm} Suppose that the initial positions of the pursuers and the evader in the game (\ref{ab0}) are distinct and that there exists a non-zero vector $p\in \ell_2$ such that $\|p\|=1$ and $(y_0-x_{i0},p)\geq0$ for all $i\in \{1,2,\ldots,m\}$. Then evasion is possible. \end{thm} \begin{proof} We define the evader's strategy as follows: \begin{equation}\label{ae} v(t)=p, \quad t\geq 0. \end{equation} Obviously the above strategy is admissible.
We have \begin{equation*} \left(y(t)-x_i(t),p\right)=\left(y_0-x_{i0},p\right)+\int_{0}^{t}\left(v(s),p\right)\,ds-\int_{0}^{t}\left(u_i(s),p\right)\,ds. \end{equation*} By taking the strategy (\ref{ae}) we obtain \begin{equation*} \left(y(t)-x_i(t),p\right)=\left(y_0-x_{i0},p\right)+\int_{0}^{t}\left[1-\left(u_i(s),p\right)\right]\,ds. \end{equation*} Assume, to the contrary, that evasion is not possible; then there are $\tau >0$ and $k\in \{1,2,\ldots,m\}$ such that $y(\tau)=x_k(\tau)$. Then \begin{equation*} \left(y(\tau)-x_k(\tau),p\right)=(y_0-x_{k0},p)+\int_{0}^{\tau}\left[1-\left(u_k(s),p\right)\right]\,ds=0. \end{equation*} By the assumption of the theorem, $(y_0-x_{k0},p)\geq0$. On the other hand, $\|u_i(t)\|\leq 1$, and then \begin{equation*} |(u_i(t),p)|\leq \|u_i(t)\|\cdot\|p\|\leq 1, \quad t\in\left[0,\tau\right], \end{equation*} so $(u_i(t),p)\leq 1$. Thus $(y_0-x_{k0},p)=0$, and then \begin{equation*} \int_{0}^{\tau}\left[1-\left(u_k(s),p\right)\right]\,ds=0. \end{equation*} From the above equality we obtain $1-\left(u_k(s),p\right)=0$ for almost every $s\in\left[0,\tau\right]$. Hence $\left(u_k(s),p\right)=1$, and therefore $u_k(s)=p$, for almost every $s\in\left[0,\tau\right]$. Therefore \begin{equation*} y(\tau)-x_k(\tau)=y_0+\int_{0}^{\tau}p \,ds-x_{k0}-\int_{0}^{\tau}p \,ds=y_0-x_{k0}=0, \end{equation*} which contradicts the assumption that the initial positions of the pursuers and the evader are distinct. So $x_i(t)\neq y(t)$ for all $i\in \{1,2,\ldots,m\}$ and $t>0$; in other words, evasion of the evader from all the pursuers is possible. \end{proof} \section{Conclusion} We considered a pursuit--evasion problem with a finite number of pursuers and one evader in the Hilbert space $\ell_2$. The controls of the pursuers and the evader are subject to geometric constraints. We constructed admissible strategies for the pursuers which guarantee capture of the evader, as well as an admissible strategy for the evader which guarantees evasion from all the pursuers.
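As a finite-dimensional illustration of the pursuit strategy (\ref{a}) (our own sketch; the theorems above are stated in $\ell_2$, and the configuration below in $\mathbb{R}^2$ is a hypothetical example chosen so that the hypothesis of the pursuit theorem holds), an Euler discretization exhibits capture:

```python
import numpy as np

# Toy configuration in R^2: four pursuers surround the evader at the origin,
# so for every nonzero p some (y0 - x_k0, p) < 0, as the pursuit theorem requires.
x0 = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])   # pursuers
y0 = np.array([0.0, 0.0])                                            # evader
e = (y0 - x0) / np.linalg.norm(y0 - x0, axis=1, keepdims=True)       # unit vectors e_i

def pursuer_controls(v):
    """Strategy (a): u_i = v - (v, e_i) e_i + e_i (1 - |v|^2 + (v, e_i)^2)^{1/2}."""
    ve = e @ v                                    # inner products (v, e_i)
    root = np.sqrt(1.0 - v @ v + ve**2)
    return v - ve[:, None] * e + root[:, None] * e

dt, t = 1e-4, 0.0
x, y = x0.copy(), y0.copy()
v = np.array([1.0, 0.0])                          # evader flees at full speed
while np.linalg.norm(y - x, axis=1).min() > 1e-3 and t < 10.0:
    x += dt * pursuer_controls(v)
    y += dt * v
    t += dt

print(round(t, 2))                                # capture occurs near t = 0.5
```

Each control produced by the strategy has unit norm, as in the admissibility computation of the proof, and the pursuer ahead of the evader closes the gap at rate $2$, so capture happens near $t=0.5$ in this configuration.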
\\\\ {\bf Acknowledgments.} The present research was supported by the MEDAlics, Research Center at Universit\`{a} per Stranieri Dante Alighieri, Reggio Calabria, Italy.
\section{Introduction}\label{s:intro} The aim of this paper is to develop a notion of parity for the orthogonal arrays that define sets of mutually orthogonal Latin squares (MOLS). The important notion of parity for permutations is widely known. A \emph{Latin square} of order $n$ is an $n \times n$ square with entries in an $n$-set $\Lambda$, called the alphabet, having the property that every element of $\Lambda$ occurs exactly once in each row and each column of the square. Latin squares are $2$-dimensional analogues of permutations and they too have a notion of parity. This parity plays a pivotal role in a famous conjecture of Alon and Tarsi (see e.g.~\cite{AL15,SW12} and the references therein) and has also proved crucial in a variety of other quite distinct investigations. In \cite{DGGL10}, parity was found to explain observed limitations on which Latin squares could be embedded together in topological surfaces. In \cite{Wan04} and \cite{KMOW14}, parity explains large components that arise in graphs made by local switchings in Latin squares or in 1-factorisations of the complete graph, respectively. Parity considerations can also assist in diagnosing symmetries of Latin squares \cite{Kot12}. It is clear that parity of single Latin squares is a useful concept. It is therefore natural to try to extend this concept to sets of MOLS. A set of MOLS is a set of Latin squares such that when any two of the squares are superimposed, every ordered pair of symbols occurs exactly once. For any list $M=[M_1,\dots,M_{k-2}]$ of MOLS on an alphabet $\Lambda$ we define an $n^2\times k$ matrix, denoted $\mathscr{A}(M)$, by taking one row $\big[r,c,M_1[r,c],\dots,M_{k-2}[r,c]\big]$ for each pair $(r,c)\in\Lambda^2$. (For the sake of definiteness, we insist that these rows are ordered lexicographically. Also, if $M$ is given as a set rather than a list, then we impose lexicographic order on $M$ in order to create $\mathscr{A}(M)$.) 
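As a concrete sketch of this correspondence (our own illustration, using the standard pair of cyclic MOLS of order $3$ rather than anything from the text), one can form $\mathscr{A}(M)$ and confirm that every pair of its columns contains every ordered pair of symbols exactly once:

```python
from itertools import combinations, product

# Two orthogonal Latin squares of order 3 over Z_3 (a standard example):
# M1[r][c] = r + c and M2[r][c] = r + 2c (mod 3).
n = 3
M1 = [[(r + c) % n for c in range(n)] for r in range(n)]
M2 = [[(r + 2 * c) % n for c in range(n)] for r in range(n)]

# A(M): one row [r, c, M1[r][c], M2[r][c]] per cell, in lexicographic order of (r, c).
A = [[r, c, M1[r][c], M2[r][c]] for r, c in product(range(n), repeat=2)]

def is_orthogonal_array(A, n):
    """Check the defining property: each pair of columns carries all n^2 ordered pairs."""
    k = len(A[0])
    return all(
        len({(row[i], row[j]) for row in A}) == n * n
        for i, j in combinations(range(k), 2)
    )

print(is_orthogonal_array(A, n))   # True: A is an OA(4, 3)
```

The same check, applied to any $n^2\times k$ array, decides whether it is an $\mathrm{OA}(k,n)$.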
Now, $\mathscr{A}(M)$ is an \emph{orthogonal array} $\mathrm{OA}(k,n)$ (of strength $2$ with $n$ levels and index $1$) because it has the defining property that every pair of columns contains every ordered pair of elements of $\Lambda$ exactly once. Conversely, if $A$ is an $\mathrm{OA}(k,n)$ we define $\mathscr{M}(A)$ to be the set of $k-2$ MOLS formed by taking the entry in row $r$, column $c$ of the $i$-th Latin square to be the entry in column $i+2$ of the row of $A$ that begins $[r,c,\dots]$. In this sense, an $\mathrm{OA}(k,n)$ is equivalent to a set of $k-2$ MOLS of order $n$ (see e.g.~\cite[III.3]{handbook} for more details and background). Throughout this paper, we assume that $k$ and $n$ are integers and that $n\geqslant 2$ and $3 \leqslant k \leqslant n+1$. Let $\mathcal{S}_\Lambda$ denote the permutations of $\Lambda$. Two orthogonal arrays on alphabet $\Lambda$ are {\em isotopic} if one can be obtained from the other by some sequence of operations of the following type: choose a $\gamma\in\mathcal{S}_\Lambda$ and a column $c$ and apply $\gamma$ to every entry in column $c$. We say two orthogonal arrays are {\em conjugate} if one can be obtained from the other by permuting the columns. We say two Latin squares are isotopic (respectively, conjugate) if their orthogonal arrays are isotopic (respectively, conjugate). We say two orthogonal arrays are isomorphic if, up to possible reordering of the rows of the arrays, one is isotopic to a conjugate of the other. A finite projective plane of order $n$ (see e.g.~\cite{handbook} for the definition) can be used to define an $\mathrm{OA}(n+1,n)$ and vice versa. However, there are some subtleties to this relationship. Let $\mathcal{L}$ be a line of a finite projective plane of order $n$. We can make an $\mathrm{OA}(n+1,n)$, using $\mathcal{L}$ as the {\em ``line at infinity''}, as follows. First, we number the points on $\mathcal{L}$ as $p_1,\ldots,p_{n+1}$. 
Next, for $1\leqslant i\leqslant n+1$, we number the lines (other than $\mathcal{L}$) through $p_i$, calling them $\ell_{i1},\ldots,\ell_{in}$. Finally, for each point $q$ not on $\mathcal{L}$ we add a row to $A$ which has entry $i$ in column $j$ if the line through $q$ and $p_j$ is $\ell_{ji}$. The choices for the numbering of lines and points do not change the isomorphism class of the resulting orthogonal array. However, different choices for $\mathcal{L}$ may produce non-isomorphic orthogonal arrays. Hence, each projective plane of order $n$ can potentially produce representatives from up to $n^2+n+1$ isomorphism classes of $\mathrm{OA}(n+1,n)$. The reverse relationship is simpler. Every $\mathrm{OA}(n+1,n)$ can be derived in the above way from a unique projective plane. Given a Latin square $L=(l_{ij})$ of order $n$, we can identify $3$ parities $\pi_r(L)$, $\pi_c(L)$ and $\pi_s(L)$ as follows. We assume that the symbols $\Lambda$ index the rows and columns of $L$ (in this paper $\Lambda$ will either be $\{1,\dots,n\}$ or $\mathbb{Z}_n$). Let $\pi:\mathcal{S}_\Lambda\rightarrow\mathbb{Z}_2$ denote the usual {\em parity} homomorphism with kernel the alternating group. For all $i\in\Lambda$ we can define a permutation of $\Lambda$ by $j\mapsto l_{ij}$. Applying $\pi$ to these permutations and taking the sum, mod 2, we obtain the {\em row-parity} $\pi_r(L)$. The {\em column-parity} $\pi_c(L)$ is defined similarly, using each permutation $i\mapsto l_{ij}$ formed by fixing some $j\in\Lambda$. The {\em symbol-parity} $\pi_s(L)$ is the sum of the parities of the permutations formed by fixing an $\ell\in\Lambda$ and mapping $i\mapsto j$ whenever $l_{ij}=\ell$. These three parities are related by \begin{equation}\label{e:fundLSpar} \pi_r + \pi_c + \pi_s \= {n \choose 2}\quad\mod2. \end{equation} This relation has been rediscovered numerous times. 
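The three parities and the relation above are straightforward to verify computationally. The following sketch is our own illustration (entries $0,\dots,n-1$, squares as nested lists, function names ours); it checks the relation on the Cayley tables of $\mathbb{Z}_n$, for which it must hold by the result just stated.

```python
from math import comb

def parity(p):
    """Parity in Z_2 of a permutation p, given as the list of images of
    0..n-1, computed by counting inversions."""
    n = len(p)
    return sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n)) % 2

def latin_parities(L):
    """Row-, column- and symbol-parity of a Latin square with entries 0..n-1."""
    n = len(L)
    pi_r = sum(parity(row) for row in L) % 2
    pi_c = sum(parity([L[i][j] for i in range(n)]) for j in range(n)) % 2
    pi_s = 0
    for s in range(n):
        # the permutation i -> j whenever L[i][j] = s
        q = [row.index(s) for row in L]
        pi_s = (pi_s + parity(q)) % 2
    return pi_r, pi_c, pi_s

for n in range(2, 7):
    L = [[(i + j) % n for j in range(n)] for i in range(n)]  # Cayley table of Z_n
    pi_r, pi_c, pi_s = latin_parities(L)
    assert (pi_r + pi_c + pi_s) % 2 == comb(n, 2) % 2
```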
Different proofs have been published in \cite{DGGL10, Gly10, Jan95, Wan04, Zappa96} and we are also aware of other researchers finding their own proof but not publishing it. We will prove a generalisation of \eref{e:fundLSpar} in \lref{l:LStriangleparity}, providing a new proof of the original result in the process. By \eref{e:fundLSpar}, a Latin square has one of the following {\em parity types}: \begin{align*} \pi_r \pi_c \pi_s &\in \{000, 011, 101, 110 \} && \textrm{ if } n \equiv 0,1 \mod{4},\\ \pi_r \pi_c \pi_s &\in \{111, 100, 010, 001 \} && \textrm{ if } n \equiv 2,3 \mod{4}. \end{align*} It is known \cite{CW16} that the proportion of Latin squares which have each of the four parity types that are possible for order $n$ is $\frac14+o(1)$ as $n\rightarrow\infty$ (for a related result, see \cite{Alp17}). An \textit{equiparity} Latin square has $\pi_r \pi_c \pi_s \in \{000, 111\}$. Loosely speaking, ``nice'' Latin squares have a greater than $1/4$ chance of being equiparity. For example, all Cayley tables of finite groups are necessarily equiparity, because they are isotopic to all of their conjugates \cite[Thm~4.2.2]{KD15}. In \sref{s:definitions}, we will define two notions of parity associated with an orthogonal array. The first one we refer to as $\tau$-parity; it is a direct generalisation of the definition of parities $\pi_r,\pi_c,\pi_s$ for a Latin square. The second one we call $\sigma$-parity. It was introduced by Glynn and Byatt~\cite{GB12} (see also Glynn~\cite{Gly10}) for $n$ even, but we define it for all $n$. We establish the precise relationship between these two seemingly different definitions of parity of an $\mathrm{OA}$. For the remainder of the paper we look into properties of $\tau$-parity and $\sigma$-parity and insights that each can offer. In \sref{s:graphs} we introduce graphs that are useful tools for studying both notions of parity. 
In \sref{s:numpar} we consider the question of how many different $\tau$-parities an $\mathrm{OA}(k,n)$ can have. We view this question in information theory terms, by measuring the number of independent bits of information that there are in the $\tau$-parity. For example, the $\tau$-parity of an $\mathrm{OA}(3,n)$ has an information content of 2 bits given that any two of $\pi_r,\pi_c,\pi_s$ determine the third, by \eref{e:fundLSpar}. We prove a bound on the information content of the $\tau$-parity of an $\mathrm{OA}(k,n)$, and prove that this bound is achieved for all large $n$ when $k\leqslant 5$. In \sref{s:PP} we consider the important special case of the orthogonal arrays derived from projective planes as described above. The theory developed in earlier sections is applied to this case. Stronger conclusions can be drawn than in the general case. For example, we show that the information content in the $\tau$-parity is lower than the general bound from \sref{s:numpar}. Let $A$ be an $\mathrm{OA}(k,n)$. The {\it ensemble} of $A$ is the set of Latin squares $\mathscr{M}(B)$, where $B$ ranges across all ${k\choose 3}$ choices of $\mathrm{OA}(3,n)$ formed by $3$ columns of $A$ (the columns should occur in the same order in $B$ as they do in $A$). In \sref{s:ensemble} we investigate the parities of the Latin squares in the ensemble of $A$. We find significant restrictions on the number of equiparity Latin squares, particularly in the case when $k=n+1$. Among other things, these allow us to deduce that for $n\= 2\mod4$ there is no $\mathrm{OA}(n+1,n)$ for which all Latin squares in the ensemble are isotopic to each other. This is in contrast with the $n\not\= 2\mod4$ case, where the Desarguesian projective planes provide examples of $\mathrm{OA}(n+1,n)$ for which all Latin squares in the ensemble are isotopic to the Cayley table of the elementary abelian group of order $n$. 
\section{Two notions of parity}\label{s:definitions} Throughout the paper, we use the discrete interval notation $[a,b]=\{a,a+1,\dots, b\}$ for $a,b \in \mathbb{Z}$. In this section we define two notions of parity for orthogonal arrays and demonstrate the relationship between them. The two notions of parity are called $\tau$-parity and $\sigma$-parity. Each $\tau^c_{ij}$ of the $\tau$-parity and each $\pi(\sigma_{ij})$ of the $\sigma$-parity is an element of $\mathbb{Z}_2$ and hence, throughout this paper, every equation involving $\tau$-parities or $\sigma$-parities is assumed to be calculated in $\mathbb{Z}_2$. Also, because it makes the definition of $\sigma$-parity more natural, we will choose to always index the rows of an $\mathrm{OA}$ by $\Lambda^2$, where $\Lambda$ is the alphabet. \subsection{$\tau$-parity}\label{ss:tau-parities} \begin{definition}[$\tau$-parity] Let $A=(a_{r\ell})$ be an $\mathrm{OA}(k,n)$ on an alphabet $\Lambda$, where $r \in \Lambda^2$ and $\ell \in [1,k]$. For each ordered triple $(c,i,j)$ of distinct numbers in $[1,k]$ and for each $s \in \Lambda$, define $\rho_s(c,i,j)$ to be the permutation of $\Lambda$ mapping $a_{ri} \mapsto a_{rj}$ whenever $a_{rc}=s$. Define $\tau^c_{ij}=\tau^c_{ij}(A)$ as the sum of the parities of these $n$ permutations; that is, $\tau^c_{ij}=\sum_{s \in \Lambda}^{}\pi(\rho_s(c,i,j))$. We refer to the vector of parities $\tau^c_{ij}$ indexed by the $k(k-1)(k-2)$ triples $(c,i,j)$ as the \textit{$\tau$-parity of $A$}. \end{definition} Note that $\{a_{ri}:a_{rc}=s \} = \Lambda = \{a_{rj}:a_{rc}=s\}$ by the definition of an orthogonal array. Hence, $\{\rho_s(c,i,j) : s \in \Lambda\}$ is indeed a set of $n$ well-defined permutations. The $\tau$-parity of an orthogonal array $A$ naturally extends the notion of the parity of a single Latin square to a set of MOLS. 
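As a concrete illustration of the definition (ours, not part of the formal development), the sketch below computes $\tau^c_{ij}$ for an orthogonal array given as a list of rows with entries $0,\dots,n-1$; note that columns are 0-indexed here, unlike the 1-indexed columns of the text.

```python
def parity(p):
    """Parity in Z_2 of a permutation, via inversion count."""
    n = len(p)
    return sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n)) % 2

def tau(A, n, c, i, j):
    """tau^c_{ij}: the sum over s of the parities of the permutations
    rho_s mapping a_{ri} to a_{rj} over the rows r with a_{rc} = s."""
    total = 0
    for s in range(n):
        rho = [None] * n
        for row in A:
            if row[c] == s:
                rho[row[i]] = row[j]
        total += parity(rho)
    return total % 2

# OA(3,4) of the Cayley table of Z_4; columns are (row, column, symbol).
n = 4
A = [[r, c, (r + c) % n] for r in range(n) for c in range(n)]
# With 0-indexed columns, tau^0_{12} is the row-parity pi_r of the square,
# tau^1_{02} is pi_c, and tau^2_{01} is pi_s.
pi_r, pi_c, pi_s = tau(A, n, 0, 1, 2), tau(A, n, 1, 0, 2), tau(A, n, 2, 0, 1)
assert (pi_r + pi_c + pi_s) % 2 == (n * (n - 1) // 2) % 2
```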
If $M_1,\dots,M_{k-2}$ are the MOLS in $\mathscr{M}(A)$ then for the Latin square $M_{i-2}$ we have $\pi_r =\tau^1_{2i}$, $\pi_c=\tau^2_{1i}$, and $\pi_s = \tau^i_{12}$. For distinct $c,i,j, \ell \in [1,k]$ we have \begin{align} \tau_{ij}^c &= \tau_{ji}^c, \label{e:tauidxcommutativity} \\ \tau_{ij}^c &= \tau_{i\ell}^c + \tau_{\ell j}^c. \label{e:Fixedcol} \end{align} The first equation follows from the fact that a permutation and its inverse have the same parity. The second equation follows from composition of permutations. Next we consider $\tau$-parities of isomorphic orthogonal arrays. Permuting the rows of an orthogonal array $A$ has no effect on $\mathscr{M}(A)$, and permuting the symbols within a column results in an isotopic set of MOLS. Permuting the columns yields a conjugate set of MOLS. We investigate the effect of these basic operations on the $\tau$-parity. \begin{lemma}\label{l:tauparconj} Let $A_1$ and $A_2$ be two orthogonal arrays, $\mathrm{OA}(k,n)$, on alphabet $\Lambda$. Index the columns of $A_1$ and $A_2$ by $[1,k]$. Let $c,i,j \in [1,k]$ be distinct integers. \begin{itemize} \item[(i)] If $A_2$ is obtained by permuting the rows of $A_1$, then $\tau^c_{ij}(A_2)=\tau^c_{ij}(A_1)$. \item[(ii)] If $A_2$ is obtained by permuting the columns of $A_1$ by $\gamma$, then $\tau^{\gamma(c)}_{\gamma(i)\gamma(j)}(A_2)=\tau^c_{ij}(A_1)$. \item[(iii)] Let $\gamma\in\mathcal{S}_\Lambda$ and $c\in[1,k]$. If $A_2$ is obtained by applying $\gamma$ to every entry in column $c$ of $A_1$, then \begin{align*} \tau^{c}_{ij}(A_2) &= \tau^{c}_{ij}(A_1), \\ \tau^i_{jc}(A_2) &= \tau^i_{jc}(A_1) + n \cdot \pi(\gamma) ,\\ \tau^{d}_{ij}(A_2) &= \tau^{d}_{ij}(A_1) \textrm{ if } d\notin\{c,i,j\}. \end{align*} \end{itemize} \end{lemma} \begin{proof} Statements (i) and (ii) follow immediately from the definition of $\tau$-parity. 
To prove statement (iii), first note that $\tau^{c}_{ij}(A_1)$ and $\tau^{c}_{ij}(A_2)$ are each the sum of the parities of the same set of $n$ permutations of $\Lambda$, hence $\tau^{c}_{ij}(A_2)=\tau^{c}_{ij}(A_1)$. It is also clear that $\tau^{d}_{ij}(A_2) = \tau^{d}_{ij}(A_1)$ for $d\notin\{c,i,j\}$. Finally, in $\mathbb{Z}_2$ we have \[ \tau^i_{jc}(A_2) = \sum_{s \in \Lambda} \pi(\gamma \rho_s(i,j,c)) = \sum_{s \in \Lambda} \pi(\rho_s(i,j,c)) + \sum_{s \in \Lambda}\pi(\gamma) = \tau^i_{jc}(A_1) + n\cdot\pi(\gamma). \qedhere \] \end{proof} Suppose $M=[M_1,M_2,\dots,M_{k}]$ are MOLS and that $A=\mathscr{A}(M)$ is the corresponding $\mathrm{OA}(k+2,n)$. Let $A'$ be the $\mathrm{OA}(k+2,n)$ obtained by permuting the columns of $A$ by some permutation $\gamma$. Suppose $M'=\mathscr{M}(A')=[M'_1,M'_2,\dots,M'_{k}]$. Let $\pi_{r,i},\pi_{c,i},\pi_{s,i}$ denote, respectively, the row, column and symbol parity of $M_i$ for $i\in[1,k]$, and let $\pi'_{r,i},\pi'_{c,i},\pi'_{s,i}$ be the corresponding parities of $M'_i$. It will follow from our work in \sref{s:numpar} that there is not enough information in $\{\pi_{r,i},\pi_{c,i},\pi_{s,i}:i\in[1,k]\}$ to determine $\{\pi'_{r,i},\pi'_{c,i},\pi'_{s,i}:i\in[1,k]\}$ for all $\gamma$. However, there are some things we can say: \begin{itemize} \item[(i)] If the first two columns are fixed points of $\gamma$ then $M'$ is simply a reordering of the Latin squares in $M$, and the parities will be permuted accordingly. \item[(ii)] If $\gamma$ is the transposition $(12)$ then $M'_i$ is the transpose of $M_i$, so $\pi'_{r,i}=\pi_{c,i}$, $\pi'_{c,i}=\pi_{r,i}$ and $\pi'_{s,i}=\pi_{s,i}$ for $i\in[1,k]$. \item[(iii)] If $\gamma$ is the transposition $(23)$ then $M'_1$ is a conjugate of $M_1$ for which $\pi'_{r,1}=\pi_{r,1}$, $\pi'_{c,1}=\pi_{s,1}$ and $\pi'_{s,1}=\pi_{c,1}$. Moreover, $\pi'_{r,i}=\pi_{r,i}+\pi_{r,1}$ for $i\in[2,k]$, because $\tau^1_{2(i+2)}=\tau^1_{23}+\tau^1_{3(i+2)}$ by \eref{e:Fixedcol}. 
\end{itemize} Note that any permutation of the columns can be achieved by composing the permutations in (i), (ii) and (iii) above. \subsection{$\sigma$-parity}\label{ss:sigma-parities} In this subsection, we consider an alternative definition of parity of an orthogonal array. This definition was introduced by Glynn and Byatt~\cite{GB12} (see also Glynn~\cite{Gly10}) for orthogonal arrays with even alphabet size, although it is also useful for odd alphabet sizes. \begin{definition}[$\sigma$-parity] Let $A=(a_{r\ell})$ be an $\mathrm{OA}(k,n)$ on alphabet $\Lambda$, where $r\in \Lambda^2$ and $\ell \in [1,k]$. Let $i,j \in [1,k]$ be two distinct integers. Then $\sigma_{ij}:\Lambda^2 \rightarrow \Lambda^2$ is the permutation defined by $\sigma_{ij}(r) = (a_{ri},a_{rj})$. We sometimes write $\sigma_{ij}^A$ for $\sigma_{ij}$ to stress the role that $A$ plays. We refer to the vector of parities $\pi(\sigma_{ij})$ indexed by the $k(k-1)$ pairs $(i,j)$ as the \textit{$\sigma$-parity of~$A$}. \end{definition} In \sref{ss:equivalencetausigma}, we establish an equivalence between $\tau$-parity and $\sigma$-parity. Here, we consider some basic properties of $\sigma$-parity. First, we observe the effect of interchanging the indices. \begin{lemma}\label{l:sigmaijvssigmaji} Given an $\mathrm{OA}(k,n)$ and distinct integers $i,j \in [1,k]$, \begin{equation} \label{e:sigmacommutativity} \pi(\sigma_{ji}) = \pi(\sigma_{ij}) + {n \choose 2}. \end{equation} \end{lemma} \begin{proof} Let $A=(a_{r\ell})$ be an $\mathrm{OA}(k,n)$ on alphabet $\Lambda$, where $r\in \Lambda^2$ and $\ell \in [1,k]$. Since $A$ is an $\mathrm{OA}$, there is a bijection $R : \Lambda^2 \rightarrow\Lambda^2$ for which $R(u,v) = r$ where $u=a_{ri}$ and $v=a_{rj}$. Then $\sigma_{ji} = \sigma_{ij} \prod_{\{u,v\}} \big(R(u,v), R(v,u)\big)$, where $\big(R(u,v), R(v,u)\big)$ is a transposition and the product is over all unordered pairs of distinct elements of $\Lambda$. The claim follows. 
\end{proof} Note that \eref{e:sigmacommutativity} relies on an essential property of orthogonal arrays; namely, that every ordered pair of distinct symbols occurs exactly once in every pair of columns of an $\mathrm{OA}$. We refer to this equation to derive further properties of both $\sigma$- and $\tau$-parities. Next, we consider the analogue of \lref{l:tauparconj} for $\sigma$-parity; that is, we consider the $\sigma$-parities of isomorphic orthogonal arrays. \begin{lemma}\label{l:sigmaparconj} Let $A_1$ and $A_2$ be two orthogonal arrays, $\mathrm{OA}(k,n)$, on an alphabet $\Lambda$. Index the columns of $A_1$ and $A_2$ by $[1,k]$. Let $i,j \in [1,k]$ be two distinct integers. \begin{itemize} \item[(i)] If $A_2$ is obtained by permuting the rows of $A_1$ by $\gamma$, then $\pi(\sigma^{A_2}_{ij})=\pi(\sigma^{A_1}_{ij}) + \pi(\gamma)$. \item[(ii)] If $A_2$ is obtained by permuting the columns of $A_1$ by $\gamma$, then $\pi(\sigma^{A_2}_{\gamma(i)\gamma(j)})=\pi(\sigma^{A_1}_{ij})$. \item[(iii)] Let $\gamma\in\mathcal{S}_\Lambda$ and $i\in[1,k]$. If $A_2$ is obtained by applying $\gamma$ to every entry in column $i$ of $A_1$, then $\pi(\sigma^{A_2}_{ij})= \pi(\sigma^{A_1}_{ij}) + n \cdot \pi(\gamma)$, and if $i' \neq i$ then $\pi(\sigma^{A_2}_{i'j})= \pi(\sigma^{A_1}_{i'j})$. \end{itemize} \end{lemma} \begin{proof} If $\gamma$ is a permutation of $\Lambda^2$ which acts on the rows of $A_1$, then $\sigma^{A_2}_{ij} = \sigma^{A_1}_{ij} \gamma$, which implies the first statement. Similarly, if $\gamma$ is a permutation of $[1,k]$ which acts on the columns of $A_2$, then $\sigma^{A_1}_{ij}=\sigma^{A_2}_{\gamma(i)\gamma(j)}$. Now assume that $\gamma$ is a permutation of $\Lambda$ that is applied to every entry in column $i$ of $A_1=(a_{r\ell})$. Write $\gamma$ as a product of transpositions: $\gamma = (v_1,v'_1)(v_2,v'_2) \cdots (v_m, v'_m)$ where $m\ge0$ and $v_l \neq v'_l$ for $l \in [1,m]$. Let $R$ be the bijection from the proof of \lref{l:sigmaijvssigmaji}. 
Then \[ \sigma^{A_2}_{ij} = \sigma^{A_1}_{i j}\prod_{l \in [1,m]} \prod_{s \in \Lambda} \big( R(v_l, s), R(v'_l, s) \big). \] Therefore, $\pi(\sigma^{A_2}_{ij}) = \pi(\sigma^{A_1}_{i j}) + mn = \pi(\sigma^{A_1}_{ij}) + n \cdot \pi(\gamma)$ in $\mathbb{Z}_2$. The claim is obvious for $i' \neq i$. \end{proof} \subsection{Equivalence between $\tau$-parity and $\sigma$-parity} \label{ss:equivalencetausigma} In this subsection, we show that the $\sigma$-parity of an orthogonal array determines its $\tau$-parity. Also, the converse statement is true up to complementation. Let $A=(a_{r\ell})$ be an $\mathrm{OA}(k,n)$ and fix distinct $c,i,j\in[1,k]$. Consider a row $r$ of $A$ and let $x=a_{rc}$, $y=a_{ri}$ and $z=a_{rj}$. The permutation $\sigma_{cj}\sigma_{ci}^{-1}$ maps $(x,y)$ to $(x,z)$. Thus the contribution to $\pi(\sigma_{cj}\sigma_{ci}^{-1})$ from the rows in which the symbol $x$ occurs in column $c$ is precisely $\pi(\rho_x(c,i,j))$. Summing over $x$, we find that \begin{equation}\label{e:taufromsigmas} \tau^c_{ij} = \pi(\sigma_{cj}\sigma_{ci}^{-1}) = \pi(\sigma_{ci}\sigma_{cj}). \end{equation} This demonstrates that $\sigma$-parity determines $\tau$-parity. Note that \lref{l:sigmaparconj}(i) implies that if two rows of an $\mathrm{OA}$ are interchanged, then every $\pi(\sigma_{ij})$ changes value. We call this {\it $\sigma$-complementation}. However, interchanging two rows of an orthogonal array $A$ does not change $\mathscr{M}(A)$, hence one would not expect the parity to change. Also, permuting the rows of an $\mathrm{OA}$ does not affect $\tau$-parity (see \lref{l:tauparconj}(i)), which means there is no hope of recovering $\sigma$-parity from $\tau$-parity. However, if some kind of standardisation is imposed to choose between a $\sigma$-parity and its $\sigma$-complement, then this standardised choice may be recovered from the $\tau$-parity. We will use two forms of standardisation in this paper. The simplest is just to insist that $\pi(\sigma_{12})=0$. 
Under this convention, it is easy to recover the $\sigma$-parity given the $\tau$-parity of an orthogonal array, as follows: \begin{align} \pi(\sigma_{12}) &= 0, \label{e:convent} \\ \pi(\sigma_{1j}) &= \tau_{2j}^1 && \text{for } j \geqslant 3, \label{e:sig1j} \\ \pi(\sigma_{2j}) &= \tau_{1j}^2 + {n \choose 2} && \text{for } j \geqslant 3, \label{e:sig2j}\\ \pi(\sigma_{ij}) &= \tau_{2i}^1 + \tau_{1j}^i + {n \choose 2} && \text{for } 3 \leqslant i < j. \label{e:sigij} \end{align} As usual, all parity equations are in $\mathbb{Z}_2$. The first of these four equations, \eref{e:convent}, is our standardisation. By~\eref{e:taufromsigmas}, $\tau_{2j}^1 = \pi(\sigma_{12} \sigma_{1j}) = \pi(\sigma_{12}) + \pi(\sigma_{1j})$ when $j \geqslant 3$, giving \eref{e:sig1j}. Note that $\tau_{2j}^1 + \tau_{1j}^i = \pi(\sigma_{12}\sigma_{1j}\sigma_{i1}\sigma_{ij})$. When $i=2$ and $j \geqslant 3$, using \eref{e:convent} and \eref{e:sig1j} as well as the property~\eqref{e:sigmacommutativity}, we have \begin{align*} \tau_{2j}^1 + \tau_{1j}^2 &= \pi(\sigma_{1j}) + \pi(\sigma_{21}) + \pi(\sigma_{2j})\\ &= \tau_{2j}^1 + {n \choose 2} + \pi(\sigma_{2j}), \end{align*} which gives \eref{e:sig2j}. Similarly, if $i \geqslant 3$, we have \begin{align*} \tau_{2j}^1 + \tau_{1j}^i &= \pi(\sigma_{1j}) + \pi(\sigma_{i1}) + \pi(\sigma_{ij}) \\ & = \tau_{2j}^1 + \tau_{2i}^1 + {n \choose 2} + \pi(\sigma_{ij}), \end{align*} which gives \eref{e:sigij}. In several arguments towards the end of \sref{s:ensemble} it will be convenient to use a form of standardisation other than \eref{e:convent}. However, any method which decides whether to take a $\sigma$-parity or its $\sigma$-complement will allow $\sigma$-parity to be recovered from $\tau$-parity. We can simply find the answer determined by \eref{e:convent}--\eref{e:sigij} and then take the $\sigma$-complement or not, as required. 
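The equivalence just established is easy to confirm numerically. The sketch below is our own illustration (0-indexed columns, alphabet $\{0,\dots,n-1\}$). It indexes the rows of the array by their first two columns, which automatically realises the standardisation above, since $\sigma$ for the first two columns then becomes the identity permutation; it then checks \eref{e:taufromsigmas} and \eref{e:sigmacommutativity} on an $\mathrm{OA}(4,3)$.

```python
from math import comb

def parity(p):
    """Parity in Z_2 of a permutation, via inversion count."""
    n = len(p)
    return sum(p[a] > p[b] for a in range(n) for b in range(a + 1, n)) % 2

def sigma_parity(A, n, i, j):
    """pi(sigma_{ij}) for an OA with entries 0..n-1, rows indexed by their
    first two columns via (x, y) -> x*n + y. Under this row indexing,
    sigma for columns (0, 1) is the identity, so its parity is 0."""
    perm = [None] * (n * n)
    for row in A:
        perm[row[0] * n + row[1]] = row[i] * n + row[j]
    return parity(perm)

def tau(A, n, c, i, j):
    total = 0
    for s in range(n):
        rho = [None] * n
        for row in A:
            if row[c] == s:
                rho[row[i]] = row[j]
        total += parity(rho)
    return total % 2

# OA(4,3) built from two MOLS of order 3 over Z_3.
n = 3
A = [[r, c, (r + c) % n, (r + 2 * c) % n] for r in range(n) for c in range(n)]

# sigma-parity determines tau-parity: tau^c_{ij} = pi(sigma_{ci}) + pi(sigma_{cj}).
for c in range(4):
    for i in range(4):
        for j in range(4):
            if len({c, i, j}) == 3:
                assert tau(A, n, c, i, j) == \
                    (sigma_parity(A, n, c, i) + sigma_parity(A, n, c, j)) % 2

# Interchanging indices shifts the parity by C(n,2), as in the lemma above.
assert all((sigma_parity(A, n, j, i) + sigma_parity(A, n, i, j)) % 2
           == comb(n, 2) % 2 for i in range(4) for j in range(4) if i != j)
```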
We can use the relationship between $\tau$-parity and $\sigma$-parity to generalise \eref{e:fundLSpar}, and give a simple proof. \begin{lemma}\label{l:LStriangleparity} Suppose $c_1, c_2, \dots, c_l \in [1,k]$ are distinct integers, where $l\geqslant 3$. For any $\mathrm{OA}(k,n)$ we have, in $\mathbb{Z}_2$, \[ \tau^{c_1}_{c_lc_2} + \tau^{c_2}_{c_1c_3} + \cdots + \tau^{c_i}_{c_{i-1}c_{i+1}} + \cdots + \tau^{c_l}_{c_{l-1}c_1} = l {n \choose 2}. \] \end{lemma} \begin{proof} By~\eref{e:taufromsigmas} and \eref{e:sigmacommutativity}, \[ \tau^{c_1}_{c_lc_2} + \cdots + \tau^{c_i}_{c_{i-1}c_{i+1}} + \cdots + \tau^{c_l}_{c_{l-1}c_1} = \pi(\sigma_{c_1c_l}\sigma_{c_lc_1}) +\sum_{i\in[1,l-1]} \pi(\sigma_{c_i c_{i+1}} \sigma_{c_{i+1}c_i} ) = l {n \choose 2}. \qedhere \] \end{proof} Applying \lref{l:LStriangleparity} in the case where $l = 3$, we find that for an $\mathrm{OA}(k,n)$ and three distinct integers $c,i,j \in [1,k]$ we have \begin{equation}\label{e:Triple} \tau^c_{ij} + \tau^i_{cj} + \tau^j_{ci} = {n \choose 2}. \end{equation} Of course, this is essentially a restatement of \eref{e:fundLSpar}. \section{Graphs that model parities}\label{s:graphs} We next describe graph-theoretic interpretations of both our notions of parity ($\tau$-parity and $\sigma$-parity). Our graphs will not include loops or multiple edges, but will sometimes have directed edges. Given a graph or digraph $G$, we use $V(G)$ for the vertex set of $G$ and $E(G)$ for the set of (possibly directed) edges. For an undirected graph $G$ we use $N_G(v) \subseteq V(G)$ to denote the neighbourhood of $v$ in $G$, that is, the set of vertices of $G$ which are adjacent to $v$. The \textit{complement} of an undirected graph $G$, denoted $\overline{G}$, is the graph obtained by replacing edges of $G$ with non-edges and vice versa.
By \textit{switching} an undirected graph $G$ at $v\in V(G)$ we obtain the graph, denoted $G^v$, which is equal to $G$ except that the neighbourhood of $v$ in $G^v$ is the complement of the neighbourhood of $v$ in $G$. In other words, $N_{G^v}(v)=N_{\overline{G}}(v)$. The \textit{reverse} of a digraph $G$, also denoted $\overline{G}$, is the digraph obtained from $G$ by reversing the direction of every edge. By \textit{switching} a digraph $G$ at $v\in V(G)$, we obtain the digraph, denoted $G^v$, which is equal to $G$ except that the direction of each edge incident with $v$ is reversed. Note that we use the same notation $\overline{G}$ and $G^v$ in both settings; the meaning is determined by context, depending on whether $G$ is an undirected graph or a digraph. For a fixed initial digraph, the digraph obtained by applying a finite sequence of switchings and reversals depends only on the parity of the number of switchings taken at each vertex and the parity of the number of reversals taken. It does not depend on the order in which these operations are applied (see \cite{MS13} for further details). It is easy to check that analogous properties hold in the context of undirected graphs as well. \subsection{Graphs related to $\tau$-parity} \begin{definition} Let $A$ be an $\mathrm{OA}(k,n)$. For each $c \in [1,k]$, we define an undirected graph $G_c$ with $V(G_c)=[1,k]$, in which vertex $c$ is isolated and, for distinct $i,j\ne c$, we have $\{i,j\} \in E(G_c)$ if and only if $\tau_{ij}^c = 1$. We call the graphs $G_1, \dots, G_k$ the \textit{$\tau$-graphs} of $A$. \end{definition} We find it convenient to consider empty graphs to be complete bipartite graphs (with one side of the graph having cardinality zero). \begin{lemma}\label{l:taugraphbipartite} Let $G_1, \dots, G_k$ be the $\tau$-graphs of an $\mathrm{OA}(k,n)$. Then, for each $c \in [1,k]$, the graph $G_c$ is the disjoint union of an isolated vertex and a complete bipartite graph on $k-1$ vertices.
\end{lemma} \begin{proof} For some $c \in [1,k]$, let $G$ be $G_c$ with the isolated vertex $c$ removed. If $G$ has no edges, then $G$ is $K_{0,k-1}$, so assume that $G$ has at least one edge. Let $i \in V(G)$ be such that $V_1=N_{G}(i) \neq \emptyset$. By~\eqref{e:Fixedcol}, there is an even number of edges between every triple of vertices in $G$, hence $V_1$ is an independent set. Now consider $V_2 = V(G) \backslash V_1$. For every $j \in V_2\setminus \{i\}$ and $\ell \in V_1$, we know that $\{j,\ell\} \in E(G)$ because $j \not\in V_1=N_{G}(i)$ and there must be an even number of edges induced by the vertices $\{i,j,\ell\}$. Finally, $V_2=N_G(\ell)$ for $\ell\in V_1$, so it is an independent set. Thus $G$ is a complete bipartite graph with partite sets $V_1$ and $V_2$. \end{proof} We next state the analogue of \lref{l:tauparconj} for $\tau$-graphs. \begin{lemma}\label{l:taugraphs} Let $A_1$ and $A_2$ be $\mathrm{OA}(k,n)$ and let $G_1, \dots, G_k$ be the $\tau$-graphs of $A_1$ and $H_1, \dots, H_k$ be the $\tau$-graphs of $A_2$. \begin{itemize} \item[(i)] If $A_2$ is obtained by permuting the rows of $A_1$, then $G_c$ is equal to $H_c$ for each $c \in [1,k]$. \item[(ii)] If $A_2$ is obtained by permuting columns of $A_1$ by $\gamma$, then $\gamma$ is an isomorphism which maps $G_c$ to $H_{\gamma(c)}$ for all $c \in [1,k]$. \item[(iii)] Let $\gamma\in\mathcal{S}_\Lambda$ and $c\in[1,k]$ and suppose $A_2$ is obtained by applying $\gamma$ to every entry in column $c$ of $A_1$. If $n$ is even or $\pi(\gamma)=0$, then $H_i=G_i$ for all $i \in [1,k]$. If $n$ is odd and $\pi(\gamma)=1$ then $H_c=G_c$ and, for $i\in[1,k]\setminus\{c\}$, we obtain $H_i$ by switching $G_i\backslash \{i\}$ at vertex $c$, with vertex $i$ remaining isolated. \end{itemize} \end{lemma} We remark that the only operation that may produce isomorphic $\mathrm{OA}$s with non-isomorphic $\tau$-graphs is an odd permutation of the symbols in a column when the alphabet size is odd. 
This results in switchings at the vertex that corresponds to the column being permuted. The $\tau$-graphs have the same vertex set and, by \lref{l:taugraphbipartite}, each is a disjoint union of an isolated vertex and a complete bipartite graph. Next we study what happens if we superimpose them, modulo 2. Define the {\em stack} corresponding to an $\mathrm{OA}(k,n)$ to be the undirected graph with vertices $[1,k]$ and an edge $\{i,j\}$ if and only if $\sum_c\tau^c_{ij}\= 1\mod2$ (in other words, the edge $\{i,j\}$ is present in an odd number of the $\tau$-graphs for the $\mathrm{OA}$). The following theorem records an interesting observation about the stack corresponding to an $\mathrm{OA}(k,n)$. \begin{theorem}\label{t:stackedgraph} Let $G$ be the stack for an $\mathrm{OA}(k,n)$. Then \begin{itemize} \item[(i)] $G$ is a complete bipartite graph if $n \equiv 0,1 \mod{4}$, and \item[(ii)] $G$ is a vertex disjoint union of at most two complete graphs if $n \equiv 2,3 \mod{4}$. \end{itemize} \end{theorem} \begin{proof} We consider the number of edges between any three distinct vertices $i,j,\ell \in [1,k]$. Working in $\mathbb{Z}_2$, we have \begin{align*} \sum\limits_{c \neq i,j} \tau^c_{ij} \, + \sum\limits_{ c \neq j,\ell} \tau^c_{j\ell} \, + \sum\limits_{c \neq \ell,i} \tau^c_{\ell i} &= \pi \bigg( \prod_{c \neq i,j} \sigma_{ci} \sigma_{cj} \, \prod_{ c \neq j,\ell} \sigma_{cj} \sigma_{c\ell} \, \prod_{c \neq \ell,i} \sigma_{c\ell} \sigma_{ci}\bigg) && \mbox{by~\eref{e:taufromsigmas}} \\ &= \pi \big( \sigma_{\ell i} \sigma_{\ell j} \, \sigma_{ij} \sigma_{i \ell} \, \sigma_{j\ell} \sigma_{j i}\big) && \\ &= 3 {n \choose 2} && \mbox{by~\eref{e:sigmacommutativity}.} \end{align*} Therefore, when $n \equiv 0,1 \mod{4}$, there is an even number of edges between any three distinct vertices in $G$. Analogous to the proof of \lref{l:taugraphbipartite}, this implies that $G$ must be a complete bipartite graph.
If $n \equiv 2,3 \mod{4}$, there is an odd number of edges between any three distinct vertices in $G$. In particular, there are no induced paths of two edges. Since there cannot be two vertices at distance $2$ from each other, it follows that each component of $G$ is a complete graph. There are at most two components, since otherwise there would be three vertices inducing a graph with no edges, but any three vertices must induce an odd number of edges. \end{proof} In \cyref{c:stackPP} we give a stronger restriction on the stack corresponding to an $\mathrm{OA}(n+1,n)$. \subsection{Graphs related to $\sigma$-parity} \begin{definition} Let $A$ be an $\mathrm{OA}(k,n)$. The {\it $\sigma$-matrix} of $A$ is the $k\times k$ matrix $M = (m_{ij})$ where \begin{displaymath} m_{ij} = \left\{ \begin{array}{ll} 1 & \textrm{if } i \ne j \textrm{ and } \pi(\sigma_{ij}) = 1, \\ 0 & \textrm{otherwise.} \end{array} \right. \end{displaymath} The {\it $\sigma$-graph} of $A$, denoted by $\mathscr{G}(A)$, is a graph or digraph on $k$ vertices with adjacency matrix $M$. If $M$ is symmetric, then we interpret $\mathscr{G}(A)$ to be an undirected graph. Otherwise, we interpret $\mathscr{G}(A)$ to be a digraph. \end{definition} \begin{lemma}\label{l:Sym} Let $A$ be an $\mathrm{OA}(k,n)$. If $n\equiv 0,1 \mod{4}$, then the $\sigma$-matrix is symmetric and the $\sigma$-graph is an undirected graph. If $n \equiv 2,3 \mod{4}$, then the $\sigma$-graph is a tournament. \end{lemma} \begin{proof} By~\eqref{e:sigmacommutativity}, we have $\pi(\sigma_{ji}) = \pi(\sigma_{ij}) + {n \choose 2}$ for distinct $i,j \in [1,k]$. If $n \equiv 0,1 \mod{4}$, then $\sigma_{ij}$ and $\sigma_{ji}$ have the same parity, so the $\sigma$-matrix of $A$ is symmetric. If $n \equiv 2,3 \mod{4}$, then $\sigma_{ij}$ and $\sigma_{ji}$ have opposite parity, so exactly one of the directed edges $(i,j)$ and $(j,i)$ is in $\mathscr{G}(A)$. Hence $\mathscr{G}(A)$ is a tournament. 
\end{proof} We next state the analogue of \lref{l:sigmaparconj} for $\sigma$-graphs. Glynn and Byatt \cite[Lem.\,2.4]{GB12} showed that $\sigma$-graphs of orthogonal arrays $\mathrm{OA}(k,n)$ for $n$ even are invariant (up to complementation) for isomorphic orthogonal arrays. Here we consider $n$ odd as well. The proof is essentially the same as for the even case, so we omit it. \begin{lemma} \label{l:sigmagraphs} Let $A_1$ and $A_2$ be $\mathrm{OA}(k,n)$ and let $\mathscr{G}_1=\mathscr{G}(A_1)$ and $\mathscr{G}_2=\mathscr{G}(A_2)$. \begin{itemize} \item[(i)] If $A_2$ is obtained by permuting the rows of $A_1$ by $\gamma$, then $\mathscr{G}_2$ is either $\mathscr{G}_1$ if $\gamma$ is an even permutation, or $\overline{\mathscr{G}_1}$ if $\gamma$ is an odd permutation. \item[(ii)] If $A_2$ is obtained by permuting the columns of $A_1$ by $\gamma$, then $\gamma$ is a graph isomorphism which maps $\mathscr{G}_1$ to $\mathscr{G}_2$. \item[(iii)] Let $\gamma\in\mathcal{S}_\Lambda$ and $i \in[1,k]$ and suppose $A_2$ is obtained by applying $\gamma$ to every entry in column $i$ of $A_1$. Then $\mathscr{G}_2 = \mathscr{G}_1$ if $\pi(\gamma)=0$ or $n$ is even, and $\mathscr{G}_2 = \mathscr{G}_1^{i}$ if $\pi(\gamma)=1$ and $n$ is odd. \end{itemize} \end{lemma} We see in \lref{l:sigmagraphs} that taking a conjugate of a set of MOLS only affects the order of columns of the orthogonal arrays, and hence yields an isomorphic $\sigma$-graph. On the other hand, the $\sigma$-graphs of isotopic sets of MOLS are not necessarily isomorphic if $n$ is odd, because isotopisms can cause switchings at some vertices. \section{How many parities are there?}\label{s:numpar} If $A$ is an $\mathrm{OA}(k,n)$ then there is a corresponding $\tau$-parity $\tau(A)$ of dimension $k(k-1)(k-2)$ which stores the parities $\tau^c_{ij}$, indexed by the triples $(c,i,j)$. In this section we are interested in the number of different $\tau$-parities that are possible. 
One way to measure this is to consider $\mathcal{B}(k,n)$, which we define as $\log_2$ of the number of different $\tau$-parities achieved by orthogonal arrays $\mathrm{OA}(k,n)$. We have already determined a number of relationships between components of the $\tau$-parity, including \eqref{e:tauidxcommutativity}, \eqref{e:Fixedcol} and \eqref{e:Triple}. Studying $\mathcal{B}(k,n)$ is one way to determine whether there are other relationships waiting to be discovered. We will be able to resolve this question in some cases, but it will remain open in general. Another way to view $\mathcal{B}(k,n)$ is that it is the information content (in number of bits) of $\tau(A)$ for an $\mathrm{OA}(k,n)$. For a given value of $(k,n)$, we call any $\mathbb{Z}_2$ vector satisfying \eqref{e:tauidxcommutativity}, \eqref{e:Fixedcol} and \eqref{e:Triple} a {\it plausible $\tau$-parity}. We call it an {\it actual $\tau$-parity} if there is an $\mathrm{OA}(k,n)$ that achieves it. The set of plausible $\tau$-parities depends only on the value of $n\mod4$ and on $k$. \subsection{Switching classes}\label{ss:swclass} Once we have an actual $\tau$-parity, there will be many other actual $\tau$-parities that we can establish by taking isomorphic orthogonal arrays. For a plausible $\tau$-parity $p$, the \textit{switching class} of $p$ is the set of all plausible $\tau$-parities obtained from $p$ by some sequence of the following operations. These operations are analogues of (ii) and (iii) from \lref{l:tauparconj}, and will map any actual $\tau$-parity to another actual $\tau$-parity. \textbf{Permuting Operation:} \ For a permutation $\gamma \in\mathcal{S}_k$, a new $\tau$-parity is obtained by {\it permuting by $\gamma$} as follows: we move each $\tau_{ij}^c$ from the coordinate indexed by $(c,i,j)$ to the coordinate indexed by $\big(\gamma(c),\gamma(i),\gamma(j)\big)$. 
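The permuting operation is mechanical enough to sketch in code. The following Python fragment is our own illustration (the dict-of-triples representation and all names are ours, not from the paper): a $\tau$-parity is stored as one bit for each ordered triple of distinct column indices, and permuting by $\gamma$ relabels the coordinates.

```python
from itertools import permutations

def permute_parity(tau, gamma):
    """Permuting operation: move the bit stored at coordinate (c, i, j)
    to the coordinate (gamma(c), gamma(i), gamma(j)).
    gamma is a tuple mapping index -> image."""
    return {(gamma[c], gamma[i], gamma[j]): bit
            for (c, i, j), bit in tau.items()}

# A toy tau-parity for k = 4 columns (indices 0..3): one bit per
# ordered triple of distinct indices, with arbitrary illustrative values.
k = 4
tau = {t: (t[0] + 2 * t[1]) % 2 for t in permutations(range(k), 3)}

gamma = (1, 2, 3, 0)                                  # a 4-cycle in S_4
gamma_inv = tuple(gamma.index(i) for i in range(k))   # its inverse
```

Applying $\gamma$ and then $\gamma^{-1}$ returns the original vector, so the operation really is a group action of $\mathcal{S}_k$ on $\tau$-parities, which is what makes the switching classes just defined into orbits.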
\textbf{Swapping Operation:} \ If $n$ is odd then a new $\tau$-parity is obtained by {\em swapping at a subset $C \subseteq [1,k]$}, which means for each $\tau_{ij}^c$, we change its value if and only if $|\{i,j\} \cap C| = 1$. We stress that the swapping operation is only available if $n$ is odd. \begin{theorem}\label{t:switchsizes} Let $k \geqslant 3$ and $n \geqslant 2$. Then each switching class for parameters $(k,n)$ has a size that divides $k!$ if $n$ is even and $k!\,2^{k-1}$ if $n$ is odd. \end{theorem} \begin{proof} First, suppose that $n$ is even. The group $\mathcal{S}_k$, acting on the labels $c,i,j$, induces an action on the $\tau$-parities. The switching classes are the orbits of this action, so by the orbit--stabiliser theorem each has size dividing $|\mathcal{S}_k|=k!$. The situation for odd $n$ is similar, except that we have the option to swap at any subset $C\subseteq[1,k]$. However, swapping at $C$ has the same effect as swapping at the complement of $C$. We can remove this duplication by marking one index in $[1,k]$, and never swapping at the marked index (the mark gets moved when we permute indices). In this way we find an action of the group $\mathcal{S}_2^{k-1} \rtimes \mathcal{S}_k$ on $\tau$-parities. This group has order $k!\,2^{k-1}$ and its orbits are the switching classes. \end{proof} We remark that $\mathcal{S}_2^{k-1} \rtimes \mathcal{S}_k$ is the automorphism group of the folded $k$-cube (at least when $k>4$), see \cite[pp.\,264--265]{BCN89}. For $k\in[3,8]$ and for each possible value of $n\mod4$, we calculated the number of switching classes and the size of each switching class for every plausible $\tau$-parity. The results are given in \Tref{T:switchclass}. Instances that achieve the bounds in \tref{t:switchsizes} are shown in {\bf bold}. Note that for odd $n$ the bounds are not achieved until $k=8$. \begin{table}[h!]
\centering \footnotesize \begin{tabular}{|c|c|l|c|l|} \hline $k$ & \multicolumn{2}{c|}{$n \equiv 0 \mod{4}$} & \multicolumn{2}{c|}{$n \equiv 2 \mod{4}$} \\[0.5ex] & \# & sizes & \# & sizes \\ \hline 3 &2& $ 1 , 3 $ &2& $ 1 , 3 $ \\[1ex] 4 &6& $ 1 , 3 , 4 , 6 , 12 $ &3& $ 8 , 12 $ \\[1ex] 5 &18& $ 1 , 5 , 6 , 10 , 15 , 20 , 30 , 60 $ &10& $ 12 , 20 , 40 , 60 , {\bf 120} $\\[1ex] 6 &78& $ 1 , 6 , 10 , 15 , 20 , 30 , 45 , 60 , 72 , 90 , 120 , 180 , 360 , {\bf 720} $ &34& $ 40 , 120 , 144 , 240 , 360 , {\bf 720} $ \\[1ex] 7 &522& $ 1 , 7 , 21 , 35 , 42 , 70 , 105 , 140 , 210 , 252 , 315 , 360 , 420 , $ &272& $ 120 , 280 , 360 , 504 , 560 , 840 , $ \\ && $504 , 630 , 840 , 1260 , 2520 , {\bf 5040}$ && $1008 , 1680 , 2520 , {\bf 5040}$\\[0.5ex] 8&6178& $1,8,28,35,56,70,105,168,210,280,315,336,$&3528&$1920,2240,2688,4480,$\\ && $420,560,630,672,840,1120,1260,1680,2016,2520,$&&$5760,6720,8064,13440,$\\ && $2880,3360,4032,5040,6720,10080,20160,{\bf40320}$&&$20160,{\bf40320}$\\ \hline \end{tabular} \medskip \footnotesize \begin{tabular}{|c|c|l|c|l|} \hline $k$ & \multicolumn{2}{c|}{$n \equiv 1 \mod{4}$} & \multicolumn{2}{c|}{$n \equiv 3 \mod{4}$} \\[0.5ex] & \# & sizes & \# & sizes \\ \hline 3 &1& $ 4 $ &1& $ 4 $ \\[1ex] 4 &2& $ 8 , 24 $ &2& $ 8 , 24 $\\[1ex] 5 &4& $ 16 , 96 , 160 , 240 $ &2& $ 192 , 320 $ \\[1ex] 6 &10& $ 32 , 192 , 320 , 480 , 1440 , 1920 , 2880 , 5760 $ &6& $ 640 , 1920 , 2304 , 3840 , 5760 $ \\[1ex] 7 &27& $ 64 , 1344 , 2240 , 4480 , 6720 , 13440 , 16128 , 20160 ,$ &12& $ 7680 , 17920 , 23040 , $ \\ && $ 23040 , 26880, 40320 , 53760 , 80640 , 161280$ && $ 32256 , 53760 , 161280$\\[1ex] 8 &131& $ 128, 3584, 4480, 7168, 13440, 21504, 26880, 35840,$ &69& $15360, 143360, 172032, 215040,$ \\ && $ 40320, 53760, 71680, 86016, 107520, 161280, 215040, $ && $286720, 322560, 368640, 430080, $ \\ && $258048, 322560, 368640, 430080, 645120, 860160,$ && $516096, 645120, 860160, 1290240,$ \\ && $ 1290240, 2580480, {\bf 5160960}$ && $1720320, 2580480, {\bf5160960}$ 
\\[0.5ex] \hline \end{tabular} \caption{\label{T:switchclass}Number and sizes of switching classes.} \end{table} Each plausible $\tau$-parity corresponds to a $\sigma$-graph up to complementation, and vice versa. For even $n$, a switching class is defined by the permuting operation, which simply applies an isomorphism to the $\sigma$-graph. It follows that the number of switching classes for $(k,n)$ when $n\= 0\mod4$ is the number of complementary pairs of graphs on $k$ nodes (sequence A007869 in \cite{OEIS}). Similarly, the number of switching classes for $(k,n)$ when $n\= 2\mod4$ is the number of complementary pairs of tournaments on $k$ nodes (sequence A059735 in \cite{OEIS}). Up to isotopism there are 19 complete sets of MOLS of order 9, falling into 7 isomorphism classes of $\mathrm{OA}(10,9)$. For each complete set $M$ of MOLS, we constructed $\mathscr{A}(M)$, the corresponding $\mathrm{OA}(10,9)$. We then determined its $\tau$-parity, and the switching class it belongs to. The results are shown in \Tref{TableIsom}. Sets of MOLS are numbered using the numbering in \cite{handbook}. Set number 1 corresponds to the Desarguesian plane, sets 2 to 6 correspond to the Hall plane, sets 7 to 11 correspond to the dual Hall plane, and sets 12 to 19 correspond to the Hughes plane. It turns out that only 4 different switching classes are represented by the 19 complete sets. These 4 switching classes do {\em not} correspond to the 4 projective planes. In fact one of the switching classes contains OAs from three different projective planes, and another contains OAs from two different projective planes, as indicated in the last column of \Tref{TableIsom}. Clearly, not all OAs from a given plane end up in the same switching class. \begin{table}[h!]
\begin{center} \begin{tabular}{ |c|c|l|c| } \hline Switching Class & Size of class & Sets of $8$ MOLS(9) in the class & \# Projective planes\\ \hline Class 1 & 1290240 & 1 & 1 \\ Class 2 & 483840 & 2,3,10,11 & 2\\ Class 3 & 512 & 4,5,6,7,8,9,17,18,19 & 3 \\ Class 4 & 1075200 & 12,13,14,15,16 & 1\\ \hline \end{tabular} \caption{\label{TableIsom}Switching classes of complete sets of MOLS(9)} \end{center} \end{table} \Tref{TableIsom} also shows the size of each switching class and here it is slightly surprising that the most symmetric plane (the Desarguesian one) produces OAs in the largest switching class. In fact the smallest switching class contains the zero vector. Each of the non-Desarguesian planes has OAs which achieve this zero vector, but the Desarguesian plane does not. \subsection{Achieving all plausible $\tau$-parities}\label{ss:plausisactual} We next consider the question of how many plausible $\tau$-parities there are. After that we will consider how many of them are actual $\tau$-parities. \begin{theorem}\label{t:sigmaplausible} There are $2^{{k\choose2}-1}$ plausible $\tau$-parities for an $\mathrm{OA}(k,n)$, because each standardised $\sigma$-parity corresponds to a different plausible $\tau$-parity. \end{theorem} \begin{proof} We choose the standardisation \eref{e:convent}. By \sref{ss:equivalencetausigma} we know that each standardised $\sigma$-parity determines a different $\tau$-parity, so we consider the number of options for the $\sigma$-matrix. Our choice for $\pi(\sigma_{12})$ is fixed but we can freely pick $\pi(\sigma_{ij})$ for all other $1\leqslant i<j\leqslant k$. This gives us $2^{{k\choose2}-1}$ options, but we have to check that all of them produce plausible $\tau$-parities. Suppose we have settled on values for $\pi(\sigma_{ij})$ for $1\leqslant i<j\leqslant k$.
We then determine $\pi(\sigma_{ij})$ for $1\leqslant j<i\leqslant k$ from \eref{e:sigmacommutativity}, and $\tau^c_{ij}$ for all distinct $c,i,j\in[1,k]$ from \eref{e:taufromsigmas}. It is straightforward to check that \eqref{e:tauidxcommutativity} and \eqref{e:Fixedcol} hold. Also, in $\mathbb{Z}_2$, \begin{align*} \tau^c_{ij} + \tau^i_{cj} + \tau^j_{ci} &= \pi(\sigma_{ci}) + \pi(\sigma_{cj}) + \pi(\sigma_{ic}) + \pi(\sigma_{ij}) + \pi(\sigma_{jc}) + \pi(\sigma_{ji}) && \textrm{ by \eqref{e:taufromsigmas}}\\ &= 3 {n \choose 2}, && \textrm{ by \eqref{e:sigmacommutativity}} \end{align*} and hence \eqref{e:Triple} is satisfied. \end{proof} \begin{corollary}\label{c:dim} We have $\mathcal{B}(k,n)\leqslant { {k \choose 2} -1}$ for $k \geqslant 3$ and arbitrary $n$. \end{corollary} In \cyref{c:dimPP} we find a better upper bound on $\mathcal{B}(k,n)$ in the case when $k = n+1$. We now know there are $2^{ {k \choose 2} -1}$ plausible $\tau$-parities. How many of these are actual $\tau$-parities? In \Tref{T:actualparvec}, we list some values of $(k,n)$ for which all plausible $\tau$-parities are actual. Examples that justify each claim made in the table can be downloaded from \cite{WWWW}. This table is exhaustive for $n\le9$, where a complete catalogue of MOLS is known \cite{EW16,WWWW}. For $n\ge10$ we used ad-hoc methods to find our specimens, so no inference should be made from an entry not appearing. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $n \mod{4}$ & $k=3$ & $k=4$ & $k=5$\\ \hline 0 & 8 & 8 & 16 \\ 1 & $5,9$ & 9 & 9 \\ 2 & 6 & $10,14,18$ & \\ 3 & $3,7$ & 7 & $11,19,23,\dots$ \\ \hline \end{tabular} \caption{\label{T:actualparvec}Values of $(k,n)$ for which all plausible $\tau$-parities are achieved.} \end{table} As \Tref{T:actualparvec} shows, we constructed explicit examples which achieve all plausible $\tau$-parities for $k\in\{3,4,5\}$ and each congruence class of $n$ mod 4 (except for $n \equiv 2 \mod{4}$ when $k=5$).
This exception is not a genuine one, since it will follow from \cyref{c:actualNsufflarge} below that all plausible $\tau$-parities are actual for all sufficiently large $n$ when $k=5$. Note that $n=9$ is the smallest order which achieves all plausible $\tau$-parities for $k=5$. Among the $\mathrm{OA}(6,9)$, there are 13312 actual $\tau$-parities, and 16384 plausible $\tau$-parities. The entry for $n\equiv3\mod4$ and $k=5$ in \Tref{T:actualparvec} is justified by our next result. In it we construct an infinite family of orthogonal arrays which achieve all plausible $\tau$-parities when $k=5$ (and hence also for smaller $k$). \begin{theorem}\label{t:InfiniteFamAllPar} For prime $n \geqslant 11$ satisfying $n \equiv 3 \mod{4}$ there exist examples of $\mathrm{OA}(5,n)$ which achieve all $512$ plausible $\tau$-parities. \end{theorem} \begin{proof} Suppose $n \geqslant 11$ is prime and $n \equiv 3 \mod{4}$. Let $A$ be the $\mathrm{OA}(n+1,n)$ corresponding to $n-1$ MOLS$(n)$, $\{ L_1, \dots, L_{n-1}\}$, where $L_\lambda[r,c] = \lambda r + c \mod{n}$ for $\lambda \in [1,n-1]$. First we determine $\tau_{ij}^c$ for all distinct $c,i,j \in [1,n+1]$. It is straightforward to check that $\tau_{ij}^1 = 0$ for all distinct $i,j\in [2,n+1]$ since each of the permutations $\rho_a(1,i,j)$ is of the form $x \mapsto x+d\mod n$ for a constant $d$. Observe that the parity of a permutation $x \mapsto bx+d$ is the same as the parity of $x \mapsto bx$. If $b$ has (multiplicative) order $q$ in $\mathbb{Z}_n$ then the permutation $x\mapsto bx$ has cycle structure consisting of the fixed point $0$ together with $({n-1})/{q}$ cycles of length $q$. Hence it has even parity if and only if $({n-1})(q-1)/q$ is even, which occurs if and only if $q$ is odd, given that $n \equiv 3 \mod{4}$. Now, $\rho_a(c,1,j)$ is the permutation $x\mapsto (j-c)x+a$ for $j\ge2$. For $j>i \geqslant 2$, we see that $\rho_a(c,i,j)$ is the permutation $(i-c)x + a \mapsto (j-c)x+a$, which has the same parity as the permutation $x\mapsto(i-c)^{-1}(j-c)x$.
It follows that in $\mathbb{Z}_2$, \begin{align*} & \tau_{ij}^1 = 0, \\ & \tau_{1j}^c = q-1, \text{ for }j\ge2,\text{ where } q \text{ is the order of $j-c$, and} \\ & \tau_{ij}^c = q-1, \text{ for }j>i\ge2, \text{ where } q \text{ is the order of } (i-c)^{-1}(j-c). \end{align*} Here order means multiplicative order, modulo $n$. Let $a-1,a,a+1 \in \mathbb{Z}_n$ all be quadratic non-residues and define $A_1=\mathscr{A}(L_a, L_{a^2},L_{a^3})$. Thus, columns $1,2,3,4,5$ of $A_1$ correspond to columns $1,2, a+2, a^2+2, a^3+2$ of $A$. The entire vector $\tau(A_1)$ can be reconstructed once we work out that $$ [\tau_{ 2 3 }^ 1, \hspace{0.1cm} \tau_{ 2 4 }^ 1, \hspace{0.1cm} \tau_{ 2 5 }^ 1, \hspace{0.1cm} \tau_{ 1 2 }^ 3, \hspace{0.1cm} \tau_{ 1 2 }^ 4, \hspace{0.1cm} \tau_{ 1 3 }^ 4, \hspace{0.1cm} \tau_{ 1 2 }^ 5, \hspace{0.1cm} \tau_{ 1 3 }^ 5, \hspace{0.1cm} \tau_{ 1 4 }^ 5 \hspace{0.1cm}] = [0, 0, 0, 0, 1, 1, 0, 0, 0].$$ These values are not hard to derive. For example, $\tau_{13}^5$ for $A_1$ corresponds to $\tau_{1j}^c$ for $A$ where $j = a+2$ and $c=a^3+2$. Since $j-c = a-a^3 = (-a)(a+1)(a-1)$ is a quadratic residue, it follows that $q$ is odd (the quadratic residues form a subgroup of odd order $(n-1)/2$, given that $n \equiv 3 \mod{4}$) and hence $\tau_{13}^5=0$ for $A_1$. Let $a-1,a,a+1 \in \mathbb{Z}_n$ be such that $a$ is a quadratic non-residue and $a-1$ and $a+1$ are quadratic residues. Let $A_2=\mathscr{A}(L_a, L_{a^2}, L_{a^3})$, so columns $1,2,3,4,5$ of $A_2$ correspond to columns $1,2, a+2, a^2+2, a^3+2$ of $A$. As before, we determine $\tau(A_2)$ by finding $$ [\tau_{ 2 3 }^ 1, \hspace{0.1cm} \tau_{ 2 4 }^ 1, \hspace{0.1cm} \tau_{ 2 5 }^ 1, \hspace{0.1cm} \tau_{ 1 2 }^ 3, \hspace{0.1cm} \tau_{ 1 2 }^ 4, \hspace{0.1cm} \tau_{ 1 3 }^ 4, \hspace{0.1cm} \tau_{ 1 2 }^ 5, \hspace{0.1cm} \tau_{ 1 3 }^ 5, \hspace{0.1cm} \tau_{ 1 4 }^ 5 \hspace{0.1cm}] = [0, 0, 0, 0, 1, 0, 0, 0, 1].$$ Consulting our computation of the switching classes we find $\tau(A_1)$ in the switching class of size 192, while $\tau(A_2)$ belongs to the switching class of size 320.
Jacobsthal \cite{Jac06} showed that for prime $n \equiv 3 \mod{4}$, there is a sequence of three consecutive quadratic non-residues whenever $n > 7$, and a sequence of three consecutive elements of the form residue, non-residue, residue whenever $n > 3$. Thus, for prime $n \equiv 3 \mod{4}$, $n \geqslant 11$, there are elements $a-1, a, a+1 \in \mathbb{Z}_n$ such that the above constructions produce actual $\tau$-parities belonging to each of the two switching classes. \end{proof} In principle, similar families can be constructed for other cases. However, examination of \Tref{T:switchclass} reveals that the case that we solved in \tref{t:InfiniteFamAllPar} involves fewer switching classes (and hence less work) than would be required for different values of $n\mod4$ or for larger values of $k$. In the remainder of this section we demonstrate a different way to obtain infinite families that achieve all plausible $\tau$-parities. Specifically, we will show that if there are examples of $\mathrm{OA}(k,n)$ achieving all plausible $\tau$-parities for some $n$, then there are examples of $\mathrm{OA}(k,N)$ achieving all plausible $\tau$-parities for all sufficiently large $N$. We first require some definitions. An {\it incomplete Latin square} $L = (l_{ij})$ of order $n$ with a hole of order $h$ is an $n \times n$ array on an $n$-set $\Lambda$ with a {\it hole} $H \subseteq \Lambda$ such that each cell $(i,j)$ is empty if $\{i,j\} \subseteq H$ and contains exactly one symbol otherwise, every row and every column of $L$ contains each symbol at most once, and rows and columns indexed by $H$ do not contain symbols in $H$. A pair of incomplete Latin squares of order $n$ with common hole of order $h$ are {\it orthogonal} if, when superimposed, all of the $n^2 - h^2$ ordered pairs of symbols in $\Lambda^2\,\backslash H^2$ occur amongst the cells $(i,j)$ where $\{i,j\} \not\subseteq H$.
A set of incomplete Latin squares of order $n$ with a common hole of order $h$ which are pairwise orthogonal is called a set of incomplete MOLS. An {\it incomplete pairwise balanced design} on $n$ points with hole size $h$ and block sizes $K$, denoted $IPBD((n,h),K)$, is a triple $(V,H,\mathscr{B})$ such that $V$ is a set of $n$ points, $H$ is a subset of $V$ of size $h$ called the {\it hole}, and $\mathscr{B}$ is a collection of subsets of $V$ where $|B| \in K$ for each $B \in \mathscr{B}$ and every pair of points not both in $H$ occurs together in exactly one block. Combining a result from \cite{CD11} (given as Construction 6.1.2 in \cite{vB15}) and \cite[Prop.~6.1.1]{vB15}, we have the following theorem. \begin{theorem}\label{t:IMOLSconstruction} If there exists an $IPBD((n,h),K)$ and for each $k \in K$ there exist $t+1$ MOLS of order $k$, then there exist $t$ incomplete MOLS of order $n$ with a common hole of order $h$. \end{theorem} The following proof is due to Peter Dukes. \begin{theorem}\label{t:asymIMOLSexistence} For any fixed $t$ and $h$ and for all sufficiently large $n$, there exist $t$ incomplete MOLS of order $n$ with a common hole of order $h$. \end{theorem} \begin{proof} Let $K = \{2^a, 2^{a+1}, 3^b\}$ where $2^a>t+1$ and $3^b>t+1$. Then, by MacNeish's Theorem (see, e.g.~\cite[Thm~1.1.2]{vB15}), there exist $t+1$ MOLS of order $v$ for $v \in \{2^a, 2^{a+1}, 3^b \}$. Also, by \cite[Thm~5.1.2]{vB15} and the choice of $K$, there exists an $IPBD((n,h),K)$ for all sufficiently large $n$. The result follows by \tref{t:IMOLSconstruction}. \end{proof} We remark that Dukes and van Bommel \cite{DvB15} proved that there exists a set of $t$ incomplete MOLS of order $n$ with a common hole of order $h$ for all sufficiently large $n$ and $h$ satisfying $n \geqslant 8(t+1)^2h$. \begin{theorem}\label{t:actualNsufflarge} Let $N$ be sufficiently large relative to $n$.
The number of actual $\tau$-parities for $(k,N)$ is no less than the number of actual $\tau$-parities for $(k,n)$. \end{theorem} \begin{proof} Let $A_1$ and $A_2$ be two orthogonal arrays $\mathrm{OA}(k,n)$ with different $\tau$-parities. By \tref{t:asymIMOLSexistence}, for all sufficiently large $N$, there exists an $\mathrm{OA}(k,N)$, say $A'_1$, that contains $A_1$ as a subarray. Let $A'_2$ be the $\mathrm{OA}(k,N)$ obtained from $A'_1$ by replacing the subarray $A_1$ by $A_2$. Since $A_1$ and $A_2$ have different $\tau$-parities, there exist distinct $c,i,j\in [1,k]$ such that $\tau^c_{ij}({A_1})\ne\tau^c_{ij}({A_2})$. This forces $\tau^c_{ij}({A'_1})\ne\tau^c_{ij}({A'_2})$, so $A'_1$ and $A'_2$ have different $\tau$-parities. The result follows. \end{proof} \begin{corollary}\label{c:actualNsufflarge} If all plausible $\tau$-parities are actual for $(k,n)$ then, for all sufficiently large $N$, all plausible $\tau$-parities are actual for $(k,N)$. \end{corollary} On the basis of the evidence gathered above, we propose: \begin{conjecture}\label{cj:allactual} For any fixed $k$ and all sufficiently large $N$, all plausible $\tau$-parities are actual for $(k,N)$. \end{conjecture} When $k$ and $n$ are comparable in size we might expect there to be plausible $\tau$-parities that are not actual for $(k,n)$. Indeed, we will see in the next section that this is definitely the case when $k=n+1$. We close the section with an observation related to the comments at the end of \sref{ss:tau-parities}. If $M=\mathscr{M}(A)$ where $A$ is an $\mathrm{OA}(k,n)$, then the individual Latin squares in $M$ have at most $2(k-2)$ independent bits of information in their row, column and symbol parities. For $k\ge4$, this is strictly less than ${k\choose 2}-1$, which suggests, on the basis of everything we have seen in this section, that a set of MOLS has more information in its $\tau$-parity than there is in the $\tau$-parity of its constituent Latin squares. 
Certainly, this is true for pairs and triples of MOLS of large orders, by combining \cyref{c:actualNsufflarge} and \Tref{T:actualparvec}. \section{Projective planes}\label{s:PP} In this section, we study the important special case of orthogonal arrays which correspond to finite projective planes. To begin, we use the well-known fact that such orthogonal arrays correspond to sharply $2$-transitive sets of permutations to derive a constraint on the $\tau$-parity of the $\mathrm{OA}$. A set $S$ of permutations of a set $X$ is said to be \textit{sharply $2$-transitive} if for all $x,x',y,y' \in X$ where $x \ne x'$ and $y \ne y'$, there exists a unique $\gamma \in S$ such that $\gamma(x) = y$ and $\gamma(x') = y'$. \begin{theorem} Let $A=(a_{r\ell})$ be an orthogonal array $\mathrm{OA}(n+1,n)$ on alphabet $\Lambda$, where $r \in \Lambda^2$ and $\ell \in [1,n+1]$. Let $i,j \in [1,n+1]$ be distinct integers. Then, in $\mathbb{Z}_2$, \begin{equation} \label{e:PPcond} \sum_{c \in [1,n+1]\setminus\{i,j\} } \tau_{ij}^c = {n \choose 2}. \end{equation} \end{theorem} \begin{proof} Let $C_{ij} = [1,n+1]\setminus\{i,j\}$. By definition, $\tau^c_{ij} = \sum_{x \in \Lambda} \pi(\rho_x(c,i,j))$ for $c\in C_{ij}$. We exploit the fact that $\{\rho_x(c,i,j) : x \in \Lambda, c \in C_{ij}\}$ is a sharply $2$-transitive set of permutations. Indeed, let $u, u', v, v' \in \Lambda$ be such that $u < u'$ and $v>v'$. Let $r, r' \in \Lambda^2$ be such that $a_{ri} = u$, $a_{rj} = v$, $a_{r'i} = u'$, $a_{r'j} = v'$. No two rows of $A$ agree in more than one column, and overall there are $(n+1)n{n\choose 2}$ places in $A$ where two rows agree. Since this matches the number of pairs of rows, any two rows must agree in exactly one column. Thus, there is a unique $c \in C_{ij}$ such that $a_{rc}=a_{r'c}=x$, and $\rho_x(c,i,j)$ maps $u\mapsto v$ and $u'\mapsto v'$, producing an inversion. 
So in $\mathbb{Z}_2$, \[ \sum_{c \in C_{ij} } \tau_{ij}^c = \sum_{c \in C_{ij} } \sum_{x \in \Lambda} \pi\big(\rho_x(c,i,j)\big) = {n \choose 2}^2 = {n \choose 2}, \] because each of the possible ${n \choose 2}{}^2$ inversions occurs exactly once amongst the permutations in our sharply $2$-transitive set. \end{proof} By \eqref{e:PPcond}, we have a strengthening of \tref{t:stackedgraph} for the case when $k = n+1$. \begin{corollary}\label{c:stackPP} Let $G$ be the stack corresponding to an $\mathrm{OA}(n+1,n)$. Then $G$ is an empty graph when $n \equiv 0,1 \mod{4}$ and $G$ is a complete graph when $n \equiv 2,3 \mod{4}$. \end{corollary} Next we study the consequences of equation~\eref{e:PPcond} on $\tau$- and $\sigma$-graphs. First, we give an interpretation of this equation for a $\sigma$-graph. Then we use this fact to derive a property of the $\tau$-graphs. \begin{lemma}\label{l:sigmagraphdegrees} Let $A$ be an $\mathrm{OA}(n+1,n)$ with $\sigma$-graph $\mathscr{G}$. \begin{itemize} \item[(i)] If $n \equiv 0 \mod{4}$, then every vertex of $\mathscr{G}$ has even degree. \item[(ii)] If $n \equiv 1 \mod{4}$, then the degrees of the vertices of $\mathscr{G}$ all have the same parity. \item[(iii)] If $n \equiv 2 \mod{4}$, then every vertex of $\mathscr{G}$ has odd in-degree and odd out-degree. \item[(iv)] If $n \equiv 3 \mod{4}$, then the in-degrees of the vertices of $\mathscr{G}$ are all of one parity and the out-degrees of the vertices of $\mathscr{G}$ are all of the other parity. \end{itemize} \end{lemma} \begin{proof} By the handshake lemma, it suffices to show that the (in-)degrees of the $n+1$ vertices in $\mathscr{G}$ all have the same parity. The parity of the (in-)degree of vertex $i \in [1,n+1]$ is $\pi\big(\prod_{c\neq i} \sigma_{ci}\big)$. Let $i,j \in [1,n+1]$ be two distinct vertices.
Then, in $\mathbb{Z}_2$, \begin{align*} \pi\bigg(\prod_{c \neq i} \sigma_{ci}\bigg)+ \pi\bigg(\prod_{c \neq j} \sigma_{cj}\bigg) &= \pi\bigg(\sigma_{ij} \sigma_{ji} \, \prod_{c\neq i,j} \sigma_{ci}\sigma_{cj}\bigg) && \\ &= \pi(\sigma_{ij} \sigma_{ji}) + \sum\limits_{c \neq i,j} \tau^c_{ij} && \mbox{by \eref{e:taufromsigmas}} \\ &= {n \choose 2} + {n \choose 2} = 0 && \mbox{by \eref{e:sigmacommutativity} and \eref{e:PPcond}} \end{align*} Hence, the sum of the (in-)degrees of vertices $i$ and $j$ is even. Therefore, any two vertices $i$ and $j$ have the same parity of their (in-)degrees. \end{proof} Recall that every $\tau$-graph for an $\mathrm{OA}$ is a disjoint union of an isolated vertex and a complete bipartite graph. When we have an $\mathrm{OA}(n+1,n)$, the complete bipartite subgraphs in its $\tau$-graphs are induced on $n$ vertices. When $n$ is odd, one of the partite sets has an even number of vertices and the other has an odd number of vertices. The next theorem considers what happens when $n$ is even. \begin{lemma}\label{l:parityofpartitesets} Let $n$ be a positive even integer and suppose that a $\tau$-graph of an $\mathrm{OA}(n+1, n)$ is the disjoint union of an isolated vertex and $K_{n_1,n_2}$ for some nonnegative integers $n_1$ and $n_2$. Then $n_1 \equiv n_2 \equiv n/2 \mod{2}$. \end{lemma} \begin{proof} We have that $n_1+n_2=n \equiv 0 \mod{2}$ and the number of edges is $n_1n_2$. Hence, if there is an odd number of edges, then $n_1 \equiv n_2 \equiv 1 \mod{2}$. Otherwise, $n_1 \equiv n_2 \equiv 0 \mod{2}$. Let $c \in [1,n+1]$. Working in $\mathbb{Z}_2$, the parity of the number of edges in the $\tau$-graph $G_c$ is \begin{align*} \sum\limits_{i<j} \tau^c_{ij} &= \pi \bigg(\prod _{i<j}\sigma_{ci} \sigma_{cj}\bigg) = (n-1)\pi\bigg(\prod_{i \ne c} \sigma_{ci}\bigg) = \left\{\begin{array}{ll} 0 & \text{ if } n \equiv 0 \mod{4}, \\ 1 & \text{ if } n \equiv 2 \mod{4}, \\ \end{array} \right. 
\end{align*} by \eref{e:taufromsigmas} and \lref{l:sigmagraphdegrees}, where $i,j \in [1,n+1] \setminus \{c\}$. \end{proof} When $k=n+1$, we call a plausible $\tau$-parity that satisfies \eqref{e:PPcond} a {\it PP-plausible $\tau$-parity}. We next show that only some of the plausible $\tau$-parities are PP-plausible. \begin{theorem}\label{t:PPplausible} The number of PP-plausible $\tau$-parities is $2^{n \choose 2}$ if $n$ is odd and $2^{ {n \choose 2}-1}$ if $n$ is even. \end{theorem} \begin{proof} We choose to use \eqref{e:convent} for standardising $\sigma$-parity. First suppose that $n$ is even. We can choose $\pi(\sigma_{ij})$ for $1\leqslant i<j\leqslant n$, except $\pi(\sigma_{12})$. The values of $\pi(\sigma_{ij})$ for $1\leqslant j<i\leqslant n$ are then determined by \eref{e:sigmacommutativity}. The values of $\pi(\sigma_{ij})$ when $n+1\in\{i,j\}$ can now be determined from \lref{l:sigmagraphdegrees}. The $\sigma$-parity (and hence the $\tau$-parity) is thus determined by ${n \choose 2} -1$ binary choices. Next we argue that each of these options for the $\tau$-parity satisfies \eref{e:PPcond} and hence is PP-plausible. Fix $i,j\in[1,n+1]$ and let $\delta_i,\delta_j$ be, respectively, the (in-)degrees of vertices $i,j$ in the $\sigma$-graph. By construction, $\delta_i$ and $\delta_j$ agree mod 2. So, by \eref{e:taufromsigmas} and \eref{e:sigmacommutativity}, we have \[ \sum_{c\in[1,n+1]\setminus\{i,j\}}\tau^c_{ij} =\sum_{c\in[1,n+1]\setminus\{i,j\}}\pi(\sigma_{ci}\sigma_{cj}) =\delta_i+\delta_j-\pi(\sigma_{ij})-\pi(\sigma_{ji})={n\choose2} \] in $\mathbb{Z}_2$, confirming \eref{e:PPcond}. The situation for odd $n$ is similar except that we also get to choose $\pi(\sigma_{1(n+1)})$. Once we have chosen $\pi(\sigma_{1j})$ for $1<j\leqslant n+1$ we know the parity of the (out-)degree of every vertex in the $\sigma$-graph, and we can proceed as for the even $n$ case. 
\end{proof} \begin{corollary}\label{c:dimPP} We have $\mathcal{B}(n+1,n)\leqslant {n \choose 2}$ if $n$ is odd and $\mathcal{B}(n+1,n)\leqslant { {n \choose 2}-1}$ if $n$ is even. \end{corollary} Comparing to \tref{t:sigmaplausible}, this means that not all plausible $\tau$-parities are actual $\tau$-parities for $(k,n)=(n+1,n)$. We have no way of judging how many PP-plausible $\tau$-parities are actual, although it is an interesting question. \section{Parity of Latin squares in the ensemble}\label{s:ensemble} In this section we primarily consider the parity of Latin squares in the ensemble of an orthogonal array. We will find various bounds and congruences that must be satisfied by the number of Latin squares of a particular parity in the ensemble. The results will depend on the congruence class of $n$ modulo $4$. While most of the section deals with properties of the ensemble, we begin with a result that constrains the number of each parity-type allowed among the $n-1$ Latin squares in a complete set of MOLS. \begin{theorem}\label{t:n-1molsparitytypes} Suppose there exist $n-1$ MOLS of order $n$ and let $z, y_1,y_2$ and $y_3$ be the number of these Latin squares of parity types $000$, $011$, $101$ and $110$, respectively, if $n \equiv 0,1 \mod{4}$, or of parity types $111$, $100$, $010$ and $001$, respectively, if $n \equiv 2,3 \mod{4}$. Then \begin{itemize} \item[(i)] $z+y_1+y_2+y_3 = n-1$, and \item[(ii)] if $n$ is even then $y_1\= y_2\= y_3\not\= z\;\mod2$;\\ if $n\= 1\;\mod4$ then $y_1\= y_2$ and $y_3\= z\;\mod2$;\\ if $n\= 3\;\mod4$ then $y_1\not\= y_2$ and $y_3\not\= z\;\mod2$. \end{itemize} Moreover, if $z,y_1,y_2,y_3 \in [0,n-1]$ satisfy {\it(i)} and {\it(ii)} then there is a PP-plausible $\tau$-parity corresponding to $n-1$ MOLS of order $n$ which include $z,y_1,y_2$ and $y_3$ Latin squares of the appropriate parity types. \end{theorem} \begin{proof} We choose to use \eqref{e:convent} for standardising $\sigma$-parity. 
Let $M = \{M_1, \dots, M_{n-1}\}$ be a set of MOLS of order $n$ and let $A=\mathscr{A}(M)$, which is an $\mathrm{OA}(n+1,n)$. Let $z, y_1,y_2$ and $y_3$ be the number of squares in $M$ that are of parity types $111$, $100$, $010$ and $001$, respectively if $n \equiv 2,3 \mod{4}$, or of parity types $000$, $011$, $101$ and $110$, respectively if $n \equiv 0,1 \mod{4}$. Clearly {\it(i)} is satisfied. By \eqref{e:sigmacommutativity} and \eqref{e:taufromsigmas}, for $c\in[3,n+1]$, we have $\tau^1_{2c} = \pi(\sigma_{1c})$ and \begin{align*} \tau^2_{1c} = \begin{cases} 1 + \pi(\sigma_{2c}) & \text{ if } n \equiv 2,3 \mod{4},\\ \pi(\sigma_{2c}) & \text{ if } n \equiv 0,1 \mod{4}. \end{cases} \end{align*} In the $\sigma$-matrix of $A$, the entries in the first two rows in column $c$ determine the parity type of $M_{c-2}$ for $c \in [3, n+1]$. These entries will be, respectively, \begin{align*} & (0,0), \, (0,1), \, (1,0) \, \text{ or } (1,1) \text{ if } M_{c-2} \text{ has parity type } 000, \, 011, \, 101 \text{ or } 110, \text{ or} \\ & (1,0), \, (1,1), \, (0,0) \, \text{ or } (0,1) \text{ if } M_{c-2} \text{ has parity type } 111, \, 100, \, 010 \text{ or } 001. \end{align*} Thus, if $n \equiv 0,1 \mod{4}$ the number of ones in the first row of the $\sigma$-matrix is $y_2+y_3$ and the number of ones in the second row is $y_1+y_3$. If $n \equiv 2,3 \mod{4}$ the number of ones in the first row of the $\sigma$-matrix is $z+y_1$ and the number of ones in the second row is $1 + y_1+y_3$, since $\pi(\sigma_{21})=1$. Now {\it(ii)} follows from \lref{l:sigmagraphdegrees}. On the other hand, suppose $z,y_1,y_2,y_3 \in [0,n-1]$ satisfy {\it(i)} and {\it(ii)}. Define a matrix $W =(w_{ij})$ of order $n+1$ such that $w_{12}=0$, and among the pairs $\{(w_{1c}, w_{2c}):c \in [3, n+1]\}$ there are, respectively, $z$, $y_1$, $y_2$ and $y_3$ occurrences of $(1,0)$, $(1,1)$, $(0,0)$ and $(0,1)$, if $n \equiv 2,3\mod{4}$ or $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$, if $n \equiv 0,1 \mod{4}$.
As in the proof of \tref{t:PPplausible} the remaining entries of $W$ above the main diagonal, except the last column, can be chosen to be either $0$ or $1$; then the last column and the remaining entries below the main diagonal can be determined such that $W$ is a $\sigma$-matrix corresponding to a PP-plausible $\tau$-parity. The condition $(ii)$ ensures that the first two rows of $W$ have the same total, mod 2. \end{proof} \begin{corollary}\label{c:n-1alloddsquares} There exists a PP-plausible $\tau$-parity corresponding to a complete set of equiparity MOLS if and only if $n\not\= 3\mod4$. \end{corollary} We stress that \cyref{c:n-1alloddsquares} only claims that for the corresponding $\mathrm{OA}(n+1,n)$, choosing the first two columns and any other column will create a subarray $B$ for which $\mathscr{M}(B)$ is equiparity. However, there will be Latin squares in the ensemble (arising from other choices of $3$ columns) which are not equiparity. For the rest of this section, we consider the number of equiparity Latin squares in the ensemble of an $\mathrm{OA}(k,n)$. Next, we derive some equations which will be used in several of our proofs. Let $A$ be an $\mathrm{OA}(k,n)$ where $3 \leqslant k \leqslant n+1$ and $n \geqslant 2$. Let $T$ be the total number of edges among the $\tau$-graphs for $A$. Let $x$ denote the number of equiparity Latin squares in the ensemble of $A$, so $x$ is the number of $111$ type Latin squares if $n \= 2,3\mod4$ or the number of $000$ type Latin squares if $n \= 0,1\mod4$. We next relate $T$ and $x$. If $n \= 0,1 \mod4$ then each non-equiparity Latin square in the ensemble contributes $2$ to $T$, while equiparity Latin squares contribute nothing. If $n \= 2,3\mod4$ then each non-equiparity Latin square in the ensemble contributes $1$ to $T$, while each equiparity Latin square contributes $3$. 
Thus, \begin{equation}\label{e:totedgex} T = \left\{ \begin{array}{ll} 2{k \choose 3} -2x & \quad \text{if } n \equiv 0,1\mod4,\\ 2x +{k \choose 3} & \quad \text{if } n \equiv 2,3\mod4.\\ \end{array} \right. \end{equation} We can also count $T$ using the row sums of the $\sigma$-matrix $M$, i.e.~the adjacency matrix of the $\sigma$-graph. Let $\mu_c$ denote the sum of the entries in row $c$ of $M$. Observe that, by \eqref{e:taufromsigmas}, $\tau^c_{ij}$ is $1$ if and only if exactly one of $\pi(\sigma_{ci})$ and $\pi(\sigma_{cj})$ is $1$, for distinct integers $c,i,j\in[1,k]$. For a fixed integer $c$, the number of pairs of columns such that the cells in row $c$ (not including the main diagonal) have entries $0$ and $1$, is the number of $\tau^c_{ij}$ which are $1$; there are $\mu_c(k-1-\mu_c)$ such pairs. Thus, \begin{equation}\label{e:Tintermsofmu} T = \sum_{c=1}^{k}\mu_c(k-1-\mu_c). \end{equation} \medskip Next we consider each of the congruence classes for $n$ modulo $4$ separately. When $n \equiv 0 \mod{4}$, there are many examples of orthogonal arrays $\mathrm{OA}(n+1,n)$ for which every Latin square in the ensemble is an equiparity Latin square (necessarily of type $000$). For example, as reported in \cite{GB12}, 18 of the 22 known projective planes of order 16 have the property that every associated $\mathrm{OA}(17,16)$ has only equiparity Latin squares in its ensemble. The only four that do not, namely MATH-D, JOHN-D, BBS4 and BBS4-D, still have the property that {\em some} associated $\mathrm{OA}(17,16)$ has only equiparity Latin squares in its ensemble. As an aside, we noticed that every one of the 22 projective planes has at least one associated complete set of MOLS in which each of the 15 Latin squares has all 48 permutations that contribute to the row, column and symbol parity being even. Such Latin squares are, of course, equiparity. Clearly it is possible for all the Latin squares in an ensemble to be equiparity. 
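The identity \eqref{e:Tintermsofmu} is purely combinatorial: once $\tau^c_{ij}$ is derived from row $c$ of a $0/1$ matrix via \eqref{e:taufromsigmas}, the count of edges per row is forced. The following script (our sketch, not part of the original argument) confirms this on random matrices:

```python
# Check the identity T = sum_c mu_c(k-1-mu_c): tau^c_{ij} is 1 exactly
# when row c of the sigma-matrix has different entries in columns i and j,
# so counting such pairs per row must agree with mu_c(k-1-mu_c).
import itertools
import random

random.seed(0)
for trial in range(20):
    k = random.randint(3, 10)
    # random sigma-matrix entries pi(sigma_ci); the diagonal is unused
    w = [[0 if c == i else random.randint(0, 1) for i in range(k)]
         for c in range(k)]
    # T counted directly from the tau-entries tau^c_{ij} = w[c][i] XOR w[c][j]
    T_direct = sum(w[c][i] != w[c][j]
                   for c in range(k)
                   for i, j in itertools.combinations(range(k), 2)
                   if c not in (i, j))
    # T counted from the row sums mu_c, as in the displayed identity
    mu = [sum(w[c][i] for i in range(k) if i != c) for c in range(k)]
    T_rowsums = sum(m * (k - 1 - m) for m in mu)
    assert T_direct == T_rowsums
print("identity verified")
```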
However there are restrictions on the number of equiparity Latin squares in the ensemble. \begin{theorem}\label{t:equiparity0mod4} If $n \equiv 0 \mod{4}$ then the number of equiparity Latin squares in the ensemble of an $\mathrm{OA}(n+1,n)$ is even and at least $n(n+1)(n-4)/24$. \end{theorem} \begin{proof} Suppose $n \equiv 0 \mod{4}$ and let $A$ be an $\mathrm{OA}(n+1,n)$. Let $T$ be the total number of edges among the $\tau$-graphs of $A$, $\mu_c$ be the sum of the entries in row $c$ of the $\sigma$-matrix, and $x$ be the number of equiparity Latin squares in the ensemble of $A$. By \eqref{e:totedgex} and \eqref{e:Tintermsofmu}, \[ T = 2 {n+1 \choose 3} - 2x = \sum_{c=1}^{n+1}\mu_c(n-\mu_c) \= 0 \mod4, \] where the congruence uses the fact that, by \lref{l:sigmagraphdegrees}, $\mu_c$ is even for each $c\in[1,n+1]$. It follows that $x \= {n+1 \choose 3} \mod2$, and hence $x$ is even, since $n \equiv 0 \mod4$. Finally, we note that $x$ is minimised when each $\mu_c=n/2$ and $T = (n+1)(n/2)^2$. So $x \geqslant n(n+1)(n-4)/{24}$. \end{proof} For $n\= 1\mod4$, the analogue of \tref{t:equiparity0mod4} is as follows, given that the maximum possible value of $T$ is $(n+1)^2(n-1)/4$: \begin{theorem}\label{t:equiparity1mod4} If $n \equiv 1 \mod{4}$ then the number of equiparity Latin squares in the ensemble of an $\mathrm{OA}(n+1,n)$ is at least $(n+1)(n-1)(n-3)/24$. \end{theorem} In the case $n \= 1\mod4$, we cannot deduce whether the number of equiparity Latin squares is odd or even. Indeed, both possibilities occur amongst the PP-plausible $\tau$-parities. To see this, consider $\sigma$-graphs that are isomorphic to cycles of length $3$ and $4$, respectively. By \eqref{e:totedgex} and \eqref{e:Tintermsofmu}, the difference in the number of equiparity Latin squares between these two examples is $n-2$, which is odd. 
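The closed forms of the two lower bounds follow from \eqref{e:totedgex} by routine algebra; as a mechanical check (ours, not from the source), $\big(2\binom{n+1}{3}-T_{\max}\big)/2$ agrees with the stated bounds:

```python
# Check the closed-form lower bounds in the last two theorems:
# for n = 0 mod 4, T <= (n+1)(n/2)^2 gives x >= n(n+1)(n-4)/24;
# for n = 1 mod 4, T <= (n+1)^2(n-1)/4 gives x >= (n+1)(n-1)(n-3)/24.
from math import comb

for n in range(4, 101, 4):                      # n = 0 mod 4
    T_max = (n + 1) * (n // 2) ** 2
    x_min = (2 * comb(n + 1, 3) - T_max) // 2   # from (e:totedgex)
    assert x_min == n * (n + 1) * (n - 4) // 24

for n in range(5, 101, 4):                      # n = 1 mod 4
    T_max = (n + 1) ** 2 * (n - 1) // 4
    x_min = (2 * comb(n + 1, 3) - T_max) // 2
    assert x_min == (n + 1) * (n - 1) * (n - 3) // 24
print("bounds confirmed")
```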
Another difference between the cases $n \= 0\mod4$ and $n \= 1\mod4$ can be seen by considering a Desarguesian plane $\Pi$ of order $n$ if $n$ is a prime power. It is well-known that the collineation group of $\Pi$ is doubly-transitive. Given our comments in \sref{s:intro}, this means that every set of MOLS associated with $\Pi$ is isotopic. (Firstly, transitivity on lines ensures there is only one isomorphism class of $\mathrm{OA}$ associated with $\Pi$. Secondly, as a result of double transitivity, every ordered pair of columns of the $\mathrm{OA}$ is equivalent. The first two columns define the rows and columns of the Latin squares and the ordering of the other columns of the $\mathrm{OA}$ merely determines the ordering of the Latin squares, which is irrelevant in a set of MOLS.) The standard set of MOLS associated with $\Pi$ consists exclusively of Latin squares that are isotopic to the elementary abelian group. By the above comments, the same must be true of {\it every} set of MOLS derived from $\Pi$. If $\Pi$ has even order, this means that every Latin square in the ensemble must be equiparity since the Cayley table of any group is equiparity (see \sref{s:intro}) and parity is an isotopism invariant for even orders, by \lref{l:tauparconj}. For odd orders the situation is different. In \sref{ss:swclass}, we saw that the Desarguesian plane of order $9$ does not correspond to any $\mathrm{OA}(10,9)$ for which the ensemble consists exclusively of equiparity Latin squares (although the other 3 projective planes of order $9$ do!). Unlike the case when $n\= 0,1\mod4$, it is impossible to build an $\mathrm{OA}(k,n)$ purely from equiparity Latin squares when $n\= 2,3\mod4$ and $k>3$. \begin{lemma}\label{l:notallequipar} Suppose $n \equiv 2,3 \mod{4}$ and $A$ is an $\mathrm{OA}(k,n)$. Let $B$ be any $\mathrm{OA}(4,n)$ formed from $4$ columns of $A$. At most $2$ of the $4$ Latin squares in the ensemble of $B$ are equiparity. 
\end{lemma} \begin{proof} Without loss of generality, consider columns 1,2,3,4 and suppose the Latin squares defined by columns $1,2,3$ and by columns $1,2,4$ are of parity type $111$. Then $\tau_{34}^1=0=\tau_{34}^2$ by \eqref{e:Fixedcol}, so the other two squares are not of parity type $111$. \end{proof} This last result captures the intention behind Theorem 3.3 in \cite{GB12}. However, that theorem was stated for Latin squares that are group-based. Such Latin squares are necessarily equiparity, as we noted in \sref{s:intro}, but when $n\equiv2\mod 4$ they are well known to have no orthogonal mate, making Theorem 3.3 as stated in \cite{GB12} vacuous. The class of equiparity Latin squares is far broader than the group-based Latin squares, yet it turns out there cannot be too many of them in large OAs: \begin{theorem}\label{t:asyequi} The proportion of the Latin squares in the ensemble of an $\mathrm{OA}(k,n)$ that are equiparity is no more than $\frac{1}{4}+o(1)$ for $n,k\rightarrow\infty$ with $n\equiv2,3\mod4$. \end{theorem} \begin{proof} We can bound the number of equiparity Latin squares in an $\mathrm{OA}(k,n)$ using the fact that the $\tau$-graphs are all $K_{a,b}$ where $a+b=k-1$. The maximum number of edges in total amongst all $G_c$ is $k \lfloor \frac{k-1}{2} \rfloor \lceil \frac{k-1}{2} \rceil$. Thus, by \eqref{e:totedgex}, the number of equiparity Latin squares is at most \begin{equation}\label{e:mostequi} \frac{1}{2}\left( k \lfloor (k-1)/2 \rfloor \lceil (k-1)/2 \rceil - {k \choose 3} \right). \end{equation} Hence asymptotically, the proportion of the ${k \choose 3}$ Latin squares that can be equiparity is at most $\frac{1}{4}+O(1/k)$. \end{proof} We next show that for every $n\= 2,3\mod4$ the bound \eref{e:mostequi} can be achieved by some PP-plausible $\tau$-parity.
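As a quick numerical probe of the $\frac{1}{4}+O(1/k)$ rate (our sketch, not part of the proof), the bound \eref{e:mostequi} as a fraction of ${k \choose 3}$ indeed approaches $1/4$:

```python
# Evaluate the upper bound on equiparity Latin squares as a fraction of
# the C(k,3) Latin squares in the ensemble; the ratio should tend to 1/4.
from math import comb

def max_equiparity(k):
    # the bound (e:mostequi): (k*floor((k-1)/2)*ceil((k-1)/2) - C(k,3)) / 2
    return (k * ((k - 1) // 2) * (k // 2) - comb(k, 3)) / 2

for k in (10, 100, 1000, 10000):
    ratio = max_equiparity(k) / comb(k, 3)
    print(f"k={k:6d}  ratio={ratio:.5f}")

assert abs(max_equiparity(10000) / comb(10000, 3) - 0.25) < 1e-3
```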
\begin{lemma} For each $n \equiv 2,3 \mod{4}$, there exists a PP-plausible $\tau$-parity of an $\mathrm{OA}(n+1,n)$ such that each $\tau$-graph is isomorphic to $K_1\cup K_{\lfloor{n/2} \rfloor, \lceil n/2 \rceil}$. \end{lemma} \begin{proof} In this proof, for every integer $\xi$ we let $(\xi)_n$ denote the lowest non-negative integer that is congruent to $\xi$ mod $n$. For distinct integers $i,j \in [1,n+1]$, define \[ \pi(\sigma_{ij}) = \left\{ \begin{array}{ll} 0 & \text{if } (j-i)_n \in [1, \lfloor n/2 \rfloor], \\ 1 & \text{otherwise.} \end{array} \right. \] By \tref{t:sigmaplausible} and \lref{l:sigmagraphdegrees}, this corresponds to a PP-plausible $\tau$-parity. In particular, by \eqref{e:taufromsigmas}, for distinct integers $c,i,j\in[1,n+1]$, \[ \tau^c_{ij} = \pi(\sigma_{ci}\sigma_{cj}) = \left\{ \begin{array}{ll} 1 & \text{if precisely one of }(i-c)_n\text{ and }(j-c)_n\text{ is in } [1,\lfloor n/2 \rfloor],\\ 0 & \text{otherwise.} \end{array} \right. \] Thus, for each $c \in [1,n+1]$, the $\tau$-graph $G_c$ is the disjoint union of an isolated vertex and a complete bipartite graph with partite sets of size $\lfloor \frac{n}{2} \rfloor $ and $\lceil\frac{n}{2} \rceil $. \end{proof} We have just considered the maximum value for the number of equiparity Latin squares in the ensemble of an $\mathrm{OA}(n+1,n)$. In the next two theorems, we derive a constraint in the form of a congruence and then a lower bound on this number. For the $n\= 3\mod4$ case in both results we invoke a form of standardisation different from \eref{e:convent}. It will be more convenient for us to choose the parity of the out-degrees of the vertices in the $\sigma$-graph $\mathscr{G}$. By \lref{l:Sym}, $\mathscr{G}$ is a tournament and by \lref{l:sigmagraphdegrees}, all out-degrees have the same parity. As $n+1$ is even, $\sigma$-complementation changes the parity of all out-degrees, so we can use it to select the parity that we want. 
\begin{theorem}\label{t:equiparity23mod4} If $n \equiv 2,3 \mod{4}$ then the number of equiparity Latin squares in the ensemble of an $\mathrm{OA}(n+1,n)$ is congruent to $\lceil n/4 \rceil\,\mod{4}$. \end{theorem} \begin{proof} Let $A$ be an $\mathrm{OA}(n+1,n)$. Let $x$ denote the number of equiparity Latin squares in the ensemble of $A$ and let $\mu_c$ denote the sum of the entries in row $c$ of the adjacency matrix of the $\sigma$-graph $\mathscr{G}$. By \lref{l:Sym}, $\mathscr{G}$ is a tournament, and by \lref{l:sigmagraphdegrees}, $\mu_c$ is odd for each $c\in[1,n+1]$ if $n \equiv 2 \mod{4}$. If $n \equiv 3 \mod{4}$, then by standardisation we may assume that the $\mu_c$ are all odd. Now, by \eqref{e:totedgex} and \eqref{e:Tintermsofmu}, \begin{align} 2x + {n+1 \choose 3} &= \sum_{c=1}^{n+1}\mu_c(n-\mu_c) \nonumber \\ &= n {n+1 \choose 2} - \sum_{c=1}^{n+1}\mu_c^2 \hspace{3cm} \textrm{ as } \mathscr{G} \text{ is a tournament} \label{e:totedges}\\ &\equiv n {n+1 \choose 2} - (n+1) \quad \mod{8}, \nonumber \end{align} since each $\mu_c$ is odd. Hence, we find that \[ x \= \left\{ \begin{array}{ll} \dfrac{(2n-3)(n+1)}{3}\cdot \dfrac{n+2}{4} \= \dfrac{n+2}{4}\mod4 & \text{ if } n \= 2 \mod4, \\[3ex] \dfrac{(2n-3)(n+2)}{3}\cdot \dfrac{n+1}{4} \= \dfrac{n+1}{4}\mod4 & \text{ if } n \= 3 \mod4. \\ \end{array} \right. \] \end{proof} We define a sequence $\mu_1,\dots,\mu_{n+1}$ to be {\em good} if \begin{equation}\label{e:rowsumbnd} \sum_{i=1}^m\mu_i\leqslant nm-\binom{m}{2} \end{equation} for each $m\in[1,n+1]$. For our next theorem we will need the following property of good sequences: \begin{lemma}\label{l:good} If $\mu_1,\dots,\mu_{n+1}$ is a good sequence and $\mu_{c}=\mu_{c+1}-2$ for some $c\in[1,n]$ then the sequence obtained by interchanging $\mu_c$ with $\mu_{c+1}$ is also good. \end{lemma} \begin{proof} Consider a hypothetical counterexample to the claim. Let $K=\sum_{i=1}^{c-1}\mu_i$.
Applying \eref{e:rowsumbnd} for $m\in\{c-1,c,c+1\}$, we see that \begin{align} K&\leqslant n(c-1)-\binom{c-1}{2},\label{e:pre}\\ K+\mu_c&\leqslant nc-\binom{c}{2}<K+\mu_c+2,\label{e:on}\\ K+2\mu_c+2&\leqslant n(c+1)-\binom{c+1}{2}.\label{e:post} \end{align} It follows from \eref{e:on} that $K+\mu_c=nc-\binom{c}{2}-\varepsilon$ where $\varepsilon\in\{0,1\}$. Subtracting this equation from \eref{e:post} gives $\mu_c+2\leqslant n-c+\varepsilon$. Then, using \eref{e:on} again, we get $K> nc-\binom{c}{2}-n+c-\varepsilon=n(c-1)-\binom{c-1}{2}+1-\varepsilon$, contradicting \eref{e:pre}. \end{proof} We can now derive a lower bound on the number of equiparity Latin squares in the ensemble. \begin{theorem}\label{t:linmanyequipar} Let $n\equiv2,3\mod4$. Any $\mathrm{OA}(n+1,n)$ has at least $\lceil n/4\rceil$ equiparity Latin squares in its ensemble. \end{theorem} \begin{proof} Let $A$ be an $\mathrm{OA}(n+1,n)$ and let $M$ be the $\sigma$-matrix of $A$. Let $\mu_i$ denote the total of the entries in row $i$ of $M$. In the $n\equiv2\mod4$ case, each $\mu_i$ is odd by \lref{l:sigmagraphdegrees}. In the $n\equiv3\mod4$ case, by standardisation we can ensure that each $\mu_i$ is even. In either case, we have \begin{equation}\label{e:mun-1} \mu_i\equiv n-1\quad\mod2 \end{equation} for each $i$. Let $T$ be the total number of edges among the $\tau$-graphs for $A$. The number of equiparity Latin squares in the ensemble of $A$ is $\big(T-{n+1\choose 3}\big)/2$ by \eref{e:totedgex}, which will be minimised by minimising $T$. By \eref{e:totedges}, this is achieved by maximising $\sum\mu_i^2$. We may assume that $\mu_1\geqslant\mu_2\geqslant\cdots\geqslant\mu_{n+1}$ by relabelling if necessary. Also, \eref{e:rowsumbnd} holds for each $m$ because the leading principal minor of order $m$ in $M$ has half its off-diagonal entries equal to zero. Assume now that $\mu_1,\dots,\mu_{n+1}$ is the monotonic good sequence that maximises $\sum\mu_i^2$ subject to \eref{e:mun-1}. 
We claim that the sequence can be found with a greedy algorithm, in the sense that \begin{equation}\label{e:greedy} \sum_{i=1}^m\mu_i\geqslant nm-\binom{m}{2}-1 \end{equation} for $1\leqslant m\leqslant n+1$. Suppose not, and let $m=a$ be such that this inequality fails. Consider a new sequence $\mu'_1,\dots,\mu'_{n+1}$ for which $\mu'_a=\mu_a+2$, $\mu'_{a+1}=\mu_{a+1}-2$, and $\mu'_i=\mu_i$ for $i\notin\{a,a+1\}$. Clearly this new sequence is good, satisfies \eref{e:mun-1} and can be made monotonic by repeated application of \lref{l:good}. However, \[ \sum_{i=1}^m(\mu'_i)^2=\sum_{i=1}^m\mu^2_i+4(\mu_a-\mu_{a+1})+8>\sum_{i=1}^m\mu^2_i, \] contradicting the optimality of our original sequence. This proves our claim that the optimal sequence satisfies \eref{e:greedy}. Note that there can be at most one monotonic good sequence satisfying \eref{e:mun-1} and \eref{e:greedy}, because the constraints are such that each term of the sequence is determined by its predecessors. We next argue that this optimal sequence is given by $\mu_1=\mu_2=\mu_3=n-1$, $\mu_4=n-3$ and $\mu_i=\mu_{i-4}-4$ for $4<i\leqslant n+1$. Certainly this is a monotonic sequence of non-negative integers, since $\mu_{n+1}=n-1-4(n-2)/4=1$ for $n\= 2\mod4$ and $\mu_{n+1}=n-3-4(n-3)/4=0$ for $n\= 3\mod4$. Also, for $m=4\alpha+\beta$ with $\beta\in\{1,2,3\}$, we have \begin{align*} \sum_{\smash{i=1}}^m\mu_i&=4(n-1+n-5+\cdots+n-4\alpha+3) -2\alpha +\beta(n-4\alpha-1)\\%+(n-3+n-7+\cdots+n-4\alpha+5)\\ &=4\alpha(n-2\alpha+1)-2\alpha +\beta(n-1-4\alpha) =nm-\binom{m}{2}+\binom{\beta}{2}-\beta. \end{align*} Since $\binom{\beta}{2}-\beta\in\{-1,0\}$, we see that both \eref{e:rowsumbnd} and \eref{e:greedy} are satisfied in this case. These constraints are also satisfied in the case $m=4\alpha$, because \begin{align*} \sum_{i=1}^m\mu_i&=4(n-1+n-5+\cdots+n-4\alpha+3)-2\alpha =4\alpha(n-2\alpha+1)-2\alpha =nm-\binom{m}{2}. 
\end{align*} Returning to \eref{e:totedges}, we now know that \begin{align*} T&\geqslant n{n+1\choose 2}-3\sum_{i=0}^{\lfloor(n-1)/4\rfloor}(n-1-4i)^2 -\sum_{i=0}^{\lfloor(n-3)/4\rfloor}(n-3-4i)^2\\ &= \begin{cases} n^3/6+n/3+1&\text{if }n\= 2 \mod4,\\ n^3/6+n/3+1/2&\text{if }n\= 3 \mod4,\\ \end{cases} \end{align*} which then implies the result, via \eref{e:totedges} and \eref{e:totedgex}. \end{proof} \begin{corollary}\label{cy:atleast2noniso} If $n \equiv 2 \mod{4}$ and $n>2$ then the ensemble of an $\mathrm{OA}(n+1,n)$ contains at least two Latin squares that are not isotopic to each other. \end{corollary} \begin{proof} By \tref{t:linmanyequipar} the ensemble contains an equiparity Latin square. However, given that $n>2$, \lref{l:notallequipar} tells us that the ensemble also contains a Latin square which is not equiparity. Parity is an isotopism invariant for even orders. \end{proof} In our definition of ensemble we have insisted that each set of three columns produces a single Latin square. If we had instead allowed the three columns to be used in any order then 6 potentially different Latin squares would be produced for each set of three columns. \cyref{cy:atleast2noniso} could then be strengthened to say that there will be at least 4 isotopism classes among the Latin squares corresponding to an $\mathrm{OA}(n+1,n)$ when $n\equiv2\mod4$, since each of the parity types 001,010,100,111 must occur. It is also worth noting that there is a PP-plausible $\tau$-parity that achieves the bound in \tref{t:linmanyequipar}. We can build the adjacency matrix $M$ for the corresponding $\sigma$-graph as follows. Let \[ B_3=\left[ \begin{array}{ccc} 0&0&1\\ 1&0&0\\ 0&1&0\\ \end{array} \right] \mbox{ \ and \ } B_4=\left[ \begin{array}{cccc} 0&0&1&1\\ 1&0&0&1\\ 0&1&0&1\\ 0&0&0&0\\ \end{array} \right]. 
\] If $n=4\alpha+2$, place $\alpha$ copies of the block $B_4$ and one copy of $B_3$ down the diagonal of $M$; if $n=4\alpha+3$, place $\alpha+1$ copies of $B_4$ down the diagonal of $M$. Fill all other entries above the diagonal with 1's and all those below with 0's. It is routine to check that this construction has the required properties, including achieving the optimal sequence $\{\mu_i\}$. In contrast to \tref{t:linmanyequipar}, there are plausible $\tau$-parities for which the ensemble contains no equiparity Latin squares. \begin{lemma}\label{l:zeroallodd} If $n \equiv 2,3 \mod{4}$ and $k$ is arbitrary, then there exists a plausible $\tau$-parity for which there are no equiparity Latin squares in the ensemble of the associated $\mathrm{OA}(k,n)$. \end{lemma} \begin{proof} Take the $\sigma$-matrix to be lower triangular. Then the $\tau$-parities (for $i<j$) are given by: $$ \tau_{ij}^c = \left\{ \begin{array}{ll} 0 & \textrm{if } i<j<c \textrm{ or } c<i<j, \\ 1 & \textrm{if } i<c<j. \end{array} \right.$$ For any three distinct integers $c,i,j \in [1, k]$, exactly one of $\tau_{jc}^i, \tau_{ic}^j, \tau_{ij}^c$ is $1$, and the result follows. \end{proof} We stress that the plausible $\tau$-parities in \lref{l:zeroallodd} are not PP-plausible, by \lref{l:sigmagraphdegrees}. Most, though not all, of the restrictions that we have demonstrated on the parities of Latin squares in the ensemble only apply to the projective plane case. \section{Concluding remarks} We have considered two notions of parity for the orthogonal arrays that correspond to MOLS. One of these, $\sigma$-parity, was introduced by Glynn and Byatt \cite{GB12} in a more limited setting. The other, $\tau$-parity, is a direct generalisation of the row, column and symbol parities of Latin squares. These two notions of parity are closely related; $\tau$-parity is determined by $\sigma$-parity, and the converse is true up to complementation (see \sref{ss:equivalencetausigma}). 
The relationship between the two parities proved very fruitful throughout our investigations. For example, it provided a very simple proof of \lref{l:LStriangleparity}, which generalises the well-known relationship between the row, column and symbol parities of Latin squares. In \sref{s:graphs} we introduced useful graph theoretic models for the two notions of parity. Each $\mathrm{OA}(k,n)$ has one $\sigma$-graph, $k$ $\tau$-graphs and one ``stack'' which is formed by merging the $\tau$-graphs, modulo $2$. The $\sigma$-graph is undirected for $n\= 0,1\mod 4$, but is a tournament for $n\= 2,3\mod 4$ (\lref{l:Sym}). The $\tau$-graphs and stack are highly structured (\lref{l:taugraphbipartite}, \tref{t:stackedgraph}). Further restrictions apply to the structure of $\sigma$-graphs, $\tau$-graphs and the stack for $\mathrm{OA}$s that come from projective planes (\lref{l:sigmagraphdegrees}, \lref{l:parityofpartitesets}, \cyref{c:stackPP}). These restrictions are based on the equivalence between projective planes and sharply 2-transitive sets of permutations. In \sref{s:numpar} we considered the question of how many different $\tau$-parities might be obtained by an $\mathrm{OA}(k,n)$. We phrased this in terms of $\mathcal{B}(k,n)$, the information content of the $\tau$-parity, in bits. We showed that $\mathcal{B}(k,n)\leqslant{k\choose 2}-1$ (\cyref{c:dim}). Later we showed that $\mathcal{B}(n+1,n)\leqslant {n \choose 2}$ if $n$ is odd and $\mathcal{B}(n+1,n)\leqslant { {n \choose 2}-1}$ if $n$ is even (\cyref{c:dimPP}). An interesting question is whether there are restrictions on parities of an $\mathrm{OA}(k,n)$ other than the ones that we have demonstrated. This question is wide open for the case when $k$ is comparable in size to $n$. However, we conjecture that when $n$ is large relative to $k$, no further restrictions apply (\cjref{cj:allactual}). 
We proved this conjecture for $k\le5$ in \sref{ss:plausisactual} by showing that all plausible $\tau$-parities are actually achieved. Moreover, it is possible to embed any $k$-MOLS of order $n$ inside some set of $k$-MOLS of order $N$ whenever $N$ is large enough. Hence if $(k,n)$ is such that $\mathcal{B}(k,n)$ achieves its upper bound then $\mathcal{B}(k,N)=\mathcal{B}(k,n)$ for all large $N$. An open question, related to the issue of whether there are further constraints when $k$ is comparable to $n$, is the issue of unique completion. It was shown by Metsch \cite{Met91} that any $\mathrm{OA}(n-O(n^{1/3}),n)$ has a unique completion to an $\mathrm{OA}(n+1,n)$, up to isomorphism. It would be interesting to find parity analogues of this result. Examining the proof of \tref{t:PPplausible}, we see that the $\sigma$-parity of an $\mathrm{OA}(n,n)$ determines the $\sigma$-parity of its completion to an $\mathrm{OA}(n+1,n)$, but the same is not true for an $\mathrm{OA}(n-1,n)$ without further developments in the theory. In \sref{s:ensemble} we considered the {\em ensemble} of an $\mathrm{OA}(k,n)$, which is the set obtained by taking 3 columns of the $\mathrm{OA}$ at a time and interpreting the result as a Latin square. It turns out that there are many bounds and congruences that restrict the number of Latin squares of each parity that may occur within the ensemble. The restrictions, which mostly apply when $k=n+1$, are weakest for $n\= 0,1\mod 4$. Many of the results in \sref{s:ensemble} that apply to $n\= 2\mod 4$ also apply to $n\= 3\mod 4$, but are not so limiting in that case since parity is not an isotopism invariant for odd $n$. Hence, our constraints are strictest when $n\= 2\mod 4$, perhaps offering some insight into why projective planes of these orders are hard to construct (and are believed by many not to exist for $n>2$). In \cyref{cy:atleast2noniso} we showed that all the Latin squares in the ensemble cannot be isotopic to each other when $n\= 2\mod4$. 
Indeed, as $n$ grows the number of ``equiparity'' Latin squares in the ensemble grows at least linearly (\tref{t:linmanyequipar}), but it can never be more than $\frac14+o(1)$ of all the Latin squares in the ensemble (\tref{t:asyequi}). The preponderance of even parities among the known projective planes of order 16 invites further investigation. Certainly, for $n\= 0\mod4$ all of the constraints that we have found are trivially satisfied when all parities are even. This means the $\sigma$-graph, all $\tau$-graphs and the stack are all empty graphs, which seems to be the easiest way to satisfy all known requirements. In contrast, for $n\= 2\mod 4$ all these graphs are forced to have edges, and the constraints such as \lref{l:parityofpartitesets} and \cyref{c:stackPP} seem at first glance to be much harder to satisfy. However, it is worth stressing that in \tref{t:PPplausible} we have a mechanism for producing many choices for the parities that satisfy all the constraints that we have demonstrated in this paper. Our work cannot be used to rule out the existence of any projective plane without the discovery of a new constraint on its parity. \subsection*{Acknowledgements} The authors are very grateful to Peter Dukes for supplying \tref{t:asymIMOLSexistence} and to Darcy Best for helpful proofreading.
\let\oldthebibliography=\thebibliography
\let\endoldthebibliography=\endthebibliography
\renewenvironment{thebibliography}[1]{%
\begin{oldthebibliography}{#1}%
\setlength{\parskip}{0.4ex plus 0.1ex minus 0.1ex}%
\setlength{\itemsep}{0.4ex plus 0.1ex minus 0.1ex}%
}%
{%
\end{oldthebibliography}%
}
\section{Introduction} \begin{figure}[hb!] \begin{center} \includegraphics[width=0.8\linewidth]{general_idea_in_loss_space.pdf} \caption{ {With BPN, one can switch at runtime to the network parameters that are globally optimal for each task. Training trajectories are illustrated in loss and parameter space. The green curve shows loss as a function of network parameters for a first task A, with optimal parameters shown by the green circle. The purple curve and circle correspond to a second task B. Training first on task A and then on task B with stochastic gradient descent (SGD, without any constraints on parameters, gray) leads to optimal parameters for task B (purple circle), but those are destructive for task A. When, instead, learning task B using EWC or PSP (which impose some constraints on parameters, yellow), the solution is a compromise that can be sub-optimal for both tasks (black circle). Beneficial perturbations (blue curve for task A, red curve for task B) push the representations learned by EWC or PSP back to their task-optimal states.}} \label{fig:concept_in_loss_space} \end{center} \end{figure} {The human brain is the benchmark of adaptive learning. While interacting with new environments that are not fully known to an individual, it is able to quickly learn and adapt its behavior to achieve goals as well as possible, in a wide range of environments, situations, tasks, and problems. In contrast, deep neural networks only learn one sophisticated but fixed mapping between inputs and outputs, thereby limiting their application in more complex and dynamic situations in which the mapping rules are not kept the same but change according to different tasks or contexts. One such failure case is continual learning: learning new independent tasks sequentially without forgetting previous tasks. In the domain of image classification, for example, each task may consist of learning to recognize a small set of new objects.
A standard neural network only learns a fixed mapping rule between inputs and outputs after training on each task. Training the same neural network on a new task would destroy the learned fixed mapping of an old task. Thus, current deep learning models based on stochastic gradient descent suffer from so-called ``catastrophic forgetting'' \citep{mccloskey1989catastrophic,french1994dynamically,sloman1992episodic}, in that they forget all previous tasks after training each new one.} {Here, we propose a new biologically plausible method (Discussion)~--- Beneficial Perturbation Network (BPN)~--- to accommodate these dynamic situations. The key new idea is to allow one neural network to learn potentially \textit{unlimited} task-dependent mappings and to switch between them at runtime. To achieve this, we first leverage existing lifelong learning methods to reduce interference between successive tasks (Elastic Weight Consolidation, EWC \citep{kirkpatrick2017overcoming}, or parameter superposition, PSP \cite{cheung2019superposition}). We then add out-of-network, task-dependent bias units, to provide per-task correction for any remaining parameter drifts due to the learning of a sequence of tasks. We compute the most beneficial biases~---~beneficial perturbations~---~for each task in a manner inspired by recent work on adversarial examples.
The central difference is that, instead of adding adversarial perturbations that can force the network into misclassification, beneficial perturbations can push the drifted representations of old tasks back to their initial task-optimal working states (Fig.~\ref{fig:concept_in_loss_space}).} \begin{figure*}[htb] \begin{center} \includegraphics[height = 15cm]{different_types_publication_ready.pdf} \caption{{\bf Concept:} Type 1 - constrain the network weights while training the new task: (a) Retraining models such as elastic weight consolidation \citep{kirkpatrick2017overcoming}: retrains the entire network learned on previous tasks while using a regularizer to prevent drastic changes in the original model. Type 2 - expanding and retraining methods (b-c); (b) Expanding models such as progressive neural networks \citep{rusu2016progressive} expand the network for new task \textit{$t$} without any modifications to the network weights for previous tasks. (c) Expanding model with partial retraining such as dynamically expandable networks \citep{yoon2018lifelong} expand the network for new task t with partial retraining on the network weights for previous tasks. Type 3 - episodic memory methods (d): Methods such as Gradient Episodic Memory \citep{lopez2017gradient} store a subset of the original dataset from previous tasks into the episodic memory and replays them with new data during the training of new tasks. Type 4 - Partition network (e): these use context or mask matrices to partition the core network into several sub-networks for different tasks \citep{cheung2019superposition,zeng2019continual,mallya2018piggyback,du2019single,yoon2019oracle}. Type 5 - beneficial perturbation methods (f): Beneficial perturbation networks create beneficial perturbations which are stored in bias units for each task. Beneficial perturbations bias the network toward that task and thus allow the network to switch into different modes to process different independent tasks. 
It retrains the normal weights learned from previous tasks using elastic weight consolidation \citep{kirkpatrick2017overcoming} or parameter superposition \citep{cheung2019superposition}. (g) Strengths and weaknesses for each type of method.} \label{fig:concept} \end{center} \end{figure*} There are three major benefits of BPN: {\bf{1)}} BPN is memory and parameter efficient: to demonstrate it, we validate our BPN for continual learning on incremental tasks. We test it on multiple public datasets (incremental MNIST \citep{lecun1998gradient}, incremental CIFAR-10 and incremental CIFAR-100 \citep{krizhevsky2009learning}), on which it achieves better performance than the state-of-the-art. For each task, by adding bias units that store beneficial perturbations to every layer of a 5-layer fully connected network, we only introduce a 0.3\% increase in parameters, compared to a 100\% parameter increase for models that train a separate network, and 11.9\% - 60.3\% for dynamically expandable networks \citep{yoon2018lifelong}. Our model does not need any episodic memory to store data from the previous tasks and does not need to replay them during the training of new tasks, compared to episodic memory methods \citep{rebuffi2017icarl,lopez2017gradient,rannen2017encoder,rios2018closed}. Our model does not need large context matrices, compared to partition methods \citep{cheung2019superposition,zeng2019continual,yoon2019oracle,mallya2018piggyback,du2019single,farajtabar2020orthogonal,srivastava2013compete,masse2018alleviating}. {\bf{2)}} BPN achieves state-of-the-art performance across different datasets and domains: to demonstrate it, we consider a sequence of eight unrelated object recognition datasets (Experiments). After training on the eight complex datasets sequentially, the average test accuracy of BPN is better than the state-of-the-art. 
{\bf{3)}} BPN has the capacity to accommodate a large number of tasks: to demonstrate this, we test a sequence of 100 permuted MNIST tasks (Experiments). A variant of BPN that uses PSP to constrain the normal network achieves 30.14\% better performance than the second best, the original PSP \citep{cheung2019superposition}, a partition method which performs well in incremental tasks and eight object recognition tasks. Thus, BPN is a promising approach to continual learning compared with the other types of methods. {To lay out the foundation of our approach we start by introducing the following key concepts: Sec.~\ref{sectypes}: Different types of methods for enabling lifelong learning; Sec.~\ref{adp}: Adversarial directions and perturbations; Sec.~\ref{bdp}: Beneficial directions and perturbations, and the effects of beneficial perturbations in sequential learning scenarios; Sec.~\ref{bpn}: Structure and updating rules for BPN.} {We then present experiments (Sec.~\ref{experiments}), results (Sec.~\ref{results}) and discussion (Sec.~\ref{discussion}).} \section{Types of methods for enabling lifelong learning} \label{sectypes} Four major types of methods have been proposed to alleviate catastrophic forgetting. Type 1: constrain the network weights to preserve performance on old tasks while training the new task \citep{kirkpatrick2017overcoming,lee2017overcoming, aljundi2018memory} (Fig.~\ref{fig:concept}a). A famous example of type 1 methods is EWC \citep{kirkpatrick2017overcoming}. EWC constrains certain parameters based on how important they are to previously seen tasks. {The importance of each parameter is calculated from its task-specific Fisher information matrix. However, solely relying on constraining the parameters of the core network eventually exhausts the core network's capacity to accommodate new tasks.
After learning many tasks, EWC can no longer learn because the parameters become too constrained (see Results).} Type 2: dynamic network expansion \citep{li2017learning,lee2017overcoming,rusu2016progressive,yoon2018lifelong} creates new capacity for the new task, which can often be combined with constrained network weights for previous tasks (Fig.~\ref{fig:concept}b-c); {However, this type is not scalable because it is not parameter efficient (e.g., 11.9\% - 60.3\% additional parameters per task for dynamically expandable networks \citep{yoon2018lifelong})}. Type 3: using an episodic memory \citep{rebuffi2017icarl,lopez2017gradient,rannen2017encoder} to store a subset of the original dataset from previous tasks, then rehearsing it while learning new tasks to maintain accuracy on the old tasks (Fig.~\ref{fig:concept}d). {However, this type is not scalable because it is neither memory nor parameter efficient.} All three approaches attempt to shift the network's single fixed mapping initially obtained by learning the first task to a new one that satisfies both old and new tasks. They create a new, but still fixed, mapping from inputs to outputs across all tasks so far, combined. Type 4: Partition Network: using task-dependent context \citep{cheung2019superposition,zeng2019continual,yoon2019oracle,masse2018alleviating} or mask matrices \citep{mallya2018piggyback,du2019single,farajtabar2020orthogonal,srivastava2013compete} to partition the original network into several small sub-networks (Fig.~\ref{fig:concept}e, flow chart - Fig.~\ref{fig:Flow_charts}a). Zeng {\em et al.} \cite{zeng2019continual} used context matrices to partition the network into independent subspaces spanned by rows in the weight matrices to avoid interference between tasks. However, context matrices introduce as many additional parameters as training a separate neural network for each new task (additional 100\% parameters per task).
To reduce parameter costs, Cheung {\em et al.} proposed binary context matrices \citep{cheung2019superposition}, further restricted to diagonal matrices with -1 and 1 values. The restricted context matrices \citep{zeng2019continual} (1 and -1 values) behave similarly to mask matrices \citep{mallya2018piggyback} (0 and 1 values) that split the core network into several sub-networks for different tasks. With too many tasks, the core network would eventually run out of capacity to accommodate any new task, because there is no vacant route or subspace left. Although type 4 methods create multiple input-to-output mappings for different tasks, many of these methods are too expensive in terms of parameters, and none of them has enough capacity to accommodate numerous tasks because methods such as PSP run out of the core network's unrealized capacity. In marked contrast to the above artificial neural network methods, here, we propose a fundamentally new fifth type (Fig.~\ref{fig:concept}f, flow chart - Fig.~\ref{fig:Flow_charts} b): we add out-of-network, task-dependent bias units to the neural network. Bias units enable a neural network to switch into different modes to process different independent tasks through beneficial perturbations (the memory storage cost of these new bias units is actually lower than the cost of adding a new mask or context matrix). With only an additional 0.3\% of parameters per mode \footnote{ {See the supplementary discussion for more information about additional parameter costs.}}, this structure allows BPN to learn potentially unlimited task-dependent mappings from inputs to outputs for different tasks. The strengths and weaknesses of each type are summarized in Fig.~\ref{fig:concept}g. \begin{figure*}[htb] \begin{center} \includegraphics[width=0.9\linewidth]{beneficial_perturbation_story_publication_ready.pdf} \caption{Defining adversarial perturbations in input space vs. beneficial perturbations in activation space.
We consider two digit recognition tasks: Task A (recognizing 1s and 2s) and Task B (recognizing 3s and 4s). (a) {\bf Adversarial directions (AD)}. Adding adversarial perturbations (calculated from digit 1) to an input digit 2 can be viewed as adding an adversarial direction vector (gray arrow) to the clear input image of digit 2 in the input space. Thus, the network misclassifies the clear input image of digit 2 as digit 1. Beneficial directions, in contrast, do not operate by adding beneficial perturbations to the clear input image of digit 2 in the input space to assist correct classification (orange arrow). (b) {\bf Beneficial directions (class specific) for each class of task A.} $R_1$ ($R_2$) is the classification region (region of constant estimated label) of digit 1 (digit 2) from the MNIST dataset. Subregion $R_{1\_high}$ ($R_{1\_low}$) is the high (low) confidence classification region of digit 1, and likewise for $R_{2\_high}$ ($R_{2\_low}$) for digit 2. The point $A_1$ represents the activations of the normal neurons of each layer from an input image of task A. It lies in the intersection of $R_{1\_low}$ and $R_{2\_low}$. $BD_1$ ($BD_2$) are beneficial directions for class digit 1 (digit 2). $A_1 + BD_1$ (blue arrows) or $A_1 + BD_2$ (red arrows) pushes the activation $A_1$ across the decision boundary of $R_2$ ($R_1$) and towards $R_{1\_high}$ ($R_{2\_high}$). Thus, the network classifies $A_1 + BD_1$ ($A_1 + BD_2$) as digit 1 (digit 2) with high confidence. (c) {{\bf After training task B, beneficial perturbations (task specific) for task A push the drifted representations of inputs from task A back to the initial optimal working region of task A.}} $R_3$ ($R_4$) is the classification region (region of constant estimated label) of digit 3 (digit 4) from the MNIST dataset. $BD_1$ ($BD_2$) is a beneficial direction for digit 1 (digit 2).
During the training of task A, the network has been trained on two images from digit 1 ($1^a$ and $1^b$) and two images from digit 2 ($2^a$ and $2^b$). Thus, the beneficial perturbations for task A are the vector ($BD_2^{a} + BD_1^{a} + BD_2^{b} + BD_1^{b}$). After training task B, with gradient descent, point $A_1$ in (b) drifts to $A'_1$, which lies inside the classification regions of task B ($R_3$ or $R_4$). The drifted point $A'_1$ alone cannot be correctly classified as digit 1 or 2 because it lies outside of the classification regions of task A ($R_1$ or $R_2$). At test time, adding the beneficial perturbations for task A to the activations of $A'_1$ drags it back to the correct classification regions for task A (intersection of $R_1$ and $R_2$). Thus, it biases the network's outputs toward the correct classification region and pushes task representations back to {their initial task-optimal working region.} }\label{fig:beneficial_perturbations} \end{center} \end{figure*} \section{Adversarial directions and perturbations} \label{adp} {Three spaces of a neural network are important for this and the following sections: the {\em input space} is the space of input data (e.g., pixels of an image); the {\em parameter space} is the space of all the weights and biases of the network; the {\em activation space} is the space of all outputs of all neurons in all layers of the network.} By adding a carefully computed "noise" (adversarial perturbations) to the input space of a picture, without changing the neural network, one can force the network into misclassification. The noise is usually computed by backpropagating the gradient in a so-called "adversarial direction", such as by using the fast gradient sign method (FGSD) \citep{tramer2017space}. For example, consider a task of recognizing handwritten digits "1" versus "2".
Adversarial perturbations aimed at misclassifying an image of digit 2 as digit 1 may be obtained by backpropagating from the class digit 1 to the input space, following any of the available adversarial directions. In Fig.~\ref{fig:beneficial_perturbations}a, adding adversarial perturbations to the input image can be viewed as adding an adversarial direction vector (gray arrows $AD$) to the clear (non-perturbed) input image of digit 2. The resulting vector crosses the decision boundary. Thus, adversarial perturbations can force the neural network into misclassification, here from digit 2 to digit 1. Because the dimensionality of adversarial directions is around 25 for MNIST \citep{tramer2017space}, when we project them into a 2D space, we use the fan-shaped gray arrows to depict those dimensions. \section{Beneficial directions and perturbations, \& The effects of beneficial perturbations in multitask sequential learning scenarios} \label{bdp} {In this section, we first introduce the definition of beneficial directions and beneficial perturbations. Then, we explain why beneficial perturbations can help a network recover from parameter drift of old tasks after learning new tasks and can push task representations back to their initial task-optimal working region.} We consider two incremental digit recognition tasks: Task A (recognizing 1s and 2s) and Task B (recognizing 3s and 4s). Attack and defense researchers usually view adversarial examples as a curse of neural networks, but we view them as a gift for solving continual learning.
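To make the two uses of sign-gradient "noise" concrete, the following toy sketch (our own illustration, not the paper's code; the linear-softmax model and all names are hypothetical) perturbs an input either up the loss gradient of the true class (adversarial) or down it, using the input's own correct class (beneficial):

```python
import numpy as np

def nll(W, x, target):
    """Negative log-likelihood of `target` under a toy linear-softmax model."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[target])

def input_gradient(W, x, target):
    """Gradient of the loss with respect to the input x."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    return W.T @ (p - onehot)

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # 2 classes, 4 input "pixels" (toy sizes)
x = rng.normal(size=4)
g = input_gradient(W, x, target=0)

# Adversarial direction: step *up* the true-class loss -> toward misclassification.
x_adv = x + 0.25 * np.sign(g)
# Beneficial direction: step *down* the loss of the input's own correct class.
x_ben = x - 0.25 * np.sign(g)
```

The same sign-step mechanics carry over to BPN, except that the perturbation is applied in activation space rather than to the input image.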
Instead of adding input "noise" (adversarial perturbations) to the {\em input space} calculated from other classes to force the network into misclassification, {we add "noise" to the {\em activation space}, using {\em beneficial perturbations} stored in bias units added to the {\em parameter space} (Supplementary Fig.~\ref{fig:Flow_charts}b), calculated from the input's own correct class to assist in correct classification.} To understand beneficial perturbations, we first explain beneficial directions. Beneficial directions are vectors that point toward the high-confidence classification region for each class (Fig.~\ref{fig:beneficial_perturbations}b); ${BD}_1$ (${BD}_2$) are the beneficial directions that point to the high-confidence classification region of digit 1 (digit 2). The point $A_1$ represents the activation of the normal neurons of each layer generated from an input image of task A. $A_1 + {BD}_1$ ($A_1 + {BD}_2$) pushes the activation $A_1$ across the decision boundary of $R_2$ ($R_1$) and toward $R_{1\_high}$ ($R_{2\_high}$). Thus, the network would classify $A_1 + {BD}_1$ ($A_1 + {BD}_2$) as digit 1 (2) with high confidence. To overcome catastrophic forgetting, we create beneficial perturbations for each task and store them in task-dependent bias units (Fig.~\ref{fig:explanation_structure}, Supplementary Fig.~\ref{fig:Flow_charts}b). Beneficial perturbations allow a neural network to operate in different modes by biasing the network toward that particular task, even though the shared normal weights become contaminated by other tasks. The beneficial perturbations for each task are created by aggregating the beneficial direction vectors sequentially for each class through mini-batch backpropagation. For example, during the training of task A, the network has been trained on two images from digit 1 ($1^a$ and $1^b$) and two images from digit 2 ($2^a$ and $2^b$).
The beneficial perturbations for task A are the summation of the beneficial directions calculated from each image ($BD_2^{a} + BD_1^{a} + BD_2^{b} + BD_1^{b}$ in Fig.~\ref{fig:beneficial_perturbations}c, {where $BD_i^j$ is the beneficial direction for sample $j$ in class $i$}). During the training of task B, with gradient descent, the point $A_1$ (Fig.~\ref{fig:beneficial_perturbations}b) drifts to $A'_1$, which lies inside the classification regions for task B ($R_3 \cup R_4$). The drifted $A'_1$ alone cannot be classified as digit 1 or 2 since it lies outside of the classification regions of task A ($R_1 \cup R_2$). However, during testing of task A, after training task B, adding beneficial perturbations for task A to the drifted activation ($A'_1$) drags it back to the correct classification regions for task A ($R_1 \cup R_2$ in Fig.~\ref{fig:beneficial_perturbations}c). Thus, beneficial perturbations bias the neural network toward that task and {push task representations back to their initial task-optimal working region. Note that in this work we focus on adding more compact beneficial perturbations to the activation space, as adding perturbations to the input space has already been explored in adversarial attack methods, and adding perturbations to the parameter space is unlikely to be scalable due to the very large number of parameters in a typical neural network.} \begin{figure*}[htb] \begin{center} \includegraphics[width=0.8\linewidth]{structure_publication_ready.pdf} \caption{ {\bf{ Beneficial perturbation network (BD + EWC or BD + PSP variant) with two tasks.}} (a) Structure of beneficial perturbation network. (b) Train on task A. Backpropagating through the network to the bias units for task A in the beneficial direction (FGSD) using the input's own correct class (digit labels 1 and 2), and to the normal weights (gradient descent). (c) Test on task A. Feed the input images to the network.
Activating bias units for task A and adding the stored beneficial perturbations to the activations. The beneficial perturbations bias the network into its mode for the digits 1, 2 classification task. (d) Train on task B. Backpropagating through the network to the bias units for task B in the beneficial direction (FGSD) using the input's own correct class (digit labels 3 and 4), and to the normal weights (constrained by EWC or PSP). (e) Test on task B. Feed the input images to the network. Activating bias units for task B and adding the stored beneficial perturbations to the activations. The beneficial perturbations bias the network into its mode for the digits 3, 4 classification task.} \label{fig:explanation_structure} \end{center} \end{figure*} \section{Beneficial Perturbation Network} \label{bpn} We implemented two variants of BPN: BD + EWC and BD + PSP (Experiments). The backbone - BD (updating extra out-of-network bias units in beneficial directions to create beneficial perturbations) - is the same for both methods. The only difference is that BD + EWC (BD + PSP) uses the EWC (PSP) method to retrain the normal weights while attempting to minimize disruption of old tasks. Here, we choose BD + EWC to explain our method (for BD + PSP, see Supplementary). We use a scenario with two tasks for illustration: task A - recognizing MNIST digits 1s, 2s; task B - recognizing MNIST digits 3s, 4s. BPN has task-dependent bias units ($\mathbf{BIAS}_{t}^{i}\in R^{1{\times}K}$, where $K$ is the number of normal neurons in each layer, $i$ is the layer number, and $t$ is the task number) in each layer to store the beneficial perturbations. The beneficial perturbations are formulated as an additive contribution to each layer's weighted activations. Unlike most adversarial perturbations, beneficial perturbations are not specific to each example, but are applied to all examples in each task (Fig.~\ref{fig:beneficial_perturbations} c, d).
We define beneficial perturbations as a task-dependent bias term: \begin{equation} {\bm{V}}^{i+1} = \sigma({\bm{W}}^{i}{\bm{V}}^{i} +b^i +\mathbf{BIAS}_{t}^{i} ) \ \ \ \mathbf{\forall} \ \ i \in [1,n] \label{Eqn:activations_rules} \end{equation} \noindent where ${\bm{V}}^{i}$ are the activations at layer $i$, ${\bm{W}}^{i}$ are the normal weights at layer $i$, $\mathbf{BIAS}_{t}^{i}$ are the task-dependent bias units at layer $i$ for task $t$, $\sigma(\cdot)$ is the nonlinear activation function at each layer, $b^i$ is the normal bias term at layer $i$, and $n$ is the number of layers. For a simple fully connected network (Fig.~\ref{fig:explanation_structure} a), the forward functions are: \begin{equation} {\bm{V}}^{1} = \sigma({\bm{W}}^{1}{\bm{X}}_{t}+ b^1 +\mathbf{BIAS}_{t}^{1} ) \end{equation} \begin{equation} {\bm{V}}^{2} = \sigma({\bm{W}}^{2}{\bm{V}}^1+b^2 +\mathbf{BIAS}_{t}^{2}) \end{equation} \begin{equation} \mathbf{y} = Softmax({\bm{W}}^{3}{\bm{V}}^2+ b^3 +\mathbf{BIAS}^{3}_{t}) \end{equation} \noindent where $\mathbf{y}$ is the output logits, ${\bm{X}}_{t}$ is the input data for task $t$, $Softmax$ is the normalization function, and the other notations are the same as in Eqn.~\ref{Eqn:activations_rules}. During the training of a specific task, the bias units are the product of two terms\footnote{the factorization provides more degrees of freedom to better learn the beneficial perturbations \citep{haeffele2017global,du2019gradient}}: ${\bm{M}}_{t}^{i}\in R^{1{\times}H}$ and ${\bm{W}}_{t}^{i}\in R^{H{\times}K}$ ($H$ is the hidden dimension (a hyper-parameter), $K$ is the number of normal neurons in each layer, and $t$ is the task number). After training a specific task, we discard both ${\bm{M}}_{t}^{i}$ and ${\bm{W}}_{t}^{i}$, and only keep their product $\mathbf{BIAS}_{t}^{i}$, reducing memory and parameter costs to a negligible amount (a 0.3$\%$ parameter increase per task and $4{\times}K$ bytes per layer per task, since it is just a bias term).
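The forward rule of Eqn.~\ref{Eqn:activations_rules} and the factorized bias units can be sketched in a few lines of NumPy (a toy illustration under our own assumptions: the layer sizes, hidden dimension $H$, and all class/function names are hypothetical, and ReLU stands in for $\sigma$ at every layer, including the output):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class ToyBPN:
    """Toy forward pass with task-dependent bias units:
    V^{i+1} = relu(W^i V^i + b^i + BIAS_t^i)."""
    def __init__(self, sizes, H=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(m) for m in sizes[1:]]
        self.H = H
        self.bias = {}   # frozen per-task bias vectors, one per layer

    def add_task(self, t, rng):
        # While training task t, BIAS_t^i is the product M_t^i (1 x H) @ W_t^i (H x K);
        # after training, only the product is kept and the two factors are discarded.
        M  = [rng.normal(0, 0.1, (1, self.H)) for _ in self.W]
        Wt = [rng.normal(0, 0.1, (self.H, Wl.shape[0])) for Wl in self.W]
        self.bias[t] = [(m @ w).ravel() for m, w in zip(M, Wt)]

    def forward(self, x, t):
        v = x
        for Wl, bl, bias_l in zip(self.W, self.b, self.bias[t]):
            v = relu(Wl @ v + bl + bias_l)   # task-dependent additive perturbation
        return v

net = ToyBPN([4, 6, 3])                     # 4 inputs -> 6 hidden -> 3 outputs (toy sizes)
net.add_task("A", np.random.default_rng(1))
net.add_task("B", np.random.default_rng(2))
out_A = net.forward(np.ones(4), "A")        # same input, processed in task-A mode
out_B = net.forward(np.ones(4), "B")        # ... and in task-B mode
```

Switching modes at test time amounts to nothing more than selecting which stored bias vectors are added at each layer.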
After training on different sequential tasks, at test time, the stored beneficial perturbations from the specific bias units can bias the neural network outputs to each task. Thus, these allow BPN to switch into different modes to process different tasks. We use the forward and backward rules (Alg.~\ref{alg:FORTA}, Alg.~\ref{alg:BACTA}) to update the BPN. {{\bf{For training}}}, first, during the training of task A, our goal is to maximize the probability $P(\mathbf{y} = \mathbf{y}_{A}|{\bm{X}}_{A},{\bm{W}}^{i},\mathbf{BIAS}_{A}^{i}) \ \ \ \mathbf{\forall} \ \ i \in [1,n]$ by selecting the bias units corresponding to task A. Thus, we set up our optimization function as: \begin{equation} \small \begin{aligned} {\bm{W}}^{i},&\,\mathbf{BIAS}_{A}^{i} = { \mathop{\arg\min}_{ {\bm{W}}^{i},\,\mathbf{BIAS}_{A}^{i}}} \\ & { -\,\log\,[\,P(\mathbf{y} = \mathbf{y}_{A}|{\bm{X}}_{A},\,{\bm{W}}^{i},\,\mathbf{BIAS}_{A}^{i})\,]} \ \ \ \ \mathbf{\forall} \ \ i \in [1,n] \end{aligned} \label{Eqn:training_A} \end{equation} \noindent where $\mathbf{y}_{A}$ is the true label for data in task A (MNIST input images 1, 2), ${\bm{X}}_{A}$ is the data for task A, and the other notations are the same as in Eqn.~\ref{Eqn:activations_rules}. We update ${\bm{M}}_{A}^{i}$ in the beneficial direction (FGSD) as $\epsilon\,\mathrm{sign}(\nabla_{{\bm{M}}_{A}^{i}} L({\bm{M}}_{A}^{i},{\bm{y}}_{A}))$ to generate beneficial perturbations for task A, where ${\bm{M}}_{A}^{i}$ are the first term of the bias units for task A. We update ${\bm{W}}_{A}^{i}$ (the second term of the bias units for task A) in the gradient direction. The factorization allows the bias units for task A to better learn the beneficial perturbations for task A (a vector towards the working space of task A that elicits a non-negligible network response for MNIST digits 1, 2, similar to Fig.~\ref{fig:beneficial_perturbations}b, c).
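The two per-layer update rules for the factors of the bias units can be sketched as follows (a minimal NumPy sketch under our own shape conventions; the function name and sizes are hypothetical, and the returned quantities correspond to the per-layer "gradients" of Alg.~\ref{alg:BACTA} before the optimizer step is applied):

```python
import numpy as np

def bias_unit_grads(grad, M_t, W_t, eps=0.01):
    """grad: (K,) gradient reaching this layer's bias term;
    M_t: (1, H) first factor; W_t: (H, K) second factor."""
    dW_t = np.outer(M_t.ravel(), grad)         # Grad . (M_t)^T -> (H, K): plain gradient
    dM_t = eps * np.sign(W_t @ grad)[None, :]  # eps * sign(W_t . Grad) -> (1, H): FGSD step
    return dM_t, dW_t

rng = np.random.default_rng(0)
M_t = rng.normal(size=(1, 3))      # H = 3 (toy hidden dimension)
W_t = rng.normal(size=(3, 5))      # K = 5 (toy layer width)
dM_t, dW_t = bias_unit_grads(rng.normal(size=5), M_t, W_t)
```

Only the sign of the gradient, scaled by $\epsilon$, enters the update of the first factor, which is what makes the step an FGSD-style move rather than ordinary gradient descent.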
{We use a softmax cross entropy loss to optimize Eqn.~\ref{Eqn:activations_rules}.} After training task A, the bias units for task A ($\mathbf{BIAS}_{A}^{i}$) are the product of ${\bm{M}}_{A}^{i}$ and ${\bm{W}}_{A}^{i}$. We discard ${\bm{M}}_{A}^{i}$ and ${\bm{W}}_{A}^{i}$ to reduce the memory storage and parameter costs, and freeze the $\mathbf{BIAS}_{A}^{i}$ to ensure that the beneficial perturbations are not corrupted by other tasks (task B). Then, we discard all of the MNIST input images 1, 2, because all of the information is stored inside the bias units for task A and we do not need to replay these images when we train on the following sequential tasks. After training task A, during the training of task B (Fig.~\ref{fig:explanation_structure} d), our goal is to maximize the probability ${P(\mathbf{y} = \mathbf{y}_{B}|{\bm{X}}_{B},{\bm{W}}^{i},\mathbf{BIAS}_{B}^{i})}$ ${ \mathbf{\forall} \ i \in [1,n]}$ by selecting the bias units corresponding to task B. To minimize the disruption for task A, we apply EWC or PSP constraints on the normal weights. We set up our optimization function as \begin{equation} \small \begin{aligned} {\bm{W}}^{i},&\,\mathbf{BIAS}_{B}^{i} = \mathop{\arg\min}_{ {\bm{W}}^{i},\,\mathbf{BIAS}_{B}^{i}} \\ & -\,\log\,[\,P(\mathbf{y} = \mathbf{y}_{B}|{\bm{X}}_{B},\,{\bm{W}}^{i},\,\mathbf{BIAS}_{B}^{i})\,] + EWC({\bm{W}}^{i}) \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathbf{\forall} \ \ i \in [1,n] \end{aligned} \label{Eqn:training_EWC_constrained}\end{equation} where $\mathbf{y}_{B}$ is the true label for data in task B (MNIST input images 3, 4), ${\bm{X}}_{B}$ is the data for task B, $EWC(\cdot)$ is the EWC constraint \citep{kirkpatrick2017overcoming} on the normal weights, and the other notations are the same as in Eqn.~\ref{Eqn:activations_rules}.
In the loss function of Alg.~\ref{alg:BACTA}, $\lambda F_j(W_j-W^{A*}_{j})^2$ is the EWC constraint on the normal weights, where $j$ labels each parameter, $F_j$ is the Fisher information matrix for each parameter $j$ {(it determines which parameters are most important for a task \citep{kirkpatrick2017overcoming})}, $\lambda$ sets how important the old task is compared to the new one, $W_j$ is normal weight $j$, and $W_{j}^{A*}$ is the optimal normal weight $j$ after training on task A. Apart from the additional EWC constraint, training task B and all subsequent tasks then simply proceeds in the same manner as for task A above. {{{\bf For testing}}}, after training task B, we test the accuracy for task A on a test set by manually activating the bias units corresponding to task A (Fig.~\ref{fig:explanation_structure} c, Alg.~\ref{alg:FORTA}). Although the shared normal weights have been contaminated by task B, the intact bias units for task A that store the beneficial perturbations can still bias the network outputs to task A (setting the network into a mode to process input from task A; see Results). In other words, the task-dependent bias units can still maintain a high probability ~--~ $P(\mathbf{y} = \mathbf{y}_{A}|{\bm{X}}_{A},{\bm{W}}^{i},\mathbf{BIAS}_{A}^{i})$ for task A. During testing of task B, we test the accuracy for task B on a test set by manually activating the bias units corresponding to task B (Fig.~\ref{fig:explanation_structure} e, Alg.~\ref{alg:FORTA}). The bias units for task B can bias the network outputs to task B and maintain a high probability ~--~ $P(\mathbf{y} = \mathbf{y}_{B}|{\bm{X}}_{B},{\bm{W}}^{i},\mathbf{BIAS}_{B}^{i})$ for task B, in case the shared normal weights are further modified by later tasks. In scenarios with more than two tasks, the forward and backward algorithms for later tasks are the same as for task B, except that they will select and update their own bias units.
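The EWC term $\sum_j \lambda F_j(W_j - W^{A*}_j)^2$ itself is a one-liner; the sketch below (with toy values of our own choosing) shows how it vanishes at the task-A optimum and grows quadratically, weighted by the Fisher information, as the weights drift during task B:

```python
import numpy as np

def ewc_penalty(w, w_star, fisher, lam):
    """sum_j lam * F_j * (w_j - w*_j)^2, added to the task-B loss."""
    return lam * np.sum(fisher * (w - w_star) ** 2)

w_star = np.array([1.0, -2.0, 0.5])   # optimal weights after task A (toy values)
fisher = np.array([0.9, 0.1, 0.5])    # per-parameter Fisher information (toy values)
w      = np.array([1.5, -2.0, 0.0])   # current weights while training task B

penalty = ewc_penalty(w, w_star, fisher, lam=2.0)   # 2 * (0.9*0.25 + 0 + 0.5*0.25) = 0.7
```

Parameters with large $F_j$ (here the first one) are pulled back hardest toward their task-A values, which is exactly the selective constraint described above.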
In sum, beneficial perturbations act upon the network not by adding biases to the input data (like adversarial examples do, Fig.~\ref{fig:beneficial_perturbations}a), but instead by dragging the drifted activations back to the correct working region in activation space for the current task (Fig.~\ref{fig:concept_in_loss_space} and Fig.~\ref{fig:beneficial_perturbations}c). The intriguing ability of task-dependent beneficial perturbations to maintain high probabilities for different tasks can be explained in two ways. The beneficial perturbations from the bias units can be viewed as features that capture how "furry" the images are for task A (or B). Olshausen {\em et al.} \citep{cheung2019superposition} showed that training a neural network only on these features is sufficient to make correct classifications on the dataset that generated these features. They argued that these features carry sufficient information for a neural network to classify correctly. In our continual learning scenarios, although the shared normal weights (${\bm{W}}^{i}$) have been contaminated after the sequential training of all tasks, by activating the corresponding bias units, the task-dependent bias units still have sufficient information to bias the network toward that task. In other words, the task-dependent bias units can maintain high probabilities ~--~ $P(\mathbf{y} = \mathbf{y}_{A}|{\bm{X}}_{A},{\bm{W}}^{i},\mathbf{BIAS}_{A}^{i})$ for task A or $P(\mathbf{y} = \mathbf{y}_{B}|{\bm{X}}_{B},{\bm{W}}^{i},\mathbf{BIAS}_{B}^{i})$ for task B. Thus, bias units can assist the network in making correct classifications. In addition, Elsayed {\em et al.} \cite{elsayed2018adversarial} showed how carefully computed adversarial perturbations for each new task, embedded in the input space, can repurpose machine learning models to perform a new task.
Here, these beneficial perturbations can be viewed as task-dependent beneficial "programs"\cite{elsayed2018adversarial} in the parameter space. Once activated, these task-dependent "programs" can maximize the probability for the corresponding tasks. \begin{algorithm*}[h] \small \caption{BD + EWC : forward rules for task $t$} \label{alg:FORTA} \begin{algorithmic} \State {\quad For each fully connected layer $i$} \State {\quad Select the bias units for the current task $t$: $\mathbf{BIAS}_{t}^{i}$} \State{\quad{\bfseries Input:}\hspace{0.15cm} $\mathbf{BIAS}_{t}^{i}$ \textemdash \hspace{0.08cm} Bias units for task $t$} \hfill {\color{green}// provide beneficial perturbations to bias the neural network} \State{\quad\quad\quad\quad\hspace{0.33cm}${\bm{V}}^{i-1}$\textemdash \hspace{0.08cm} Activations from the previous layer} \State{\quad{\bfseries Output:} ${\bm{V}}^{i} = \sigma({\bm{W}}^{i} \cdot {\bm{V}}^{i-1} + \mathbf{b}^{i} +\mathbf{BIAS}_{t}^{i}) \ \ \ \mathbf{\forall} \ i \in [1,n]$ \hfill {\color{green}// activations for the next layer}} \State{\quad\quad\quad\qquad \hspace{0.13cm} where: ${\bm{W}}^{i}$\textemdash \hspace{0.08cm} normal neuron weights at layer $i$. $\mathbf{b}^{i}$\textemdash \hspace{0.08cm} normal bias term at layer $i$} \State{\quad\quad\quad\qquad \hspace{1.23cm} $n$ \textemdash \hspace{0.08cm} the number of FC layers. \hspace{1.1cm} $\sigma(\cdot)$ \textemdash \hspace{0.08cm} the nonlinear activation function at each layer} \end{algorithmic} \end{algorithm*} \begin{algorithm*}[h] \small \caption{BD + EWC : backward rules for task $t$} \label{alg:BACTA} \begin{algorithmic} \State {{\underline {For the first task A ($t = 1$):}}} \State{{\quad Minimize the loss function: $L({\bm{X}}_{A},{\bm{W}}^{i},\mathbf{BIAS}_{A}^{i}) \ \ \ \mathbf{\forall} \ i \in [1,n]$}} \State{{\quad \quad where: ${\bm{X}}_{A}$\textemdash \hspace{0.08cm} data for task A.\quad ${\bm{W}}^{i}$\textemdash \hspace{0.08cm} normal neuron weights at layer $i$.
}} \State{{ \quad\quad\quad\quad \hspace{0.16cm} $\mathbf{BIAS}_{A}^{i}$ \textemdash \hspace{0.08cm} bias units for task A at FC layer $i$, which are the product of $({\bm{M}}_{A}^{i}, {\bm{W}}_{A}^{i})$}} \State{{\hspace{0.16cm}\quad\quad\quad\quad\quad $n$\textemdash \hspace{0.08cm} the number of FC layers.}} \State{} \State {\underline {For task B and later tasks ($t > 1$):}} \State{\quad Minimize the loss function: $L({\bm{X}}_{B},{\bm{W}}^{i}, \mathbf{BIAS}_{B}^{i})+ \sum_{j} \lambda F_j(W_j-W^{A*}_{j})^2 \ \ \ \mathbf{\forall} \ i \in [1,n]$} \State{{\quad \quad where: ${\bm{X}}_{B}$\textemdash \hspace{0.08cm} data for task B.\quad ${\bm{W}}^{i}$\textemdash \hspace{0.08cm} normal neuron weights at FC layer $i$}} \State{{\hspace{0.1cm}\quad\quad\quad\quad\quad $j$\textemdash \hspace{0.08cm} labels each parameter.\quad $F_{j}$ \textemdash \hspace{0.08cm} Fisher information matrix for parameter $j$.}} \State{{\hspace{0.1cm}\quad\quad\quad\quad\quad $W_j$\textemdash \hspace{0.08cm} normal weight $j$.\hfill $W^{A*}_{j}$ \textemdash \hspace{0.08cm} optimal normal weight $j$ after training on task A.}} \State{{ \quad\quad\quad\quad \hspace{0.16cm} $\mathbf{BIAS}_{B}^{i}$ \textemdash \hspace{0.08cm} bias units for task B at FC layer $i$, which are the product of $({\bm{M}}_{B}^{i}, {\bm{W}}_{B}^{i})$}} \State{{\hspace{0.1cm}\quad\quad\quad\quad\quad $n$\textemdash \hspace{0.08cm} the number of FC layers.}} \State{} \State{\bf \underline{For each fully connected layer $i$:}} \State{} \State{\quad \underline {During the training of task $t$}} \State {\quad \hspace{0.07cm} Select the bias units for the current task $t$ ($\mathbf{BIAS}_{t}^{i}$)} \State{\hspace{0.07cm} \quad{\bfseries Input:}\hspace{0.15cm} $\mathbf{Grad}$ \textemdash \hspace{0.08cm} Gradients from the next layer} \State{\hspace{0.07cm} \quad{\bfseries Output:} $\mathbf{dW_{t}^{i}} = \mathbf{Grad}\cdot(({\bm{M}}_{t}^{i})^T)$ {\color{green}// gradients for the second term of the bias units for task $t$ at layer $i$ } }
\State{\hspace{0.05cm} \quad $\hspace{33pt}$ $\mathbf{dM_{t}^{i}} = \epsilon\;\mathrm{sign}\;((\mathbf{W}_{t}^{{i}})^T\cdot(\mathbf{Grad}))$ } \State{\hspace{0.07cm} \hfill {\color{green}// gradients for the first term of the bias units for task $t$ at layer $i$, using the FGSD method}} \State{\hspace{0.08cm} \quad $\hspace{33pt}$ $\mathbf{dW^i} = \mathbf{Grad}\cdot((\mathbf{V}^{i})^T)$\hfill {\color{green}// gradients for the normal weights at layer $i$}} \State{\hspace{0.08cm} \quad $\hspace{33pt}$ $\mathbf{dV^{i}} = (\mathbf{W}^{i})^T \cdot (\mathbf{Grad})$ \hfill {\color{green}// gradients passed from layer $i$ back to the previous layer $i-1$}} \State{\hspace{0.07cm} \quad $\hspace{33pt}$ $\mathbf{db^i} = \sum_{j} {Grad}_j $ \hfill {\color{green}// gradients for the normal bias at layer $i$; $j$ iterates over the first dimension of {\bf{Grad}} }} \State{} \State {\quad \underline {After training of task $t$}} \State{\quad \hspace{0.07cm} Freeze the $\mathbf{BIAS}_{t}^{i}$} \State{\quad \hspace{0.07cm} Delete the $\mathbf{W_{t}^{i}}$ and $\mathbf{M_{t}^{i}}$ to reduce parameter and memory storage costs} \end{algorithmic} \end{algorithm*} \section{Experiments} \label{experiments} \subsection{Experimental Setup For Incremental Tasks} To demonstrate that BPN is very parameter efficient and can learn different tasks in an online and continual manner, we used a fully-connected neural network with 5 hidden layers of 300 ReLU units. We tested it on three public computer vision datasets with "single-head evaluation", where the output space consists of all the classes from all tasks learned so far. {\bf 1. Incremental MNIST.} A variant of the MNIST dataset \citep{lecun1998gradient} of handwritten digits with 10 classes, where each task introduces a new set of classes. We consider 5 tasks; each new task concerns examples from a disjoint subset of 2 classes. {\bf 2. Incremental CIFAR-10.} A variant of the CIFAR object recognition dataset \citep{krizhevsky2009learning} with 10 classes.
We consider 5 tasks; each new task has 2 classes. {\bf 3. Incremental CIFAR-100.} A variant of the CIFAR object recognition dataset \citep{krizhevsky2009learning} with 100 classes. We consider 10 tasks; each new task has 2 classes, so 20 of the 100 classes are used in the CIFAR-100 experiment. \subsection{Experimental Setup For Eight Sequential Object Recognition Tasks} To demonstrate the superior performance of BPN across different datasets and domains, we consider a sequence of eight object recognition datasets: {\bf 1.} Oxford \textit{Flowers} \citep{nilsback2008automated} for fine-grained flower classification (8,189 images in 102 categories); {\bf 2.} MIT \textit{Scenes} \citep{quattoni2009recognizing} for indoor scene classification (15,620 images in 67 categories); {\bf 3.} Caltech-UCSD \textit{Birds} \citep{wah2011caltech} for fine-grained bird classification (11,788 images in 200 categories); {\bf 4.} Stanford \textit{Cars} \citep{krause20133d} for fine-grained car classification (16,185 images of 196 categories); {\bf 5.} FGVC-\textit{Aircraft} \citep{maji2013fine} for fine-grained aircraft classification (10,200 images in 70 categories); {\bf 6.} VOC \textit{actions} \citep{everingham2015pascal}, the human action classification subset of the VOC challenge 2012 (3,334 images in 10 categories); {\bf 7.} \textit{Letters}, the Chars74K dataset \citep{de2009character} for character recognition in natural images (62,992 images in 62 categories); and {\bf 8.} the Google Street View House Number \textit{SVHN} dataset \citep{netzer2011reading} for digit recognition (99,289 images in 10 categories).
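For concreteness, the per-layer backward pass listed in the algorithm above can be sketched in NumPy. The layer sizes, axis conventions, and value of $\epsilon$ below are our illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_batch, d_in, d_out, d_bias = 8, 300, 300, 300
epsilon = 0.01  # FGSD step size (illustrative assumption)

V    = rng.normal(size=(d_in, n_batch))          # activations entering layer i
W    = rng.normal(size=(d_out, d_in)) * 0.01     # shared "normal" weights at layer i
W_t  = rng.normal(size=(d_out, d_bias)) * 0.01   # second term of task-t bias units
M_t  = rng.normal(size=(d_bias, n_batch))        # first term of task-t bias units
Grad = rng.normal(size=(d_out, n_batch))         # gradient arriving from layer i+1

# The five updates as written in the algorithm (axis layout is our assumption):
dW_t = Grad @ M_t.T                      # second term of the task-t bias units
dM_t = epsilon * np.sign(W_t.T @ Grad)   # first term: one FGSD (sign) step
dW   = Grad @ V.T                        # shared normal weights
dV   = W.T @ Grad                        # gradient passed back to layer i-1
db   = Grad.sum(axis=1)                  # ordinary bias: sum over the batch axis

# Every gradient matches the shape of the tensor it updates.
assert dW_t.shape == W_t.shape and dM_t.shape == M_t.shape
assert dW.shape == W.shape and dV.shape == V.shape
```

After task t finishes, only the frozen product of $({\bm{M}}_{t}, {\bm{W}}_{t})$ would be kept as $\mathbf{BIAS}_{t}$, and ${\bm{M}}_{t}$, ${\bm{W}}_{t}$ themselves deleted, as in the last two algorithm steps.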
To have a fair comparison, we use the same AlexNet \citep{krizhevsky2012imagenet} architecture pretrained on ImageNet \citep{russakovsky2015imagenet} as Aljundi {\em et al.} \cite{aljundi2018selfless,aljundi2018memory}, and test it on the 8 sequential tasks with a "multi-head evaluation", where each task has its own classification layer (introducing the same parameter cost for every method) and output space. All methods have a task oracle at test time to decide which classification layer to use. We run the different methods on the following sequence: Flowers $\rightarrow$ Scenes $\rightarrow$ Birds $\rightarrow$ Cars $\rightarrow$ Aircraft $\rightarrow$ Actions $\rightarrow$ Letters $\rightarrow$ SVHN. \subsection{Experimental Setup For 100 Permuted MNIST Datasets} To demonstrate that BPN has the capacity to accommodate a large number of tasks, we tested it on 100 permuted MNIST datasets generated from randomly permuted handwritten MNIST digits. We consider 100 tasks; each new task has 10 classes. We used a fully-connected neural network with 4 hidden layers of 128 ReLU units (a core network with small capacity) to compare the performance of different methods. Type 4 methods, such as Parameter Superposition (PSP \citep{cheung2019superposition}), would exhaust the unrealized capacity and inevitably dilute the capacity of the core network under a large number of tasks: in their Fig. 2, with a network that has 128 hidden units (leftmost panel), the average task performance for all tasks trained so far is 95\% after training one task, but decreases to 50\% after training fifty tasks, while a larger network with 2048 hidden units shows a much smaller decrease, from 96\% to about 90\% (rightmost panel). The reason is that this method generates a random diagonal binary matrix for each task, which in essence is a key or selector for that task. As more and more tasks are learned, those keys start to overlap more, causing interference among tasks.
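The key-overlap effect described above can be illustrated with a small simulation, assuming random $\pm 1$ binary keys of the kind binary PSP uses (the hidden-layer width and task counts below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128  # hidden units, as in the leftmost panel discussed above

def max_overlap(n_tasks):
    """Largest pairwise fraction of agreeing entries among n_tasks random keys."""
    keys = rng.choice([-1, 1], size=(n_tasks, d))
    agree = (keys @ keys.T + d) / (2 * d)  # fraction of matching entries per pair
    np.fill_diagonal(agree, 0.0)           # ignore self-overlap
    return agree.max()

# A lone pair of keys agrees on about half of its entries, but among many keys
# some pair is almost guaranteed to agree on far more, so the sub-networks
# those keys select start to interfere.
few, many = max_overlap(2), max_overlap(500)
assert many > few
```

With more tasks than the width of the layer can keep well separated, the most-similar pair of keys becomes steadily more similar, which is the interference the text attributes to Type 4 methods.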
In comparison, BPN can counteract the dilution, hence it can accommodate a large number of tasks. \subsection{Our model and baselines} We compared the two variants of the proposed Beneficial Perturbation Network, BD + EWC (Beneficial Perturbation + Elastic Weight Consolidation; variant 1, the eleventh model below) and BD + PSP (Beneficial Perturbation + Parameter Superposition; variant 2, the twelfth model below), to 11 alternatives to demonstrate their superior performance. {\bf 1. Single Task Learning (STL).} We consider several 5-layer fully-connected neural networks, one trained for each task separately. Thus, STL does not suffer from catastrophic forgetting at all. It is used as an upper bound. {\bf 2. Elastic Weight Consolidation (EWC) \citep{kirkpatrick2017overcoming}.} The loss is regularized to avoid catastrophic forgetting. {\bf 3. Gradient Episodic Memory with task oracle (GEM (*)) \citep{lopez2017gradient}.} GEM uses a task oracle to build a final linear classifier (FLC) per task. The final linear classifier adapts the output distributions to the subset of classes for each task. GEM uses an episodic memory to store a subset of the observed examples from previous tasks, which are interleaved with new data from the latest task to produce a new classifier for all tasks so far. We use the notation GEM (*) for the rest of the paper, where * is the size of the episodic memory (number of training images stored) for each class. {\bf 4. Incremental Moment Matching \citep{lee2017overcoming} (IMM).} IMM incrementally matches the moments of the posterior distribution of the neural network with an L2 penalty, applied equally to changes to the shared parameters. {\bf 5. Learning without forgetting \citep{li2017learning} (LwF).} First, LwF freezes the shared parameters while learning a new task. Then, LwF trains all the parameters until convergence. {\bf 6.
Encoder based lifelong learning \citep{rannen2017encoder} (EBLL).} Based on LwF, EBLL uses an autoencoder to capture the features that are crucial for each task. {\bf 7. Synaptic Intelligence \citep{zenke2017continual} (SI).} While training on a new task, SI estimates the importance weights in an online manner. Parameters important for previous tasks are penalized during the training of a new task. {\bf 8. Memory Aware Synapses \citep{aljundi2018memory} (MAS).} Similar to SI, MAS estimates the importance weights through the sensitivity of the learned function on training data. Parameters important for previous tasks are penalized during the training of a new task. {\bf 9. Sparse coding through Local Neural Inhibition and Discounting \citep{aljundi2018selfless} (SLNID).} SLNID proposes a new regularizer that penalizes neurons that are active at the same time, to create sparse and decorrelated representations for different tasks. {\bf 10. Parameter Superposition \citep{cheung2019superposition} (PSP).} PSP uses task-specific context matrices to map inputs from different tasks to different subspaces spanned by rows in the weight matrices, to avoid interference between tasks. We use the binary superposition model of PSP throughout the paper, because it is not only more memory efficient but also, in our testing, performed better than other PSP variants (e.g., complex superposition). {\bf 11. BD + EWC (ours):} Beneficial Perturbation Network (variant 1). The first term (${\bm{M}}_{t}$) of the bias units is updated in the beneficial direction (BD) using the FGSD method. The second term (${\bm{W}}_{t}$) of the bias units is updated in the gradient direction. The normal weights are updated with EWC constraints. {\bf 12. BD + PSP (ours):} Beneficial Perturbation Network (variant 2). The first term (${\bm{M}}_{t}$) of the bias units is updated in the beneficial direction (BD) using the FGSD method.
The second term (${\bm{W}}_{t}$) of the bias units is updated in the gradient direction. The normal weights are updated using PSP (binary superposition model, Supplementary). \begin{figure}[] \begin{center} \includegraphics[width=0.85\linewidth]{visualization_vertical_publication_ready.pdf} \caption{{\bf Visualization of classification regions:} classifying three randomly generated, normally distributed clusters. Task 1: separate black from red clusters. Task 2: separate black from light blue clusters. The yellower (bluer) the heatmap, the higher (lower) the chance the neural network classifies a location as the black cluster. After training task 2, only BD + EWC remembers task 1 by maintaining its decision boundary between the black and red clusters. Both plain gradient descent and GD + EWC forget task 1 entirely.} \label{fig:visualization} \end{center} \end{figure} {\bf 13. GD + EWC:} The update rules and network structure are the same as BD + EWC, except that the first term (${\bm{M}}_{t}$) of the bias units is updated in the gradient direction (GD). This method has the same parameter costs as BD + EWC. The failure of GD + EWC suggests that the good performance of BD + EWC does not come from the additional dimensions provided by the bias units. \section{Results} \label{results} \subsection{The beneficial perturbations can bias the network and maintain the decision boundary} To show that the advantages of our method really come from the beneficial perturbations, and not just from adding dimensions to the neural network, we compare updating the first term of the bias units in the beneficial direction (BD + EWC, which uses beneficial perturbations) with updating it in the gradient direction (GD + EWC, which only exploits the additional dimensions that our bias units provide). We use a toy example (classifying three normally distributed clusters) to demonstrate this and to visualize the decision boundary (Fig.~\ref{fig:visualization}).
We randomly generate three normally distributed clusters at different locations. We have two tasks. Task 1: separate the black cluster from the red cluster. Task 2: separate the black cluster from the light blue cluster. The yellower (bluer) the heatmap, the higher (lower) the confidence that the neural network classifies a location into the black cluster. After training task 2, both plain gradient descent and GD + EWC forget task 1 (the dark blue boundary around the red cluster disappeared). However, BD + EWC not only learns how to classify task 2 (clear decision boundary between light blue and black clusters), but also remembers how to classify the old task 1 (clear decision boundary between red and black clusters). Thus, it is the beneficial perturbations that bias the network outputs and maintain the decision boundary for each task, not just the added dimensions. \begin{figure*}[h] \begin{center} \includegraphics[height=10.8cm]{mnist_5_tasks_pnas_with_psp_publication_ready.pdf} \caption{ Results for a fully-connected network with 5 hidden layers of 300 ReLU units. (a) Incremental MNIST tasks (5 tasks, 2 classes per task). (b) Incremental CIFAR-10 tasks (5 tasks, 2 classes per task). For (a) and (b), the dashed line indicates the start of a new task. The vertical axis is the accuracy for each task. The horizontal axis is the number of epochs. (c) Incremental CIFAR-100 tasks (10 tasks, 2 classes per task). The vertical axis is the accuracy for task 1. The horizontal axis is the number of tasks.} \label{fig:quantative_results} \end{center} \end{figure*} \begin{table*}[h!] \caption{Task 1 performance with "single-head" evaluation after training all sequential tasks on the incremental MNIST, CIFAR-10 and CIFAR-100 datasets.
We include additional memory storage costs per task (extra components that need to be stored on disk after training each task, Supplementary) of the GEM, BD + EWC, BD + PSP and PSP methods.} \label{tab:memory_performance} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{cccl} \toprule Dataset & Method & \makecell{Task 1 performance after\\ training all sequential tasks} & \makecell{additional memory storage \\ costs per task (Bytes)}\\ \midrule \makecell{Incremental MNIST\\ (5 tasks, 2 classes per task)} & \makecell{GEM(10)\\BD+EWC}& \makecell{0.980 \\\bf{0.980}}& \makecell[r]{\ \ \ \ \ 47,040 \\\bf{4,808} }\\\hline \makecell{Incremental CIFAR-10\\ (5 tasks, 2 classes per task)} & \makecell{GEM(256)\\GEM(150)\\BD+EWC}& \makecell{\bf{0.800} \\0.698\\0.795}& \makecell[r]{4,718,592 \\2,764,800 \\\bf{4,808} }\\\hline \makecell{Incremental CIFAR-100\\ (10 tasks, 2 classes per task)} & \makecell{GEM(256)\\GEM(209)\\BD+PSP\\PSP\\BD+EWC}& \makecell{0.790\\0.775 \\ \bf{0.850}\\0.830\\0.845}& \makecell[r]{4,718,592 \\3,852,288 \\20,776\\15,968\\\bf{4,808} }\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} \subsection{Quantitative analysis for incremental tasks} Our BPN achieves a comparable or better performance than PSP, GEM, EWC, and GD + EWC in "single-head" evaluations, where the output space consists of all the classes from all tasks learned so far. In addition, it introduces negligible parameter and memory storage costs per task. Fig.~\ref{fig:quantative_results} and Tab.~\ref{tab:memory_performance} summarize performance for all datasets and methods. STL has the best performance since it trains a separate network for each task and does not suffer from catastrophic forgetting at all. Thus, STL is the upper bound. BD + EWC performed slightly worse than STL (1\%, 4\%, and 1\% worse for the incremental MNIST, CIFAR-10, and CIFAR-100 datasets). BD + EWC achieved comparable or better performance than GEM.
On incremental CIFAR-100 (10 tasks, 2 classes per task), BD + EWC outperformed PSP, GEM (256) and GEM (10) by 1.80\%, 6.96\%, and 22.4\%, respectively. BD + PSP outperformed PSP, GEM (256) and GEM (10) by 2.40\%, 7.59\%, and 23.1\%, respectively. Comparing the memory storage costs (Tab.~\ref{tab:memory_performance}, Supplementary), to achieve similar performance BD + EWC introduces only an additional 4,808 Bytes of memory per task, which is only 0.1\% of the memory storage cost required by GEM (256). BD + PSP introduces only 20,776 Bytes, or 0.44\% of the memory storage cost required by GEM (256). The memory storage cost of BD + EWC is 30\% of that of PSP. The memory storage cost of BD + PSP is of the same order of magnitude as that of PSP. EWC alone rapidly decreased to 0\% accuracy. This confirms similar results on EWC performance on incremental datasets \citep{rios2018closed, kemker2017fearnet, parisi2019continual, kemker2018measuring} in "single-head" evaluations, although EWC generally performs well in "multi-head" tasks. GD + EWC has the same additional dimensions as BD + EWC, but failed in the continual learning scenario. This result suggests that it is not the additional dimensions of the bias units, but the beneficial perturbations, that help overcome catastrophic forgetting. \begin{table*}[h] \caption{Test accuracy (in percent correct) achieved by each method with "multi-head" evaluation for each dataset after training on the 8 sequential object recognition datasets. (Dash (--) means that the results are not available in their papers. Star (*) means that we did not reproduce the methods and the results were taken from SLNID \citep{aljundi2018selfless} and MAS \citep{aljundi2018memory}.
Thus, we keep the same percentage table format as theirs).} \label{tab:eighttasks} \begin{center} \includegraphics[width=0.97\linewidth]{eight_datasets_table_IEEE_TNNLS.pdf} \end{center} \end{table*} \subsection{Quantitative analysis for eight sequential object recognition tasks} The eight sequential object recognition tasks demonstrate the superior performance of BPN (BD + PSP or BD + EWC) compared to the state-of-the-art, and its ability to learn sequential tasks across different datasets and different domains. Our BPN achieves much better performance than IMM \citep{lee2017overcoming}, LwF \citep{li2017learning}, EWC \citep{kirkpatrick2017overcoming}, EBLL \citep{rannen2017encoder}, SI \citep{zenke2017continual}, MAS \citep{aljundi2018memory}, SLNID \citep{aljundi2018selfless}, and PSP \citep{cheung2019superposition} in "multi-head" evaluations, where each task has its own classification layer and output space. After training on the 8 sequential object recognition datasets, we measured the test accuracy for each dataset and calculated the average performance (Tab.~\ref{tab:eighttasks}). On average, BD + PSP (ours) outperforms all other methods: PSP (7.52\% better), SLNID (8.02\% better), MAS (11.73\% better), SI (16.60\% better), EBLL (17.07\% better), EWC (17.75\% better), LwF (18.96\% better) and IMM (35.62\% better). Although MAS, SI and EBLL performed better than EWC alone, with the help of our beneficial perturbations (BD), BD + EWC can achieve a better performance than these methods: MAS (0.34\% better), SI (4.71\% better), EBLL (5.13\% better) and EWC (5.74\% better). By including the BD (BD + PSP and BD + EWC), we can significantly boost performance when compared to using PSP or EWC alone (black arrows in Tab.~\ref{tab:eighttasks}). \begin{figure}[h] \includegraphics[width=\columnwidth]{permuted_MNIST_100_tasks_ewc_publication_ready.pdf} \caption{Results on the 100 permuted MNIST datasets for a fully-connected network with 4 hidden layers of 128 ReLU units.
This network is relatively small for these tasks and hence does not offer much available redundancy or unrealized capacity. (a) The average task accuracy over all tasks trained so far as the number of tasks increases. (b) After training 100 tasks, the average task accuracy for groups of 10 tasks. We used a t-test to validate the results. \label{fig:100_permuted_MNIST_tasks}} \end{figure} \subsection{Quantitative analysis for 100 permuted MNIST datasets} The 100 permuted MNIST datasets demonstrate that our BPN has the capacity to accommodate a large number of tasks. After training 100 permuted MNIST tasks, the average task performance of BD + PSP is 30.14\% better than that of PSP. The average task performance of BD + EWC is 35.47\% higher than that of EWC (Fig.~\ref{fig:100_permuted_MNIST_tasks}a). As the number of tasks increases (Fig.~\ref{fig:100_permuted_MNIST_tasks}a), the average task performance of BD + PSP becomes increasingly better than that of PSP. The reason is that adding new tasks significantly dilutes the capacity of the original network in Type 4 methods (e.g., PSP), as there are limited routes or subspaces to form sub-networks. In this case, even though the core network can no longer fully separate each task, the beneficial perturbations (BD) can drag the misrepresented activations back to the correct workspace of each task and recover their separation (as demonstrated in Fig.~\ref{fig:beneficial_perturbations}). Thus, the BD of BD + PSP can still increase the capacity of the network and boost the performance. Similarly, the BD components in BD + EWC boost performance, increasing the capacity of the network to accommodate more tasks than EWC alone (Fig.~\ref{fig:100_permuted_MNIST_tasks}b). In addition, after training 100 tasks (Fig.~\ref{fig:100_permuted_MNIST_tasks}b), the accuracy of BD + EWC for the first 50 tasks is higher than that of PSP, likely because BD + EWC did not severely dilute the core network's capacity while PSP did. This means BD + EWC has a larger capacity than PSP.
In contrast, the lower performance on the last 50 tasks for BD + EWC comes from the constraints of EWC (which do not allow the parameters of the network learned from new tasks to deviate strongly from the parameters trained on old tasks). Although the performance of PSP is much better than that of EWC, with the help of BD, BD + EWC still reaches a similar performance as PSP. \section{Discussion} \label{discussion} We proposed a fundamentally new, biologically plausible type of method, the beneficial perturbation network (BPN): a neural network that can switch into different modes to process independent tasks, allowing the network to create potentially unlimited mappings between inputs and outputs. We successfully demonstrated this in the continual learning scenario. Our experiments demonstrate that the performance of BPN is better than the state-of-the-art. 1) BPN is more parameter efficient (0.3\% increase per task) than the various network expansion and network partition methods. It does not need a large episodic memory to store any data from previous tasks, as episodic memory methods do, or large context matrices, as partition methods do. 2) BPN achieves state-of-the-art performance across different datasets and domains. 3) BPN has a larger capacity to accommodate a higher number of tasks than the partition networks. Through visualization of classification regions and quantitative results, we validate that beneficial perturbations can bias the network towards a task, allowing the network to switch into different modes. Thus, BPN significantly contributes to alleviating catastrophic forgetting and achieves much better performance than other types of methods. Elsayed {\em et al.} \cite{elsayed2018adversarial} showed how carefully computed adversarial perturbations embedded in the input space can repurpose machine learning models to perform a new task without changing the parameters of the models.
This attack finds a single adversarial perturbation for each task that causes the model to perform a task chosen by the adversary. This adversarial perturbation can thus be considered as a program to execute each task. Here, we leverage similar ideas. But, in sharp contrast, instead of using malicious programs embedded in the input space to attack a system, we embed beneficial perturbations ('beneficial programs') into the network's parameter space (the bias terms), enabling the network to switch into different modes to process different tasks. The goal of both approaches is similar: maximizing the probability $P(\text{current task} \mid \text{image input}, \text{program})$ of the current task given the image input and the corresponding program for the current task. This can be achieved either by forcing the network to perform an attack task, as in Elsayed {\em et al.}, or by assisting it to perform a beneficial task, as in our method. The addition of programs to either the input space (Elsayed {\em et al.}'s method) or the network's activation space (our method) helps the network maximize this probability for a specific task. We suggest that the intriguing ability of the beneficial perturbations to bias the network toward a task might come from the properties of adversarial subspaces. Following the adversarial direction, such as by using the fast gradient sign method (FGSD) \cite{goodfellow6572explaining}, helps generate adversarial examples that span a continuous subspace of large dimensionality (the adversarial subspace). Because of "excessive linearity" in many neural networks \cite{tramer2017space, goodfellow2016}, due to features including rectified linear units and Maxout, the adversarial subspace often occupies a large portion of the total input space. Once an adversarial input lies in the adversarial subspace, nearby inputs also tend to lie in it.
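For reference, the fast gradient sign step mentioned above (called FGSD in this paper, and often written FGSM elsewhere) is just $\epsilon$ times the sign of the loss gradient. A minimal sketch on a fixed logistic model, where the model, input, and $\epsilon$ are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=16)   # fixed linear model (illustrative)
x = rng.normal(size=16)   # a "clean" input
y = 1.0                   # its label
epsilon = 0.1             # perturbation budget

def loss(x):
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

grad_x = (sigmoid(w @ x) - y) * w        # d(loss)/d(x) for the logistic loss
x_adv = x + epsilon * np.sign(grad_x)    # one fast gradient sign step

# The sign step moves along an ascent direction of the loss.
assert loss(x_adv) > loss(x)
```

In BPN, the same sign-of-gradient step is applied in the network's parameter space (to the bias-unit term ${\bm{M}}_{t}$) rather than to the input, with the direction chosen so that the perturbation helps, rather than hurts, the current task.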
Interestingly, this corroborates recent findings by Ilyas {\em et al.} \citep{ilyas2019adversarial} that imperceptible adversarial noise can not only be used for adversarial attacks on an already-trained network, but also as features during training. For instance, after training a network on dog images perturbed with adversarial perturbations calculated from cat images, the network can achieve a good classification accuracy on the test set of cat images. This result shows that those features (adversarial perturbations) calculated from the cat training set contain sufficient information for a machine learning system to make correct classifications on the test set of cat images. In our method, we calculate those features for each task and store them in the bias units. In this case, although the normal weights have been modified (information from old tasks is corrupted), the stored beneficial features for each task have sufficient information to bias the network and enable it to make correct predictions. {BPN is loosely inspired by its counterpart in the human brain: having task-dependent modules, such as the bias units in our Beneficial Perturbation Network or the long-term memories in the hippocampus (HPC, \cite{bakker2008pattern}) in a brain network, is crucial for a system to switch into different modes to process different tasks. During weight consolidation, the HPC \citep{lesburgueres2011early,squire1995retrograde,frankland2005organization,helfrich2019bidirectional} fuses features from different tasks into coherent memory traces. Over days to weeks, as memories mature, the HPC progressively transfers permanent abstract high-level long-term memories to remote memory storage areas (neocortical regions). The HPC can then maintain and mediate their retrieval independently when a specific memory is needed. We suggest that when a specific memory is retrieved, it helps the HPC switch into distinct modes to process different tasks.
Thus, our analogy between the HPC and BPN can be formulated as follows: during the training of BPN, updating the shared normal weights using EWC or PSP in theory leads to distinct task-dependent representations (similar to the coherent memory traces in the HPC). However, some overlap between these representations is inevitable, because the model parameters become too constrained for EWC, or PSP runs out of unrealized capacity of the core network. To circumvent this effect, bias units (akin to the long-term memories in the neocortical areas) are trained independently for each task. At test time, the bias units for a given task are activated to push representations of old tasks back to their initial task-optimal working regions, in an analogous manner to the HPC maintaining and mediating the retrieval of long-term memories independently.} {An alternative biological explanation evokes the concept of factorized codes. In biological neuronal populations, neurons can be active for one task or, in many cases, for more than one task. At the population level, different tasks are encoded by different neuronal ensembles, which can overlap. In our model, the PSP component deploys binary keys to activate task-specific readouts in hidden layers, in analogy to neuronal task ensembles. When activating a BD component for a task, we further disambiguate a task-specific ensemble, particularly across neurons that are active for more than one task. The reason for this is that adding task-specific beneficial perturbations to the activations of hidden layers can shift the distribution of the net activation (akin to a DC offset or carrier frequency). Evidence from nonhuman primate experiments \citep{roy2010prefrontal,cromer2010representation} and human behavioral results \citep{flesch2018comparing} support this factorized-code theory.
Electrophysiological experiments in monkeys demonstrated that neurons in prefrontal cortex either represent competing categories independently \citep{roy2010prefrontal} or can represent multiple categories \citep{cromer2010representation}. In human behavior experiments, "humans tend to form factorized representation that optimally segregated the tasks \citep{flesch2018comparing}". In addition, recent neural network simulations \citep{yang2019task} demonstrated that "network developed mixed task selectivity similar to recorded prefrontal neurons after learning multiple tasks sequentially with a continual learning technique". Thus, having factorized representations for different tasks is important for enabling life-long learning and designing a general adaptive artificial intelligence system. } \section*{Acknowledgment} This work was supported by the National Science Foundation (grant number CCF-1317433), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), and the Intel Corporation. The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof. \bibliographystyle{abbrv}
\section{Introduction} The energy emitted by active galactic nuclei (AGN) is argued to be the result of matter accreting onto supermassive black holes at the center of galaxies \citep{lynden71}. However, the details of the geometry and kinematics of the region around the accretion disk are not well understood. In the standard model of AGN, the region around the accretion disk is called the broad line region (BLR) due to the broad emission lines from rapidly moving clouds of material near the black hole \citep{antonucci93, urry95}.
Models for the BLR attempt to explain the many categories of AGN by the observer's viewing angle and the covering fraction of inflowing or outflowing material \citep[see e.g.][]{murray95, elvis00}, since the degree to which the BLR geometry and kinematics vary between individual systems is also unknown. The mass of the central black hole is a fundamental parameter in galaxy evolution, as suggested by a relation between black hole mass and stellar velocity dispersion of the host galaxy bulge, the $M_{BH} - \sigma_{\star}$ relation \citep[see e.g.][]{bennert11}. While the black hole masses of very nearby galaxies can be measured by observing the orbits of stars or gas, a different approach is needed for more distant galaxies because the gravitational sphere of influence of the black hole cannot be spatially resolved \citep[e.g.][]{ferrarese05}. In active galaxies, the small size of the BLR, estimated to be around $\sim 10^{14}$--$10^{16}$~m \citep{wandel99, kaspi00, bentz06}, inhibits direct imaging of the accretion disk and orbiting BLR clouds. Reverberation mapping provides a method to determine the black hole mass, along with the geometry and the kinematics of the BLR \citep{blandford82, peterson93, peterson04}. Without relying on spatially resolving the gravitational sphere of influence of the black hole, reverberation mapping provides a powerful tool for studying black holes over a range of redshifts and black hole masses \citep[e.g.][]{peterson04, woo07, bentz09, denney09}. The method relies on the large time-variability of AGN luminosity, spanning timescales of days to years \citep[e.g.][]{webb00}. Reverberation mapping data consist of a time series of frequent measurements of the intensity of the continuum and broad line emission. The line emission strength is assumed to be proportional to the continuum emission strength, but with a time lag due to the light travel time from the central ionizing source to the BLR material.
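The echo picture described above can be written as a convolution, $L(t) = \int \Psi(\tau)\, C(t-\tau)\, d\tau$, where $\Psi$ is the transfer function. A discrete sketch with an invented continuum light curve and an invented top-hat response (all numbers are illustrative, not measurements):

```python
import numpy as np

dt = 1.0                                              # days per sample (illustrative)
t = np.arange(0.0, 200.0, dt)
continuum = 1.0 + 0.3 * np.sin(2 * np.pi * t / 40.0)  # toy continuum light curve C(t)

# Top-hat transfer function: the BLR responds between 5 and 15 days.
tau = np.arange(0.0, 30.0, dt)
psi = np.where((tau >= 5.0) & (tau <= 15.0), 1.0, 0.0)
psi /= psi.sum() * dt                                 # normalize total response to 1

# Line light curve: discrete convolution L(t) = sum_tau psi(tau) C(t - tau) dt.
line = np.convolve(continuum, psi)[: t.size] * dt

# The "average lag" of traditional reverberation mapping is the first moment.
mean_lag = np.sum(tau * psi) * dt
assert abs(mean_lag - 10.0) < 1e-6   # center of the top-hat
```

A $\delta$-function $\Psi$ (face-on ring) or a step-function $\Psi$ (spherical shell) collapses this to the traditional single-lag picture, the implicit assumption discussed later in this section.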
The lag time between line and continuum flux contains information about the size of the BLR, while the shape of the spectral line encodes the velocity information. An estimate of the black hole mass can be calculated assuming the BLR clouds orbit in the Keplerian potential of the black hole with velocities determined by the width of the spectral line and at a radius given by the average lag between the line and continuum fluxes. A weakness of this traditional reverberation mapping method is that the relation between velocity and position observables of the clouds and the black hole mass depends on an unknown dimensionless proportionality constant that depends on the geometry and kinematics of the BLR. In practice, this so-called ``virial'' factor is estimated based upon some external criteria, such as the average factor that makes the $M_{BH}-\sigma_{\star}$ relation consistent between different black hole mass estimators and between samples of active and inactive galaxies \citep{onken04, collin06, woo10, greene10, graham10}. Ideally, we would like to infer directly the morphology and kinematics of the BLR, and the black hole mass, including its uncertainty \citep[for a discussion of potential systematic errors in reverberation mapping see][]{krolik01}. The models for the structure and kinematics may include a net inflow or outflow of BLR clouds, among other physically motivated models \citep[e.g.][]{murray95, marconi08}. Developing such a method is the goal of this paper. The data required for reverberation mapping encode the geometry and kinematic information to some degree, depending upon the quality, in the form of the transfer function (or response function) that maps the continuum emission onto the line emission. The average lag used to estimate black hole mass is the first moment of the transfer function. 
Previous analyses involved estimating the transfer function and then interpreting it in relation to a model of the BLR \citep[see][]{krolik95, done96, horne03, bentz10}. This is necessary because the transfer function is a function of time lag, not position within the BLR, so its interpretation requires a physical model. Estimating the transfer function requires inverting a linear integral equation; while the method of \citet{krolik95} and \citet{done96} uses regularized linear inversion, thus allowing for uncertainty estimation, other inversion methods such as ``maximum entropy'' do not allow for straightforward uncertainty estimation or model selection \citep[e.g.][]{horne03, bentz10}. Our method of analyzing reverberation mapping data simplifies the process of obtaining a transfer function and then interpreting the result using different models. We compare reverberation mapping data directly with models of the broad line region, obtaining uncertainty estimates as well as allowing for model selection. Once we have found models and model parameters that fit the data, we can easily compute the transfer function and average time lag. Our goal is to constrain the geometry and kinematics of the BLR and provide an internally consistent factor for the black hole mass. We note that the traditionally determined average time lag is exactly equivalent to a model where the BLR is a face-on ring of a given radius (response = $\delta$-function) or a spherical shell (response = step function). This implicit assumption drives the inference on the average lag and its result, as we will show in this paper. An important part of our method for directly modeling reverberation mapping data is that we must predict the AGN continuum light curve between the observations.
Recent work has found that AGN continuum light curves are well-modeled by a damped random walk: \citet{2009ApJ...698..895K} used a continuous time stochastic process; \citet{2010ApJ...708..927K} used the formalism of \citet{1992ApJ...385..404P} and \citet{1992ApJ...398..169R,1994comp.gas..5004R}. This model for AGN variability was applied to $\sim$900 AGNs by \citet{2010ApJ...721.1014M} in order to correlate variability with other parameters of AGNs. \citet{zu10} were then able to use this model for AGN variability to improve the standard analysis of reverberation mapping data, including a better understanding of the uncertainties involved. They model the continuum light curve using Gaussian Processes to recover the transfer function, assumed to be a top-hat. As with \citet{2010ApJ...708..927K}, they use an exponential covariance matrix to relate the continuum flux at different points in the time series. We also use Gaussian Processes to model the continuum light curve, as well as a slightly more general exponential covariance matrix. Our method improves upon the approach of \citet{zu10} by modeling the reverberation mapping data directly in terms of a geometric and dynamical model, rather than recovering the transfer function. \citet{bottorff97} have also modeled reverberation mapping data directly in an attempt to understand the BLR dynamics in the well-studied AGN NGC 5548. They expand upon the hydromagnetically driven outflow model of \citet{emmering92} and use one set of parameter values to compare their model with NGC 5548. While the specific models presented here are clearly not as sophisticated from a physical point of view, our method improves upon that approach by finding the best-fit parameter values of our simple models and believable estimates of their uncertainties.
We consider two types of reverberation mapping data sets: velocity-unresolved, where there is a time series of the continuum flux and a time series of the integrated line flux, and velocity-resolved, where the data consist of a continuum flux time series and a series of entire line spectra as a function of time. The paper is organized as follows. In \S\,\ref{sect_pic} we define and describe the physical problem. In \S\,\ref{sect_method} we outline our methods in the formalism of Bayesian probability theory and describe the algorithms we use to compare reverberation mapping data to mock data created from a model of the BLR. In \S\,\ref{sect_tests} we test our method using simple models of the BLR and show that we are able to recover the parameter values of our test systems. Finally, in \S\,\ref{sect_concl}, we summarize our conclusions. Flux units throughout the paper are arbitrary, but computed consistently within our method. \begin{figure*} \begin{center} \includegraphics[scale=0.8]{Figure1.eps} \caption{BLR clouds around the central ionizing source (central engine). The extra path length the light must travel from the central engine to the BLR cloud and then to the observer is the cause of the delayed response of the line flux.\label{diagram}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.75]{Figure2.eps} \end{center} \caption{Simulated continuum emission datapoints with examples of the continuum interpolated using Gaussian Processes. The dispersion of the lines represents the uncertainty of the recovered light curve. As expected, the uncertainty is greatest where there are no data points. The top panel shows the simulated data used throughout this paper, whereas the bottom panel shows an example with gaps in the data. Our procedure takes into account the amount of information available and therefore the recovered light curve suffers from a larger uncertainty during the gaps.
\label{continuum}} \end{figure*} \section{The Physical Picture} \label{sect_pic} Throughout this paper, we assume a simple model for the BLR, described as follows. The AGN is defined to be at $(0,0,0)$, and the observer is at $(d, 0, 0)$. We model the distribution of BLR gas by defining the gas density profile $\rho(x,y,z)$, assumed to be normalized such that \begin{equation} \int_V \rho(x, y, z) \, dV = 1 \end{equation} where $dV = dx \, dy \, dz$ and $V$ is all of space. We assume that the gas absorbs the ionizing radiation, but is not self-shielding, so that gas at larger radii is still illuminated. It should be noted that our approach is fully general and can support more complex models of the optical properties of the BLR, as well as its geometry and dynamics. \subsection{Velocity-Unresolved Reverberation Mapping} If the continuum flux varies with time according to $f_{\rm cont}(t)$, then the total line flux as a function of time is given by \begin{equation}\label{transfer} f_{\rm line}(t) = A \int_V \ f_{\rm cont}(t - l(x, y, z)) \rho(x, y, z) \, dV \end{equation} where $l(x, y, z)$ is the {\it lag}, or time delay, associated with BLR gas at position $(x, y, z)$, and $A$ is a response coefficient. The lag $l$ for each position is simply the excess light travel time from taking a path starting at $(0,0,0)$ that travels to some gas at $(x, y, z)$, where the light is absorbed and reemitted as line emission, and that finally travels to the observer, relative to a direct path straight from the AGN to the observer: \begin{eqnarray} l(x,y,z) &=& \left(\sqrt{x^2 + y^2 + z^2}\right. \nonumber \\ && + \left.\sqrt{(x - d)^2 + y^2 + z^2} - d\right)/c \end{eqnarray} For any case of interest, $d \gg \sqrt{x^2 + y^2 + z^2}$, and therefore this is well approximated by: \begin{equation} l(x,y,z) \approx \left(\sqrt{x^2 + y^2 + z^2} - x\right)/c \end{equation} which is the formula adopted throughout this paper. See Figure~\ref{diagram} for an illustration of this model.
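As a quick numerical check of the lag geometry, the exact and far-field expressions above can be compared directly. This is an illustrative sketch, not part of the paper's analysis; the BLR scale and observer distance used here are assumed values chosen only to be plausible.

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def lag_exact(x, y, z, d):
    """Excess light-travel time: AGN -> cloud -> observer, minus the direct AGN -> observer path."""
    r = np.sqrt(x**2 + y**2 + z**2)
    cloud_to_observer = np.sqrt((x - d)**2 + y**2 + z**2)
    return (r + cloud_to_observer - d) / C

def lag_far_field(x, y, z):
    """Approximation for d >> r adopted in the text: l = (r - x)/c."""
    return (np.sqrt(x**2 + y**2 + z**2) - x) / C

r_blr = 1.0e15   # illustrative BLR scale [m]
d = 1.0e24       # illustrative AGN-observer distance [m]

# A cloud directly between the AGN and observer (+x) has zero lag;
# a cloud directly behind the AGN has the maximum lag 2r/c.
lag_front = lag_far_field(r_blr, 0.0, 0.0)
lag_back = lag_far_field(-r_blr, 0.0, 0.0)
```

The two limiting cases make the geometry concrete: gas along the line of sight toward the observer responds instantly, while gas on the far side responds after the full round-trip delay.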
Note that Equation~\ref{transfer} is a special case of the general equation \begin{equation}\label{transfer2} f_{\rm line}(t) = A \int \Psi(\tau) f_{\rm cont}(t - \tau) \, d\tau \end{equation} where $\Psi(\tau)$ is the so-called {\it transfer function}, which gives the response of the line flux to a delta-function pulse in the continuum flux\footnote{For readers more familiar with image analysis, the transfer function is analogous to a PSF.}. Thus, for any particular system, if we can infer the density of BLR clouds throughout space, we can automatically deduce the corresponding transfer function: \begin{equation} \Psi(\tau) = \int_V \delta\left(\tau - l(x, y, z)\right)\rho(x, y, z) \, dV \end{equation} The meaning of this equation is that each location in space contributes to the transfer function at the value of the location's lag, with the size of the contribution being proportional to the amount of gas at that location. \subsection{Velocity-Resolved Reverberation Mapping} Now suppose that the BLR gas is in motion, such that the system can be described by a time-invariant distribution function $g$ defined over the phase space of a single particle: \begin{equation} g(x, y, z, v_x, v_y, v_z) = \rho(x, y, z)g(v_x, v_y, v_z | x, y, z) \end{equation} The motion of the gas along the line of sight is assumed to affect the wavelength of reemitted light, but its distribution function is assumed to be time invariant and therefore does not vary during the observing campaign. Then the emission line profile at time $t$ will be a function of the line of sight velocity, $v_{\rm los}$: \begin{eqnarray} f_{\rm line}(v_{\rm los},t) &=& A \int_{v_y, v_z} \int_V \ f_{\rm cont}(t - l(x, y, z)) \nonumber \\ && \times \, g(x, y, z, v_x, v_y, v_z) \, dx \, dy \, dz \, dv_y \, dv_z \end{eqnarray} where the line of sight is along the $x$ direction, so $v_{\rm los} = v_x$. This is the velocity-resolved equivalent of Equation~\ref{transfer}.
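The delta-function integral for $\Psi(\tau)$ can be realized numerically by sampling cloud positions from $\rho$ and histogramming their lags. The following sketch (all values illustrative) does this for a thin spherical shell, for which the transfer function should reduce to the top-hat (step-function) response mentioned in the introduction.

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]
rng = np.random.default_rng(42)

# Sample cloud positions uniformly on a thin shell of radius r0 = 10 light-days.
n = 100_000
r0 = 10.0 * 86400.0 * C                 # shell radius in metres
u = rng.uniform(-1.0, 1.0, n)           # cos(theta), uniform on the sphere
phi = rng.uniform(0.0, 2.0 * np.pi, n)
x = r0 * np.sqrt(1.0 - u**2) * np.cos(phi)
y = r0 * np.sqrt(1.0 - u**2) * np.sin(phi)
z = r0 * u

# Far-field lag l = (r - x)/c, converted to days.
lag_days = (np.sqrt(x**2 + y**2 + z**2) - x) / C / 86400.0

# A normalized histogram of lags approximates Psi(tau).
psi, edges = np.histogram(lag_days, bins=40, range=(0.0, 20.0), density=True)
```

Because the projection of a uniform spherical distribution onto any axis is uniform, the lags fill $[0, 2r_0/c]$ evenly and the recovered $\Psi(\tau)$ is flat, consistent with the step-function response of a shell.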
\section{Method} \label{sect_method} Our method for constraining the geometry and kinematics of the BLR is an application of Bayesian Inference \citep{sivia06}. In general, to infer parameters $\theta$ from data $D$, we begin by assigning a prior probability distribution $p(\theta)$ describing our initial uncertainty about the parameters. Sampling distributions $p(D|\theta)$ are also assigned to describe our uncertainty about how the data are related to the parameters. Once specific data $D=D^*$ are obtained, our updated state of knowledge about the parameters is described by the posterior distribution, given by Bayes' rule: \begin{equation} \label{eqn_posterior} p(\theta | D=D^*, I) \propto p(\theta|I) p(D|\theta, I)|_{D=D^*} \end{equation} Here $I$ is any background information we have about the problem. In complex problems, where $\theta$ consists of a large number of parameters, Monte Carlo methods are used to produce random samples from the posterior distribution for $\theta$. Methods such as Nested Sampling \citep{dnest} can also provide the normalization constant for the posterior, known as the evidence, which is the key quantity for comparing the entire model with an alternative \citep{sivia06}. In our method, the parameters $\theta$ to be inferred are those describing the spatial profile of the BLR gas, and the continuous continuum flux, $f_{\rm cont}(t)$. Since it is impossible to represent a continuum in a computer, we instead infer $f_{\rm cont}(t)$ evaluated at 500 time points, covering a time interval larger than the continuum data. The continuum modeling technique is described in detail in the next section. 
Throughout this paper, both the continuum flux and line flux timeseries are considered part of the dataset $D$: \begin{equation} D = \left\{ \mathbf{y}_{\rm line}, \mathbf{y}_{\rm continuum} \right\} \end{equation} The prior information consists of the times at which the line flux and continuum flux are measured, $\mathbf{t}$ and the error bars on the line flux and continuum flux measurements, $\bm{\sigma}$: \begin{equation} I = \left\{ \left(\mathbf{t}, \bm{\sigma}\right)_{\rm line}, \left(\mathbf{t}, \bm{\sigma}\right)_{\rm continuum}\right\} \end{equation} The likelihood function is chosen to be Gaussian, centered around the model-predicted line flux timeseries: \begin{equation} p(D|\theta) = \prod_{i=1}^n \frac{\exp\left[-\frac{1}{2}\left(\frac{y_{i, \rm line} - m_i(\theta)}{(\kappa\sigma_i)}\right)^2\right]}{(\kappa\sigma_i)\sqrt{2\pi}} \end{equation} where $\kappa$ is a ``noise boost'' parameter to account for the presence of unknown systematic effects not included in the reported error bars, such as those due to flux calibration, wavelength calibration, and continuum subtraction. Once the posterior distribution is obtained, many different algorithms are available for exploring it and computing summaries such as marginal distributions for parameters. We have implemented our model with two methods, the first is Metropolis-Hastings, a Markov-Chain Monte Carlo (MCMC) algorithm, which provides samples from the posterior PDF for the model parameters. The second is Diffusive Nested Sampling \citep{dnest}, which provides samples from the posterior PDF {\it and} an estimate of the evidence value for the model. Although the evidence calculation makes the second algorithm significantly slower than the first, Diffusive Nested Sampling is much faster than alternative MCMC-based implementations of Nested Sampling \citep{dnest}. 
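The Gaussian likelihood with the ``noise boost'' parameter can be written compactly; the sketch below is our illustrative implementation (the function and variable names are ours, not the paper's).

```python
import numpy as np

def log_likelihood(y_line, model_line, sigma, kappa):
    """Gaussian log-likelihood of the line-flux data, with the reported
    error bars sigma inflated by the noise-boost parameter kappa."""
    s = kappa * np.asarray(sigma)
    resid = (np.asarray(y_line) - np.asarray(model_line)) / s
    return float(np.sum(-0.5 * resid**2 - np.log(s * np.sqrt(2.0 * np.pi))))

# Toy check: for a perfect fit, inflating the error bars (kappa > 1)
# only lowers the likelihood through the normalization term.
y = np.array([1.0, 2.0, 3.0])
sig = np.array([0.1, 0.1, 0.1])
ll_perfect = log_likelihood(y, y, sig, kappa=1.0)
```

Working with the log of the likelihood, as here, avoids floating-point underflow when the product over many data points is evaluated inside an MCMC sampler.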
The results presented here to test the method use the MCMC algorithm, while the Diffusive Nested Sampling algorithm is used to apply the method to real reverberation mapping data \citep[][in prep]{brewer11}. \subsection{Continuum Interpolation} \label{sect_contint} In order to create a mock line flux time series to compare with the data, it is necessary to interpolate between the continuum flux datapoints. Linear interpolation is the simplest approach, but it does not provide an estimate of the uncertainty in the interpolation; it implicitly assumes that we know precisely the value of the continuum $f(t)$ at all times between the measured datapoints. If we want to obtain reliable uncertainties in our results, we should acknowledge the uncertainty introduced by the interpolation process. To account for this, we consider the entire continuum function $f_{\rm cont}(t)$ to be an unknown parameter to be inferred from the data. The prior distribution for $f_{\rm cont}(t)$ is a Gaussian Process \citep{2003itil.book.....M, rasmussen}, which is a convenient class of probability distributions over function space. Given a mean function $\mu(t)$ and a covariance function $C(t_1, t_2)$, the probability distribution for the function value $f$ at any finite set of times is a multivariate Gaussian: \begin{equation} p(\mathbf{f}|\mu, C) = \frac{1}{\sqrt{(2\pi)^n {\rm det }\mathbf{C}}} \exp \left(-\frac{1}{2}(\mathbf{f} - \bm{\mu})^T\mathbf{C}^{-1}(\mathbf{f} - \bm{\mu})\right) \end{equation} where $\bm{\mu}$ is a vector of means at the relevant time-points, and $\mathbf{C}$ is the covariance matrix, obtained by evaluating the covariance function at the relevant times. In the reverberation mapping problem, $f_{\rm cont}(t)$ is constrained by {\it two} data sets: the continuum measurements, and the line measurements.
We parameterize the covariance function and mean function with four hyperparameters: $\mu$ (the long-term mean), $\sigma$ (the long-term standard deviation), $\tau$ (typical timescale of variations), and $\alpha$ (a smoothness parameter between 1 and 2), such that the mean function is a constant $\mu(t) = \mu$ and the covariance function is \begin{equation} C(t_1, t_2) = \sigma^2 \exp\left[-\left(\frac{|t_2 - t_1|}{\tau}\right)^\alpha\right] \end{equation} The posterior distribution function for $f(t)$ given some continuum data (but not the line data) is shown in Figure~\ref{continuum}. Note that outside the areas where we have data, the uncertainty gets large, but in areas where the data are well sampled, the uncertainty in the interpolation is small. We keep track of $f(t)$ at 500 times, both slightly preceding and following the data. Further interpolation between these 500 points is linear. Using 500 continuum parameters is sufficient to render the distance between continuum flux points much smaller than the maximum monitoring cadence, allowing us to resort to linear interpolation only on scales not probed by the data. We change the 500 parameters in the same way as the model parameters, with every new proposal for the continuum function related to the one before. The function $f(t)$ can be parameterized by 500 variables with standard normal priors, which are converted to $f(t)$ values by multiplication with the Cholesky decomposition of $\mathbf{C}$. We note that our Gaussian Process method for interpolation, in the special case $\alpha=1$, is equivalent to the method of \citet{zu10}, apart from computational details. The choice $\alpha=1$ has also been used in detailed studies of quasar variability \citep[e.g.][]{2010ApJ...721.1014M}.
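The Cholesky parameterization described above can be sketched in a few lines: 500 standard-normal variables are mapped to a continuum draw through the Cholesky factor of $\mathbf{C}$. The grid span and the use of the hyperparameter values from the text (with $\tau = 6\times10^6$ s $\approx 69.4$ days) are illustrative choices of ours.

```python
import numpy as np

def gp_covariance(t, sigma, tau, alpha):
    """C(t1, t2) = sigma^2 exp(-(|t2 - t1| / tau)^alpha)."""
    dt = np.abs(t[:, None] - t[None, :])
    return sigma**2 * np.exp(-(dt / tau)**alpha)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 200.0, 500)                  # 500 grid times [days]
mu, sigma, tau, alpha = 75.0, 30.0, 69.4, 1.5     # hyperparameters as quoted in the text
Cmat = gp_covariance(t, sigma, tau, alpha)

# A small diagonal jitter keeps the Cholesky factorization numerically stable.
L = np.linalg.cholesky(Cmat + 1e-6 * sigma**2 * np.eye(len(t)))
u = rng.standard_normal(len(t))   # 500 standard-normal parameters
f = mu + L @ u                    # one prior draw of the continuum light curve
```

In an MCMC run one perturbs the standard-normal vector $u$ rather than $f$ itself, so that proposals automatically respect the Gaussian Process prior.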
\subsection{Creating Mock and Simulated Data} Given the phase-space density for the BLR gas and the continuous continuum light curve, we can easily create a mock line flux timeseries by adding together the line flux from all the gas, which is proportional to the continuum flux at the respective lag of the gas. The resulting mock line flux timeseries can then be compared to the reverberation mapping data and does not depend on the kinematics of the gas. If we include the velocity information of the gas, we can create a mock spectrum for each point in the timeseries. In order to create a mock spectrum, we make a flux-weighted histogram of the amount of gas at each line-of-sight velocity, using the same velocity resolution as the data. We then convolve the histogram with a Gaussian whose width is defined by a combination of thermal broadening and instrumental resolution. The mock spectrum can then be compared to the reverberation mapping spectral data and depends on the kinematics of the gas. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.9]{Figure3.eps} \end{center} \caption{Example spatial distributions of the broad line emitting gas that can be recovered by our generic geometric model. They include a ring/disk (top panel), a spherical shell (middle panel), and a spherical Gaussian distribution (bottom panel). \label{fig_geomodels} } \end{figure} \section{Illustration and Tests Using Simple Models} \label{sect_tests} In order to illustrate our method, we have developed simple models of the BLR geometry and dynamics. As this method is fully general, it is also possible to implement more complex models within the framework described so far. We showcase these simple models by creating simulated data with known true parameter values in our models. This allows us to test our code as well as to explore the accuracy and precision of the results obtainable by this method of reverberation mapping analysis.
Such tests on simulated data also allow us to ascertain the data quality needed to perform inferences regarding increasingly complicated model parameters. We showcase both geometry-only and geometry plus kinematics models, where the latter are the same as the first with the addition of velocity information given to the BLR gas. We show transfer functions for the geometry models and velocity-resolved transfer functions for the kinematics models. To verify that our method recovers the true parameter values, we save instances of each model and use them as simulated reverberation mapping data, adding noise and varying the timeseries characteristics to match reverberation mapping campaigns of varying quality. A simulated dataset consists of line flux and continuum flux measurements. Given a BLR model, the continuous continuum light curve is all that we need to create, since the mock line flux measurements can be obtained from the model and continuous continuum light curve. We create continuous continuum light curves by using the hyperparameters of the Gaussian Processes continuum interpolation. The hyperparameters contain information about the timescales and levels of variability in an AGN continuum timeseries. We use values for the hyperparameters from interpolation of the Lick AGN Monitoring Project \citep[LAMP;][]{walsh09,bentz09} continuum timeseries of Arp 151, one of the most variable AGNs in the LAMP sample. The values used for the hyperparameters were $\mu = 75$ (arbitrary units), $\sigma = 30$ (same units as $\mu$), $\tau = 6 \times 10^6$ seconds, and $\alpha = 1.5$ (dimensionless). \subsection{Geometry Model: Ring/Disk/Shell} \subsubsection{Model definition} We use a flexible geometry model of the BLR gas density to test our method when only integrated line flux measurements are used instead of the full spectral shape.
The model is that of a spherical shell centered on the central engine with parameters allowing partial, axisymmetric illumination of the shell and varying inclination of the resulting ring/disk. Examples of possible configurations, ranging from a complete shell to a thin ring/disk, are shown in Figure~\ref{fig_geomodels}. The parameters of the model are the mean radius of the disk, $r_{0}$, the thickness of the disk in the radial direction, $\sigma_r$, the illumination angle of the shell, and the inclination of the shell. The illumination angle is defined so that values approaching 0 define an increasingly thin ring/disk and a value of $\pi/2$ defines a spherical shell. The inclination angle is defined so that values approaching 0 define a face-on ring/disk and a value of $\pi/2$ defines an edge-on ring/disk. We use a normal distribution to define the radial thickness of the shell, so that $r_{0}$ and $\sigma_r$ are the average and $1\sigma$ width of a normal distribution. The normal distribution is created in the $x$, $y$, and $z$ Cartesian coordinates. It is important to set appropriate prior probability distributions for each model parameter. For parameters where we know the order of magnitude of the parameter value, we use a flat prior in the parameter. Examples of parameters with flat priors in the parameter include the inclination angle and the illumination angle, which may only vary between 0 and $\pi/2$. For parameters where we do {\it not} know the order of magnitude of the parameter value, we need a prior that treats many orders of magnitude equally, so we use a flat prior in the log of the parameter. Examples of parameters with flat priors in the log of the parameter include $r_{0}$ and $\sigma_r$. These choices of prior probability express ignorance about the value of a parameter within some reasonable range, but it is necessary to make the distinction between whether or not the order of magnitude of a parameter value is known.
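The two prior choices above can be sampled directly; in this sketch the numerical bounds on $r_0$ are illustrative assumptions, not the prior ranges actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Known order of magnitude: flat prior in the parameter itself,
# e.g. inclination or illumination angle on [0, pi/2].
inclination = rng.uniform(0.0, np.pi / 2.0, n)

# Unknown order of magnitude: flat prior in the log of the parameter,
# e.g. r0 between assumed bounds of 1 and 1000 light-days.
r0_samples = np.exp(rng.uniform(np.log(1.0), np.log(1000.0), n))
```

Under the log-flat prior each decade carries equal probability mass: $r_0$ falls in $[1,10)$, $[10,100)$, and $[100,1000]$ light-days about one third of the time each, which is what ``treating many orders of magnitude equally'' means in practice.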
In the cases considered in the remainder of this paper, the posterior is much narrower than the prior, and therefore the inference is dominated by the likelihood, i.e., the information contained in the data. The underlying spherical symmetry of these models and the angular dependence of the ring/disk model allow us to use spherical coordinates. In order to sample the gas phase-space density at a finite number of points, we use a grid in $\log(r)$, $\phi$, and $\cos(\theta)$. Using equal steps in $\cos(\theta)$ instead of $\theta$ means that the volume of each grid point depends only on the radius, $r$. The density is then multiplied by the volume of the grid point to find the total mass of gas in each grid point. The emissivity of each grid point also depends on the radius $r$ because the continuum ionizing radiation flux falls off as $r^{-2}$, requiring more gas mass at larger radii to have the same line flux contribution as less gas mass at smaller radii. In general, the illumination parameter allows us to model any axisymmetric ionizing flux. We test our method to recover the BLR model parameters by creating simulated data, the true parameter values of which are given in Table~\ref{table_simdata}. The continuous continuum function is obtained using the hyperparameters from the Gaussian Processes interpolation of Arp 151 reverberation mapping data, as described in Section~\ref{sect_contint}, and evaluated at 120 consecutive ``observations'' one day apart. The line flux timeseries for each model are generated using this continuous continuum function and a given set of model parameters. The line flux timeseries contain 60 ``observations'' one day apart, starting 60 days after the start of the continuum flux ``observations''. These simulated data are meant to represent excellent reverberation mapping data, with an observation campaign of similar length to recent campaigns \citep[see e.g.][]{bentz09}, but without gaps due to difficult weather conditions.
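The grid construction and weighting just described can be sketched as follows. The grid dimensions match those quoted for the posterior runs; the radial bounds are illustrative assumptions of ours.

```python
import numpy as np

# Grid in log(r), phi, cos(theta): cell volumes then depend on r only.
n_r, n_phi, n_cos = 60, 40, 60
log_r = np.linspace(np.log(1.0), np.log(100.0), n_r)  # radii in light-days (assumed bounds)
r = np.exp(log_r)
d_log_r = log_r[1] - log_r[0]
d_cos = 2.0 / n_cos
d_phi = 2.0 * np.pi / n_phi

# dV = r^2 dr d(cos theta) dphi, and dr = r dlog(r) on a log grid, so dV ~ r^3.
cell_volume = r**3 * d_log_r * d_cos * d_phi

# The ionizing flux falls as r^-2, so the line-flux weight per unit gas density ~ r.
flux_weight = cell_volume / r**2
```

The $r^{-2}$ dilution partially cancels the growth of the cell volume, leaving a net weight that grows only linearly with radius on the logarithmic grid.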
Additional noise has also been added to the simulated data. Most simulated datasets have line flux errors of 1.5\%, which represents very favorable observing conditions, but we have also tested simulated data with errors of 5\% to reflect the current typical error of reverberation mapping line flux measurements. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.3]{Figure4a.eps} \includegraphics[scale=0.3]{Figure4b.eps} \includegraphics[scale=0.3]{Figure4c.eps} \includegraphics[scale=0.3]{Figure4d.eps} \end{center} \caption{Posterior probability distributions for face-on disk geometry model parameters of simulated data 4 (see Table \ref{table_simdata}) with 1.5\% line flux uncertainty. Top to bottom: $r_{0}$, $\sigma_r$, inclination angle, and illumination angle. The inclination angle and illumination angle both have a resolution given by the grid in $\cos\theta$. The true value for each parameter in this model is shown by the vertical red line and the grid in $\cos\theta$ is shown along the x-axis with green crosses for the angular parameters. The grid used to create these posterior distributions is 60 steps in $\log(r)$, 40 steps in $\phi$, and 60 steps in $\cos\theta$. \label{fig_pdf10}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.3]{Figure5a.eps} \includegraphics[scale=0.3]{Figure5b.eps} \includegraphics[scale=0.3]{Figure5c.eps} \includegraphics[scale=0.3]{Figure5d.eps} \end{center} \caption{Posterior probability distributions for shell geometry model parameters of simulated data 5 (see Table \ref{table_simdata}) with 1.5\% line flux uncertainty. Top to bottom: $r_{0}$, $\sigma_r$, inclination angle, and illumination angle. The true values for the parameters and the grid points are shown as in Figure\,\ref{fig_pdf10}. Note that since the simulated data is spherically symmetric, it should not strongly prefer an inclination angle, and thus no true parameter value is shown in the inclination angle pdf. 
\label{fig_pdf11}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.45]{Figure6a.eps} \includegraphics[scale=0.45]{Figure6b.eps} \end{center} \caption{Joint posterior probability distributions for inclination and illumination angles for face-on disk with 1.5\% line flux uncertainty (simulated data 4) and shell with 1.5\% line flux uncertainty (simulated data 5). The true parameter values are shown by (top) the black cross and (bottom) the black dashed line. \label{fig_pdf}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.45]{Figure7a.eps} \includegraphics[scale=0.45]{Figure7b.eps} \end{center} \caption{Timeseries for face-on disk (simulated data 4, top panel) and shell (simulated data 5, bottom panel), both with 1.5\% line flux uncertainty. Simulated data are shown in blue with error bars and the mock data from a random set of parameter values sampled from the posterior is shown in red. The continuum light curve used to create these line light curves is shown in Figure\,\ref{continuum}. \label{fig_timeseries}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.45]{Figure8a.eps} \includegraphics[scale=0.45]{Figure8b.eps} \end{center} \caption{Velocity-unresolved transfer functions for face-on disk (simulated data 4) and shell (simulated data 5), both with 1.5\% line flux uncertainty. The same grid was used to make these transfer functions as was used to obtain the posterior probability distributions shown in Figures\,\ref{fig_pdf10} and \ref{fig_pdf11}. \label{fig_transfer}} \end{figure} \subsubsection{Testing the geometry model} The first test is whether we can recover the parameter values of the simulated data using the MCMC algorithm described in Section~\ref{sect_method}. Since our one flexible geometry model encompasses a number of different geometries, such as a shell, thin or thick ring or disk, we do not have to consider model selection at this point. 
We test the many possible geometries of this model by creating five simulated datasets, whose true parameter values are given in Table~\ref{table_simdata}. The simulated datasets include an inclined disk with line flux errors of 1.5\% and 5\% and an edge-on disk, a face-on disk, and a shell with line flux errors of 1.5\%. The MCMC algorithm is typically run for 150,000 iterations and all parameter values are recovered to within two standard deviations of the posterior probability distributions of the parameters, with 10/13 recovered to within one standard deviation. This is as expected, since we should find the true parameter value to lie within $1\sigma$ about $\sim 68\%$ of the time and to lie within $2\sigma$ about $\sim 95\%$ of the time. The mean and standard deviation of the posterior distributions are given in Table~\ref{table_simresults}, with the exception of many of the angular parameters, where the quoted $1\sigma$ uncertainty does not adequately describe the posterior distribution. Part of the reason the standard deviation of the angular parameters fails to describe the posterior is the uneven step size in $\theta$: values of the illumination angle close to $\pi/2$ and values of the inclination angle close to 0 radians have much poorer angular resolution. This might lead to an angular parameter being quoted as having a mean of 1.22 radians and a $1\sigma$ uncertainty of 0.29, as for the illumination angle of the shell model simulated data, but while this uncertainty may seem large, it corresponds to an uncertainty of only 1--2 grid points in $\theta$. The posterior distributions for the face-on disk and shell simulated data are shown in Figures~\ref{fig_pdf10} and \ref{fig_pdf11}. Select joint probability distributions between the inclination and illumination angles are also shown in Figure~\ref{fig_pdf} in order to show the degeneracies between different models.
In particular, for the shell model, the inclination is not constrained unless the illumination angle is small, or rather, unless the sphere of BLR gas is not entirely illuminated. The posterior pdfs for the five simulated datasets show that the edge-on disk, face-on disk, and shell geometries allow for excellent recovery of the parameter values with estimates of the uncertainty. For the two inclined disk simulated datasets, there is some degeneracy in the angular parameters, leading to large uncertainties in their average values. The MCMC algorithm finds a more likely geometry configuration than the true configuration for the inclined disk datasets, although the true configuration is still a valid possibility with posterior local maxima at the true parameter values. With the increased simulated line flux error from 1.5\% to 5\% however, it becomes increasingly difficult to recover the angular parameters, and only the mean radius is recovered with a small enough uncertainty as to be useful in describing the BLR. This emphasizes the importance of obtaining high quality line flux data in reverberation mapping campaigns. The timeseries and transfer functions for the face-on disk and shell MCMC geometry model tests are shown in Figures~\ref{fig_timeseries} and \ref{fig_transfer}, respectively. The timeseries show the simulated data overlaid with mock data created with parameters sampled randomly from the posterior probability distributions. The fit of the mock data to the simulated data is excellent for all five models. The variety in the shape of the simulated data timeseries, all well-fit by their respective models, shows that the MCMC algorithm for model parameter value recovery is robust for a wide range of models. The transfer functions also show a variety of shapes. For a thin shell geometry, thinner than the shell of simulated dataset 5, our resulting transfer function agrees with the analytic form of a tophat function \citep[see][]{peterson93}. 
\begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{Figure9.eps} \caption{Sketch of the dynamical model. The angular momentum vector ${\mathbf L}$ defines the plane of the orbits. Owing to cylindrical symmetry, for each value of $\theta_0$ we consider the entire family of ${\mathbf L}$ generated by rotation around the z-axis. The observer is assumed to be in the x-z plane, at angle $\theta_i$ from the z-axis. \label{kinematics_diagram}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.55]{Figure10.eps} \caption{Illustration of the combined constraints given by the illumination function and by the dynamical model. The red line shows an example of the distribution of illuminated BLR gas mass assuming a uniform underlying density. The blue line shows the actual underlying mass distribution as constrained by the dynamical model. The resulting effective distribution of illuminated mass, consistent with both the geometry and dynamical constraints is given by the product of the two functions, shown in black. \label{dynamics_illumination}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=1.3]{Figure11.eps} \caption{Example spectra from three simulated datasets: (top) face-on disk with orbits confined to the disk, (middle) face-on disk with isotropic distribution of orbit orientations, and (bottom) spherical distribution with isotropic distribution of orbit orientations. The instrumental resolution of the simulated spectra is FWHM $\sim$ 800\,km\,s$^{-1}$. The bottom spectrum for a spherical distribution of orbits is wider than for a face-on disk because the spherical distribution allows for orbits to move directly along the line of sight, while the face-on disk only results in a small component of the BLR gas velocity lying parallel to the line of sight. The width of the spectral line is thus directly connected to both the opening angle of the disk and the inclination angle. 
\label{fig_spectra}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.4]{Figure12a.eps} \includegraphics[scale=0.4]{Figure12b.eps} \includegraphics[scale=0.4]{Figure12c.eps} \caption{Posterior pdfs for the first dynamical simulated dataset: face-on disk with the orbits confined to the disk. (Top) black hole mass, (middle) the average radius of the BLR gas mass, and (bottom) the average width of the BLR gas mass. \label{fig_dyn1a}} \end{center} \end{figure} \subsection{Dynamical Model} \subsubsection{Model definition} In order to constrain the kinematics of the BLR and the mass of the central black hole, we must model the velocity distribution of the BLR gas in the context of a dynamical model. For simplicity of illustration and speed of computation, we consider here a cylindrically symmetric model where the BLR gas is considered to be made of test particles in bound orbits within the spherical Keplerian potential of the black hole. We parameterize the model in terms of energy and angular momentum, constants of the BLR gas motion, so we are guaranteed velocity and geometry distributions that do not evolve in time, and are therefore stationary during the monitoring campaign. In future papers we will generalize the model to include unbound orbits to describe inflows and outflows, and also other physical mechanisms, such as radiation pressure or winds \citep{marconi08, netzer10}. The model is illustrated in Figure~\ref{kinematics_diagram}. For any choice of angular momentum ${\mathbf L}$, energy $E$ and black hole mass $M_{\rm BH}$, the motion of the BLR test particles is then described by the standard conservation equation resulting in elliptical orbits in the plane perpendicular to the angular momentum. Given our cylindrical symmetry we will consider families of angular momenta obtained by rotation around the z-axis and defined by the polar angle $\theta_0$ (see Figure~\ref{kinematics_diagram}). 
The spatial density of the BLR is then given by \begin{equation} P(r, \theta, \phi | E, L, \theta_0) \propto \frac{1}{v} \times \frac{1}{| \sqrt{\sin^2\theta_0 - \cos^2\theta}|}, \end{equation} where the angular term comes from integrating over the uniform distribution of azimuthal angle $\phi_0$ of the angular momentum vector, and $v$ is the total magnitude of the velocity vector: \begin{equation} v = \sqrt{2E + \frac{2GM_{BH}}{r}}. \end{equation} Owing to the symmetry of our model we can consider only $\theta_0<\pi/2$ (i.e. $L_z>0$), obtaining the following limits on the allowed $\theta$ coordinate for the BLR: \begin{eqnarray} \frac{\pi}{2}-\theta_0 < \theta < \frac{\pi}{2}+\theta_0. \end{eqnarray} As $\theta_0$ approaches zero, the model represents a thin disk, while as $\theta_0$ approaches $\pi/2$ the model covers the whole sphere. Conservation of energy and angular momentum limits the radial coordinate to the range: \begin{equation} r>-\frac{GM_{BH}}{2E} - \frac{1}{2} \sqrt{\left(\frac{GM_{BH}}{E}\right)^2 + \frac{2L^2}{E}}, \end{equation} \begin{equation} r< -\frac{GM_{BH}}{2E} + \frac{1}{2} \sqrt{\left(\frac{GM_{BH}}{E}\right)^2 + \frac{2L^2}{E}}. \end{equation} Finally, $E$ and $L$ are connected by the usual condition: \begin{equation} |L| \leq \frac{GM_{BH}}{\sqrt{-2E}}. \end{equation} For every allowed value of $r$, $\theta$, and $\phi$, the component of the velocity vector along the line of sight can be computed in the standard manner, resulting in two solutions per position, in general (outbound and inbound; four if one considers $L_z<0$ as well). More complex geometries and kinematics can be obtained by superpositions of multiple sets of $E$, $L$, and $\theta_0$ values within the same potential given by $M_{\rm BH}$. However, this further increases the dimensionality of parameter space and computing time. Therefore in this example we will only use one such set. 
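The radial limits and the $E$--$L$ constraint above can be checked numerically; the following sketch (Python, with the hypothetical normalization $GM_{\rm BH}=1$; function names are ours) evaluates the allowed radial range and the speed $v$, and verifies that at the maximum allowed $|L|$ the orbit collapses to a circle:

```python
import math

def radial_limits(E, L, GM=1.0):
    """Allowed radial range [r_min, r_max] for a bound orbit (E < 0),
    from the two conservation limits quoted above."""
    assert E < 0, "orbit must be bound"
    assert abs(L) <= GM / math.sqrt(-2 * E), "|L| exceeds the circular-orbit limit"
    center = -GM / (2 * E)
    half = 0.5 * math.sqrt((GM / E) ** 2 + 2 * L ** 2 / E)
    return center - half, center + half

def speed(E, r, GM=1.0):
    """Total speed v = sqrt(2E + 2GM/r)."""
    return math.sqrt(2 * E + 2 * GM / r)

E = -0.5
L_max = 1.0 / math.sqrt(-2 * E)   # |L| <= GM / sqrt(-2E)
r_lo, r_hi = radial_limits(E, L_max)
# At maximum angular momentum the orbit is circular, at r = -GM/(2E)
```

For smaller $|L|$ at the same energy, the two limits separate and the orbit fills an eccentric annulus.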
We apply prior probability distributions to the model parameters as described for the geometry model. The priors for the extra parameters in the dynamical model that are not part of the geometry model are as follows. The parameter $\theta_0$ has a flat prior ranging from 0 to $\pi/2$. The parameters $M_{\rm BH}$, $E$, and $L$ have flat priors in the log of the parameter. In addition, in order to impose a BLR gas geometry, we model the distribution of {\it illuminated gas} as the product of the spatial distribution given by the dynamical model with that imposed by one of our geometrical models, representing in this case the illumination function. This results in a broad range of geometries, giving the model considerable flexibility (for example, in the future one could consider an anisotropic illumination function to model dust obscuration). The procedure is illustrated in Figure~\ref{dynamics_illumination}. Note that the radial distribution of the illuminated gas is no longer gaussian, as it is in the ring/disk/shell geometry model. The mean radius is then not the $r_{0}$ parameter of the geometry model, but must be computed numerically for each set of geometric and dynamical parameters. Similarly, the mean width is no longer $\sigma_r$ and must be computed numerically. A model spectrum at a given time is obtained by summing all the line of sight velocities, weighted by the spatial density of illuminated gas multiplied by the continuum flux at an epoch corresponding to the appropriate lag-time. In order to compare with real data, the model spectrum is then convolved with a gaussian to represent instrumental broadening. Since we do not expect real data to match our model perfectly, we introduce a relatively large uncertainty in the form of the spectral line by adding gaussian noise with a variance of $\sigma^2(F) = \alpha\,F + \beta$, where $\alpha=0.00018$ and $\beta = 0.025$.
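A minimal sketch of this heteroscedastic noise model (Python, standard library only; function names are ours, and fluxes are in the paper's arbitrary units):

```python
import random

ALPHA = 0.00018  # flux-dependent term (units of flux)
BETA = 0.025     # continuum-uncertainty term (units of flux^2)

def line_sigma(flux, alpha=ALPHA, beta=BETA):
    """Standard deviation of the added gaussian noise: sigma^2(F) = alpha*F + beta."""
    return (alpha * flux + beta) ** 0.5

def add_noise(flux, rng=random):
    """Perturb a model line flux with the heteroscedastic gaussian noise."""
    return flux + rng.gauss(0.0, line_sigma(flux))

def snr(flux):
    """Signal-to-noise ratio implied by the variance model."""
    return flux / line_sigma(flux)
```

Because $\beta$ dominates at low flux, the signal-to-noise ratio grows with the line flux, reaching the quoted level of a few for fluxes of order unity.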
This model for the variance assumes both a dependence on spectral line flux $F$ through the $\alpha$ parameter and a dependence on the continuum uncertainty through the $\beta$ parameter. The units of $\alpha$ are flux and the units of $\beta$ are flux$^2$. The specific values of $\alpha$ and $\beta$ are related to the arbitrary flux units of our simulated spectra and result in a signal-to-noise ratio of $\sim 4$. Conservatively, this signal-to-noise ratio is lower than that typically achieved in state-of-the-art spectral monitoring campaigns. Examples of synthetic spectra at a resolution of FWHM$=13.1$\,\AA, or $\sim 800$\,km\,s$^{-1}$ at the wavelength of H$\beta$, are shown in Figure~\ref{fig_spectra}. The face-on disk systems (top and middle panels of Figure~\ref{fig_spectra}) have velocity bins of $\sim$120\,km\,s$^{-1}$ while the sphere system (bottom panel) has velocity bins of $\sim$20\,km\,s$^{-1}$. Notice how the line shapes are clearly different even for models with the same black hole mass. This is a clear illustration of the power of velocity resolved reverberation mapping as a diagnostic of the BLR geometry as well as kinematics. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.4]{Figure13a.eps} \includegraphics[scale=0.4]{Figure13b.eps} \includegraphics[scale=0.4]{Figure13c.eps} \caption{Posterior pdfs for the first dynamical simulated dataset: face-on disk with the orbits confined to the disk. (Top) inclination angle, (middle) $\theta_0$, and (bottom) the joint pdf of $\theta_0$ and the illumination angle. Notice in the joint pdf that $\theta_0$ may only be larger than $\sim 0.3$\,radians when the illumination angle is $\sim 0.3$\,radians, so the angular extent of the disk is well determined. \label{fig_dyn1b}} \end{center} \end{figure} \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.4]{Figure14a.eps} \includegraphics[scale=0.4]{Figure14b.eps} \includegraphics[scale=0.4]{Figure14c.eps} \caption{Posterior pdfs for the second dynamical simulated dataset: face-on disk with the orbits in the entire sphere. (Top) black hole mass, (middle) the average radius of the BLR gas mass, and (bottom) the average width of the BLR gas mass. \label{fig_dyn2}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.4]{Figure15a.eps} \includegraphics[scale=0.4]{Figure15b.eps} \includegraphics[scale=0.4]{Figure15c.eps} \caption{Posterior pdfs for the third dynamical simulated dataset: sphere configuration with orbits allowed in the entire sphere. (Top) black hole mass, (middle) the average radius of the BLR gas mass, and (bottom) the average width of the BLR gas mass. \label{fig_dyn3}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=1.3]{Figure16.eps} \caption{Velocity-resolved transfer functions for the three dynamical simulated datasets: (top) face-on disk with orbits confined to the disk, (middle) face-on disk with orbits allowed in entire sphere, and (bottom) sphere configuration with orbits allowed in entire sphere. The red crosses show the response weighted mean lag in 10 velocity bins across the spectra. \label{fig_vres_transfunct}} \end{center} \end{figure} \subsubsection{Testing the dynamical model} We test our dynamical model by creating simulated datasets consisting of timeseries of the continuum flux and of the line profiles of a broad line. The line profiles of the simulated datasets are shown in Figure~\ref{fig_spectra}. The kinematics parameters $E$ and $L$ are initially chosen to satisfy nearly circular orbits of the BLR gas at the mean radius given by the illumination function. A disk of broad line emitting material can be constrained by either the illumination function or the value of $\theta_0$.
The first simulated dataset is a thin disk viewed nearly face-on, with dynamics imposed by a single value of energy and angular momentum. The thin disk is constrained by the value of $\theta_0$, while the illumination function describes the whole sphere being illuminated. This means that all allowed orbits lie in the disk and that the rest of the sphere does not contain broad line emitting gas. The second simulated dataset is also a thin disk viewed nearly face-on with a single value of energy and angular momentum, but for this case the illumination function constrains the disk. We choose a value of $\theta_0$ close to $\pi/2$ so that orbits are allowed in the entire sphere. The third simulated dataset is a fully illuminated sphere with orbits that are also allowed in the entire sphere, so again $\theta_0$ is close to $\pi/2$. This is still an axisymmetric configuration, as the BLR gas density imposed by the kinematics depends upon the $\theta$-coordinate. The true parameter values of the three simulated datasets used to test the kinematics model are shown in Table~\ref{table_simdyndata}. We test each of the three simulated datasets assuming only one set of kinematics parameters. The parameter values inferred using our method are shown in Table~\ref{table_simdynresults}, while the full posterior pdfs are shown for all parameters of interest for the first simulated dataset in Figures~\ref{fig_dyn1a} and \ref{fig_dyn1b}. The posterior pdfs for the black hole mass, average radius of BLR gas mass, and average width of the BLR gas mass are also shown for the second and third simulated datasets in Figures~\ref{fig_dyn2} and \ref{fig_dyn3}. They show that the black hole mass, average radius, and average width of the BLR are well determined for all three simulated datasets. The angular parameters are also well determined when physically possible. 
For example, for the first dynamics simulated dataset of a face-on disk with orbits confined to the disk, the inclination angle and $\theta_0$ are determined to within one or two grid points, while the illumination angle is only constrained to be $\gtrsim 0.3$\,radians. The illumination angle cannot be determined more accurately because the BLR gas emission only comes from the disk, so as long as the entire disk is illuminated the spectrum is not sensitive to further changes in the illumination angle. Finally, we also compute the velocity-resolved transfer functions for the three simulated datasets, shown in Figure~\ref{fig_vres_transfunct}. As expected, the transfer functions for the face-on disk configurations show little response at very small lags, while the sphere configuration shows the highest intensity of response at small lags. The transfer functions for the face-on disk configurations are similar, but clearly lead to different line profiles, again illustrating the power of modeling the full dataset rather than just trying to model the transfer function. \section{Summary and Conclusions} \label{sect_concl} We introduce and test a new method for analyzing reverberation mapping data of AGN by directly modeling the BLR. We illustrate our method by creating simple geometry and dynamical models of the BLR. Using a model of the BLR geometry to reproduce the integrated line flux timeseries from reverberation mapping data allows us to estimate the average radius of the BLR, as well as the mean width, illumination function, and inclination angle to the line of sight. Models of the BLR that include geometry {\it and} dynamical information allow us to additionally estimate the black hole mass and obtain an estimate of the extent to which the BLR gas orbits are confined to a disk or the whole sphere. Our method of analysis provides several advantages over previous methods. 
First, previous methods rely upon cross-correlation to obtain a mean radius for the BLR and a virial relation with unknown virial coefficient to obtain an estimate of the black hole mass. Our method estimates the black hole mass self-consistently, without the need for a virial coefficient. Second, work modeling reverberation mapping data has previously focused on modeling the velocity-resolved or unresolved transfer function. However the implications for the geometry and kinematics of the BLR are not clear for such analysis, as the transfer function is a function of the lag between the continuum and line emission. Instead of modeling the transfer function and then interpreting the transfer function in terms of a geometrical or dynamical model of the BLR, we focus on modeling the BLR directly. This allows us to extract more information and thus constrain the models more tightly. Finally, our fast method provides estimates of the uncertainty in the model parameter values and can be used with numerical algorithms such as Nested Sampling that allow for model selection. Our main results can be summarized as follows: \begin{enumerate} \item We create simulated datasets using the geometry model with known true parameter values and find that we can recover these values with uncertainties that depend upon the random uncertainty of the reverberation mapping data. We can recover the mean radius of the BLR to within $\sim0.1$\,dex and the mean width of the BLR to within $\sim 0.2$\,dex for simulated data with an integrated line flux uncertainty of $1.5$\%. We can also place constraints on the inclination and illumination with uncertainties of $\sim 0.2$\, radians for simulated data with face-on and spherical geometry configurations and $1.5$\% integrated line flux uncertainty. Current integrated line flux uncertainties of about $\sim 5$\% are on the edge of what would allow for successful recovery of more than just a mean radius for the BLR. 
\item We create simulated datasets using the dynamical model that consist of timeseries of a broad line profile and we compare them to mock spectra made using our model. Despite the larger number of free parameters in our dynamical model, we find that we can recover all the physically recoverable parameters, because the line profile is a stronger constraint on the model than the integrated line flux. We can recover the black hole mass and the mean radius of the BLR to within $\sim0.05$\,dex, for simulated data with a line profile signal-to-noise ratio of $\sim4$ per spectral pixel. We can also recover the mean width of the BLR to within $\sim0.1$\,dex and the inclination angle and illumination angle to within $\sim 2$ grid spacings over which the BLR density is defined. \end{enumerate} The small random uncertainties obtained in our tests of the simple geometry and dynamical models are partly due to the inherent assumption that our simulated data are drawn directly from the set of possible model configurations. In order to simulate the expected systematic error in applying simple models to complicated real BLR systems, we have added substantial Gaussian noise to instances of the model in order to create our simulated datasets. The timeseries of line profiles, in the case of the dynamical model, is very constraining, and reduces the random uncertainty in the mean radius and mean width of the BLR by a factor of two for the dynamical model, as compared to the geometry model. When applying the method to real data we expect larger uncertainties, owing to modeling errors. The uncertainties quoted here should therefore be considered as lower limits to the overall precision of the method for data of comparable quality. This emphasizes the importance of good quality data {\it and} increasingly more realistic models for recovering detailed information about the BLR from reverberation mapping data.
While we have created and tested both simple geometry and dynamical models, our method is more general, allowing for use of any geometry or dynamical model that can be simply parameterized. We plan to expand our library of models to include inflowing or outflowing BLR gas, which may be needed to explain some of the line profile asymmetries of current reverberation mapping data. \acknowledgments We thank the referee for helpful comments. We thank our friends and collaborators in the LAMP 2008 project for many insightful conversations. We are grateful to Chris Kochanek and Vardha Nicola Bennert for helpful suggestions on the manuscript. We acknowledge support by the NSF through CAREER award NSF-0642621, and by the Packard Foundation through a Packard Fellowship. AP also acknowledges support by the NSF through the Graduate Research Fellowship Program.
\section{Introduction}\label{sec:introduction} {\color{black} Blockchain-based technologies have been widely adopted by applications that touch every corner of our daily life, such as cryptocurrencies, tokenomics, business applications, and Internet-of-Things (IoT) applications. Decentralized Autonomous Organization (DAO), one of the blockchain applications shown in Fig. \ref{fig:DAOnet}, is growing rapidly and drawing great attention from both academia and governments around the world. Although DAO has brought many opportunities to blockchain technologies, we are surprised to find that an overview of DAO from the perspective of blockchains is still missing. Against this background, we perform a comprehensive classification of the latest studies combining blockchain and DAO. The taxonomy in this article includes three main categories. In the first category, we discuss the common problems and related studies of blockchain and DAO, including various attacks and security issues of blockchains, as well as counter-trend issues. In the second category, we focus on issues related to DAO governance and discuss existing developments in more depth. In the third category, we evaluate the latest developments of DAO in various fields, such as e-government and the economy, and predict the future directions of the relevant fields. \begin{figure}[t] \centering \includegraphics[width=0.68\linewidth]{./figures/DAOnet.pdf} \caption{The structure of DAO.} \label{fig:DAOnet} \end{figure} Although a small number of researchers are worried about the future of DAO and blockchain because of the \textit{hard fork} caused by \textit{The DAO} hacking incident \cite{dhillon2017dao}, most still have high expectations for this technology.
We also believe that, given the inspiring pace of development of DAO and blockchain, DAO projects can mature, and DAO will show overwhelming advantages over many existing solutions. Through this overview, our most important finding is that future effort can be devoted to improving DAO by better balancing decentralization, security, and scalability. We look forward to a new integration of both the social and organizational structures of DAO in the context of blockchain technologies. Fig. \ref{fig:structure} shows the organization of this article. } \section{Preliminaries} \subsection{Decentralization} {\color{black} In the typical centralized case, the use of a database rests on trust in a third-party organization. For example, people trust the banking system, as the third party, to correctly manage the database and record their transactions. The bank keeps accounts for every transaction, and only the bank has the authority to do so. However, the shortcoming of such a centralized organization is obvious: no one can be sure that the organization managing the database is entirely trustworthy. For example, during the global economic crisis in 2008, the US government could issue money indefinitely, because it is the central institution of monetary management. By contrast, decentralization means that the database does not depend on a specific organization or administrator but is distributed among all peers. Blockchain is essentially a decentralized database. Each full node has a complete copy of the blockchain ledger, so any modification of the database will be noticed by all nodes. The information in the blockchain database is therefore open and transparent. Decentralization solves the trust problem through redundant data validation.
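The tamper-evidence behind this redundant validation can be illustrated with a toy hash-linked ledger; this is a simplified sketch in Python (standard library only), not a real blockchain implementation:

```python
import hashlib
import json

def block_hash(body):
    """Hash a block's contents, including the link to its predecessor."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a block whose hash commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def validate(chain):
    """Every node can re-run this check; a modified block breaks the hash links."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "transactions": block["transactions"]}
        if block["prev"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert validate(chain)
chain[0]["transactions"][0]["amount"] = 500  # tampering is detected
assert not validate(chain)
```

Because every full node holds the whole chain and can re-run \texttt{validate}, no single party can silently rewrite history.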
} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{./figures/PaperStructure.pdf} \caption{The structure of this overview.} \label{fig:structure} \end{figure} \subsection{Bitcoin Blockchain} {\color{black} Originally proposed by Satoshi Nakamoto \cite{casino2019systematic}, Bitcoin is the first application of the blockchain. In Bitcoin, the blockchain serves as a distributed database that stores all transactions originated from one account to another. The blockchain brings many benefits to the Bitcoin ecosystem. For example, everyone has the right to validate accounts, the currency cannot be overissued, and the entire ledger is completely open and transparent. When processing a transaction, Bitcoin adopts digital signature techniques to identify the ownership of a coin. Each Bitcoin account address has a private key and a public key. The private key is kept secret and is used to exercise ownership of the bitcoins in the account, while the public key is known to all nodes, which use it to verify transactions and the balance of the account. When the transfer information of a transaction is published, a digital signature must be embedded by signing the digest of the transfer message with the sender's private key. Other nodes can then use the public key of the sender's account to verify the legality of the transaction. After such verification, each blockchain node acknowledges this transaction. } \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./figures/blockchain.png} \caption{Blockchain-related fields.} \label{fig:blockchain} \end{figure} \subsection{Definition and Background of DAO} {\color{black} DAO was originally introduced by the white paper \cite{buterin2014ethereum}, in which DAO is defined as an organization built on smart contracts that can execute autonomously. Unlike conventional centralized entities, it has no central control or management.
As shown in Fig. \ref{fig:DAOnet}, a DAO achieves decentralized organization by encoding a set of rules in smart contracts, so that how the DAO behaves is predefined. Although in this completely decentralized setting investors typically neither know nor trust each other, blockchain is a good tool to help achieve the goal of a DAO. However, DAOs are not necessarily built on top of an existing blockchain such as Ethereum \cite{wood2014ethereum}. As a getting-started guideline, the survey by Casino \textit{et al.} \cite{casino2019systematic} serves as an introduction to DAO. It presents the concepts, characteristics, frameworks, applications, and future trends of DAO, giving readers a systematic overview that spans multiple domains. } \subsection{Project The DAO} {\color{black} In the development of DAO, the first DAO project carries a historic and dramatic meaning: a particular historical moment is the creation of the first DAO and how it was eventually hacked. \textit{The DAO} project was started on April 30th, 2016. By the end of the funding period, more than 11,000 enthusiastic members had participated and raised 150 million dollars, making \textit{The DAO} the largest crowdfunding project in history. It was an overnight success, but concerns about \textit{The DAO}'s vulnerabilities had been circulating in the developer community. Finally, on June 18th, a hacker began using a ``recursive call vulnerability'' in the software to collect Ether coins from \textit{The DAO}'s token sales. He took advantage of a well-intentioned but poorly implemented feature of the DAO designed to prevent the majority from tyrannizing over dissenting DAO token holders. This ``split'' feature was implemented in a way that left \textit{The DAO} vulnerable to catastrophic reentrancy errors. As a result, the attacker was able to steal about 3.6 million ETH, worth about \$50 million at the time of the attack, driving the price of the coin from more than 20 dollars to less than 13 dollars.
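The ``recursive call'' pattern described above can be sketched in a few lines; the following is a toy Python model (not Solidity, and a deliberate simplification) in which the contract makes the external transfer before clearing the caller's balance, so a malicious recipient can re-enter \texttt{withdraw} and drain the treasury:

```python
class VulnerableDAO:
    """Toy contract that mimics the flawed pattern: the external call to the
    recipient happens before the recipient's balance is zeroed."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.treasury = sum(self.balances.values())

    def withdraw(self, account):
        amount = self.balances[account]
        if amount > 0 and self.treasury >= amount:
            self.treasury -= amount
            account.receive(amount)     # external call first...
            self.balances[account] = 0  # ...balance cleared too late

class Attacker:
    """Malicious recipient whose receive hook re-enters withdraw."""
    def __init__(self):
        self.stolen = 0
        self.dao = None
    def receive(self, amount):
        self.stolen += amount
        self.dao.withdraw(self)  # re-enter while our balance is still non-zero

class Honest:
    def receive(self, amount):
        pass

attacker, honest = Attacker(), Honest()
dao = VulnerableDAO({attacker: 10, honest: 90})
attacker.dao = dao
dao.withdraw(attacker)
print(attacker.stolen)  # prints 100: far more than the 10 deposited
```

The standard fix, known in Solidity as the checks-effects-interactions pattern, is to update the balance before making the external call.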
Losses reached 70 million dollars. \textit{The DAO}'s problems had a negative impact on both the Ethereum network and its cryptocurrency, and the situation for \textit{The DAO} investors was particularly precarious. Eventually, over 90\% of the hashrate indicated support for a fork, and \textit{The DAO}'s funds were returned to investors as if the organization had never existed. The hack of \textit{The DAO} sparked a debate about hard forks and soft forks, and led to the creation of Ethereum Classic \cite{dhillon2017dao}. } \subsection{To Launch a DAO Project} {\color{black} Although the first DAO project failed, it did not prevent the initiation and development of other DAO projects. Launching a DAO project includes the following steps \cite{ProjectSteps}, which are also illustrated in Fig. \ref{fig:a_DAO_project}: \begin{enumerate} \item Developing and deploying smart contracts according to predefined rules. \item Handling token issuance (through an ICO) at the initial financing stage. \item At the end of the financing phase, the DAO starts running. \item Proposals are made and members can vote on them. \end{enumerate} } \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./figures/a_DAO_project.png} \caption{The four-step launch of a DAO project.} \label{fig:a_DAO_project} \end{figure} \subsection{Existing Popular DAOs} The representative existing implementations of DAO are reviewed as follows. \subsubsection{Aragon} {\color{black} Aragon \cite{aragon2020} is a platform that enables any participant to collaborate with others without any third-party organizers. On Aragon, people can create a decentralized digital jurisdiction for their company, community, or organization. Aragon users are also allowed to create diverse communities.
For example, a financial DAO can be generated to incentivize internal usage of Coinbase apps, a council DAO can be launched to expand the utilization of wealth management, or a pocket DAO can be established to eliminate blockchain infrastructure monopolies. \subsubsection{Colony} Colony \cite{colony2020} is designed as an infrastructure that enables organizations to collaborate with each other via decentralized software implemented on top of Ethereum. It treats every participant impartially. Unlike other DAOs, Colony advocates eliminating the requirement that members vote; instead, it focuses on mechanisms that ensure people get their job done. \subsubsection{DAOstack} DAOstack \cite{DAOstack} is an open-source, modular DAO project that leverages the technology and adoption of decentralized governance, enabling people to create DApps (decentralized apps), DAOs, and DAO tools. Although several DAOs have been implemented, we should note that DAO is still in an immature stage; a large number of new DAOs will be developed in the future, and many insights into DAOs will be gained then. } \section{Taxonomy of State-of-the-art Studies}\label{sec:taxonomy} {\color{black} In this part, we perform a thorough taxonomy of existing up-to-date studies of blockchain-related DAO. Through this classification of the existing DAO-related literature, we find that, in addition to introductions of DAO systems, existing studies mainly focus on security issues, applications in various fields, and DAO governance issues. Therefore, we divide them into the categories presented in the following subsections. Among those categories, in the analysis of DAO-related issues, we distinguish two classes. On one hand, a DAO project is based on the blockchain, so the problems of blockchains are also problems of DAO. On the other hand, DAO induces some new problems in governance.
These relevant studies are reviewed under governance problems and solutions. Finally, in the last category, DAO and related areas, we review the current development of DAO in the context of various fields. } \subsection{Existing Problems and Solutions of Blockchains} {\color{black} The hacking incident of the first \textit{The DAO} project triggered introspection about DAO projects \cite{santos2018dao}. The first-generation blockchain technology, Blockchain 1.0, was mainly invented for cryptocurrency purposes. The second generation, Blockchain 2.0, represented by Ethereum, is an open platform that enables a new decentralized computing paradigm; \textit{The DAO} is built exactly on Ethereum. While there are no obvious security vulnerabilities in pure cryptocurrency systems such as Bitcoin \cite{chen2020survey}, the second-generation blockchain applications and semantics inevitably introduce security vulnerabilities \cite{7906988, 8369416}. } {\color{black} Besides, from the perspective of social development, we observe some common problems that need to be solved jointly for both blockchains and DAO. Typical problems include: \begin{itemize} \item the interpretation of fork culture in light of the DAO hacking event; \item whether there are problems in the application development of distributed ledger technology (DLT); \item and whether the main trend of blockchain technology is reasonable. \end{itemize} Inspired by those questions, blockchain hacker attacks \cite{dhillon2017dao}, security issues \cite{li2017survey}, and blockchain counter-trend issues \cite{manski2017building} have drawn a lot of attention.
} \begin{table*}[h] \caption{Existing problems and solutions of blockchains} \centering \footnotesize \begin{tabular}{|p{0.15\textwidth}|p{0.2\textwidth}|p{0.4\textwidth}|}% \hline \textbf{References} &\textbf{Recognition} &\textbf{Methodology}\\ \hline Li \cite{li2017survey} & Security threats and enhancement solutions in blockchain & A systematic study of the security threats to blockchains and the corresponding real attacks, with suggested future directions to stir research efforts in this area.\\ \hline Zhou \cite{zhou2020solutions} & Scalability of blockchain & Existing blockchain scalability solutions are classified according to the blockchain hierarchy.\\ \hline Manski \cite{manski2017building} & Countervailing trend & Blockchain applications could exacerbate inequality.\\ \hline LSE Team \cite{LSE2018} & DLT & The potential of DLT is great, but its feasibility needs to be assessed.\\ \hline \end{tabular} \label{Table:Existing} \end{table*} \subsubsection{\modified{Security threats and enhancement solutions in blockchain}} {\color{black} Blockchain technology has shown promising application prospects since its birth. Blockchain has been used in many fields, ranging from the original cryptocurrencies to various applications based on smart contracts. Along with the booming development of blockchains, their security and privacy should not be ignored. Building on existing studies of blockchain security and privacy issues, Li \textit{et al.} \cite{li2017survey} systematically studied the security threats of blockchains through the analysis of popular blockchain systems. Their major contributions include (a) analyzing the causes and possible consequences of each risk or vulnerability, (b) investigating the corresponding real attacks, and (c) analyzing the exploited vulnerabilities.
From the generation perspective of blockchains, we summarize the common risks of Blockchain 1.0 as follows: (a) the 51\% vulnerability, (b) private key security, (c) criminal activity, (d) double spending, and (e) transaction privacy leakage. For Blockchain 2.0, the common risks include: (a) criminal smart contracts, (b) vulnerabilities in smart contracts, (c) under-optimized smart contracts, and (d) under-priced operations. Furthermore, popular attacks on blockchains include selfish mining attacks, DAO attacks (which are also the focus of this article), BGP hijacking attacks, eclipse attacks, liveness attacks, and balance attacks. Considering those attacks, Li \textit{et al.} \cite{li2017survey} summarized the security-enhancement solutions for blockchain systems as follows: (a) SmartPool, (b) a quantitative framework, (c) Oyente, (d) Hawk, and (e) Town Crier. These solutions point to promising future directions for blockchains. } \subsubsection{\modified{Scalability of blockchain \cite{zhou2020solutions}}} {\color{black} Similar to the CAP theorem in the field of traditional distributed systems, the three important attributes of blockchain systems, namely decentralization, security, and scalability, cannot all be fulfilled at once. For example, Bitcoin faces performance problems of low throughput and high transaction latency. Other cryptocurrencies also share these flaws, leading researchers to pay more attention to the scalability of blockchains. To give a clear picture of blockchain scalability solutions, Zhou \textit{et al.} \cite{zhou2020solutions} surveyed the related state-of-the-art studies by categorizing them according to their position in the blockchain hierarchy. The hierarchical structure mainly consists of two layers. The first-layer solutions are executed on the chain, focusing on the blockchain consensus, networks, and data structures.
Examples include increasing the block size of the Bitcoin blockchain, optimizing the storage scheme, and adopting sharding technology. Various improved consensus algorithms, which increase transaction throughput and decrease transaction latency, are also reviewed. The second-layer solutions seek opportunities to extend the blockchain through off-chain channels and side-chain and cross-chain protocols. Basically, all these solutions have both advantages and limitations as they strive to achieve decentralization, security, and scalability at the same time. The insightful classification and analysis of current solutions can inspire further research. } \subsubsection{\modified{Countervailing trend}} {\color{black} Perhaps surprisingly, blockchain technology also exhibits a countervailing trend. Blockchains, like other technologies, can pursue different future trajectories, depending on their implementation details. For example, blockchain technology can help build a technology community in which advanced exchange, communication, and decision-making technologies are used to aggregate, allocate, and manage capital at multiple levels. However, a series of countervailing trends indicates deepening inequality and democratic decline: as technology is stratified, large numbers of workers are rendered disposable, regulation is weakened, and corporate personnel are technologized. While the mainstream trends in blockchain technology are widely believed to be distribution, decentralization, and democratization, the most powerful blockchain applications are likely to exacerbate inequality. } \subsubsection{\modified{The future of the DLT}} {\color{black} In fact, blockchain is essentially an application of distributed ledger technology (DLT). The LSE Team \cite{LSE2018} describes the problems to be solved before DLT is put to use. The potential of DLT is too great to ignore.
Its commitment to decentralization, data security, and privacy can help improve public services and make them more affordable by reducing the role of government as an intermediary. The decentralized and transparent properties of DLT lead to greater collaboration and integration with the private and social sectors by enhancing the government's own transparency, accountability, and inclusiveness. However, there is no general rule book that specifies where DLT should be deployed. If governments desire to use DLT for governance, they need to assess its feasibility and implement DLT-based applications only when the benefits of speed, security, and privacy outweigh the social costs. Governments must also ensure that the implemented DLT applications are transparent about their underlying algorithms and truly represent public value. All these visions will have to wait for DLT technologies to evolve further. } \subsection{Governance Problems and Solutions} {\color{black} The DAO, as an application of blockchain to governance, raises new governance issues not found in conventional blockchains. Professional knowledge and experiments \cite{tarasiewiczforking} show that there are still differences between code-based governance and blockchain-based governance. Regarding the use of DAOs in certain areas, DuPont \cite{dupont2017experiments} argues that a DAO may simply be a risky investment masquerading as a new way of doing things. Furthermore, once governance is applied in the area of blockchains, legal problems are inevitable. In particular, the characteristics of blockchain governance have led to tensions between strict ``on-chain" governance systems and possible ``off-chain" governance applications \cite{reijers2018now}. Before \textit{The DAO} attack, some lawyers expressed concerns about DAO programs, saying that DAOs touch on legal issues related to securities in several countries \cite{blemus2017law}.
} \begin{table*}[h] \caption{Governance problems and solutions} \centering \footnotesize \begin{tabular}{|p{0.1\textwidth}|p{0.1\textwidth}|p{0.13\textwidth}|p{0.44\textwidth}|}% \hline \textbf{Categories} &\textbf{References} &\textbf{Recognition} &\textbf{Methodology}\\ \hline Fork & Tarasiewicz \cite{tarasiewiczforking} & fork and culture & The author interprets fork culture based on \textit{The DAO} hacking event.\\ \hline {Governance in company} & Kaal \cite{kaal2019blockchain} & corporate governance & Problems with DAOs used in corporate governance.\\ \cline{2-4} { } & Lafarre \cite{lafarre2018blockchain} & DAO in AGM & Problems with DAOs used in corporate governance (especially the AGM).\\ \hline {Economy} & Beck \cite{beck2018governance} & Blockchain economy & DAO could lead to blockchain economy.\\ \cline{2-4} { } & Massacci \cite{massacci2017seconomics} & security and economics vulnerabilities & The failure of a security property, e.g. anonymity, can destroy a DAO because economic attacks can piggyback on security attacks.\\ \hline {Law} & Blemus \cite{blemus2017law} & blockchain laws & This paper describes blockchain regulations discussed in the US, EU, and major economic countries.\\ \cline{2-4} { } & Reijers \cite{reijers2018now} & DAO in legal philosophy & This paper seeks to situate the blockchain discussion within the field of legal philosophy, examining how legal theory can apply in the context of blockchain governance.\\ \cline{2-4} { } & Shakow \cite{shakow2018tao} & DAO tax issues & This article explains how a decentralized autonomous organization operates and interacts with the U.S.
tax system.\\ \hline \end{tabular} \label{Table:governance} \end{table*} \subsubsection{\modified{Fork and culture for DAO}} {\color{black} Interpreting fork culture in light of \textit{The DAO} hacking event, Tarasiewicz \textit{et al.} \cite{tarasiewiczforking} argue that a strong emphasis must be placed on interaction and communication between institutions and informal coding communities in order to further research and develop new blends of social and organizational structures. } \subsubsection{\modified{Problems with DAOs used in corporate governance}} {\color{black} Kaal \textit{et al.} \cite{kaal2019blockchain} argued that DAO technology could help improve the agency relationship, but also noted that the potential of blockchain technology as an emerging technology for governance design rests on idealized models and theoretical evaluations that are limited by the real world. First of all, blockchain is a foundational technology, and its transformative impact will take decades rather than years to establish and reform the system. In the corporate governance environment, the application of blockchain technology may develop within the existing centralized structure or in a decentralized environment. The former requires consensus on how and when to implement such technologies for governance use cases. For the latter, only when a truly decentralized public blockchain emerges, with scalability and full security, can agency relationships be genuinely removed and overhauled. Given the complexity of the agency relationship, human behavior within it needs a backstop, namely continuous human support of the code. Without decentralized human support for code, the immutability of the blockchain and its cryptographic security systems may not create true transactional guarantees and trust between principals and agents to maintain the integrity of their contractual relationships.
The blockchain-based corporate governance solution in a DAO requires an incremental blockchain governance protocol; thus, the socially optimal hard-fork rule may not be applicable. } \subsubsection{\modified{Problems with DAOs used in AGM}} {\color{black} While calling for the use of blockchain technology to modernize the Annual General Meeting (AGM), Lafarre \textit{et al.} \cite{lafarre2018blockchain} also point out that a blockchain-based AGM will raise important legal issues. Should the classic physical AGM be abolished, or should it coexist alongside a blockchain-based AGM? If it is desirable to organize a decentralized AGM only on the blockchain, how much of the forum function can this technology provide? Record dates and notice periods must be reconsidered, as must the role of intermediaries in (cross-border) holding chains. More importantly, are shareholders and companies ready to participate in non-physical meetings? Recent evidence shows that even most institutional investors do not favor full virtualization. } \subsubsection{\modified{Blockchain economy's rethink}} {\color{black} Following the development of blockchain, the blockchain economy is on the rise and also needs new governance methods. Beck \textit{et al.} \cite{beck2018governance} examined a DAO case, Swarm City, to explore the decision rights, accountability, and incentive mechanisms related to governance. Decision rights involve controlling certain assets and monitoring decisions; accountability assigns responsibility for those decisions; and incentives motivate agents to take action. The authors used a novel IT governance framework to show that the emergence of the blockchain economy requires a rethink of governance. Compared with the digital economy, the location of decision-making power in the blockchain economy will be more decentralized.
The accountability system will in principle be established more and more technically rather than institutionally, and the alignment of incentives will become more and more important. Therefore, along the three governance dimensions, the authors proposed a governance research agenda for the blockchain economy in each dimension. For example, on the dimension of decision rights, the research agenda includes: (a) how are decisions made in the blockchain economy? (b) how are decision management rights and decision control rights allocated? (c) how are decision-making differences resolved in the blockchain economy? and (d) what is the role of ownership in the blockchain economy? The work \cite{beck2018governance} has identified an important approach to governance research in the blockchain economy and provides a rich foundation for further theoretical work. } \subsubsection{\modified{Security and economics vulnerabilities}} {\color{black} Traditionally, security and economics functionalities in IT financial services and protocols (FinTech) have been separate goals. Only security vulnerabilities in security-critical systems that could be exploited by terrorists, criminals, or malicious government actors were treated as security problems. Massacci \textit{et al.} \cite{massacci2017seconomics} believe that security and economics are jointly crucial issues for DAOs. \textit{The DAO}'s hack is essentially a combination of a security vulnerability (recursive calls constantly extract coins from \textit{The DAO}) and an economic attack (a user is only authorized to withdraw funds the first time). In a futures-exchange DAO, on one hand, a failure of integrity could be dramatic for the agreement; on the other hand, if anonymity fails, the DAO may face economic attacks combining anonymity failure and price discrimination. The failure of security attributes, such as anonymity, can destroy DAOs because economic attacks can be chained to security attacks.
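The recursive-withdrawal flaw behind \textit{The DAO} hack can be sketched with a short, self-contained Python simulation (the class and variable names here are our own illustrative inventions, not real contract code): the vault sends funds \emph{before} zeroing the caller's recorded balance, so a malicious receive callback re-enters the withdrawal routine and drains the whole pot rather than only the caller's own deposit.

```python
# Hypothetical simulation of a reentrancy flaw (not Solidity, not The DAO's
# actual code): funds are sent before the balance is updated, so a callback
# can withdraw repeatedly.

class VulnerableVault:
    def __init__(self, balances):
        self.balances = dict(balances)      # per-account credited balances
        self.pot = sum(balances.values())   # total funds held by the vault

    def withdraw(self, account, receive_callback):
        amount = self.balances.get(account, 0)
        if amount > 0 and self.pot >= amount:
            # Bug: funds are sent (and the callee's code runs) BEFORE the
            # balance is zeroed -- the callback can re-enter withdraw().
            self.pot -= amount
            receive_callback(amount)
            self.balances[account] = 0      # too late: already re-entered

class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0

    def receive(self, amount):
        self.stolen += amount
        # Re-enter while our recorded balance is still nonzero.
        if self.vault.pot >= self.vault.balances["attacker"]:
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault({"attacker": 10, "honest": 90})
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)  # prints 100: the whole pot, not just the 10 deposited
```

The "economic" half of the attack is visible in the same sketch: each individual transfer looks like a legitimate first-time withdrawal, which is why the damage is hard to undo by purely technical means.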
The danger is not the vulnerabilities themselves, but the combination of an attack on the software and an attack on the economy. Economic vulnerabilities presumably cannot be repaired, since the economic damage they cause is unlikely to be reversed by purely technical means such as forks. Thus, for DAOs, combined security-and-economic vulnerabilities are indeed a new ``beast" that cannot be ignored. } \subsubsection{\modified{Infancy of blockchain laws}} {\color{black} Blockchain has become a major topic for public policymakers around the world. As this disruptive and decentralized technology has become a key business issue for start-ups and market participants, central banks and financial regulators, particularly in the US and EU, have shifted from initially intense hostility to a more cautious and market-friendly stance. Blemus \textit{et al.} \cite{blemus2017law} collected and compared regulatory trends in the United States, the European Union, and other key countries across various blockchain-supported applications and issues, including Bitcoin/virtual currencies/cryptocurrencies, smart contracts, decentralized autonomous organizations, initial coin offerings (ICOs), and others. The study mainly covers three regulatory topics: (a) supervision of virtual currencies, (b) supervision of ICOs (and cryptocurrencies), and (c) the legal validity of blockchain technology and smart contracts. The conclusion shows that distributed ledger technology regulation is in its infancy. As of 2017, when the work \cite{blemus2017law} was published, uncertainty remained about the legal and economic qualification of virtual currencies, tokens, ICOs, smart contracts, and distributed ledger technologies. Predictably, over time, the need for extensive research into blockchain technology will become less controversial.
} \subsubsection{\modified{Discussion of DAO in legal philosophy}} {\color{black} Recently, the blockchain developer community has begun to turn its attention to governance issues. The governance of blockchain-based systems typically consists of various rules and procedures that can be implemented ``on-chain" and ``off-chain". On-chain governance refers to the rules and decision processes that are directly encoded into the underlying infrastructure of blockchain-based systems. Off-chain governance includes all other rules and decision-making processes that may affect the operation and future development of blockchain-based systems. The characteristics of blockchain governance raise the issue of possible tensions between a strict ``on-chain" governance system and possible ``off-chain" governance applications. Through their investigation, Reijers \textit{et al.} \cite{reijers2018now} find a striking similarity between on-chain governance and \modified{Kelsen's positivist concept of legal order \cite{golding1961kelsen}}. Blockchain-based systems become vulnerable when private interest groups use off-chain mechanisms to usurp on-chain governance systems. \textit{The DAO} attack shows that while the ``rule of code" can be formally followed within a given on-chain order, in exceptional states sovereignty is asserted through off-chain mechanisms. As reflected in \modified{Kelsen's argument}, capture by private interests is a weakness of positivist legal systems, and it can be assumed that this is an inherent weakness of on-chain governance in existing blockchain-based systems. Given these characteristics, future research could consider possible steps the blockchain community could take to address exceptional states in a manner consistent with their respective ideologies.
} \subsubsection{\modified{DAO tax issues}} {\color{black} After \textit{The DAO} hack, the Ethereum community voted to create a ``hard fork" of the Ethereum chain, resulting in two Ethereum chains going forward. To add insult to injury, the Securities and Exchange Commission (SEC) used \textit{The DAO} to state for the first time its view that some blockchain-related offerings would be considered securities subject to SEC regulation. The possibility of using smart contracts to allow entities to operate entirely autonomously on blockchain platforms seems attractive, and it is not hard to see that these DAO structures raise significant tax issues. However, little thought has been given to how such an entity would fit into the tax system. Thus, Shakow \textit{et al.} \cite{shakow2018tao} explain how a decentralized autonomous organization operates and interacts with the U.S. tax system by describing how a DAO works, and raise many of the tax issues posed by these structures. As it stands, there is no evidence that DAOs have considered themselves subject to the various requirements of the tax code. For those who want to comply, the easy solution is to use a site like Overstock.com. If they do not, they may be penalized by the IRS. However, without international cooperation and innovation, it is difficult for tax administrators to determine who should tax a DAO's income. } \subsection{DAO Technologies and the Related Areas} {\color{black} It is nearly undeniable that the DAO shows a clear trend of adoption across diverse sectors such as supply chains, business, healthcare, IoT, privacy, and data management \cite{beck2018governance, zichichi2019likestarter, dai2017toward, jeong2018blockchain}. The emerging DAO is on the rise. The work of Beck \textit{et al.} \cite{beck2018governance} and other papers have discussed DAOs in the fields of the blockchain economy, crowdfunding, accounting, and even electric cars and charging-station billing systems.
In particular, more than one study \cite{beck2018governance, zichichi2019likestarter} has found that, in addition to e-government, blockchain in the financial industry is also very promising. } \begin{table*}[h] \caption{DAO and Related Areas} \centering \footnotesize \begin{tabular}{|p{0.1\textwidth}|p{0.1\textwidth}|p{0.13\textwidth}|p{0.44\textwidth}|}% \hline \textbf{Categories} &\textbf{References} &\textbf{Recognition} &\textbf{Methodology}\\ \hline Governance in company & Kaal \cite{kaal2019blockchain} & corporate governance & Blockchain provides an unprecedented solution to the agency problem in corporate governance.\\ \cline{2-4} { } & Lafarre \cite{lafarre2018blockchain} & DAO in AGM & Using blockchain technology to realize the modernization of the AGM.\\ \hline {eGov DAO} & Diallo \cite{diallo2018egov} & DAO in e-government system & They provide a concrete use case to demonstrate the usage of DAO e-government and evaluate its effectiveness.\\ \cline{2-4} { } & Jun \cite{jun2018blockchain} & DAO replacing existing social apparatuses and bureaucracy & Blockchain creating ``absolute law" makes it possible to implement social technology that can replace existing social apparatuses including bureaucracy.\\ \hline {Economy} & Beck \cite{beck2018governance} & Blockchain economy & DAO could lead to blockchain economy.\\ \cline{2-4} { } & Zichichi \cite{zichichi2019likestarter} & crowdfunding and DAO, LikeStarter & LikeStarter is structured as a DAO that fosters crowdfunding and recognizes the active role of donors, enabling them to support artists or projects while making profits.\\ \hline {Accounting} & Dai \cite{dai2017toward} & blockchain in accounting profession & Blockchain has the potential to transform current auditing practices, resulting in a more precise and timely automatic assurance system.\\ \cline{2-4} { } & Karajovic \cite{karajovic2019thinking} & blockchain in accounting profession & An analysis of the implications of blockchain technology
in the accounting profession and its broader industry. Criticisms are raised to address concerns regarding blockchain's widespread use. \\ \cline{2-4} { } & Jeong \cite{jeong2018blockchain} & billing system & This paper proposes a blockchain-based billing system: the EV and the charging station store the billing information in the blockchain to prevent modification.\\ \hline {Voting} & Zhang \cite{zhang2018privacy} & DAO in voting & This paper proposes a local voting mechanism based on blockchain.\\ \hline \end{tabular} \label{Table:DAO related areas} \end{table*} \subsubsection{Blockchain solutions for agency problems in corporate governance} {\color{black} Agency theory is the dominant theory of governance conflicts among shareholders, company managers, and creditors, in which one party entrusts work to another. The core agency conflict caused by the separation of ownership and control cannot be fully resolved under the existing theoretical and legal framework: attempts to monitor agents are inevitably costly, and transaction costs are high. Kaal \textit{et al.} \cite{kaal2019blockchain} point out that blockchain provides an unprecedented solution to the agency problem in corporate governance. Traditionally, principals have controlled the oversight of their agents; this oversight can now be delegated to decentralized computer networks, with the following advantages: \begin{itemize} \item Blockchain technology provides a formal guarantee for principals and agents involved in solving agency problems in corporate governance. \item Blockchain technology can facilitate the elimination of agents as intermediaries in corporate governance through code, peer-to-peer connectivity, groups, and collaboration. \item DAO token holders are not affected by the existing corporate hierarchy and its restrictive effect. DAO token holders focus on adding value, which benefits all constituents.
\item The ``work value focus of workflow" in the DAO structure has the potential to reform the agency relationship. \end{itemize} DAO technology greatly contributes to improving the efficiency of agency relationships and reducing agency costs by an order of magnitude. } \subsubsection{Blockchain technology for corporate governance and shareholder activism} {\color{black} } The classic Annual General Meeting (AGM) has three functions for shareholders: information, forum, and decision-making. The AGM also has important theoretical significance for collective supervision by shareholders. However, the AGM is often regarded as a dull and obligatory annual ceremony, and all three functions have in fact eroded. For example, (almost) all information is often disclosed well before the AGM. Besides costly shareholder voting, the decision-making function of the AGM is static and annual in character. The AGM also has procedural defects: especially when shareholders vote remotely, there is great uncertainty about whether the information between shareholders and the company (including shareholder voting records) is correctly communicated. Lafarre \textit{et al.} \cite{lafarre2018blockchain} therefore strongly call for the use of blockchain technology to modernize the AGM. Blockchain technology can significantly reduce the cost of shareholder voting and corporate organization. It can also improve the speed of decision-making and promote the rapid and efficient participation of shareholders. In addition, the main problems with current intermediary chains and remote voting systems relate to transparency, validation, and identification, which are directly addressed by the advantages of blockchain technology.
The latest prototype of a blockchain-based AGM discussed in \cite{lafarre2018blockchain} shows that blockchain technology is certainly viable as a tool for shareholder participation and has the potential to make the AGM a fast, lean corporate institution. \subsubsection{eGov-DAO: A better government using blockchain based decentralized autonomous organization} E-government systems have greatly improved the efficiency and transparency of governments' daily operations. However, most existing e-government services are provided centrally and rely heavily on individual control. Highly centralized IT infrastructures are more vulnerable to external attacks, and internal malicious users can easily compromise data integrity. In addition, relying on individuals to monitor some workflows makes the system error-prone and leaves room for corruption. In fact, both government and business services have been hacked multiple times, from ransomware to denial-of-service attacks. To address these challenges, Diallo \textit{et al.} \cite{diallo2018egov} suggest using blockchain technology and a decentralized autonomous organization to improve e-government systems. They describe the high-level architecture of a government DAO and give the detailed design of a DAO e-government system. This is the first system to allow real-time monitoring and analysis of e-government services. The system retains all audit records, thereby limiting litigation between parties, increasing the speed of contract allocation and enforcement, and providing transparency, accountability, immutability, and better management of the national resources of the service. The evaluation of this system indicates that the government DAO system faces two main threats: data integrity and rule integrity. It also shows that the delay introduced by the system is acceptable for modern government work.
We can conclude that, by implementing transparent and secure e-government systems at low cost, the eGov-DAO solution can help governments save substantial resources, manage government business more effectively, and reduce the risk of awarding contracts to companies that lack the capacity to deliver. \subsubsection{Blockchain government: a next form of infrastructure for the twenty-first century} Today, there are hundreds of blockchain projects around the world aiming to transform government systems. There are signs that blockchain is a technology directly relevant to social organization. However, according to Jun \textit{et al.} \cite{jun2018blockchain}, there may be an epistemological rejection of the idea of blockchain-based automated systems replacing familiar public institutions such as the bureaucracy. Society must accept that such a shift is inevitable, and open discussion is needed to reduce the fear and side effects of introducing revolutionary new technologies. Applying Lawrence Lessig's ``code is law" proposition, the authors of \cite{jun2018blockchain} suggest that five principles should be followed when replacing bureaucracy with a blockchain system: (a) introduce blockchain regulations; (b) transparently disclose data and source code; (c) implement independent executive management; (d) establish a governance system based on direct democracy; and (e) build a distributed autonomous government. The blockchain feature proposed by Jun \textit{et al.}, which creates inviolable ``absolute laws", makes it possible to implement social technologies that can replace existing social institutions, including the bureaucracy. \subsubsection{Governance in the blockchain economy: A framework and research agenda} Blockchain and the smart contracts it supports may give birth to a new economic system, which Beck \textit{et al.} \cite{beck2018governance} call the blockchain economy.
The blockchain economy goes beyond the digital economy because agreed transactions are executed autonomously, following rules defined in smart contracts, without the need for agency intervention or third-party approval. Digital assets or tokens can be embedded into digital representations of physical assets to enforce autonomous contract performance. Smart contracts, for example, could automatically lock a car remotely if the owner fails to fulfil a rental obligation. The blockchain ensures that contracts are honored without being broken. It is in the DAO, a new form of organization, that the blockchain economy will manifest itself. \subsubsection{LikeStarter: a Smart-contract based Social DAO for Crowdfunding} Social media platforms are recognized as important media for the global transmission and dissemination of information, and the combination of social interaction and crowdfunding represents a powerful symbiotic relationship. Meanwhile, blockchain technology has revolutionized the way we think about the Internet. Zichichi \textit{et al.} \cite{zichichi2019likestarter} therefore introduced LikeStarter, a social network where users can raise money for other users through simple ``likes" on the Ethereum blockchain. LikeStarter is built on the Ethereum blockchain, structured as a DAO that promotes crowdfunding without any central agency intervention, and uses smart contracts to control and manage the funds. Zichichi \textit{et al.} \cite{zichichi2019likestarter} used a case study to show that LikeStarter successfully makes it easy for people to obtain funding and reach as many people as possible. \subsubsection{Toward blockchain-based accounting and assurance} Since 2009, blockchain has become a potentially transformative information technology that promises to be as revolutionary as the Internet. Accounting and assurance could be among the industries where blockchain brings huge benefits and fundamentally changes the current model.
However, the potential benefits and challenges that blockchain could bring to accounting and assurance remain to be explored. For this reason, Dai \textit{et al.} \cite{dai2017toward} proposed a blockchain-enabled, real-time, verifiable and transparent accounting ecosystem, drawing on ideas from multiple disciplines and from the accounting industry. This method will provide real-time, verifiable information disclosure and step-by-step automation. As a result, blockchain can be used as a tool to verify any information related to auditing. However, it is worth pondering how to adapt the existing blockchain mechanism to the field of accounting and auditing. The insights of Dai \textit{et al.} \cite{dai2017toward} will help integrate blockchain into existing business processes and facilitate the transformation of the current audit model to the next generation. \subsubsection{Thinking outside the block: Projected phases of blockchain integration in the accounting industry} The rapid growth of blockchain has sparked curiosity across the industry, leading to talks about setting up a blockchain consortium in accounting. Accountancy firms such as PwC, Deloitte, EY and KPMG have pledged to integrate blockchain into their financial services. Innovative products such as Vulcan (for managing digital assets), Rubix (for improving supply chain management), editable blockchains and blockchain as a service are among them. Karajovic \textit{et al.} \cite{karajovic2019thinking} conducted an in-depth and detailed analysis of the application of blockchain technology in the accounting profession. As blockchain becomes more mainstream, the technology can be used to simplify many redundant and vulnerable accounting practices. While the initial cost of developing and integrating blockchain infrastructure can be high, this can be offset by the long-term cost savings and improved efficiency it brings to the enterprise.
Karajovic \textit{et al.}'s philosophical analysis of the technology's application raises many questions about the uncertain relationship between accounting and blockchain. While the technology has the potential to reshape entire capital markets, the social and political barriers to blockchain proliferation need to be looked at critically. But one thing is certain: accounting is just one block in the chain that is being dramatically redefined by this disruptive technology. \subsubsection{Blockchain based billing system for electric vehicle and charging station} {\color{black} DAO can also be used in charging systems. The charge measured by an electric vehicle may differ from the amount claimed by the charging station, because electric vehicles and charging stations measure the charge amount with their own measurement equipment. If the electric vehicle or the charging station provides faulty information, the billing may be invalid. In addition, billing information can be manipulated. To prevent these problems, Jeong \textit{et al.} \cite{jeong2018blockchain} proposed a blockchain-based billing system. After mutual authentication, electric vehicles and charging stations both store billing information in the blockchain. A DAO is a system where all nodes hold the same ledger, thus the ledger data cannot be tampered with. Jeong \textit{et al.} have shown that the system prevents users from modifying their records after electric vehicles have been charged. } \subsubsection{A privacy-preserving voting protocol on blockchains} Voting is a universal phenomenon and to some extent is part of every society. As the technology matures and more voting is expected in the future, there is an urgent need for a local voting mechanism built directly on the blockchain network to decentralize and dis-intermediate the network.
Zhang \textit{et al.} \cite{zhang2018privacy} argue that the need for this voting mechanism is not limited to public blockchain networks, but also applies to consortium (permissioned) blockchain networks. In this regard, the authors categorized and summarized existing voting systems, and proposed a local voting mechanism based on blockchain to facilitate the decision-making of nodes in the blockchain network. The core ideas are (a) distributed voting, (b) distributed tally, and (c) cryptography-based verification. The implementation on the Hyperledger framework shows that the protocol is feasible for small and medium-sized voting issues. The protocol protects the privacy of voters and allows for the detection and correction of cheating without any trusted parties. \section{Open Issues}\label{sec:openissue} {\color{black} According to Diallo \textit{et al.} \cite{diallo2018egov}, we can see that although The DAO project was abandoned, the future of Ethereum is bright. In the future, research on DAO can start from the following aspects; to provide a quick guide, we pose several questions here. Based on the system design of the blockchain itself, the first question is how to optimize and handle the performance trilemma, i.e., decentralization, security and scalability, in a balanced way in the context of DAO. Secondly, a better blockchain system and evaluation tools for blockchain security-enhancement solutions are anticipated for DAO. Furthermore, the feasibility of those blockchain systems and evaluation tools needs to be assessed, because such governance is possible only if the benefits of efficiency, security and privacy of DAO exceed the social costs. Thus, another question is how to reduce the social costs such that the feasibility of DAO can be improved. } {\color{black} From the operation perspective of DAO, we have the following concerns. \begin{itemize} \item Relying solely on the social-optimal hard-fork rule is not sufficient.
The corporate governance solution based on DAO needs to propose new protocols for blockchain governance. \item Leveraging DAO, it is necessary to create trust in real transactions, thus maintaining the integrity of its contractual relationships. \item While ``code rules'' can be formally followed on a particular chain, in exceptional states sovereignty is asserted through off-chain mechanisms. This requires studying the possible actions taken by the blockchain community to address anomalies in ways consistent with their respective ideologies. \item Legal issues related to DAOs need to be taken into account, such as how an entity should comply with the tax system. \item Researchers need to explore the ultimate impact of DAO on human beings. The most powerful blockchain applications are likely to exacerbate inequality, and both researchers and engineers should think about how to avoid and solve this issue. \end{itemize} All of the mentioned concerns require a strong emphasis on interaction and communication between institutions and informal coding communities, which will help further research and the development of new integrations of social and organizational structures. } {\color{black} Currently, people still maintain high expectations of DAO. Governments around the world are betting that it will change the way they govern \cite{8368494}. For example, Dubai \cite{Dubai} set a goal to ensure that by 2020 all government documents are stored on the blockchain. Other governments are studying its potential applicability in central banking, electronic voting, identity management and registry management. Blockchain transforms government operations and inspires new service delivery models for governments \cite{alketbi2020novel}. DAO is believed to have many advantages over existing solutions. With the emergence of Ethereum Classic \cite{8345547} and the incredible pace of new development, the platform is maturing \cite{jones2019blockchain}.
In the infancy stage of DAO, smart contracts are bound to contain vulnerabilities like the one exploited in the \textit{The DAO} attack, which will lead to better code-checking mechanisms and secure coding practices to avoid such pitfalls. In the future, not only may it be possible to establish full-fledged DAOs in multiple fields, it is also very likely that a unified single-currency platform will be established leveraging the technologies of DAO. } \section{Conclusion} {\color{black} DAO is viewed as a very promising paradigm for future decentralized organizational solutions. This article reviews the most recent research activities in both academic and engineering scenarios, covering governance problems and solutions, typical DAO technologies, and related areas. We hope that this overview can help researchers and engineers to identify the state-of-the-art studies of blockchain-based DAO. } \bibliographystyle{IEEEtran}
\section{Introduction} Few-electron gate-defined quantum dots\cite{hanson} are exploited for single-spin manipulation that allows for the realization of single-qubit quantum gates.\cite{loss} While the desired spin rotation involves a single spin as a carrier of quantum information, multielectron systems provide a feasible environment for readout of the controlled spin. Strong spin-orbit (SO) coupling present in InSb and InAs nanowires \cite{fasth, pfund} allows for electrical spin rotations\cite{nowack} that are performed by electric dipole spin resonance (EDSR),\cite{golovach} which excludes the need for the introduction of local magnetic field gradients\cite{laird,koppens} or the usage of hyperfine interaction.\cite{osika} The readout of the spin is realized via spin-to-charge conversion that relies on the Pauli spin blockade.\cite{ono} The single-electron current $(1,0)\rightarrow(1,1)\rightarrow(2,0)\rightarrow(1,0)$ [the numbers in the brackets correspond to the number of electrons in a particular dot] is blocked at the transition from the $(1,1)$ triplet to the $(2,0)$ singlet. Rotation of one of the spins of the electrons constituting the $(1,1)$ triplet unblocks the current, which serves as a proof of the coherent spin control. On the other hand, strong SO coupling leads to unavoidable spin relaxation, which results in a spontaneous lifting of the Pauli blockade when one of the $(1,1)$ triplets is close in energy to the $(2,0)$ singlet.\cite{nowak2014} EDSR lifting of the current blockade is observed already for two electrons bound in the double dot, which indeed is the case for many of the experiments.\cite{nadj-perge,nadj2012,schroer,frolov} However, some of the experimentally studied devices contain an even number $N$ of electrons greater than two.\cite{berg,petersson, stehlik} In this case the system is biased such that the Pauli blockade is between $(N-1,1)$ and $(N,0)$ states.
It is assumed that such a configuration is equivalent to the two-electron system.\cite{rossella} This approximation resembles the well-established concept in chemistry that the valence electrons are responsible for the creation of bonds while the rest, in the deep levels, can be treated as a frozen core.\cite{fc} This assumption seems questionable for quantum dots, in which the single-electron shells are separated by much smaller energies than for the Coulomb potential; nevertheless, this problem has not yet been addressed by a theoretical study. The present work addresses this issue. We find that for a system with $N>2$ all but two electrons form closed singlet shells. This is in accordance with predictions of the Hubbard model,\cite{tasaki} which appear as a consequence of the Lieb-Mattis theorem\cite{lieb} stating that the lowest-energy states possess the lowest spin ($S=0$). As a consequence, in general, the low-energy spectra of multielectron double quantum dots in the $(N-1,1)$ configuration resemble the spectra of quantum dots in the $(1,1)$ configuration and the states have similar total spins. We find that the $(N-1,1)$ spectra can be well recreated by a configuration interaction calculation in which one excludes the single-electron orbitals that form the singlet shells. This is analogous to the frozen-core approximation, even though quantum dots lack the deep core shells of atoms. The main finding of the work is that although a general resemblance of the $N>2$ and $N=2$ spectra is found, the occupation of excited single-electron orbitals in the $N>2$ case leads to lifting of the degeneracy of spin-zero states. This in turn is translated to different effective $g$-factors in the dots.
Such differences have been observed in recent EDSR experiments on nanowire quantum dots\cite{nadj-perge,nadj2012,schroer,frolov,berg,petersson, stehlik} and have been related to differences in the confinement, as predicted by the study on self-organized quantum dots.\cite{pryor} Here we strictly connect the effective $g$-factors with the number of electrons in the system and the lengths of the dots. We find that unequal effective $g$-factors for $N=2$ appear only for an asymmetric system, but for $N>2$ they are observed already for dots of the same length. \section{Theory} In the present work we follow the common approach\cite{oned} that treats the nanowire quantum dots as quasi-one-dimensional. The N-electron system is described by the Hamiltonian, \begin{equation} H=\sum_{i=1}^N h_i + \sum_{i=1,j=i+1}^N \frac{e^2\sqrt{\pi/2}}{4\pi\varepsilon_0\varepsilon \ell}\mathrm{erfcx}\left[\frac{|x_i-x_j|}{\sqrt{2}\ell}\right]. \label{hne} \end{equation} The form of the Coulomb interaction term results from the assumption that the electrons are localized in the ground state of the lateral quantization along the nanowire cross section, with a wave function of Gaussian shape, $\psi(y,z)=(\pi^{1/2}\ell)^{-1}\exp\{-(y^2+z^2)/(2\ell^2)\}$. The integration of the three-dimensional Coulomb interaction term $H_C= \sum_{i=1,j=i+1}^N \frac{e^2}{4 \pi \varepsilon \varepsilon_0} \frac{1}{|r_i-r_j|}$ leads\cite{bednarek} to the operator including $\mathrm{erfcx}(x)=\exp(x^2)\mathrm{erfc}(x)$, the exponentially scaled complementary error function.\cite{book} To obtain the N-electron eigenstates we diagonalize the Hamiltonian (\ref{hne}) in a basis of Slater determinants consisting of single-electron spin-orbitals.
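The exponentially scaled complementary error function entering Hamiltonian (\ref{hne}) is available directly as \texttt{scipy.special.erfcx}, so the effective interaction is easy to evaluate numerically. The short sketch below (our own script; units of meV and nm, with $\varepsilon=16.5$ and $\ell=20$ nm taken from the parameter values quoted in this section) checks the two limits of the kernel: a finite contact value $\sqrt{\pi/2}\,e^2/(4\pi\varepsilon_0\varepsilon\ell)$ at $r=0$, and the bare screened Coulomb tail $e^2/(4\pi\varepsilon_0\varepsilon r)$ for $r\gg\ell$.

```python
import numpy as np
from scipy.special import erfcx

# e^2/(4 pi eps0) = 1439.96 meV nm, screened by epsilon = 16.5 (InSb).
COUL = 1439.96 / 16.5
ELL = 20.0  # lateral confinement length ell (nm)

def v_eff(r):
    """Effective 1D interaction obtained by averaging the 3D Coulomb term
    over the Gaussian lateral wave function of width ell."""
    return COUL * np.sqrt(np.pi / 2.0) / ELL * erfcx(r / (np.sqrt(2.0) * ELL))

print(v_eff(0.0))                      # finite at contact, ~5.47 meV
print(v_eff(500.0) / (COUL / 500.0))   # ~1: bare 1/r tail recovered far from contact
```

The regularization at $r=0$ is what makes the strictly one-dimensional Coulomb problem well defined.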
\begin{equation} \Psi(\nu_1,\nu_2,...,\nu_N)=\sum_{i=1}^M c_i \emph{A} \{\psi_{i_1}(\nu_1)\psi_{i_2}(\nu_2)...\psi_{i_N}(\nu_N)\}, \end{equation} where $\nu_i=(x_i,\sigma^i)$ corresponds to the orbital and spin coordinates, $\emph{A}$ is the antisymmetrization operator and the coefficients $c_i$ are obtained by the diagonalization. We use $M=20$ single-electron spin-orbitals, which provides accuracy better than $0.1\;\mu$eV. The single-electron orbitals $\psi(\nu)$ are described by the Hamiltonian, \begin{equation} h = \frac{\hbar^2 k_x^2}{2m^*} + V(x) - \alpha \sigma_y k_x + \frac{1}{2}\mu_B gB\sigma_x, \label{1eham} \end{equation} where $H_{SO 1D}=-\alpha\sigma_yk_x$ corresponds to the Rashba SO coupling\cite{rashba} resulting from the Hamiltonian $H_{SO}=\alpha(\sigma_xk_y-\sigma_yk_x)$ averaged in the $y$-direction. $V(x)$ describes the potential profile of the double dot, \begin{equation} V(x) = \left\{ \begin{array}{l l} V_b & \quad x<-w/2\;\textrm{and}\;x>-(l_1+w/2)\\ V_i & \quad |x|<w/2\\ 0 & \quad x>w/2\;\textrm{and}\;x<l_2+w/2 \end{array} \right. \end{equation} where $l_1$ and $l_2$ determine the length of each dot, $w$ is the interdot barrier width, $V_i$ is the barrier height and $V_b$ is the bias potential applied to the bottom of the left dot. We assume a $w=20$ nm thin and $V_i=200$ meV high interdot barrier. The computational box ends at the edges of the defined potential and the magnetic field is applied along the nanowire axis. The single-electron eigenstates are obtained by exact diagonalization of Hamiltonian (\ref{1eham}) on a mesh of 201 points with $\Delta x = 1.095$ nm. We adopt parameters corresponding to InSb nanowires, i.e. $m^*=0.014$, $\varepsilon=16.5$, $g=-51$ and $\alpha=30$ meVnm, which corresponds to the spin-orbit length $l_{so}=\hbar^2/(m^*\alpha)=182$ nm, comparable to the value measured experimentally in Ref. \onlinecite{nadj2012}. We take $\ell=20$ nm. \section{Results} \subsection{Two-electron quantum dot} \begin{figure}[ht!]
\epsfxsize=75mm \epsfbox[20 54 567 800] {spectr2e.eps} \caption{Energy spectra of the two-electron double-dot system. (a) Symmetric system with the lengths of both dots equal to 100 nm. (b) Asymmetric configuration: the length of the left dot is 150 nm and the length of the right one is 50 nm. The inset in (a) shows the energy spectrum including the ground-state (2,0) singlet. The insets in (b) depict the spin densities, with the color curves presented in arbitrary units. The black contours show the confinement potential.} \label{spectr2e} \end{figure} Let us first consider a symmetric system of two quantum dots of lengths $l_1=l_2=100$ nm. We set the bottom of the left dot to $V_b=-3.8$ meV. The bias results in an energy level configuration such that the $(2,0)$ singlet\cite{comment} is the ground state and the lowest-energy excited states are $(1,1)$ states with different spin polarizations. This configuration is necessary for observation of the Pauli spin blockade. The inset to Fig. \ref{spectr2e}(a) shows the lowest part of the energy spectrum. The ground-state singlet of (2,0) occupation has the mean value of the $\langle S^2\rangle$ operator equal to 0.12 $\hbar^2/4$. Figure \ref{spectr2e}(a) presents energy levels of (1,1) states. The two Zeeman-split energy levels correspond to a spin-positive triplet $T_+$ ($\langle S^2\rangle=1.98\;\hbar^2/4$) with spins oriented approximately along the magnetic field and to a spin-negative triplet $T_-$ ($\langle S^2\rangle=1.97\;\hbar^2/4$) with spins oriented against the magnetic field. The horizontal curve corresponds to a degenerate energy level of the singlet (S) and triplet ($T_0$) states with zero spin projection along the direction of the magnetic field. The degeneracy results from the negligible overlap between the electrons in adjacent dots and hence nearly zero exchange interaction. The mean values of the $\langle S^2\rangle$ operator for these states are: 1.04, 1.01 $[\hbar^2/4]$.
In EDSR experiments the spin rotations are performed from one of the non-zero-spin triplets, $T_+$ or $T_-$.\cite{nowak2014} When a resonance to a state with zero spin component along the magnetic field occurs, the blockade is lifted. The experimentally measured resonances exhibit a linear dependence of the driving frequency on the magnetic field, equal to (considering $T_+$ as the initial state) $\omega=[E(S)-E(T_+)]/\hbar$. The corresponding energy $\hbar\omega$ is plotted in Fig. \ref{2ediffen} with the red-dashed curve. \begin{figure}[ht!] \epsfxsize=70mm \epsfbox[19 200 570 640] {2ediffen.eps} \caption{Energy difference (the red-dashed curve) between the energy level of the $T_+$ state and the degenerate energy level from Fig. \ref{spectr2e}(a), or the two levels that correspond to states with opposite spin configurations from Fig. \ref{spectr2e}(b) (black curves).} \label{2ediffen} \end{figure} \begin{figure*}[ht!] \epsfxsize=170mm \epsfbox[25 339 570 515] {spectr1e.eps} \caption{(a,e) Single-electron charge densities and the potential profile. (b,f) Energy spectrum -- the colors of the curves correspond to the states from (a,e). (c,g) Energy levels from (b,f) shifted to compare the Zeeman splittings. (d,h) Sums of the single-electron energies as they enter the configuration interaction approach.} \label{spectr1e} \end{figure*} Let us now consider the case in which the dots are of unequal length -- $l_1=150$ nm and $l_2=50$ nm -- but we keep the $g$-factor constant along the structure. Energy levels of states in which the electrons occupy adjacent dots are presented in Fig. \ref{spectr2e}(b). To keep the energy separation between the (2,0) singlet and the four (1,1) states equal to 1.5 meV at $B=0$, as in the case of the symmetric system, we set $V_{bias}=5.07$ meV in the left dot. The striking difference between the spectra of Fig. \ref{spectr2e}(b) as compared to the symmetric case of Fig.
\ref{spectr2e}(a) is that for non-zero magnetic field the degeneracy of the horizontal energy levels is lifted. The spin densities of the states that correspond to these energy levels, calculated as $\sigma^j_x(x)=\sum_{i=1}^N\langle\Psi_j(\nu_1,\nu_2,...,\nu_N)|\sigma^i_{x}\delta(x_i-x)|\Psi_j(\nu_1,\nu_2,...,\nu_N)\rangle$, are depicted in the insets to Fig. \ref{spectr2e}(b). We observe that in the state with lower energy the spin in the left dot is oriented against the magnetic field, while in the right dot it is oriented along the magnetic field. In the following we will refer to this state as $(\downarrow,\uparrow)$. The state $(\uparrow,\downarrow)$, whose energy increases with $B$, has the opposite spin configuration. The $(\downarrow,\uparrow)$, $(\uparrow,\downarrow)$ states have zero total spin along the $x$-direction; therefore the EDSR transitions to these states lift the Pauli blockade and such transitions are visible as resonance lines in EDSR spectra. We calculate the corresponding energy differences $\Delta E_1 =\omega_1 \hbar=[E(\downarrow,\uparrow)-E(T_+)]$, $\Delta E_2 =\omega_2 \hbar=[E(\uparrow,\downarrow)-E(T_+)]$ and plot them in Fig. \ref{2ediffen} with black solid lines. We note that this arrangement of resonance lines is present in every EDSR map registered experimentally [see Refs. \onlinecite{nadj-perge,nadj2012,berg,schroer,frolov,petersson, stehlik}] and attributed to different $g$-factors in the dots. Here it is obtained for a constant $g$ along the structure. The slopes of the curves in Fig. \ref{2ediffen} are connected to the {\it effective} $g$-factor. For the symmetric system we calculate $g^*=\frac{E(S)-E(T_+)}{\mu_B B}$, equal to $g^*=-49.93$ for $B=100$ mT. In the case of asymmetric dots the effective $g$-factors are $g_1^*=-48.70$, $g_2^*=-50.71$. To explain the impact of the dot lengths on the energy spectra and effective $g$-factors let us inspect the single-electron spin-orbitals that constitute the two-electron orbitals.
Figures \ref{spectr1e} (a,e) show the charge densities of the single-electron states. The densities correspond to the ground states of the orbital quantization of each dot. The black curves correspond to the $s_{l,\uparrow}$ and $s_{l,\downarrow}$ states [the main letter denotes the orbital excitation, the subscript $(l,r)$ denotes the dot in which the electron is localized and the arrows correspond to the average spin polarization direction]. The red-dashed curve shows the charge densities of the higher-energy states $s_{r,\uparrow}$ and $s_{r,\downarrow}$, in which the electron occupies the right dot. We extract the squared absolute values of the coefficients -- $|c_i|^2$ -- for each of the Slater determinants used in the configuration interaction approach. For the symmetric case we get 0.806 for the determinant consisting of the $\{s_{l,\uparrow},s_{r,\downarrow}\}$ single-electron orbitals and 0.194 for the one consisting of the $\{s_{l,\downarrow},s_{r,\uparrow}\}$ orbitals, for one of the states from the degenerate pair of spin-zero two-electron states (the coefficients for the second state are reversed). For the asymmetric case we get 0.992 for $\{s_{l,\downarrow},s_{r,\uparrow}\}$ for the $(\downarrow,\uparrow)$ state and 0.996 for $\{s_{l,\uparrow},s_{r,\downarrow}\}$ for the $(\uparrow,\downarrow)$ state. The lack of admixture of other Slater determinants is due to the small size of the dots, which results in a considerable kinetic-energy separation of the single-electron orbitals. The energy spectra displayed in Fig. \ref{spectr1e}(b,f) show the Zeeman splittings of the single-electron energy levels. If we overlay the energy levels of the states in which the electron is localized in the left and right dot (solid black and red-dashed curves in Fig. \ref{spectr1e}(b,f), respectively) we observe that they are exactly the same [Fig. \ref{spectr1e}(c)] when the dots are of identical length, but they differ when the dots are of unequal length [Fig. \ref{spectr1e}(g)].
Now we sum the single-electron energies according to the way the states enter the configuration-interaction calculation, i.e. the $(\downarrow,\uparrow)$ state corresponds to the occupation of the $s_{l,\downarrow}$, $s_{r,\uparrow}$ single-electron orbitals with energies $E(s_{l,\downarrow})$ and $E(s_{r,\uparrow})$. The $(\uparrow,\downarrow)$ state corresponds to the occupation of the $s_{l,\uparrow}$, $s_{r,\downarrow}$ single-electron orbitals with energies $E(s_{l,\uparrow})$ and $E(s_{r,\downarrow})$. The obtained sums are plotted in Fig. \ref{spectr1e}(d) for the symmetric and in Fig. \ref{spectr1e}(h) for the asymmetric case. We find that the degeneracy is lifted due to the different Zeeman splittings of the single-electron energy levels of electrons confined in dots of different length. \begin{figure}[ht!] \epsfxsize=60mm \epsfbox[107 18 503 823] {g1g2.eps} \caption{$g_1^*/g_2^*$ versus the lengths of the dots for (a) two electrons, (b) four electrons and (c) six electrons.} \label{g1g2} \end{figure} In a single one-dimensional quantum dot the SO interaction impacts the Zeeman splittings according to $E_z=g\mu_B B\lambda_i$, where\cite{nowaksp} \begin{equation} \lambda_i=\int|\Psi_i(x)|^2\cos(2\alpha m^*x/\hbar^2) dx. \label{integral} \end{equation} Due to the high interdot barrier we can effectively treat the considered system as two separate dots. For a quantum dot in the form of an infinite quantum well, the term $\lambda_i$ that controls the strength of the Zeeman splitting is \begin{equation} \lambda_1(l)=\frac{\hbar^6\pi^2\sin(l\alpha m^*/\hbar^2)}{\alpha m^* l (\pi^2 \hbar^4-\alpha^2{m^*}^2l^2)}, \end{equation} for $\Psi_i(x)$ in the form of an $s$-like orbital. $\lambda_1$ changes from 1 for narrow quantum dots to 0 in the limit of infinite dot length. Accordingly, the Zeeman splittings are the strongest (as strong as in the absence of SO coupling) for a narrow quantum dot and become weaker as the length of the dot is increased.
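The closed form of $\lambda_1$ can be cross-checked against the defining integral, Eq. (\ref{integral}). In the sketch below (our own script; lengths in nm, with $k_{so}=\alpha m^*/\hbar^2=1/l_{so}$ built from the InSb parameters of Sec. II) the hard-wall ground state $\Psi_1(x)=\sqrt{2/l}\cos(\pi x/l)$ of a well centered at the origin is integrated numerically and compared with the analytic expression; the resulting ratio $\lambda_1(150\,\mathrm{nm})/\lambda_1(50\,\mathrm{nm})\approx0.96$ is the $g$-factor ratio found for the asymmetric two-electron system.

```python
import numpy as np

# InSb parameters: hbar^2/m* = 76.2 meV nm^2 / 0.014, alpha = 30 meV nm,
# so k_so = alpha*m*/hbar^2 = 1/l_so with l_so ~ 181 nm.
K_SO = 30.0 * 0.014 / 76.2  # nm^-1

def lam1_analytic(l, k=K_SO):
    """lambda_1 for the s-like ground state of an infinite well of length l."""
    return np.pi**2 * np.sin(k * l) / (k * l * (np.pi**2 - (k * l)**2))

def lam1_numeric(l, k=K_SO, n=20001):
    """Trapezoidal evaluation of int |Psi_1|^2 cos(2 k x) dx on [-l/2, l/2]."""
    x = np.linspace(-l / 2, l / 2, n)
    f = (2.0 / l) * np.cos(np.pi * x / l) ** 2 * np.cos(2.0 * k * x)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Closed form agrees with the direct integral for all dot lengths used here.
for l in (50.0, 100.0, 150.0):
    assert abs(lam1_analytic(l) - lam1_numeric(l)) < 1e-6

# g-factor ratio of the asymmetric dots l1 = 150 nm, l2 = 50 nm:
print(lam1_analytic(150.0) / lam1_analytic(50.0))  # ~0.96
```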
The $g^*$-factors calculated from $\Delta E_1 =\omega_1 \hbar=[E(\downarrow,\uparrow)-E(T_+)]$ and $\Delta E_2 =\omega_2 \hbar=[E(\uparrow,\downarrow)-E(T_+)]$ depend on the Zeeman splittings of the single-electron energy levels as follows [taking $E_0$ as the orbital excitation energy of the $(\downarrow,\uparrow)$, $(\uparrow,\downarrow)$ and $T_+$ states]: $\Delta E_1 = E_0 + E(s_{l,\downarrow})+E(s_{r,\uparrow})-E_0-E(s_{l,\uparrow})-E(s_{r,\uparrow})=E(s_{l,\downarrow})-E(s_{l,\uparrow})=g\mu_BB\lambda_1(l_1)$ and $\Delta E_2 = E_0+ E(s_{l,\uparrow})+E(s_{r,\downarrow})-E_0-E(s_{l,\uparrow})-E(s_{r,\uparrow})=E(s_{r,\downarrow})-E(s_{r,\uparrow})=g\mu_BB\lambda_1(l_2)$. Therefore $g_1^*=g\lambda_1(l_1)$ and $g_2^*=g\lambda_1(l_2)$. We plot the ratio $g_1^*/g_2^*=\lambda_1(l_1)/\lambda_1(l_2)$ in Fig. \ref{g1g2}(a). For $l_1=150$ nm and $l_2=50$ nm we obtain $g_1^*/g_2^*=0.960$, which matches well the value obtained in the exact calculation of Fig. \ref{2ediffen}, $g_1^*/g_2^*=0.961$. It should be noted here that the effect of the SO interaction on the strength of the Zeeman splittings is influenced also by the orientation of the magnetic field.\cite{nowaksp} For the magnetic field vector forming an angle $\phi$ with the nanowire axis the splitting becomes $E_z=g\mu_B B \sqrt{1-(1-\lambda_i^2)\cos^2\phi}$, i.e. the $g^*$ values obtained for the magnetic field oriented perpendicular to the nanowire axis approach the bulk $g$-factor value. \subsection{Four- and six-electron case} Figure \ref{spectr4e}(a) presents, with black solid curves, the energy levels of the four-electron symmetric system with $V_{b}=-14.21$ meV. The levels correspond to the states with (3,1) occupation. The plot omits the ground state with (4,0) occupation that is $1.5$ meV lower in energy with respect to the presented energy levels at $B=0$. The mean values of the $\langle S^2\rangle$ operator for the subsequent (3,1) states are: 1.97, 1.04, 1.03, 1.97 $[\hbar^2/4]$.
These values are close to the ones obtained for the two-electron system. The fact that two of the electrons do not contribute to the total spin shows that they form a singlet state with zero total spin. \begin{figure}[ht!] \epsfxsize=70mm \epsfbox[40 20 550 820] {spectr4e.eps} \caption{(a) Energy levels of the four-electron symmetric double dot in the (3,1) occupation regime, depicted with black solid curves. The red-dashed curves are energy levels of a two-electron system with the $s_{l,\uparrow}$ and $s_{l,\downarrow}$ single-electron orbitals that form a closed singlet shell excluded. (b) Energy spectrum for the asymmetric system with $l_1=150$ nm and $l_2=50$ nm.} \label{spectr4e} \end{figure} Let us extract the coefficients for each Slater determinant used to create the configuration interaction basis. For the subsequent states whose energy levels are depicted in Fig. \ref{spectr4e}(a), the only non-negligible coefficients (nearly equal to unity; the other coefficients are less than 0.006) are those for Slater determinants consisting of the following single-electron orbitals: $\{s_{l,\uparrow}s_{l,\downarrow}p_{l,\uparrow}s_{r,\uparrow}\}$, $\{s_{l,\uparrow}s_{l,\downarrow}p_{l,\downarrow}s_{r,\uparrow}\}$, $\{s_{l,\uparrow}s_{l,\downarrow}p_{l,\uparrow}s_{r,\downarrow}\}$ and $\{s_{l,\uparrow}s_{l,\downarrow}p_{l,\downarrow}s_{r,\downarrow}\}$, respectively. The corresponding orbitals are depicted in Fig. \ref{1efor4e}(a,b,c). Let us now assume that two electrons of the four-electron system form a closed singlet shell that does not impact the spin properties of the two remaining electrons and thus can be separated away: we exclude from the configuration interaction basis the $s_{l,\uparrow},s_{l,\downarrow}$ orbitals and limit the number of electrons in the calculation to two. The obtained energy levels are depicted with the red-dashed curves in Fig. \ref{spectr4e}(a).
Apart from a constant shift between the energy levels obtained in the full four-electron calculation and in the two-electron calculation with the restricted basis, the spectra match perfectly. The total spins and the Zeeman splittings in the four-electron energy spectra resemble the ones obtained for two electrons. However, the striking feature of the spectrum of Fig. \ref{spectr4e}(a) is that the two horizontal energy levels become separated in a non-zero magnetic field already for a symmetric system. Let us invoke the two-electron approximation with the restricted basis to explain this observation. The $s_{l,\uparrow},s_{l,\downarrow}$ orbitals are occupied by two spin-opposite electrons that form the singlet state and are excluded from the basis. The two energy levels that slightly split in the magnetic field are constructed from a $p$-like orbital of the electron localized in the left dot ($p_{l,\uparrow}, p_{l,\downarrow}$) [see Fig. \ref{1efor4e}(b)] and an $s$-like orbital formed by the electron localized in the right dot ($s_{r,\uparrow},s_{r,\downarrow}$) [see Fig. \ref{1efor4e}(a)]. The single-electron energy levels are depicted in Fig. \ref{1efor4e}(d). We observe that the Zeeman splittings between the energy levels of the $s$-states differ from the ones for the $p$-orbitals: 0.282 meV compared to 0.294 meV. Here the dots are symmetric, so it is the shape of $\psi_i(x)$ that is changed. We integrate Eq. (\ref{integral}) for a $p$-like orbital and obtain \begin{equation} \lambda_2(l)=\frac{4 \hbar^6 \pi^2\sin(l\alpha m^*/\hbar^2)}{\alpha m^* l (4\pi^2\hbar^4-\alpha^2{m^*}^2l^2)}. \end{equation} The effective $g$-factors are obtained from the energy splittings analogously to the two-electron case: $\Delta E_1=E_0+E(p_{l,\downarrow})+E(s_{r,\uparrow})-E_0-E(p_{l,\uparrow})-E(s_{r,\uparrow})=E(p_{l,\downarrow})-E(p_{l,\uparrow})=g\mu_BB\lambda_2(l_1)$ and $\Delta E_2=E_0+E(p_{l,\uparrow})+E(s_{r,\downarrow})-E_0-E(p_{l,\uparrow})-E(s_{r,\uparrow})=E(s_{r,\downarrow})-E(s_{r,\uparrow})=g\mu_BB\lambda_1(l_2)$.
As a result, the two states constructed from the $\{p_{l,\downarrow},s_{r,\uparrow}\}$ and $\{p_{l,\uparrow},s_{r,\downarrow}\}$ single-electron orbitals have different energies at $B\ne0$ for $l_1=l_2$. \begin{figure}[ht!] \epsfxsize=85mm \epsfbox[21 290 570 550] {1efor4e.eps} \caption{(a,b,c) Charge densities of single-electron states and the potential profile for symmetric quantum dots with $V_b=-14.21$ meV. (d) Single-electron energy spectrum.} \label{1efor4e} \end{figure} Figure \ref{g1g2}(b) presents $g_1^*/g_2^*=\lambda_2(l_1)/\lambda_1(l_2)$. The plot suggests that the $g$-factor ratio can be altered significantly, as compared to the two-electron case, for an asymmetric system. Namely, elongating the dot that is occupied by three electrons amplifies the ratio of the $g$-factors in the dots more strongly than elongating the dot with a single electron. The energy spectra for the asymmetric system with $l_1=150$ nm and $l_2=50$ nm are presented in Fig. \ref{spectr4e}(b). The splitting between the central lines is visibly increased as compared to the symmetric case of Fig. \ref{spectr4e}(a). We calculate $g_1^*/g_2^*=0.896$, which is close to the value obtained in the analytical calculation of Fig. \ref{g1g2}(b), equal to 0.910. \begin{figure}[ht!] \epsfxsize=70mm \epsfbox[48 32 540 820] {spectr6e.eps} \caption{(a) Six-electron energy spectrum for the symmetric system with $V_b=-30.125$ meV. Black curves correspond to the exact six-electron calculation. Red-dashed curves are obtained in a two-electron calculation with the basis excluding the four lowest-energy single-electron spin-orbitals. (b) Single-electron energy spectrum.} \label{spectr6e} \end{figure} Figure \ref{spectr6e}(a) presents the energy spectrum of (5,1) states of the six-electron double-dot system for the bias $V_b=-30.125$ meV. The energy level structure resembles the spectrum of the four-electron system depicted in Fig. \ref{spectr4e}(a).
We again obtain the splitting of the central lines already for a symmetric system. For $B=100$ mT the splitting is 7.2 $\mu$eV for four electrons, while for six electrons we get 8.6 $\mu$eV. Also the total spins of the (5,1) states are similar: 1.97, 1.04, 1.03, 1.97 $[\hbar^2/4]$. The coefficients of the Slater determinants extracted from the configuration interaction calculation show that each of the discussed six-electron states is described mainly by a single determinant (with the square of the absolute value equal to 0.987). The single-electron states that constitute the determinants are: $\{s_{l,\uparrow},s_{l,\downarrow},p_{l,\uparrow},p_{l,\downarrow},d_{l,\uparrow},s_{r,\uparrow}\}$, $\{s_{l,\uparrow},s_{l,\downarrow},p_{l,\uparrow},p_{l,\downarrow},d_{l,\downarrow},s_{r,\uparrow}\}$, $\{s_{l,\uparrow},s_{l,\downarrow},p_{l,\uparrow},p_{l,\downarrow},d_{l,\uparrow},s_{r,\downarrow}\}$, $\{s_{l,\uparrow},s_{l,\downarrow},p_{l,\uparrow},p_{l,\downarrow},d_{l,\downarrow},s_{r,\downarrow}\}$ for the consecutive (5,1) states of Fig. \ref{spectr6e}(a). The single-electron energy levels are depicted in Fig. \ref{spectr6e}(b). The determinants correspond to an occupation of single-electron orbitals in which two pairs of electrons occupy closed singlet shells: two electrons occupy the spin-opposite $s$-like orbitals and the next pair occupies the two spin-opposite $p$-like orbitals. The two remaining electrons occupy a $d$-like orbital in the left dot and an $s$-like orbital in the right dot. The spectrum calculated in the basis excluding the four single-electron states that form the two singlet shells is presented in Fig. \ref{spectr6e}(a) with the red dashed curves. The spectra obtained in the exact calculation and in the restricted basis agree.
For six electrons we calculate the ratio $g_1^*/g_2^*=\lambda_3(l_1)/\lambda_1(l_2)$, where \begin{equation} \lambda_3(l)=\frac{\hbar^6\sin(l\alpha m^*/\hbar^2)(9\pi^2\hbar^4-2\alpha^2{m^*}^2l^2)}{\alpha m^* l (9\pi^2 \hbar^4-\alpha^2{m^*}^2l^2)} \end{equation} is determined by integrating Eq. \ref{integral} with a $d$-like orbital, and plot it in Fig. \ref{g1g2}(c). We see that it is similar to the four-electron case of Fig. \ref{g1g2}(b) and strongly altered as compared to the $N=2$ case. \subsection{Comparison with the experiments} Our work shows that increasing the number of electrons amplifies the difference between the effective $g$-factors in the dots. Table \ref{t1} lists the $g_1^*/g_2^*$ values taken from experimental works. The studies that considered $N>2$ electrons indeed measured ratios that deviate more from unity than the $N=2$ cases. The actual experimental values could be affected by a number of effects omitted in the present modeling, such as the detailed structure of the confinement potential or a non-zero exchange interaction.\cite{nowakharm} Nevertheless, the tendency drawn by these data is clear and agrees with the result of the present study. \begin{table}[H] \centering \begin{tabular}{|c|c|c|} \noalign{\hrule height 0.7pt} Reference No. & Number of electrons & $g_1^*/g_2^*$ \\ \noalign{\hrule height 0.7pt} \onlinecite{nadj-perge} & $N=2$ & 0.967\\\hline \onlinecite{schroer} & $N=2$ & 0.923\\\hline \onlinecite{nadj2012} & $N=2$ & 0.922\\\hline \onlinecite{berg} & $N>2$ & 0.750\\\hline \onlinecite{petersson} & $N>2$ & 0.760\\\hline \onlinecite{stehlik} & $N=6$ & 0.872\\\hline \end{tabular} \caption{$g_1^*/g_2^*$ ratio reported in the experimental works versus the number of electrons.} \label{t1} \end{table} \section{Summary and conclusions} We investigated nanowire double quantum dots occupied by an even number of electrons tuned to the Pauli spin blockade regime.
Using an exact configuration-interaction study, we found that in a system with an even number of electrons larger than two, all but two electrons form closed singlet shells. This allows one to obtain the properties of these structures in a configuration-interaction calculation in which the number of electrons is limited to two and the $N-2$ lowest-energy single-electron orbitals forming the singlet shells are excluded. Despite the fact that for $N>2$ the properties of the system are controlled by only two electrons, the dots with such an occupation cannot be treated as an exact equivalent of two-electron systems. We found that the occupation of excited single-electron orbitals by the valence electrons results in different effective $g$-factors in the adjacent dots. For $N>2$ the difference appears already for a symmetric system, while for two electrons it results from the asymmetry of the dots. The differences of the effective $g$-factors present in our results are observed in recent EDSR experimental studies of double quantum dots. \section{Acknowledgements} This work was supported by the funds of the Ministry of Science and Higher Education for 2014 and by the PL-Grid Infrastructure.
\section{Introduction} \label{s0} Voiculescu's work in \cite{voiculescu}, a far-reaching extension of the results by Weyl and von Neumann on unitary equivalence up to compact perturbation of self-adjoint operators (\cite{weyl}, \cite{vNeu}), is one of the cornerstones of the theory of extensions of separable $\mathrm{C}^\ast$-algebras (see \cite{david} for a survey of these results). The label `Voiculescu's theorem' often refers to a collection of results and corollaries from \cite{voiculescu}, rather than a specific theorem. Throughout this paper, it always refers to the following statement, where for a complex Hilbert space $H$, $\mathcal{B}(H)$ is the algebra of bounded linear operators from $H$ into itself, and $\mathcal{K}(H)$ is the algebra of compact operators. \begin{VT*} \label{voict} Let $H, L$ be two separable Hilbert spaces, $\mathcal{A} \subseteq \mathcal{B}(H)$ a separable unital $\mathrm{C}^\ast$-algebra and $\sigma : \mathcal{A} \to \mathcal{B}(L)$ a unital completely positive map such that $\sigma(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$. Then there is a sequence of isometries $V_n: L \to H$ such that $\sigma(a) - V_n^* a V_n \in \mathcal{K}(L)$ and $\lim_{n \to \infty} \lVert \sigma(a) - V_n^* a V_n \rVert = 0$ for all $a \in \mathcal{A}$. \end{VT*} See \S \ref{s1} for a definition of completely positive map. We remark that all $*$-homomorphisms, as well as the states of any given $\mathrm{C}^\ast$-algebra, are completely positive (see \cite[Example 1.5.2]{brownozawa}). The specific instance of Voiculescu's theorem when $L =\mathbb{C}$ and $\sigma$ is a state is known as Glimm's lemma (\cite[Lemma 3.6.1]{higsonroe}), whose statement is the following. \begin{GL*} Let $H$ be a separable Hilbert space, $\mathcal{A} \subseteq \mathcal{B}(H)$ a separable unital $\mathrm{C}^\ast$-algebra and $\sigma: \mathcal{A} \to \mathbb{C}$ a state such that $\sigma(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$.
There exists a sequence of orthonormal vectors $\set{\xi_n}_{n \in \mathbb{N}}$ such that $\sigma(a) = \lim_{n \to \infty} \langle a \xi_n, \xi_n \rangle$ for every $a \in \mathcal{A}$. \end{GL*} We refer the reader to \cite{arveson} and \cite[\S 3.4--3.6]{higsonroe} for a proof of these classical results. The study of the extensions of a $\mathrm{C}^\ast$-algebra and of the invariant Ext, one of the main fields of applications of Voiculescu's theorem, flourished after the seminal work in \cite{bdf} (see \cite{higsonroe} or \cite{blackk} for an introduction to this subject). Given a unital $\mathrm{C}^\ast$-algebra $\mathcal{A}$, $\text{Ext}(\mathcal{A})$ is the set of all unital embeddings of $\mathcal{A}$ into the Calkin algebra $\mathcal{Q}(H)$ (modulo the relation of unitary equivalence). The set $\text{Ext}(\mathcal{A})$ can be endowed with a semigroup structure, and one of the main consequences of Voiculescu's theorem in this framework is that $\text{Ext}(\mathcal{A})$ has an identity when $\mathcal{A}$ is separable and unital. This, along with the results in \cite{choieffros}, implies for instance that $\text{Ext}(\mathcal{A})$ is a group for every nuclear separable unital $\mathrm{C}^\ast$-algebra $\mathcal{A}$. Voiculescu's theorem has also been recently employed in combination with set theory in the study of which nonseparable $\mathrm{C}^\ast$-algebras embed into the Calkin algebra, in \cite{farahvignati} and \cite{fkv}. The current paper pushes further the interaction of the results in \cite{voiculescu} with set theory, consistently extending Voiculescu's theorem to certain `small' nonseparable $\mathrm{C}^\ast$-algebras. Let $H, L$ be two separable Hilbert spaces, $\mathcal{A} \subseteq \mathcal{B}(H)$ a nonseparable unital $\mathrm{C}^\ast$-algebra and $\sigma : \mathcal{A} \to \mathcal{B}(L)$ a unital completely positive map such that $\sigma(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$. 
If we do not assume anything else, all that Voiculescu's theorem can guarantee is the existence of a net of isometries $\set{V_\lambda}_{\lambda \in \Lambda}$ from $L$ into $H$ such that, for any separable subalgebra $\mathcal{B} \subset \mathcal{A}$ and $\epsilon > 0$, there is $\mu \in \Lambda$ such that $\lambda \ge \mu$ implies $\sigma(a) - V_\lambda^* a V_\lambda \in \mathcal{K}(L)$ and $\lVert \sigma(a) - V_\lambda^* a V_\lambda \rVert < \epsilon$ for all $a \in \mathcal{B}$. In this note we prove the following theorem. \begin{theorem*} \label{t1} Let $H$ be a separable Hilbert space. \begin{enumerate} \item \label{i1t1} Let $L$ be a separable Hilbert space, $\mathcal{A} \subseteq \mathcal{B}(H)$ a unital $\mathrm{C}^\ast$-algebra of density character strictly less than $\mathfrak{p}$\footnote{The cardinal invariant $\mathfrak{p}$ is the least size of a centered subfamily $F$ of $\mathcal{P}(\mathbb{N}) / \Fin$ which does not have a lower bound in $\mathcal{P}(\mathbb{N}) / \Fin$ (for an introduction to this cardinal invariant and its basic properties see \cite{bart}).} and $\sigma : \mathcal{A} \to \mathcal{B}(L)$ a unital completely positive map such that $\sigma(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$. Then there is a sequence of isometries $V_n: L \to H$ such that $\sigma(a) - V_n^* a V_n$ is compact and $\lim_{n \to \infty} \lVert \sigma(a) - V_n^* a V_n \rVert = 0$ for all $a \in \mathcal{A}$. \item \label{i2t1} Given a cardinal $\lambda$, it is consistent with $\textsf{ZFC} + \mathfrak{c} \ge \lambda$ (where $\mathfrak{c}$ is the cardinality of the continuum) that there exist a unital $\mathrm{C}^\ast$-algebra $\mathcal{A} \subseteq \mathcal{B}(H)$ of density character less than $\mathfrak{c}$, and $\sigma$, a state of $\mathcal{A}$ annihilating $\mathcal{A} \cap \mathcal{K}(H)$, for which Glimm's lemma fails. \end{enumerate} \end{theorem*} Theorem \ref{t1} gives the following corollary. 
\begin{corollary*} \label{c} The statement `Voiculescu's theorem holds for all separably representable $\mathrm{C}^\ast$-algebras of density character less than $\mathfrak{c}$' is independent of \textsf{ZFC}. Moreover, it is independent of $\textsf{ZFC} + \mathfrak{c} \ge \lambda$ for any cardinal $\lambda$. \end{corollary*} \begin{proof} Under Martin's axiom, which implies $\mathfrak{p} = \mathfrak{c}$, the statement holds by item \ref{i1t1} of theorem \ref{t1}. The consistent failure of the statement follows by item \ref{i2t1} of theorem \ref{t1}. \end{proof} The argument used to obtain the first part of theorem \ref{t1} is inspired by the proof of Voiculescu's theorem as given by Arveson in \cite{arveson} (see also \cite[\S 3.4--3.6]{higsonroe}). We show that this proof essentially consists of a sequence of diagonalization arguments which are equivalent to applications of the Baire category theorem to certain $\sigma$-centered partial orders (see \S \ref{s1} for a definition). Item \ref{i1t1} of theorem \ref{t1} then follows by the results in \cite{bell}, where it is shown that Martin's axiom holds for $\kappa$-sized families of dense subsets of $\sigma$-centered partial orders if and only if $\kappa < \mathfrak{p}$. The second item of theorem \ref{t1} is obtained via an application of Cohen's forcing and a simple cardinality argument. Starting from a $\mathrm{C}^\ast$-algebra $\mathcal{A}$ of density character $\mathfrak{c}$ for which Glimm's lemma fails, we show that the lemma still fails for $\mathcal{A}$ after adding enough (but not too many) Cohen reals. We remark that Voiculescu's theorem is false in general for subalgebras of $\mathcal{B}(H)$ of density character $\mathfrak{c}$, as witnessed by $\ell^\infty(\mathbb{N})$, $L^\infty([0,1])$ and $\mathcal{B}(H)$ itself (see \S \ref{s3}).
We do not know if the notion of smallness given by $\mathfrak{p}$ in this context is optimal, or if it is consistent that there are $\mathrm{C}^\ast$-algebras of density character greater than or equal to $\mathfrak{p}$ for which the conclusion of Voiculescu's theorem holds. The paper is organized as follows. Section \ref{s1} is devoted to definitions and preliminaries. In section \ref{s2} we prove item \ref{i1t1} of theorem \ref{t1} and list some standard corollaries of Voiculescu's theorem which generalize to $\mathrm{C}^\ast$-algebras of density character smaller than $\mathfrak{p}$. Finally, in \S \ref{s3} we give a proof of item \ref{i2t1}. \section{Preliminaries} \label{s1} Throughout this paper, given a complex Hilbert space $H$, $\mathcal{B}(H)$ is the algebra of bounded linear operators from $H$ into itself, and $\mathcal{K}(H)$ is the algebra of compact operators. The Calkin algebra $\mathcal{Q}(H)$ is the quotient $\mathcal{B}(H) / \mathcal{K}(H)$. An \emph{approximate unit} of a $\mathrm{C}^\ast$-algebra $\mathcal{A}$ is a net $\set{h_\lambda}_{\lambda \in \Lambda}$ of positive contractions of $\mathcal{A}$ such that $\lim_{\lambda} \lVert h_\lambda a - a \rVert = \lim_{\lambda} \lVert a h_\lambda - a \rVert = 0$ for all $a \in \mathcal{A}$. Given a $\mathrm{C}^\ast$-algebra $\mathcal{A} \subseteq \mathcal{B}(H)$, an approximate unit of $\mathcal{K}(H)$ is \emph{quasicentral} for $\mathcal{A}$ if $\lim_\lambda \lVert [h_\lambda, a ] \rVert = 0$ for all $a \in \mathcal{A}$, where $[b,c]$ denotes, for two operators $b,c \in \mathcal{B}(H)$, the commutator $bc- cb$. For a $\mathrm{C}^\ast$-algebra $\mathcal{A}$, let $M_n(\mathcal{A})$ be the $\mathrm{C}^\ast$-algebra of $n \times n$ matrices with entries in $\mathcal{A}$.
Given two $\mathrm{C}^\ast$-algebras $\mathcal{A}$ and $\mathcal{B}$, a bounded linear map $\sigma: \mathcal{A} \to \mathcal{B}$ is \emph{completely positive} if for all $n \in \mathbb{N}$ all the maps $\sigma_n : M_n (\mathcal{A}) \to M_n(\mathcal{B})$, defined as \[ \sigma_n([a_{ij}]) = [\sigma(a_{ij})] \] are positive, i.e.\ they send positive elements into positive elements. The notation $F \Subset G$ stands for `$F$ is a finite subset of $G$'. A \emph{partially ordered set} (or simply \emph{poset}) $(\mathbb{P}, \le)$ is a set equipped with a binary, transitive, antisymmetric, reflexive relation $\le$. The poset $(\mathbb{P}, \le)$ is \emph{centered} if for any $F \Subset \mathbb{P}$ there is $q \in \mathbb{P}$ such that $q \le p$ for all $p \in F$, and it is $\sigma$\emph{-centered} if it is the union of countably many centered sets. A subset $D \subseteq \mathbb{P}$ is \emph{dense} if for every $p \in \mathbb{P}$ there is $q \in D$ such that $q \le p$. A set $G \subseteq \mathbb{P}$ is a \emph{filter} if $q \in G$ and $q \le p$ implies $p \in G$, and if for any $p, q \in G$ there is $r \in G$ such that $r \le p$, $r \le q$. Given $\mathcal{D}$ a collection of dense subsets of $\mathbb{P}$, a filter $G$ is $\mathcal{D}$\emph{-generic} if $G \cap D \not= \emptyset$ for all $D \in \mathcal{D}$. By the results in \cite{bell}, $\kappa < \mathfrak{p}$ is equivalent to the following weak form of Martin's axiom. \begin{MA*} Given a $\sigma$-centered poset $(\mathbb{P} , \le)$ and $\mathcal{D}$ a collection of size $\kappa$ of dense subsets of $\mathbb{P}$, there exists a $\mathcal{D}$-generic filter on $\mathbb{P}$. \end{MA*} Before moving to the proof of theorem \ref{t1}, we prove a simple preliminary fact. It is known that for every $\mathrm{C}^\ast$-algebra $\mathcal{A} \subseteq \mathcal{B}(H)$ there is an approximate unit of the compact operators which is quasicentral for $\mathcal{A}$ (see \cite[Theorem 1 p.330]{arveson}).
Moreover, if $\mathcal{A}$ is separable, the quasicentral approximate unit can be chosen to be countable, hence sequential. This property can be generalized to all $\mathrm{C}^\ast$-algebras of density character $\kappa$ such that $\textsf{MA}_\kappa$($\sigma$-centered) holds. This is a simple fact; nevertheless its proof gives a fairly clear idea, at least to the reader familiar with the proof of Voiculescu's theorem given in \cite{arveson}, of how to prove item \ref{i1t1} of theorem \ref{t1}. \begin{proposition} \label{qc} Let $H$ be a separable Hilbert space and $\mathcal{A} \subseteq \mathcal{B}(H)$ a $\mathrm{C}^\ast$-algebra of density character less than $\mathfrak{p}$. Then there exists a sequential approximate unit $\set{h_n}_{n \in \mathbb{N}}$ of $\mathcal{K}(H)$ which is quasicentral for $\mathcal{A}$. \end{proposition} \begin{proof} Fix a countable dense subset $K$ of the set of all positive norm-one elements of $\mathcal{K}(H)$, and a dense subset $B$ of $\mathcal{A}$ of size smaller than $\mathfrak{p}$. Let $\mathbb{P}$ be the set of tuples \[ p = (F_p, J_p, n_p, (h^p_j)_{j \le n_p}) \] where $F_p \Subset B$, $J_p \Subset K$, $n_p \in \mathbb{N}$ and $h^p_j \in K$ for all $j \le n_p$.
For $p, q \in \mathbb{P}$ we say $p \le q$ if and only if \begin{enumerate} \item $F_q \subseteq F_p$, \item $J_q \subseteq J_p$, \item $n_q \le n_p$, \item $h^p_j = h^q_j$ for all $j \le n_q$, \item \label{itemqc} if $n_q < n_p$ then, for all $n_q < j \le n_p$, all $k \in J_q$ and all $a \in F_q$, the following holds \[ \max \set{ \lVert [ h_j^p , a] \rVert , \lVert h^p_j k - k \rVert, \lVert kh^p_j - k \rVert} < 1/j. \] \end{enumerate} The poset $(\mathbb{P}, \le)$ is $\sigma$-centered since, for any finite $X\Subset \mathbb{P}$ such that there is $n \in \mathbb{N}$ and $(h_j)_{j \le n} \in K^n$ satisfying $n_p = n$ and $(h^p_j)_{j \le n} = (h_j)_{j \le n}$ for all $p \in X$, the condition \[ r = \left( \bigcup_{p \in X} F_{p}, \bigcup_{p \in X} J_{p}, n, (h_j)_{j \le n}\right) \] is a lower bound for $X$. Let $\mathcal{D}$ be the collection of the sets \[ \Delta_{F, J,n} = \set{p \in \mathbb{P} : F_p \supseteq F, J_p \supseteq J, n_p \ge n} \] for $F \Subset B$, $J \Subset K$ and $n \in \mathbb{N}$. The sets $\Delta_{F, J,n}$ are dense since for every separable subalgebra of $\mathcal{B}(H)$ there is a sequential approximate unit of $\mathcal{K}(H)$ which is quasicentral for it. A $\mathcal{D}$-generic filter produces a sequential approximate unit of $\mathcal{K}(H)$ which is quasicentral for $\mathcal{A}$. Such a filter exists by $\textsf{MA}_{\lvert \mathcal{D} \rvert}$($\sigma$-centered), which holds since $\mathcal{D}$ has size smaller than $\mathfrak{p}$. \end{proof} \section{Voiculescu's Theorem and Martin's Axiom} \label{s2} Similarly to what happens in \cite{arveson}, we split the proof of item \ref{i1t1} of theorem \ref{t1} into two steps. First we prove the statement assuming that the completely positive map $\sigma$ is block-diagonal (see lemma \ref{voiclemma1}), then in lemma \ref{voiclemma2} we show that the general case can be reduced to the block-diagonal case.
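The generic-filter arguments used in these proofs follow the classical Rasiowa--Sikorski pattern: enumerate the dense sets and build a decreasing chain of conditions meeting each one. As a toy illustration (not the poset used in the paper), consider the countable, hence trivially $\sigma$-centered, poset of finite partial functions $p : \mathbb{N} \rightharpoonup \{0,1\}$ ordered by reverse extension, with dense sets $D_n = \{q : n \in \mathrm{dom}(q)\}$; a minimal Python sketch of the chain construction:

```python
# Toy poset: finite partial functions N -> {0,1}, coded as dicts,
# with q <= p iff q extends p.  Every singleton is centered and the
# poset is countable, so it is trivially sigma-centered.

def extends(q, p):
    """q <= p in the toy poset: q agrees with p wherever p is defined."""
    return all(q.get(n) == v for n, v in p.items())

def hit_dense(p, n):
    """Extend p to a condition in D_n = {q : n in dom(q)} (density of D_n)."""
    q = dict(p)
    q.setdefault(n, 0)
    return q

def generic_chain(num_dense):
    """Rasiowa-Sikorski: a decreasing chain meeting D_0, ..., D_{k-1};
    its upward closure is a {D_n}-generic filter."""
    chain = [{}]
    for n in range(num_dense):
        chain.append(hit_dense(chain[-1], n))
    return chain

chain = generic_chain(5)
p = chain[-1]
assert all(n in p for n in range(5))        # p meets each D_n
assert all(extends(p, q) for q in chain)    # the chain is decreasing below p
```

For countably many dense sets this is exactly the Baire-category diagonalization mentioned in the introduction; $\textsf{MA}_\kappa$($\sigma$-centered) extends the same pattern to $\kappa < \mathfrak{p}$ dense sets.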
\subsection{Block-Diagonal Maps} A completely positive map $\sigma: \mathcal{A} \to \mathcal{B}(L)$ is \emph{block-diagonal} if there is a decomposition $L = \bigoplus_{n \in \mathbb{N}} L_n$, where $L_n$ is finite-dimensional for all $n \in \mathbb{N}$, which in turn induces a decomposition $\sigma = \bigoplus_{n \in \mathbb{N}} \sigma_n$ where the maps $\sigma_n : \mathcal{A} \to \mathcal{B}(L_n)$ are completely positive. \begin{lemma} \label{voiclemma1} Let $H, L$ be two separable Hilbert spaces, $\mathcal{A} \subseteq \mathcal{B}(H)$ a unital $\mathrm{C}^\ast$-algebra of density character less than $\mathfrak{p}$ and $\sigma: \mathcal{A} \to B(L)$ a block-diagonal, unital, completely positive map such that $\sigma(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$. Then there is a sequence of isometries $V_n: L \to H$ such that $\sigma(a) - V_n^* a V_n \in \mathcal{K}(L)$ and $\lim_{n \to \infty} \lVert \sigma(a) - V_n^* a V_n \rVert = 0$ for all $a \in \mathcal{A}$. \end{lemma} \begin{proof} Fix $\epsilon > 0$. By hypothesis $L = \bigoplus_{n \in \mathbb{N}} L_n$, with $L_n$ finite-dimensional for all $n \in \mathbb{N}$, and $\sigma$ decomposes as $\bigoplus_{n \in \mathbb{N}} \sigma_n$, where $\sigma_n(a) = 0$ whenever $a \in \mathcal{A} \cap \mathcal{K}(H)$ for all $n \in \mathbb{N}$. Let $K$ be a countable dense subset of the unit ball of $H$ such that, for every $\xi \in K$ the set $\set{ \eta \in K : \eta \perp \xi}$ is dense in $\set{ \eta \in H : \lVert \eta \rVert = 1, \ \eta \perp \xi}$. Fix an orthonormal basis $\set{\xi^n_j}_{j \le k_n}$ for each $L_n$. Consider the set $\mathbb{P}$ of the tuples \[ p = (F_p, n_p, (W^p_i)_{i \le n_p}) \] where $F_p \Subset \mathcal{A}$, $n_p \in \mathbb{N}$ and $W^p_i$ is an isometry of $L_i$ into $H$ such that $W^p_i \xi^i_j \in K$ for every $j \le k_i$ and $i \le n_p$. 
We say $p \le q$ for two elements in $\mathbb{P}$ if and only if \begin{enumerate} \item $F_q \subseteq F_p$, \item $n_q \le n_p$, \item $W^p_i = W^q_i$ for all $i \le n_q$, \item \label{i4bd} for $ n_q < i \le n_p$ (if any) we require $W_i L_i$ to be orthogonal to $\set{ W_j L_j, a W_j L_j, a^* W_j L_j : j < i, \ a \in F_q}$ and \[ \lVert \sigma_i(a) - W^*_i a W_i \rVert < \epsilon/2^{i} \] for all $a \in F_q$. \end{enumerate} For any finite set of conditions $X \Subset \mathbb{P}$ such that there is $n \in \mathbb{N}$ and $(W_i)_{i \le n}$ satisfying $n_p = n$ and $(W^p_i)_{i \le n} = (W_i)_{i \le n}$ for all $p \in X$, the condition \[ r = \left( \bigcup_{p \in X} F_p, n, (W_i)_{i \le n} \right) \] is a lower bound for $X$. Thus the poset $(\mathbb{P}, \le)$ is $\sigma$-centered. Let $\mathcal{D}$ be the collection of the sets \[ \Delta_{F, n} = \set{p \in \mathbb{P} : F_p \supseteq F, n_p \ge n} \] for $n \in \mathbb{N}$ and $F \Subset B$, where $B$ is a fixed dense subset of $\mathcal{A}$ of size smaller than $\mathfrak{p}$. By theorem \ref{voict} every $\Delta_{F,n}$ is dense in $\mathbb{P}$ (the orthogonality condition in item \ref{i4bd} of the definition of the order relation can be obtained using proposition 3.6.7 in \cite{higsonroe}). Let $G$ be a $\mathcal{D}$-generic filter, which exists since $\lvert \mathcal{D} \rvert < \mathfrak{p}$, and thus $\textsf{MA}_{\lvert \mathcal{D} \rvert}$($\sigma$-centered) holds. Let $V$ be the isometry from $\bigoplus L_n$ into $H$ defined as $\bigoplus_{n \in \mathbb{N}} W_n$ where $W_n = W_n^p$ for some $p \in G$ such that $n_p \ge n$. The isometry is well defined since $G$ is a filter. The proof that $\sigma(a) - V^* a V \in \mathcal{K}(L)$ and that $\lVert \sigma(a) - V^* a V \rVert < \epsilon$ for all $a \in \mathcal{A}$ is the same as in lemma 3.5.2 in \cite{higsonroe}.
\end{proof} \subsection{The General Case} The following lemma generalizes theorem 3.5.5 of \cite{higsonroe} to all $\mathrm{C}^\ast$-algebras of density character smaller than $\mathfrak{p}$. \begin{lemma} \label{voiclemma2} Let $H, L, L'$ be separable Hilbert spaces, $\mathcal{A} \subseteq \mathcal{B}(H)$ a unital $\mathrm{C}^\ast$-algebra of density character less than $\mathfrak{p}$ and $\sigma: \mathcal{A} \to \mathcal{B}(L)$ a unital completely positive map such that $\sigma(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$. Then there is a block-diagonal, unital completely positive map $\sigma': \mathcal{A} \to \mathcal{B}(L')$, such that $\sigma'(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$, and a sequence of isometries $V_n : L \to L'$ such that $\sigma(a) - V_n^* \sigma'(a) V_n \in \mathcal{K}(L)$ and $\lim_{n \to \infty} \lVert \sigma(a) - V_n^* \sigma'(a) V_n \rVert = 0$ for all $a \in \mathcal{A}$. \end{lemma} \begin{proof} Fix $\epsilon > 0$. We use the same poset (and notation) defined in proposition \ref{qc} to generate an approximate unit of $\mathcal{K}(H)$ which is quasicentral for $\sigma[\mathcal{A}]$. Suitably adjusting the inequality in item \ref{itemqc} of the definition of this poset (see \cite[Lemma p.332]{arveson}), by $\textsf{MA}_{\lvert \mathcal{D} \rvert}$($\sigma$-centered) there is a filter $G$ on $\mathbb{P}$ which generates an approximate unit $(h_n)_{n \in \mathbb{N}}$ such that if $a \in F_p$ for some $p \in G$, then for all $n > n_p$ we have \[ \lVert [(h_{n+1} - h_n)^{1/2}, \sigma(a)] \rVert < \epsilon/2^n. \] From this point the proof goes verbatim as in theorem 3.5.5 of \cite{higsonroe}. \end{proof} Item \ref{i1t1} of theorem \ref{t1} now follows by composing the isometries obtained from lemmas \ref{voiclemma1} and \ref{voiclemma2}. \subsection{Corollaries and Remarks} Voiculescu's theorem allows one to infer several corollaries if the completely positive map $\sigma$ is assumed to be a $*$-homomorphism.
Using item \ref{i1t1} of theorem \ref{t1}, these results generalize to separably representable $\mathrm{C}^\ast$-algebras of density character less than $\mathfrak{p}$. We omit the proofs in this part as they can be obtained by following verbatim the arguments used in the separable case. We introduce some definitions to ease the notation in the following statements. Given a representation $\phi : \mathcal{A} \to \mathcal{B}(H)$, let $H_e$ be the Hilbert space spanned by $(\phi[\mathcal{A}] \cap \mathcal{K}(H))H$. Since $\phi[\mathcal{A}] \cap \mathcal{K}(H)$ is an ideal of $\phi[\mathcal{A}]$, the space $H_e$ is invariant for $\phi[\mathcal{A}]$. The \emph{essential part of} $\phi$, denoted $\phi_e$, is the restriction of $\phi$ to $H_e$. Two representations $\phi: \mathcal{A} \to \mathcal{B}(H_1)$ and $\psi : \mathcal{A} \to \mathcal{B}(H_2)$ are \emph{equivalent} if there is a unitary map $U: H_1 \to H_2$ such that $U^* \psi(a) U = \phi(a)$ for all $a \in \mathcal{A}$. They are \emph{approximately equivalent} if there is a sequence of unitary maps $U_n : H_1 \to H_2$ such that $U_n^* \psi(a) U_n - \phi(a) \in \mathcal{K}(H_1)$ and $\lim_{n \to \infty} \lVert U_n^* \psi(a) U_n - \phi(a) \rVert = 0$ for all $a \in \mathcal{A}$. Finally, they are \emph{weakly approximately equivalent} if there are two sequences of unitary maps $U_n : H_1 \to H_2$ and $V_n : H_2 \to H_1$ such that $U_n^* \psi(a) U_n \to \phi(a)$ and $V_n^* \phi(a) V_n \to \psi(a)$ in the weak operator topology. Corollaries \ref{voicMA2} and \ref{voicMA3} can be proved using the proofs of \cite[Corollary 2 p. 339]{arveson} and \cite[Theorem 5]{arveson} plus \cite[Corollary 1 p. 343]{arveson} respectively, after substituting all the instances of Voiculescu's theorem with item \ref{i1t1} of theorem \ref{t1}.
\begin{corollary} \label{voicMA2} Let $H, L$ be two separable Hilbert spaces, $\mathcal{A} \subseteq \mathcal{B}(H)$ a unital $\mathrm{C}^\ast$-algebra of density character less than $\mathfrak{p}$ and $\phi: \mathcal{A} \to \mathcal{B}(L)$ a unital representation such that $\phi(a) = 0$ for all $a \in \mathcal{A} \cap \mathcal{K}(H)$. Then the direct sum representation $\text{Id} \oplus \phi$ on $H \oplus L$ is approximately equivalent to $\phi$. \end{corollary} \begin{corollary} \label{voicMA3} Let $\mathcal{A}$ be a separably representable unital $\mathrm{C}^\ast$-algebra of density character less than $\mathfrak{p}$ and $\phi, \psi$ two unital representations on some separable, infinite-dimensional Hilbert space $H$. The following are equivalent. \begin{enumerate} \item $\phi$ and $\psi$ are approximately equivalent, \item $\phi$ and $\psi$ are weakly approximately equivalent, \item $\text{ker}(\phi) = \text{ker} (\psi)$, $\text{ker} (\pi \circ \phi) = \text{ker} (\pi \circ \psi)$ (here $\pi: \mathcal{B} (H) \to \mathcal{Q}(H)$ is the quotient map) and $\phi_e$ is equivalent to $\psi_e$. \end{enumerate} In particular, if $\text{ker}(\phi) = \text{ker} (\psi)$ and $\phi[\mathcal{A}] \cap \mathcal{K}(H) = \psi[\mathcal{A}] \cap \mathcal{K}(H) = \set{0}$ then $\phi$ and $\psi$ are approximately equivalent. \end{corollary} A further consequence of Voiculescu's theorem is that every separable unital subalgebra of the Calkin algebra is equal to its double commutant in the Calkin algebra (see \cite[p. 345]{arveson}; see also \cite{bic} for a version of this statement in the context of ultrapowers). It is not clear whether $\textsf{MA}_\kappa$($\sigma$-centered) could be used to generalize this fact to $\mathrm{C}^\ast$-algebras of density character $\kappa$, even assuming they are separably representable. \section{Independence} \label{s3} In this section we prove item \ref{i2t1} of theorem \ref{t1}.
\begin{proof}[Proof of item \ref{i2t1} of theorem \ref{t1}] Let $H$ be a separable Hilbert space and $\mathcal{A} \subseteq \mathcal{B}(H)$ a maximal abelian atomic subalgebra, hence isomorphic to $\ell^\infty(\mathbb{N})$. Since the pure states of $\mathcal{A}$ annihilating $\mathcal{A} \cap \mathcal{K}(H)$ are in bijection with the non-principal ultrafilters on $\mathbb{N}$ (see \cite[Example 6.2]{setoperator}), there are $2^\mathfrak{c}$ of them. Since there are only $\mathfrak{c}$ (countable) sequences of vectors in $H$, there are $2^\mathfrak{c}$ states of $\mathcal{A}$ for which Glimm's lemma fails (it actually fails for all pure states annihilating $\mathcal{A} \cap \mathcal{K}(H)$, as shown in proposition 2.7 of \cite{hadwin}). We prove the statement of item \ref{i2t1} of theorem \ref{t1} for $\lambda = \aleph_2$, as the proof in the general case is analogous. Consider a model of \textsf{ZFC} where $\mathfrak{c} = \aleph_1$ and $2^{\aleph_1} = \aleph_3$ and add to it $\aleph_2$ Cohen reals. In the generic extension we have $\mathfrak{c} = \aleph_2$, thus (the closure of) $\mathcal{A}$ has density character strictly smaller than $\mathfrak{c}$. Glimm's lemma still fails for $\mathcal{A}$ in the generic extension. There are in fact at most $\aleph_2$ new sequences of vectors of $H$, which are still not enough to cover all the $\aleph_3$ states of $\mathcal{A}$ for which Glimm's lemma failed in the ground model. \end{proof} The argument just presented can be generalized verbatim to other $\mathrm{C}^\ast$-algebras of density character $\mathfrak{c}$ such as $\mathcal{B}(H)$ or $L^\infty([0,1])$, which all have $2^{\mathfrak{c}}$ different states.\footnote{Notice that, if $V$ is the ground model of \textsf{ZFC} and $V[G]$ a generic extension, the closure of $\mathcal{B}(H)^V$ in $V[G]$ is generally strictly contained in $\mathcal{B}(H)^{V[G]}$.
The same happens for $\ell^\infty(\mathbb{N})$ and $L^\infty([0,1])$.} \begin{acknow} I would like to thank Ilijas Farah for the interesting conversations we had on this topic and for his useful suggestions on earlier drafts of this paper. \end{acknow} \bibliographystyle{amsalpha}
\section{\label{sec:intro}Introduction} The Polyakov line action (PLA) is an action obtained from lattice gauge theory when all degrees of freedom are integrated out, under the constraint that the Polyakov line holonomies are held fixed. There are some indications \cite{Gattringer:2011gq,*Mercado:2012ue,Fromm:2011qi,Aarts:2011zn,Greensite:2012xv} that the sign problem in this theory, at non-zero chemical potential, may be more tractable than the sign problem in the underlying lattice gauge theory (for a review, cf.\ \cite{deForcrand:2010ys}), and if so it could provide us with a new tool for investigating the QCD phase diagram. It is fairly straightforward, given the PLA at chemical potential $\mu=0$, to introduce a non-zero chemical potential, as we discuss in section \ref{sec:conclude}. The problem we address here is how to extract the PLA from the underlying lattice gauge theory at $\mu=0$. This article is a follow-up to ref.\ \cite{Greensite:2012dy}, which presented a novel ``relative weights" technique for deriving the PLA, based on a method used previously in studies of the Yang-Mills vacuum wavefunctional \cite{Greensite:2011pj,*Greensite:1988rr}. The method was tested at strong couplings, where the answer is known, and a conjecture for the action at a weaker coupling, for SU(2) pure gauge theory at $\b=2.2$ and inverse temperature $N_t=4$ lattice spacings, was presented. This conjecture was based, however, on a study limited to fairly atypical regions of field configuration space. Below we will apply the method in the region expected to dominate the path integral, and a rather different (and in fact simpler) action from the one conjectured in ref.\ \cite{Greensite:2012dy} emerges. As a crucial test of the derived PLA, we compute the two-point Polyakov line correlator from Monte Carlo simulations of both the PLA and the underlying gauge theory. These correlators will be seen to agree quite accurately. 
Our effective PLA turns out to be only bilinear in the Polyakov line variables, with a simple expression for the finite-range kernel. There have been a number of previous attempts to derive the PLA from lattice gauge theory at finite temperature. These include strong-coupling expansions \cite{Fromm:2011qi}, the Inverse Monte Carlo method \cite{Dittmann:2003qt,*Heinzl:2005xv}, and the demon approach \cite{Velytsky:2008bh,*Wozar:2008nv}. All of these methods generate effective Polyakov line actions of varying degrees of complexity. We believe, however, that an accurate agreement of Polyakov line correlators in the confined phase, computed in the effective and underlying lattice gauge theories, has not been demonstrated in any of the previous studies, at least not beyond two or three lattice spacings in Polyakov line separation. It should also be mentioned that there are a number of studies which are concerned with deducing the Polyakov line potential, with particular application to the deconfinement transition, cf.\ \cite{Dumitru:2012fw} and references therein. There have also been efforts, e.g.\ \cite{Langfeld:1999ig}, to express the fermion determinant in terms of a potential involving Polyakov lines. These studies do not arrive at a full PLA as defined above, and hence their focus is somewhat different from ours. Our article is organized as follows: An improved version of the relative weights method is presented in section \ref{sec:rw} below. The technique is applied to pure SU(2) gauge theory in section \ref{sec:grad}, again at $\b=2.2$ and $N_t=4$, and the Polyakov line correlators of the derived PLA are compared to those of lattice gauge theory. Application to a gauge-Higgs theory, with a scalar matter field explicitly breaking global $Z_2$ center symmetry, is presented in section \ref{sec:gh}. Section \ref{sec:conclude} contains our conclusions. The extension of our method to gauge theories with dynamical fermions is discussed in an appendix. 
\section{\label{sec:rw}The relative weights method} The relative weights method, as applied to deriving the PLA $S_P$, was introduced in ref.\ \cite{Greensite:2012dy}. The technique is particularly well adapted to computing path (or ``directional'') derivatives of the effective action $S_P$ in the space of all Polyakov line configurations. A single configuration $\{U_{{\vec{x}}}\}$ is a point in this space, where we specify the group-valued Polyakov line holonomies $U_{{\vec{x}}}$ at each spatial point ${\vec{x}}$ in a three-dimensional volume. Let $\{U_{\vec{x}}(\l)\}$ be a path through this space of configurations, where $\l$ parametrizes the path. The relative weights method computes the path derivative $\partial S_P[U_{\vec{x}}(\l)] /\partial \l$ at some given $\l=\l_0$. In this section we present a variant of the relative weights approach which, while equivalent to the original method of \cite{Greensite:2012dy}, is numerically more efficient. In order to minimize minus signs later on, we adopt the convention that the Boltzmann weight is proportional to $\exp[+S_P]$. Let $S_L$ be the lattice gauge action on an $L^3 \times N_t$ volume with coupling $\b$ for the Wilson action. $S_L$ may contain pseudofermion or bosonic matter degrees of freedom, collectively denoted by $\phi$. It is convenient to go to temporal gauge, so that all timelike link variables are set to the unit matrix except on a timeslice at $t=0$. Then the PLA $S_P$ is defined as \begin{eqnarray} \exp\Bigl[S_P[U_{{\vec{x}}}]\Bigr] = \int DU_0({\vec{x}},0) DU_k D\phi ~ \left\{\prod_{{\vec{x}}} \d[U_{{\vec{x}}}-U_0({\vec{x}},0)] \right\} e^{S_L} \ . \label{S_P} \eea Because of the residual $U_0({\vec{x}},0) \rightarrow g({\vec{x}}) U_0({\vec{x}},0) g^\dagger({\vec{x}})$ symmetry in temporal gauge, it follows that $S_P$ can only depend on the eigenvalues of the $U_{{\vec{x}}}$ matrices. While the functional integration in \rf{S_P} can only be carried out in special cases, e.g. 
via strong coupling and hopping parameter expansions valid in a certain range of parameters, the ratio (or ``relative weights'') $\exp[S_P[U'_{{\vec{x}}}]]/\exp[S_P[U''_{{\vec{x}}}]]$ evaluated at nearby configurations $U'_{{\vec{x}}},~U''_{{\vec{x}}}$ is calculable numerically. This fact enables us to compute path derivatives of $S_P$. Let us consider a set of $M$ Polyakov line configurations \begin{eqnarray} \Bigl\{ \{U^{(n)}_{{\vec{x}}}, \mbox{all~} {\vec{x}} \}, ~ n=1,2,...,M \Bigr\} \ , \label{set} \eea corresponding to values of the path parameter \begin{eqnarray} \l_n = \l_0 + \left( n - {M+1 \over 2}\right) \Delta \l ~~~,~~~ n=1,2,...,M ~~~ \ , \eea and define \begin{eqnarray} S_L^{(m)}[U,\phi] \equiv S_L\Bigl[U_0({\vec{x}},0)=U^{(m)}_{\vec{x}},U_k(x,t),\phi(x,t)\Bigr] \eea to be the lattice action in temporal gauge with the timelike links at $t=0$ fixed to the $m$-th member of the set \rf{set}. We also define \begin{eqnarray} \Delta S_P^{(m+1)} &\equiv& S_P[U^{(m+1)}] - S_P[U^{(m)}] \nonumber \\ Z_m &\equiv& \int DU_k D\phi ~ e^{S_L^{(m)}} \ . \eea From \rf{S_P}, we have \begin{eqnarray} \exp[\Delta S_P^{(m+1)}] &=& {\exp\Bigl[S_P[U^{(m+1)}]\Bigr] \over \exp\Bigl[S_P[U^{(m)}]\Bigr] } \nonumber \\ &=& {\int DU_k D\phi ~ e^{S_L^{(m+1)}} \over \int DU_k D\phi ~ e^{S_L^{(m)}} } \nonumber \\ &=& {\int DU_k D\phi ~ \exp\Bigl[S_L^{(m+1)}-S_L^{(m)}\Bigr] e^{S_L^{(m)}}\over \int DU_k D\phi ~ e^{S_L^{(m)}} } \nonumber \\ &=& \left\langle \exp\Bigl[\Delta S^{(m+1)}\Bigr] \right\rangle_m \ , \eea where $\Delta S^{(m+1)} \equiv S_L^{(m+1)}-S_L^{(m)}$, and $\langle ... \rangle_m$ indicates that the expectation value is taken from ensembles with Boltzmann factor $\exp[S_L^{(m)}]/Z_m$. 
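The chain of identities above can be checked on a toy model where both sides are known in closed form. The following Python sketch (an illustration added here, not part of the original computation) uses a one-dimensional Gaussian ``action'' in place of $S_L$, for which $Z(\sigma)=\sigma\sqrt{2\pi}$ and hence $\Delta S_P = \log(\sigma_{m+1}/\sigma_m)$ exactly:

```python
import numpy as np

# Toy check of exp[dS_P] = < exp[dS] >_m : a one-dimensional Gaussian
# "action" S(x; sigma) = -x^2/(2 sigma^2) stands in for S_L, so that
# Z(sigma) = sigma * sqrt(2 pi) and dS_P = log(sigma_{m+1}/sigma_m).
rng = np.random.default_rng(0)
sig_m, sig_mp1 = 1.00, 1.02

# draw from the ensemble with Boltzmann factor exp[S(x; sig_m)] / Z_m
x = rng.normal(0.0, sig_m, size=2_000_000)

dS = x**2 / (2 * sig_m**2) - x**2 / (2 * sig_mp1**2)
estimate = np.log(np.mean(np.exp(dS)))
exact = np.log(sig_mp1 / sig_m)
print(estimate, exact)   # the two agree to ~1e-4
```

The estimator is exact in expectation, but its variance grows rapidly if the two actions are not nearby, which is one reason the method proceeds in small steps $\Delta\l$.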
For sufficiently small $\Delta \l$, \begin{eqnarray} {d S_P[U_{\vec{x}}(\l)] \over d\l} &\approx& {\Delta S_P^{(m+1)} \over \Delta \l} \nonumber \\ &=& {1\over \Delta \l} \log\left( \left\langle \exp\Bigl[\Delta S^{(m+1)}\Bigr] \right\rangle_m \right) \ , \eea and these should closely agree, for all $m<M$, with the derivative evaluated at the central value of $\l=\l_0$. We can then improve our estimate by making use of all $M$ configurations, taking the average of derivatives \begin{eqnarray} \left({d S_P[U_{\vec{x}}(\l)] \over d\l}\right)_{\l=\l_0} \approx {1\over \Delta \l} {1\over M-1} \sum_{m=1}^{M-1} \log\left( \left\langle \exp\Bigl[\Delta S^{(m+1)}\Bigr] \right\rangle_m \right) \ . \label{snake} \eea The question then becomes which point $\{U_{{\vec{x}}}(\l_0)\}$ in configuration space should be chosen for the computation, and which directional derivatives $dS_P/d\l$ at this point should be computed, in order to deduce $S_P$. It is possible that the choice is not very important, and that $S_P$ is well approximated by the same simple expression everywhere in configuration space. However, if this is not the case, then calculating path derivatives in some very atypical corner of configuration space may lead to an approximate answer for $S_P$ which may be correct in that particular corner, but misleading in the bulk of configuration space. In ref.\ \cite{Greensite:2012dy} the path derivatives were computed using three types of sets \rf{set} for SU(2) lattice gauge theory. These were (i) Polyakov lines which were constant in space, where the $\l$ parameter was the amplitude $P_{{\vec{x}}}= \frac{1}{2} \text{Tr}[U_{{\vec{x}}}]=\l$; (ii) Polyakov lines which consisted of small plane wave fluctuations around a constant background, $P_{{\vec{x}}} = P_0 + \l \cos(\vk \cdot {\vec{x}})$ with $\l \ll P_0$; and (iii) Polyakov lines in which $P_{{\vec{x}}}$ varied as $P_{{\vec{x}}}= \l \cos(\vk \cdot {\vec{x}})$. 
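The averaged estimator \rf{snake} can likewise be exercised on a toy problem with a known answer. In the sketch below (all parameters are hypothetical choices for illustration), the role of $S_P(\l)$ is played by $\log Z(\l)$ of a one-dimensional Gaussian with width $\sigma(\l)=1+\l$, so the exact path derivative at $\l_0$ is $1/(1+\l_0)$:

```python
import numpy as np

# Toy illustration of the averaged derivative estimator (snake):
# M ensembles at lambda_n = lambda_0 + (n - (M+1)/2) * dlam, with a
# one-dimensional Gaussian "lattice action" S(x; lam) = -x^2/(2 sigma^2),
# sigma(lam) = 1 + lam, so that S_P(lam) = log sigma(lam) + const.
rng = np.random.default_rng(3)
M, lam0, dlam, nsamp = 6, 0.5, 0.01, 400_000
lam = lam0 + (np.arange(1, M + 1) - (M + 1) / 2) * dlam
sigma = 1.0 + lam

acc = 0.0
for m in range(M - 1):
    x = rng.normal(0.0, sigma[m], size=nsamp)            # ensemble m
    dS = x**2 / (2 * sigma[m]**2) - x**2 / (2 * sigma[m + 1]**2)
    acc += np.log(np.mean(np.exp(dS)))
estimate = acc / (dlam * (M - 1))
exact = 1.0 / (1.0 + lam0)
print(estimate, exact)   # both close to 2/3
```

Note that the sum of logarithms telescopes, so the average \rf{snake} is really a symmetric finite difference of $S_P$ across the whole window of $\l$ values, with curvature corrections of order $(\Delta\l)^2$.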
Compared to thermalized timeslice configurations $U_0({\vec{x}},0)$ generated in a normal lattice Monte Carlo simulation, such configurations are very atypical. In a thermalized configuration in the confined phase, the Fourier components of the configuration are all of $O(1/\sqrt{V_3})$, where $V_3=L^3$ is the lattice volume of the D=3 dimensional timeslice, whereas in the special configurations just mentioned, one Fourier component (which may be the zero mode) is of $O(1)$. For computing the PLA at a strong lattice coupling, where this action can be evaluated via a strong coupling expansion, the atypical nature of the constant + plane wave, or pure plane wave configurations did not seem important, and the PLA deduced from the relative weights data was a close match to the known result. There was no similar result to compare to at $\b=2.2, ~ N_t=4$, and although an expression for $S_P$ matching the results for $dS_P/d\l$ was deduced from fitting the data, there is a concern that this expression might only be valid in the special region of configuration space where it was derived. We will investigate the action in more typical regions of configuration space in the next section. \section{\label{sec:grad}Derivatives of $\mathbf{S_P}$ in a thermalized background} The goal is to use the expression for the path derivative \rf{snake} to determine $S_P$. However, if $S_P$ does not have a simple form everywhere in configuration space (and it may not), then it is at least required that we have a fairly accurate approximation to $S_P$ in the region which is important for the computation of observables, i.e.\ the region occupied by typical thermalized configurations $\{U_{{\vec{x}}}\}$. A set of timelike link configurations $\{U_0({\vec{x}},0)\}$ on the $t=0$ timeslice, generated by a numerical simulation of the underlying lattice gauge theory, is a sample of such configurations. 
Let us define, for the SU(2) gauge group that we will consider here, \begin{eqnarray} P_{{\vec{x}}} \equiv \frac{1}{2} \text{Tr}[U_{{\vec{x}}}] = \frac{1}{2} \sum_{\vk} \Bigl\{ a_{\vk} \cos(\vk \cdot {\vec{x}}) + b_{\vk} \sin(\vk \cdot {\vec{x}}) \Bigr\} \ , \label{dft} \eea where the sum runs over all wavevectors $\vk$ on a cubic lattice of volume $L^3$, and ${a_{\vk}=a_{-\vk}, ~ b_{\vk}=-b_{-\vk}}$ are real-valued. Then we may consider calculating numerically, by the relative weights method, derivatives with respect to the Fourier components \begin{eqnarray} \left( {\partial S_P \over \partial a_{\vk}} \right)_{a_{\vk}=\a} ~~,~~ \left( {\partial S_P \over \partial b_{\vk}} \right)_{b_{\vk}=\a} \ , \label{grad1} \eea in a background in which all other Fourier components are drawn from a thermalized configuration. By calculating the derivative for some range of $\a$, it is possible to extrapolate to small $\a$ of order $1/\sqrt{V_3}$, which is the typical magnitude of a Fourier component in thermalized configurations. It is sufficient in practice to concentrate on the coefficients of the cosine terms in the Fourier expansion, since the sine terms give similar results. We then try to reconstruct $S_P$ from this information. There are potentially two obstacles to this approach. First, there are as many independent Fourier components as there are lattice sites in the $V_3$ volume, and this is too many to calculate in practice. Secondly, it might be that the results are strongly dependent on the particular thermalized background which is used. Concerning the first obstacle, this will not be a problem if the derivatives with respect to $a_{\vk}$ have a simple dependence on the lattice momentum \begin{eqnarray} k_L = \sqrt{4 \sum_{i=1}^3 \sin^2(\frac{1}{2} k_i)} \ , \label{kL} \eea which can be deduced from a small sample of all possible components. 
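For reference, the lattice momentum of eq.\ \rf{kL} is computed from the integer triplet $(m_1,m_2,m_3)$ as follows (a small Python helper, added here for illustration):

```python
import numpy as np

# Lattice momentum k_L of eq. (kL) for a wavevector specified by the
# integer triplet (m1, m2, m3) on an L^3 spatial lattice.
def lattice_momentum(m, L):
    k = 2.0 * np.pi * np.asarray(m, dtype=float) / L
    return float(np.sqrt(4.0 * np.sum(np.sin(k / 2.0) ** 2)))

print(lattice_momentum((0, 0, 0), 24))     # 0.0
print(lattice_momentum((1, 0, 0), 24))     # ~0.2611
print(lattice_momentum((10, 10, 10), 24))  # near the zone boundary
```

For small $|{\vec m}|$ this reduces to the continuum $|\vk| = 2\pi |{\vec m}|/L$, while near the zone boundary $k_L$ saturates rather than growing linearly.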
As for the second obstacle, this is not a problem if it turns out that the dependence of the final results on the particular choice of thermalized configuration is very weak. \subsection{Deriving the effective PLA} The first step in the extrapolation to small $\a$ is to run a standard lattice Monte Carlo, stop at some thermalized configuration, and calculate all the Polyakov line holonomies, \begin{eqnarray} {\cal P}_{\vec{x}} &=& U_0({\vec{x}},1) U_0({\vec{x}},2)...U_0({\vec{x}},N_t) \nonumber \\ &=& d_4(n_1,n_2,n_3) \mathbbm{1} + i \vec{d}(n_1,n_2,n_3) \cdot \vec{\sigma} \ , \label{tslice} \eea where ${\vec{x}}=(n_1,n_2,n_3)$, in that configuration. Define $W({\vec{x}})=d_4({\vec{x}})$. We pick a particular wavevector $\vk$ which is specified by three integers $(m_1,m_2,m_3)$ with corresponding wavenumber components \begin{eqnarray} k_i = {2\pi \over L} m_i \ , \label{waveno} \eea and set the coefficient of $\cos(\vk \cdot {\vec{x}})$, in the sine-cosine expansion of $W({\vec{x}})$, to zero. Denote the modified array, with the $\cos(\vk \cdot {\vec{x}})$ term removed, as $W'({\vec{x}})$. Next, construct a set of $M=20$ configurations with: \begin{eqnarray} P^{(n)}_{\vec{x}} &=& a_\vk^{(n)} \cos(k_1 n_1 + k_2 n_2 + k_3 n_3) + (1-\a-\d)W'({\vec{x}}) \nonumber \\ a_\vk^{(n)} &=& \a + \Bigl( n - \frac{1}{2} (M+1)\Bigr) \gamma /L^3 ~~~,~~~ n=1,2,...,M \ , \label{config1} \eea where $\gamma=L^3 \Delta a$ is a constant chosen to be as small as possible, but still large enough to get some spread in the data. Typically $\gamma \approx 0.5$. The factor $1-\a-\d$ in \rf{config1} is introduced in order to keep $P^{(n)}_{\vec{x}}$, with rare exceptions, inside the range $[-1:1]$. Ideally one would like to leave all $\{a_{\vk'},b_{\vk'}\}$ in the thermalized configuration unaltered, apart from the mode with $\vk'=\vk$, i.e. $P^{(n)}_{\vec{x}} = a_{\vk} \cos(\vk \cdot {\vec{x}}) + W'({\vec{x}})$. 
At finite $\a$, however, this has the disadvantage that at many sites $|P_{\vec{x}}| > 1$. To see this, note that $W({\vec{x}})$, from which $W'({\vec{x}})$ is derived, may come close to the limits $\pm 1$ at some sites, and at these sites the additional contribution $a_{\vk} \cos(\vk \cdot {\vec{x}})$ may put the sum outside the allowed range by as much as $\a$. Moreover, by removing the $\cos(\vk \cdot {\vec{x}})$ mode, $W'({\vec{x}})$ may already lie outside the range $[-1:1]$ at some sites; this is especially true for $\vk=0$. We must also allow for the fact that half of the $\{a_\vk^{(n)}\}$ are slightly greater than $\a$. For this reason we reduce the amplitude of the added thermalized configuration by a factor of $1-\a-\d$ (in our simulations we found $\d=.04$ sufficient). In the exceptional cases where $P^{(n)}_{\vec{x}}$ still lies outside the allowed range, it is truncated to the nearest limit, i.e. $\pm 1$. We then construct the SU(2) variables, at each site, which have $2 P^{(n)}_{\vec{x}}$ as the trace \begin{eqnarray} U^{(n)}_{\vec{x}} = P^{(n)}_{\vec{x}} \mathbbm{1} + i s^{(n)}\vec{d}({\vec{x}}) \cdot \vec{\sigma} \ , \label{config2} \eea where, to ensure unitarity, \begin{eqnarray} s^{(n)} = \sqrt{1 - (P^{(n)}_{\vec{x}})^2 \over \vec{d}({\vec{x}}) \cdot \vec{d}({\vec{x}})} \ . \label{config3} \eea The calculation of $(\partial S_P/\partial a_{\vk})_\a$ proceeds as described above. For the choice of $U^{(n)}$ given above, it is easy to see that for a lattice of extension $L$ in the spatial directions \begin{eqnarray} {1\over L^3} \left({\partial S_P[U_{\vec{x}}(a_{\vk})] \over \partial a_{\vk}}\right)_{a_{\vk}=\a} \approx {1\over \gamma (M-1)} \sum_{m=1}^{M-1} \log\left( \left\langle \exp\Bigl[\Delta S^{(m+1)}[U]\Bigr] \right\rangle_m \right) \ . \eea The results for the derivatives are found to depend only weakly on the choice of thermalized time slice \rf{tslice} generated in an ordinary Monte Carlo run. 
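The steps of eqs.\ \rf{config1}--\rf{config3} amount to simple array manipulations. The sketch below uses random stand-ins for the thermalized $d_4$ and $\vec d$ fields (actual data would come from a lattice simulation); the point is only the bookkeeping: remove the cosine mode, rescale by $1-\a-\d$, truncate to $[-1,1]$, and restore unitarity:

```python
import numpy as np

# Sketch of the configuration set of eqs. (config1)-(config3).
# W (= d_4) and the vector d are random stand-ins for a thermalized
# time slice; only the bookkeeping is illustrated here.
rng = np.random.default_rng(1)
L, M, alpha, delta, gamma = 8, 20, 0.05, 0.04, 0.5

n = np.indices((L, L, L))                        # site coordinates
kvec = 2 * np.pi * np.array([1, 0, 0]) / L       # mode (m1,m2,m3) = (1,0,0)
coskx = np.cos(np.tensordot(kvec, n, axes=1))

W = 0.2 * rng.standard_normal((L, L, L))             # stand-in for d_4(x)
d = 0.5 * rng.standard_normal((3, L, L, L)) + 0.3    # stand-in for d-vector

# Remove the cos(k.x) mode: its coefficient is (2/L^3) sum_x W(x) cos(k.x).
a_k = 2.0 * np.sum(W * coskx) / L**3
Wp = W - a_k * coskx

configs = []
for m in range(1, M + 1):
    a = alpha + (m - 0.5 * (M + 1)) * gamma / L**3   # eq. (config1)
    P = a * coskx + (1 - alpha - delta) * Wp
    P = np.clip(P, -1.0, 1.0)                        # truncate rare overflows
    s = np.sqrt((1 - P**2) / np.sum(d * d, axis=0))  # eq. (config3)
    configs.append((P, s))                           # U = P*1 + i s d.sigma
```

By construction $(P^{(n)}_{\vec{x}})^2 + (s^{(n)})^2\,\vec{d}\cdot\vec{d} = 1$ at every site, so each $U^{(n)}_{\vec{x}}$ is unitary even where the truncation was applied.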
The dependence is most pronounced at small $k_L$, with the variance on the order of 2\%. In practice we have averaged our results for $dS_P/da_\vk$ over eighty independent time slices. \begin{figure}[t!] \centerline{\scalebox{0.8}{\includegraphics{d24a05.eps}}} \caption{Derivatives of the PLA $L^{-3} \partial S_P/\partial a_{\vk}$ evaluated at $a_{\vk}=\a=0.05$, vs.\ lattice momenta $k_L$. Also shown is a linear best fit to the data at $k_L > 0.7$.} \label{d24a05} \end{figure} \begin{figure}[t!] \centerline{\scalebox{0.8}{\includegraphics{scaling.eps}}} \caption{Derivatives $L^{-3} (\partial S_P/\partial a_{\vk})_\a$ divided by $\a$, vs.\ lattice momenta $k_L$, for $\a=0.05,0.10,0.15,0.20$. It is clear that the derivatives of $S_P$ depend linearly on $\a$.} \label{scaling} \end{figure} In Fig.\ \ref{d24a05} we display our results for the $L^{-3} dS_P/da_\vk$ vs.\ $k_L$ at $\a=0.05$, and lattice spatial extension $L=24$. (Note that, apart from Figs.\ \ref{Pcorr} and \ref{cross}, all our figures show data derived at an $L=24$ extension.) The underlying SU(2) lattice gauge theory is defined as a Wilson action on a periodic $24^3 \times 4$ volume, at the coupling $\b=2.2$. The calculations were made, in this case, at lattice momenta with components $k_i = 2\pi m_i/L$, with the following $(m_1 m_2 m_3)$ triplets: \begin{eqnarray} & &\Bigl\{ (000),(100),(110),(111),(200),(210),(211),(300),(311),(320), \nonumber \\ & & (400),(322),(421),(430), (333),(433),(443),(444),(554), (654), \nonumber \\ & & (655), (665), (766), (777), (887), (988), (998), (10\; 9 9), (10\; 10\; 10)\Bigr\} \ . \eea On this plot the data point displayed at $k_L=0$ is a factor of two smaller than the actual data value; this was done for reasons to be explained shortly. The striking thing about this data is that, for lattice momenta $k_L > 0.7$, the data points clearly fall on a straight line. 
The second fact is that the data is linearly proportional to $\a$ at these small $\a$ values, as we see in Fig.\ \ref{scaling}. In this figure we divide $dS_P/da_\vk$ by $\a$, at $\a=0.05,0.10,0.15,0.20$, and find that the data points coincide. The linearity of the derivative w.r.t.\ $a_\vk$ implies that the action itself is quadratic in these variables, leading to a simple bilinear form \begin{eqnarray} S_P = \frac{1}{2} c_1 \sum_{{\vec{x}}} P^2_{{\vec{x}}} - 2c_2 \sum_{{\vec{x}} {\vec{y}}} P_{{\vec{x}}} Q({\vec{x}} - {\vec{y}}) P_{{\vec{y}}} \ , \label{SP1} \eea where \begin{eqnarray} Q({\vec{x}}-{\vec{y}}) &=& {1\over L^3} \sum_{\vk} \widetilde{Q}(k_L) e^{i \vk \cdot ({\vec{x}} - {\vec{y}})} \ . \eea This leads to derivatives \begin{eqnarray} {1\over L^3} \left({d S_P[U_{\vec{x}}(a_{\vk})] \over da_{\vk}}\right)_{a_{\vk}=\a} &=& \left\{ \begin{array}{cc} \a(\frac{1}{2} c_1 - 2c_2 \widetilde{Q}(k_L)) & k_L \ne 0 \cr & \cr 2\a(\frac{1}{2} c_1 - 2c_2 \widetilde{Q}(0)) & k_L=0 \end{array} \right. \ . \label{dS_P} \eea The relative factor of two in the $k_L=0$ and $k_L > 0$ cases is due to the fact that $\sum_{{\vec{x}}} 1 = L^3$, while $\sum_{{\vec{x}}} \cos^2(\vk \cdot {\vec{x}}) = \frac{1}{2} L^3$. The $k_L > 0$ data should extrapolate, as $k_L \rightarrow 0$, to a value which is half the result at $k_L=0$, which is why we have divided the derivative at $k_L=0$ by a factor of 2, when displaying these values on Figs. \ref{d24a05} and \ref{scaling}. The constants $c_1$ and $c_2$ are obtained from a linear fit to the data at $k_L > 0.7$, as shown in Fig.\ \ref{d24a05}.\footnote{In practice we fit the data for each $\a$, at $k_L>0.7$, to the form $A(\a) - B(\a) k_L$. We then fit $A(\a),~B(\a)$ to straight lines, and the constants $c_1,~c_2$ are extracted from the slopes, i.e.\ $dA/d\a = \frac{1}{2} c_1$, and $dB/d\a = 2 c_2$. 
The choice of $0.7$ as the lower limit is a potential source of systematic error, since the value for $c_1$ can vary up to $1\%$ when the lower limit is increased (the variation of $c_2$ is smaller). We find, however, that the choice of $0.7$ as the lower limit minimizes the reduced $\chi^2$ value of the linear fit.} \begin{figure}[htb] \centerline{\scalebox{0.7}{\includegraphics{zmode24.eps}}} \caption{The derivatives of $S_P$ with respect to the amplitude of the zero mode, evaluated at several values of $\a$. The slope of this data is used to determine $r_{max}$ of the bilinear kernel $Q({\vec{x}}-{\vec{y}})$, as explained in the text.} \label{zmode} \end{figure} It is clear that for $k_L > 0.7$, the $k$-space kernel is $\widetilde{Q}(k_L)=k_L$. If this were true at all $k_L$, then we would have $Q=\sqrt{-\nabla_L^2}$ in position space, where $\nabla^2_L$ is the lattice Laplacian. However, a kernel of this kind is infinite range, which would violate one of the assumptions of the Svetitsky-Yaffe analysis (cf.\ \cite{Svetitsky:1985ye}). In any case, $\widetilde{Q}(k_L)$ deviates from linearity at small momentum. We therefore make an ansatz for the kernel which imposes the finite range restriction on $Q$ in a simple way: \begin{eqnarray} Q({\vec{x}}-{\vec{y}}) = \left\{ \begin{array}{cc} \Bigl(\sqrt{-\nabla_L^2}\Bigr)_{{\vec{x}} {\vec{y}}} & |{\vec{x}}-{\vec{y}}| \le r_{max} \cr 0 & |{\vec{x}}-{\vec{y}}| > r_{max} \end{array} \right. \ . \label{Q} \eea Given $r_{max}$, $\widetilde{Q}(k_L)$ is obtained by a Fourier transform of $Q({\vec{x}}-{\vec{y}})$. To determine $r_{max}$, we do a linear fit of the $k_L=0$ data \begin{eqnarray} {1\over L^3} \left({d S_P \over da_0}\right)_{a_0=\a} \ , \eea as shown in Fig.\ \ref{zmode}. Let the slope of the line be $D$. Then, from \rf{dS_P}, \begin{eqnarray} c_1 - 4c_2 \widetilde{Q}(0) = D \ , \eea and we choose $r_{max}$ to satisfy this condition as closely as possible. We then have $\widetilde{Q}(k_L)$ at all $\vk$. 
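The kernel \rf{Q} and its transform are straightforward to tabulate numerically. The sketch below (added here for illustration) constructs $\bigl(\sqrt{-\nabla_L^2}\bigr)_{{\vec{x}}\mathbf{0}}$ on a $24^3$ lattice by an inverse FFT of the symbol $k_L$, truncates beyond $r_{max}$, and transforms back; the untruncated position-space values should land close to the entries of Table \ref{tab2}:

```python
import numpy as np

# Sketch of the kernel ansatz (Q): build sqrt(-Laplacian) in position
# space via an inverse FFT of k_L, truncate beyond r_max, and Fourier
# transform back to get Q~(k_L) at every wavevector.
L, r_max = 24, 3.0
mode = np.fft.fftfreq(L, d=1.0 / L)              # integer mode numbers
k = 2 * np.pi * mode / L
kL = np.sqrt(4 * (np.sin(k[:, None, None] / 2) ** 2
                + np.sin(k[None, :, None] / 2) ** 2
                + np.sin(k[None, None, :] / 2) ** 2))

Q_pos = np.real(np.fft.ifftn(kL))                # (sqrt(-lap))_{x,0}

# periodic distance from the origin
x = np.abs(mode)
r = np.sqrt(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2)
Q_trunc = np.where(r <= r_max, Q_pos, 0.0)

Q_tilde = np.real(np.fft.fftn(Q_trunc))          # Q~(k) on the full grid
print(Q_pos[0, 0, 0], Q_pos[1, 0, 0])  # near 2.3876 and -0.2200 (Table II)
```

Since $-\nabla_L^2$ is diagonalized by plane waves with eigenvalue $k_L^2$, its matrix square root has Fourier symbol $k_L$, which is all the inverse FFT step uses.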
\begin{figure}[htb] \centerline{\scalebox{0.8}{\includegraphics{kernel24a.eps}}} \caption{A test of eq.\ \rf{dS_P} at $\a=0.05$. The derivative data of Fig.\ \ref{d24a05} is plotted against the conjectured fitting function $\a(\frac{1}{2} c_1 - 2c_2 \widetilde{Q}(k_L))$ with $r_{max}=3$.} \label{kernel} \end{figure} In Fig.\ \ref{kernel} we plot the data shown in Fig.\ \ref{d24a05} together with the values computed for \begin{eqnarray} \a(\frac{1}{2} c_1 - 2c_2 \widetilde{Q}(k_L)) \eea (cf.\ eq.\ \rf{dS_P}) at $\a=0.05$. Agreement seems to be quite good in the entire range of $k_L$. We have repeated this analysis at smaller volumes of spatial extension $L=12,16,20$. The results for $c_1,c_2,r_{max}$ are shown in Table \ref{tab1}. In Table \ref{tab2} we record the non-zero components of $Q({\vec{x}})=\Bigl(\sqrt{-\nabla^2_L}\Bigr)_{{\vec{x}} \mathbf{0}}$ up to $|{\vec{x}}| < 3.2$ and lattice volume $24^3$. For $r_{max}=3$, the last entry in the table should be replaced by $Q(3,1,0)=0$. The rest of the non-zero elements of $Q$ are obtained from the table via permutation symmetry, $x_i \leftrightarrow x_j$, and reflection symmetry $x_i \rightarrow -x_i$, among the coordinate components. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $ L $ & $c_1$ & $c_2$ & $r_{max}$ \\ \hline 12 & 4.364(6) & 0.491(1) & 3.2 \\ 16 & 4.417(4) & 0.498(1) & 3.0 \\ 20 & 4.416(7) & 0.493(1) & 3.0 \\ 24 & 4.414(8) & 0.493(1) & 3.0 \\ \hline \end{tabular} \caption{Constants defining the effective Polyakov line action for pure YM theory, $L^3 \times 4$ lattice, $\b=2.2$.} \label{tab1} \end{center} \end{table} \begin{table}[h!] 
\begin{center} \begin{tabular}{|c|c|c|c|} \hline $x_1 $ & $x_2$ & $x_3$ & $Q({\vec{x}})$ \\ \hline 0 & 0 & 0 & 2.38760 \\ 1 & 0 & 0 & -0.22001 \\ 1 & 1 & 0 & -0.02357 \\ 1 & 1 & 1 & -0.00774 \\ 2 & 0 & 0 & -0.01279 \\ 2 & 1 & 0 & -0.00455 \\ 2 & 1 & 1 & -0.00246 \\ 2 & 2 & 0 & -0.00160 \\ 2 & 2 & 1 & -0.00111 \\ 3 & 0 & 0 & -0.00200 \\ 3 & 1 & 0 & -0.00121 \\ \hline \end{tabular} \caption{Non-zero elements of the bilinear kernel $Q({\vec{x}})$ at $r_{max}=3.2$ and $L=24$.} \label{tab2} \end{center} \end{table} \begin{figure}[h!] \centering \subfigure[~ L=12]{ \resizebox{79mm}{!}{\includegraphics{corr12.eps}} \label{corr12} } \subfigure[~ L=16]{ \resizebox{79mm}{!}{\includegraphics{corr16.eps}} \label{corr16} } \subfigure[~ L=20]{ \resizebox{79mm}{!}{\includegraphics{corr20.eps}} \label{corr20} } \subfigure[~ L=24]{ \resizebox{79mm}{!}{\includegraphics{corr24.eps}} \label{corr24} } \caption{A comparison of the Polyakov line correlation functions $G(|{\vec{x}}-{\vec{y}}|) = \langle P_{{\vec{x}}} {P_{\vec{y}}} \rangle$ as computed via lattice Monte Carlo simulation of the underlying gauge theory on a $L^3 \times 4$ lattice at coupling $\b=2.2$, and via Monte Carlo simulation of the corresponding effective action $S_P$ of eq.\ \rf{SP1}. Lattices are of spatial extension $L=12,16,20,24$ lattice spacings. Note that off-axis displacements are included.} \label{Pcorr} \end{figure} \subsection{Comparing the PLA to the underlying lattice gauge theory} We now have a concrete proposal for the effective Polyakov line actions at various spatial volumes $L^3$ ranging from $12^3$ to $24^3$, and which correspond to an underlying lattice SU(2) gauge theory at $\b=2.2$ on an $L^3 \times 4$ lattice volume. The actions are specified by eqs.\ \rf{SP1}, \rf{Q}, and the constants in Table I. Above the lattice volume $12^3$, these actions are about the same. \begin{figure}[h!] 
\centerline{\scalebox{0.5}{\includegraphics{corrax24.eps}}} \caption{A high-statistics comparison of the Polyakov line correlation function $G(|{\vec{x}}-{\vec{y}}|) = \langle P_{{\vec{x}}} {P_{\vec{y}}} \rangle$ computed for the lattice gauge and effective theories, for displacements ${\vec{x}}-{\vec{y}}$ parallel to the $x,y$ or $z$-axes, and spatial volume $24^3$.} \label{corrax} \end{figure} The crucial question, of course, is whether these proposed Polyakov line actions are correct; they are certainly different from the action suggested in ref.\ \cite{Greensite:2012dy}, which was derived for gauge configurations in a rather unrepresentative region of configuration space. There is one obvious and essential test: Do the derived Polyakov line actions reproduce the Polyakov line correlator calculated in the corresponding lattice gauge theory? Thus we compute, via numerical simulation of the Polyakov line action at $L=12,16,20,24$, \begin{eqnarray} G(|{\vec{x}}-{\vec{y}}|) = \langle P_{{\vec{x}}} {P_{\vec{y}}} \rangle \ , \label{PP} \eea and compare the result to the same observable obtained from standard lattice Monte Carlo at $\b=2.2$ on an $L^3 \times 4$ volume. The result for these four cases is shown in Fig.\ \ref{Pcorr}. Note that off-axis displacements are included, with $xyz$-components of the displacement ${\vec{x}}-{\vec{y}}$ in the range $[-4,4]$. The data in Fig.\ \ref{Pcorr} is limited to displacements of magnitude $R < 7$ lattice spacings, and it is interesting to consider larger displacements for larger lattices. This calls for higher statistics. In Fig.\ \ref{corrax} we compare the results for $G({\vec{x}}-{\vec{y}})$ obtained on a $24^3$ lattice for the effective theory, and on a $24^3 \times 4$ lattice for the SU(2) gauge theory, again at $\b=2.2$. In this figure we show the results for displacements ${\vec{x}}-{\vec{y}}$ parallel to any one of the coordinate axes. 
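A simulation of the effective action \rf{SP1} involves only one compact real variable $P_{\vec{x}}\in[-1,1]$ per site. The following Metropolis sketch is our own illustration, not the simulation code used for the figures: it assumes the reduced SU(2) Haar measure $\sqrt{1-P^2}\,dP$ for the single-site integration (an assumption of this sketch), uses the $\exp[+S_P]$ convention, and keeps only the first few kernel shells of Table \ref{tab2}:

```python
import itertools
import numpy as np

# Minimal Metropolis sketch for the bilinear action (SP1), with P_x in
# [-1,1] as the site variable and reduced Haar weight sqrt(1 - P^2).
# Only a few kernel shells are used, purely for illustration.
rng = np.random.default_rng(2)
L, c1, c2 = 8, 4.414, 0.493
Q0 = 2.38760
shells = {(1, 0, 0): -0.22001, (1, 1, 0): -0.02357, (1, 1, 1): -0.00774}

# expand the shells into a neighbor list via permutations/reflections
nbrs = {}
for (x, y, z), q in shells.items():
    for p in set(itertools.permutations((x, y, z))):
        for sx, sy, sz in np.ndindex(2, 2, 2):
            nbrs[((-1)**sx * p[0], (-1)**sy * p[1], (-1)**sz * p[2])] = q

P = np.zeros((L, L, L))

def delta_S(P, s, new):
    # change in S_P under P[s] -> new (the x=y term uses Q0)
    old = P[s]
    nb = sum(q * P[(s[0] + dx) % L, (s[1] + dy) % L, (s[2] + dz) % L]
             for (dx, dy, dz), q in nbrs.items())
    return ((0.5 * c1 - 2 * c2 * Q0) * (new**2 - old**2)
            - 4 * c2 * (new - old) * nb)

def sweep(P):
    for s in np.ndindex(L, L, L):
        new = P[s] + 0.4 * (2 * rng.random() - 1)    # symmetric proposal
        if abs(new) >= 1:
            continue                                 # stay inside [-1,1]
        ratio = (np.sqrt((1 - new**2) / (1 - P[s]**2))
                 * np.exp(delta_S(P, s, new)))       # Haar x exp[+S_P]
        if rng.random() < ratio:
            P[s] = new

for _ in range(10):
    sweep(P)
```

Measuring $G(|{\vec{x}}-{\vec{y}}|)=\langle P_{\vec{x}} P_{\vec{y}}\rangle$ is then a matter of averaging products of the stored $P$ arrays over sweeps, after equilibration.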
The L\"uscher-Weisz noise reduction method \cite{Luscher:2001up} was used in obtaining the Polyakov line correlator in the lattice gauge theory, while for the effective Polyakov line action the correlator was obtained from 38,400 configurations (about two orders of magnitude more than was used in Fig.\ \ref{Pcorr}). The agreement of the correlators in the PLA and the underlying lattice gauge theory seen in Fig.\ \ref{corrax} is extraordinary, and it persists down to magnitudes of order $10^{-5}$.\footnote{The Polyakov line correlator derived from the Inverse Monte Carlo method in ref.\ \cite{Dittmann:2003qt,*Heinzl:2005xv} was displayed on a linear, rather than logarithmic, scale, and hence the precision of agreement with lattice gauge theory, in that approach, is difficult to judge.} While this is not a proof that $S_P$ is the correct effective action, it is difficult to believe that agreement of Polyakov line correlators to this level of precision is coincidental. \section{\label{sec:gh}Polyakov line action for an SU(2) gauge-Higgs system} We now add a scalar matter field in the fundamental representation of the gauge group, thereby breaking explicitly $Z_2$ center symmetry. The simplest case is a fixed-modulus Higgs field, and for SU(2) gauge theory the action can be written in the following way: \begin{eqnarray} S = \b \sum_{plaq} \frac{1}{2} \mbox{Tr}[UUU^{\dagger}U^{\dagger}] + \k \sum_{x,\mu} \frac{1}{2} \mbox{Tr}[\phi^\dagger(x) U_\mu(x) \phi(x+\widehat{\mu})] \ , \label{gh_action} \eea where $\phi(x)$ is SU(2) group-valued. The work of Fradkin and Shenker \cite{Fradkin:1978dv}, itself based on a theorem by Osterwalder and Seiler \cite{Osterwalder:1977pc}, demonstrated that the Higgs region and the ``confinement-like" regions of the $\b-\k$ phase diagram are continuously connected. 
Subsequent Monte Carlo studies found that there is only a single phase at zero temperature (a separate Coulomb phase, which might in principle have existed, does not appear), although there is a line of first-order transitions between the confinement-like and Higgs regions, which eventually turns into a line of sharp crossover around ${\b=2.775,\k=0.705}$, cf.\ \cite{Bonati:2009pf} and references therein. At $\b=2.2$ the crossover occurs at $\k \approx 0.84$, as seen in the plaquette energy data shown in Fig.\ \ref{cross}. There is also a steep rise in the Polyakov line expectation value as $\k$ increases past this point. \begin{figure}[h!] \centerline{\scalebox{0.7}{\includegraphics{cross.eps}}} \caption{Plaquette energy vs.\ gauge-Higgs coupling $\k$ at fixed $\b=2.2$, for the SU(2) gauge-Higgs theory with fixed Higgs modulus on a $16^4$ lattice volume, showing a sharp crossover at $\k \approx 0.84$.} \label{cross} \end{figure} We will work at $\b=2.2$ on a $24^3 \times 4$ lattice volume, but this time at Higgs coupling $\k=0.75$, which places us in the ``confinement-like'' phase a little below the crossover point. For these parameters, the Polyakov line has a VEV of $\langle P_{{\vec{x}}} \rangle =0.0515$. Once again, we generate sets of thermalized Polyakov line holonomies, and compute $L^{-3} dS_P/da_{\vk}$ as explained in the previous section. \begin{figure}[ht] \centering \subfigure[~ full range]{ \resizebox{79mm}{!}{\includegraphics{zmodeh.eps}} \label{zmodeh1} } \subfigure[~ close-up]{ \resizebox{79mm}{!}{\includegraphics{zmodeh_zoom.eps}} \label{zmodeh2} } \caption{The derivatives of $S_P$ with respect to the amplitude of the zero mode in the gauge-Higgs theory, evaluated at positive and negative values of $a_0=\a$. (a) shows the full range of the data; (b) is a closeup near $\a=0$. 
The $y$-intercept of this data is non-zero, and determines the coefficient $c_0$ of the linear, $Z_2$-symmetry breaking term in the effective PLA \rf{SP2}.} \label{zmodeh} \end{figure} The derivatives $\partial S_P/\partial a_{\vk}$ at $a_{\vk}=\a$ are computed as before, and at each $k_L > 0$ the results are simply proportional to $\a$. The constants $c_1,c_2$ are again extracted by a linear fit to the $k_L > 0.7$ data. However, the data at $k_L=0$ is not strictly proportional to $\a$; there is also an $\a$-independent constant contribution to the data. This fact can be seen in Fig.\ \ref{zmodeh}. The straight line is a best fit to $L^{-3} \partial S_P/\partial a_0$ evaluated at $a_0=\a$ for $\a$ in the range $[-0.2,0.2]$. The $y$-intercept of this line is not zero, but rather $y=0.0236(14)$. This implies that $S_P$ must contain a term which is linear in $P_{\vec{x}}$, i.e. \begin{eqnarray} S_P = c_0 \sum_{{\vec{x}}} P_{{\vec{x}}} + \frac{1}{2} c_1 \sum_{{\vec{x}}} P^2_{{\vec{x}}} - 2c_2 \sum_{{\vec{x}} {\vec{y}}} P_{{\vec{x}}} Q({\vec{x}} - {\vec{y}}) P_{{\vec{y}}} \ , \label{SP2} \eea and it is clear from inspection that $c_0$ must equal the $y$-intercept in Fig.\ \ref{zmodeh}. It is also clear that only the $k_L=0$ mode contributes to the linear term, which is therefore invisible in the derivatives of $S_P$ at $k_L>0$. We define $Q({\vec{x}} - {\vec{y}})$ again by \rf{Q}, with $r_{max}$ determined as in the pure-gauge theory. The final set of parameters for the effective Polyakov line action is given in Table \ref{tab3}, and we plot the $k_L>0$ data, together with the quantity \begin{eqnarray} \a(\frac{1}{2} c_1 - 2c_2 \widetilde{Q}(k_L)) ~~\text{vs.}~~~ k_L \eea in Fig.\ \ref{kernelh}. \begin{table}[h!] 
\begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $ L $ & $c_0$ & $c_1$ & $c_2$ & $r_{max}$ \\ \hline 24 & .0236(14) & 4.447(9) & 0.501(1) & 3.2 \\ \hline \end{tabular} \caption{Constants defining the effective Polyakov line action for gauge-Higgs theory, $24^3 \times 4$ lattice, $\b=2.2,~\k=0.75$.} \label{tab3} \end{center} \end{table} \begin{figure}[h!] \centerline{\scalebox{0.8}{\includegraphics{kernel24h.eps}}} \caption{Same as Fig.\ \ref{kernel} for the gauge-Higgs theory. We plot the data for the derivative $L^{-3} \partial S_P/\partial a_{\vk}$ vs. $k_L$ against the conjectured fitting function $ \a(\frac{1}{2} c_1 - 2c_2 \widetilde{Q}(k_L))$ with $r_{max}=3.2$. } \label{kernelh} \end{figure} \begin{figure}[h!] \centerline{\scalebox{0.4}{\includegraphics{corr2_hi_24.eps}}} \caption{A comparison of the Polyakov line correlation functions $G(|{\vec{x}}-{\vec{y}}|) = \langle P_{{\vec{x}}} {P_{\vec{y}}} \rangle$ as computed via lattice Monte Carlo simulation of the underlying gauge-Higgs theory (black diamonds) on a $24^3 \times 4$ lattice, at couplings $\b=2.2,~\k=0.75$, and via Monte Carlo simulation of the corresponding effective action $S_P$ of eq.\ \rf{SP2} (blue triangles, $c_0=0.0236$). Also shown is a simulation of the effective action with a slightly different value of $c_0=.02165$ (red circles).} \label{corrH24} \end{figure} As in the pure gauge theory, the crucial test is to see whether the Polyakov line correlator \rf{PP} found from numerical simulation of the gauge-Higgs theory \rf{gh_action} agrees with the same observable computed in the derived Polyakov line action \rf{SP2}. The results are shown in Fig.\ \ref{corrH24}. In this case the agreement between the lattice gauge-Higgs correlator (black diamonds) and the correlator of the effective action (blue triangles), while fairly close, is not perfect. 
However, the result for the effective action depends very sensitively on the value of $c_0$, and of course there is an error bar associated with this quantity. In our best fits, $c_0=0.0236(14)$. With a little trial and error, one can find a value of $c_0$ for the effective action such that the corresponding correlator (red circles) agrees almost exactly with the gauge-Higgs value. This happens at a value $c_0=0.02165$ which is not far outside our error bars, about $1.4\,\sigma$ away from $c_0=0.0236$. \section{\label{sec:conclude}Conclusions} Motivated by the well-known sign problem, we have applied the relative weights method to determine the effective Polyakov line action $S_P$ for both pure and gauge-Higgs lattice SU(2) gauge theory. This effective action turns out to be a remarkably simple expression, which is bilinear in the Polyakov line variables $P_{{\vec{x}}}=\frac{1}{2} \text{Tr}[U_{{\vec{x}}}]$: \begin{eqnarray} S_P &=& c_0 \sum_{{\vec{x}}} P_{{\vec{x}}} + \frac{1}{2} c_1 \sum_{{\vec{x}}} P^2_{{\vec{x}}} - 2c_2 \sum_{{\vec{x}} {\vec{y}}} P_{{\vec{x}}} Q({\vec{x}} - {\vec{y}}) P_{{\vec{y}}} \nonumber \\ Q({\vec{x}}-{\vec{y}}) &=& \left\{ \begin{array}{cc} \Bigl(\sqrt{-\nabla_L^2}\Bigr)_{{\vec{x}} {\vec{y}}} & |{\vec{x}}-{\vec{y}}| \le r_{max} \cr 0 & |{\vec{x}}-{\vec{y}}| > r_{max} \end{array} \right. \ , \label{SP3} \eea with $c_0=0$ in the pure-gauge theory, and non-zero in the gauge-Higgs theory. Our results so far have been obtained at lattice coupling $\b=2.2$, and $N_t=4$ lattice spacings in the time direction. The effective action has been checked by computing Polyakov line correlators in both the effective theory and the underlying gauge theory, and we have found that these correlators agree quite well with each other. This is especially true in the pure gauge theory, where agreement persists down to correlator values of order $10^{-5}$. 
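The kernel $Q$ in \rf{SP3} is straightforward to generate numerically: on a periodic $L^3$ lattice the free Laplacian is diagonal in momentum space, so $\sqrt{-\nabla_L^2}$ is obtained by an inverse Fourier transform of the lattice momentum magnitude, after which the finite range $|{\vec{x}}-{\vec{y}}|\le r_{max}$ is imposed. The sketch below is illustrative only: it uses a $16^3$ lattice rather than the $24^3$ lattice of the simulations above, with the common convention $k_L^2 = 4\sum_i \sin^2(k_i/2)$ for the lattice momentum.

```python
import numpy as np

L = 16                               # illustrative size (the simulations above use L = 24)
k = 2.0 * np.pi * np.fft.fftfreq(L)  # lattice momenta k_i

# k_L^2 = 4 * sum_i sin^2(k_i/2) is the eigenvalue of -nabla_L^2 (our convention here)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k_lat = np.sqrt(4.0 * (np.sin(kx / 2) ** 2
                       + np.sin(ky / 2) ** 2
                       + np.sin(kz / 2) ** 2))

# position-space kernel (sqrt(-nabla_L^2))_{x,0} via inverse FFT
Q_full = np.real(np.fft.ifftn(k_lat))

# periodic lattice distance |x - y|, then the cutoff at r_max as in the effective action
d = np.minimum(np.arange(L), L - np.arange(L))
r = np.sqrt(d[:, None, None] ** 2 + d[None, :, None] ** 2 + d[None, None, :] ** 2)
r_max = 3.2
Q = np.where(r <= r_max, Q_full, 0.0)
```

By construction $Q$ is real and even under ${\vec{x}}\rightarrow-{\vec{x}}$, and a forward FFT of the truncated kernel gives the momentum-space quantity $\widetilde{Q}(k_L)$ used in the fits.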
These results, together with previous checks in the case of strong coupling \cite{Greensite:2012dy}, inspire some confidence that the method works. The next immediate step will be to understand how the couplings of the effective theory evolve as a function of coupling $\b$ and temperature $1/N_t$. In the longer term our interest is the sign problem, and to address that problem it will be necessary to derive the effective Polyakov line action corresponding to lattice SU(3) gauge fields coupled to matter at zero chemical potential. In lattice gauge theory fixed to temporal gauge, the chemical potential $\mu$ is introduced via the replacement \begin{eqnarray} U_0({\vec{x}},t=0) \rightarrow e^{N_t \mu} U_0({\vec{x}},t=0) ~~~,~~~ U^\dagger_0({\vec{x}},t=0) \rightarrow e^{-N_t \mu} U^\dagger_0({\vec{x}},t=0) \ . \label{intro_mu} \eea It is not hard to see that, to all orders in the strong coupling + hopping parameter expansion, the effective PLA obtained at $\mu \ne 0$ is related to the action at $\mu=0$ by a simple substitution \begin{eqnarray} S^{\mu\ne 0}_P[U_{{\vec{x}}},U^\dagger_{{\vec{x}}}] = S^{\mu=0}_P[U_{{\vec{x}}} \rightarrow e^{N_t \mu} U_{{\vec{x}}}, U^\dagger_{{\vec{x}}} \rightarrow e^{-N_t \mu} U^\dagger_{{\vec{x}}}] \ . \label{replace} \eea We will assume that this identity holds in general. The strategy is then to apply one or more of the methods \cite{Gattringer:2011gq,*Mercado:2012ue,Fromm:2011qi,Aarts:2011zn,Greensite:2012xv}, which were developed for solving Polyakov line actions with a chemical potential, to our derived effective action. If the sign problem is tractable in the effective Polyakov line theory, as suggested by the earlier work cited above, then it may be possible to extract useful results regarding the QCD phase diagram. \acknowledgments{J.G.\ would like to thank Kim Splittorff for helpful discussions. J.G.'s research is supported in part by the U.S.\ Department of Energy under Grant No.\ DE-FG03-92ER40711. 
K.L.'s research is supported by STFC under the DiRAC framework. We are grateful for support from the HPCC Plymouth, where the numerical computations have been carried out.}
\section{Proof of Proposition \ref{prop:behaviourBessel}} \label{sec:behaviour} For the proof of Proposition \ref{prop:behaviourBessel}, we rely on known results about the Bessel process. For this, we let $(p_n)_{n=1}^\infty$ be a configuration taken from the Bessel point process such that $0 < p_1 < p_2 <\cdots$, and for every $T>0$, we let $N(T)$ be the random variable counting the number of points in $(p_n)_{n=1}^\infty \cap [0,T]$. Then, in \cite{Soshnikov}, it was shown that this variable has the following behaviour: \begin{align}\label{eq:ETasymptotics} \mathbb E N(T) &= \frac{\sqrt{T}}{\pi}+\mathcal O(1), & \qquad \mathrm{as} \ T\rightarrow \infty,\\ \label{eq:VarTasymptotics} \operatorname{Var} N(T) &= \frac{1}{4\pi^2} \log(T)+\mathcal O(1), &\qquad \mathrm{as} \ T\rightarrow \infty. \end{align} We use this behaviour to prove Proposition \ref{prop:behaviourBessel}. \begin{proof}[Proof of Proposition \ref{prop:behaviourBessel}] It will be somewhat more practical to prove that as $n\to\infty$ \begin{equation} \label{eq:behaviourpn+1} p_{n+1}=\pi^2 n^2+\mathcal O\left(n\sqrt n \log^{1+\epsilon} n\right) \end{equation} with probability $1$, which is equivalent to the statement of the proposition. We set \[T_n = \pi^2 \left(n^2 + n \sqrt n \log^{1+\epsilon} n\right).\] For $n$ large enough we have \begin{align} \label{eq:ETasymptotics1} \frac{\sqrt{T_n}}{\pi} - n \geq \frac{1}{3} \sqrt n \log^{1+\epsilon} n\quad \text{ and } \quad \left|\frac{\sqrt{T_n}}{\pi} - \mathbb E N(T_n)\right| \leq \frac{1}{2}\left(\frac{\sqrt{T_n}}{\pi} - n\right), \end{align} where the first inequality directly follows from the definition of $T_n$ and the second inequality follows from the first and \eqref{eq:ETasymptotics}. Now let us suppose that $n$ is large enough in this sense, and that we have \begin{align*} \left|N(T_n) - \sqrt{T_n}/\pi\right| \geq \frac{\sqrt{T_n}}{\pi} - n. 
\end{align*} We can then invoke \eqref{eq:ETasymptotics1} and the reverse triangle inequality to obtain \begin{align*} \left|N(T_n) - \mathbb E N(T_n)\right| &\geq |N(T_n) - \sqrt{T_n}/\pi| - \left|\sqrt{T_n}/\pi - \mathbb E N(T_n)\right|\\ &\geq \frac{1}{2}\left(\frac{\sqrt{T_n}}{\pi} - n\right) \geq \frac{1}{6} \sqrt n \log^{1+\epsilon} n. \end{align*} Next we make the observation that $p_{n+1} > T_n$ is equivalent to $N(T_n)\leq n$. Hence \begin{align*} \mathbb P\left(p_{n+1}>T_n\right) &= \mathbb P\left(N(T_n) \leq n\right)\\ &\leq \mathbb P\left(\left|N(T_n) - \sqrt{T_n}/\pi\right|\geq \sqrt{T_n}/\pi - n\right)\\ &\leq \mathbb P\left(\left|N(T_n) - \mathbb E N(T_n)\right| \geq \frac{1}{6} \sqrt n \log^{1+\epsilon} n\right). \end{align*} Using Chebyshev's inequality we then obtain, for $n$ large enough \begin{equation} \label{eq:usingChebyshev} \mathbb P\left(p_{n+1} > T_n\right) \leq \frac{\operatorname{Var} N(T_n)}{\operatorname{Var} N(T_n) + \frac{1}{36} n \log^{2+2\epsilon} n} \leq \frac{2}{n \log^{1+2\epsilon} n}, \end{equation} where we have used \eqref{eq:VarTasymptotics} in the last step. We notice that \[ \sum_{n=3}^\infty \frac{1}{n \log^{1+2\epsilon} n} \leq \int_2^\infty \frac{dt}{t \log^{1+2\epsilon} t} = \int_{\log 2}^\infty \frac{dt}{t^{1+2\epsilon}} < \infty, \] which means that the probability that appears in the left-hand side of \eqref{eq:usingChebyshev} is summable, and hence the Borel-Cantelli lemma implies that, with probability $1$ \[ p_{n+1} > T_n = \pi^2 \left(n^2 + n \sqrt n \log^{1+\epsilon} n \right) \] occurs only finitely many times. Analogous reasoning leads to a corresponding statement for a lower bound for $p_{n+1}$, and the proposition follows. \end{proof} \section*{Acknowledgements} Leslie Molag is supported by a PhD fellowship of the Flemish Science Foundation (FWO). Marco Stevens is supported by EOS project 30889451 of the Flemish Science Foundation (FWO) and by the Belgian Interuniversity Attraction Pole P07/18. 
Both authors are partly supported by the long-term structural funding (Methusalem grant) of the Flemish Government. They are also grateful to Arno Kuijlaars for useful discussions and for proofreading the article. \section{Main result} \label{sec:mainresult} In this article, we are concerned with sequences $0 < p_1 < p_2 < \cdots$ that satisfy the growth condition \begin{equation} \label{eq:growthconditionpoints} \lim_{n\rightarrow \infty} \frac{p_n}{n^2}=\pi^2. \end{equation} Our major motivating example of such sequences is formed by the \textbf{Bessel point process}; this is the determinantal point process whose kernel is the \textbf{Bessel kernel} \begin{equation} \label{eq:Besselkernel} J_\nu(x,y)=\frac{J_\nu(\sqrt{x})\sqrt{y}J'_\nu(\sqrt{y}) - J_\nu(\sqrt{y})\sqrt{x}J'_\nu(\sqrt{x})}{2(x-y)}, \qquad x,y \in (0,\infty), \end{equation} where $\nu>-1$ is a parameter and we (ab)use the notation $J_\nu$ to mean the Bessel function when it takes one argument and the Bessel kernel when it takes two arguments. The configurations taken from the Bessel point process almost surely satisfy the growth condition \eqref{eq:growthconditionpoints}; see Proposition \ref{prop:behaviourBessel}. The aim of this paper is to study the asymptotic behaviour of the conditional measure \cite{Bufetov_conditionalmeasures,Kuijlaars_MinaDiaz} of the Bessel point process; see Section \ref{sec:motivation} for more details. To this end, we define weights $\bar{w}_{X,\nu,R}$ on the interval $[0,R]$, where $X=(p_n)_{n=1}^\infty$ represents an increasing sequence of positive numbers satisfying the growth condition \eqref{eq:growthconditionpoints}, $R>0$ and $\nu>-1$. This weight is defined by \begin{equation} \label{eq:weight} \bar{w}_{X,\nu,R}(t)=t^\nu \prod_{p_n>R} \Big(1-\frac{t}{p_n}\Big)^2, \qquad t\in [0,R]. \end{equation} We study the finite point process associated to these weights. 
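As a quick sanity check, \eqref{eq:Besselkernel} is straightforward to evaluate with standard Bessel routines, using the recurrence $2J_\nu'(z)=J_{\nu-1}(z)-J_{\nu+1}(z)$. The sketch below confirms the symmetry $J_\nu(x,y)=J_\nu(y,x)$ and that the diagonal behaves like the density $1/(2\pi\sqrt{x})$ for large $x$, which is consistent with the counting asymptotics \eqref{eq:ETasymptotics} since $\int_0^T \frac{ds}{2\pi\sqrt{s}}=\frac{\sqrt{T}}{\pi}$; the small off-diagonal offset serves as a numerical proxy for the diagonal value.

```python
import numpy as np
from scipy.special import jv

def bessel_kernel(nu, x, y):
    """Bessel kernel of (eq:Besselkernel) for x != y, with 2 J_nu' = J_{nu-1} - J_{nu+1}."""
    sx, sy = np.sqrt(x), np.sqrt(y)
    dJx = 0.5 * (jv(nu - 1, sx) - jv(nu + 1, sx))
    dJy = 0.5 * (jv(nu - 1, sy) - jv(nu + 1, sy))
    return (jv(nu, sx) * sy * dJy - jv(nu, sy) * sx * dJx) / (2.0 * (x - y))

nu = 0.5

# symmetry in (x, y)
s1 = bessel_kernel(nu, 2.0, 5.0)
s2 = bessel_kernel(nu, 5.0, 2.0)

# near-diagonal value at large x versus the mean density 1/(2 pi sqrt(x))
x0 = 1.0e4
diag = bessel_kernel(nu, x0, x0 * (1.0 + 1e-7))  # offset as proxy for the diagonal
density = 1.0 / (2.0 * np.pi * np.sqrt(x0))
```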
More specifically, we denote, for every $R>0$, by $N(R)$ the number of points in $X \cap [0,R]$ and we consider the point process with $N(R)$ points $(t_1,\dots,t_{N(R)})$ that all lie in $[0,R]$, with joint probability density function \begin{equation} \label{eq:jointpdf} \frac{1}{Z} \prod_{1\leq i < j\leq N(R)} (t_i - t_j)^2 \prod_{i=1}^{N(R)} \bar{w}_{X,\nu,R}(t_i), \end{equation} where $Z$ is some normalization constant that depends on $X$, $\nu$ and $R$. If $X$ is taken as a configuration from the Bessel point process with parameter $\nu$, then the conditional measure of the Bessel point process associated to $X$ \cite{Bufetov_conditionalmeasures} is almost surely the point process described by \eqref{eq:jointpdf}. We are interested in the asymptotics of this point process as $R\rightarrow \infty$. For this, convergence is understood in the sense of \textit{convergence of kernels}. It is well-known that a point process with joint probability density function of the form \eqref{eq:jointpdf} is an orthogonal polynomial ensemble. In other words, it is a determinantal point process whose kernel is the normalized Christoffel-Darboux kernel (see Section \ref{sec:outline}) associated to the weight $\bar{w}_{X,\nu,R}$. For these kernels, we have the following asymptotic behaviour. \begin{theorem}[Main Theorem] \label{thm:maintheorem} Suppose that $X=(p_n)_{n=1}^\infty$ is a strictly increasing sequence of positive numbers satisfying the growth condition \eqref{eq:growthconditionpoints}, let $\nu>-1$ and write $N(R)$ for the number of points in $X\cap [0,R]$ for all $R>0$. 
Then the normalized Christoffel-Darboux kernel associated to the weights $\bar{w}_{X,\nu,R}$ (as defined in \eqref{eq:weight}) satisfies \begin{equation}\label{eq:limitkernel} \lim_{R\to \infty} K_{N(R)}(x,y; \bar{w}_{X,\nu,R}) = J_\nu(x,y), \qquad (x,y)\in (0,\infty)^2 \end{equation} uniformly on compact sets, where $J_\nu$ is the Bessel kernel with parameter $\nu$, as given in \eqref{eq:Besselkernel}. \end{theorem} Since this limit is independent of the exact choice of $X$, we speak of \textbf{universality} of the conditional measure. In this way, this article can be seen as a continuation of the result for the conditional measures of the \textit{sine point process} obtained in \cite{Kuijlaars_MinaDiaz}. \begin{remark} We note that if one alters the growth condition \eqref{eq:growthconditionpoints} to $\lim_{n\rightarrow \infty} \frac{p_n}{n^2}=c\pi^2$ for some $c>0$, then one obtains the rescaled Bessel kernel $\frac{1}{c} J_\nu\left(\frac{x}{c},\frac{y}{c}\right)$ as the limit in \eqref{eq:limitkernel}. Hence the real importance of the growth condition \eqref{eq:growthconditionpoints} is the quadratic nature of $p_n$; the constant $\pi^2$ can be seen as an artifact of the conventional scaling choice of the Bessel functions and kernels. \end{remark} The rest of this article is dedicated to proving Theorem \ref{thm:maintheorem} and is outlined as follows: \begin{itemize} \item In Section \ref{sec:motivation}, we explain the motivation for this type of result. We discuss the notions of rigidity and conditional measures, which should be seen as the enveloping framework for this article. \item In Section \ref{sec:behaviour}, we show that the configurations taken from the Bessel point process almost surely satisfy the growth condition \eqref{eq:growthconditionpoints}. \item In Section \ref{sec:outline}, we give the proof of Theorem \ref{thm:maintheorem}, except for a Riemann-Hilbert analysis, which we give in Section \ref{sec:RHproblem}. 
The structure of the proof is analogous to the one given in \cite{Kuijlaars_MinaDiaz}, which solves the corresponding question for the conditional measure of the sine point process. \end{itemize} \section{Motivation} \label{sec:motivation} This paper should be considered as contributing to the understanding of \textit{rigid} point processes. Surrounding this concept of rigidity, there are three successive questions for a given point process: \begin{enumerate} \item Is the point process rigid? \item If the point process is rigid, what is the conditional measure of the point process on a compact subset? Can one give closed formulas for these conditional measures for a family of well-chosen compact subsets? \item If one has the answer to the previous question, what is the asymptotic limit of the conditional measure as the compact subset grows to cover the whole space? \end{enumerate} These questions have been studied in the literature for several (classes of) point processes. See for example \cite{Bufetov_rigidity, Ghosh_Peres} for the first, \cite{Bufetov_conditionalmeasures} for the second and \cite{Kuijlaars_MinaDiaz} for the third question. We refer the reader to these sources for an extensive discussion of what rigidity and conditional measures are; in Section \ref{sec:mainresult} we have stated everything from these concepts that we need in this paper. In this article, we answer question 3 above for the Bessel point process, i.e. the point process whose kernel is the Bessel kernel \eqref{eq:Besselkernel}. For this process, the first question above has been answered positively in \cite{Bufetov_rigidity}. Subsequently, in \cite{Bufetov_conditionalmeasures}, the second question has been answered explicitly for compact subsets of the form $[0,R]$. As mentioned in Section \ref{sec:mainresult}, the conditional measure of the Bessel point process almost surely is an orthogonal polynomial ensemble, with \eqref{eq:weight} as the involved weight. 
The natural next question for the Bessel point process is then the third question above. It is clear that one needs to understand the behaviour of the points $(p_n)_{n=1}^\infty$ as $n\rightarrow \infty$ in a configuration taken from the Bessel point process in order to be able to understand the asymptotics of the weight \eqref{eq:weight} as $R\rightarrow \infty$. For this, we have the following result. \begin{proposition} \label{prop:behaviourBessel} Suppose that the points $0 < p_1 < p_2 < \cdots$ form a configuration taken from the Bessel point process with some parameter $\nu>-1$. Then for every $\epsilon>0$ we have, with probability $1$, that \begin{equation} \label{eq:behaviourBessel} p_n = \pi^2 n^2 + \mathcal O\left(n \sqrt n \log^{1+\epsilon} n\right), \qquad \text{as }n\to\infty. \end{equation} \end{proposition} We prove this result in Section \ref{sec:behaviour}. By this proposition, we know that the configurations taken from the Bessel point process almost surely satisfy the growth condition \eqref{eq:growthconditionpoints}. Hence our main result Theorem \ref{thm:maintheorem} answers the third question above for the Bessel point process; as one would expect, the asymptotic limit of the conditional measure on the compact subset $[0,R]$ is the Bessel point process itself, almost surely with respect to the chosen configuration $X$. In fact, Theorem \ref{thm:maintheorem} shows that these asymptotics are even more universal, since the growth condition \eqref{eq:growthconditionpoints} does not require any big-O term like the one appearing in \eqref{eq:behaviourBessel}. \section{Proof of Theorem \ref{thm:maintheorem}} \label{sec:outline} In this section, we give the main components of the proof of Theorem \ref{thm:maintheorem}. The only element that we do not give in this section is the Riemann-Hilbert analysis needed to prove the asymptotics of the kernel associated to a weight defined in this section; we postpone this to the next section. 
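As an aside, a concrete deterministic sequence satisfying the growth condition \eqref{eq:growthconditionpoints} is $p_n = j_{\nu,n}^2$, where $j_{\nu,n}$ denotes the $n$-th positive zero of the Bessel function $J_\nu$; McMahon's expansion gives $j_{\nu,n}=\pi(n+\nu/2-1/4)+\mathcal O(1/n)$, so that $p_n = \pi^2 n^2 + \mathcal O(n)$, an even smaller error term than in \eqref{eq:behaviourBessel}. A quick numerical illustration for $\nu=0$:

```python
import numpy as np
from scipy.special import jn_zeros

# p_n = j_{0,n}^2, squared positive zeros of J_0; by McMahon's expansion
# j_{0,n} = pi*(n - 1/4) + O(1/n), hence p_n / n^2 -> pi^2
n = 2000
p = jn_zeros(0, n) ** 2
ratio = p[-1] / n**2  # should be close to pi^2 = 9.8696...
```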
\subsection{Notation and general remarks} We start with some conventions regarding notation. For any weight $w$ on a compact interval $[a,b]$, we denote the associated orthogonal polynomials by $(\varphi_j(\cdot; w))_{j=0}^\infty$, i.e., for all $i,j\geq 0$, we have \begin{equation} \label{eq:orthpol} \int_{a}^b \varphi_j(t; w) \varphi_i(t; w) w(t) dt = \delta_{i,j}. \end{equation} These requirements do not completely define these polynomials. We use the convention that these polynomials have positive leading coefficients, which does define them uniquely. The \textbf{Christoffel-Darboux kernel} associated to the weight $w$ is defined by \begin{equation} \label{eq:defordCDkernel} \widehat{K}_n(x,y; w) = \sum_{i=0}^{n-1} \varphi_i(x; w) \varphi_i(y; w), \qquad x,y\in [a,b]. \end{equation} A slight modification leads to the \textbf{normalized Christoffel-Darboux kernel} \begin{equation} \label{eq:defnorCDkernel} K_n(x,y; w) = \sqrt{w(x) w(y)} \sum_{i=0}^{n-1} \varphi_i(x; w) \varphi_i(y; w), \qquad x,y\in [a,b]. \end{equation} It is well-known that the Christoffel-Darboux kernel has the property that \begin{equation} \label{eq:minpropChr} \frac{1}{\widehat{K}_n(x,x; w)} = \min_{\deg P<n} \frac{1}{P(x)^2} \int_{a}^{b} \lvert P(t) \rvert^2 w(t) dt, \end{equation} where the quantity on the left-hand side is called the \textit{Christoffel function} of the weight $w$. In the rest of this section, we will also need the simple behaviour of orthogonal polynomials (and the kernels built out of them) under rescalings. For easy reference, we summarize this behaviour in the following lemma. \begin{lemma} \label{lem:rescalingweights} Suppose that $w$ is a weight on an interval $[a,b]$. Let $c,d>0$ and consider the new weight $\bar{w}(t):=dw(ct)$ on the interval $[a/c,b/c]$. Then we have that \begin{equation} \label{eq:orthpolrescaled} \varphi_i(t;\bar{w}) = \frac{\sqrt{c}}{\sqrt{d}} \varphi_i(ct;w), \qquad t\in [a/c,b/c], \end{equation} for all integers $i\geq 0$. 
Furthermore, we have \begin{equation} \label{eq:norCDkernrescaled} \frac{1}{c} K_n(\tfrac{x}{c},\tfrac{y}{c}; \bar{w}) = K_n(x,y;w), \qquad x,y\in [a,b], \end{equation} and \begin{equation} \label{eq:ordCDkernrescaled} \frac{1}{c} \widehat{K}_n(\tfrac{x}{c},\tfrac{y}{c}; \bar{w}) = \frac{1}{d}\widehat{K}_n(x,y;w), \qquad x,y\in [a,b]. \end{equation} \end{lemma} One immediately checks this by rewriting the orthogonality relations. \subsection{Rescaling to a point process on a fixed interval} In order to prove Theorem \ref{thm:maintheorem}, we want to compare the considered point processes on $[0,R]$ for various $R$. For asymptotic analysis, it is more convenient to rescale these point processes to the interval $[0,1]$, such that all the considered point processes are defined on the same space. We define the new weight $w_{X,\nu,R}$ on $[0,1]$ by \begin{equation} \label{eq:weighton01} w_{X,\nu,R}(t)=t^\nu \prod_{p_n>R} \left(1-\frac{Rt}{p_n}\right)^2 = R^{-\nu}\bar{w}_{X,\nu,R}(Rt), \qquad t\in [0,1]. \end{equation} By Lemma \ref{lem:rescalingweights}, we then know that the associated normalized Christoffel-Darboux kernel transforms as \[K_{N(R)}(x,y; \bar{w}_{X,\nu,R})=\frac{1}{R} K_{N(R)}(\tfrac{x}{R},\tfrac{y}{R}; w_{X,\nu,R}).\] Hence, in order to prove that Theorem \ref{thm:maintheorem} holds, it suffices to prove that we have \begin{equation} \label{eq:rescaledmaintheorem} \lim_{R\rightarrow \infty} \frac{1}{R} K_{N(R)}(\tfrac{x}{R},\tfrac{y}{R}; w_{X,\nu,R}) = J_\nu(x,y), \end{equation} uniformly on compact sets. Readers familiar with the Bessel kernel, which appears for example at the hard edge of certain random matrix ensembles, will recognize this kind of asymptotic result as a proper rescaling that `zooms in on the hard edge'. In this respect, we note that $N(R)\rightarrow \infty$ as $R\rightarrow \infty$. We prove \eqref{eq:rescaledmaintheorem} with the technique used by Kuijlaars and Mi\~{n}a-D\'{i}az in \cite{Kuijlaars_MinaDiaz}. 
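Lemma \ref{lem:rescalingweights} can also be checked numerically. For a weight $w$ one has $\widehat K_n(x,y;w) = v(x)^{T}M^{-1}v(y)$ with $v(x)=(1,x,\dots,x^{n-1})^{T}$ and moment matrix $M_{ij}=\int_a^b t^{i+j}w(t)\,dt$, since the orthonormal polynomials arise from a Cholesky factorization of $M$. The sketch below verifies \eqref{eq:norCDkernrescaled} for the toy weight $w\equiv 1$ on $[0,1]$; the values of $c$, $d$, and the evaluation points are arbitrary.

```python
import numpy as np

n, c, d = 4, 2.0, 3.0

def cd_kernel(x, y, moments, wx, wy):
    """Normalized CD kernel: K_n(x,y;w) = sqrt(w(x) w(y)) v(x)^T M^{-1} v(y)."""
    v = lambda t: np.array([t**i for i in range(n)])
    return np.sqrt(wx * wy) * v(x) @ np.linalg.inv(moments) @ v(y)

# weight w = 1 on [0,1]: moment matrix M_ij = 1/(i+j+1) (a Hilbert matrix)
M = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# rescaled weight wbar(t) = d * w(c t) = d on [0, 1/c]:
# moments Mbar_ij = d / (c^(i+j+1) * (i+j+1))
Mbar = np.array([[d / (c ** (i + j + 1) * (i + j + 1)) for j in range(n)]
                 for i in range(n)])

x, y = 0.3, 0.7
K = cd_kernel(x, y, M, 1.0, 1.0)            # K_n(x, y; w)
Kbar = cd_kernel(x / c, y / c, Mbar, d, d)  # K_n(x/c, y/c; wbar)
```

The identity $\frac{1}{c}K_n(\frac{x}{c},\frac{y}{c};\bar w)=K_n(x,y;w)$ then holds up to rounding error.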
Namely, we approximate the weight $w_{X,\nu,R}$ by exponential weights in Section \ref{subsec:estimatingweights}. Subsequently, we discuss the asymptotics of the kernels associated to these approximating weights in Section \ref{subsec:asymptoticsapprox}. Due to the technical nature of the associated Riemann-Hilbert problem, we postpone the proof of these asymptotics until Section \ref{sec:RHproblem}. First we will show, in Section \ref{subsec:asymptoticsactualweight}, that the asymptotics of the approximating weights actually imply \eqref{eq:rescaledmaintheorem} and hence Theorem \ref{thm:maintheorem}. For this, we use techniques introduced by Lubinsky in \cite{Lubinsky} that are also used in \cite{Kuijlaars_MinaDiaz}. \subsection{Approximating weights} \label{subsec:estimatingweights} As mentioned above, we approximate the weight $w_{X,\nu,R}$ by exponential weights; such weights are well-studied in the literature. For this, we use results from \cite{Kuijlaars_MinaDiaz}; we combine these results in the following lemma. \begin{lemma}[Kuijlaars--Mi\~{n}a-D\'{i}az, \cite{Kuijlaars_MinaDiaz}] \label{lem:approxKuijMin} Suppose that $(q_n)_{n\in \mathbb{Z}}$ is a strictly increasing doubly infinite sequence such that the following requirements hold: \begin{itemize} \item[(a)] The points are indexed such that \[\cdots < q_{-2} < q_{-1} < 0 \leq q_0< q_1 < \cdots,\] \item[(b)] the limit \[\lim_{S\rightarrow \infty} \sum_{0<\lvert q_n \rvert < S} \frac{1}{q_n}\] exists, \item[(c)] and \[\lim_{n \rightarrow \pm \infty} \frac{q_n}{n} =1.\] \end{itemize} Furthermore, let $\tilde{N}(R)$ be an integer depending on $R>0$ in such a way that \begin{equation} \label{eq:ratioNRsine} \lim_{R\rightarrow \infty} \frac{\tilde{N}(R)}{R}=2, \end{equation} and define \begin{equation} \label{eq:epsilonforsine} \epsilon_R = \frac{2R}{\tilde{N}(R)} \sum_{\lvert q_n \rvert >R} \frac{1}{q_n}. 
\end{equation} Lastly, let \begin{equation} \label{eq:weightsine} \tilde{w}_R(t)= \prod_{\lvert q_n\rvert >R} \Big(1-\frac{Rt}{q_n}\Big)^2, \qquad t\in [-1,1], \end{equation} and \begin{equation} \label{eq:sineexternalfield} \tilde{V}(t)=(1+t)\log(1+t) + (1-t)\log(1-t), \qquad t\in [-1,1]. \end{equation} Then the following two approximations hold: \begin{enumerate} \item For every $\alpha>1$, there is an $R_\alpha>0$ such that if $R\geq R_\alpha$, then \begin{equation} \label{eq:approxsineabove} \tilde{w}_R(t) \leq \exp(-\tilde{N}(R)(\tilde{V}(\tfrac{t}{\alpha})+\epsilon_R t)), \qquad t\in [-1,1]. \end{equation} \item For every $\alpha>1$ and $\beta\in (0,1)$, there is an $R_{\alpha,\beta}>0$ such that $R\geq R_{\alpha,\beta}$ implies \begin{equation} \label{eq:approxsinebelow} \tilde{w}_R(t) \geq \exp(-\tilde{N}(R)(\tilde{V}(\alpha t)+\epsilon_R t)), \qquad t\in [-\beta,\beta]. \end{equation} \end{enumerate} \end{lemma} These approximations were used to study the conditional measures of the sine process. Surprisingly, although we are studying the Bessel point process here, we can immediately use these approximations by making a suitable transformation. Instead of the external field \eqref{eq:sineexternalfield}, we consider \begin{equation} \label{eq:externalfield} V(t) = 2(1+\sqrt{t}) \log(1+\sqrt{t}) + 2(1-\sqrt{t}) \log(1-\sqrt{t}), \qquad t\in [0,1]. \end{equation} Clearly, we have \begin{equation} \label{eq:relationpotentialsBesselSine} V(t)=2\tilde{V}(\sqrt{t}), \end{equation} where $\tilde{V}$ is given by \eqref{eq:sineexternalfield}. Then, we find the following approximations. \begin{proposition} \label{prop:approx} Suppose that $X=(p_n)_{n=1}^\infty$ is an increasing sequence of positive numbers satisfying the growth condition \eqref{eq:growthconditionpoints}. 
Then, for every $\gamma>1$ there exists an $R_\gamma>0$ such that $R\geq R_\gamma$ implies \begin{equation} \label{eq:approx} t^{\nu} \exp(-N(R) V(\gamma t)) \mathfrak{1}_{[0,\gamma^{-2}]}(t) \leq w_{X,\nu,R}(t) \leq t^{\nu} \exp(-N(R) V(t/\gamma)), \qquad t\in [0,1], \end{equation} where $V$ is defined by \eqref{eq:externalfield} and $N(R)$ is the number of points in $X\cap (0,R]$. \end{proposition} \begin{proof} We notice that \begin{equation} \label{eq:splitweightintwoparts} t^{-\nu} w_{X,\nu,R}(t) = \prod_{p_n > R} \left(1-\frac{R t}{p_n}\right)^2 = \prod_{p_n>R} \left(1-\frac{\frac{\sqrt{R}}{\pi} \sqrt{t}}{\frac{\sqrt{p_n}}{\pi}}\right)^2 \left(1+\frac{\frac{\sqrt{R}}{\pi} \sqrt{t}}{\frac{\sqrt{p_n}}{\pi}}\right)^2, \qquad t\in [0,1]. \end{equation} Motivated by this, we define the doubly infinite sequence $(q_n)_{n\in \mathbb{Z}}$ by \[q_n = \begin{cases} \frac{\sqrt{p_n}}{\pi} &\text{if $n\geq 1$}\\ -\frac{\sqrt{p_{1-n}}}{\pi} &\text{if $n\leq 0$}. \end{cases}\] We note that this sequence $(q_n)_{n\in \mathbb{Z}}$ satisfies the requirements (a), (b) of Lemma \ref{lem:approxKuijMin} by construction, and (c) due to the growth condition \eqref{eq:growthconditionpoints} on $X$. We define $\tilde{N}(R)=2N(\pi^2R^2)$. Since we have that \begin{equation} \label{eq:RandNR} \lim_{R\rightarrow \infty} \frac{R}{N(R)^2}=\pi^2 \end{equation} by \eqref{eq:growthconditionpoints}, we also have that \eqref{eq:ratioNRsine} holds. Furthermore, we have that $q_n=-q_{1-n}$ for all $n\in \mathbb{Z}$, so \eqref{eq:epsilonforsine} gives us that $\epsilon_R=0$ for all $R>0$. The factorization in \eqref{eq:splitweightintwoparts} is especially useful since it implies that \begin{equation} \label{eq:wandwtilde} t^{-\nu} w_{X,\nu,R}(t) = \prod_{\lvert q_n \rvert > \frac{\sqrt{R}}{\pi}} \left(1-\frac{\frac{\sqrt{R}}{\pi} \sqrt{t}}{q_n}\right)^2 = \tilde{w}_{\frac{\sqrt{R}}{\pi}}(\sqrt{t}), \qquad t\in [0,1], \end{equation} where $\tilde{w}$ is given by \eqref{eq:weightsine}. 
Indeed, this allows us to transfer the approximations in \eqref{eq:approxsineabove}--\eqref{eq:approxsinebelow} to the approximations of the weight $w_{X,\nu,R}$. Namely, let $\gamma>1$ and let $\tilde{w}_R$ and $\tilde{V}$ be defined as in \eqref{eq:weightsine} and \eqref{eq:sineexternalfield}, respectively. Then since $\sqrt{\gamma}>1$, there is an $R_1>0$ such that $R\geq R_1$ implies that \begin{equation*} \tilde{w}_R(t) \leq \exp\left(-\tilde{N}(R)\tilde{V}\left(\tfrac{t}{\sqrt{\gamma}}\right)\right), \qquad t\in [-1,1], \end{equation*} by using \eqref{eq:approxsineabove}. Furthermore, if we take $\alpha=\sqrt{\gamma}$ and $\beta=\gamma^{-1}$, we have that $\alpha>1$ and $\beta\in (0,1)$, so if we apply \eqref{eq:approxsinebelow} we obtain that there is an $R_2>0$ such that $R\geq R_2$ implies that \begin{equation*} \tilde{w}_R(t) \geq \mathfrak{1}_{[0,\gamma^{-1}]}(t) \exp(-\tilde{N}(R)\tilde{V}(\sqrt{\gamma} t)), \qquad t\in [-1,1]. \end{equation*} Now define $R_\gamma = \pi^2 \max(R_1,R_2)^2$. Then $R\geq R_\gamma$ implies that $\frac{\sqrt{R}}{\pi}\geq \max(R_1,R_2)$, and that implies that we have \[\mathfrak{1}_{[0,\gamma^{-1}]}(\sqrt{t}) \exp(-\tilde{N}(\sqrt{R}/\pi)\tilde{V}(\sqrt{\gamma t})) \leq \tilde{w}_{\frac{\sqrt{R}}{\pi}}(\sqrt{t}) \leq \exp(-\tilde{N}(\sqrt{R}/\pi) \tilde{V}(\sqrt{t/\gamma})), \qquad t\in [0,1].\] Now using the fact that $\tilde{N}(\sqrt{R}/\pi)=2N(R)$ and $V(t)=2\tilde{V}(\sqrt{t})$, combined with \eqref{eq:wandwtilde}, gives us \[t^\nu \mathfrak{1}_{[0,\gamma^{-2}]}(t) \exp(-N(R) V(\gamma t)) \leq w_{X,\nu,R}(t) \leq t^\nu \exp(-N(R) V(t/\gamma)), \qquad t\in [0,1],\] for every $R\geq R_\gamma$. The existence of such an $R_\gamma$ for every $\gamma>1$ is exactly what we set out to prove. \end{proof} We refer to the weights which give a lower and an upper bound for the weight $w_{X,\nu,R}$ in \eqref{eq:approx} as the \textbf{approximating weights}. 
We reserve a notation for these weights, namely \begin{align} \label{eq:varyingexponentialweightplus} \omega_{\gamma,n,\nu}^+ (t)&= t^\nu \exp(-nV(t/\gamma)), &\qquad t\in [0,1], \\ \label{eq:varyingexponentialweightminus} \omega_{\gamma,n,\nu}^- (t)&= t^\nu \mathfrak{1}_{[0,\gamma^{-2}]}(t) \exp(-n V(\gamma t)), &\qquad t\in [0,1], \end{align} where, as before, $V$ is defined by \eqref{eq:externalfield}, $\gamma>1$, $n\geq 1$ is an integer and $\nu>-1$. Using this notation, we have that \eqref{eq:approx} precisely becomes \begin{equation} \label{eq:approxinnotation} \omega_{\gamma,N(R),\nu}^-(t) \leq w_{X,\nu,R}(t) \leq \omega_{\gamma,N(R),\nu}^+(t). \end{equation} We note that the two weights in \eqref{eq:varyingexponentialweightplus}-\eqref{eq:varyingexponentialweightminus} are transformed to each other according to \begin{equation} \label{eq:transformplusminus} \omega_{\gamma,n,\nu}^-(t)=\frac{1}{\gamma^{2\nu}} \omega_{\gamma,n,\nu}^+(\gamma^2 t), \qquad t\in [0,\gamma^{-2}]. \end{equation} \subsection{The relevant equilibrium measure} \label{subsec:equilibrium} As mentioned above, we do not study the asymptotics of the weight $w_{X,\nu,R}$ directly, but instead study the asymptotics of the approximating weights $\omega_{\gamma,N(R),\nu}^{\pm}$. In fact, by \eqref{eq:transformplusminus}, we are only required to study the weight $\omega_{\gamma,n,\nu}^+$. Since this weight is of the form \eqref{eq:varyingexponentialweightplus}, a reader familiar with this kind of asymptotics knows that one is interested in the \textit{equilibrium measure} of the external field \begin{equation} \label{eq:Vgamma} V_\gamma(t) =V(t/\gamma), \qquad t\in [0,1]. \end{equation} Therefore, we compute this equilibrium measure before turning to the actual asymptotics. 
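The reduction to $\omega^+_{\gamma,n,\nu}$ rests on the identity \eqref{eq:transformplusminus}, which follows by inserting $\gamma^2 t$ into \eqref{eq:varyingexponentialweightplus}: $\omega^+_{\gamma,n,\nu}(\gamma^2 t)=\gamma^{2\nu}\,t^\nu \exp(-nV(\gamma t))$. A numerical sketch (parameter values arbitrary):

```python
import numpy as np

def V(t):
    # external field (eq:externalfield); evaluated here for 0 <= t < 1
    s = np.sqrt(t)
    return 2 * (1 + s) * np.log(1 + s) + 2 * (1 - s) * np.log(1 - s)

def omega_plus(t, gamma, n, nu):
    return t**nu * np.exp(-n * V(t / gamma))

def omega_minus(t, gamma, n, nu):
    return np.where(t <= gamma**-2, t**nu * np.exp(-n * V(gamma * t)), 0.0)

gamma, n, nu = 1.5, 20, 0.3
t = np.linspace(0.01, gamma**-2 - 0.01, 50)
lhs = omega_minus(t, gamma, n, nu)
rhs = gamma ** (-2 * nu) * omega_plus(gamma**2 * t, gamma, n, nu)
```

Both sides agree to machine precision on $[0,\gamma^{-2}]$.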
For an arbitrary external field $\tilde{V}$ on some interval $[a,b]$, the associated \textbf{equilibrium measure} $\mu_{\tilde{V}}$ is the unique probability measure with support in $[a,b]$ for which there is a constant $\ell_{\tilde{V}}$ such that the following equation holds: \begin{equation} \label{eq:equilibriumproblem} 2\int_a^b \log|x-s| d\mu_{\tilde{V}}(s) \left\{\begin{array}{ll} = \tilde{V}(x) + \ell_{\tilde{V}},& x\in \operatorname{supp} \mu_{\tilde V},\\ \leq \tilde{V}(x) + \ell_{\tilde{V}},& x\in [a,b]\setminus \operatorname{supp} \mu_{\tilde V}. \end{array}\right. \end{equation} The search for this equilibrium measure is often referred to as the \textit{equilibrium problem}. We first solve this equilibrium problem on $[0,1]$ for the external field $V$ that was defined in \eqref{eq:externalfield}. \begin{lemma} \label{lem:eqmeasureforexternalfield} The equilibrium measure on $[0,1]$ for the external field $V$ given by \eqref{eq:externalfield} is \begin{equation} \label{eq:equilibriumgamma1} \frac{d\mu_V(s)}{ds}=\frac{1}{2\sqrt{s}}, \qquad s\in [0,1], \end{equation} and the constant is $\ell_V=-4$. \end{lemma} \begin{proof} For any $x\in [0,1]$ we have, by making the substitution $s=t^2$, \begin{align*} 2\int_0^1 \log|x-s| \frac{1}{2\sqrt{s}} ds &= 2 \int_0^1 \log \lvert x-t^2 \rvert dt \\ &=2 \int_0^1 (\log \lvert \sqrt{x} +t \rvert +\log \lvert \sqrt{x} -t \rvert) dt \\ &=2 \int_{-1}^1 \log \lvert \sqrt{x} -t \rvert dt \\ &= V(x)-4, \end{align*} where in the last step we used the elementary evaluation $\int_{-1}^1 \log \lvert a-t \rvert dt = (1+a)\log(1+a)+(1-a)\log(1-a)-2$ for $a\in [0,1]$, which also underlies the computation of the equilibrium measure of the external field that plays a role in \cite{Kuijlaars_MinaDiaz}. \end{proof} Now we turn to the external field $V_\gamma$, as in \eqref{eq:Vgamma}. For brevity of notation, we denote the equilibrium measure on $[0,1]$ associated to this deformed external field by $\mu_\gamma$. We have an explicit expression. \begin{lemma} Let $\gamma>1$. 
The equilibrium measure $\mu_\gamma$ on $[0,1]$ for the external field $V_\gamma$ has a density with support $[0,1]$, given explicitly by \begin{align} \label{eq:equilibriummeasuregamma} \frac{d\mu_\gamma}{ds} &= \frac{1}{\sqrt{\gamma s}} \left(\frac{1}{2}+\frac{1}{\pi} \sqrt{\frac{\gamma-1}{1-s}} -\frac{1}{\pi} \arctan\left(\sqrt{\frac{\gamma-1}{1-s}}\right)\right), & s\in (0,1). \end{align} \end{lemma} \begin{proof} First we note that the measure $m_\gamma$ with density \begin{equation} \frac{dm_\gamma}{ds}(s) := \frac{1}{2\sqrt{\gamma s}}, \qquad s\in[0,\gamma], \end{equation} is a probability measure on $[0,\gamma]$, which can easily be checked. Next, we note that for any $x\in[0,\gamma]$ we have that \begin{equation*} 2\int_0^\gamma \log|x-s| dm_\gamma(s) = 2\int_0^1 \log\lvert x-\gamma t \rvert \frac{dt}{2\sqrt{t}} = 2\log(\gamma) + 2\int_0^1 \log \Big\lvert \tfrac{x}{\gamma} -t \Big\rvert \frac{dt}{2\sqrt{t}}. \end{equation*} Now, making use of Lemma \ref{lem:eqmeasureforexternalfield}, we see that the last term is in fact equal to $V_\gamma(x)$. Hence we obtain that \begin{equation*} 2\int_0^\gamma \log|x-s| dm_\gamma(s) = V_\gamma(x)+2\log(\gamma), \end{equation*} which means that the measure $m_\gamma$ satisfies all the requirements of the equilibrium problem \eqref{eq:equilibriumproblem}, except that it is supported on $[0,\gamma]$ instead of a subset of $[0,1]$. There is a general framework for dealing with this peculiarity, namely via \textit{balayage}. According to equation (4.47) in \cite[Chapter II]{Saff_Totik}, we now have for $s\in [0,1]$ \begin{align*} \frac{d\mu_\gamma}{ds} &= \frac{dm_\gamma}{ds}+\frac{1}{\pi} \int_1^\gamma \frac{\sqrt{x(x-1)}}{(x-s)\sqrt{s(1-s)}} dm_\gamma(x)\\ &= \frac{1}{2\sqrt{\gamma s}} + \frac{1}{2\pi\sqrt \gamma} \frac{1}{\sqrt{s(1-s)}}\int_1^\gamma \frac{\sqrt{x-1}}{x-s} dx.
\end{align*} It can be checked that for $s\in(0,1)$ \[ \int_1^\gamma \frac{\sqrt{x-1}}{x-s} dx = 2\sqrt{\gamma-1} - 2\sqrt{1-s} \arctan\left(\sqrt{\frac{\gamma-1}{1-s}}\right), \] and therefore \eqref{eq:equilibriummeasuregamma} follows. \end{proof} We notice that the behaviour of the equilibrium measure around $s=0$ and $s=1$ immediately follows from \eqref{eq:equilibriummeasuregamma}. Namely, for fixed $\gamma>1$ we have \begin{align} \frac{d\mu_\gamma}{ds} &= s^{-\frac{1}{2}} \frac{1}{\sqrt \gamma} \left(\frac{1}{2} + \frac{\sqrt{\gamma-1}-\arctan{\sqrt{\gamma-1}}}{\pi}\right)+\mathcal O(s^\frac{1}{2}), & s\to 0^+ \label{eq:equilbriumaround0}\\ \frac{d\mu_\gamma}{ds} &= \frac{1}{\pi} \sqrt{\frac{\gamma-1}{\gamma}} (1-s)^{-\frac{1}{2}}+\mathcal O((1-s)^\frac{1}{2}), & s\to 1^-.\label{eq:equilbriumaround1} \end{align} \subsection{Asymptotics for the approximating weights} \label{subsec:asymptoticsapprox} For the asymptotics of the approximating weight $\omega_{\gamma,n,\nu}^+$, we use the constant \begin{equation} \label{eq:defcgamma} c_\gamma=\frac{\pi^2 \gamma}{(\pi+2(\sqrt{\gamma-1}-\arctan \sqrt{\gamma-1}))^2}, \end{equation} that we consider for $\gamma>1$. The reason for this specific choice of constant follows from the Riemann-Hilbert problem that we study in Section \ref{sec:RHproblem}. We note that $c_\gamma>1$ and that \begin{equation} \lim_{\gamma \rightarrow 1^+} c_\gamma =1. \end{equation} We then have the following. \begin{proposition} \label{prop:resultRHprob} Suppose that $\gamma>1$ and let $c_\gamma$ be defined by \eqref{eq:defcgamma}. Then the normalized Christoffel-Darboux kernel of the weight in \eqref{eq:varyingexponentialweightplus} satisfies \begin{equation} \label{eq:resultRHprob} \lim_{n\rightarrow \infty} \frac{c_\gamma}{\pi^2 n^2} K_n\left(\frac{c_\gamma x}{\pi^2 n^2},\frac{c_\gamma y}{\pi^2 n^2}; \omega_{\gamma,n,\nu}^+\right) = J_\nu(x,y), \end{equation} uniformly for $(x,y)$ in compact subsets of $(0,\infty)^2$. 
\end{proposition} Proposition \ref{prop:resultRHprob} can be proven using standard Riemann-Hilbert techniques. These techniques were developed as a tool to study the asymptotics of orthogonal polynomials. This started with the seminal paper by Fokas, Its and Kitaev \cite{Fokas_Its_Kitaev}, who studied orthogonal polynomials with respect to weights on contours in the complex plane. These techniques were refined by (amongst others) Deift, Kriecherbauer, McLaughlin, Venakides and Zhou \cite{Deift,Deift_Kriecherbauer_McLaughlin_Venakides_Zhou} for weights on the real line. The use of Riemann-Hilbert problems was later extended to weights supported not on the whole real line but on bounded intervals. Kuijlaars, McLaughlin, Van Assche and Vanlessen \cite{Kuijlaars_2004,Kuijlaars_2002} solved these problems in a way that has since become the standard. The main difference between the analysis on a bounded interval and that on the whole real line lies in the local analysis that needs to be undertaken at the endpoints. In a Riemann-Hilbert problem, the local parametrices are meant to deal with this local analysis. In our case, we are interested in the local parametrix around $z=0$, where the equilibrium measure blows up as an inverse square root, according to \eqref{eq:equilbriumaround0}. It is well-known that if this is the case and the weight is of the form \eqref{eq:varyingexponentialweightplus}, i.e., of varying exponential type, then one can use a modification of the \textit{Bessel parametrix} that was defined in \cite{Kuijlaars_2004} to find a solution of the local parametrix problem at hand. This is precisely what we do in Section \ref{sec:RHproblem}. Similar analyses have been carried out in, for example, \cite[Section 7.1]{Celsus_Silva}, \cite[Section IV.D]{Charlier_Claeys}, and \cite[Section 4.1.4]{Deano}. To explain the appearance of the constant $c_\gamma$ in \eqref{eq:resultRHprob}, we provide the details of the Riemann-Hilbert problem in Section \ref{sec:RHproblem}.
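While the derivation of $c_\gamma$ is deferred to Section \ref{sec:RHproblem}, the two elementary properties stated after \eqref{eq:defcgamma}, namely $c_\gamma>1$ and $c_\gamma\to 1$ as $\gamma\to 1^+$, can be confirmed immediately by evaluating the definition; the following Python sketch (illustrative only, not part of the formal argument) does so on a logarithmic grid of values of $\gamma$.

```python
# Illustrative evaluation of c_gamma (eq:defcgamma): c_gamma > 1 for all
# gamma > 1, and c_gamma -> 1 as gamma -> 1+.
import math

def c(gamma):
    u = math.sqrt(gamma - 1.0)
    return math.pi**2 * gamma / (math.pi + 2.0 * (u - math.atan(u)))**2

values = [c(1.0 + 10.0**k) for k in range(-6, 3)]  # gamma from 1 + 1e-6 to 101
all_above_one = all(v > 1.0 for v in values)
near_one_gap = abs(c(1.0 + 1e-10) - 1.0)
print(all_above_one, near_one_gap)
```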
The reader who is familiar with Riemann-Hilbert problems will already be familiar with these details. \begin{remark} If one takes $\gamma=1$, one obtains the equilibrium measure in \eqref{eq:equilibriumgamma1}. Note specifically that its behaviour around $s=1$ is qualitatively different from \eqref{eq:equilbriumaround1}; it does not blow up as $(1-s)^{-\frac{1}{2}}$ but in fact has leading order $O(1)$. We would not know how to solve the corresponding local parametrix problem. \end{remark} By the transformation property \eqref{eq:transformplusminus} of the approximating weights and the general transformation rule \eqref{eq:norCDkernrescaled} for normalized Christoffel-Darboux kernels, Proposition \ref{prop:resultRHprob} immediately implies the asymptotics for the other approximating weight. \begin{corollary} \label{cor:resultRHprobminus} The normalized Christoffel-Darboux kernel of the weight in \eqref{eq:varyingexponentialweightminus} satisfies \begin{equation} \label{eq:resultRHprobminus} \lim_{n\rightarrow \infty} \frac{1}{c_\gamma \pi^2 n^2} K_n\left(\frac{x}{c_\gamma \pi^2 n^2},\frac{y}{c_\gamma \pi^2 n^2}; \omega_{\gamma,n,\nu}^-\right) = J_\nu(x,y), \end{equation} uniformly for $(x,y)$ in compact subsets of $(0,\infty)^2$, for every $\gamma>1$. \end{corollary} For further analysis, it is also important to note that the approximating weights satisfy the following asymptotic behaviour uniformly on compact sets: \begin{equation} \label{eq:asymptoticapproxweight} \lim_{n\rightarrow \infty} n^{2\nu} \omega_{\gamma,n,\nu}^\pm\left(\frac{x}{n^2}\right) = x^\nu. \end{equation} This follows readily from the behaviour $V(x)=\mathcal O(x)$ as $x\to 0$. In particular, we note that the limit is independent of $\gamma$. We proceed by considering the non-normalized kernels of the approximating weights.
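The deduction of Corollary \ref{cor:resultRHprobminus} rests on the transformation rule \eqref{eq:norCDkernrescaled}, which is not restated in this section. Assuming it takes the natural form $K_n(x,y;w(c\,\cdot))=c\,K_n(cx,cy;w)$, the rule can be tested numerically in the simplest setting of a constant weight, for which the kernel coincides with the reproducing kernel $K_n(x,y)=v(x)^{\mathsf{T}}G^{-1}v(y)$, with $G$ the Gram matrix of monomials and $v(x)=(1,x,\dots,x^{n-1})$. The Python sketch below, with the hypothetical helper \texttt{cd\_kernel}, is illustrative only.

```python
# Illustrative check of the kernel rescaling rule, assumed to read
# K_n(x, y; w(c .)) = c * K_n(c x, c y; w), for the constant weight 1.
# cd_kernel is a hypothetical helper computing the reproducing kernel of
# polynomials of degree < n via the Gram (moment) matrix on [0, b].
import numpy as np

def cd_kernel(x, y, b, n):
    # Gram matrix of monomials for the weight 1 on [0, b]
    G = np.array([[b**(i + j + 1) / (i + j + 1) for j in range(n)]
                  for i in range(n)])
    vx = np.array([x**k for k in range(n)], dtype=float)
    vy = np.array([y**k for k in range(n)], dtype=float)
    return vx @ np.linalg.solve(G, vy)

n, c = 5, 3.0
x, y = 0.07, 0.11                      # points inside [0, 1/c]
lhs = cd_kernel(x, y, 1.0 / c, n)      # kernel of the rescaled weight
rhs = c * cd_kernel(c * x, c * y, 1.0, n)
print(lhs, rhs)
```

For a constant weight the normalized and non-normalized kernels coincide, so the same check is relevant to both.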
\begin{corollary} \label{cor:nonnormalizedresult} The non-normalized Christoffel-Darboux kernels of the weights in \eqref{eq:varyingexponentialweightplus} and \eqref{eq:varyingexponentialweightminus} have the following asymptotic behaviour: \begin{align} \lim_{n \rightarrow \infty} \frac{1}{(\pi n)^{2+2\nu}} \widehat{K}_n\left(\frac{x}{\pi^2 n^2},\frac{y}{\pi^2 n^2}; \omega_{\gamma,n,\nu}^+\right) &= (xy)^{-\nu/2} \frac{1}{c_\gamma} J_\nu\left(\frac{x}{c_\gamma},\frac{y}{c_\gamma}\right), \label{eq:resultRHprobhatplus} \\ \lim_{n \rightarrow \infty} \frac{1}{(\pi n)^{2+2\nu}} \widehat{K}_n\left(\frac{x}{\pi^2 n^2},\frac{y}{\pi^2 n^2}; \omega_{\gamma,n,\nu}^-\right) &= (xy)^{-\nu/2} c_\gamma J_\nu\left(c_\gamma x,c_\gamma y\right), \label{eq:resultRHprobhatminus} \end{align} uniformly for $(x,y)$ in compact subsets of $(0,\infty)^2$, for all $\gamma>1$. \end{corollary} \begin{proof} This follows directly by replacing $(x,y)$ by $c_\gamma^{\mp 1} (x,y)$ and combining \eqref{eq:resultRHprob} and \eqref{eq:resultRHprobminus} with the relationship between the normalized and non-normalized Christoffel-Darboux kernel and the asymptotic behaviour \eqref{eq:asymptoticapproxweight}. \end{proof} \subsection{Asymptotics for the actual weight} \label{subsec:asymptoticsactualweight} In this section, we use the above asymptotics for the kernels of the approximating weights $\omega_{\gamma,n,\nu}^\pm$ to obtain the asymptotics for the kernel of the weight $w_{X,\nu,R}$, that is, Theorem \ref{thm:maintheorem}. Informally, we let $\gamma \rightarrow 1^+$ and thereby `squeeze in' the kernel of interest, using \eqref{eq:approxinnotation}. \begin{proposition} \label{prop:maintheoremforNRhat} Suppose that $X=(p_n)_{n=1}^\infty$ is a strictly increasing sequence of positive numbers satisfying the growth condition \eqref{eq:growthconditionpoints}, and for every $R>0$, let $N(R)$ be the number of points in $X\cap (0,R]$.
Then, uniformly for $(x,y)$ in compact subsets of $(0,\infty)^2$, we have that \begin{equation} \label{eq:maintheoremforNRhat} \lim_{R \rightarrow \infty} \frac{1}{(\pi N(R))^{2+2\nu}} \widehat{K}_{N(R)}\left(\frac{x}{\pi^2 N(R)^2},\frac{y}{\pi^2 N(R)^2}; w_{X,\nu,R}\right) = (xy)^{-\frac{\nu}{2}} J_\nu(x,y). \end{equation} \end{proposition} \begin{proof} We take Proposition \ref{prop:resultRHprob}, and in particular Corollary \ref{cor:nonnormalizedresult}, as given. First we prove \eqref{eq:maintheoremforNRhat} on the diagonal, that is $x=y$, and then we extend this result to all $(x,y)$ using a technique of Lubinsky developed in \cite{Lubinsky}. Now let $\gamma>1$. By Proposition \ref{prop:approx}, we know that there is an $R_\gamma>0$ such that for all $R\geq R_\gamma$, we have that \begin{align} \label{eq:weightsIneqLub} \omega_{\gamma,N(R),\nu}^-(t) \leq w_{X,\nu,R}(t) \leq \omega_{\gamma,N(R),\nu}^+(t), \qquad t\in [0,1]. \end{align} Then, by using the extremal property of the Christoffel function (cf. \eqref{eq:minpropChr}), we also have for all $R\geq R_\gamma$ and for all $x\in [0,1]$, that \begin{align} \label{eq:kernelsIneqLub} \widehat{K}_{N(R)}(x,x; \omega_{\gamma,N(R),\nu}^+) \leq \widehat{K}_{N(R)}(x,x; w_{X,\nu,R}) \leq \widehat{K}_{N(R)}(x,x; \omega_{\gamma,N(R),\nu}^-). \end{align} Due to the differentiability of $(x,y)\mapsto \sqrt{x y} J_\nu(x,y)$ on $(0,\infty)^2$, for every compact subset of $(0,\infty)^2$ there exists a constant $M>0$ such that, uniformly on that subset, \begin{equation} \label{eq:existenceM1} (xy)^{-\nu/2} \Big\lvert J_\nu(x,y) - \frac{1}{c_\gamma} J_\nu\left(\frac{x}{c_\gamma},\frac{y}{c_\gamma}\right) \Big\rvert \leq M\left(1-\frac{1}{c_\gamma}\right). \end{equation} Now let $x$ be in a compact set $S$ and take $R$ large enough that $x_R:=\frac{x}{\pi^2 N(R)^2}\in (0,1)$ for all $x\in S$. Furthermore, let $\varepsilon>0$.
By \eqref{eq:kernelsIneqLub} and \eqref{eq:existenceM1} we can find a $\gamma>1$ such that uniformly on $S$ \begin{align} \label{eq:+K-1} \frac{1}{(\pi N(R))^{2+2\nu}} &\widehat{K}_{N(R)}(x_R,x_R; \omega_{\gamma,N(R),\nu}^+) - x^{-\nu} \frac{1}{c_\gamma} J_\nu\left(\frac{x}{c_\gamma},\frac{x}{c_\gamma}\right) - \varepsilon \\ \label{eq:+K-2} &\leq \frac{1}{(\pi N(R))^{2+2\nu}} \widehat{K}_{N(R)}(x_R,x_R; w_{X,\nu,R}) - x^{-\nu} J_\nu(x,x) \\ \label{eq:+K-3} &\leq \frac{1}{(\pi N(R))^{2+2\nu}} \widehat{K}_{N(R)}(x_R,x_R; \omega_{\gamma,N(R),\nu}^-) - x^{-\nu} c_\gamma J_\nu\left(c_\gamma x,c_\gamma x\right) + \varepsilon. \end{align} Here we have tacitly used that $c_\gamma$ tends to $1$ as $\gamma\to 1^+$. Then using the uniform convergence in Corollary \ref{cor:nonnormalizedresult} for the lower bound \eqref{eq:+K-1} and the upper bound \eqref{eq:+K-3} we infer that for $R$ big enough we have that uniformly for $x\in S$ \begin{align} \label{eq:eKe} -2\varepsilon \leq \frac{1}{(\pi N(R))^{2+2\nu}} \widehat{K}_{N(R)}(x_R,x_R; w_{X,\nu,R}) - x^{-\nu} J_\nu(x,x) \leq 2\varepsilon. \end{align} This proves \eqref{eq:maintheoremforNRhat} on the diagonal. For general $x,y>0$ we can use the result on the diagonal to prove that \eqref{eq:maintheoremforNRhat} holds on compact subsets. By \eqref{eq:weightsIneqLub} we may use an inequality by Lubinsky \cite{Lubinsky}, given by \begin{multline} \label{eq:LubOmega+w} \Big \lvert \widehat{K}_{N(R)}(x_R,y_R; w_{X,\nu,R}) - \widehat{K}_{N(R)}(x_R, y_R; \omega_{\gamma,N(R),\nu}^+) \Big \rvert^2 \\ \leq \Big\lvert \widehat{K}_{N(R)}(y_R,y_R; w_{X,\nu,R}) \Big\rvert \left(\widehat{K}_{N(R)}(x_R,x_R; w_{X,\nu,R})-\widehat{K}_{N(R)}(x_R,x_R; \omega_{\gamma,N(R),\nu}^+)\right). 
\end{multline} The proposition now follows by an argument similar to the one we applied on the diagonal, using the uniform convergence of \eqref{eq:maintheoremforNRhat} on compact subsets of the diagonal that we just proved, \eqref{eq:resultRHprobhatplus} and \eqref{eq:existenceM1}. \end{proof} By combining \eqref{eq:approxinnotation} and \eqref{eq:asymptoticapproxweight}, we immediately obtain \begin{equation} \label{eq:asympforweight} \lim_{R\rightarrow \infty} \pi^{2\nu} N(R)^{2\nu} w_{X,\nu,R}\left(\frac{x}{\pi^2 N(R)^2}\right) = x^\nu, \end{equation} uniformly on compact subsets. From this, we can directly conclude the limiting behaviour of the normalized Christoffel-Darboux kernel of the weight $w_{X,\nu,R}$. \begin{corollary} \label{cor:maintheoremforNR} Suppose that $X=(p_n)_{n=1}^\infty$ is a strictly increasing sequence of positive numbers satisfying the growth condition \eqref{eq:growthconditionpoints}, and for every $R>0$, let $N(R)$ be the number of points in $X\cap (0,R]$. Then, uniformly for $(x,y)$ in compact subsets of $(0,\infty)^2$, we have that \begin{equation} \label{eq:maintheoremforNR} \lim_{R \rightarrow \infty} \frac{1}{\pi^2 N(R)^2} K_{N(R)}\left(\frac{x}{\pi^2 N(R)^2},\frac{y}{\pi^2 N(R)^2}; w_{X,\nu,R}\right) = J_\nu(x,y). \end{equation} \end{corollary} \begin{proof} Combine the relationship between the normalized and the non-normalized Christoffel-Darboux kernel with Proposition \ref{prop:maintheoremforNRhat} and \eqref{eq:asympforweight}. \end{proof} As a corollary, we have our main theorem. \begin{proof}[Proof of Theorem \ref{thm:maintheorem}] This directly follows from Corollary \ref{cor:maintheoremforNR} and \eqref{eq:RandNR}. \end{proof} \section{The Riemann-Hilbert problem} \label{sec:RHproblem} We now give the details of the Riemann-Hilbert problem for the weight $\omega_{\gamma,n,\nu}^+$, which proves Proposition \ref{prop:resultRHprob}. For this, we fix $\gamma>1$ and $\nu>-1$.
We also already note that $\omega_{\gamma,n,\nu}^+(x)=O(x^\nu)$ as $x\downarrow 0$ and that $\omega_{\gamma,n,\nu}^+$ is bounded as $x\to 1$, uniformly in $n$. Furthermore, for convenience, we define the function \begin{equation} \label{eq:defhnu} h_\nu(z) = \left\{\begin{array}{rl} 1 & \nu>0\\ \log z & \nu=0\\ z^\nu & \nu <0. \end{array}\right. \end{equation} The Riemann-Hilbert problem that we consider is the following: \begin{itemize} \item[RH-Y1] $Y : \mathbb{C}\setminus [0,1]\to \mathbb{C}^{2\times 2}$ is analytic. \item[RH-Y2] For $x\in(0,1)$ we have (oriented away from the origin) \[Y_+(x) = Y_-(x) \begin{pmatrix} 1 & \omega_{\gamma,n,\nu}^+(x)\\ 0 & 1\end{pmatrix}.\] \item[RH-Y3] As $z\to \infty$ we have \[Y(z) = (I+O(1/z)) \begin{pmatrix} z^n & 0\\ 0 & z^{-n}\end{pmatrix}.\] \item[RH-Y4a] As $z\to 0$ we have \[Y(z) = O \begin{pmatrix} 1 & h_\nu(z)\\ 1 & h_\nu(z)\end{pmatrix}.\] \item[RH-Y4b] As $z\to 1$ we have \[Y(z) = O \begin{pmatrix} 1 & \log(z-1)\\ 1 & \log(z-1) \end{pmatrix}.\] \end{itemize} In \cite{Kuijlaars_2004}, this Riemann-Hilbert problem was used for the first time to study the asymptotics of orthogonal polynomials on a bounded interval. We explicitly state that its solution is given by \begin{equation} \label{eq:explicitY} Y(z) = \begin{pmatrix} \gamma_n^{-1} \varphi_n(z;\omega_{\gamma,n,\nu}^+) & \frac{\gamma_n^{-1}}{2\pi i} \int_0^1 \frac{\varphi_n(s;\omega_{\gamma,n,\nu}^+)\omega_{\gamma,n,\nu}^+(s)}{s-z} ds\\ -2\pi i\gamma_{n-1} \varphi_{n-1}(z;\omega_{\gamma,n,\nu}^+) & -\gamma_{n-1} \int_0^1 \frac{\varphi_{n-1}(s;\omega_{\gamma,n,\nu}^+) \omega_{\gamma,n,\nu}^+(s)}{s-z} ds \end{pmatrix}, \end{equation} where the constant $\gamma_n$ is the leading coefficient of the orthonormal polynomial $\varphi_n(z;\omega_{\gamma,n,\nu}^+)$. 
The normalized Christoffel-Darboux kernel is explicitly given in terms of the solution $Y$ by: \begin{equation} \label{eq:kernelinY} K_n(x,y;\omega_{\gamma,n,\nu}^+) = \frac{1}{2\pi i (x-y)} \sqrt{\omega_{\gamma,n,\nu}^+(x) \omega_{\gamma,n,\nu}^+(y)} \begin{pmatrix} 0 & 1\end{pmatrix} Y_+(y)^{-1} Y_+(x) \begin{pmatrix} 1\\ 0 \end{pmatrix}. \end{equation} We note that in our analysis we use the convention of taking the principal branch for logarithms and power functions, i.e.\ these have a cut along $(-\infty,0]$ and are positive for large positive arguments. \subsection{First transformation: normalization} For our fixed $\gamma>1$, we define the function $g_\gamma:\mathbb{C}\setminus (-\infty,1]\to \mathbb{C}$ by \begin{equation} g_{\gamma}(z) = \int_0^1 \log(z-s) d\mu_\gamma(s), \end{equation} where the logarithm is defined with argument in $(-\pi,\pi)$, as usual, and $\mu_\gamma$ is the equilibrium measure \eqref{eq:equilibriummeasuregamma}. Then $g_\gamma$ is analytic on $\mathbb{C}\setminus (-\infty,1]$ and clearly satisfies \begin{equation} \label{eq:behaviourgatinfinity} g_\gamma(z) = \log(z)+\mathcal O\left(\frac{1}{z}\right), \qquad \textrm{as} \ z\to\infty. \end{equation} We note that $g_\gamma$ is bounded around $z=0$ and $z=1$, and furthermore, we have \begin{align} g_{\gamma,+}(x)+g_{\gamma,-}(x) &= 2 \int_0^1 \log|x-s| d\mu_\gamma(s) = V_\gamma(x)+\ell_\gamma, & x\in (0,1),\label{eq:gplus+minusinterval}\\ g_{\gamma,+}(x)-g_{\gamma,-}(x) &= 2\pi i\int_x^1 d\mu_\gamma(s), & x\in (0,1), \label{eq:gplus-minusinterval}\\ g_{\gamma,+}(x)-g_{\gamma,-}(x) &= 2\pi i, & x\leq 0 \label{eq:gplus-minusnegative}.
\end{align} Now define the function $T: \mathbb{C}\setminus [0,1] \rightarrow \mathbb{C}^{2\times 2}$ by \begin{equation} \label{eq:fromYtoT} T(z) = \begin{pmatrix} e^{-\frac{n\ell_\gamma}{2}} & 0\\ 0 & e^{\frac{n\ell_\gamma}{2}}\end{pmatrix} Y(z) \begin{pmatrix} e^{-n g_\gamma(z)} & 0\\ 0 & e^{n g_\gamma(z)}\end{pmatrix} \begin{pmatrix} e^{\frac{n\ell_\gamma}{2}} & 0\\ 0 & e^{-\frac{n\ell_\gamma}{2}}\end{pmatrix}, \end{equation} where $\ell_\gamma$ is the constant of the equilibrium problem \eqref{eq:equilibriumproblem} associated to $V_\gamma$. Then, by the above properties of $g_\gamma$, $T$ satisfies the following Riemann-Hilbert problem. \begin{itemize} \item[RH-T1] $T:\mathbb{C}\setminus [0,1]\to \mathbb{C}^{2\times 2}$ is analytic. \item[RH-T2] For $x\in(0,1)$ we have (oriented away from the origin) \begin{equation} \label{eq:RHT2} T_+(x) = T_-(x) \begin{pmatrix} e^{2\pi i n \int_0^x d\mu_\gamma(s)} & x^\nu\\ 0 & e^{-2\pi i n\int_0^x d\mu_\gamma(s)}\end{pmatrix}. \end{equation} \item[RH-T3] As $z\to\infty$ we have $T(z) = I+O(1/z)$.
\item[RH-T4a] As $z\to 0$ we have \[T(z) = O \begin{pmatrix} 1 & h_\nu(z)\\ 1 & h_\nu(z)\end{pmatrix}.\] \item[RH-T4b] As $z\to 1$ we have \[T(z) = O \begin{pmatrix} 1 & \log(z-1)\\ 1 & \log(z-1) \end{pmatrix}.\] \end{itemize} \subsection{Opening of the lens} \label{subsec:openinglens} In order to open the lens, we define the following function: \begin{equation} \label{eq:defphi} \varphi_\gamma(z) = \left\{ \begin{array}{rl} \displaystyle \log\left(\frac{\sqrt\gamma\sqrt{1-z}+i\sqrt z\sqrt{\gamma-1}}{\sqrt\gamma\sqrt{1-z}-i\sqrt z\sqrt{\gamma-1}}\right) + \sqrt{\frac{z}{\gamma}} \log\left(\frac{\sqrt{\gamma-1}+i\sqrt{1-z}}{\sqrt{\gamma-1}-i\sqrt{1-z}}\right), & \operatorname{Im}(z)>0,\\ \displaystyle \log\left(\frac{\sqrt\gamma\sqrt{1-z}-i\sqrt z\sqrt{\gamma-1}}{\sqrt\gamma\sqrt{1-z}+i\sqrt z\sqrt{\gamma-1}}\right) + \sqrt{\frac{z}{\gamma}} \log\left(\frac{\sqrt{\gamma-1}-i\sqrt{1-z}}{\sqrt{\gamma-1}+i\sqrt{1-z}}\right), & \operatorname{Im}(z)<0. \end{array} \right. \end{equation} To avoid confusion, we repeat that we use the convention of taking the principal branch for the logarithm and power functions (square root functions in this case). We claim that this function $\varphi_\gamma$ is well-defined and analytic on $\mathbb{C}\setminus \mathbb{R}$. To see this, note that both arguments of the logarithms are of the form \[\frac{1+i\zeta}{1-i\zeta},\] and that such an expression lies in $(-\infty,0]$ only if $\zeta$ is purely imaginary: the linear fractional map $\zeta \mapsto \frac{1+i\zeta}{1-i\zeta}$ sends the (projectively) extended real line to the unit circle and the extended imaginary axis to the extended real line. In the first logarithm, $\zeta$ is a square root of a positive multiple of $z/(1-z)$, and in the second, a square root of a positive multiple of $1-z$; in either case, $\zeta$ can only be purely imaginary when $z\in \mathbb{R}$. Since the principal square roots appearing in \eqref{eq:defphi} are themselves analytic off the real line, we conclude that $\varphi_\gamma$ is indeed analytic on $\mathbb{C}\setminus \mathbb{R}$. Next, we view $\mathbb{R}$ as a contour oriented from $-\infty$ to $+\infty$. Then we have the following boundary values for $\varphi_\gamma$.
\begin{lemma} \label{lem:behaviourphirealaxis} We have that \begin{align} \varphi_{\gamma,\pm}(x) &= \pm \pi i\int_0^x d\mu_\gamma(s), &x\in(0,1), \label{eq:phiOn01} \\ \varphi_{\gamma,+}(x)&=\varphi_{\gamma,-}(x), &x<0, \label{eq:phinegative} \end{align} whence $\varphi_\gamma$ can be analytically continued to form an analytic function $\varphi_\gamma: \mathbb{C}\setminus [0,\infty)\to\mathbb C$. \end{lemma} \begin{proof} For \eqref{eq:phiOn01}, we note that we clearly have that $\varphi_{\gamma,-}(x)=-\varphi_{\gamma,+}(x)$ for all $x\in (0,1)$ by construction of $\varphi_\gamma$. Hence, we only prove the identity for $\varphi_{\gamma,+}$. Now, by differentiating both sides of the following identity and comparing the values at $x=0$ (using \eqref{eq:equilibriummeasuregamma}), we obtain \[\int_0^x d\mu_\gamma(s) = \sqrt{\frac{x}{\gamma}} +\frac{2}{\pi}\arctan\left(\sqrt{\frac{\gamma-1}{\gamma}}\sqrt{\frac{x}{1-x}}\right) - \frac{2}{\pi}\sqrt{\frac{x}{\gamma}} \arctan\left(\sqrt{\frac{\gamma-1}{1-x}}\right), \qquad x\in (0,1).\] Then using that $\arctan(y)+\arctan(1/y)=\frac{\pi}{2}$ if $y>0$, we obtain that \begin{align*} i\pi\int_0^x d\mu_\gamma(s) &= 2i\arctan\left(\sqrt{\frac{\gamma-1}{\gamma}}\sqrt{\frac{x}{1-x}}\right) + 2i\sqrt{\frac{x}{\gamma}} \arctan\left(\sqrt{\frac{1-x}{\gamma-1}}\right)\\ &= \log\left(\frac{\sqrt\gamma\sqrt{1-x}+i\sqrt x\sqrt{\gamma-1}}{\sqrt\gamma\sqrt{1-x}-i\sqrt x\sqrt{\gamma-1}}\right) + \sqrt{\frac{x}{\gamma}} \log\left(\frac{\sqrt{\gamma-1}+i\sqrt{1-x}}{\sqrt{\gamma-1}-i\sqrt{1-x}}\right), \end{align*} where we used the standard identity $2i\arctan(t)=\log\frac{1+it}{1-it}$, valid for real $t$. The last expression is clearly equal to $\varphi_{\gamma,+}(x)$; this establishes \eqref{eq:phiOn01}. Since $\lim_{\varepsilon \rightarrow 0^+} \sqrt{x+i\varepsilon} = -\lim_{\varepsilon \rightarrow 0^+} \sqrt{x-i\varepsilon}$ for $x<0$, we conclude that \eqref{eq:phinegative} holds too. \end{proof} We now know that $\varphi_\gamma$ has the cut $[0,\infty)$.
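The boundary value identity \eqref{eq:phiOn01} can also be observed numerically: evaluate $\varphi_\gamma$ slightly above the interval via \eqref{eq:defphi} and compare with $\pi i\int_0^x d\mu_\gamma(s)$, where the mass is obtained by quadrature of the density \eqref{eq:equilibriummeasuregamma}. The Python sketch below (illustrative only, with hypothetical helpers \texttt{phi\_upper} and \texttt{cdf}) does this for $\gamma=2$.

```python
# Illustrative check of eq:phiOn01 at gamma = 2: the boundary value of
# phi_gamma from above should equal pi*i times the mass of mu_gamma on [0, x].
import cmath
import math

gamma = 2.0

def phi_upper(z):
    # eq:defphi for Im(z) > 0, all branches principal
    a = cmath.sqrt(gamma) * cmath.sqrt(1 - z)
    b = 1j * cmath.sqrt(z) * cmath.sqrt(gamma - 1)
    c = cmath.sqrt(gamma - 1)
    d = 1j * cmath.sqrt(1 - z)
    return cmath.log((a + b) / (a - b)) + cmath.sqrt(z / gamma) * cmath.log((c + d) / (c - d))

def cdf(x):
    # mass of mu_gamma on [0, x]; the substitution s = u^2 removes the
    # 1/sqrt(s) singularity of the density, then composite Simpson applies
    def g(u):
        r = math.sqrt((gamma - 1.0) / (1.0 - u * u))
        return (2.0 / math.sqrt(gamma)) * (0.5 + r / math.pi - math.atan(r) / math.pi)
    b, m = math.sqrt(x), 2000
    h = b / m
    total = g(0.0) + g(b) + sum((4.0 if k % 2 else 2.0) * g(k * h) for k in range(1, m))
    return total * h / 3.0

x = 0.4
gap = abs(phi_upper(x + 1e-9j) - 1j * math.pi * cdf(x))
print(gap)
```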
At the endpoint $z=0$ we have the following behaviour. \begin{lemma} \label{lem:phionendpoints} For the function $\varphi_\gamma$ defined by \eqref{eq:defphi}, we have the following: \begin{equation*} \varphi_\gamma(z) = \pm \frac{\pi i}{\sqrt{c_\gamma}} \sqrt{z}+ O(z^{3/2}), \qquad \text{ as }z\to 0 \text{ and } \pm \operatorname{Im}(z)>0, \end{equation*} where $c_\gamma$ is as in \eqref{eq:defcgamma}. \end{lemma} \begin{proof} We only prove the behaviour of $\varphi_\gamma(z)$ around $z=0$ in the upper half plane; the behaviour on the lower half plane follows analogously. Hence, we are interested in the behaviour of \begin{equation} \label{eq:phiupperhalfplane} \log\left(\frac{\sqrt\gamma\sqrt{1-z}+i\sqrt z\sqrt{\gamma-1}}{\sqrt\gamma\sqrt{1-z}-i\sqrt z\sqrt{\gamma-1}}\right) + \sqrt{\frac{z}{\gamma}} \log\left(\frac{\sqrt{\gamma-1}+i\sqrt{1-z}}{\sqrt{\gamma-1}-i\sqrt{1-z}}\right) \end{equation} around $z=0$. For this, we write \[\zeta=\sqrt{\frac{\gamma-1}{\gamma}} \sqrt{\frac{z}{1-z}},\] such that the first term of \eqref{eq:phiupperhalfplane} becomes $\log \frac{1+i\zeta}{1-i\zeta}$.
We know that \[\log \frac{1+i\zeta}{1-i\zeta} = 2i\zeta + O(\zeta^3), \qquad \zeta \rightarrow 0\] so for the first term of \eqref{eq:phiupperhalfplane} we obtain \[\log\left(\frac{\sqrt\gamma\sqrt{1-z}+i\sqrt z\sqrt{\gamma-1}}{\sqrt\gamma\sqrt{1-z}-i\sqrt z\sqrt{\gamma-1}}\right) = 2i \sqrt{\frac{\gamma -1}{\gamma}} \sqrt{z} + O(z^{3/2}), \qquad z\rightarrow 0.\] For the second term of \eqref{eq:phiupperhalfplane} we have \[\sqrt{\frac{z}{\gamma}} \log\left(\frac{\sqrt{\gamma-1}+i\sqrt{1-z}}{\sqrt{\gamma-1}-i\sqrt{1-z}}\right) = \sqrt{\frac{1}{\gamma}} \log\left(\frac{\sqrt{\gamma-1}+i}{\sqrt{\gamma-1}-i}\right) \sqrt{z} + O(z^{3/2}).\] Hence, for $\operatorname{Im}(z)>0$ and $z\rightarrow 0$, we obtain \begin{align*} \varphi_\gamma(z) &= \left(2i\sqrt{\gamma-1} + \log\left(\frac{\sqrt{\gamma-1}+i}{\sqrt{\gamma-1} -i}\right)\right) \sqrt{\frac{z}{\gamma}} + O(z^{3/2}) \\ &=\frac{2i}{\sqrt{\gamma}} \left(\sqrt{\gamma-1} - \arctan(\sqrt{\gamma-1}) +\frac{\pi}{2}\right) \sqrt{z}+ O(z^{3/2}), \end{align*} which gives the desired result when invoking \eqref{eq:defcgamma}. \end{proof} Next we open a lens from $0$ to $1$. This means that we take two contours $\Delta_+$ and $\Delta_-$, both going from $0$ to $1$, where $\Delta_+$ goes through the upper half plane and $\Delta_-$ through the lower half plane. We write $\Sigma_S=(0,1) \cup \Delta_+ \cup \Delta_-$ for the collection of contours that we now have under consideration, see Figure \ref{fig:contoursS}. 
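Returning briefly to Lemma \ref{lem:phionendpoints}: the coefficient obtained in its proof can be cross-checked numerically against \eqref{eq:defcgamma}, since for small $z$ in the upper half plane $\varphi_\gamma(z)/\sqrt{z}$ should be close to $\frac{2i}{\sqrt{\gamma}}\left(\sqrt{\gamma-1}-\arctan(\sqrt{\gamma-1})+\frac{\pi}{2}\right)=\frac{\pi i}{\sqrt{c_\gamma}}$. The following Python sketch (illustrative only) performs this comparison for $\gamma=2$.

```python
# Illustrative check of the small-z behaviour of phi_gamma: the coefficient
# of sqrt(z) computed in the proof equals pi*i / sqrt(c_gamma), with c_gamma
# as in eq:defcgamma. Here gamma = 2.
import cmath
import math

gamma = 2.0
delta = math.sqrt(gamma - 1.0) - math.atan(math.sqrt(gamma - 1.0))
c_gamma = math.pi**2 * gamma / (math.pi + 2.0 * delta)**2

def phi_upper(z):
    # eq:defphi for Im(z) > 0, all branches principal
    a = cmath.sqrt(gamma) * cmath.sqrt(1 - z)
    b = 1j * cmath.sqrt(z) * cmath.sqrt(gamma - 1)
    c = cmath.sqrt(gamma - 1)
    d = 1j * cmath.sqrt(1 - z)
    return cmath.log((a + b) / (a - b)) + cmath.sqrt(z / gamma) * cmath.log((c + d) / (c - d))

z = 1e-6 * cmath.exp(0.25j * math.pi)   # a small point in the upper half plane
ratio = phi_upper(z) / cmath.sqrt(z)
predicted = 1j * math.pi / math.sqrt(c_gamma)
gap = abs(ratio - predicted)
print(ratio, predicted, gap)
```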
\begin{figure}[h] \centering \resizebox{10.5cm}{6cm}{% \begin{tikzpicture}[>=latex] \draw[-] (-7,0)--(7,0); \draw[-] (-4,-4)--(-4,4); \draw[fill] (-4,0) circle (0.1cm); \draw[fill] (3,0) circle (0.1cm); \node[above] at (-4.2,-0.075) {\large 0}; \node[above] at (2.5,0) {\large $1$}; \node[above] at (-0.2,2) {\large $\Delta_+$}; \node[above] at (-0.2,-2.5) {\large $\Delta_-$}; \draw[-,ultra thick] (-4,0)--(3,0); \draw[->, ultra thick] (-0.5,0) to (-0.3,0); \draw[-, ultra thick] (-4,0) to [out=60, in=120] (3,0); \draw[-, ultra thick] (-4,0) to [out=-60, in=-120] (3,0); \draw[->, ultra thick] (-0.5,1.77) to (-0.3,1.77); \draw[->, ultra thick] (-0.5,-1.77) to (-0.3,-1.77); \draw[-, color=white] (-4,3)--(-4,4); \draw[-, color=white] (-4,-4)--(-4,-3); \end{tikzpicture} } \caption{The set of contours $\Sigma_S$.} \label{fig:contoursS} \end{figure} As is customary, we refer to the region enclosed between $\Delta_+$ and $\Delta_-$ as the interior of the lens, the region between $\Delta_+$ and $(0,1)$ as the upper part of the lens and between $(0,1)$ and $\Delta_-$ as the lower part of the lens. We then define the function $S: \mathbb{C}\setminus \Sigma_S \rightarrow \mathbb{C}^{2\times 2}$ by \begin{equation} \label{eq:defS} S(z) = \left\{ \begin{array}{ll} T(z) \begin{pmatrix} 1 & 0\\ - z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix} & \text{ in the upper part of the lens},\\ T(z) \begin{pmatrix} 1 & 0\\ z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix} & \text{ in the lower part of the lens}, \\ T(z) & \text{ outside the lens}. \end{array} \right. \end{equation} Then $S$ satisfies the following Riemann-Hilbert problem. \begin{itemize} \item[RH-S1] $S:\mathbb{C}\setminus \Sigma_S \to \mathbb{C}^{2\times 2}$ is analytic. \item[RH-S2a]On $(0,1)$ we have the following jump: \begin{equation*} S_+(x) = S_-(x) \begin{pmatrix} 0 & x^\nu \\ -x^{-\nu} & 0\end{pmatrix} \qquad x\in (0,1). 
\end{equation*} \item[RH-S2b] On the lips $\Delta_+$ and $\Delta_-$ we have the following jump, where $S_+$ and $S_-$ are determined by the orientation of $\Delta_+$ and $\Delta_-$: \begin{equation*} S_+(z) = S_-(z) \begin{pmatrix} 1 & 0\\ z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix} \qquad z\in \Delta_+ \cup \Delta_-. \end{equation*} \item[RH-S3] As $z\to\infty$ we have $S(z) = I + O(1/z)$. \item[RH-S4a] As $z\to 0$ we have \begin{align} S(z) &= O \begin{pmatrix} 1 & h_{\nu}(z)\\ 1 & h_{\nu}(z)\end{pmatrix} & \text{ for $z$ outside the lens.}\\ S(z) &= O \begin{pmatrix} h_{-\nu}(z) & h_\nu(z)\\ h_{-\nu}(z) & h_\nu(z)\end{pmatrix} & \text{ for $z$ inside the lens.} \end{align} \item[RH-S4b] As $z\to 1$ we have \begin{align} S(z) &= O \begin{pmatrix} 1 & \log(z-1)\\ 1 & \log(z-1)\end{pmatrix} & \text{ for $z$ outside the lens.}\\ S(z) &= O \begin{pmatrix} \log(z-1) & \log(z-1)\\ \log(z-1) & \log(z-1)\end{pmatrix} & \text{ for $z$ inside the lens.} \label{eq:S1inside} \end{align} \end{itemize} The only non-trivial properties to prove are the behaviours inside the lens in RH-S4a and RH-S4b. For the first, note that $z^{-\nu}h_\nu(z) = h_{-\nu}(z)$ by definition of $h_\nu$ \eqref{eq:defhnu}. Furthermore, by Lemma \ref{lem:phionendpoints}, we have that $e^{2 n \varphi_\gamma(z)} = O(1)$ as $z\to 0$, whence $O(1\mp h_{-\nu}(z) e^{2n\varphi_\gamma(z)})=O(h_{-\nu}(z))$, regardless of the exact value of $\nu$. So, for $z$ inside the lens we have as $z\to 0$ \begin{align*} S(z) &= O\begin{pmatrix} 1 & h_{\nu}(z)\\ 1 & h_{\nu}(z)\end{pmatrix} \begin{pmatrix} 1 & 0\\ \mp z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix} = O\begin{pmatrix} 1\mp h_{-\nu}(z) e^{2 n\varphi_\gamma(z)} & h_\nu(z)\\ 1\mp h_{-\nu}(z) e^{2 n\varphi_\gamma(z)} & h_\nu(z)\end{pmatrix} \\ &= O \begin{pmatrix} h_{-\nu}(z) & h_\nu(z)\\ h_{-\nu}(z) & h_\nu(z)\end{pmatrix}.
\end{align*} For the behaviour inside the lens as $z\rightarrow 1$ in RH-S4b, we remark that $z^{-\nu} e^{2 n\varphi_\gamma(z)}\to 1$ as $z\to 1$; note here that $\varphi_\gamma(z)\to \pm \pi i$ and that $n$ is an integer. Combining this with RH-T4b and the definition \eqref{eq:defS} of $S$ yields \eqref{eq:S1inside}. \subsection{Global parametrix} For large $n$ the jump matrices of $S$ on the lips of the lens are close to the unit matrix, see Lemma \ref{lem:realpartoncontours}. The global parametrix problem for the Riemann-Hilbert problem neglects these jumps altogether and is hence the following: \begin{itemize} \item[RH-N1] $N:\mathbb{C}\setminus [0,1]\to\mathbb{C}^{2\times 2}$ is analytic. \item[RH-N2] We have the following jump for $x\in (0,1)$ (oriented away from the origin) \[N_+(x) = N_-(x) \begin{pmatrix} 0 & x^\nu\\ - x^{-\nu} & 0\end{pmatrix}\] \item[RH-N3] As $z\to\infty$ we have $N(z) = I+O(1/z)$. \end{itemize} Since we aim to approximate $S$ by $N$ sufficiently far away from the endpoints, we leave some freedom for the behaviour of $N$ around $z=0$ and $z=1$. A solution is readily available in the literature \cite[Chapter 5]{Kuijlaars_2004}. After an appropriate translation it yields \begin{align} \label{eq:defN} N(z) &= 2^{-\nu\sigma_3} \begin{pmatrix} \displaystyle\frac{a(z)+a(z)^{-1}}{2} & \displaystyle\frac{a(z)-a(z)^{-1}}{2i}\\ \displaystyle\frac{a(z)-a(z)^{-1}}{-2i} & \displaystyle\frac{a(z)+a(z)^{-1}}{2} \end{pmatrix} \left(1+\sqrt{\frac{z-1}{z}}\right)^{\nu\sigma_3}, \end{align} where $a(z) = \left(\frac{z-1}{z}\right)^\frac{1}{4}$. Here and later, we adhere to the usual notation for the third Pauli matrix \[\sigma_3=\begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix}.\] With this definition of $N$ we should add the following local behaviours.
\begin{itemize} \item[RH-N4a] As $z\to 0$ we have \[N(z) = z^{-1/4} O \begin{pmatrix} z^{-\nu/2} & z^{\nu/2}\\ z^{-\nu/2} & z^{\nu/2}\end{pmatrix}.\] \item[RH-N4b] As $z\to 1$ we have \[N(z) = O ((z-1)^{-1/4}).\] \end{itemize} \subsection{Local parametrices} We have two local parametrix problems; one around $z=0$ and one around $z=1$. For finding the solutions, we rely on the results available in \cite{Kuijlaars_2004}, where a similar local parametrix problem was studied. We carry out this analysis explicitly only for the local parametrix around $z=0$, since the other local parametrix problem is similar and its details are not needed for our further analysis. \subsubsection{The local parametrix around the origin} The local parametrix problem around $z=0$ is the following Riemann-Hilbert problem, where $r>0$ is a small number. \begin{itemize} \item[RH-P1] $P: \overline{D(0,r)}\setminus \Sigma_S \rightarrow \mathbb{C}^{2\times 2}$ is analytic. \item[RH-P2a] On $(0,r)$ we have the following jump: \begin{equation*} P_+(x) = P_-(x) \begin{pmatrix} 0 & x^\nu \\ -x^{-\nu} & 0\end{pmatrix}, \qquad x\in (0,r). \end{equation*} \item[RH-P2b] On the contours $\Delta_+\cap D(0,r)$ and $\Delta_-\cap D(0,r)$ we have the following jump: \begin{equation*} P_+(z) = P_-(z) \begin{pmatrix} 1 & 0\\ z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix}, \qquad z\in (\Delta_+ \cup \Delta_-)\cap D(0,r). \end{equation*} \item[RH-P3] As $z\to 0$ we have \begin{align} P(z) &= O \begin{pmatrix} 1 & h_{\nu}(z)\\ 1 & h_{\nu}(z)\end{pmatrix}, & \text{ for $z$ outside the lens,}\\ P(z) &= O \begin{pmatrix} h_{-\nu}(z) & h_\nu(z)\\ h_{-\nu}(z) & h_\nu(z)\end{pmatrix}, & \text{ for $z$ inside the lens.} \end{align} \item[RH-P4] We have that $P(z)=(I+O(1/n))N(z)$ uniformly for $\lvert z\rvert = r$ as $n\rightarrow \infty$. \end{itemize} To find the solution to this problem, we again make use of the results in \cite{Kuijlaars_2004}.
For this, we consider the following two rays in the complex plane: \begin{align} \eta_+ &= \{t e^{\frac{\pi i}{3}} \mid t> 0\},\\ \eta_- &= \{t e^{\frac{-\pi i}{3}} \mid t> 0\}. \end{align} We see these rays as being oriented away from the origin. Then, we consider the function $B: \mathbb{C}\setminus ([0,\infty) \cup \eta_+ \cup \eta_-) \rightarrow \mathbb{C}^{2\times 2}$ defined by \begin{align} \label{eq:defB} B(z) &=\left\{\begin{array}{ll} \begin{pmatrix} \frac{1}{2} H_\nu^{(2)}\left(2\sqrt z\right) & \frac{1}{2} H_\nu^{(1)}\left(2\sqrt z\right)\\ \pi \sqrt{-z} (H_\nu^{(2)})'(2 \sqrt z) & \pi \sqrt{-z} (H_\nu^{(1)})'(2\sqrt z) \end{pmatrix} e^{-\frac{\pi i\nu}{2} \sigma_3}, & 0 < \arg z < \frac{\pi}{3},\\ \quad\\ \begin{pmatrix} I_\nu(2\sqrt{-z}) & -\frac{i}{\pi} K_\nu(2\sqrt{-z})\\ -2\pi i \sqrt{-z} I_\nu'(2\sqrt{-z}) & -2\sqrt{-z} K_\nu'(2\sqrt{-z}) \end{pmatrix}, & |\arg z|>\frac{\pi}{3},\\ \quad\\ \begin{pmatrix} \frac{1}{2} H_\nu^{(1)}\left(2\sqrt z\right) & -\frac{1}{2} H_\nu^{(2)}\left(2\sqrt z\right)\\ -\pi \sqrt{-z} (H_\nu^{(1)})'(2 \sqrt z) & \pi \sqrt{-z} (H_\nu^{(2)})'(2\sqrt z) \end{pmatrix} e^{\frac{\pi i\nu}{2} \sigma_3}, & -\frac{\pi}{3} < \arg z < 0. \end{array}\right. \end{align} Here $I_\nu$ and $K_\nu$ are modified Bessel functions and $H_\nu^{(1)}$ and $H_\nu^{(2)}$ are Hankel functions. See \cite[Chapter 9]{Abramowitz_Stegun} for more details on these functions. Indeed, if we compare \eqref{eq:defB} with the definitions on page 367 of \cite{Kuijlaars_2004}, we see that their function $\Psi$ is related to our function $B$ by \begin{equation} \label{eq:relationBPsi} B(z)=\sigma_3 \Psi(-z) \sigma_3. \end{equation} Using \eqref{eq:relationBPsi} and the results of \cite{Kuijlaars_2004}, we directly obtain that $B$ satisfies the following Riemann-Hilbert problem: \begin{itemize} \item[RH-B1] $B: \mathbb{C}\setminus ([0,\infty) \cup \eta_+ \cup \eta_-) \rightarrow \mathbb{C}^{2\times 2}$ is analytic. 
\item[RH-B2] $B$ has the following jumps (all contours oriented away from the origin) \begin{align} B_+(x) &= B_-(x) \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix} & x\in (0,\infty)\\ B_+(z) &= B_-(z) \begin{pmatrix} 1 & 0\\ e^{\mp \pi i\nu} & 1\end{pmatrix} & z\in \eta_\pm. \end{align} \item[RH-B3] As $z\to 0$ we have \begin{align} B(z) &= z^{\nu/2} O \begin{pmatrix} 1 & h_{-\nu}(z)\\ 1 & h_{-\nu}(z) \end{pmatrix} & \lvert \arg(z)-\pi \rvert < \frac{2\pi}{3}\\ B(z) &= z^{\nu/2} O \begin{pmatrix} h_{-\nu}(z) & h_{-\nu}(z)\\ h_{-\nu}(z) & h_{-\nu}(z) \end{pmatrix} & 0<\lvert\arg(z)\rvert<\frac{\pi}{3}. \end{align} \end{itemize} We note that $B$ is not the unique solution of the problem; requiring a certain asymptotic behaviour as $z\rightarrow \infty$ would make it unique. For the particular definition of $B$ in \eqref{eq:defB}, by \eqref{eq:relationBPsi} and the results in \cite{Kuijlaars_2004}, we have that \begin{equation} \label{eq:Binfty} B(z) = \left(2\pi \sqrt{-z}\right)^{-\frac{\sigma_3}{2}} \left(\frac{1}{\sqrt 2} \begin{pmatrix} 1 & -i\\ -i & 1\end{pmatrix} + \mathcal O\left(\frac{1}{\sqrt z}\right)\right) e^{2\sqrt{-z}\sigma_3}, \qquad z\rightarrow \infty. \end{equation} This function $B$ is not directly the solution that we need in our analysis; we need to transform $B$. For this, we write $D(0,r)$ and $\overline{D(0,r)}$ for the open and closed disc of radius $r$ around $z=0$, respectively. Then we define the function $f_\gamma: D(0,1) \rightarrow \mathbb{C}$ by \begin{equation} \label{eq:deffgamma} f_\gamma(z) = -\frac{1}{4} \varphi_\gamma(z)^2, \end{equation} where $\varphi_\gamma$ is as in \eqref{eq:defphi}. Although $\varphi_\gamma$ has $[0,\infty)$ as a cut, $f_\gamma$ is analytic on $D(0,1)$ due to the square in its definition and the jump \eqref{eq:phiOn01} of $\varphi_\gamma$.
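The mechanism making $f_\gamma$ analytic, namely that the boundary values of $\varphi_\gamma$ on the cut differ only by a sign, so that the square has no jump, can be seen in a toy model. A hypothetical sketch with $\varphi(z)=\sqrt{z}$ standing in for $\varphi_\gamma$ (the actual $\varphi_\gamma$ is not used here):

```python
import cmath

# The principal square root has a cut on (-infinity, 0] with boundary
# values phi_+ = -phi_-, mimicking the jump of phi_gamma on its cut.
def phi(z):
    return cmath.sqrt(z)

eps = 1e-9
above, below = phi(-1 + eps * 1j), phi(-1 - eps * 1j)
print(abs(above + below))            # O(eps): phi_+ = -phi_- on the cut
print(abs(above ** 2 - below ** 2))  # O(eps): phi^2 has no jump there
```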
Then, by Lemma \ref{lem:phionendpoints} and \eqref{eq:defcgamma}, we have that \begin{equation} \label{eq:fgamma0} f_\gamma(z) = \frac{\pi^2}{4 c_\gamma} z + \mathcal O(z^2), \qquad \text{ as } z\rightarrow 0. \end{equation} In particular, there exists an $r_0$ with $0<r_0<\frac{1}{2}$ such that the derivative of $f_\gamma$ does not vanish on $D(0,r_0)$. Then the same holds for every $0<r\leq r_0$ and hence the restriction of $f_\gamma$ to $D(0,r)$ is a conformal map for every such $r$. In what follows, we assume that $0<r\leq r_0$. We use the definition \eqref{eq:defN} of the matrix function $N$ and the above definition \eqref{eq:deffgamma} of $f_\gamma$ to define the function \begin{equation} \label{eq:defEn} E_n(z) = N(z) (-z)^{\frac{\nu}{2}\sigma_3} \frac{1}{\sqrt 2} \begin{pmatrix} 1 & i\\ i & 1\end{pmatrix} (2\pi n)^\frac{\sigma_3}{2} \left(-f_\gamma(z)\right)^{\frac{\sigma_3}{4}}, \end{equation} which we want to consider for $z\in D(0,r)$. In \cite{Kuijlaars_2004} a similar function is defined which is also denoted by $E_n$. Analogously, we have the following. \begin{lemma} The function $E_n$ defined by \eqref{eq:defEn} is analytic and non-singular on the disc $D(0,r)$. \end{lemma} We remark that when we first used the contours $\Delta_+$ and $\Delta_-$ in subsection \ref{subsec:openinglens}, we only required that $\Delta_+$ went from $0$ to $1$ through the upper half plane, and $\Delta_-$ similarly through the lower half plane. Since our previous results hold for any such $\Delta_+$ and $\Delta_-$, we may assume more about these contours. Namely, since the function $f_\gamma$ defined in \eqref{eq:deffgamma} is conformal and maps positive numbers to positive numbers and $0$ to $0$, we may assume that \begin{align} f_\gamma(\Delta_+ \cap D(0,r)) &\subset \eta_+, \label{eq:Delta+eta+} \\ f_\gamma(\Delta_- \cap D(0,r)) &\subset \eta_- \label{eq:Delta-eta-}, \end{align} i.e.
$f_\gamma$ maps the contours $\Sigma_S \cap D(0,r)$ to the cuts of the function $B$. Hence for any $n\geq 1$ we can study the function $Q: \overline{D(0,r)} \setminus \Sigma_S \rightarrow \mathbb{C}^{2\times 2}$ defined by \begin{equation} \label{eq:defQ} Q(z)= E_n(z) B(n^2 f_\gamma(z)), \qquad z\in \overline{D(0,r)} \setminus \Sigma_S, \end{equation} where again we suppress the dependency on $n$ from the notation. It immediately follows that $Q$ satisfies the following Riemann-Hilbert problem: \begin{itemize} \item[RH-Q1] $Q : \overline{D(0,r)}\setminus \Sigma_S \to\mathbb{C}^{2\times 2}$ is analytic. \item[RH-Q2] $Q$ has the following jumps (all contours oriented away from the origin) \begin{align} Q_+(x) &= Q_-(x) \begin{pmatrix} 0 & 1\\ -1 & 0\end{pmatrix}, & x\in (0,r),\\ Q_+(z) &= Q_-(z) \begin{pmatrix} 1 & 0\\ e^{\mp \pi i\nu} & 1\end{pmatrix}, & z\in \Delta_\pm \cap D(0,r). \end{align} \item[RH-Q3] As $z\to 0$ we have \begin{align} Q(z) &= z^{\nu/2} O \begin{pmatrix} 1 & h_{-\nu}(z)\\ 1 & h_{-\nu}(z) \end{pmatrix}, & \text{ outside the lens,}\\ Q(z) &= z^{\nu/2} O \begin{pmatrix} h_{-\nu}(z) & h_{-\nu}(z)\\ h_{-\nu}(z) & h_{-\nu}(z) \end{pmatrix}, & \text{ inside the lens.} \end{align} \item[RH-Q4] We have \[Q(z) = (I+O(1/n)) N(z) (-z)^{\frac{\nu}{2}\sigma_3} e^{-n\varphi_\gamma(z) \sigma_3}\] uniformly on $|z|=r$ as $n\to\infty$. \end{itemize} Using this, one straightforwardly checks that the function $P: \overline{D(0,r)}\setminus \Sigma_S \rightarrow \mathbb{C}^{2\times 2}$, defined by \begin{equation} \label{eq:defP} P(z) = Q(z) e^{n\varphi_\gamma(z)\sigma_3} (-z)^{-\frac{\nu}{2}\sigma_3}, \qquad z\in \overline{D(0,r)}\setminus \Sigma_S, \end{equation} satisfies the local parametrix problem RH-P. A last property of $P$ that we need in the further analysis is the following fact. \begin{lemma} \label{lem:SP-1} We have that $S(z)P(z)^{-1}$ is analytic in $D(0,r)$.
\end{lemma} \begin{proof} First of all, since the jumps of $P$ that appear in RH-P2a and RH-P2b are the same as the jumps of $S$ that appear in RH-S2a and RH-S2b, we conclude that the only potentially singular point is $z=0$. Then, using the property RH-B3 of $B$, we have that as $z\to 0$ outside the lens \begin{multline} P(z)^{-1} = (-z)^{\frac{\nu}{2}\sigma_3} e^{-n\varphi_\gamma(z)\sigma_3} B\left(n^2 f_\gamma(z)\right)^{-1} E_n(z)^{-1}\\ = \begin{pmatrix} z^{\nu/2} & 0\\ 0 & z^{-\nu/2} \end{pmatrix} z^{\nu/2} O\begin{pmatrix} h_{-\nu}(z) & h_{-\nu}(z)\\ 1 & 1\end{pmatrix} = O\begin{pmatrix} h_{\nu}(z) & h_{\nu}(z)\\ 1 & 1\end{pmatrix}. \end{multline} Then using RH-S4a we have outside the lens as $z\to 0$ that \begin{align} S(z) P(z)^{-1} &= O\left( \begin{pmatrix} 1 & h_\nu(z)\\ 1 & h_\nu(z)\end{pmatrix} \begin{pmatrix} h_{\nu}(z) & h_{\nu}(z)\\ 1 & 1\end{pmatrix}\right) = O(h_\nu(z)). \end{align} By the definition \eqref{eq:defhnu} of $h_\nu$, this implies that $S(z)P(z)^{-1}$ has no negative powers in its Laurent series around $z=0$. We conclude that $S(z) P(z)^{-1}$ is analytic in $z=0$. \end{proof} \subsubsection{The second local parametrix} The local parametrix problem around $z=1$ can be solved similarly to the local parametrix problem around $z=0$. More specifically, if we use the function $\Psi$ from \cite{Kuijlaars_2004} for the parameter $\alpha=0$ and transform it as we did in the previous section, we have the solution to the local parametrix problem. Namely, we find that there is an $r_1$ with $0<r_1<\frac{1}{2}$ such that for all $0<r\leq r_1$, we may assume that the contours $\Delta_+$ and $\Delta_-$ are chosen such that there exists a function $\tilde{P}: \overline{D(1,r)}\setminus \Sigma_S \rightarrow \mathbb{C}^{2\times 2}$, that satisfies the following Riemann-Hilbert problem. \begin{itemize} \item[RH-$\tilde P$1] $\tilde{P} : \overline{D(1,r)}\setminus \Sigma_S\to \mathbb{C}^{2\times 2}$ is analytic.
\item[RH-$\tilde{P}$2a] On $(1-r,1)$ we have the following jump: \begin{equation*} \tilde{P}_+(x) = \tilde{P}_-(x) \begin{pmatrix} 0 & x^\nu \\ -x^{-\nu} & 0\end{pmatrix}, \qquad x\in (1-r,1). \end{equation*} \item[RH-$\tilde P$2b] On the contours $\Delta_+\cap D(1,r)$ and $\Delta_-\cap D(1,r)$ we have the following jump: \begin{equation*} \tilde{P}_+(z) = \tilde{P}_-(z) \begin{pmatrix} 1 & 0\\ z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix}, \qquad z\in (\Delta_+ \cup \Delta_-)\cap D(1,r). \end{equation*} \item[RH-$\tilde P$3] As $z\rightarrow 1$, we have \begin{align} \tilde{P}(z) &= O \begin{pmatrix} 1 & \log(z-1)\\ 1 & \log(z-1)\end{pmatrix}, & \text{ for $z$ outside the lens,}\\ \tilde{P}(z) &= O \begin{pmatrix} \log(z-1) & \log(z-1)\\ \log(z-1) & \log(z-1)\end{pmatrix}, & \text{ for $z$ inside the lens.} \end{align} \item[RH-$\tilde P$4] Matching: $\tilde P(z) = (I+O(1/n)) N(z)$ uniformly for $|z-1|=r$ as $n\to\infty$, where $N$ is defined in \eqref{eq:defN}. \end{itemize} Since the computations to arrive at this solution are standard by the work of \cite{Kuijlaars_2004} and we do not need the explicit form of this solution in our further analysis, we omit further details. However, we do note that we have the following, which is analogous to Lemma \ref{lem:SP-1}. \begin{lemma} \label{lem:StildeP-1} We have that $S(z)\tilde{P}(z)^{-1}$ is analytic in $D(1,r)$. \end{lemma} \subsection{Final transformation} \label{subsec:finaltransformation} We can now make the final transformation: informally, we show that if one `glues' the global parametrix solution $N$ to the local solutions $P$ and $\tilde{P}$, one asymptotically obtains the solution for the Riemann-Hilbert problem RH-S. We have the following result.
\begin{lemma} \label{lem:realpartoncontours} There exists an $r<\min\{r_0,r_1\}$ and an opening of the lips of the lens such that \begin{align} \label{eq:realpartoncontours} \operatorname{Re}(\varphi_\gamma(z))&\leq-c, & z \in (\Delta_+ \cup \Delta_-) \setminus (D(0,r)\cup D(1,r)), \end{align} for some constant $c>0$, where $\varphi_\gamma$ is defined by \eqref{eq:defphi}. \end{lemma} \begin{proof} By \eqref{eq:phiOn01}, we have that $\varphi_{\gamma,\pm}$ is a purely imaginary function on $(0,1)$. In fact, $\operatorname{Im}(\varphi_{\gamma,+})$ is strictly increasing, and $\operatorname{Im}(\varphi_{\gamma,-})$ is strictly decreasing on $(0,1)$. By the Cauchy-Riemann equations, $\operatorname{Re}(\varphi_\gamma(z))$ will be negative on any subset of the strip $0<\operatorname{Re}(z)<1$ that is close enough to $(0,1)$, but does not contain any point of $(0,1)$. Combining this with the continuity of $\varphi_\gamma$ yields the constant $c>0$. \end{proof} Now let us fix such an $r$ and opening. We collect all the cuts that we have used in the set of contours \[\Sigma_R=(0,1) \cup \Delta_+ \cup \Delta_- \cup \partial D(0,r) \cup \partial D(1,r),\] where $\partial D(0,r)$ and $\partial D(1,r)$ are the boundaries of the disks $D(0,r)$ and $D(1,r)$, respectively, see Figure \ref{fig:contoursR}.
\begin{figure}[h] \centering \resizebox{10.5cm}{6cm}{% \begin{tikzpicture}[>=latex] \draw[-] (-7,0)--(7,0); \draw[-] (-4,-4)--(-4,4); \draw[fill] (-4,0) circle (0.1cm); \draw[fill] (3,0) circle (0.1cm); \draw[ultra thick] (-4,0) circle (1.5cm); \draw[ultra thick] (3,0) circle (1.5cm); \node[above] at (-4.2,0) {\large 0}; \node[above] at (-2.2,0) {\large $r$}; \node[above] at (3.3,0) {\large $1$}; \node[above] at (.95,0) {\large $1-r$}; \node[above] at (0.2,1.8) {\large $\Delta_+$}; \node[above] at (0.2,-2.5) {\large $\Delta_-$}; \draw[-,ultra thick] (-4,0)--(3,0); \draw[-, ultra thick] (-4,0) to [out=60, in=120] (3,0); \draw[-, ultra thick] (-4,0) to [out=-60, in=-120] (3,0); \draw[->, ultra thick] (-5.06,1.08) to (-5.16,0.96); \draw[->, ultra thick] (4.14,0.96) to (4.04,1.08); \draw[->, ultra thick] (-0.5,1.78) to (-0.3,1.78); \draw[->, ultra thick] (-0.5,-1.78) to (-0.3,-1.78); \draw[-, color=white] (-4,3)--(-4,4); \draw[-, color=white] (-4,-4)--(-4,-3); \end{tikzpicture} } \caption{The set of contours $\Sigma_R$.} \label{fig:contoursR} \end{figure} Then we define the function $R: \mathbb{C}\setminus \Sigma_R \rightarrow \mathbb{C}^{2\times 2}$, by \begin{equation} \label{eq:defR} R(z) = \left\{\begin{array}{ll} S(z) P(z)^{-1} & |z|<r\\ S(z) \tilde P(z)^{-1} & |z-1|<r\\ S(z) N(z)^{-1} & \text{otherwise.}\end{array}\right. \end{equation} We immediately note that the jumps of $R$ inside the disk $D(0,r)$ disappear due to the combination of RH-S2 and RH-P2. Likewise, the jumps inside the disk $D(1,r)$ disappear due to RH-S2 and RH-$\tilde{P}$2. Furthermore, by Lemma \ref{lem:SP-1} and Lemma \ref{lem:StildeP-1}, $R$ is analytic in the (potentially singular) points $z=0$ and $z=1$. Lastly, we also note that by combining RH-N2 with RH-S2a, we see that the (potential) jump of $R$ across $(r,1-r)$ does not exist. 
Therefore, we see $R$ as being defined on $\mathbb{C}\setminus \Sigma'_R$, where \[\Sigma'_R=\big((\Delta_+ \cup \Delta_-) \setminus(D(0,r) \cup D(1,r))\big) \cup \partial D(0,r) \cup \partial D(1,r)\] is the set of remaining contours, see Figure \ref{fig:contoursRprime}. \begin{figure}[h] \centering \resizebox{10.5cm}{6cm}{% \begin{tikzpicture}[>=latex] \draw[-] (-7,0)--(7,0); \draw[-] (-4,-4)--(-4,4); \draw[fill] (-4,0) circle (0.1cm); \draw[fill] (3,0) circle (0.1cm); \draw[ultra thick] (-4,0) circle (1.5cm); \draw[ultra thick] (3,0) circle (1.5cm); \node[above] at (-4.2,-0.075) {\large 0}; \node[above] at (-2.2,0) {\large $r$}; \node[above] at (2.8,0) {\large $1$}; \node[above] at (.95,0) {\large $1-r$}; \node[above] at (0,1.85) {\large $\Delta_+$}; \node[above] at (0,-2.5) {\large $\Delta_-$}; \draw[-, ultra thick] (-2.86,0.96) to [out=35, in=145] (1.939,1.061); \draw[-, ultra thick] (-2.86,-0.96) to [out=-35, in=-145] (1.939,-1.061); \draw[->, ultra thick] (-5.06,1.08) to (-5.16,0.96); \draw[->, ultra thick] (4.14,0.96) to (4.04,1.08); \draw[->, ultra thick] (-0.5,1.825) to (-0.3,1.825); \draw[->, ultra thick] (-0.5,-1.825) to (-0.3,-1.825); \draw[-, color=white] (-4,3)--(-4,4); \draw[-, color=white] (-4,-4)--(-4,-3); \end{tikzpicture} } \caption{The set of contours $\Sigma'_R$.} \label{fig:contoursRprime} \end{figure} Combining the definition of $R$ with the Riemann-Hilbert problems of $S$, $N$, $P$ and $\tilde{P}$, we then immediately have that $R$ satisfies the following Riemann-Hilbert problem. \begin{itemize} \item[RH-R1] $R: \mathbb{C}\setminus \Sigma'_R \rightarrow \mathbb{C}^{2\times 2}$ is analytic. \item[RH-R2a] For $z\in \partial D(0,r) \cup \partial D(1,r)$, we have that \[R_+(z) = R_-(z) (I + O(1/n)).\] \item[RH-R2b] For $z\in (\Delta_+ \cup \Delta_-) \setminus(D(0,r) \cup D(1,r))$, we have that \begin{equation*} R_+(z) = R_-(z) \begin{pmatrix} 1 & 0\\ z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix}.
\end{equation*} \item[RH-R3] As $z\to\infty$ we have $R(z) = I+O(1/z)$. \end{itemize} Therefore, by \eqref{eq:realpartoncontours}, we have the following asymptotic behaviour for the jump matrix in RH-R2b: \[\begin{pmatrix} 1 & 0\\ z^{-\nu} e^{2n\varphi_\gamma(z)} & 1\end{pmatrix} = I+O\left(e^{-2nc}\right), \qquad n\to \infty.\] Since the jump matrices on $\Sigma'_R$ tend to the identity as $n\rightarrow \infty$, we can invoke the general result concerning Riemann-Hilbert problems (see for example \cite{Bleher_Liechty,Deift}) to conclude that \begin{equation} \label{eq:asymptoticR} R(z)=I+O(1/n), \qquad n\rightarrow \infty, \end{equation} uniformly for $z\in \mathbb{C}\setminus \Sigma'_R$. In the following, we use this asymptotic result to obtain an asymptotic result for the original matrix $Y$, which we need to obtain the asymptotics for the kernel of interest, according to \eqref{eq:kernelinY}. \subsection{Asymptotics of the correlation kernel} We have all the ingredients necessary to complete the proof of Proposition \ref{prop:resultRHprob}. As indicated in Section \ref{sec:outline}, this is the last thing that we need to prove. \begin{proof}[Proof of Proposition \ref{prop:resultRHprob}] If we invert the steps $Y \mapsto T \mapsto S \mapsto R$ in our Riemann-Hilbert problem, we obtain that for $z$ in the upper lens such that $\lvert z \rvert <r$, we have that \begin{equation} \label{eq:YafterQ} \begin{pmatrix} Y_{11}(z) \\ Y_{21}(z)\end{pmatrix} =z^{-\frac{\nu}{2}} e^{n(g_\gamma(z)-\frac{\ell_\gamma}{2}+\varphi_\gamma(z))} e^{\frac{n\ell_\gamma}{2}\sigma_3} R(z) E_n(z) B(n^2 f_\gamma(z)) e^{\frac{\pi i\nu}{2} \sigma_3} \begin{pmatrix} 1 \\ 1\end{pmatrix}. \end{equation} Notice that, due to the definition \eqref{eq:defB} of $B$ for $0 < \arg z < \frac{\pi}{3}$, and the well-known identity \cite[Eq.
10.4.4]{DLMF} that connects Hankel and Bessel functions \[H_{\nu}^{(1)}(z) + H_{\nu}^{(2)}(z)=2J_\nu(z),\] we have for $x\in (0,r)$ that \[B_+(n^2f_\gamma(x)) e^{\frac{\pi i\nu}{2} \sigma_3} \begin{pmatrix} 1 \\ 1\end{pmatrix} = \begin{pmatrix} J_\nu(2n\sqrt{f_\gamma(x)}) \\ 2\pi n \sqrt{-f_\gamma(x)} J'_\nu(2n\sqrt{f_\gamma(x)}) \end{pmatrix}.\] We note that by \eqref{eq:gplus+minusinterval}, \eqref{eq:gplus-minusinterval} and \eqref{eq:phiOn01} we have for $x\in (0,1)$ that \[g_{\gamma,+}(x)-\frac{\ell_\gamma}{2}+\varphi_{\gamma,+}(x)=\frac{V_\gamma(x)}{2} + \pi i,\] which implies that for $x\in (0,r)$ \begin{equation} \label{eq:Ywithweight} \begin{pmatrix} Y_{11,+}(x) \\ Y_{21,+}(x)\end{pmatrix} =\frac{(-1)^n}{\sqrt{\omega_{\gamma,n,\nu}^+(x)}} e^{\frac{n\ell_\gamma}{2}\sigma_3} R(x) E_n(x) \begin{pmatrix} J_\nu(2n\sqrt{f_\gamma(x)}) \\ 2\pi i n \sqrt{f_\gamma(x)} J'_\nu(2n\sqrt{f_\gamma(x)}) \end{pmatrix}. \end{equation} Similarly, one obtains that \begin{align*} \begin{pmatrix} -Y_{21}(z) \\ Y_{11}(z) \end{pmatrix}^{T} &= \begin{pmatrix} 0 & 1 \end{pmatrix} Y(z)^{-1} \\ &= z^{-\frac{\nu}{2}} e^{n(g_\gamma(z)-\frac{\ell_\gamma}{2}+\varphi_\gamma(z))} \begin{pmatrix} -1 & 1\end{pmatrix} e^{-\frac{\pi i \nu}{2}\sigma_3} B(n^2 f_\gamma(z))^{-1} E_n(z)^{-1} R(z)^{-1} e^{-\frac{n\ell_\gamma}{2}\sigma_3} \end{align*} Inverting $B$ is straightforward, since it has determinant $1$. Using this and repeating the same steps as above, we obtain for $x\in (0,r)$ that \begin{equation} \label{eq:transposeYwithweight} \begin{pmatrix} -Y_{21,+}(x) \\ Y_{11,+}(x) \end{pmatrix}^{T} = \frac{(-1)^n}{\sqrt{\omega_{\gamma,n,\nu}^+(x)}}\begin{pmatrix} 2\pi i n \sqrt{f_\gamma(x)} J'_\nu(2n\sqrt{f_\gamma(x)}) \\ -J_\nu(2n\sqrt{f_\gamma(x)}) \end{pmatrix}^{T} E_n(x)^{-1} R(x)^{-1} e^{-\frac{n\ell_\gamma}{2}\sigma_3}. 
\end{equation} For any $x\in (0,\infty)$, we use the notation \begin{equation} \label{eq:defxn} x_n=\frac{c_\gamma x}{\pi^2 n^2}, \end{equation} for every $n\geq 1$, where $c_\gamma$ is defined as in \eqref{eq:defcgamma}. Note that for $n$ big enough, we have that $x_n\in (0,r)$ and that $x_n$ is precisely the quantity that appears in the scaling limit in \eqref{eq:resultRHprob}. Now let $(x,y) \in S$, where $S$ is a compact subset of the first quadrant. We turn to the asymptotic behaviour on $S$ as $n\rightarrow \infty$. From \eqref{eq:fgamma0} and the definition \eqref{eq:defcgamma} of $c_\gamma$ we infer that \begin{align*} 2n\sqrt{f_\gamma(x_n)}&=\sqrt{x}+O\left(\frac{x^{3/2}}{n^2}\right). \end{align*} Since $x$ is bounded both from below and from above we thus have, using the mean value theorem, that uniformly \begin{align} \label{eq:behavJnu} J_\nu\left(2n\sqrt{f_\gamma(x_n)}\right) &= J_\nu(\sqrt{x})+O\left(\frac{1}{n^2}\right) \\ \label{eq:behavJnu'} 2n\sqrt{f_\gamma(x_n)} J'_\nu\left(2n\sqrt{f_\gamma(x_n)}\right) &= \sqrt x J'_\nu(\sqrt{x})+O\left(\frac{1}{n^2}\right). \end{align} By Cauchy's integral formula we have uniformly for $|\xi|<\frac{r}{3}$ that as $n\to\infty$ \begin{align} \label{eq:RderivCauchy} R'(\xi) = \frac{1}{2\pi i} \oint_{|z|=\frac{r}{2}} \frac{R(z)-I}{(z-\xi)^2} dz = \mathcal O\left(\frac{1}{n}\right), \end{align} where in the last step we have used \eqref{eq:asymptoticR}. Then by \eqref{eq:defxn} and \eqref{eq:asymptoticR} again we have \[R(y_n)^{-1} R(x_n) = I +R(y_n)^{-1}(R(x_n)-R(y_n)) = I + R(y_n)^{-1} \int_{y_n}^{x_n} R'(\xi) d\xi = I+\mathcal O\left(\frac{x-y}{n^3}\right),\] uniformly for $(x,y)\in S$. From the definition \eqref{eq:defEn} and RH-N4a it follows that $E_n(x_n)=\mathcal O(\sqrt{n})$ and $E_n(x_n)^{-1}=\mathcal O(\sqrt{n})$ as $n \rightarrow \infty$. 
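The contour-integral bound \eqref{eq:RderivCauchy} is a standard device: Cauchy's formula turns a uniform bound on $R-I$ into a bound on $R'$. Numerically, the same formula recovers derivatives with high accuracy, since the trapezoid rule is spectrally accurate for periodic analytic integrands. A small illustration with $f(z)=e^z$ as a stand-in for $R$ (which is not available in closed form):

```python
import cmath
import math

def cauchy_derivative(f, xi, r=0.5, m=200):
    # f'(xi) = (1/(2 pi i)) * integral of f(z)/(z - xi)^2 over |z| = r.
    # Substituting z = r*exp(i*theta), dz = i*z*dtheta reduces this to
    # the mean of f(z)*z/(z - xi)^2, approximated by the trapezoid rule.
    total = 0j
    for k in range(m):
        z = r * cmath.exp(2j * math.pi * k / m)
        total += f(z) * z / (z - xi) ** 2
    return total / m

approx = cauchy_derivative(cmath.exp, 0.1)
print(abs(approx - cmath.exp(0.1)))  # close to machine precision
```

The estimate requires $|\xi| < r$, mirroring the restriction $|\xi| < r/3$ against the circle $|z| = r/2$ used in \eqref{eq:RderivCauchy}.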
Repeating the argument in \eqref{eq:RderivCauchy} for $E_n(z)$ instead of $R(z) - I$ then yields \[E_n(y_n)^{-1} E_n(x_n) = I + \mathcal O\left(\frac{x-y}{n}\right),\] as $n\to\infty$ uniformly for $(x,y)\in S$. We conclude that uniformly for $(x,y)\in S$ \begin{align} \label{eq:behavERRE} E_n(y_n)^{-1} R(y_n)^{-1} R(x_n) E_n(x_n) &= I + \mathcal O\left(\frac{x-y}{n}\right) + \mathcal O\left(E_n(y_n)^{-1}\frac{x-y}{n^3} E_n(x_n)\right) \nonumber \\ &= I + \mathcal O\left(\frac{x-y}{n}\right), \end{align} as $n\to\infty$. Here we have used again that $E_n(x_n)=\mathcal O(\sqrt{n})$ and $E_n(y_n)^{-1}=\mathcal O(\sqrt{n})$. We now assemble all of this to prove Proposition \ref{prop:resultRHprob}. For this, we start with \begin{align*} \frac{c_\gamma}{\pi^2 n^2} K_n(x_n,y_n;\omega_{\gamma,n,\nu}^+) &= \frac{c_\gamma}{\pi^2 n^2} \frac{1}{2\pi i (x_n-y_n)} \sqrt{\omega_{\gamma,n,\nu}^+(x_n)\omega_{\gamma,n,\nu}^+(y_n)} \begin{pmatrix} 0 & 1\end{pmatrix} Y_+(y_n)^{-1} Y_+(x_n) \begin{pmatrix} 1\\ 0 \end{pmatrix} \\ &= \frac{1}{2\pi i (x-y)} \sqrt{\omega_{\gamma,n,\nu}^+(x_n)\omega_{\gamma,n,\nu}^+(y_n)} \begin{pmatrix} -Y_{21,+}(y_n) \\ Y_{11,+}(y_n) \end{pmatrix}^{T} \begin{pmatrix} Y_{11,+}(x_n) \\ Y_{21,+}(x_n)\end{pmatrix}, \end{align*} by \eqref{eq:kernelinY}. Then inserting \eqref{eq:Ywithweight} and \eqref{eq:transposeYwithweight}, we obtain \begin{align*} \frac{c_\gamma}{\pi^2 n^2} K_n(x_n,y_n;\omega_{\gamma,n,\nu}^+) = \frac{1}{2\pi i (x-y)} & \begin{pmatrix} 2\pi i n \sqrt{f_\gamma(y_n)} J'_\nu(2n\sqrt{f_\gamma(y_n)}) \\ -J_\nu(2n\sqrt{f_\gamma(y_n)}) \end{pmatrix}^{T} E_n(y_n)^{-1} R(y_n)^{-1} \\ & \times R(x_n) E_n(x_n) \begin{pmatrix} J_\nu(2n\sqrt{f_\gamma(x_n)}) \\ 2\pi i n \sqrt{f_\gamma(x_n)} J'_\nu(2n\sqrt{f_\gamma(x_n)}) \end{pmatrix}.
\end{align*} Inserting the asymptotic behaviours \eqref{eq:behavJnu}, \eqref{eq:behavJnu'} and \eqref{eq:behavERRE}, we obtain, as $n\to \infty$, that \begin{align*} \frac{c_\gamma}{\pi^2 n^2} K_n(x_n,y_n;\omega_{\gamma,n,\nu}^+) &= \frac{1}{2\pi i (x-y)}\begin{pmatrix} \pi i \sqrt{y} J'_\nu(\sqrt{y}) \\ -J_\nu(\sqrt{y}) \end{pmatrix}^{T} \begin{pmatrix} J_\nu(\sqrt{x}) \\ \pi i \sqrt{x} J'_\nu(\sqrt{x}) \end{pmatrix} + \mathcal O\left(\frac{1}{n}\right) \\ &= \frac{J_\nu(\sqrt{x})\sqrt{y}J'_\nu(\sqrt{y})-J_\nu(\sqrt{y})\sqrt{x}J'_\nu(\sqrt{x})}{2(x-y)} + \mathcal O\left(\frac{1}{n}\right), \end{align*} uniformly for $(x,y)\in S$. This concludes the proof. \end{proof} \section{Disclaimer} \label{sec:disclaimer} Before version 1.0 of this package is released, minor modifications and bug fixes may be performed. All documents making an orthodox use of this package will continue to compile and generate essentially the same output. However, if you have strict stability requirements (for instance, if you want to assure no page break changes will happen in your documents), keep a copy of the current version of the file \texttt{tikzlibrarycd.code.tex} in your document's directory. \section{Getting started} \label{sec:basic-usage} To invoke this package in \LaTeX, type \begin{verse} \index{tikz-cd@\protect\texttt{tikz-cd} package}% \index{Packages and files!tikz-cd@\protect\texttt{tikz-cd}}% |\usepackage{tikz-cd}|% \end{verse} or load \tikzname{} and then type \begin{verse}% \index{cd@\protect\texttt{cd} library}% \index{Libraries!cd@\protect\texttt{cd}}% |\usetikzlibrary{cd}|% \end{verse} \subsection{Creating a diagram} \label{sec:creating-diagrams} The basic tool to create a commutative diagram is the following environment. \begin{environment}{{tikzcd}\opt{\oarg{options}}} \end{environment} The environment contents should describe a matrix, as in \LaTeX's familiar |{tabular}| environment. 
The optional argument \meta{options} may be used to modify the appearance of the diagram. Any of the customization keys described in this manual, as well as those originally provided by \tikzname{}, can be used here. Arrows between matrix entries can be created with the |\arrow| command described below. Everything inside |{tikzcd}| is typeset in math mode, but you will probably want to use it inside an |{equation}| environment or |\[| \dots |\]|, so that the diagram is placed on a new line and centered. It is important to note that \textsc{dvi} viewers will not display diagrams correctly. It is necessary to convert the \textsc{dvi} file to \textsc{pdf} or \textsc{ps} format---or, even better, use a tool that generates \textsc{pdf} files directly, such as \texttt{pdflatex}. \subsection{Inserting arrows} \label{sec:inserting-arrows} Inside the |{tikzcd}| environment, the following synonymous commands are provided to produce arrows. \begin{pgfmanualentry} \extractcommand\arrow|[|\meta{options}|]|\@@ \extractcommand\ar|[|\meta{options}|]|\@@ \pgfmanualbody \end{pgfmanualentry} Here, \meta{options} is a comma-separated list of options which can be used to specify the arrow target, add labels, change arrow tips, and perform additional modifications. The arrow target can be specified by a direction parameter, which consists of a string of characters |r|, |l|, |d|, |u| (standing for right, left, down and up). Labels can be placed on an arrow by means of the quotes syntax, described in detail in the \pgfname{} manual \cite[\S\ref*{pgfman-section-label-quotes}]{pgfman}. Notice the use of |"\phi"| in the example below. \begin{codeexample}[] \begin{tikzcd} A \arrow[rd] \arrow[r, "\phi"] & B \\ & C \end{tikzcd} \end{codeexample} To further modify the appearance of an arrow, note that \meta{options} may contain any key that can be passed to \tikzname's |\path| command. 
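For instance, |bend left| and |thick| in the following diagram are ordinary \tikzname{} keys rather than |tikz-cd|-specific options (an illustrative example):
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, bend left, thick] \arrow[rd, dotted] & B \\
& C
\end{tikzcd}
\end{codeexample}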
Similarly, a label can receive additional options via the syntax \begin{verse} |"|\meta{label text}|"|\opt{\meta{label options}}. \end{verse} Both \meta{label text} and \meta{label options} need to be enclosed in curly braces if they contain commas. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, "\phi"] \arrow[d, red] & B \arrow[d, "\psi" red] \\ C \arrow[r, red, "\eta" blue] & D \end{tikzcd} \end{codeexample} Arrows can have an arbitrary number of labels, by repeated use of arguments in quotes. The example below shows how to control the positioning of labels. Notice in particular that an apostrophe as \meta{label option} causes the label to be placed on the opposite side of the arrow. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, "\phi" near start, "\psi"', "\eta" near end] & B \end{tikzcd} \end{codeexample} We provide two real-life examples. \begin{codeexample}[] \begin{tikzcd} T \arrow[drr, bend left, "x"] \arrow[ddr, bend right, "y"] \arrow[dr, dotted, "{(x,y)}" description] & & \\ & X \times_Z Y \arrow[r, "p"] \arrow[d, "q"] & X \arrow[d, "f"] \\ & Y \arrow[r, "g"] & Z \end{tikzcd} \end{codeexample} \begin{codeexample}[] \begin{tikzcd}[column sep=tiny] & \pi_1(U_1) \ar[dr] \ar[drr, "j_1", bend left=20] & &[1.5em] \\ \pi_1(U_1\cap U_2) \ar[ur, "i_1"] \ar[dr, "i_2"'] & & \pi_1(U_1) \ast_{ \pi_1(U_1\cap U_2)} \pi_1(U_2) \ar[r, dashed, "\simeq"] & \pi_1(X) \\ & \pi_1(U_2) \ar[ur]\ar[urr, "j_2"', bend right=20] & & \end{tikzcd} \end{codeexample} \subsection{Changing arrow tips} \label{sec:changing-arrow-tips} A set of |\arrow| options is provided to create different kinds of arrows. 
Some of these options have a short descriptive name, such as |hook|, and others are named after \TeX{} arrow-producing commands (without a ``|\|''), like |dashrightarrow| \begin{codeexample}[] \begin{tikzcd} X \arrow[r, hook] \arrow[dr, dashrightarrow] & \bar{X} \arrow[d]\\ & Y \end{tikzcd} \end{codeexample} The following list shows all available arrow types (each of them is a style key located in |/tikz/commutative diagrams|). \begin{multicols}{2}\raggedcolumns \subsubsection*{Basic arrows} \begin{tabular}{ll} \displayarrowstyle{to head}\\ \displayarrowstyle{rightarrow}\\ \displayarrowstyle{leftarrow}\\ \displayarrowstyle{leftrightarrow}\\ \displayarrowstyle{Rightarrow}\\ \displayarrowstyle{Leftarrow}\\ \displayarrowstyle{Leftrightarrow} \end{tabular} \subsubsection*{Arrows from bar} \begin{tabular}{ll} \displayarrowstyle{maps to}\\ \displayarrowstyle{mapsto}\\ \displayarrowstyle{mapsfrom}\\ \displayarrowstyle{Mapsto}\\ \displayarrowstyle{Mapsfrom}\\ \end{tabular} \subsubsection*{Arrows with hook} \begin{tabular}{ll} \displayarrowstyle{hook}\\ \displayarrowstyle{hook'}\\ \displayarrowstyle{hookrightarrow}\\ \displayarrowstyle{hookleftarrow}\\ \end{tabular} \subsubsection*{Arrows with tail} \begin{tabular}{ll} \displayarrowstyle{tail}\\ \displayarrowstyle{rightarrowtail}\\ \displayarrowstyle{leftarrowtail}\\ \end{tabular} \subsubsection*{Two-headed arrows} \begin{tabular}{ll} \displayarrowstyle{two heads}\\ \displayarrowstyle{twoheadrightarrow}\\ \displayarrowstyle{twoheadleftarrow}\\ \end{tabular} \subsubsection*{Harpoons} \begin{tabular}{ll} \displayarrowstyle{harpoon}\\ \displayarrowstyle{harpoon'}\\ \displayarrowstyle{rightharpoonup}\\ \displayarrowstyle{rightharpoondown}\\ \displayarrowstyle{leftharpoonup}\\ \displayarrowstyle{leftharpoondown}\\ \end{tabular} \subsubsection*{Dashed arrows} \begin{tabular}{ll} \displayarrowstyle{dashed}\\ \displayarrowstyle{dashrightarrow}\\ \displayarrowstyle{dashleftarrow}\\ \end{tabular} \subsubsection*{Squiggly arrows} 
\begin{tabular}{ll} \displayarrowstyle{squiggly}\\ \displayarrowstyle{rightsquigarrow}\\ \displayarrowstyle{leftsquigarrow}\\ \displayarrowstyle{leftrightsquigarrow} \end{tabular} \subsubsection*{Non-arrows} \begin{tabular}{ll} \displayarrowstyle{no head}\\ \displayarrowstyle{no tail}\\ \displayarrowstyle{dash}\\ \displayarrowstyle{equal}\\ \end{tabular} \end{multicols} A gray cross (\tikz \path[/pgf/tips=true,gray x-] (0,0) -- (1mm,0);) in the samples above indicates that the corresponding tip is kept unchanged. This allows several arrow styles to be superimposed. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, tail, two heads, dashed] & B \end{tikzcd} \end{codeexample} \subsection{Alternative syntax for arrows} \label{sec:altern-synt-arrows} The following forms of the arrow command were used before the appearance of the quotes syntax for labels, and now may seem somewhat convoluted. They are nonetheless still available for backwards compatibility. \begin{command}{\arrow\opt{\oarg{options}}\marg{direction}\meta{labels}} \end{command} Here, \meta{direction} is a string containing the characters |r|, |l|, |d|, |u| and is used to determine the arrow target. Alternatively, you can specify an explicit matrix cell by replacing \meta{direction} with something of the form \meta{row number}\texttt{-}\meta{column number}, or the name of a node. The trailing \meta{labels} can be the empty string or of the form \begin{verse} \opt{\oarg{label options}}\marg{label text}\meta{more labels}. \end{verse} The equivalent command |\ar| can also be used in this form. Here is an example. 
\begin{codeexample}[]
\begin{tikzcd}
A \arrow{d} \arrow{r}[near start]{\phi}[near end]{\psi}
  & B \arrow[red]{d}{\xi} \\
C \arrow[red]{r}[blue]{\eta}
  & D
\end{tikzcd}
\end{codeexample}
There are further shortened forms:
\begin{pgfmanualentry}
\extractcommand\rar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\lar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\dar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\uar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\drar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\urar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\dlar\opt{\oarg{options}}\meta{labels}\@@
\extractcommand\ular\opt{\oarg{options}}\meta{labels}\@@
\pgfmanualbody
\end{pgfmanualentry}
The first one is equivalent to
\begin{verse}
|\arrow|{\oarg{options}}|{r}|\meta{labels}
\end{verse}
and the other ones work analogously.

\subsection{Usage in plain \TeX{}}
\label{sec:usage-plain-tex}

To use this software in plain \TeX{}, load \tikzname{} and the \texttt{cd} library by saying
\begin{verse}
|\input tikz.tex|\\
\index{cd@\protect\texttt{cd} library}%
\index{Libraries!cd@\protect\texttt{cd}}%
|\usetikzlibrary{cd}|
\end{verse}
The |{tikzcd}| environment should then be replaced by the following:
\begin{plainenvironment}{{tikzcd}\opt{\oarg{options}}}
\end{plainenvironment}
All other functions of this library work as described in this manual without change.

\subsection{Usage in Con\TeX t}
\label{sec:usage-plain-context}

To use this software in Con\TeX t, load \tikzname{} and then the \texttt{cd} library by saying
\begin{verse}
|\usemodule[tikz]|\\
\index{cd@\protect\texttt{cd} library}%
\index{Libraries!cd@\protect\texttt{cd}}%
|\usetikzlibrary[cd]|
\end{verse}
The |{tikzcd}| environment should then be replaced by the following:
\begin{contextenvironment}{{tikzcd}\opt{\oarg{options}}}
\end{contextenvironment}
Moreover, you may replace the column and row separators |&|, |\\| by their Con\TeX t analogues |\NC|, |\pgfmatrixendrow|.
However, you should use |\NC| only \emph{between} cells, and not before the first column or after the last column, as in usual Con\TeX t tables. Similarly, |\pgfmatrixendrow| should be used only between rows. All other functions of this library work as described in this manual without change.

\section{Controlling the appearance of diagrams}
\label{sec:chang-appe-diagr}

This section describes a number of customization keys defined by this package. All keys are located in the path |/tikz/commutative diagrams|. Options passed to |{tikzcd}| or |\arrow| are searched for in that path, and, if not found there, in |/tikz|. To set options globally, it is convenient to use the following command.
\begin{command}{\pgfqkeys{/tikz/commutative diagrams}\marg{options}}
Executes \meta{options} in the path |/tikz/commutative diagrams|.
\end{command}
Besides the keys described in this manual, numerous \tikzname\ parameters can affect the appearance of a diagram. However, only a few of them (namely those appearing in |every diagram|, |every cell|, |every arrow|, and |every label| below) are reinitialized when |{tikzcd}| is called. This means that modifying a certain \tikzname\ parameter globally may or may not affect the output of |{tikzcd}|.

We also point out that besides the options and styles provided by this package, several keys defined by \tikzname{} are useful for arrows. Some examples are |dashed|, |dotted|, and their relatives, |line width=|\meta{dimension}, |color=|\meta{color}, |bend right|, |bend left|, |in=|\meta{angle}, |out=|\meta{angle}, |loop|, etc. See the \pgfname{} manual~\cite[\S\ref*{pgfman-section-cap-joins} and \S\ref*{pgfman-library-to-paths}]{pgfman}.
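By way of illustration, here is a supplementary sketch (not one of the package's standard examples) combining a few of the \tikzname{} keys just mentioned on individual arrows:
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, dashed] \arrow[dr, dotted, bend right] & B \arrow[d, bend left] \\
& C
\end{tikzcd}
\end{codeexample}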
Likewise, \tikzname{} provides several keys that are useful for labels, such as |above|, |below|, |left|, |right|, |swap| (which makes the label be placed on the right side of the arrow, relative to its direction), |sloped|, |pos=|\meta{fraction}, |near start|, |near end|, |inner sep=|\meta{dimension}, |font=|\meta{font command}, |text width=|\meta{dimension}, etc. See the \pgfname{} manual~\cite[\S\ref*{pgfman-section-nodes}, esp.\ \S\ref*{pgfman-section-pos-option}]{pgfman}. \subsection{General options} \label{sec:general-options} \begin{stylekey}{/tikz/commutative diagrams/every diagram} This style is applied to every |{tikzcd}| environment. Initially, it contains the following: \begin{verse} |row sep=normal||,|\\ |column sep=normal||,|\\ |/tikz/baseline=0pt| \end{verse} \end{stylekey} The |baseline=0pt| setting is used to make equation numbers be placed correctly (as an exception, one-row diagrams are anchored at their matrix base, which is exactly what you want). \begin{key}{/tikz/commutative diagrams/diagrams=\meta{options}} This key appends \meta{options} to the style |every diagram|. \end{key} \begin{stylekey}{/tikz/commutative diagrams/every matrix} This style is applied to the \tikzname{} matrix created internally by |{tikzcd}|. Initially, it contains the following: \begin{verse} |/tikz/inner sep=0pt| \end{verse} \end{stylekey} \begin{stylekey}{/tikz/commutative diagrams/every cell} This style is applied to every \tikzname{} matrix cell created by |{tikzcd}|. Initially, it contains the following: \begin{verse} |/tikz/shape=asymmetrical rectangle||,|\\ |/tikz/inner xsep=1ex||,|\\ |/tikz/inner ysep=0.85ex| \end{verse} \end{stylekey} The |asymmetrical rectangle| shape is described in \S\ref{sec:asymm-rect-shape}. The |inner xsep|, |inner ysep| options determine the spacing between a diagram entry and any arrows reaching it. \begin{key}{/tikz/commutative diagrams/cells=\meta{options}} This key appends \meta{options} to the style |every cell|. 
\end{key} \def\printsepaux+#1em{#1\,em} \def\printsep#1#2{\edef\temp{\pgfkeysvalueof{/tikz/commutative diagrams/#1 sep/#2}}\expandafter\printsepaux\temp} \begin{key}{/tikz/commutative diagrams/row sep=\meta{size}} This key acts as a ``frontend'' to \tikzname's |/tikz/row sep| key. If the key \begin{verse} |/tikz/commutative diagrams/row sep/|\meta{size} \end{verse} stores a \meta{value}, then it is read and |/tikz/row sep|=\meta{value} is set. If the key above is not initialized, then \meta{size} is presumed to be a dimension, and |/tikz/row sep|=\meta{size} is set. The initially available sizes, and their values, are the following: \begin{center} \begin{tabular}{cccccc} |tiny| & |small| & |scriptsize| & |normal| & |large| & |huge| \\ \printsep{row}{tiny} & \printsep{row}{small} & \printsep{row}{scriptsize} & \printsep{row}{normal} & \printsep{row}{large} & \printsep{row}{huge} \end{tabular} \end{center} \end{key} Notice that setting, say, |row sep=1cm| globally with |\pgfqkeys{/tikz/commutative diagrams}| will have no effect, since the |row sep| option is re-set at the beginning of each diagram. To make all diagrams have |row sep| equal to 1\,cm, you can modify the meaning of |normal| by saying \begin{verse} |\pgfqkeys{/tikz/commutative diagrams}{row sep/normal=1cm}|. \end{verse} You can also create new sizes, but note that \pgfname\ requires new keys to be initialized explicitly. For example, to create a size |my size|, meaning 1\,ex, you should use \begin{verse} |\pgfqkeys{/tikz/commutative diagrams}{row sep/my size/.initial=1ex}|. \end{verse} \begin{key}{/tikz/commutative diagrams/column sep=\meta{size}} This works analogously to the |row sep| key above. 
The sizes available initially are the following: \begin{center} \begin{tabular}{cccccc} |tiny| & |small| & |scriptsize| & |normal| & |large| & |huge| \\ \printsep{column}{tiny} & \printsep{column}{small} & \printsep{column}{scriptsize} & \printsep{column}{normal} & \printsep{column}{large} & \printsep{column}{huge} \end{tabular} \end{center} \end{key} \begin{key}{/tikz/commutative diagrams/sep=\meta{size}} This key sets |row sep=|\meta{size}, |column sep=|\meta{size}. \end{key} In the examples below, the triangular diagrams would look too wide or too tall if the column or row separation were not set appropriately. \begin{codeexample}[] \begin{tikzcd}[column sep=small] & A \arrow[dl] \arrow[dr] & \\ B \arrow{rr} & & C \end{tikzcd} \end{codeexample} \begin{codeexample}[] \begin{tikzcd}[row sep=tiny] & B \arrow[dd] \\ A \arrow[ur] \arrow[dr] & \\ & C \end{tikzcd} \end{codeexample} Section \ref*{pgfman-section-matrices}.3.2 of the \pgfname{} manual \cite{pgfman} contains further details on the spacing of matrix cells. \begin{stylekey}{/tikz/commutative diagrams/cramped} By default, a generous amount of white space is added around diagram cells, which is appropriate for large, displayed diagrams. The present style removes some of this extra white space, and is intended for smaller diagrams that should blend with the surrounding text, or very wide material that wouldn't fit the page otherwise. \end{stylekey} The picture below shows the (somewhat subtle) difference between the cramped and the non-cramped styles. \begin{codeexample}[pre=\minipage{6cm},post=\endminipage] This \begin{tikzcd} A \arrow[r] & B \end{tikzcd} is a regular diagram.\\ This \begin{tikzcd}[cramped, sep=small] A \arrow[r] & B \end{tikzcd} is a cramped diagram.\\ This $A \to B$ is just a formula. 
\end{codeexample}
Keep in mind that while there are some legitimate uses for |{tikzcd}| diagrams in inline formulas, standard \LaTeX\ constructs such as |\overset| and |\xrightarrow| are often sufficient and should be preferred.

\begin{key}{/tikz/commutative diagrams/math mode=\meta{boolean} (default true)}
This key determines whether or not the contents of a diagram are typeset in math mode. If set globally or diagram-wise, it affects both the diagram entries and arrow labels. If used with |\arrow|, it affects only its labels.
\end{key}

\begin{key}{/tikz/commutative diagrams/background color=\meta{color} (initially white)}
This key stores the name of a color, and is read by styles that fill the background, such as |description| and |crossing over|. It does not cause the background of diagrams to be filled.
\end{key}

\subsection{Global options for arrows}
\label{sec:options-arrows}

\begin{stylekey}{/tikz/commutative diagrams/every arrow}
This style is applied to every |\arrow|. Initially, it contains the following:
\begin{verse}
|/tikz/draw,|\\
|/tikz/line width=rule_thickness||,|\\
|rightarrow|
\end{verse}
\end{stylekey}

\begin{key}{/tikz/commutative diagrams/arrows=\meta{options}}
This key appends \meta{options} to the style |every arrow|.
\end{key}

\begin{key}{/tikz/commutative diagrams/arrow style=\meta{style}}
This key determines which collection of arrow tips is used by the arrow tip selection styles listed in \S\ref{sec:changing-arrow-tips}. The initial setting is suitable for documents using the Computer Modern font at any size. The available choices for \meta{style} are:
\begin{description}
\item[\texttt{Latin Modern}] A small variant of the initial settings, intended for documents using the Latin Modern font at any size.
\item[\texttt{math font}] This setting uses the |Glyph| meta arrow tip described in \S\ref{sec:font-arrow-tips}.
\item[\texttt{tikz}] This setting uses the arrow tips defined in \tikzname's |arrows.meta| library.
It honors the option |/tikz/>|. \end{description} This key is usually invoked in the document preamble, and should be set only once. \end{key} If you are using a font different from Computer Modern or Latin Modern, you may find the best results by selecting the |math font| style. As detailed in \S\ref{sec:font-arrow-tips}, this is not guaranteed to work perfectly with all fonts, but gives good results in many cases. If the \texttt{math font} style gives unsatisfactory results, you can try selecting the \texttt{tikz} style, and setting |/tikz/>| to the value that best matches your font (among those shown in \cite[\S\ref*{pgfman-section-arrows-meta}]{pgfman}). \begin{codeexample}[] \pgfqkeys{/tikz/commutative diagrams}{ arrow style=tikz, diagrams={>={Straight Barb[scale=0.8]}} } \begin{tikzcd} A \arrow[r, tail] \arrow[rd] & B \arrow[d, two heads]\\ & D \end{tikzcd} \end{codeexample} \subsection{Absolute placement of arrows} \label{sec:absol-positioning} The usual behavior of |\arrow| is to produce an arrow starting at the diagram entry where the command appears, and ending at an entry whose location is specified relative to that. The following keys override this behavior, allowing source and target to be selected explicitly. \begin{key}{/tikz/commutative diagrams/from=\meta{argument}} If \meta{argument} is of the form \meta{row number}\texttt{-}\meta{column number}, or if it is a string of characters |r|, |l|, |d|, |u|, this key sets the arrow source to be the corresponding cell in the diagram matrix. Otherwise, it assumes the argument is the name of a node and sets the arrow source to \meta{argument}. \end{key} \begin{key}{/tikz/commutative diagrams/to=\meta{argument}} Similar to |from|, but refers to the arrow target. \end{key} Recall that it is possible to give a specific entry of a \tikzname{} matrix a name by using the \verb!|[!\meta{options}\verb!]|! syntax, as done for entry $C$ in the example below. 
You must be careful not to create nodes whose name contains only the characters |l|, |r|, |u|, |d| if you want to refer to them using |from| or |to|. The following illustrates several different uses of these keys. {\catcode`\|=12 \begin{codeexample}[] \begin{tikzcd} A \arrow[to=Z, red] \arrow[to=2-2, blue] & B \\ |[alias=Z]| C & D \arrow[from=ul, to=1-2, purple] \end{tikzcd} \end{codeexample} } In the next examples, empty labels are used to create nodes for later reference. The |draw=red| option is used to show where these empty nodes are located, but of course you want to remove that when using this technique. \begin{codeexample}[] \begin{tikzcd}[column sep=scriptsize] A \arrow[dr] \arrow[rr, ""{name=U, below, draw=red}]{} & & B \arrow[dl] \\ & C \arrow[Rightarrow, from=U, "\psi"] \end{tikzcd} \end{codeexample} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, bend left=50, ""{name=U, below, draw=red}] \arrow[r, bend right=50, ""{name=D, draw=red}] & B \arrow[Rightarrow, from=U, to=D] \end{tikzcd} \end{codeexample} \subsection{Phantom arrows} \label{sec:phantom-arrows} Sometimes it is necessary to insert a symbol outside the grid subjacent to the diagram. The easiest way to achieve this is as a label to an invisible arrow. \begin{stylekey}{/tikz/commutative diagrams/phantom} Creates an invisible arrow. Labels to this arrow are not invisible. They will be anchored at their center and typeset in full size (i.e., with |\textstyle|). To get smaller labels, as in ordinary arrows, use the |\scriptstyle| command. \end{stylekey} In the picture below, the arrow containing the |phantom| option goes from $A$ to $D$, and the |\ulcorner| symbol ($\ulcorner$) is inserted closer to the starting point $A$. 
\begin{codeexample}[] \begin{tikzcd} A \arrow[r] \arrow[d] \arrow[dr, phantom, "\ulcorner", very near start] & B \arrow[d] \\ C \arrow[r] & D \end{tikzcd} \end{codeexample} \subsection{Fine-tuning the placement of arrows} \label{sec:fine-tuning-arrows} \begin{key}{/tikz/commutative diagrams/shift left=\meta{dimension} (default 0.56ex)} Shifts arrows by \meta{dimension} to the left, relative to the arrow direction. A dimensionless argument causes that multiple of the default value to be used. \end{key} \begin{key}{/tikz/commutative diagrams/shift right=\meta{dimension} (default 1)} A shortcut to |shift left=-|\meta{dimension}. \end{key} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, red, shift left=1.5ex] \arrow[r] \arrow[dr, blue, shift right=1.5ex] \arrow[dr] & B \arrow[d, purple, shift left=1.5ex] \arrow[d]\\ & C \end{tikzcd} \end{codeexample} The default values of |shift left| and |shift right| are appropriate for a pair of parallel arrows, and dimensionless arguments are useful to create sets of multiple parallel arrows. \begin{codeexample}[] \begin{tikzcd} A \arrow[r] & B \arrow[r, shift left] \arrow[r, shift right] & C \arrow[r] \arrow[r, shift left=2] \arrow[r, shift right=2] & \cdots \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/shift=\marg{coordinate}} Shifts arrows by \meta{coordinate}. \end{key} \begin{key}{/tikz/commutative diagrams/xshift=\meta{dimension}} Shifts arrows right by \meta{dimension}. \end{key} \begin{key}{/tikz/commutative diagrams/yshift=\meta{dimension}} Shifts arrows up by \meta{dimension}. \end{key} \begin{codeexample}[] \begin{tikzcd} A \arrow[r, yshift=0.7ex] \arrow[r, yshift=-0.7ex] & B \arrow[d, xshift=0.7ex] \arrow[d, xshift=-0.7ex] \\ & C \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/start anchor=\opt{{\ttfamily\char`\{}\ooarg{coordinate transformations}}\meta{anchor}\opt{\ttfamily\char`\}}} This key specifies at which anchor of the source node the arrow should start. 
Optionally, additional coordinate transformations can be supplied. An empty \meta{anchor} argument causes no anchor to be specified, which is the usual behavior. \end{key} \begin{key}{/tikz/commutative diagrams/end anchor=\opt{{\ttfamily\char`\{}\ooarg{coordinate transformations}}\meta{anchor}\opt{\ttfamily\char`\}}} This key works analogously, but refers to the target node of the arrow. \end{key} See the picture on \S\ref{sec:asymm-rect-shape} for some of the possible values for \meta{anchor}. \begin{codeexample}[] \begin{tikzcd}[cells={nodes={draw=gray}}] A \arrow[r, black] \arrow[r, blue, end anchor=north east] \arrow[r, red, start anchor={[xshift=-1ex]}, end anchor={[yshift=2ex]north east}] & B \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/shorten=\meta{dimension}} This key shortens each end of the arrow by \meta{dimension}. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, shift left] \ar[r, shorten=2mm, shift right] & B \end{tikzcd} \end{codeexample} \end{key} Note that the ends of an arrow can be shortened individually using \tikzname's built-in keys |shorten <| and |shorten >|. \subsection{Three-dimensional diagrams} \label{sec:crossing-over} \begin{stylekey}{/tikz/commutative diagrams/crossing over} This style makes a thicker line, with color |background color|, to be drawn under the current arrow, simulating the effect of its passing over other arrows. \begin{codeexample}[] \begin{tikzcd} A \arrow[dr] & B \arrow[dl, crossing over] \\ C & D \end{tikzcd} \end{codeexample} \end{stylekey} Note that, since arrows are drawn in the order they are read, it may be necessary to defer the drawing of certain arrows to achieve the desired result. This can be done using the |from| key, as shown in the following picture. 
\begin{codeexample}[] \begin{tikzcd}[row sep=scriptsize, column sep=scriptsize] & f^* E_V \arrow[dl] \arrow[rr] \arrow[dd] & & E_V \arrow[dl] \arrow[dd] \\ f^* E \arrow[rr, crossing over] \arrow[dd] & & E \\ & U \arrow[dl] \arrow[rr] & & V \arrow[dl] \\ M \arrow[rr] & & N \arrow[from=uu, crossing over]\\ \end{tikzcd} \end{codeexample} \begin{key}{/tikz/commutative diagrams/crossing over clearance=\meta{dimension} (initially 1.5ex)} This key specifies the width of the background-colored line drawn under a |crossing over| arrow. \end{key} \subsection{Options for labels} \label{sec:options-labels} \begin{stylekey}{/tikz/commutative diagrams/every label} This style is applied to every label produced with |\arrow|. It is initially set to \begin{verse} |/tikz/auto,|\\ |/tikz/font=|\meta{something}|,|\\ |/tikz/inner sep=0.5ex| \end{verse} where \meta{something} is something that makes |\scriptstyle| be applied to labels in math mode. \end{stylekey} The key |/tikz/auto| makes the label be placed on the left side of the arrow, relative to its direction. The key |/tikz/inner sep| controls the distance between a label and the corresponding arrow. \begin{key}{/tikz/commutative diagrams/labels=\meta{options}} This key appends \meta{options} to |every label|. \end{key} \begin{stylekey}{/tikz/commutative diagrams/marking} This style causes the label to be placed over the arrow. It is useful to decorate arrows using ordinary math symbols. \begin{codeexample}[] \begin{tikzcd} A \arrow[r, "/" marking] \arrow[rd, "\circ" marking] & B \\ & C \end{tikzcd} \end{codeexample} \end{stylekey} \begin{stylekey}{/tikz/commutative diagrams/description} This style causes the label to be placed over the arrow, with the background filled. The clearance around the label is determined by \texttt{/tikz/inner sep}. 
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r, "\phi" description] & B
\end{tikzcd}
\end{codeexample}
\end{stylekey}

\section{Advanced usage}
\label{sec:advanced-usage}

This section provides further details on the functioning of this package, with the aim of allowing the advanced user to make a more or less arbitrary use of other \tikzname{} features within |{tikzcd}|.

\subsection{Internals of \texttt{tikzcd} and the arrow commands}
\label{sec:intern-arrow-comm}

The |{tikzcd}| environment works by substituting code of the form
\begin{verse}
|\begin{tikzcd}[|\meta{options}|]|\\
\hspace*{1.5ex} \meta{contents}\\
|\end{tikzcd}|
\end{verse}
with roughly the following:
\begin{verse}
|\begin{tikzpicture}[|\meta{options}|]|\\
\hspace*{1.5ex}| \matrix[matrix of nodes] {|\\
\hspace*{3ex}| |\meta{contents} |\\|\\
\hspace*{1.5ex}| };|\\
\hspace*{1.5ex}| |\meta{paths}\\
|\end{tikzpicture}|
\end{verse}
Not shown above are a number of initialization procedures, such as defining |\arrow| and its relatives, as well as applying the default settings specified by |every diagram| and its relatives. Note that the next-row command |\\| for the last row is inserted by |{tikzcd}|, and therefore should not be present in \meta{contents}. Notice also that you can use the key |execute at end picture| in \meta{options} to have arbitrary \tikzname{} code executed after a diagram is drawn.

Initially, \meta{paths} is the empty string. A command |\arrow[|\meta{options}|]| does nothing at the point it is inserted, and causes the following code to be appended to \meta{paths}:
\begin{verse}
|\path[|\meta{options}|] (|\meta{source~node}|) to (|\meta{target~node}|);|
\end{verse}
By default, \meta{source node} and \meta{target node} refer to the node corresponding to the matrix cell where the command |\arrow| is present. This can be changed using the |from| and |to| keys, or a direction argument (a string consisting of characters |r|, |l|, |d|, |u|).
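For concreteness, here is an approximate sketch of this substitution (initialization and internal node naming are simplified): a diagram such as |\begin{tikzcd} A \arrow[r, "f"] & B \end{tikzcd}| is turned into code along the lines of
\begin{verse}
|\begin{tikzpicture}|\\
\hspace*{1.5ex}| \matrix[matrix of nodes] (m) {|\\
\hspace*{3ex}| A & B \\|\\
\hspace*{1.5ex}| };|\\
\hspace*{1.5ex}| \path (m-1-1) to node {$f$} (m-1-2);|\\
|\end{tikzpicture}|
\end{verse}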
\subsection{Tweaking \texttt{to} paths}
\label{sec:tweaking-to-paths}

Recall that the \texttt{to} path operation used in the paths created by |\arrow| can take a number of options, as described in \S\ref*{pgfman-library-to-paths} of the \pgfname{} manual~\cite{pgfman}. In particular, the key |/tikz/to path| determines the path that is actually drawn, and can be used to do all sorts of fiddling.
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[dr, controls={+(1.5,0.5) and +(-1,0.8)}]
  \arrow[dr, dashed, to path=|- (\tikztotarget)] & \\
& B \arrow[loop right]
\end{tikzcd}
\end{codeexample}
The following example shows how to produce a ``snake'' map. The arrow with the |phantom| option (going from $B$ to $E$) has the sole purpose of creating a coordinate, named |Z|, lying halfway between these two cells. The arrow starting at $C$ has target $D$, so the macros |\tikztostart| and |\tikztotarget| will expand to the nodes corresponding to these two cells in the argument of |to path|. Notice also the use of |\tikztonodes| at the point where we want the label to be inserted.
\begin{codeexample}[]
\begin{tikzcd}
A \arrow[r] & B \arrow[r] \arrow[d, phantom, ""{coordinate, name=Z}]
  & C \arrow[dll, "\delta", rounded corners,
      to path={ -- ([xshift=2ex]\tikztostart.east)
                |- (Z) [near end]\tikztonodes
                -| ([xshift=-2ex]\tikztotarget.west)
                -- (\tikztotarget)}] \\
D \arrow[r] & E \arrow[r] & F
\end{tikzcd}
\end{codeexample}

\subsection{Drawing diagrams directly with Ti\emph{k}Z}
\label{sec:draw-diagr-directly}

If you find that this package is not flexible enough for some particular application, you can use the methods described in \cite{lenders}, \cite{milne} and draw diagrams directly with \tikzname. In this case, you can still use the styles provided here to obtain pictures with a uniform appearance throughout your document. The pictures below show how this can be done (the second one is adapted from \cite{milne}).
\begin{codeexample}[] \begin{tikzpicture}[commutative diagrams/every diagram] \matrix[matrix of math nodes, name=m, commutative diagrams/every cell] { X & \bar X \\ & Y \\}; \path[commutative diagrams/.cd, every arrow, every label] (m-1-1) edge[commutative diagrams/hook] (m-1-2) edge[commutative diagrams/dashed] (m-2-2) (m-1-2) edge (m-2-2); \end{tikzpicture} \end{codeexample} \begin{codeexample}[] \begin{tikzpicture}[commutative diagrams/every diagram] \node (P0) at (90:2.3cm) {$X\otimes (Y\otimes (Z\otimes T))$}; \node (P1) at (90+72:2cm) {$X\otimes ((Y\otimes Z)\otimes T))$} ; \node (P2) at (90+2*72:2cm) {\makebox[5ex][r]{$(X\otimes (Y\otimes Z))\otimes T$}}; \node (P3) at (90+3*72:2cm) {\makebox[5ex][l]{$((X\otimes Y)\otimes Z)\otimes T$}}; \node (P4) at (90+4*72:2cm) {$(X\otimes Y)\otimes (Z\otimes T)$}; \path[commutative diagrams/.cd, every arrow, every label] (P0) edge node[swap] {$1\otimes\phi$} (P1) (P1) edge node[swap] {$\phi$} (P2) (P2) edge node {$\phi\otimes 1$} (P3) (P4) edge node {$\phi$} (P3) (P0) edge node {$\phi$} (P4); \end{tikzpicture} \end{codeexample} \subsection{Issues with active characters} \label{sec:issues-with-active-ampersand} By default, \tikzname{} makes the character |&| active inside matrices, and this causes the error message \begin{verse} |! Package pgfbasematrix Error: Single ampersand used with wrong catcode.| \end{verse} when |{tikzcd}| is used inside the argument to a macro such as a Beamer frame or a footnote. One solution to this problem is to call |{tikzcd}| with the option |ampersand replacement=\&|, and replace all occurrences of |&| with |\&| in the diagram. This procedure is also needed if you want to use matrices in a diagram cell or label. 
\begin{codeexample}[/tikz/commutative diagrams/diagrams={column sep=large}] \begin{tikzcd}[ampersand replacement=\&] A \oplus B \ar[r, "{\begin{pmatrix} e & f \\ g & h \end{pmatrix}}"] \& C \oplus D \end{tikzcd} \end{codeexample} An alternative fix to this issue that does not require replacing |&| with a different column separator consists in adding the following line to your document after all packages have been loaded: \begin{verse} |\def\temp{&} \catcode`&=\active \let&=\temp| \end{verse} However, this may interfere in unexpected ways with other packages. Use this trick at your own risk. A different but related issue is that some packages, notably \texttt{babel}, modify the catcodes of certain characters in a way that may upset \tikzname's parser. To fix this, add \begin{verse} |\usetikzlibrary{babel}| \end{verse} to your document preamble. \section{Additional goodies} \label{sec:general-infra} This package provides some general \pgfname\ infrastructure to achieve its goals. These additional goodies are documented in this section. \subsection{The \texttt{asymmetrical rectangle} shape} \label{sec:asymm-rect-shape} The following shape is used inside |{tikzcd}| to ensure that arrows between nodes in the same row are perfectly horizontal, even if the nodes contain text with different heights and depths. \begin{shape}{asymmetrical rectangle} This shape is similar to the |rectangle| shape, but its |center| is located at a fixed distance of the |base|, as determined by the |center yshift| key, rather than lying at the shape's geometric center. The numerical anchors, as well as |east| and |west|, are modified accordingly, and there are anchors called |real center|, |real east|, and |real west| matching |rectangle|'s original definitions. All other anchors provided by |rectangle| are available and remain unmodified. 
\end{shape}
\begin{key}{/tikz/commutative diagrams/center yshift=\meta{dimension} (initially axis\_height)}
Determines the distance between |asymmetrical rectangle|'s |base| and |center| anchors.
\end{key}
The picture below shows some of the available anchors.
\begin{center}\Huge
\begin{tikzpicture}
\node[name=s,shape=asymmetrical rectangle,shape example] {Asymmetrical rectangle\vrule width 1pt height 2cm};
\foreach \anchor/\placement in {north west/above left, north/above, north east/above right, west/left, center/right, east/right, real west/left, real center/right, real east/right, base west/left, base/right, base east/right, south west/below left, south/below, south east/below right, text/left, 10/right, 130/above, 230/below, -10/below}
\draw[shift=(s.\anchor)] plot[mark=x] coordinates{(0,0)} node[\placement] {\scriptsize\texttt{\anchor}};
\end{tikzpicture}
\end{center}

\subsection{Reading font parameters}
\label{sec:read-font-param}

The following are |pgfmath| functions used to access relevant math font parameters. They take no arguments, but the result depends on the currently selected font size.
\begin{math-function}{axis\_height}
Returns the axis height parameter (a.k.a.\ $\sigma_{22}$) of the document's math font.
\end{math-function}
\begin{math-function}{rule\_thickness}
Returns the fraction rule thickness (a.k.a.\ $\xi_8$) of the document's math font.
\end{math-function}

\subsection{Computer Modern arrow tips}
\label{sec:comp-modern-arrow}

The following arrow tips mimic the Computer Modern designs. It is useful to know that at size 10\,pt, the Computer Modern arrow stems are 0.4\,pt thick; for other font sizes, scale this parameter accordingly, or set \texttt{line width=rule\_thickness}. Notice that by using the mechanism explained in \S\ref{sec:changing-arrow-tips}, it is not necessary, and in fact not advisable, to directly refer to the arrow tips listed in this section inside a |{tikzcd}|.
\begin{multicols}{2}\raggedcolumns \begin{tabular}{ll} \displayarrow{cm to}\\ \displayarrow[/tikz/commutative diagrams/double line]{cm implies}\\ \displayarrow[line width=1.5*rule_thickness]{cm bold to}\\ \displayarrow{cm double to}\\ \displayarrow{cm to reversed}\\ \end{tabular} \begin{tabular}{ll} \displayarrow{cm bar}\\ \displayarrow{cm left to}\\ \displayarrow{cm right to}\\ \displayarrow{cm left hook}\\ \displayarrow{cm right hook}\\ \end{tabular} \end{multicols} \subsection{Glyph arrow tips} \label{sec:font-arrow-tips} As an attempt to provide a general solution to the problem of having matching arrow tips in text and pictures, this feature produces arrow tips that consist of (pieces of) font glyphs carefully placed at the endpoints of the path. To activate it in |{tikzcd}| diagrams, refer to the |arrow style| key. \begin{arrowtipsimple}{Glyph} An arrow tip made from a piece of text. It accepts the following parameters. \begin{key}{/pgf/arrow keys/glyph math command=\meta{name}} The name of a command (to be used inside |$\csname| \dots |\endcsname$|) producing the desired glyph. \end{key} \begin{key}{/pgf/arrow keys/glyph length=\meta{dimension} (initially 1ex)} The length of the portion of the glyph not clipped away. Also used to set the `tip end' parameter. \end{key} \begin{key}{/pgf/arrow keys/glyph axis=\meta{dimension} (initially axis\_height)} A vertical displacement applied to the glyph in order to make the glyph's central axis (typically an arrow stem) aligned with the path. \end{key} \begin{key}{/pgf/arrow keys/glyph shorten=\meta{dimension} (initially -0.1ex)} An additional amount by which the end of the path is shortened. This is used to compensate for the gap that usually exists between the tip end and the glyph's bounding box. \end{key} \end{arrowtipsimple} Below are some usage examples. 
Notice that glyph arrow tips do not scale with \pgfname{} line width but their size depends on the current font size, so you will probably want to set \texttt{line width=rule\_thickness} when using them. Also, contrary to the arrow parameters defined by the \texttt{arrows.meta} library, the parameters described above are evaluated only at the time the arrow tip is drawn, so they can (and should) be given in the units em or ex.
\begin{codeexample}[]
\tikzset{
  math to/.tip={Glyph[glyph math command=rightarrow]},
  loop/.tip={Glyph[glyph math command=looparrowleft, swap]},
  weird/.tip={Glyph[glyph math command=Rrightarrow, glyph length=1.5ex]},
  pi/.tip={Glyph[glyph math command=pi, glyph length=1.5ex, glyph axis=0pt]},
}
\begin{tikzpicture}[line width=rule_thickness]
\draw[loop-math to, bend left] (0,2) to (1,2);
\draw[math to-weird] (0,1) to (1,1);
\draw[pi-pi] (0,0) to (1,0);
\end{tikzpicture}
\end{codeexample}
It is important to be aware of some drawbacks of this feature. First, the transition between a line and the arrow tip may become visible with some printers (especially in low resolutions or draft mode) and document viewers, as you may be able to see in the samples above. Second, these rather long tips may (\tikz[baseline=-axis_height]\draw[dash pattern=on 0.8ex off 0.4ex,-{Glyph[glyph math command=rightarrow]}] (0,0) -- (3.4ex,0);) or may not (\tikz[baseline=-axis_height]\draw[dash pattern=on 0.8ex off 0.4ex,-{Glyph[glyph math command=rightarrow]}] (0,0) -- (4ex,0);) fit nicely with dashed or curved lines. Finally, the method used to place the arrow tip at the end of a stroked path and clip away the arrow stem makes certain assumptions about the font design and could fail in cases where unusual design choices are made.
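For example, in a document whose math font is neither Computer Modern nor Latin Modern, one might activate glyph arrow tips document-wide with a preamble line such as the following (a minimal sketch; see the |arrow style| key in \S\ref{sec:options-arrows}):
\begin{verse}
|\pgfqkeys{/tikz/commutative diagrams}{arrow style=math font}|
\end{verse}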
\section{Introduction} \label{sec:intro} Utilizing large amounts of data has helped machine learning algorithms achieve significant success in many real applications. However, such work also raises privacy concerns. For example, a diagnostic system based on machine learning algorithms may be trained on a large quantity of patient data, such as medical images. It is important to protect training data from adversarial attackers \citep{shokri2017membership}. However, even the most widely-used machine learning algorithms such as deep learning could implicitly memorize the training data \citep{papernot2016semi}, meaning that the learned model parameters implicitly contain information that could violate the privacy of training data. Such algorithms may be readily attacked. The above potential model vulnerability can be addressed by differential privacy (DP), a general notion of algorithm privacy \citep{dwork2008differential,dwork2006calibrating}. It is designed to provide a strong privacy guarantee for general learning procedures, such as statistical analysis and machine learning algorithms, that involve private information. Among the popular machine learning algorithms, Bayesian inference has realized significant success recently, due to its capacity to leverage expert knowledge and employ uncertainty estimates. Notably, the recently developed stochastic gradient Markov chain Monte Carlo (SG-MCMC) technique enables scalable Bayesian inference in a big-data setting. While there have been many extensions of SG-MCMC, little work has been directed at studying the privacy properties of such algorithms. Specifically, \citet{wang2015privacy} showed that an SG-MCMC algorithm with appropriately chosen step sizes preserves differential privacy. In practice, however, their analysis requires the step size to be extremely small to limit the risk of violating privacy. 
Such a small step size is not practically useful for training models with non-convex posterior landscapes, which is the most common case in recent machine learning models. More details of this issue are discussed in Section~\ref{sec:step size_bound}. On the other hand, \citet{abadi2016deep} introduced a new privacy-accounting method, which allows one to keep better track of the privacy loss (defined in Section~\ref{sec:dp_definition}) of sequential algorithms. Further, they proposed a differentially-private stochastic gradient descent (DP-SGD) method for training machine learning models privately. Although they showed a significant improvement in calculating the privacy loss, there is no theory showing that DP-SGD has guaranteed performance under privacy constraints. In this paper, building on this privacy-accounting method, we show that using SG-MCMC for training large-scale machine learning models is sufficient to achieve strong differential privacy. Specifically, we combine the advantages of the aforementioned works, and prove that SG-MCMC methods naturally satisfy the definition of differential privacy even without changing their default step sizes, thus allowing both good utility and strong privacy in practice. \section{Preliminaries} \label{sec:review} The following notation is used throughout the paper. An input database containing $N$ data points is represented as $X=(\db_1,\dots,\db_N)\in \mathcal{X}^N$, where $\db_i\in\mathcal{X}$. The parameters of interest in the model are denoted ${\bm{\theta}} \in \mathbb{R}^r$, {\it e.g.}, these may be the weights of a deep neural network. The identity matrix is denoted $\Ib$. \subsection{Differential Privacy}\label{sec:dp_definition} The concept of differential privacy was proposed by \citet{dwork2008differential} to describe the privacy-preserving property of a randomized mechanism (algorithm) on two adjacent datasets.
\begin{definition}[Adjacent Datasets] Two datasets $X$ and $X^\prime$ are called adjacent if they only differ by one record, {\it e.g.}, $\mathbf{d}_i \neq \mathbf{d}_i^\prime$ for some $i$, where $\mathbf{d}_i \in X$ and $\mathbf{d}_i^\prime \in X^\prime$. \end{definition} \begin{definition}[Differential Privacy]\label{def:dp} Given a pair of adjacent datasets $X$ and $X^\prime$, a randomized mechanism $\mathcal{M}:\mathcal{X}^N\rightarrow\mathcal{Y}$ mapping from data space to its range $\mathcal{Y}$ satisfies $(\epsilon,\delta)$-differential privacy if for all measurable $\mathcal{S} \subset \text{range}(\mathcal{M})$ and all adjacent $X$ and $X^\prime$ $$ Pr(\mathcal{M}(X) \in \mathcal{S}) \leq e^{\epsilon} Pr(\mathcal{M}(X^\prime) \in \mathcal{S}) + \delta $$ where $Pr(e)$ denotes the probability of event $e$, and $\epsilon$ and $\delta$ are two positive real numbers that quantify the loss of privacy. When $\delta=0$, we say the mechanism $\mathcal{M}$ satisfies $\epsilon$-differential privacy. \end{definition} Differential privacy constrains the difference between the outputs of a random mechanism on two adjacent inputs $X$ and $X^\prime$. If $X$ and $X^\prime$ differ only in one record $\db_i$, then as long as $\epsilon$ and $\delta$ are small enough (making the two probabilities above close to each other), an outside attacker observing the outputs cannot tell whether the output resulted from $X$ or $X^\prime$. Thus, the existence of the record $\db_i$ is protected. Since the record in which the two datasets differ is arbitrary, the privacy protection applies to all records. To describe the randomness of the outputs of $\mathcal{M}$ with inputs $X$ and $X^\prime$, we define a random variable called the privacy loss. \begin{definition}[Privacy Loss] Let $\mathcal{M}:\mathcal{X}^N\rightarrow\mathcal{Y}$ be a randomized mechanism, and let $X$ and $X^\prime$ be a pair of adjacent datasets.
Let $\mathsf{aux}$ denote any auxiliary input that does not depend on $X$ or $X^\prime$. For an outcome $o\in\mathcal{Y}$ from the mechanism $\mathcal{M}$, the privacy loss at $o$ is defined as: $$c(o;\mathcal{M},\textsf{aux},X,X^\prime)\overset{\Delta}{=}\log\frac{Pr[\mathcal{M}(\textsf{aux},X)=o]}{Pr[\mathcal{M}(\textsf{aux},X^\prime)=o]}$$ \end{definition} It can be shown that $(\epsilon,\delta)$-DP is equivalent to a tail bound on the distribution of the corresponding privacy loss random variable \citep{abadi2016deep} (see Theorem~\ref{theorem:1} in the next section); this random variable is thus an important tool for quantifying the privacy loss of a mechanism. \subsection{Moments Accountant Method} \label{sec:moment} A common approach for achieving differential privacy is to introduce random noise that hides the existence of any particular data point. For example, the Laplace and Gaussian mechanisms \citep{dwork2014algorithmic} add {\it i.i.d.}\! Laplace noise and Gaussian noise, respectively, to an algorithm's output. While a large amount of noise makes an algorithm differentially private, it may sacrifice the utility of the algorithm. Therefore, in such paradigms, it is important to calculate the smallest amount of noise required to achieve a given level of differential privacy. The moments accountant method proposed by \citet{abadi2016deep} keeps track of a bound on the moments of the privacy loss random variable defined above. As a result, it allows one to calculate the amount of noise needed to keep the privacy loss under a given threshold. \begin{definition}[Moments Accountant] Let $\mathcal{M}:\mathcal{X}^N\rightarrow\mathcal{Y}$ be a randomized mechanism, and let $X$ and $X^\prime$ be a pair of adjacent data sets. Let $\mathsf{aux}$ denote any auxiliary input that is independent of both $X$ and $X^\prime$.
The moments accountant with an integer parameter $\lambda$ is defined as: $$\alpha_{\mathcal{M}}(\lambda)\overset{\Delta}{=}\underset{\textsf{aux},X,X^\prime}{\max}\alpha_{\mathcal{M}}(\lambda;\textsf{aux},X,X^\prime) $$ where $\alpha_\mathcal{M}(\lambda;\textsf{aux},X,X^\prime)\triangleq\log\mathbb{E}[\exp(\lambda c(\mathcal{M},\textsf{aux},X,X^\prime))]$ is the log of the moment generating function evaluated at $\lambda$, {\it i.e.}, the $\lambda^{\text{th}}$ moment of the privacy loss random variable. \end{definition} The following moments bound for the Gaussian mechanism with random sampling is proved in \citep{abadi2016deep}. \begin{theorem} \label{theorem:1} \textbf{[Composability]} Suppose that $\mathcal{M}$ consists of a sequence of adaptive mechanisms $\mathcal{M}_1,\dots,\mathcal{M}_k$ where $\mathcal{M}_i:\prod_{j=1}^{i-1}\mathcal{Y}_j\times \mathcal{X}\rightarrow\mathcal{Y}_i$, and $\mathcal{Y}_i$ is the range of the $i$th mechanism, {\it i.e.}, $\mathcal{M} = \mathcal{M}_k\circ \cdots\circ\mathcal{M}_1$, with $\circ$ the composition operator. Then, for any $\lambda$, $$\alpha_{\mathcal{M}}(\lambda)=\sum_{i=1}^k\alpha_{\mathcal{M}_i}(\lambda)$$ where the auxiliary input for $\alpha_{\mathcal{M}_i}$ is the outputs of the preceding mechanisms, $\{o_j\}$ for $j < i$, and $\alpha_{\mathcal{M}}$ takes the outputs $\{o_i\}$ for $i<k$ as its auxiliary input. \textbf{[Tail bound]} For any $\epsilon>0$, the mechanism $\mathcal{M}$ is $(\epsilon,\delta)$-DP for $$\delta=\underset{\lambda}{\min}\exp\left(\alpha_\mathcal{M}(\lambda)-\lambda\epsilon\right)$$ \end{theorem} For the rest of this paper, for simplicity we only consider mechanisms that output a real-valued vector, {\it i.e.}, $\mathcal{M}:\mathcal{X}^N\rightarrow \mathbb{R}^p$.
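To make these definitions concrete, consider the scalar Gaussian mechanism $\mathcal{M}(X)=f(X)+N(0,\sigma^2)$ with sensitivity $|f(X)-f(X^\prime)|=1$, whose moments accountant has the known closed form $\alpha(\lambda)=\lambda(\lambda+1)/(2\sigma^2)$. The sketch below (illustrative parameter values, not taken from the paper) checks this closed form by Monte Carlo and evaluates the tail bound of Theorem~\ref{theorem:1} for a single invocation.

```python
import numpy as np

# Gaussian mechanism M(X) = f(X) + N(0, sigma^2) with sensitivity 1:
# privacy loss c(o) = log p_0(o) - log p_1(o) = ((o-1)^2 - o^2)/(2 sigma^2),
# and alpha(lambda) = log E[exp(lambda c)] = lambda*(lambda+1)/(2 sigma^2).
rng = np.random.default_rng(0)
sigma, lam = 4.0, 8
o = rng.normal(0.0, sigma, size=2_000_000)        # outputs under M(X)
c = ((o - 1.0) ** 2 - o ** 2) / (2 * sigma ** 2)  # privacy loss at each o
alpha_mc = np.log(np.mean(np.exp(lam * c)))       # Monte Carlo estimate
alpha_cf = lam * (lam + 1) / (2 * sigma ** 2)     # closed form: 2.25

# Tail bound of Theorem 1 for one invocation:
# delta = min over lambda of exp(alpha(lambda) - lambda * eps).
eps = 1.0
delta = min(np.exp(l * (l + 1) / (2 * sigma ** 2) - l * eps)
            for l in range(1, 64))
print(alpha_mc, alpha_cf, delta)
```

The Monte Carlo estimate agrees with the closed form, and minimizing over $\lambda$ converts the moment bound into a concrete $(\epsilon,\delta)$ guarantee.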
Using the properties above, the following lemma about the moments accountant has been proven in \citep{abadi2016deep}: \begin{lemma} \label{lemma:1} Suppose that $f:\mathcal{X}\rightarrow \mathbb{R}^p$ with $\|f(\cdot)\|\leq 1$. Let $\sigma\geq 1$ and let $J$ be a mini-batch sample with sampling probability $q$, {\it i.e.}, $q=\frac{\tau}{N}$ with minibatch size $\tau$. If $q<\frac{1}{16\sigma}$, then for any positive integer $\lambda\leq \sigma^2\ln \frac{1}{q\sigma}$, the mechanism $\mathcal{M}(X) = \sum_{i\in J} f(\db_i)+N(0,\sigma^2I)$ satisfies $$\alpha_{\mathcal{M}}(\lambda)\leq \frac{q^2\lambda(\lambda+1)}{(1-q)\sigma^2}+O(q^3\lambda^3/\sigma^3)$$ \end{lemma} In the following, we build our analysis of differentially-private SG-MCMC on this lemma. \subsection{Stochastic Gradient Markov Chain Monte Carlo} SG-MCMC is a family of scalable Bayesian sampling algorithms, developed recently to generate approximate samples from a posterior distribution $p({\bm{\theta}} | X)$, with ${\bm{\theta}}$ a model parameter vector. SG-MCMC mitigates the slow mixing and non-scalability issues encountered by traditional MCMC algorithms by $\RN{1})$ adopting gradient information of the posterior distribution, and $\RN{2})$ using minibatches of data in each iteration of the algorithm. It is particularly suitable for large-scale Bayesian learning, and thus is becoming increasingly popular. SG-MCMC algorithms are discretized numerical approximations of continuous-time It\^{o} diffusions \citep{ChenDC:NIPS15,MaCF:NIPS15}, whose stationary distributions are designed to coincide with $p({\bm{\theta}} | X)$.
Formally, an It\^{o} diffusion is written as \begin{align}\label{eq:ito} \mathrm{d}{\bm{\Theta}}_t &= F({\bm{\Theta}}_t)\mathrm{d}t + g({\bm{\Theta}}_t)\mathrm{d}\mathcal{W}_t~, \end{align} where $t$ is the time index; ${\bm{\Theta}}_t \in \mathbb{R}^p$ represents the full set of variables in the system, where typically ${\bm{\Theta}}_t \supseteq {\bm{\theta}}_t$ (thus $p \geq r$) is an augmentation of the model parameters; and $\mathcal{W}_t \in \mathbb{R}^p$ is $p$-dimensional Brownian motion. The functions $F: \mathbb{R}^p \to \mathbb{R}^p$ and $g: \mathbb{R}^p \rightarrow \mathbb{R}^{p\times p}$ are assumed to satisfy the Lipschitz continuity condition \citep{Ghosh:book11}. Based on the It\^{o} diffusion, SG-MCMC algorithms further develop three components for scalable inference: $\RN{1})$ define appropriate functions $F$ and $g$ in \eqref{eq:ito} so that their (marginal) stationary distributions coincide with the target posterior distribution $p({\bm{\theta}}|X)$; $\RN{2})$ replace $F$ or $g$ with unbiased stochastic approximations to reduce the computational complexity, {\it e.g.}, approximating $F$ with a random subset of data points instead of using the full data; and $\RN{3})$ solve the generally intractable continuous-time It\^{o} diffusion with a numerical method, which typically introduces estimation errors that are controllable. The stochastic gradient Langevin dynamics (SGLD) method defines ${\bm{\Theta}} = {\bm{\theta}}$, and $F({\bm{\Theta}}_t) = -\nabla_{{\bm{\theta}}} U({\bm{\theta}}), \hspace{0.1cm} g({\bm{\Theta}}_t) = \sqrt{2}\Ib_r$, where $U({\bm{\theta}}) \triangleq -\log p({\bm{\theta}}) - \sum_{i=1}^N \log p(\db_i | {\bm{\theta}})$ denotes the unnormalized negative log-posterior, and $p({\bm{\theta}})$ is the prior distribution of ${\bm{\theta}}$.
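As a minimal illustration of such a discretized diffusion (a toy sketch, not the paper's private algorithm), the Euler-discretized Langevin update $\theta \leftarrow \theta - \eta\,\nabla U(\theta) + \sqrt{2\eta}\,\xi$ with $U(\theta)=\theta^2/2$ draws approximate samples from a standard Gaussian:

```python
import numpy as np

# Euler discretization of the Langevin diffusion
# d theta = -grad U(theta) dt + sqrt(2) dW, with U(theta) = theta^2/2,
# whose stationary distribution is the standard Gaussian N(0, 1).
rng = np.random.default_rng(0)
eta, theta, samples = 0.01, 0.0, []
for t in range(60_000):
    grad_u = theta                                     # gradient of theta^2/2
    theta += -eta * grad_u + np.sqrt(2 * eta) * rng.normal()
    if t >= 10_000:                                    # discard burn-in
        samples.append(theta)
samples = np.asarray(samples)
print(samples.mean(), samples.var())   # approximately 0 and 1
```

The $O(\eta)$ discretization error mentioned below shows up as a small bias in the sampled variance, which shrinks as the step size decreases.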
The stochastic gradient Hamiltonian Monte Carlo (SGHMC) method \citep{pmlr-v32-cheni14} is based on second-order Langevin dynamics, which defines ${\bm{\Theta}} = ({\bm{\theta}}, \qb)$, and {\small\begin{align*} F({\bm{\Theta}}_t)= \Big( \begin{array}{c} \qb \\ -B \qb-\nabla_{\bm{\theta}} U({\bm{\theta}}) \end{array} \Big),\hspace{0.1cm} g({\bm{\Theta}}_t) = \sqrt{2B}\Big( \begin{array}{cc} {\bf 0} & {\bf 0} \\ {\bf 0} & \Ib_r \end{array} \Big) \end{align*}} for a scalar $B > 0$; $\qb$ is an auxiliary variable known as the momentum \citep{pmlr-v32-cheni14,DingFBCSN:NIPS14}. Similar formulae can be defined for other SG-MCMC algorithms, such as the stochastic gradient thermostat \citep{DingFBCSN:NIPS14}, and other variants with Riemannian information geometry \citep{PattersonT:NIPS13,MaCF:NIPS15,LICCC:AAAI16}. To make the algorithms scalable in a large-data setting, {\it i.e.}, when $N$ is large, an unbiased estimate of $\nabla_{\bm{\theta}} U({\bm{\theta}})$ is calculated with a random subset of the full data, denoted $\nabla_{\bm{\theta}} \tilde{U}({\bm{\theta}})$ and defined as \begin{align*} \nabla_{\bm{\theta}} \tilde{U}({\bm{\theta}}) = -\nabla\log p({\bm{\theta}}) - \frac{N}{\tau}\sum_{\db_i\in J}\nabla\log p(\db_i|{\bm{\theta}})~, \end{align*} where $J$ is a random minibatch of the data with size $\tau$ (typically $\tau \ll N$). We adopt the popular Euler method to solve the continuous-time diffusion by an $\eta$-time discretization (with step size $\eta$). The Euler method is a first-order numerical integrator, thus inducing an $O(\eta)$ approximation error \citep{ChenDC:NIPS15}. Algorithm \ref{alg:1} illustrates the application of the SGLD algorithm with the Euler integrator for differential privacy, which is almost the same as the original SGLD except for the gradient norm clipping in Step 4 of the algorithm.
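Before the formal statement, the clipped, noisy minibatch update can be sketched on a toy Gaussian-mean model (a sketch in the spirit of Algorithm~\ref{alg:1}; all parameter values are illustrative, the signs follow the gradient-ascent convention on the log-posterior, and the injected noise uses the standard SGLD scale $\sqrt{2\eta/N}$):

```python
import numpy as np

# Clipped, noisy SGLD sketch for a toy model: d_i ~ N(mu, 1) with prior
# mu ~ N(0, 1), so the exact posterior is N(sum(d)/(N+1), 1/(N+1)).
rng = np.random.default_rng(0)
N, tau, L, eta = 1000, 100, 4.0, 0.01
d = rng.normal(2.0, 1.0, size=N)

mu, samples = 0.0, []
for t in range(20_000):
    J = rng.choice(N, size=tau, replace=False)       # random minibatch
    g = d[J] - mu                                    # per-example grad log-lik
    g = g / np.maximum(1.0, np.abs(g) / L)           # norm clipping (Step 4)
    drift = eta * (-mu / N + g.mean())               # (1/N)-rescaled grad log-post
    mu += drift + np.sqrt(2 * eta / N) * rng.normal()
    if t >= 2_000:                                   # burn-in
        samples.append(mu)
samples = np.asarray(samples)
print(samples.mean(), samples.var())  # near posterior mean and var 1/(N+1)
```

The clipping threshold $L$ bounds each example's contribution, which is exactly what the Lipschitz argument below requires.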
The norm-clipping step ensures that the computed gradients satisfy the Lipschitz condition, a common assumption on loss functions in the differential-privacy setting \citep{song2013stochastic,bassily2014differentially,wang2015privacy}. The reasoning is intuitively clear: since differential privacy requires the output to be insensitive to changes in any single data point, it is crucial to bound the impact of a single data point on the target function. The Lipschitz condition is easily met by clipping the gradient norm of the loss function, a common technique in gradient-based algorithms to prevent gradient explosion \citep{pascanu2013difficulty}. The clipping is equivalent to using an adaptive step size as in preconditioned SGLD \citep{LICCC:AAAI16}, and thus does not impact the convergence rate in terms of the estimation accuracy discussed in Section~\ref{sec:bound}. \begin{algorithm} \caption{Stochastic Gradient Langevin Dynamics with Differential Privacy} \begin{algorithmic}[1] \label{alg:1} \REQUIRE Data $X$ of size $N$, mini-batch size $\tau$, number of iterations $T$, prior $p({\bm{\theta}})$, privacy parameters $\epsilon, \delta$, gradient norm bound $L$, and a decreasing/fixed step size sequence $\{\eta_t\}$. Set $t=1$. \FOR{$t\in[T]$} \STATE Take a random sample $J_t$ with sampling probability $q = \tau/N$. \STATE Calculate $g_t(\db_i)\leftarrow \nabla \log\ell({\bm{\theta}}_t|\db_i)$ \STATE Clip norm: $\tilde{g}_t(\db_i)\leftarrow g_t(\db_i)/\max\left(1,\frac{\|g_t(\db_i)\|_2}{L}\right)$ \STATE Sample $\mathbf{z}_t\sim N(0, \frac{\eta_t}{N}\Ib)$ \STATE Update ${\bm{\theta}}_{t+1} \leftarrow{\bm{\theta}}_t-\eta_t\left(\frac{\nabla\log p({\bm{\theta}})}{N}+\frac{1}{\tau}\sum_{i\in J_t}\tilde{g}_t(\db_i)\right) +\mathbf{z}_t$ \STATE Return ${\bm{\theta}}_{t+1}$ as a posterior sample (after a predefined burn-in period). \STATE Increment $t\leftarrow t+1$.
\ENDFOR \STATE Output ${\bm{\theta}}_T$ and compute the overall privacy cost $(\epsilon,\delta)$ using the moments accountant method. \end{algorithmic} \end{algorithm} \section{Privacy Analysis for Stochastic Gradient Langevin Dynamics} \label{sec:dp} We first develop theory to prove that Algorithm~\ref{alg:1} is $(\epsilon,\delta)$-DP under certain conditions. Our theory shows a significant improvement in the differential privacy obtained by SGLD over the most closely related work by \cite{wang2015privacy}. To study the estimation accuracy (utility) of the algorithm, the corresponding mean square error estimation bounds are then proved under such differential-privacy settings. \subsection{Step size bounds for differentially-private SGLD}\label{sec:step size_bound} Previous work on SG-MCMC has shown that an appropriately chosen decreasing step size sequence can be adopted for an SG-MCMC algorithm \citep{TehTV:arxiv14,ChenDC:NIPS15}. For a sequence of the form $\eta_t = O(t^{-\alpha})$, the optimal value is $\alpha = \frac{1}{3}$ in order to obtain the optimal mean square error bound (defined in Section~\ref{sec:bound}). Consequently, we first consider $\eta_t = O(t^{-1/3})$ in our analysis below, where the constant of the step size can be specified with the parameters of the DP setting, as shown in Theorem~\ref{theo:dp}. The differential privacy property under a fixed step size is discussed subsequently. \begin{theorem}\label{theo:dp} If we let the step size decrease at the rate of $O(t^{-1/3})$, there exist positive constants $c_1$ and $c_2$ such that given the sampling probability $q=\tau/N$ and the number of iterations $T$, for any $\epsilon<c_1q^2T^{2/3}$, Algorithm \ref{alg:1} satisfies $(\epsilon, \delta)$-DP as long as $\eta_t$ satisfies: \begin{enumerate} \item $\eta_t\leq \frac{N}{L^2}$ \item $\eta_t>\frac{q^2N}{256L^2}$ \item $\eta_t<\frac{\epsilon^2Nt^{-1/3}}{c_2^2L^2T^{2/3}\log(1/\delta)}$.
\end{enumerate} \end{theorem} \begin{proof} See Section \ref{app:theorem3} of the SM. \end{proof} \begin{remark} In practice, the first condition is easy to satisfy, as $\frac{N}{L^2}$ is often much larger than the step size. The second condition is also easy to satisfy with properly chosen $L$ and $q$, and we verify this condition in our experiments. In the rest of this section, we focus only on the third condition as an upper bound on the step size.\end{remark} It is now clear that with the optimal decreasing step size sequence (in terms of the MSE defined in Section~\ref{sec:bound}), Algorithm~\ref{alg:1} maintains $(\epsilon, \delta)$-DP. There are other variants of SG-MCMC which use fixed step sizes. We show in Theorem~\ref{remark:fix_DP} that in this case, the algorithm still satisfies $(\epsilon, \delta)$-DP. \begin{theorem}\label{remark:fix_DP} Under the same setting as Theorem~\ref{theo:dp}, but using a fixed step size $\eta_t = \eta$, Algorithm \ref{alg:1} satisfies $(\epsilon,\delta)$-DP whenever $\eta<\frac{\epsilon^2N}{c^2L^2T\log(1/\delta)}$ for another constant $c$. \end{theorem} \begin{proof} See Section~\ref{app:fixed_DP} of the SM. \end{proof} In \citep{wang2015privacy}, the authors proved that the SGLD method is $(\epsilon, \delta)$-DP if the step size $\eta_t$ is small enough to satisfy \begin{align*} \eta_t<&\frac{\epsilon^2N}{128L^2T\log\left(\frac{2.5T}{\delta}\right)\log(2/\delta)}~. \end{align*} This bound is much smaller than ours (as explained below), and is thus impractical in real applications. To address this problem, \cite{wang2015privacy} proposed the Hybrid Posterior Sampling algorithm, which uses the One Posterior Sample (OPS) estimator for the ``burn-in'' period, followed by SGLD with a small step size to guarantee the differential privacy property.
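As a quick numerical check of the scale of the bound above (the constants $c_1, c_2$ of Theorem~\ref{theo:dp} are empirical, so only the closed form of \cite{wang2015privacy} is evaluated here), plugging in the MNIST-style setting discussed later gives a step size on the order of $10^{-6}$:

```python
import math

# Step-size upper bound of Wang et al. (2015), evaluated at the setting
# used in the text: N = 50000, eps = 0.1, delta = 1e-5, T = 10000, L = 1.
N, eps, delta, T, L = 50_000, 0.1, 1e-5, 10_000, 1.0
eta_max = (eps ** 2 * N
           / (128 * L ** 2 * T
              * math.log(2.5 * T / delta) * math.log(2 / delta)))
print(eta_max)  # on the order of 1.5e-6, versus ~0.1 under Theorem 3
```

Such a step size is several orders of magnitude below the defaults commonly used to train deep models, which is the practical limitation addressed by Theorem~\ref{theo:dp}.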
We note that for complicated models, especially those with non-convex target posterior landscapes, such an upper bound on the step size still causes practical problems even with OPS. One issue is that the Markov chain will mix very slowly with a small step size, leading to highly correlated samples. By contrast, our new upper bound on the step size in Theorem~\ref{theo:dp}, $\eta_t<\frac{\epsilon^2Nt^{-1/3}}{c_2^2L^2T^{2/3}\log(1/\delta)}$, improves the bound in \cite{wang2015privacy} by a factor of $T^{1/3}\log (T/\delta)$ at the first iteration. Note that the constant $c_2^2$ in our bound is empirically smaller than $128$ (see the calculation method in Section~\ref{app:constants} of the SM), thus still giving a larger bound overall. To provide intuition on how our bound compares with that of \cite{wang2015privacy}, consider the MNIST data set with $N=50,000$. If we set $\epsilon=0.1$, $\delta=10^{-5}$, $T=10000$, and $L=1$, our upper bound evaluates to $\eta_t<0.103$, consistent with the default step size used when training on MNIST \citep{LICCC:AAAI16}. More importantly, our theory indicates that using SGLD with the default step size $\eta_t=0.1$ achieves $(\epsilon,\delta)$-DP with a small privacy loss on the MNIST dataset. As a comparison, \citet{wang2015privacy} gives a much smaller upper bound of $\eta_t<1.54\times 10^{-6}$, which is too small to be of practical use. A more detailed comparison of these two bounds is given in Section~\ref{sec:exp_upper}, where experimental results are considered. Finally, note that as in \citep{wang2015privacy}, our analysis can be easily extended to other SG-MCMC methods such as SGHMC \citep{pmlr-v32-cheni14} and SGNHT \citep{DingFBCSN:NIPS14}. We do not present those results here for conciseness. \subsection{Utility Bounds} \label{sec:bound} The above theory indicates that, with a smaller step size, one can obtain an SG-MCMC algorithm that preserves more privacy, {\it e.g.}, $(0, \delta)$-DP in the limit of zero step size.
On the other hand, when the step size approaches zero, we obtain (theoretically) exact samples from the posterior distribution. In this case, the implication for privacy becomes transparent, because changing one data point typically would not impact prediction under a posterior distribution in a Bayesian model. However, as noted above, this does not mean we can choose arbitrarily small step sizes, because doing so would hinder the exploration of the parameter space, leading to slow mixing. To measure the mixing and utility properties, we investigate the estimation accuracy bounds under the differential-privacy setting. Following standard settings for SG-MCMC \citep{ChenDC:NIPS15,VollmerZT:arxiv15}, we use the {\em mean square error} (MSE) under a target posterior distribution to measure the estimation accuracy for a Bayesian model. Specifically, our utility goal is to evaluate the {\em posterior average} of a test function $\phi({\bm{\theta}})$, defined as $\bar{\phi} \triangleq \int \phi({\bm{\theta}}) p({\bm{\theta}}|X) \mathrm{d}{\bm{\theta}}$, with posterior distribution $p({\bm{\theta}}|X)$. The posterior average is typically infeasible to compute, thus we use the {\em sample average}, $\hat{\phi}_T \triangleq \frac{1}{\sum_t \eta_t} \sum_{t = 1}^T \eta_t \phi({\bm{\theta}}_{t})$, to approximate $\bar{\phi}$, where $\{{\bm{\theta}}_{t}\}_{t=1}^T$ are the samples from an SG-MCMC algorithm. The MSE we desire is defined as $\mathbb{E}\left(\hat{\phi}_T - \bar{\phi}\right)^2$. Our result is summarized in Proposition~\ref{lem:mse}, an extension of Theorem~3 in \citep{ChenWZSC:arxiv17} to differentially-private SG-MCMC with decreasing step sizes. In this section we impose the same assumptions on an SG-MCMC algorithm as in previous work \citep{VollmerZT:arxiv15,ChenDC:NIPS15}, which are detailed in Section~\ref{app:assumption} of the SM.
We assume both the corresponding It\^{o} diffusion (in terms of its coefficients) and the numerical method of an SG-MCMC algorithm to be well behaved. \begin{proposition}\label{lem:mse} Under Assumption~\ref{ass:assumption1} in the SM, the MSE of SGLD with a decreasing step size sequence $\{\eta_t<\frac{\epsilon^2Nt^{-1/3}}{c_2^2L^2T^{2/3}\log(1/\delta)}\}$ as in Theorem~\ref{theo:dp} is bounded, for a constant $C$ independent of $\{\eta, T, \tau\}$ and a constant $\Gamma_M$ depending on $T$ and $U(\cdot)$, as \begin{align*} \mathbb{E}\left(\hat{\phi}_T - \bar{\phi}\right)^2 \leq C\left(\frac{2}{3}\left(\frac{N}{\tau}-1\right)N^2\Gamma_MT^{-1}+\frac{1}{3\tilde{\eta}_0}+2\tilde{\eta}_0^2T^{-2/3}\right)~, \end{align*} where $\tilde{\eta}_0\triangleq\frac{\epsilon^2}{c_2^2L^2\log(1/\delta)}.$ \end{proposition} The bound in Proposition~\ref{lem:mse} indicates how the MSE decreases to zero w.r.t.\! the number of iterations $T$ and other parameters. It is consistent with standard SG-MCMC, leading to a similar convergence rate. Interestingly, we can also derive the optimal bounds w.r.t.\! the privacy parameters. For example, the optimal value of $\tilde{\eta}_0$ when fixing the other parameters is $\tilde{\eta}_0 = O\left(T^{2/9}\right)$. Consequently, we have $\epsilon^2 = O\left(L^2T^{2/9}\log(1 / \delta)\right)$ in the optimal MSE setting. In contrast to the bound for standard SG-MCMC \cite{ChenDC:NIPS15}, in an $(\epsilon, \delta)$-DP setting the MSE bound contains an asymptotic bias term of $\frac{1}{3\tilde{\eta}_0}$ as long as $\epsilon$ and $\delta$ are nonzero. We also wish to study the MSE in the fixed-step-size case. Consider the general situation, {\it i.e.}, $\eta_t = \eta$, for which \cite{ChenWZSC:arxiv17} proved the MSE bound rephrased in Lemma~\ref{lem:mse1}.
\begin{lemma}\label{lem:mse1} Under the same assumptions as Proposition~\ref{lem:mse}, the MSE of SGLD is bounded as\footnote{With a slight abuse of notation, the constant $C$ is independent of $\{\eta, T, \tau\}$, but might be different from that in Proposition~\ref{lem:mse}.}: {\begin{align*} \mathbb{E}&\left(\hat{\phi}_T - \bar{\phi}\right)^2 \leq C\left(\frac{(\frac{N}{\tau}-1)N^2\Gamma_M}{T} + \frac{1}{T\eta} + \eta^2\right)~. \end{align*}} Furthermore, the optimal MSE w.r.t.\! the step size $\eta$ is bounded by \begin{align*} \mathbb{E}&\left(\hat{\phi}_T - \bar{\phi}\right)^2 \leq C\left(\frac{(\frac{N}{\tau}-1)N^2\Gamma_M}{T} + T^{-2/3}\right)~, \end{align*} with the optimal step size being $\eta = O(T^{-1/3})$. \end{lemma} From Lemma~\ref{lem:mse1}, the optimal step size, {\it i.e.}, $\eta = O(T^{-1/3})$, decays more slowly than both our differential-privacy-based bound ($\eta = O(T^{-1})$) and that of \cite{wang2015privacy}, {\it i.e.}, $\eta = O(T^{-1}\log^{-1} T)$. This means that for $T$ large enough, neither our method nor that of \cite{wang2015privacy} may run at the optimal step size. A remedy is to increase the step size at the cost of increased privacy loss. Because, for the same privacy loss, our step sizes are typically larger than those in \cite{wang2015privacy}, our algorithm is able to obtain both higher approximation accuracy and stronger differential privacy. Specifically, to guarantee the differential-privacy property stated in Theorem~\ref{remark:fix_DP}, we substitute the step size $\eta = \frac{\epsilon^2N}{c^2L^2T\log(1/\delta)}$ into the MSE formula in Lemma~\ref{lem:mse1}. Consequently, the MSE is bounded by $\mathbb{E}\left(\hat{\phi}_T - \bar{\phi}\right)^2 \leq C\left(\frac{(\frac{N}{\tau}-1)N^2\Gamma_M}{T} + \frac{c_2^2L^2\log \frac{1}{\delta}}{\epsilon^2 N} + \frac{\epsilon^4 N^2}{c_2^4L^4T^2\log^2(1/\delta)}\right)$, which is smaller than that for the method of \cite{wang2015privacy}.
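The $\eta = O(T^{-1/3})$ order in Lemma~\ref{lem:mse1} comes from balancing the $\frac{1}{T\eta}$ and $\eta^2$ terms: setting the derivative $-\frac{1}{T\eta^2} + 2\eta$ to zero gives $\eta^\ast = (2T)^{-1/3}$, which a quick grid search confirms:

```python
# The eta-dependent part of the MSE bound in Lemma 2 is 1/(T*eta) + eta^2;
# its analytic minimizer is eta* = (2T)^(-1/3), i.e. eta = O(T^(-1/3)).
def mse_terms(eta, T):
    return 1.0 / (T * eta) + eta ** 2

results = {}
for T in (10 ** 3, 10 ** 5, 10 ** 7):
    grid = [10 ** (k / 200) for k in range(-1200, 1)]   # 1e-6 .. 1
    eta_star = min(grid, key=lambda e: mse_terms(e, T))
    results[T] = (eta_star, (2 * T) ** (-1 / 3))
print(results)  # grid minimizer tracks (2T)^(-1/3) at every T
```

The first, $T$-independent term of the bound and the constant $C$ are omitted since they do not affect the minimizing step size.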
\section{Experiments} \label{sec:experiment} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{Bound_large.eps} \caption{Upper bounds for the fixed-step-size and decreasing-step-size cases with different privacy losses $\epsilon$, as well as the upper bound from \citet{wang2015privacy}.} \label{fig:bound} \end{center} \end{figure} We test the proposed differentially-private SG-MCMC algorithms on several tasks, including logistic regression and deep neural networks, and compare with related Bayesian and optimization methods in terms of both privacy and utility. \subsection{Upper Bound}\label{sec:exp_upper} \vspace{-0.1cm} We first compare our upper bound for the step size in Section~\ref{sec:step size_bound} with the bound of \citet{wang2015privacy}. Note that this upper bound denotes the largest step size allowed while preserving $(\epsilon,\delta)$-DP. In this simulation experiment, we use the following setting: $N=50,000$, $T=10,000$, $L=1$, and $\delta=10^{-5}$. We vary $\epsilon$ from $0.02$ to $1.7$ for different differential-privacy settings, for both our bounds (fixed- and decreasing-step-size cases) and the bound in \cite{wang2015privacy}, with results in Figure~\ref{fig:bound}. It is clear that our bounds give much larger step sizes than that of \cite{wang2015privacy} at the same privacy loss, {\it e.g.}, $10^{-1}$ vs.\! $10^{-4}$. Our step sizes are thus much more practical in real applications. In the rest of our experiments, we focus on the decreasing-step-size SGLD, as it gives a better MSE bound, as shown in Proposition~\ref{lem:mse}. For the parameters in our bounds, {\it i.e.}, $(N,T,\epsilon,\delta, L)$, the default setting is often chosen to be $\delta=O(1/N)$ and $T=O(N)$; $L$ is typically selected from a range such as $L\in\{0.1, 1, 10\}$. In this experiment, we investigate the sensitivity of our proposed upper bound w.r.t.\! $N$ and $L$ when fixing the other parameters.
The results are plotted in Figure~\ref{fig:diffbound}, from which we observe that our proposed step size bound is stable in terms of the data size $N$, and is approximately proportional to $1/L$. Such a conclusion is not a direct implication of the upper bound formula in Theorem~\ref{theo:dp}, as the constant $c_2$ also depends on $(N,T,\epsilon,\delta,L)$. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{diffbound_expand.eps} \caption{Step size upper bounds for $N=10^{3}, 10^{4}, 10^{5}, 10^{6}$ with fixed $L=1$ (top), and $L=0.1, 0.5, 1.0, 10.0$ with fixed $N=10^{4}$ (bottom). In both simulations, we let $\delta=1/N$ and $T=N$.} \label{fig:diffbound} \end{center} \end{figure} The result also suggests a practical rule for choosing step sizes via our upper bound: they fall into the range $(10^{-4}, 0.1)$. When using such step sizes, we observe that the standard SGLD automatically preserves $(\epsilon,\delta)$-DP even when $\epsilon$ is small. \subsection{Logistic Regression} In the remaining experiments, we compare our proposed differentially-private SGLD (DP-SGLD) with other methods. The Private Aggregation of Teacher Ensembles (PATE) model proposed by \citet{papernot2016semi} is the state-of-the-art framework for differentially private training of machine learning models. PATE takes advantage of the moments accountant method for privacy loss calculation, and uses a knowledge-transfer technique via semi-supervised learning to build a teacher-student-based model. This framework first trains multiple teachers with private data; these teachers then release aggregated knowledge, such as label assignments on several public data points, to multiple students in a differentially private manner. The students then use the released knowledge to train their models in a supervised learning setting, or they can incorporate unlabeled data in a semi-supervised learning setting.
The semi-supervised setting generally works for many machine learning models, yet it requires a large amount of non-private unlabeled data for training, which is not always available in practice. We therefore do not consider this setting in our experiments. We compare DP-SGLD with PATE and the Hybrid Posterior Sampling algorithm on the Adult dataset from the UCI Machine Learning Repository \citep{Lichman:2013}, a binary classification task solved with Bayesian logistic regression, under the DP setting. We fix $\delta=10^{-4}$ and compare classification accuracy while varying $\epsilon$. We repeat each experiment ten times and report means and standard deviations, as illustrated in Figure~\ref{fig:glm}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{glm.eps}
\caption{Test accuracies on a classification task based on Bayesian logistic regression for One-Posterior Sample (OPS), Hybrid Posterior Sampling based on SGLD, and our proposed DP-SGLD with different choices of privacy loss $\epsilon$. The non-private baseline is obtained with standard SGLD.}
\label{fig:glm}
\end{center}
\end{figure}
Our proposed DP-SGLD achieves higher accuracy than the other methods and is close to the baseline, where plain SGLD is used. In fact, when $\epsilon\approx 0.3$ or above, our DP-SGLD reduces to standard SGLD and therefore matches the test accuracy of the baseline. Note that PATE performs worst in this experiment. This might be because, when $\epsilon$ is small and no unlabeled data are available, the students in this framework are restricted to supervised learning with an extremely small amount of training data.
\subsection{Deep Neural Networks}
We test our methods for training deep neural networks under differentially-private settings. We compare with PATE and the DP-SGD proposed in \citet{abadi2016deep}.
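As a rough sketch of the Bayesian logistic regression setup, the minibatch stochastic gradient of the log posterior that SGLD would follow (sigmoid likelihood, Gaussian prior) can be written as below. The function, its name, and the prior-variance default are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def grad_log_posterior(theta, X, y, N, prior_var=1.0):
    """Minibatch stochastic gradient of the log posterior for Bayesian
    logistic regression: the likelihood part is rescaled by N/n so it is
    an unbiased estimate of the full-data gradient, and the prior
    contributes -theta / prior_var (standard Gaussian prior)."""
    n = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-X @ theta))   # sigmoid predictions on the minibatch
    grad_lik = X.T @ (y - p)               # minibatch log-likelihood gradient
    return (N / n) * grad_lik - theta / prior_var
```

Plugging this estimator into the SGLD update with a decreasing step size gives the DP-SGLD sampler evaluated in Figure~\ref{fig:glm}.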
Since the performance of PATE depends heavily on the availability of public unlabeled data, we allow it to access a certain amount of unlabeled data, even though this makes the comparison unfavorable to our method. We do not include results for Hybrid Posterior Sampling, as it does not converge in these experiments due to its small step sizes. We use two datasets: ($i$) the standard MNIST dataset for handwritten-digit recognition, consisting of 60,000 training examples and 10,000 test examples \citep{lecun-mnisthandwrittendigit-2010}; and ($ii$) the Street View House Numbers (SVHN) dataset, which contains 600,000 $32\times32$ RGB images of printed digits cropped from street-view pictures of house numbers \citep{netzer2011reading}. We use the same network structure as the PATE model: two stacked convolutional layers and one fully connected layer with ReLUs for MNIST, with two additional convolutional layers for SVHN. We place standard Gaussian priors on the weights of the DNN.

For the MNIST dataset, standard SGLD with step size $\eta_t=0.1t^{-1/3}$ satisfies $(\epsilon,\delta)$-DP with $\epsilon=0.10$ and $\delta=10^{-5}$ when we set $L=4$. For the SVHN dataset, standard SGLD with step size $\eta_t=0.1t^{-1/3}$ satisfies $(\epsilon,\delta)$-DP with $\epsilon=0.12$ and $\delta=10^{-6}$ when we set $L=1$. In both settings, we let $q=1/\sqrt{N}$ to satisfy the second condition in Theorem~\ref{theo:dp}. In addition, we also ran a differentially-private version of SGHMC for comparison. The test accuracies are shown in Table~\ref{table: nn}. SGLD and SGHMC obtain better test accuracies than the state-of-the-art differential-privacy methods, remarkably with much smaller privacy loss. They even outperform the non-private baseline trained with Adam, owing to the advantages of Bayesian modeling.
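The DNN training loop can be sketched as follows under the stated settings: subsampling ratio $q=1/\sqrt{N}$ and a gradient-norm bound $L$. This is our own minimal sketch; in particular, enforcing the bound by clipping the minibatch gradient to norm $L$ is one way to satisfy the Lipschitz assumption, not necessarily the paper's exact mechanism, and all names here are hypothetical.

```python
import numpy as np

def dp_sgld_run(theta, data, grad_fn, eta_fn, L, rng, steps):
    """Sketch of the SGLD loop from the DNN experiments: at each step,
    draw a minibatch with subsampling ratio q = 1/sqrt(N), clip the
    gradient norm to L (so the bounded-gradient assumption holds), and
    take an SGLD step with decreasing step size eta_fn(t)."""
    N = len(data)
    q = 1.0 / np.sqrt(N)                         # subsampling ratio
    batch_size = max(1, int(q * N))
    for t in range(1, steps + 1):
        idx = rng.choice(N, size=batch_size, replace=False)
        g = grad_fn(theta, [data[i] for i in idx])
        norm = np.linalg.norm(g)
        if norm > L:                             # clip to enforce ||g|| <= L
            g = g * (L / norm)
        eta = eta_fn(t)
        theta = theta + 0.5 * eta * g + rng.normal(0.0, np.sqrt(eta), theta.shape)
    return theta
```

With this loop, the injected SGLD noise alone supplies the Gaussian perturbation, which is why no extra privacy noise is added on top of the sampler.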
\begin{table}[ht]
\centering
\caption{Test accuracies on MNIST and SVHN for different methods.}
\label{table: nn}
\begin{tabular}{ c |l| c|c| c }
Dataset &Methods &$\epsilon$ &$\delta$ & Accuracy\\\hline
&Non-Private & & & 99.23\%\\\cline{2-5}
&PATE(100) &$2.04$&$10^{-5}$ & 98.00\% \\\cline{2-5}
MNIST&PATE(1000) &$8.03$&$10^{-5}$ & 98.10\% \\\cline{2-5}
&\textbf{DP-SGLD} &$0.10$&$10^{-5}$& 99.12\% \\\cline{2-5}
&\textbf{DP-SGHMC} &$0.24$&$10^{-5}$&$\mathbf{99.28}$\%\\ \hline
&Non-Private & & & 92.80\%\\\cline{2-5}
&PATE(100) &$5.04$&$10^{-6}$ & 82.76\% \\\cline{2-5}
SVHN&PATE(1000) &$8.19$&$10^{-6}$ & 90.66\% \\\cline{2-5}
&\textbf{DP-SGLD} &$0.12$&$10^{-6}$& 92.14\% \\\cline{2-5}
&\textbf{DP-SGHMC} &$0.43$&$10^{-6}$&$\mathbf{92.84}$\%\\ \hline
\end{tabular}
\end{table}
\section{Related Work} \label{sec:relatedwork}
Several papers have considered differentially-private stochastic-gradient-based methods. For example, \citet{song2013stochastic} proposed a differentially-private stochastic gradient descent (SGD) algorithm, which requires a large amount of noise when mini-batches are randomly sampled. The theoretical performance of noisy SGD was studied in \citet{bassily2014differentially} for the special case of convex loss functions; for non-convex loss functions, the common setting for many machine learning models, there is thus no theoretical guarantee on performance. \citet{abadi2016deep} proposed another differentially-private SGD that requires a smaller variance for the added Gaussian noise, yet it still provides no theoretical guarantee on utility. Standard SG-MCMC, in contrast, has been shown to converge to the target posterior distribution in theory. In this paper, we discuss the effect of our modifications for differential privacy on the performance of SG-MCMC, which yields theoretical guarantees in the form of bounds on the mean squared error of the posterior mean.
Bayesian modeling provides an effective framework for privacy-preserving data analysis, as posterior sampling naturally introduces noise into the system, leading to differential privacy \citep{dimitrakakis2014robust,wang2015privacy}. \citet{foulds2016theory} studied the privacy of sampling from exponential families with a Gibbs sampler. \citet{wang2015privacy} gave a comprehensive analysis of the differential privacy of SG-MCMC methods. In comparison, we have derived a tighter bound on the amount of noise required to guarantee a given level of differential privacy, yielding a more practical upper bound on the step size.
\section{Conclusion} \label{sec:conclusion}
Previous work on differential privacy has modified existing algorithms, or has built complicated frameworks that sacrifice a certain amount of performance for privacy; in some cases the privacy loss may be relatively large. This paper has presented a privacy analysis of SG-MCMC, a standard class of methods for scalable posterior sampling in Bayesian models. We have significantly relaxed the conditions under which SG-MCMC methods are differentially private, compared with previous work. Our results indicate that standard SG-MCMC methods have strong privacy guarantees for large-scale problems. In addition, we have provided a theoretical analysis of the estimation performance of differentially-private SG-MCMC methods; our results show that even under a strong privacy constraint, differentially-private SG-MCMC still comes with a guarantee on model performance. Our experiments show that, with our analysis, standard SG-MCMC methods achieve both state-of-the-art utility and strong privacy compared with related methods on multiple tasks, such as logistic regression and deep neural networks.
Our results also shed light on how SG-MCMC methods help improve generalization when training models, as it is well acknowledged that differential privacy and generalization are connected; see, {\it e.g.}, \emph{Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle}. For example, \cite{SaatchiW:NIPS17} proposed a Bayesian GAN model trained with SGHMC, which shows promising performance in avoiding the mode-collapse problem in GAN training. According to \cite{AroraGLMZ:ICML17}, mode collapse is potentially caused by weak generalization. It is therefore plausible that the Bayesian GAN mitigates mode collapse because SGHMC naturally leads to better generalization.